Research
I am interested in Trustworthy AI for multi-agent reinforcement learning (MARL). My research goal is to make reinforcement learning safe and robust, including practical adversarial attacks on RL/MARL, adversarial defenses, and algorithmic robustness testing.
My current research mainly includes:
- Adversarial attacks and defenses for MARL
- Adversarial attacks on LLM-aided RL decision-making
- Human-AI alignment