Currently, I am a senior researcher at Tencent AI Lab. I received my Ph.D. degree from the Chinese University of Hong Kong in 2021, under the supervision of Prof. Irwin King and Prof. Michael R. Lyu. Before that, I received my Bachelor's and M.Phil. degrees from Nanjing University in 2015 and 2017, respectively.

This is my research group in the Natural Language Processing Center at Tencent AI Lab. Our research spans machine translation (MT), multilingual pretraining, and large language models (LLMs). We currently focus on LLMs in the following areas:

  1. Evaluating LLMs like ChatGPT/GPT-4/LLaMA/BLOOM on NLP tasks (e.g., MT, GEC)
  2. Exploiting LLMs for MT via instruction tuning, alignment, and CoT (e.g., ParroT, MAPS)
  3. Exploring LLMs for multi-agent collaboration (e.g., MAD, SpyGPT)
  4. Evaluating and manipulating LLMs’ cognitive behaviors (e.g., Personality, Emotion, Psychology)
  5. Testing safety alignment of LLMs (e.g., Culture, Cipher)
  6. Developing evaluation benchmarks and frameworks for LLMs

We regularly exchange ideas and collaborate with our colleagues in the Machine Translation Group. In particular, we maintain a long-term, close collaboration with Zhaopeng Tu, Xing Wang, and Longyue Wang.

Spotlight Projects

🔥 News

Oct 19, 2023

Gave a tutorial at CCMT 2023 titled From Machine to Human-like Intelligence: Large-Language Models for Multilinguality, Multimodality, Multi-Agents and Emotion Interaction.

Oct 12, 2023

Two papers accepted to EMNLP 2023 Findings. Congratulations to all the co-authors!

Aug 15, 2023

Our preprint GPT-4 Is Too Smart To Be Safe: Stealthy Chat with LLMs via Cipher is out. See the GitHub repo CipherChat and the demo page LLMCipherChat for more details.

Aug 08, 2023

Our preprint Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench is out. See the GitHub repo EmotionBench for more details.

Jul 02, 2023

Gave an onsite talk titled Large Language Models for Machine Translation at the LLM Series Workshops hosted by CIPS.

May 30, 2023

Our preprint Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate is out.

... see all News