My name is Yuxiang Lin, pronounced “You-shee-ahng Lin”. I am an MS student at the Georgia Institute of Technology. I earned my B.S. degree from Shenzhen Technology University under the supervision of Prof. Xiaojiang Peng. In 2023, I was a visiting student at the Shenzhen Institute of Advanced Technology (SIAT), CAS, and a remote research intern with Prof. Chen Chen at UCF. In 2024, I interned at Baidu, Inc. in the Multimodal Retrieval group, gaining experience in representation learning and big data.

Additionally, I volunteered as a Teaching Assistant for a Large Language Models/Computer Vision tutorial hosted by the Shanghai AI Laboratory.

You can find me at yuxiang.lin@gatech.edu or lin.yuxiang.contact@gmail.com.

My research interests mainly include:

  • Foundation models: Representation Learning, Post-Pretraining, Contrastive Learning.
  • Multimodal LLMs: LLM Reasoning, LLM Applications.
  • LLM Agents: Multi-Agent Collaboration.

🔥 News

  • 2025.07: Try MER-Dataset-Builder for automatically constructing multimodal emotion recognition and reasoning datasets.
  • 2025.06: Try the Multi-Agent Idea Brainstorming System for research/project brainstorming.
  • 2024.12: One paper on multimodal large language models for emotion reasoning is accepted by NeurIPS (CCF rank A). 🎉
  • 2024.07: One co-first-author paper on invisible gas detection is accepted by CVIU (JCR Q1, CCF rank B). 🎉
  • 2024.03: One paper on conversational emotion-cause pair analysis with LLMs is accepted by SemEval 2024 (NAACL).
  • 2024.01: I was awarded the First Prize of the Research and Innovation Award (3,000 CNY) and the Star of Craftsmanship (3,000 CNY).
  • 2023.08: My instance segmentation tutorial is featured in the MMYOLO v0.6.0 highlights! Check out the tutorial here to master the essentials of instance segmentation.
  • 2023.07: One paper on multimodal emotion recognition is accepted by ACM MM! 🎉
  • 2023.07: We are the runner-up in the Grand Challenge (MER 2023) of ACM MM! 🥈

📝 Publications

📌 Pinned

ArXiv

Why We Feel: Breaking Boundaries in Emotional Reasoning with Multimodal Large Language Models

Yuxiang Lin, Jingdong Sun, Zhi-Qi Cheng, Jue Wang, Haomin Liang, Zebang Cheng, Yifei Dong, Jun-Yan He, Xiaojiang Peng, Xian-Sheng Hua

ArXiv | [Paper] [Slides] [Code]


NeurIPS 2024

Emotion-LLaMA: Multimodal Emotion Recognition and Reasoning with Instruction Tuning

Zebang Cheng, Zhi-Qi Cheng, Jun-Yan He, Kai Wang, Yuxiang Lin, Zheng Lian, Xiaojiang Peng, Alexander Hauptmann

NeurIPS (CCF-A) | [Paper] [Code] [MER-Dataset-Builder]


CVIU

Invisible Gas Detection: An RGB-Thermal Cross Attention Network and A New Benchmark

Jue Wang*, Yuxiang Lin*, Qi Zhao, Dong Luo, Shuaibao Chen, Wei Chen, Xiaojiang Peng (* denotes equal contribution)

CVIU (JCR Q1, CCF-B) | [Paper] [Code]


ACMMM 2023

Semi-Supervised Multimodal Emotion Recognition with Expression MAE

Zebang Cheng, Yuxiang Lin, Zhaoru Chen, Xiang Li, Shuyi Mao, Fan Zhang, Daijun Ding, Bowen Zhang, Xiaojiang Peng

ACMMM 2023 (CCF-A) | [Paper] [Slides]

👨‍💻 Experience

🏅 Selected Awards

  • 2020   Second Prize, SZTU Freshman Scholarship (6,000 CNY)
  • 2022   China Undergraduate Mathematical Contest in Modeling, National Second Prize (top 2%)
  • 2023   Dahua Outstanding Scholarship (4,000 CNY)
  • 2023   OpenMMLab MMSTAR I
  • 2024   First Prize, Research and Innovation Award (3,000 CNY)
  • 2024   Star of Craftsmanship (3,000 CNY)