
Zhaoyang Wang

Ph.D. Student in CS
University of North Carolina at Chapel Hill

[email protected] where X=first name

About Me

My name is Zhaoyang Wang (王朝阳 in Chinese). I am a first-year Ph.D. student in the Department of Computer Science at the University of North Carolina at Chapel Hill, advised by Prof. Huaxiu Yao. My research interests mainly lie in the alignment and reasoning of Large Language Models (LLMs). In my spare time, I am always excited to learn more about MLSys and computer systems.

Previously, I worked on robustness and text generation in Natural Language Processing (NLP). I received my master's degree in Computer Science from Sun Yat-sen University in 2024 and my bachelor's degree in Computer Science from North China Electric Power University in 2021. I also spent a wonderful time interning at Microsoft and WeChat AI in 2023.

Publications [ Google Scholar ] [ Full Publications ]

    Alignment

  1. Preprint
    Haibo Tong, Zhaoyang Wang, Zhaorun Chen, Haonian Ji, Shi Qiu, Siwei Han, Kexin Geng, Zhongkai Xue, Yiyang Zhou, Peng Xia, et al.
    arXiv preprint, 2025.

  2. ICLR 2025
    Yiyang Zhou, Zhaoyang Wang, Tianle Wang, Shangyu Xing, Peng Xia, Bo Li, Kaiyuan Zheng, Zijian Zhang, Zhaorun Chen, et al.
    Proceedings of the 13th International Conference on Learning Representations.

  3. ICLR 2025
    Zhaoyang Wang, Weilei He, Zhiyuan Liang, Xuchao Zhang, Chetan Bansal, Ying Wei, Weitong Zhang, Huaxiu Yao
    Proceedings of the 13th International Conference on Learning Representations.

  4. NAACL 2025
    Zhaoyang Wang, Jinqi Jiang, Huichi Zhou, Wenhao Zheng, Xuchao Zhang, Chetan Bansal, Huaxiu Yao
    Findings of the 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics.

  5. NAACL 2025
    Xiyao Wang, Jiuhai Chen, Zhaoyang Wang, Yuhang Zhou, Yiyang Zhou, Huaxiu Yao, Tianyi Zhou, Tom Goldstein, et al.
    Findings of the 2025 Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics.

    Reasoning

    Text Robustness

Experiences

  • Tencent/WeChat AI, Beijing, June 2023 - Sept 2023
    Research Intern, working on the RLHF stage of WeLM (an LLM series with up to 60B parameters) under the supervision of Liwen Zhu and Xiao Zhou.
  • Microsoft, Beijing, Jan 2023 - June 2023
    Research Intern, working on LLM reasoning under the supervision of Shaohan Huang, Minghui Song, and Zihan Zhang.

Website credits to the Minimal Light template