I am Fangzhi Xu (徐方植), a first-year PhD student at Xi’an Jiaotong University, majoring in Computer Science and advised by Prof. Jun Liu. I am also a research intern at Shanghai AI LAB, supervised by Dr. Zhiyong Wu. In 2021, I received my B.S. degree in Electrical Engineering from Xi’an Jiaotong University.

I have published research at top-tier conferences and in journals such as ACL, SIGIR, IEEE TKDE, and IEEE TNNLS. I also serve as a PC member (or reviewer) for IJCAI, SIGIR, and IEEE TNNLS.

My research interests include (but are not limited to) natural language processing, large language models, and neuro-symbolic reasoning.

🔥 News

  • 2024.03.24:   Our recent survey on neural code intelligence is now public 🎉🎉!
  • 2024.03.11:   One paper is accepted by LLMAgent Workshop @ICLR 2024 🎉🎉!
  • 2024.02.20:   One paper is accepted by COLING 2024 🎉🎉!
  • 2024.02.15:   One paper [SeeClick] is submitted to ACL-ARR (Feb. 2024)!
  • 2024.02.10:   One paper is accepted by ICDE 2024 (TKDE Poster Track) 🎉🎉!
  • 2023.12.15:   Two papers [Symbol-LLM, PathReasoner] are submitted to ACL-ARR (Dec. 2023)!
  • 2023.11.15:   Our recent work Symbol-LLM is now public 🚀!
  • 2023.09.12:   One paper is accepted by IEEE TNNLS 🎉🎉!
  • 2023.05.02:   One paper is accepted by ACL 2023 🎉🎉!

📖 Education

  • 2023.09 - 2026.06 (expected), PhD student in Computer Science, Xi’an Jiaotong University.
  • 2021.09 - 2023.06, M.S. in Computer Science, Xi’an Jiaotong University.   GPA: 91.77/100, Rank: 1/173
  • 2017.09 - 2021.06, B.S. in Electrical Engineering, Xi’an Jiaotong University.   GPA: 97.62/100, Rank: 1/365

💻 Internships

  • 2023.07 - Present, Research Intern @ Shanghai AI LAB, focusing on large language models and neuro-symbolic reasoning.

📝 Publications

Preprint

A Survey of Neural Code Intelligence: Paradigms, Advances and Beyond 🔥🔥 [Preprint]
Qiushi Sun, Zhirui Chen, Fangzhi Xu, Chang Ma, Kanzhi Cheng, Zhangyue Yin, Jianing Wang, Chengcheng Han, Renyu Zhu, Shuai Yuan, Pengcheng Yin, Qipeng Guo, Xipeng Qiu, Xiaoli Li, Fei Yuan, Lingpeng Kong, Xiang Li, Zhiyong Wu

Code

  • We present a thorough summary of the paradigm shifts in neural code intelligence and provide extensive insights into advances and future directions. Check it out!
Preprint

Symbol-LLM: Towards Foundational Symbol-centric Interface For Large Language Models 🔥🔥 [Preprint]
Fangzhi Xu, Zhiyong Wu, Qiushi Sun, Siyu Ren, Fei Yuan, Shuai Yuan, Qika Lin, Qiao Yu and Jun Liu.

Code   Project Page  

Preprint

Are Large Language Models Really Good Logical Reasoners? A Comprehensive Evaluation and Beyond [Preprint]
Fangzhi Xu*, Qika Lin*, Jiawei Han, Tianzhe Zhao, Jun Liu and Erik Cambria

(* denotes equal contribution)

Code

  • This paper provides a systematic, comprehensive, and fine-grained evaluation of logical reasoning capability from deductive, inductive, and abductive views.

  • We propose a new dataset named NeuLR. It contains 3K samples with neutral content, covering deductive, inductive, and abductive reasoning.

Under Review

PathReasoner: Modeling Reasoning Path with Equivalent Extension for Logical Question Answering [Under Review]
Fangzhi Xu, Qika Lin, Tianzhe Zhao, Jiawei Han, Jun Liu

Code

  • We are the first to rethink the logical reasoning task by unifying the inputs into atoms and reasoning paths.

  • We propose an atom extension strategy with equivalent logical formulas to generate diverse new samples. We also introduce a stack of transformer-style blocks; specifically, a path-attention module with high-order relation modeling is proposed to jointly update information within and across atoms.

SIGIR 2022

Logiformer: A Two-Branch Graph Transformer Network for Interpretable Logical Reasoning [CCF-A]
Fangzhi Xu, Jun Liu, Qika Lin, Yudai Pan and Lingling Zhang

Code

  • Logiformer is a two-branch graph transformer network for logical reasoning in multiple-choice machine reading comprehension.
  • As of publication, it was the state-of-the-art (SOTA) model among all RoBERTa-Large-based single-model methods, and it ranked 9th on the leaderboard against larger models.
IEEE TNNLS

Mind Reasoning Manners: Enhancing Type Perception for Generalized Zero-shot Logical Reasoning over Text [CCF-B]
Fangzhi Xu, Jun Liu, Qika Lin, Tianzhe Zhao, Jian Zhang, Lingling Zhang

Code

  • We propose ZsLR, the first benchmark for generalized zero-shot logical reasoning.
  • We also propose TaCo, a strong baseline based on contrastive learning.
Pattern Recognition

MoCA: Incorporating Domain Pretraining and Cross Attention for Textbook Question Answering [CCF-B]
Fangzhi Xu, Qika Lin, Jun Liu, Lingling Zhang, Tianzhe Zhao, Qi Chai, Yudai Pan

Code

  • MoCA targets Textbook Question Answering, a multimodal task with abundant diagrams and long contexts.
  • It conducts multi-stage pretraining on the text part and introduces dense layers of cross-guided multimodal attention for the diagram part.


🎖 Honors and Awards

Honors

  • Oct.2022   Outstanding Student in Xi’an Jiaotong University
  • Jun.2021   Outstanding Undergraduate Graduates in Shaanxi Province
  • Sep.2020   Top-10 Students of the Year (Top Personal Award in Xi’an Jiaotong University)
  • Sep.2019   Outstanding Student in Xi’an Jiaotong University
  • Sep.2018   Outstanding Student in Xi’an Jiaotong University

Competition Awards

  • Dec.2022   “Huawei Cup” Mathematical Contest in Modeling, Second Prize
  • Mar.2022   Asia and Pacific Mathematical Contest in Modeling (APMCM), First Prize + Innovation Award (Top-2)
  • Dec.2021   “Huawei Cup” Mathematical Contest in Modeling, Second Prize
  • Sep.2020   IKCEST International Big Data Competition, Excellence Award
  • Apr.2020   Mathematical Contest in Modeling (COMAP), Finalist
  • Dec.2019   China Mathematical Contest in Modeling (CUMCM), Second Prize
  • May.2018   National English Competition for College Students (NECCS), Second Prize

🎖 Scholarships

  • 2023.09   Freshman First Prize Scholarship (PhD)
  • 2022.09   National Scholarship
  • 2022.09   Huawei Scholarship
  • 2022.04   ACM SIGIR Student Travel Grant
  • 2021.09   Freshman First Prize Scholarship (Master)
  • 2020.09   Chinese Modern Scientists Scholarship (only for Top-10 Students of the Year)
  • 2019.09   National Scholarship
  • 2018.09   National Scholarship

💬 Academic Services

  • Program Committee Member: IJCAI 2024, SIGIR 2024, AACL 2023, SIGIR 2023, SIGIR-AP 2023
  • Reviewer: IEEE TNNLS