I am Fangzhi Xu (徐方怍), a first-year PhD student at Xi’an Jiaotong University, majoring in Computer Science and advised by Prof. Jun Liu. I am currently a research intern at Shanghai AI Lab, supervised by Dr. Zhiyong Wu. I will also start a visit to the University of Waterloo this fall, advised by Prof. Wenhu Chen.

I have published research at top-tier conferences and in journals such as ACL, SIGIR, IEEE TKDE, and IEEE TNNLS. I also serve as a PC member (or reviewer) for IJCAI, SIGIR, and IEEE TNNLS.

My research interests include (but are not limited to) natural language processing, large language models, and neuro-symbolic reasoning.

🔥 News

  • 2024.05.16:   Three papers were accepted by ACL 2024 (main conference) 🎉🎉!
  • 2024.03.24:   Our recent survey on neural code intelligence is now public 🎉🎉!
  • 2024.02.20:   One paper was accepted by COLING 2024 🎉🎉!
  • 2024.02.10:   One paper was accepted by ICDE 2024 (TKDE Poster Track) 🎉🎉!
  • 2023.11.15:   Our recent work Symbol-LLM is now public 🚀!

📖 Education

  • 2023.09 - 2026.06 (expected), PhD student in Computer Science, Xi’an Jiaotong University.
  • 2021.09 - 2023.06, M.S. in Computer Science, Xi’an Jiaotong University.   GPA: 91.77/100, Rank: 1/173
  • 2017.09 - 2021.06, B.S. in Electrical Engineering, Xi’an Jiaotong University.   GPA: 97.62/100, Rank: 1/365

💻 Internships

  • 2023.07 - Present, Research Intern @ Shanghai AI Lab, focusing on large language models and neuro-symbolic reasoning.

πŸ“ Publications

Preprint

A Survey of Neural Code Intelligence: Paradigms, Advances and Beyond 🔥🔥 [Preprint]
Qiushi Sun, Zhirui Chen, Fangzhi Xu, Chang Ma, Kanzhi Cheng, Zhangyue Yin, Jianing Wang, Chengcheng Han, Renyu Zhu, Shuai Yuan, Pengcheng Yin, Qipeng Guo, Xipeng Qiu, Xiaoli Li, Fei Yuan, Lingpeng Kong, Xiang Li, Zhiyong Wu

Code

  • We present a thorough account of the paradigm shifts in neural code intelligence and provide extensive insights into advances and future directions. Check it out!
Preprint

Are Large Language Models Really Good Logical Reasoners? A Comprehensive Evaluation and Beyond [Preprint]
Fangzhi Xu*, Qika Lin*, Jiawei Han, Tianzhe Zhao, Jun Liu and Erik Cambria

(* denotes equal contribution)

Code

  • This paper provides a systematic, comprehensive, and fine-grained evaluation of logical reasoning capability from deductive, inductive, and abductive views.

  • We propose a new dataset named NeuLR. It contains 3K samples with neutral content, covering deductive, inductive, and abductive reasoning manners; a toy illustration of the three manners is sketched below.
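
To make the three reasoning manners concrete, here is a hypothetical mini-example. The sample format and the nonsense entities (standing in for NeuLR's neutral content) are our own illustration, not the dataset's actual schema:

```python
# Hypothetical illustration of the three reasoning manners covered by NeuLR.
# The field names and examples are our own invention, not the dataset schema.
samples = [
    {   # deductive: rule + case => result
        "manner": "deductive",
        "premises": ["All zorbs are flims.", "Tak is a zorb."],
        "conclusion": "Tak is a flim.",
    },
    {   # inductive: observed cases + results => general rule
        "manner": "inductive",
        "premises": ["Tak is a zorb and a flim.", "Rez is a zorb and a flim."],
        "conclusion": "All zorbs are flims.",
    },
    {   # abductive: rule + result => most plausible explanatory case
        "manner": "abductive",
        "premises": ["All zorbs are flims.", "Tak is a flim."],
        "conclusion": "Tak is a zorb.",
    },
]

for s in samples:
    print(f"[{s['manner']}] {' '.join(s['premises'])} -> {s['conclusion']}")
```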

ACL 2024

Symbol-LLM: Towards Foundational Symbol-centric Interface For Large Language Models 🔥🔥 [CCF-A]
Fangzhi Xu, Zhiyong Wu, Qiushi Sun, Siyu Ren, Fei Yuan, Shuai Yuan, Qika Lin, Yu Qiao and Jun Liu.

Code   Project Page

ACL 2024

PathReasoner: Modeling Reasoning Path with Equivalent Extension for Logical Question Answering [CCF-A]
Fangzhi Xu, Qika Lin, Tianzhe Zhao, Jiawei Han, Jun Liu

Code

  • We are the first to rethink the logical reasoning task by unifying the inputs into atoms and reasoning paths.

  • We propose an atom extension strategy with equivalent logical formulas to generate diverse new samples. We also introduce a stack of Transformer-style blocks; specifically, a path-attention module with high-order relation modeling is proposed to jointly update information within and across atoms, as sketched below.
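
For intuition only, here is a minimal PyTorch sketch of alternating within-atom and across-atom attention. The tensor shapes, mean pooling, and module names are our assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class PathAttentionBlock(nn.Module):
    """Sketch of joint within-atom and across-atom updates.

    Input: atom token features of shape (batch, num_atoms, tokens, dim).
    Shapes and module names are our assumptions, not the paper's design.
    """

    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        self.within = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.across = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, atoms: torch.Tensor) -> torch.Tensor:
        b, n, t, d = atoms.shape
        # 1) Within-atom attention: tokens of each atom attend to each other.
        x = atoms.reshape(b * n, t, d)
        x = self.norm1(x + self.within(x, x, x)[0])
        # 2) Across-atom attention over pooled atom embeddings.
        pooled = x.reshape(b, n, t, d).mean(dim=2)              # (b, n, d)
        mixed = self.norm2(pooled + self.across(pooled, pooled, pooled)[0])
        # Broadcast the across-atom update back onto every token of each atom.
        return x.reshape(b, n, t, d) + mixed.unsqueeze(2)

block = PathAttentionBlock()
out = block(torch.randn(2, 4, 16, 768))  # 2 paths, 4 atoms, 16 tokens each
print(out.shape)                         # torch.Size([2, 4, 16, 768])
```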

SIGIR 2022

Logiformer: A Two-Branch Graph Transformer Network for Interpretable Logical Reasoning [CCF-A]
Fangzhi Xu, Jun Liu, Qika Lin, Yudai Pan and Lingling Zhang

Code

  • Logiformer is a two-branch network for the logical reasoning task in multiple-choice machine reading comprehension.
  • At the time of publication, it was the state-of-the-art (SOTA) model among all RoBERTa-Large-based single-model methods, and it ranked 9th on the leaderboard against larger models. A rough sketch of the two-branch idea follows.
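
As a rough sketch of the two-branch idea: in the real model each branch is built over a different graph view of the text, while for simplicity both branches below consume the same token features; the layer sizes and fusion scheme are our assumptions:

```python
import torch
import torch.nn as nn

class TwoBranchEncoder(nn.Module):
    """Sketch of a two-branch design: two parallel encoders whose outputs
    are fused for answer scoring. Sizes and fusion are our assumptions."""

    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        self.logic_branch = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.syntax_branch = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.score = nn.Linear(2 * dim, 1)  # one plausibility score per option

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        a = self.logic_branch(tokens).mean(dim=1)   # (batch, dim)
        b = self.syntax_branch(tokens).mean(dim=1)  # (batch, dim)
        return self.score(torch.cat([a, b], dim=-1)).squeeze(-1)

model = TwoBranchEncoder()
print(model(torch.randn(4, 128, 768)).shape)  # torch.Size([4])
```
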
IEEE TNNLS

Mind Reasoning Manners: Enhancing Type Perception for Generalized Zero-shot Logical Reasoning over Text [CCF-B]
Fangzhi Xu, Jun Liu, Qika Lin, Tianzhe Zhao, Jian Zhang, Lingling Zhang

Code

  • We propose the first benchmark for generalized zero-shot logical reasoning, named ZsLR.
  • We also propose a strong baseline, TaCo, built on contrastive learning; a generic sketch of the contrastive objective follows.
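
As a rough illustration of the contrastive component, here is a standard InfoNCE-style loss in PyTorch; this is a generic sketch, not TaCo's exact formulation or its type-aware sampling:

```python
import torch
import torch.nn.functional as F

def info_nce(anchors: torch.Tensor, positives: torch.Tensor,
             temperature: float = 0.07) -> torch.Tensor:
    """Generic InfoNCE loss: each anchor's positive is the same-index row,
    and every other row in the batch serves as a negative."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.t() / temperature      # (batch, batch) cosine similarities
    targets = torch.arange(a.size(0))     # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: 8 anchor/positive representation pairs of dimension 256.
loss = info_nce(torch.randn(8, 256), torch.randn(8, 256))
print(loss.item())
```
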
Pattern Recognition

MoCA: Incorporating Domain Pretraining and Cross Attention for Textbook Question Answering [CCF-B]
Fangzhi Xu, Qika Lin, Jun Liu, Lingling Zhang, Tianzhe Zhao, Qi Chai, Yudai Pan

Code

  • MoCA focuses on the task of Textbook Question Answering, a multimodal task with abundant diagrams and long contexts.
  • It conducts multi-stage pretraining on the text side and introduces dense layers of cross-guided multimodal attention for the diagram side, as sketched below.
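
Here is a minimal PyTorch sketch of what one cross-guided attention layer could look like; the bidirectional layout, shapes, and names are our assumptions, not MoCA's exact architecture:

```python
import torch
import torch.nn as nn

class CrossGuidedAttention(nn.Module):
    """Sketch of one cross-guided layer: text queries attend over diagram
    region features and vice versa, with residual connections. Shapes and
    naming are our assumptions, not MoCA's exact design."""

    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        self.text_to_diag = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.diag_to_text = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text: torch.Tensor, diagram: torch.Tensor):
        # Each stream is updated by attending to the other modality.
        t = text + self.text_to_diag(text, diagram, diagram)[0]
        d = diagram + self.diag_to_text(diagram, text, text)[0]
        return t, d

layer = CrossGuidedAttention()
t, d = layer(torch.randn(2, 128, 768), torch.randn(2, 36, 768))
print(t.shape, d.shape)  # (2, 128, 768) and (2, 36, 768)
```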

πŸ§‘β€ Other Paper

🎖 Honors and Awards

Honors

  • Oct. 2022   Outstanding Student of Xi’an Jiaotong University
  • Jun. 2021   Outstanding Undergraduate Graduate of Shaanxi Province
  • Sep. 2020   Top-10 Students of the Year (Top Personal Award of Xi’an Jiaotong University)
  • Sep. 2019   Outstanding Student of Xi’an Jiaotong University
  • Sep. 2018   Outstanding Student of Xi’an Jiaotong University

Competition Awards

  • Dec. 2022   “Huawei Cup” Mathematical Contest in Modeling, Second Prize
  • Mar. 2022   Asia and Pacific Mathematical Contest in Modeling (APMCM), First Prize + Innovation Award (Top-2)
  • Dec. 2021   “Huawei Cup” Mathematical Contest in Modeling, Second Prize
  • Sep. 2020   IKCEST International Big Data Competition, Excellence Award
  • Apr. 2020   Mathematical Contest in Modeling (COMAP), Finalist
  • Dec. 2019   China Mathematical Contest in Modeling (CUMCM), Second Prize
  • May 2018   National English Competition for College Students (NECCS), Second Prize

🎖 Scholarships

  • 2023.09   Freshman First Prize Scholarship (PhD)
  • 2022.09   National Scholarship
  • 2022.09   Huawei Scholarship
  • 2022.04   ACM SIGIR Student Travel Grant
  • 2021.09   Freshman First Prize Scholarship (Master)
  • 2020.09   Chinese Modern Scientists Scholarship (only for Top-10 Students of the Year)
  • 2019.09   National Scholarship
  • 2018.09   National Scholarship

💬 Academic Services

  • Program Committee Member: IJCAI 2024, SIGIR 2024, AACL 2023, SIGIR 2023, SIGIR-AP 2023
  • Reviewer: IEEE TNNLS