Chao Peng
CodeVisionary: An Agent-based Framework for Evaluating Large Language Models in Code Generation
Large language models (LLMs) have demonstrated strong capabilities in code generation, underscoring the critical need for rigorous and …
Xinchen Wang, Pengfei Gao, Chao Peng, Ruida Hu, Cuiyun Gao
PDF · Cite · DOI
RepoMasterEval: Evaluating Code Completion via Real-World Repositories
With the growing reliance on automated code completion tools in software development, the need for robust evaluation benchmarks has …
Qinyun Wu, Chao Peng, Pengfei Gao, Ruida Hu, Haoyu Gan, Bo Jiang, Jinhe Tang, Zhiwen Deng, Zhanming Guan, Cuiyun Gao, Xia Liu, Ping Yang
PDF · Cite · DOI
DialogAgent: An Auto-engagement Agent for Code Question Answering Data Production
Large Language Models (LLMs) have become increasingly integral to enhancing developer productivity, particularly in code generation, …
Xiaoyun Liang, Jingyi Ren, Jiayi Qi, Chao Peng, Bo Jiang
PDF · Cite
SoRFT: Issue Resolving with Subtask-oriented Reinforced Fine-Tuning
Mainstream issue-resolving frameworks predominantly rely on commercial models, leading to high costs and privacy concerns. Existing …
Zexiong Ma, Chao Peng, Pengfei Gao, Xiangxin Meng, Yanzhen Zou, Bing Xie
PDF · Cite · DOI
Understanding Large Language Model Performance in Software Engineering: A Large-scale Question Answering Benchmark
In this work, we introduce CodeRepoQA, a large-scale benchmark specifically designed for evaluating repository-level question-answering …
Ruida Hu, Chao Peng, Jingyi Ren, Bo Jiang, Xiangxin Meng, Qinyun Wu, Pengfei Gao, Xinchen Wang, Cuiyun Gao
PDF · Cite · DOI
Prompting Large Language Models to Tackle the Full Software Development Lifecycle: A Case Study
Recent advancements in large language models (LLMs) have significantly enhanced their coding capabilities. However, existing benchmarks …
Bowen Li, Wenhan Wu, Ziwei Tang, Lin Shi, John Yang, Jinyang Li, Shunyu Yao, Chen Qian, Binyuan Hui, Qicheng Zhang, Zhiyin Yu, He Du, Ping Yang, Dahua Lin, Chao Peng, Kai Chen
PDF · Cite · Project
Go-Oracle: Automated Test Oracle for Go Concurrency Bugs
The Go programming language has gained significant traction for developing software, especially in various infrastructure systems. …
Foivos Tsimpourlas, Chao Peng, Carlos Rosuero, Ping Yang, Ajitha Rajan
PDF · Cite · DOI
AEGIS: An Agent-based Framework for General Bug Reproduction from Issue Descriptions
In software maintenance, bug reproduction is essential for effective fault localization and repair. Manually writing reproduction …
Xinchen Wang, Pengfei Gao, Xiangxin Meng, Chao Peng, Ruida Hu, Yun Lin, Cuiyun Gao
PDF · Cite · DOI
RepoSim: Evaluating Prompt Strategies for Code Completion via User Behavior Simulation
Large language models (LLMs) have revolutionized code completion tasks. IDE plugins such as Copilot can generate code recommendations, …
Chao Peng, Qinyun Wu, Jiangchao Liu, Jierui Liu, Bo Jiang, Mengqian Xu, Yinghao Wang, Xia Liu, Ping Yang
PDF
Neat: Mobile App Layout Similarity Comparison based on Graph Convolutional Networks
A wide variety of device models, screen resolutions and operating systems have emerged with recent advances in mobile devices. As a …
Zhu Tao, Yongqiang Gao, Jiayi Qi, Chao Peng, Qinyun Wu, Xiang Chen, Ping Yang
PDF