ASE 2019
Sun 10 - Fri 15 November 2019 San Diego, California, United States
Tue 12 Nov 2019 11:00 - 11:20 at Cortez 2&3 - AI and SE Chair(s): Kaiyuan Wang

Code retrieval techniques and tools play a key role in helping software developers retrieve existing code fragments from available open-source repositories given a user query (e.g., a short natural-language description of the functionality of the desired code snippet). Despite existing efforts to improve the effectiveness of code retrieval, two main issues still hinder accurate retrieval of satisfactory code fragments from large-scale repositories in response to complicated queries. First, existing approaches consider only shallow features of source code, such as method names and code tokens, while ignoring structured features such as abstract syntax trees (ASTs) and control-flow graphs (CFGs), which carry rich, well-defined semantics of the source code. Second, although deep-learning-based approaches represent source code well, they lack explainability, making it hard to interpret retrieval results and almost impossible to understand which features of the source code contribute most to the final results. To tackle these two issues, this paper proposes MMAN, a novel Multi-Modal Attention Network for semantic source code retrieval. A comprehensive multi-modal representation is developed to capture both unstructured and structured features of source code, with an LSTM for the sequential tokens of the code, a Tree-LSTM for its AST, and a GGNN (Gated Graph Neural Network) for its CFG. Furthermore, a multi-modal attention fusion layer assigns weights to the different parts of each modality of the source code and then integrates them into a single hybrid representation. Comprehensive experiments and analysis on a large-scale real-world dataset show that the proposed model accurately retrieves code snippets and outperforms state-of-the-art methods.
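The attention-fusion step the abstract describes can be sketched in a few lines: each modality encoder (LSTM, Tree-LSTM, GGNN) yields one vector, a learned scoring vector produces softmax attention weights, and the weighted sum is the hybrid representation. The following is a minimal illustrative sketch, not the authors' implementation; the encoder outputs and the scoring vector `w` are stand-in random values.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding size (illustrative only)

# Stand-ins for the three modality encoders named in the abstract:
# an LSTM over code tokens, a Tree-LSTM over the AST, a GGNN over the CFG.
h_tok = rng.normal(size=d)
h_ast = rng.normal(size=d)
h_cfg = rng.normal(size=d)
modalities = np.stack([h_tok, h_ast, h_cfg])  # shape (3, d)

# Attention fusion: score each modality with a (hypothetical) learned
# vector w, normalize the scores with a softmax, and take the weighted sum.
w = rng.normal(size=d)
scores = modalities @ w                        # shape (3,)
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                           # attention weights, sum to 1
hybrid = alpha @ modalities                    # shape (d,): fused hybrid vector

print(alpha, hybrid.shape)
```

The attention weights `alpha` are also what gives the model its explainability: they indicate how much each modality (tokens, AST, CFG) contributed to the final representation.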

Tue 12 Nov
Times are displayed in time zone: (GMT-07:00) Tijuana, Baja California

10:40 - 12:20: Papers - AI and SE at Cortez 2&3
Chair(s): Kaiyuan Wang (Google, Inc.)

10:40 - 11:00 Talk
Kang Hong Jin (School of Information Systems, Singapore Management University), Tegawendé F. Bissyandé (SnT, University of Luxembourg), David Lo (Singapore Management University)
Pre-print

11:00 - 11:20 Talk
Yao Wan (Zhejiang University), Jingdong Shu (Zhejiang University), Yulei Sui (University of Technology Sydney, Australia), Guandong Xu (University of Technology Sydney), Zhou Zhao (Zhejiang University), Jian Wu (Zhejiang University), Philip Yu (University of Illinois at Chicago)

11:20 - 11:40 Talk
Christoph Gladisch (Corporate Research, Robert Bosch GmbH), Thomas Heinz (Corporate Research, Robert Bosch GmbH), Christian Heinzemann (Corporate Research, Robert Bosch GmbH), Jens Oehlerking (Corporate Research, Robert Bosch GmbH), Anne von Vietinghoff (Corporate Research, Robert Bosch GmbH), Tim Pfitzer (Robert Bosch Automotive Steering GmbH)

11:40 - 12:00 Talk (Journal First)
Yan Xiao (Department of Computer Science, City University of Hong Kong), Jacky Keung (Department of Computer Science, City University of Hong Kong), Kwabena E. Bennin (Blekinge Institute of Technology, SERL Sweden), Qing Mi (Department of Computer Science, City University of Hong Kong)
Link to publication

12:00 - 12:10 Talk
Nghi Duy Quoc Bui (Singapore Management University, Singapore), Yijun Yu (The Open University, UK), Lingxiao Jiang (Singapore Management University)
Pre-print

12:10 - 12:20 Demonstration
Xiaoning Du (Nanyang Technological University), Xiaofei Xie (Nanyang Technological University), Yi Li (Nanyang Technological University), Lei Ma (Kyushu University), Yang Liu (Nanyang Technological University, Singapore), Jianjun Zhao (Kyushu University)