AutoFocus: Interpreting Attention-based Neural Networks by Code Perturbation
Despite being adopted in software engineering tasks, deep neural networks are treated mostly as black boxes due to the difficulty of interpreting how the networks infer outputs from inputs. To address this problem, we propose AutoFocus, an automated approach for rating and visualizing the importance of input elements based on their effects on the outputs of the networks. The approach is built on our hypotheses that (1) attention mechanisms incorporated into neural networks can generate discriminative scores for various input elements and (2) these discriminative scores reflect the effects of input elements on the outputs of the networks. This paper verifies the hypotheses by applying AutoFocus to the task of algorithm classification (i.e., given a program's source code as input, determine the algorithm implemented by the program). AutoFocus systematically identifies and perturbs code elements in a program, and quantifies the effects of the perturbed elements on the network's classification results. In an evaluation on more than 1000 programs implementing 10 different sorting algorithms, we observe that the attention scores are highly correlated with the effects of the perturbed code elements. Such a correlation provides a strong basis for using attention scores to interpret the relations between code elements and the algorithm classification results of a neural network, and we believe that visualizing the code elements of an input program ranked by their attention scores can facilitate faster program comprehension with reduced code.
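A minimal sketch of the perturb-and-correlate idea described in the abstract, assuming a classifier that exposes per-class probabilities and one attention score per input token; the `predict_proba` and `attention_scores` interfaces and the `<unk>`-masking perturbation below are illustrative assumptions, not the authors' actual implementation:

    # Sketch only: `model` is assumed to expose per-class probabilities
    # and per-token attention scores; replacing a token with "<unk>" is
    # one possible perturbation, not necessarily the paper's.
    from scipy.stats import spearmanr

    def perturbation_effects(model, tokens, target_class, mask="<unk>"):
        """Effect of a token = drop in the target class's probability
        when that token is replaced by a mask token."""
        base = model.predict_proba(tokens)[target_class]
        effects = []
        for i in range(len(tokens)):
            perturbed = tokens[:i] + [mask] + tokens[i + 1:]
            effects.append(base - model.predict_proba(perturbed)[target_class])
        return effects

    def attention_effect_correlation(model, tokens, target_class):
        """Rank correlation between attention scores and perturbation
        effects; a high positive rho supports hypothesis (2) above."""
        attn = model.attention_scores(tokens)  # one score per token
        effects = perturbation_effects(model, tokens, target_class)
        rho, p_value = spearmanr(attn, effects)
        return rho, p_value

Under these assumptions, a strong rank correlation (rho) across many programs would indicate that attention scores are a faithful proxy for the measured perturbation effects.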
Tue 12 Nov (displayed time zone: Tijuana, Baja California)
10:40 - 12:20 | AI and SE (Research Papers / Journal First Presentations / Demonstrations) at Cortez 2&3 | Chair(s): Kaiyuan Wang (Google, Inc.)
10:40 (20m) Talk | Assessing the Generalizability of code2vec Token Embeddings (Research Papers) | Hong Jin Kang (School of Information Systems, Singapore Management University), Tegawendé F. Bissyandé (SnT, University of Luxembourg), David Lo (Singapore Management University) | Pre-print
11:00 (20m) Talk | Multi-Modal Attention Network Learning for Semantic Source Code Retrieval (Research Papers) | Yao Wan (Zhejiang University), Jingdong Shu (Zhejiang University), Yulei Sui (University of Technology Sydney, Australia), Guandong Xu (University of Technology Sydney), Zhou Zhao (Zhejiang University), Jian Wu (Zhejiang University), Philip Yu (University of Illinois at Chicago)
11:20 (20m) Talk | Experience Paper: Search-based Testing in Automated Driving Control Applications (ACM SIGSOFT Distinguished Paper Award) (Research Papers) | Christoph Gladisch (Corporate Research, Robert Bosch GmbH), Thomas Heinz (Corporate Research, Robert Bosch GmbH), Christian Heinzemann (Corporate Research, Robert Bosch GmbH), Jens Oehlerking (Corporate Research, Robert Bosch GmbH), Anne von Vietinghoff (Corporate Research, Robert Bosch GmbH), Tim Pfitzer (Robert Bosch Automotive Steering GmbH)
11:40 (20m) Talk | Machine Translation-Based Bug Localization Technique for Bridging Lexical Gap (Journal First Presentations) | Yan Xiao (Department of Computer Science, City University of Hong Kong), Jacky Keung (Department of Computer Science, City University of Hong Kong), Kwabena E. Bennin (Blekinge Institute of Technology, SERL Sweden), Qing Mi (Department of Computer Science, City University of Hong Kong) | Link to publication
12:00 (10m) Talk | AutoFocus: Interpreting Attention-based Neural Networks by Code Perturbation (Research Papers) | Nghi D. Q. Bui (Singapore Management University, Singapore), Yijun Yu (The Open University, UK), Lingxiao Jiang (Singapore Management University) | Pre-print
12:10 (10m) Demonstration | A Quantitative Analysis Framework for Recurrent Neural Network (Demonstrations) | Xiaoning Du (Nanyang Technological University), Xiaofei Xie (Nanyang Technological University), Yi Li (Nanyang Technological University), Lei Ma (Kyushu University), Yang Liu (Nanyang Technological University, Singapore), Jianjun Zhao (Kyushu University)