ASE 2019
Sun 10 - Fri 15 November 2019, San Diego, California, United States

Deep neural networks are increasingly used in software engineering and program analysis tasks. These models usually take a program as input and make predictions about it, e.g., whether it contains a bug; we call them neural program analyzers. The reliability of neural program analyzers can impact the reliability of the encompassing analyses. In this paper, we describe our ongoing efforts to develop effective techniques for testing neural program analyzers, and we discuss the challenges involved in building such tools as well as our future plans. In a preliminary experiment on a neural model recently proposed in the literature, we found that the model is very brittle: simple perturbations of the input can cause it to make mistakes in its predictions.
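
To make the brittleness claim concrete, here is a minimal sketch of the kind of semantics-preserving perturbation the abstract alludes to: renaming a single identifier and checking whether a model's prediction survives. The predict callable, the rename_variable helper, and the toy brittle_model are illustrative assumptions for this sketch, not code or results from the paper.

import re

def rename_variable(source: str, old_name: str, new_name: str) -> str:
    # Semantics-preserving perturbation: rename one identifier.
    # Naive word-boundary substitution; a real tool would rename via the AST.
    return re.sub(rf"\b{re.escape(old_name)}\b", new_name, source)

def is_robust(predict, source: str, old_name: str, new_name: str) -> bool:
    # A robust analyzer's prediction should not change under a rename
    # that leaves the program's semantics intact.
    return predict(source) == predict(rename_variable(source, old_name, new_name))

# Toy stand-in for a neural program analyzer that keys on an identifier name,
# illustrating how a brittle model flips its answer after a simple rename.
snippet = "def sum_list(items):\n    total = 0\n    for item in items:\n        total += item\n    return total\n"
brittle_model = lambda src: "sum" if "total" in src else "other"
print(is_robust(brittle_model, snippet, "total", "acc"))  # prints False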

Wed 13 Nov

Displayed time zone: Tijuana, Baja California

15:20 - 16:00  Poster Session: Late Breaking Results at Kensington Ballroom
15:20 (40m) Poster: Recommendation of Exception Handling Code in Mobile App Development. Late Breaking Results. Pre-print.

15:20 (40m) Poster: LVMapper: A Large-variance Clone Detector Using Sequencing Alignment Approach. Late Breaking Results. Ming Wu, Pengcheng Wang (University of Science and Technology of China), Kangqi Yin, Haoyu Cheng, Yun Xu (University of Science and Technology of China), Chanchal K. Roy (University of Saskatchewan). Pre-print.

15:20 (40m) Poster: K-CONFIG: Using Failing Test Cases to Generate Test Cases in GCC Compilers. Late Breaking Results. Pre-print, Media Attached.

15:20 (40m) Poster: An Empirical Study on the Characteristics of Question-Answering Process on Developer Forums. Late Breaking Results. Yi Li (Nanyang Technological University), Shaohua Wang (New Jersey Institute of Technology, USA), Tien N. Nguyen (University of Texas at Dallas), Son Nguyen (The University of Texas at Dallas), Xinyue Ye, Yan Wang. Pre-print.

15:20 (40m) Poster: Testing Neural Programs. Late Breaking Results. Md Rafiqul Islam Rabin (University of Houston), Ke Wang (Visa Research), Mohammad Amin Alipour. Pre-print, Media Attached.

15:20 (40m) Poster: Self Learning from Large Scale Code Corpus to Infer Structure of Method Invocations. Late Breaking Results. Pre-print.

15:20 (40m) Poster: Data Sanity Check for Deep Learning Systems via Learnt Assertions. Late Breaking Results. Haochuan Lu (Fudan University), Huanlin Xu, Nana Liu, Yangfan Zhou (Fudan University), Xin Wang. Pre-print.

15:20 (40m) Poster: Software Engineering for Fairness: A Case Study with Hyperparameter Optimization. Late Breaking Results. Joymallya Chakraborty (North Carolina State University), Tianpei Xia, Fahmid M. Fahid, Tim Menzies (North Carolina State University). Pre-print.

15:20 (40m) Poster: API Misuse Correction: A Statistical Approach. Late Breaking Results. Pre-print.

15:20 (40m) Poster: Should We Add Repair Time to an Unfixed Bug? An Exploratory Study of Automated Program Repair on 2980 Small-Scale Programs. Late Breaking Results. Pre-print.

15:20 (40m) Poster: Learning test traces. Late Breaking Results. Pre-print.

15:20 (40m) Poster: The Dynamics of Software Composition Analysis. Late Breaking Results. Pre-print.

15:20 (40m) Poster: A Process Mining based Approach to Improving Defect Detection of SysML Models. Late Breaking Results. Mounifah Alenazi, Nan Niu (University of Cincinnati), Juha Savolainen (Danfoss). Pre-print.

15:20 (40m) Poster: Open-Source Projects and their Collaborative Development Workflows. Late Breaking Results. Panuchart Bunyakiati (Kasetsart University), Usa Sammapun (Kasetsart University). Pre-print.

15:20 (40m) Poster: Detecting Deep Neural Network Defects with Data Flow Analysis. Late Breaking Results. Jiazhen Gu, Huanlin Xu, Yangfan Zhou (Fudan University), Xin Wang, Hui Xu, Michael Lyu (The Chinese University of Hong Kong). Pre-print.

15:20 (40m) Poster: On building an automated responding system for app reviews: What are the characteristics of reviews and their responses? Late Breaking Results. Pre-print.