ASE 2019
Sun 10 - Fri 15 November 2019 San Diego, California, United States
Mon 11 Nov 2019 11:30 - 12:00 at Cortez 1B - SESSION I

In recent years, various benchmark suites have been developed to evaluate the efficacy of Android security analysis tools. The choice of benchmark suites used in tool evaluations is often based on their availability and popularity rather than on their characteristics and relevance, in part because little information about those characteristics and relevance is available.

In this context, we empirically evaluated four Android-specific benchmark suites: DroidBench, Ghera, IccBench, and UBCBench. For each benchmark suite, we identified the APIs used by the suite that were discussed on Stack Overflow in the context of Android app development and measured the usage of these APIs in a sample of 227K real-world apps (coverage). We also identified security-related APIs used in real-world apps but not in any of the above benchmark suites to assess the opportunities to extend benchmark suites (gaps).
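The coverage and gap measurements described above can be framed as simple set operations over API identifiers. The following is an illustrative sketch under that framing, not the paper's actual implementation; all function names and API sets are hypothetical.

```python
# Illustrative sketch (hypothetical, not the paper's implementation):
# coverage and gaps expressed as set operations over API identifiers.

def coverage_and_gaps(suite_apis, so_discussed_apis, app_apis, security_apis):
    """Return (coverage, gaps) for one benchmark suite.

    coverage: fraction of the suite's Stack Overflow-discussed APIs
              that also occur in the sampled real-world apps.
    gaps:     security-related APIs used in real-world apps but
              absent from the suite.
    """
    relevant = suite_apis & so_discussed_apis   # suite APIs discussed on SO
    covered = relevant & app_apis               # ...that real-world apps also use
    coverage = len(covered) / len(relevant) if relevant else 0.0
    gaps = (security_apis & app_apis) - suite_apis
    return coverage, gaps

# Hypothetical API sets for a single suite.
suite = {"WebView.loadUrl", "Intent.setData"}
so = {"WebView.loadUrl", "Intent.setData", "KeyStore.getKey"}
apps = {"WebView.loadUrl", "KeyStore.getKey"}
sec = {"KeyStore.getKey", "WebView.loadUrl"}

cov, gaps = coverage_and_gaps(suite, so, apps, sec)
# cov  -> 0.5 (one of the two SO-discussed suite APIs appears in apps)
# gaps -> {"KeyStore.getKey"}
```

In this toy instance, half of the suite's Stack Overflow-discussed APIs appear in the sampled apps, and one security-related API used by apps is missing from the suite, i.e., an opportunity to extend it.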

Mon 11 Nov
Times are displayed in the Tijuana, Baja California time zone.

11:00 - 12:30: SESSION IA-Mobile at Cortez 1B
11:00 - 11:30
Research paper
Olivier Le Goaer (LIUPPA, Université de Pau et des Pays de l'Adour)
Link to publication
11:30 - 12:00
Research paper
Joydeep Mitra (Kansas State University), Venkatesh-Prasad Ranganath (Kansas State University), Aditya Narkar
Pre-print
12:00 - 12:30
Research paper
Felix Pauck (Paderborn University, Germany), Shikun Zhang