ASE 2019
Sun 10 - Fri 15 November 2019 San Diego, California, United States

Java Pathfinder is the annual workshop on technologies related to Java Pathfinder and similar program analysis tools and approaches.

Workshop program (tentative)

Start End Title
9:00 9:05 Opening
9:05 10:05 Invited talk: Elena Sherman.
Benchmark Engineering for Program Analysis
Abstract: In evidence-based program analysis research, benchmarks play an important role. The more realistic the programs contained in a benchmark, the stronger the conclusions researchers can draw. The researchers who drive the development of benchmarks might include programs relevant to their program analysis research agenda. Then it is unclear whether it is appropriate to reuse those benchmarks for a different program analysis. Thus, the problem that we would like to present in this talk is how to characterize a suitable benchmark for a given analysis. Moreover, what does “a suitable benchmark” even mean?
In this presentation we elaborate on this question and argue that there should be a systematic approach to engineering program benchmarks, similar to engineering software. Thus, it should include benchmark requirements, design, development, and verification steps. As the first step towards benchmark engineering, we present the PAClab infrastructure. Unlike other frameworks that obtain programs from software repositories, PAClab allows users to specify program characteristics and how to adapt selected programs to the targeted program analyzer. We conclude with an invitation to join us in this effort by using PAClab in your JPF/SPF evaluations and providing us feedback.
Bio: Dr. Elena Sherman is an assistant professor in computer science at Boise State University. She received her Ph.D. in CS from the University of Nebraska-Lincoln and joined Boise State in 2013. Her primary research focuses on improving the efficiency and precision of program analysis techniques. In particular, Dr. Sherman has published work on string constraint solvers, decision procedures, and data-flow analysis. In her research she experienced first-hand how scarce program analysis benchmarks are. To address this problem, in collaboration with Dr. Robert Dyer, she started developing a program analysis collaboratory (PAClab) that generates realistic benchmarks for target program analyzers.
10:05 10:30 Dinh Xuan Bach Le, Corina Pasareanu, Rohan Padhye, Willem Visser, Koushik Sen and David Lo.
SAFFRON: Adaptive Grammar-based Fuzzing for Worst-Case Analysis
10:30 11:00 Coffee break
11:00 11:25 Alyas Almaawi, Hayes Converse, Milos Gligoric, Sasa Misailovic and Sarfraz Khurshid.
Quantifying the Exploration of an Imperative Constraint Solver
11:25 11:50 Kyle Storey, Eric Mercer and Pavel Parizek.
A Sound Dynamic Partial Order Reduction Engine for Java Path Finder
11:50 12:10 Zhenbang Chen, Hengbiao Yu, Ji Wang and Wei Dong.
Symbolic Verification of Regular Properties for Java Programs (fast abstract)
12:10 12:30 Egor Namakonov, Eric Mercer, Pavel Parizek and Kyle Storey.
Symbolic data race detection for Habanero programs (fast abstract)
12:30 14:00 Lunch
14:00 15:00 Keynote: Simon Goldsmith.
Adventures in Commercial-Grade Static Analysis
Abstract: Building a commercial-grade static analysis presents a lot of interesting problems. Everything not forbidden is compulsory: language specifications are wonderful documents, but in reality anything the user’s compiler and runtime accepts is fair game. Analysis abstractions that scale “except in pathological cases” don’t scale: analyzing tens of thousands of code bases that routinely exceed millions of lines of code means that those pathological cases inevitably arise. Build a good analysis that runs overnight, and users will ask you to run it in their IDE for near-immediate feedback. A bug finding tool needs a low false positive rate, but a tool aimed at finding security vulnerabilities needs a low false negative rate. Only analyzing “source” code and only starting from main() is insufficient for understanding modern web and mobile applications: frameworks imply a different programming model with a lot of auto-magical program behavior, often including idiosyncratic configuration regimes and various template languages. We’ll talk about these problems and how we tackle them.
Bio: Simon Goldsmith has been working on the Coverity static analysis tool for more than ten years, and has been an architect for more than seven. Before that, he finished his Ph.D. at the University of California, Berkeley and his B.S. at Carnegie Mellon University. He lives in Oakland, California with his wife, two adorable knuckle-head dogs, and enough piles of books to constitute an earthquake hazard.
15:00 15:25 Yannic Noller, Hoang Lam Nguyen, Minxing Tang, Timo Kehrer and Lars Grunske.
Complete Shadow Symbolic Execution with Java PathFinder
15:25 16:00 Coffee break
16:00 16:25 Seemanta Saha, William Eiers, Ismet Burak Kadron, Lucas Bang and Tevfik Bultan.
Incremental Attack Synthesis
16:25 16:50 Lasse Berglund and Cyrille Artho.
Method summaries for JPF
16:50 17:10 Joshua Hooker, Peter Aldous, Eric Mercer, Benjamin Ogles, Kyle Storey and S. Jacob Powell.
JPF-HJ: A Tool for Task Parallel Program Analysis (fast abstract)
17:10 17:25 Overview of GSoC projects; closing.

Mon 11 Nov

Displayed time zone: Tijuana, Baja California

09:00 - 17:30
Java Pathfinder Workshop (JPF) at Hillcrest 1

Call for Papers (now closed)

Java Pathfinder is the annual workshop on technologies related to Java Pathfinder and similar program analysis tools and approaches.

We solicit regular paper submissions on existing research and applications related to Java Pathfinder (JPF) or its extensions. If the underlying research idea has been published in another venue, the paper needs to clarify the novel aspects it presents. We also solicit extended abstracts and position paper submissions on recent work or work in progress. We welcome comparative analysis papers that evaluate algorithms in JPF or its extensions against other relevant tools. The goal of the workshop is to encourage the flow of ideas relevant to JPF and Java/Android program analysis in general.

Topics of interest include the following:

  • JPF extensions or tools
  • JPF case studies
  • Position papers on JPF, such as future directions
  • Java program analysis or verification
  • Android program analysis or verification
  • General software verification and symbolic execution techniques or tools

Submission formats

Submissions are single-blind: author names will be visible to the reviewers in the PDF, but reviewer identities will not be disclosed to the authors.

We solicit two types of submissions to be uploaded on EasyChair/JPF2019:

  • Regular papers (submissions closed): Papers of at most 5 pages (incl. references) in the ACM SEN Proceedings format will be reviewed by the organizing committee. Accepted regular papers will be presented at the workshop and published in the ACM SIGSOFT Software Engineering Notes and the ACM Digital Library, as in previous years.
  • Extended (“fast”) abstracts: Abstracts of at most 2 pages will be selected by the organizing committee. Accepted abstracts will be presented at the workshop but NOT published.