NPEDetector: Null dereference verification for Java programs

NPEDetector is a tool for sound, demand-driven, scalable null-dereference analysis of Java programs. The tool uses a backwards, abstract-interpretation-based weakest-preconditions analysis. The underlying approach, as well as empirical results obtained by applying the tool to real Java programs, are described in this publication.
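
As a purely illustrative sketch (the class and method names below are invented, not taken from the tool or the paper), here is the kind of dereference site such an analysis must reason about; a weakest-preconditions analysis works backwards from the dereference at the end of titleLength to establish that no feasible path reaches it with a null value:

    class Catalog {
        private final java.util.Map<String, String> entries = new java.util.HashMap<>();

        String lookup(String key) {
            return entries.get(key);   // may return null if the key is absent
        }

        int titleLength(String key) {
            String s = lookup(key);
            if (s == null) {
                return 0;              // guard: the dereference below cannot see a null value
            }
            return s.length();         // dereference site the analysis must prove safe
        }
    }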

The tool is available for download as a VirtualBox VM image.

Ross: Verifying safety of Java programs using points-to analysis

Ross is a tool that uses refinement-based points-to analysis to conservatively verify that Java programs are safe from low-level errors. Specifically, the tool identifies all potentially unsafe downcast sites in a given program, and identifies all allocation sites that allocate only immutable objects. The underlying approach, as well as empirical results obtained by applying the approach to real Java programs, are described in this publication.
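
As a purely illustrative sketch (the Box and Client classes below are invented for exposition), here is the flavor of the downcast-safety question: whether the cast in getFirst is safe depends on which allocation sites can flow into the list, which is exactly what a points-to analysis tracks:

    import java.util.ArrayList;
    import java.util.List;

    class Box {
        private final List<Object> items = new ArrayList<>();

        void add(Object o) {
            items.add(o);
        }

        String getFirst() {
            return (String) items.get(0);   // downcast site: safe only if every object added is a String
        }
    }

    class Client {
        public static void main(String[] args) {
            Box b = new Box();
            b.add("hello");                      // only String objects reach the list here
            System.out.println(b.getFirst());    // so the cast cannot fail in this program
            // If some other caller added a non-String, the cast site would be potentially unsafe.
        }
    }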

The tool is available for download as a Java application.

DFAS: Static analysis of asynchronous systems

Asynchronous message-passing systems are employed frequently to implement distributed mechanisms, protocols, and processes. This work addresses the problem of precise data flow analysis for such systems. To obtain good precision, data flow analysis of message-passing systems needs to skip execution paths that read more messages than the number of messages sent so far in the path, as such paths are infeasible at run time. Current data flow analysis techniques for such systems either compromise on precision by traversing infeasible paths in addition to feasible paths, or traverse only feasible paths but admit only finite abstract analysis domains. Our approach elides a large class of infeasible paths, and generalizes the state of the art by admitting infinite abstract analysis domains (such as constant propagation). Our implementation also supports an application of data flow analysis, namely, the verification of assertions.
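
The following toy sketch (the Channel and Protocol classes are invented for exposition and are not part of the tool) illustrates the feasibility constraint and the assertion-verification use case: along any run-time path, a receive can succeed only if a matching send has already occurred, so an analysis that also explores paths with more receives than sends needlessly loses precision for assertions like the one below:

    import java.util.ArrayDeque;
    import java.util.Queue;

    class Channel {
        private final Queue<Integer> buffer = new ArrayDeque<>();

        void send(int v)  { buffer.add(v); }
        Integer receive() { return buffer.poll(); }   // yields null when nothing has been sent
    }

    class Protocol {
        public static void main(String[] args) {
            Channel c = new Channel();
            c.send(42);                    // one message sent on this path
            Integer x = c.receive();       // feasible: a send precedes this receive
            assert x != null && x == 42;   // provable by constant propagation over feasible paths only
        }
    }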

Our software tool is available for download here.

PageModeler: Analysis of JSP-based web applications

Web applications are difficult to analyze using code-based tools because data flow and control flow through the application occur via both server-side code and client-side web pages. To address this challenge, we propose a static-analysis approach that automatically constructs a “model” of each view/page in a given application from the given view specifications. The server-side code, in conjunction with the page models, becomes a standard (non-web) program, and is thus amenable to analysis using standard code-based tools. This work is described in our paper accepted at ISSTA 2017.
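
The snippet below is a hypothetical illustration (the servlet, the greet.jsp page, and the GreetPageModel class are all invented for exposition, and the model shown is not the tool's actual output). It shows why cross-tier flow defeats code-based tools: the servlet defines a request attribute whose only use is inside the JSP view, so the def-use chain spans Java code and a page specification; a Java stand-in for the page makes that flow visible to ordinary analyses:

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class GreetServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            String name = req.getParameter("name");
            req.setAttribute("greeting", "Hello, " + name);      // value consumed only by the JSP view
            req.getRequestDispatcher("/greet.jsp").forward(req, resp);
        }
    }

    // Illustrative stand-in for greet.jsp (which would render ${greeting}):
    // reading the attribute back in Java exposes the servlet-to-page data flow.
    class GreetPageModel {
        static String render(HttpServletRequest req) {
            Object greeting = req.getAttribute("greeting");      // use of the value defined in doGet
            return "<html><body>" + greeting + "</body></html>";
        }
    }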

We have implemented our approach in the context of J2EE applications that use JSP for view specifications. We demonstrate the versatility and usefulness of our approach by applying three standard analysis tools to the programs that result from our approach: a concolic-execution-based model checker (JPF), a dynamic fault-localization tool (Zoltar), and a static slicer (Wala).

The artifact we submitted for the ISSTA 2017 Artifact Evaluation, which contains the code of our tool as well as our evaluation results, is available for download.

Matching service descriptions with existing code

The objective of this project is to develop tools to match a service in a domain model with fragments of code in a given existing application that potentially implement this service. The applications of this work include (a) identifying portions of code in the application that satisfy a business requirement, (b) transforming an existing monolithic application into a service-oriented architecture (SOA), and (c) migrating custom business logic from a custom application to a standard package.

We describe an initial semi-automated technique for this problem in this publication.

A subsequent automated technique for this problem is described in this publication. We provide here data from two case studies that we performed using our technique, in which we matched two real Java applications with two real domain models.

A third, more precise automated technique for this problem is described in this publication. We provide here data from two similar case studies that we performed using this technique.

Memory optimizations in Java programs

Java programs typically create large numbers of duplicated (i.e., content-wise identical) objects. If these objects are immutable, memory can be saved by modifying the program to discard a duplicated object right after it is created and to reuse an existing, identical object instead. We have devised an approach and a tool to identify allocation sites that create large numbers of duplicated objects, and to estimate the memory savings obtainable by inserting de-duplication code right after these sites. Our approach is described in this paper. Detailed data from the experiments described in the paper are available here.
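
The sketch below is a minimal illustration of the transformation (the PointFactory and Point classes, and the unbounded HashMap cache, are invented for exposition; a real deployment would bound the cache or use weak references). Right after the allocation site, the freshly created object is looked up in a cache of canonical instances, and the duplicate is discarded immediately if an equal object already exists:

    import java.util.HashMap;
    import java.util.Map;

    final class Point {                              // immutable, so instances can be safely shared
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
        @Override public boolean equals(Object o) {
            return o instanceof Point && ((Point) o).x == x && ((Point) o).y == y;
        }
        @Override public int hashCode() { return 31 * x + y; }
    }

    class PointFactory {
        private static final Map<Point, Point> cache = new HashMap<>();

        static Point makePoint(int x, int y) {
            Point p = new Point(x, y);               // allocation site that creates many duplicates
            Point canonical = cache.get(p);          // inserted de-duplication code starts here
            if (canonical != null) {
                return canonical;                    // reuse the existing object; p becomes garbage
            }
            cache.put(p, p);
            return p;
        }
    }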

Detecting clones in source code

This paper describes our approach to detecting code clones using slicing, while this paper describes our approach to eliminating clones by factoring out duplicated code into procedures.
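
As a small invented before/after example (not drawn from either paper), the two summation loops below are the kind of near-identical fragments that clone detection surfaces, and the refactored version eliminates the clone by factoring the duplicated code into a single procedure:

    class Report {
        // Before: two clones of the same summation loop.
        int report(int[] sales, int[] returns) {
            int s = 0;
            for (int v : sales)   { s += v; }
            int r = 0;
            for (int v : returns) { r += v; }
            return s - r;
        }

        // After: the duplicated fragment extracted into one procedure.
        int reportFactored(int[] sales, int[] returns) {
            return sum(sales) - sum(returns);
        }

        private int sum(int[] values) {
            int total = 0;
            for (int v : values) { total += v; }
            return total;
        }
    }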

This is our software for detecting clones in C code.