
HybridTiger: Hybrid Model Checking and Domination-based Partitioning for Efficient Multi-Goal Test-Suite Generation

Abstract

In theory, software model checkers are well-suited for automated test-case generation: the idea is to perform (non-)reachability queries for the test goals and extract test cases from the resulting counterexamples. However, for realistic programs, even simple coverage criteria (e.g., branch coverage) force model checkers to deal with several hundred or even thousands of test goals. Processing these test goals in isolation with model-checking techniques does not scale. Therefore, our tool HybridTiger builds on recent ideas on multi-property verification. However, since additional properties (i.e., test goals) reduce the model checker's abstraction possibilities, we split the set of test goals into different partitions. In Test-Comp 2019, we applied a random partitioning strategy and used predicate analysis as the model-checking technique. In this year's edition, we improved our technique in two ways. First, we use post-dominance information in our partitioning strategy to group test goals that occur on similar paths. Second, we account for weaknesses of the predicate analysis and apply a hybrid software model-checking approach that switches between explicit model checking and predicate-based model checking on the fly. Our tool HybridTiger is integrated into the software analysis framework CPAchecker.
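
To illustrate the domination-based partitioning at the level of ideas, the following is a minimal, hypothetical Java sketch (the class, method, and string-based goal representation are invented for illustration; this is not the actual CPAchecker implementation). It assumes a precomputed post-dominator relation between test goals: if goal B post-dominates goal A, every terminating path through A also passes B, so both goals can be covered by the same test case and should land in the same partition.

  import java.util.*;

  // Hypothetical sketch of domination-based partitioning (not CPAchecker code).
  class DominationPartitioner {

    // postDominators.get(a) = set of goals that post-dominate goal a.
    static List<Set<String>> partition(Map<String, Set<String>> postDominators) {
      List<Set<String>> partitions = new ArrayList<>();
      for (String goal : postDominators.keySet()) {
        Set<String> home = null;
        // Join the first partition containing a goal related to this one
        // by post-dominance; such goals lie on the same program paths.
        for (Set<String> p : partitions) {
          for (String other : p) {
            if (postDominators.get(other).contains(goal)
                || postDominators.get(goal).contains(other)) {
              home = p;
              break;
            }
          }
          if (home != null) { break; }
        }
        if (home == null) {          // unrelated to all partitions so far
          home = new HashSet<>();
          partitions.add(home);
        }
        home.add(goal);
      }
      return partitions;
    }

    public static void main(String[] args) {
      Map<String, Set<String>> pd = new HashMap<>();
      pd.put("g1", new HashSet<>(Arrays.asList("g2"))); // g2 post-dominates g1
      pd.put("g2", new HashSet<>());
      pd.put("g3", new HashSet<>());                    // g3 is on a different path
      System.out.println(partition(pd)); // e.g. [[g1, g2], [g3]]
    }
  }

Under this grouping, a single (non-)reachability query per partition can serve several goals at once, which is the scaling benefit described above.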

Supplementary Data

  • HybridTiger artifact: Zip
  • Benchmark definition: XML
  • Results: TBD

Reproducing the Results

The submitted version of HybridTiger for Test-Comp is built from revision 32312 of the official CPAchecker repository, branch tigerIntegration2.
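
Assuming the standard layout of the public CPAchecker SVN repository (the branch path below follows that convention and should be checked against the repository; CPAchecker itself builds with ant), the submitted revision can be obtained and compiled roughly as follows:

  • svn checkout -r 32312 https://svn.sosy-lab.org/software/cpachecker/branches/tigerIntegration2 hybridtiger
  • cd hybridtiger && ant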

To run HybridTiger, enter the following command:

  • scripts/cpa.sh -benchmark -heap 10000M -tigertestcomp20 -spec spec.prp task

where spec is either coverage-error-call or coverage-branches, and task is the C or intermediate (.i) file to be analysed. HybridTiger prints statistics of the run to the console and writes test cases and test-suite metadata to the output folder. To reproduce the results, use a Linux system with a Java 8 runtime environment, BenchExec, and the SV-Benchmarks, and run BenchExec with the following files:
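
The concrete files are not repeated here; for orientation only, a BenchExec benchmark definition for such a run might look roughly as follows under standard BenchExec conventions (the file name hybridtiger.xml, the resource limits, and the task set are hypothetical placeholders; the official Test-Comp benchmark definition linked above is authoritative):

  <?xml version="1.0"?>
  <!-- Hypothetical sketch; limits and task sets are placeholders. -->
  <benchmark tool="cpachecker" timelimit="900 s" memlimit="15 GB" cpuCores="8">
    <rundefinition name="hybridtiger-testcomp20">
      <option name="-benchmark"/>
      <option name="-heap">10000M</option>
      <option name="-tigertestcomp20"/>
    </rundefinition>
    <tasks name="coverage-branches">
      <includesfile>sv-benchmarks/c/ReachSafety-Loops.set</includesfile>
      <propertyfile>sv-benchmarks/c/properties/coverage-branches.prp</propertyfile>
    </tasks>
  </benchmark>

BenchExec then executes all tasks under the configured resource limits and collects the results in its output directory:

  • benchexec hybridtiger.xml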