Statistical Model Checking of Response Times for Different System Deployments

Bernhard K. Aichernig, Severin Kann, Richard Alexander Schumi

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review


Performance testing is becoming increasingly important for interactive systems. Evaluating their performance with respect to user expectations is complex, especially for different system deployments. Various load-testing approaches and performance-simulation methods aim at such analyses. However, these techniques have certain disadvantages, like a high testing effort for load testing, and a questionable model accuracy for simulation methods. Hence, we propose a combination of both techniques. We apply statistical model checking with a learned timed model and evaluate the results on the real system with hypothesis testing. Moreover, we check the established hypotheses of a reference system on various system deployments (configurations), like different hardware or network settings, and analyse the influence on the performance. Our method is realised with a property-based testing tool that is extended with algorithms from statistical model checking. We illustrate the feasibility of our technique with an industrial case study of a web application.
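The hypothesis-testing step mentioned in the abstract can be illustrated with the sequential probability ratio test (SPRT), a standard algorithm in statistical model checking for deciding whether a response-time property holds with at least a given probability. The function below is a minimal sketch; the probability thresholds, error bounds, 200 ms deadline, and simulated response times are illustrative assumptions, not values from the paper.

```python
import math
import random

def sprt(outcomes, p0, p1, alpha=0.05, beta=0.05):
    """SPRT deciding between H0: p <= p0 and H1: p >= p1 (with p0 < p1)
    for a stream of Bernoulli outcomes (1 = response met the time bound).
    alpha and beta bound the probabilities of a wrong decision.
    Returns the verdict and the number of samples consumed."""
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    llr = 0.0                              # running log-likelihood ratio
    n = 0
    for x in outcomes:
        n += 1
        if x:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr <= lower:
            return "H0", n
        if llr >= upper:
            return "H1", n
    return "undecided", n

# Illustrative use: simulated response times against an assumed 200 ms bound,
# testing whether at least 90% of responses meet it (indifference region 0.8-0.9).
random.seed(1)
times = [random.expovariate(1 / 80) for _ in range(10000)]  # mean 80 ms
verdict, n = sprt((t <= 200 for t in times), p0=0.8, p1=0.9)
print(verdict, n)
```

A key practical property of the SPRT, which matches the paper's goal of reducing testing effort, is that it stops sampling as soon as the accumulated evidence crosses one of the two decision bounds, so a clear-cut system typically needs far fewer measurements than a fixed-sample-size test.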
Original language: English
Title of host publication: Dependable Software Engineering. Theories, Tools, and Applications – 4th International Symposium, SETTA 2018
Publication status: Published - 2018
Event: 4th International Symposium on Dependable Software Engineering (SETTA 2018) - Beijing, China
Duration: 4 Sept 2018 – 6 Sept 2018


Conference: 4th International Symposium on Dependable Software Engineering (SETTA 2018)

Fields of Expertise

  • Information, Communication & Computing
