Workshop Agenda

Note: LTB 2026 is scheduled for Monday, May 4, 2026.

Workshop Program
8:50 - 12:30 Joint keynote and program with WEPPE
https://esulabsolutions.godaddysites.com/sponsored-events
12:30 - 14:00 Lunch
14:00 - 14:15 LTB 2026 Opening Remarks
14:15 - 14:40 Paper Presentation Daniele Di Pompeo (University of L'Aquila), Andrea Reale (Sapienza Università di Roma), Michele Tucci (University of L'Aquila)
Performance Evolution of Jakarta EE Application Servers
14:40 - 15:05 Paper Presentation Iain Dixon (Newcastle University), Matthew Forshaw (Newcastle University and The Alan Turing Institute), Joe Matthews (Newcastle University)
Monitor, Mitigate, Moderate: Backpressure in Stream Benchmark Generators
15:05 - 15:30 Paper Presentation David Georg Reichelt (Lancaster University Leipzig), Juozas Skarbalius (Lancaster University Leipzig)
Benchmarking Change Detection Exactness and Overhead of Instrumentation and Sampling
15:30 - 16:00 Coffee Break
16:00 - 16:30 Paper Presentation Dan Eidelman (Redis)
From Manual Benchmarking to Continuous Performance Validation in CI
16:30 - 17:00 Paper Presentation Iosif Itkin (Exactpro Systems LLC), Alyona Bulda (Exactpro Systems LLC)
Beyond message injection: advanced load testing capabilities for trading systems with th2-shark
17:00 - 17:05 Wrap Up and Conclusion

Call for Papers

Software systems (e.g., smartphone apps, desktop applications, telecommunication infrastructures, and enterprise systems) have strict requirements on software performance. Failing to meet these requirements may cause business losses, customer defection, brand damage, and other serious consequences. In addition to conventional functional testing, the performance of these systems must be verified through load testing or benchmarking to ensure quality of service.

Load testing examines the behavior of a system by simulating hundreds or thousands of users performing tasks at the same time. Benchmarking compares a system's performance against that of other similar systems in the domain. The workshop is not limited to traditional load testing; it is open to any ideas for re-inventing and extending load testing, as well as any other way to ensure system performance and resilience under load, including any kind of performance testing, resilience / reliability / high availability / stability testing, operational profile testing, stress testing, A/B and canary testing, volume testing, and chaos engineering.
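For readers new to the area, the following minimal Python sketch illustrates the core idea of a load test: a pool of simulated users issues concurrent requests while response times are recorded. The target URL, user count, and request count are hypothetical placeholders, and the sketch stands in for no particular tool; real load tests involve realistic workload models, ramp-up phases, and far more careful measurement.

    import math
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    TARGET_URL = "http://localhost:8080/"  # placeholder system under test
    CONCURRENT_USERS = 100                 # simulated users running in parallel
    REQUESTS_PER_USER = 10                 # tasks each simulated user performs

    def simulated_user(user_id):
        """One simulated user: issue requests in sequence, timing each one."""
        latencies = []
        for _ in range(REQUESTS_PER_USER):
            start = time.perf_counter()
            with urllib.request.urlopen(TARGET_URL) as response:
                response.read()
            latencies.append(time.perf_counter() - start)
        return latencies

    if __name__ == "__main__":
        # Run all simulated users concurrently and pool their measurements.
        with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
            results = list(pool.map(simulated_user, range(CONCURRENT_USERS)))
        all_latencies = sorted(t for user in results for t in user)
        # Nearest-rank 95th percentile of response time.
        p95 = all_latencies[math.ceil(0.95 * len(all_latencies)) - 1]
        print(f"requests: {len(all_latencies)}  p95 latency: {p95:.3f}s")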

Load testing and benchmarking software systems are difficult tasks that require a deep understanding of the system under test and of customer behavior. Practitioners face many challenges, such as tooling (choosing and implementing the testing tools), environments (software and hardware setup), and time (limited time to design, test, and analyze). Yet little research in the software engineering domain addresses this topic.

Adjusting load testing to recent industry trends, such as cloud computing, agile / iterative development, continuous integration / delivery, microservices, serverless computing, AI/ML services, and containers, poses major challenges that are not yet fully addressed.

This one-day workshop brings together software testing and software performance researchers, practitioners, and tool developers to discuss the challenges and opportunities of conducting research on load testing and benchmarking software systems. Our ultimate goal is to grow an active community around this important and practical research topic.

We solicit two tracks of submissions:

  1. Research or industry papers
  2. Presentation track for industry or research talks

Research/industry papers should follow the standard ACM SIG proceedings format and must be submitted electronically via HotCRP. Extended abstracts for the presentation track must also be submitted via HotCRP, as "abstract only" submissions. Accepted papers will be published in the ICPE 2026 Companion Proceedings. Submissions can be research papers, position papers, case studies, or experience reports addressing issues in load testing and benchmarking software systems.


Instructions for Authors from ACM

By submitting your article to an ACM Publication, you are hereby acknowledging that you and your co-authors are subject to all ACM Publications Policies, including ACM's new Publications Policy on Research Involving Human Participants and Subjects. Alleged violations of this policy or any ACM Publications Policy will be investigated by ACM and may result in a full retraction of your paper, in addition to other potential penalties, as per ACM Publications Policy.

Authors of accepted papers are required to obtain an ORCID ID in order to complete the publication process. ACM uses ORCID IDs to ensure proper attribution, improve author discoverability, and support accurate name normalization across its digital library. Please note that double-blind reviewing will not be enforced.

ORCID IDs are not required for the presentation-only track.

Important Dates

Paper Track (research and industry papers):

[Submission Link]
Abstract submission (optional): January 26, 2026, AOE;
Paper submission: January 26, 2026, AOE;
Author notification: February 27, 2026;
Camera-ready version: March 4, 2026

Presentation Track:

Extended abstract submission: February 16, 2026, AOE;
Author notification: February 27, 2026;
Workshop date: May 4, 2026


Organization:

Chairs:

Alexander Podelko Amazon, USA
Alireza Pourali York University, Canada
Kundi Yao Ontario Tech University, Canada


Program Committee:

Daniel Seybold benchANT GmbH, Germany
Daniele Di Pompeo University of L'Aquila, Italy
Filipe Costa Oliveira Redis, Portugal
Gerson Sunyé LS2N - Nantes Université, France
Junjielong Xu The Chinese University of Hong Kong, Shenzhen, China
Lizhi Liao Memorial University of Newfoundland, Canada
Qiaolin Qin Polytechnique Montreal, Canada
Sai Sindhur Malleni Nvidia, USA
Yiming Tang Rochester Institute of Technology, USA
Zhenhao Li York University, Canada


Past LTB Workshops: