AI in software testing: Sii publishes a breakthrough Testing Lab – AI Edition report
23.04.2026
Artificial intelligence has firmly entered the daily work of IT teams, becoming a natural element of the environment rather than a temporary experiment. While its benefits seem intuitive in areas such as documentation analysis or creating manual test scenarios, the industry still lacks reliable data on test automation.
Sii conducted Testing Lab – AI Edition – a controlled research experiment aimed at measuring the real increase in productivity and verifying whether faster work comes at the expense of code quality. Read on to see the results and access the full report from the study.
20 experts, 8 hours, and a clash of two worlds
The experiment was designed as a controlled comparison of two working methods. The event involved 20 experts, divided into ten two-person teams with a similar level of experience. Participants were assigned to two groups:
- Team AI: Teams using coding assistants and LLM-based models.
- Team Oldschool: A group working with traditional methods, without the support of artificial intelligence.
Participants faced a “greenfield” challenge – they had to design and implement from scratch a framework for UI and API test automation for an e-commerce system in a Java, C#, and Playwright/Selenium stack. The work was supervised by a jury made up of Sii Poland experts: Krzysztof Bednarski, Tomasz Kuran, and Remigiusz Bednarczyk.
Why is it worth reading our report?
You will learn how much AI changes the productivity and quality of QA teams’ work. In the report, we present:
- The scale of the difference that cannot be ignored: AI teams delivered from 5 to nearly 200 tests. Teams without AI delivered from 5 to 8. This is not optimization, but a complete change in performance level.
- What happens to quality: We evaluated the code against 8 engineering criteria (including architecture, stability, test data, and diagnostics). AI not only accelerates work – in many areas, it improves the quality of the solution.
- Why experience matters even more today: The best results were achieved by teams working iteratively and consciously steering the model. AI does not level the playing field – it amplifies the best experts who use it the right way.
- Where AI fails: Models can “get stuck” in the wrong direction – for example, with dynamic selectors. Sii Poland experts show specific cases and their consequences.
- How to make decisions about AI implementation: The report provides a basis for building a strategy based on data, not on trends.
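The “dynamic selectors” pitfall mentioned above can be illustrated with a small sketch. The snippet below is purely hypothetical – the element ids and attribute values are invented for illustration, not taken from the study – and uses plain Java maps to simulate why a locator pinned to an auto-generated id breaks between page renders, while one anchored to a stable, team-controlled attribute such as `data-testid` does not:

```java
import java.util.Map;

// Hypothetical illustration of the "dynamic selector" pitfall:
// a locator that relies on an auto-generated id works on the first
// render but breaks when the framework regenerates the id, while a
// locator based on a stable data-testid attribute keeps matching.
public class SelectorStability {

    // Simulated attributes of the same "Add to cart" button across
    // two renders of an imaginary e-commerce page (invented values).
    static Map<String, String> firstRender =
            Map.of("id", "btn-x7f3a", "data-testid", "add-to-cart");
    static Map<String, String> secondRender =
            Map.of("id", "btn-q92kd", "data-testid", "add-to-cart");

    // A toy "locator": matches an element by a single attribute value.
    static boolean matches(Map<String, String> element,
                           String attribute, String expected) {
        return expected.equals(element.get(attribute));
    }

    public static void main(String[] args) {
        // Brittle: pinned to a generated id captured from one render.
        System.out.println(matches(firstRender, "id", "btn-x7f3a"));   // true
        System.out.println(matches(secondRender, "id", "btn-x7f3a"));  // false

        // Stable: anchored to an attribute the team controls.
        System.out.println(matches(firstRender, "data-testid", "add-to-cart"));  // true
        System.out.println(matches(secondRender, "data-testid", "add-to-cart")); // true
    }
}
```

In real Playwright or Selenium code the same idea applies: selectors built from generated ids or volatile DOM positions are exactly where a model can keep “repairing” a test in the wrong direction.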
Smart use of AI starts with knowledge – download the report
Do not base your strategy on assumptions – check the facts and prepare your team for a new era of testing. Download the full report and find out how to implement AI wisely in your projects!
TESTING LAB – AI EDITION
What really determines the success of testing in the era of LLMs? We explored this during a research experiment.
What’s next? Continuation of the experiment
The first edition showed very clearly: AI can provide a huge advantage, but the outcome is determined by how it is used. The differences between teams were so significant that the most important conclusion of the study was not the technology itself, but the approach to working with it.
That is why the next edition of Testing Lab will focus on finding the most effective ways to use AI. The Sii team wants to determine which methods, tools, and approaches actually accelerate work, maintain high code quality, and at the same time limit the costs of using the models.
If the first edition answered the question “does AI work?”, the next one will answer a much more important one: how to use AI to maximize the effect without compromising quality or driving up costs.