Last year we implemented a Session Based Testing approach across all our development teams at Brandwatch. In this post I’ll explain what Session Based Testing (SBT) is, how we implement it, and when it’s suitable.
Session Based Testing (SBT) is a form of exploratory testing used for rapid testing. The simplest definition of exploratory testing is “learning, test design and test execution at the same time”. This is the opposite of scripted testing, where you have predefined test procedures.
Exploratory testing is sometimes confused with "ad hoc" testing. Ad hoc testing normally refers to a process of improvised bug searching, which, by definition, anyone can do.
We become more exploratory when we can't tell what tests should be run in advance of the test cycle, or when we haven't yet had the opportunity to create those tests. If we are running scripted tests and new information comes to light that suggests a better test strategy, we may switch to an exploratory mode.
Why did we choose Session Based Testing at Brandwatch?
Every day we need to know what we tested, what we found, and what our priorities should be for further testing. To get that information, we need each tester to be a disciplined, efficient communicator. Then, we need some way to summarise that information for the team lead.
Testers do a lot of things during the day that aren’t just manual testing. If we want to track the testing itself, we need a way to distinguish manual testing from everything else. SBT focuses attention on the testing that’s actually performed, not just on test case results. It also encourages testers to modify existing tests and add new tests to meet their objectives as they test.
We create at least one Mission per two-week sprint. A Mission defines what we should test and what problems we are looking for. Each Mission contains several Sessions (uninterrupted blocks of reviewable test effort, where testers are not distracted by email, meetings, chatting or telephone calls). "Reviewable" means each session produces a report, called a session sheet.
The Session report can be examined by a third-party, such as the product team leader, test manager or other test engineers, and provides information about what happened.
Test sessions are separated into three areas:
- test design and execution
- bug investigation and reporting
- session setup
More than one area can be undertaken in a single session.
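As a rough illustration of how a session splits across these areas (the area names map to the list above, but the minute values are hypothetical, and Python is used here purely for illustration):

```python
# Hypothetical minutes a tester logged against each of the three areas
# during one 90-minute session.
minutes = {
    "test_design_and_execution": 60,
    "bug_investigation_and_reporting": 20,
    "session_setup": 10,
}

total = sum(minutes.values())
# Percentage of the session spent in each area, rounded to whole percent.
breakdown = {area: round(100 * m / total) for area, m in minutes.items()}
print(breakdown)
```

Reporting the split this way makes sessions comparable: a session dominated by setup or bug investigation tells the reviewer something different from one spent mostly on test design and execution.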
We defined three kinds of charters for sessions: long (2 hours), normal (1.5 hours), and short (45 minutes). What happens in each session depends on the tester and the charter of that session.
Test sessions increase the time available for testing by improving flexibility and responsiveness, whilst reducing the time spent on test planning and documentation. That makes them valuable for both traditional and Agile testing.
Each session is debriefed, and the debriefing occurs as soon as possible after the session ends.
Here you can find a checklist for debriefing.
While reviewing the session sheets (reports) we might discover additional risks or areas that need to be covered by tests. The debriefing also increases our testing knowledge, with each team member benefiting as the session reports are shared with everyone. This helps us build better tests.
Another objective is to provide feedback and coaching to the tester. The debriefings tell us how much can be done in a test session, and by tracking how many sessions are actually completed over a period of time, we can estimate the amount of work involved in a test cycle and predict how long testing will take, even though we have not planned the work in detail.
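A minimal sketch of that estimation, assuming we know our recent session counts per sprint (all the numbers below are made up for illustration):

```python
# Completed sessions in the last three sprints (hypothetical figures).
sessions_per_sprint = [9, 11, 10]
# Average sessions we actually get through per sprint.
velocity = sum(sessions_per_sprint) / len(sessions_per_sprint)

# Sessions we estimate the upcoming test cycle requires (also hypothetical).
cycle_sessions = 25
# Predicted duration of the cycle, in sprints.
sprints_needed = cycle_sessions / velocity

print(f"velocity: {velocity:.1f} sessions/sprint")
print(f"estimated cycle length: {sprints_needed:.1f} sprints")
```

The point is that the session, not the test case, becomes the unit of estimation: we count sessions done, not scripts executed.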
"Opportunity" vs "Charter"
Testers can also report the portion of their time they spend "on charter" versus "on opportunity". Opportunity testing is any testing that doesn’t fit the charter of the session. Since we’re doing exploratory testing, we remind and encourage testers that it’s okay to divert from their charter if they stumble into an off-charter problem that looks important.
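Computing that split is simple; a sketch with hypothetical timings (the minute values are invented for illustration):

```python
# Hypothetical minutes logged by the tester for one normal (90-minute) session.
charter_minutes = 70      # time spent on the session's stated charter
opportunity_minutes = 20  # off-charter testing that looked important

total = charter_minutes + opportunity_minutes
on_charter_pct = 100 * charter_minutes / total

print(f"on charter: {on_charter_pct:.0f}%")
```

A session with a low on-charter percentage isn’t necessarily bad — it may mean the charter led the tester to a richer problem area — but the number makes the diversion visible at debrief time.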
The entire session report consists of these sections:
- Session charter (includes a mission statement and areas to be tested, any other descriptions)
- Tester name
- Date and time started (including the type of session)
- Data files (attachments)
- Test notes
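The sections above can be sketched as a simple data structure (the field names and example values are our own illustration, not a prescribed schema):

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class SessionSheet:
    """One reviewable session report, mirroring the sections listed above."""
    charter: str                 # mission statement, areas to be tested, descriptions
    tester: str                  # tester name
    started: datetime            # date and time started
    session_type: str            # "short", "normal", or "long"
    data_files: list = field(default_factory=list)  # attachment paths
    test_notes: str = ""

# Example sheet with hypothetical values.
sheet = SessionSheet(
    charter="Explore the dashboard export flow for data-loss problems",
    tester="A. Tester",
    started=datetime(2016, 5, 10, 9, 30),
    session_type="normal",
)
```

Keeping the sheet this lightweight is deliberate: it must be quick to fill in during a session yet structured enough for a third party to review afterwards.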
This information is accessible to the whole team.
This short, explanatory report helps the whole team understand test coverage and which areas of the application need to be covered by automated tests.
This approach helps QA improve the quality of testing during the test cycle.
I think one of the biggest advantages of SBT is that we learn through Sessions. Since every executed session produces some form of data, we can use this information in the future to improve the quality of the application.
It allows us to test creatively using exploratory techniques while remaining disciplined, relying on the information obtained from sessions to become even more productive and efficient.
For more information you can refer to this document by Jonathan and James Bach, the founders of SBT: original paper (PDF)