Housing Technology: Are you one in ten?
Article originally published in Housing Technology magazine, March 2024 issue. Author: Andrew Lazenby, Director of Consulting Services at Acutest.
Anyone who has worked on a large IT project or software rollout knows the feeling when go-live is imminent: stakeholders are briefed, status reports are showing more green than red, and all the bugs have been found… or have they?
Nine out of ten organisations report finding severe defects that were not identified before they went live. This was the finding of a survey by the IT research and advisory firm IDC/Foundry, commissioned by Acutest in 2023. The responses of 109 organisations, of different sizes and across all industry sectors, showed that undetected bugs resulted in a range of negative outcomes. Most importantly, there was clear evidence that these oversights directly caused system outages and severely degraded performance.
In our upcoming series of articles, we want to encourage organisations to put assurance and testing at the heart of change programmes. Why? Not only does this approach accelerate delivery, it also improves confidence in go-live and meets customers’ expectations. By the end of this series we want you to:
- Learn the survey’s key findings and why they matter for the housing sector.
- Understand what these findings reveal about an organisation’s approach to QA and testing.
- Recognise the common pitfalls in QA and how you can avoid them.
The QA industry has long stressed the importance of testing early, so that defects are resolved before they become more difficult to fix.
In 2007, The Journal of Defence Software Engineering highlighted that just over 60% of defects reported during testing are attributable to issues introduced at requirements specification and solution design. This finding is borne out by Acutest’s own review of its client base, which also revealed that 39% of the defects found in live started as flaws in requirements and design.
Despite years of repeated failures, 40% of respondents to the survey did not agree that testing should start on the first day of a project, which is when the defects that eventually manifest in live would first be detectable.
Improve requirement reviews
Requirements and design reviews need to be effective at finding issues, not just serve as passive sign-off milestones in a project’s lifecycle. Reviews need:
- Clear objectives from the start.
- The right stakeholders to be present and to actively participate.
- Dedicated time set aside for the review.
- Each requirement to be scrutinised.
- Review findings to be documented, tracked and acted on.
All too often, requirements are documented in ways that do not make it easy to find defects: requirements that prescribe how the technology should behave while the business need is missing entirely, or requirements with no success criteria, or with criteria that cannot be measured, leading to applications that do not perform as the business intended.
Embrace Artificial Intelligence
Whatever development methodology is followed to deliver software solutions, requirements need to be captured in a way that makes them easy to understand and review effectively. Clarity comes from writing requirements as user stories and scenarios that focus on what the business wants, why it needs it, who benefits from it, and how we will know when it has been delivered. For example: “As a housing officer, I want to see a tenant’s full repair history on one screen, so that I can resolve queries in a single call.” It is the stakeholder, the “who” identified by each story, that needs to participate in reviews.
Software delivery teams are increasingly looking to generative AI to assist them with capturing requirements. Using AI to collate information from disparate documents, emails, meeting notes and process flows into clear, consistent user stories and scenarios accelerates delivery while reducing the load on already busy teams. Used correctly, AI enables teams to generate an initial draft of user stories far more quickly than traditional methods, leaving more time for rigorous review of those requirements by subject-matter experts (SMEs).
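To make this concrete, here is a minimal sketch of how a delivery team might prompt a generative-AI model to turn scattered notes into draft user stories. The notes, prompt wording and model choice are illustrative assumptions rather than a prescribed Acutest tool, and the output is only a first draft for SMEs to review.

```python
# Illustrative sketch only: turn raw project notes into draft user stories
# using the OpenAI Python client. Assumes OPENAI_API_KEY is set; the notes,
# prompt and model name are hypothetical examples, not recommendations.
from openai import OpenAI

client = OpenAI()

raw_notes = """
Tenants phone the contact centre to report repairs.
Officers re-key the details into the asset management system.
Finance need a weekly report of outstanding repair costs.
"""

prompt = (
    "Rewrite the notes below as user stories in the form "
    "'As a <who>, I want <what>, so that <why>', each with one "
    "measurable acceptance criterion.\n\n" + raw_notes
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)

# The draft goes to subject-matter experts for review, not straight into the backlog.
print(response.choices[0].message.content)
```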
Not just whether it works but how well it works
System downtime and performance issues were highlighted as the most common consequences of defects found in live. For many organisations it is easier to write requirements that describe how the solution should function than to describe the properties that do not affect what the solution does but determine how well it does it, such as its performance, resilience or security. These “non-functional” requirements are often addressed too late, or forgotten altogether, yet they are more likely to determine the long-term value of an IT solution than its functional requirements. One way to address this is to develop a catalogue of non-functional user stories and use it as a checklist for every solution implementation, adapting it for each delivery rather than reinventing the wheel every time.
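As a rough illustration, the sketch below shows one way such a catalogue might be held as structured data and printed as a checklist for each delivery. The story wording and the target figures (response times, recovery windows) are hypothetical placeholders, to be adapted to each organisation’s own service levels.

```python
# Illustrative sketch of a reusable catalogue of non-functional user stories.
# The stories and the target figures are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class NonFunctionalStory:
    area: str      # e.g. performance, resilience, security
    story: str     # the user story itself
    measure: str   # how we will know it has been delivered

CATALOGUE = [
    NonFunctionalStory(
        area="performance",
        story="As a housing officer, I want tenant records to open quickly "
              "so that callers are not kept waiting.",
        measure="95% of record lookups complete within 2 seconds at peak load.",
    ),
    NonFunctionalStory(
        area="resilience",
        story="As a service manager, I want the system to recover from an outage "
              "so that repair reports are not lost.",
        measure="Service restored within 4 hours with no more than 15 minutes of data loss.",
    ),
    NonFunctionalStory(
        area="security",
        story="As a data protection officer, I want access to tenant data restricted by role "
              "so that personal information is only seen by those who need it.",
        measure="Every data access is role-checked and logged; quarterly access reviews pass.",
    ),
]

def print_checklist(catalogue: list[NonFunctionalStory]) -> None:
    """Print the catalogue as a review checklist, to be adapted per delivery."""
    for item in catalogue:
        print(f"[{item.area}] {item.story}")
        print(f"    Measure: {item.measure}")

print_checklist(CATALOGUE)
```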
Start as you mean to go on
In summary, the survey shows that almost all companies releasing software into live operation fall foul of previously undetected severe defects. The majority of defects found, both during testing and in live operation, are caused by flaws in requirements or gaps in non-functional coverage. These issues can be minimised by effective reviews of requirements and design documents at the very start of the project. Writing requirements as user stories aids clarity and conciseness, making reviews easier and more effective. Generative AI can accelerate the definition of user stories and free stakeholders to conduct effective reviews and remove defects at source.