It turns out that following the “golden rules” of testing does not always bring about the desired effects. Yet disregarding the rules can be equally disastrous, as exemplified by companies and institutions that have become household names. One thing is certain: software defects are expensive. Fortunately, there are ways to mitigate the damage.
Imagine that, as a result of a defect in a medical care system, the patients’ medical records, which are the basis for writing prescriptions, are not updated. As a result, physicians prescribe medications unsuitable for the reported ailments, which may threaten some patients’ health or even their lives. This is not the script of a science-fiction film. The situation occurred in 2018 in Europe’s second-largest economy, caused by a bug in a system operated by the National Health Service (NHS) in the United Kingdom. More than 10 thousand people received wrong prescriptions; fortunately, no adverse health consequences were reported. It is difficult to evaluate the losses incurred by this type of defect because, on top of the purely economic cost of rectifying the mistakes, the reputational consequences may be even more damaging (especially for companies outside the public sector). In the United States alone, the damage resulting from releasing poor-quality software to the market was estimated in 2018 at $2.8 billion.
The consequences of software defects
Industry experts are far from dismissing the importance of defects generated in the process of designing software. A survey carried out in 2019 by Micro Focus in the US examined how defects in three areas of software written with Agile/DevOps methodologies (functional, performance-related and security-related) affect specific areas of a company’s operations. Respondents agreed that defects on all three levels most strongly affect a company’s revenues and brand reputation, and that they carry legal consequences.
Allegedly, the situation can be improved by the rapidly growing test automation market, which (surprisingly) is also an area that poses considerable challenges to companies, as indicated in the latest “World Quality Report 2019-2020”:
The most typical challenges indicated in the survey included:
- Lack of appropriate test environment and data.
- Inability to apply test automation at appropriate levels.
- Difficulty in identifying the right areas on which tests should focus.
Interestingly, testing was indicated as the biggest challenge in efforts aimed at preventing new defects in software:
Why is testing perceived as such a difficult task? This may stem either from a lack of knowledge or from an inability to apply it. One solution is to audit the testing activities undertaken in the company; the list below may prove useful.
Poor testing strategies, or how extra costs are generated
1. Underestimated testing scenarios.
Testing scenarios should result from consultations within the testing team, aligned with the previously collected business requirements. At the very early stage of talks with product owners, project managers and developers, detailed product documentation should be compiled. Among other things, it will provide the basis for creating detailed test scenarios (the testers’ skills and experience play an important role, too). Paradoxically, a firm plan implemented from the beginning of development, in line with the shift-left testing idea (i.e. starting tests at the earliest stage of development), facilitates flexible adjustment to the changes typical of the Agile approach. It is easier for testers to modify an existing plan than to work in chaos aggravated by subsequent sprints.

A case in point is a small software company described in the paper “’Good’ Organisational Reasons for ‘Bad’ Software Testing: An Ethnographic Study of Testing in a Small Software Company” by D. Martin, J. Rooksby, M. Rouncefield and I. Sommerville. For the purposes of the study, the UK company is referred to as W1REsys. It creates development environments that allow customers to design mobile applications. W1REsys employs seven people, including four software developers. In the absence of a dedicated team of testers, the requirements for new products or functionalities are derived either from clients’ needs or from ideas for features that potential new clients might find attractive. These ideas come from representatives of the marketing and sales departments, while the researchers studying the company concluded that testing was based on suggestions of what needs to be tested rather than how to do it. As a result, the automated tests created at W1REsys are not part of a structured plan and cover only selected functionalities.
Ideas for enhancing them also originate in the course of the developers’ work, resulting from discussions about the way the system behaves. This approach carries the risk of serious defects reaching production and, as the study’s authors admit, these defects do occur, yet they are kept at a level the company finds acceptable.
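The shift-left idea described above amounts to turning each business requirement into an executable scenario before (or alongside) the code itself. The sketch below illustrates this with the prescription example from the introduction; all names here (PatientRecord, issue_prescription) are hypothetical, invented purely for illustration:

```python
# A minimal sketch of deriving a test scenario from a business requirement,
# in the spirit of shift-left testing. PatientRecord and issue_prescription
# are hypothetical stand-ins, not part of any real system.

from dataclasses import dataclass


@dataclass
class PatientRecord:
    patient_id: str
    allergies: list


def issue_prescription(record: PatientRecord, drug: str) -> str:
    """Refuse to prescribe a drug that conflicts with the patient's record."""
    if drug in record.allergies:
        raise ValueError(f"{drug} conflicts with recorded allergies")
    return f"prescription:{record.patient_id}:{drug}"


# Scenario derived from the business requirement:
# "Prescriptions must respect the patient's current medical record."
def test_prescription_respects_allergies():
    record = PatientRecord(patient_id="p-001", allergies=["penicillin"])
    # Happy path: a safe drug is prescribed.
    assert issue_prescription(record, "ibuprofen") == "prescription:p-001:ibuprofen"
    # Negative path: an allergic drug must be rejected.
    try:
        issue_prescription(record, "penicillin")
    except ValueError:
        pass
    else:
        raise AssertionError("allergic drug must be rejected")


test_prescription_respects_allergies()
```

The point is not the toy logic but the shape: each scenario names a requirement, covers its happy and negative paths, and exists from the first sprint, so later changes mean editing a plan rather than improvising one.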
2. The testing process is not aligned with the way in which an organization operates.
The example of W1REsys shows how important it is to adjust the principles of effective testing to an organisation’s capabilities. In larger companies in charge of complex projects, following good practices is a path to success. In smaller companies, often coping with staff shortages, the rules need to be tailored to the employees’ actual capacity; otherwise, they will work under enormous time pressure and frustration at being unable to accomplish all the goals they have been working towards. As the 2019 Micro Focus research suggests, the defects detected at the last stage of delivering software (production) tend to be the most costly. This finding holds for projects managed both in Waterfall, now a less popular methodology, and in Agile, a methodology enjoying great interest:
According to Eric Elliot, who has written an article on production defects for Medium.com, the cost of remedying a production defect can be 100 times higher than the cost of correcting it at the design stage, and 15 times higher than the cost of repair at the implementation stage.
Smaller companies, which cannot afford extensive testing scenarios, should order their tasks from the most to the least important to the project. W1REsys, the UK software company mentioned above, set itself the goal of testing the functionalities most frequently used by clients. As a result, it can properly manage the defects identified at the production stage. This is a good example of treating testing not as a way of exhaustively covering the entire codebase, which in many cases is simply impossible, but as a way of managing risk, which can be mitigated when specific testing tasks are given priorities.
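One simple way to operationalise this risk-based ordering is to score each test task by how often the feature is used and how severe a failure would be, then work down the list. The sketch below uses invented feature data; the scoring model is an illustrative assumption, not a standard formula:

```python
# A sketch of risk-based test prioritization: score each task by usage
# frequency times failure impact, then run the highest-risk tests first.
# The feature names and numbers below are invented for illustration.

test_tasks = [
    {"name": "checkout",       "usage": 0.9, "failure_impact": 0.9},
    {"name": "search",         "usage": 0.8, "failure_impact": 0.4},
    {"name": "profile_export", "usage": 0.1, "failure_impact": 0.3},
]


def risk_score(task):
    # Simple model: risk = probability of exposure x cost of failure.
    return task["usage"] * task["failure_impact"]


prioritized = sorted(test_tasks, key=risk_score, reverse=True)
print([t["name"] for t in prioritized])
# -> ['checkout', 'search', 'profile_export']
```

A team that only has time for the top of this list still covers what clients actually touch, which is essentially the W1REsys strategy described above.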
3. Ignoring the role of tests in the speed and performance of mobile applications.
By focusing on functional tests (working/not working), some teams ignore the importance of speed and performance tests. A survey conducted by Dimensional Research in 2015 shows the actual costs of this attitude. Users of mobile applications were asked for their opinions on response time. As the diagram below shows, their expectations are very high, with 2 seconds regarded as the optimum response time:
80% of the respondents admitted that when an application causes problems, they give it no more than three chances before abandoning it in favour of competing solutions. Interestingly, when choosing an application, the functionalities promised by the producers are as important as the reviews published by other users:
In 2017, Pinterest discovered the business benefits of rebuilding its application to make it faster and more responsive; after the new version shipped, Pinterest enjoyed higher revenues and traffic:
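A performance expectation like the 2-second response time reported in the survey can be encoded as an automated check rather than left to manual observation. This is a minimal sketch in which handle_request is a hypothetical stand-in for a real endpoint or screen transition:

```python
# A minimal sketch of a response-time check against a 2-second budget.
# handle_request is a placeholder; a real test would exercise an actual
# endpoint or UI action.

import time

RESPONSE_BUDGET_SECONDS = 2.0


def handle_request():
    # Placeholder for real application work.
    time.sleep(0.05)
    return "ok"


def test_response_time_within_budget():
    start = time.perf_counter()
    result = handle_request()
    elapsed = time.perf_counter() - start
    assert result == "ok"
    assert elapsed < RESPONSE_BUDGET_SECONDS, f"too slow: {elapsed:.2f}s"


test_response_time_within_budget()
```

Even a crude check like this turns "the app feels slow" into a failing test that appears long before users start giving the application its three chances.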
4. Giving up on Test Driven Development.
This approach assumes writing automated tests before work on the production code even begins. Naturally, at the beginning all the tests fail; over time, as subsequent lines of code are written, they eventually all pass. As a result, the “trouble-free” section of the code grows; the approach also forces you to think through and plan how the entire application works. Eric Elliot presented the following calculations as evidence of the effectiveness of the method, which daunts many IT managers due to the high costs involved at the initial stage of a project:
In Elliot’s fictitious example, development without TDD was to cost 1,000 hours (excluding the cost of maintaining the system). TDD saved 623 hours, roughly equivalent to a month of work for a team of four.
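The red/green cycle described above can be sketched in a few lines. The function under test, parse_price, is a hypothetical example chosen for brevity, not taken from Elliot’s article:

```python
# A sketch of the TDD cycle. Step 1 ("red"): write the test first; running
# it before parse_price exists would fail with a NameError.

def test_parse_price():
    assert parse_price("$19.99") == 1999   # price in cents
    assert parse_price("$0.50") == 50


# Step 2 ("green"): write just enough code to make the test pass.
def parse_price(text: str) -> int:
    dollars, cents = text.lstrip("$").split(".")
    return int(dollars) * 100 + int(cents)


test_parse_price()  # now passes; the "trouble-free" section of code grows
```

Each new requirement repeats the cycle: a failing test is added first, then the minimal code to satisfy it, which is why the upfront cost is higher but the fully tested portion of the system expands with every iteration.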
5. Under- or overestimating automatic testing.
The success of any testing team rests both on qualified employees and on skilfully selected tools. Test automation significantly speeds up and streamlines work, freeing testers to focus on exploratory testing or on analysing the product against users’ requirements and experience. Teams distrustful of automation tools, or disappointed with older-generation programs, may prefer manual tests. Frequently, this overloads employees with work and frustrates them, because they must handle repetitive, mundane tasks. On the other hand, test automation cannot replace an employee who can devise the right testing scenarios aligned with the business guidelines, or even supplement them with suggestions for new and interesting functionalities. An optimum testing strategy shapes the testers’ responsibilities so that they make the most of their automation tools while taking advantage of individual testers’ competence in manual, exploratory testing.
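The "repetitive, mundane tasks" worth automating are typically tabular regression checks: the same inputs retyped and re-verified after every change. A minimal sketch, where slugify is a hypothetical function under test and the case table stands in for a manual checklist:

```python
# A sketch of automating a repetitive regression checklist so testers can
# spend their time on exploratory work instead. slugify is a hypothetical
# function under test.

def slugify(title: str) -> str:
    return "-".join(title.lower().split())


# The kind of table a manual tester would otherwise re-verify by hand.
REGRESSION_CASES = [
    ("Hello World", "hello-world"),
    ("  Spaces   everywhere ", "spaces-everywhere"),
    ("Already-slugged", "already-slugged"),
]


def run_regression():
    """Return the list of failing cases (given, got, expected); empty means green."""
    failures = []
    for given, expected in REGRESSION_CASES:
        got = slugify(given)
        if got != expected:
            failures.append((given, got, expected))
    return failures


assert run_regression() == []  # every case passes, with no manual retyping
```

The machine reruns the table on every build; the human judgement the article describes (devising scenarios, spotting missing functionalities) stays with the tester.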