How Much Time Does it Take to Test an Application?


How much testing is enough to ensure your application meets your requirements, functions as you need, and is relatively bug-free? What determines an ample amount of testing? These are questions we often receive from clients here at Segue and this article will address both.

Time is an interesting factor to consider when determining how much testing is “enough.” It covers constraints imposed by the development or project schedule, as well as cost in terms of the number of hours spent. Factoring time into the scope of a test effort makes one thing clear: we cannot test every possible situation.

The Scope of Testing

The scope of testing you execute must fit within your project schedule, milestone, and delivery date constraints. Within your allotted test time, scope can vary based on several factors: the size and complexity of the system, how much new system integration work is required, the performance of the application, and the experience of the project team. As any of these factors grows, the amount of test planning and execution increases as well, and you may need more time.

Testing for Defects

If your main goal in software testing is to find every single defect, then you should stop testing only after you have found them all. Unfortunately, time rarely allows this open-ended approach, and the cost of pursuing it may not be worth the diminishing return on investment. You can spend a great deal of time chasing 100% coverage, but if you have no way of knowing whether all the defects have been found or how many remain in the system, you need to abandon the idea of finding every issue and use a different approach. For example, evaluate the likelihood of finding more defects based on the effort you have already invested and the results you have obtained so far. If the likelihood of finding major defects is low, then you are done testing: additional testing would not provide an adequate return on investment in terms of new issues found relative to the effort already expended. In general, however, the more defects you find, the more likely it is that additional defects remain, and investing more time in testing may be worthwhile.
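The stopping rule above can be sketched in a few lines of code. This is a minimal illustration only, with an assumed rolling window and an assumed cutoff of one defect per round; it is not a formula the article prescribes, and real teams would tune both numbers to their own project.

```python
# Hypothetical sketch: stop testing when the recent defect-find rate
# drops below a chosen cutoff, i.e. when the likelihood of finding
# more major defects (return on further testing) appears low.

def should_stop_testing(defects_per_round, window=3, cutoff=1.0):
    """Return True when the average number of defects found over the
    last `window` test rounds falls below `cutoff` defects per round."""
    if len(defects_per_round) < window:
        return False  # not enough history yet to judge the trend
    recent = defects_per_round[-window:]
    return sum(recent) / window < cutoff

# Example: defect finds taper off across successive test rounds.
history = [12, 8, 5, 1, 1, 0]
print(should_stop_testing(history))  # prints True: avg of last 3 rounds is 2/3
```

The design choice here is to look only at the most recent rounds rather than the whole history, since early rounds naturally find many defects and would otherwise mask the diminishing return.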

Because 100% bug-free software is nearly impossible to develop, you need a different standard for determining test completion. Keep in mind the limitations of time with respect to schedule, budget, and adequate test coverage. By testing multiple build releases, deploying production-like test data, and running automated test scripts, you should develop a sense of what has been fully covered. If your ultimate goal in testing is to assess whether the software is ready to be released into production, then you are finished once you have obtained enough results to feel confident that the application is ready for production.
