News about software testing, software testers and certification

2015 Issue #1 | ASTQB Website | Certified Tester Lookup | Request Info | FAQ

How can you reduce late-stage software defects and delays? Learn the answer below in this issue's feature article.

Articles in this issue:

The Ins and Outs of Entrance and Exit Criteria

Keep Your Software Out of the Headlines

Apply for an ASTQB Scholarship

You Picked the Right Career!

Show Off Your Certification

Join the Official ASTQB Facebook Community

Why Should You Be ISTQB Certified?

Solve Problems with IQBBA Business Analyst Certification

News and Offers from ASTQB Accredited Course Providers

A Road Map for Your Career


The Ins and Outs of Entrance and Exit Criteria

By Randall W. Rice, CTAL

Entrance and exit criteria are essential components of an overall project lifecycle, whether it’s a sequential lifecycle (such as the waterfall model) or an iterative/incremental lifecycle, such as an agile approach. However, these criteria often go undefined or unenforced because the delivery schedule takes precedence.

Research indicates that instead of costing time, solid entry and exit criteria actually save time because they act as defect “filters.” By finding and fixing defects to meet an entrance or exit criterion, not only does the cost of late-stage defects drop dramatically, but so do the delays often seen at the end of projects.

In Figure 1, we see on the left how it looks to have only one defect filter at the end of the project. This is also known as the “big bang” approach to software testing. Everything is tested at the end of the project. This can be a no-win situation. If that one filter fails to find enough defects, they go straight to the user or customer and the overall project effort may be a failure. In this case, too much trust is placed on one filter. On the other hand, if the single filter finds a lot of defects, you often have no time to recover by trying to fix them all – or even a portion of them. In this case, the project may fail because it is not suitable to be released. To attempt a release in this situation is highly risky and the odds of success are against you.

Figure 1 – Defect Filters

On the right side of Figure 1, we see a series of filters applied throughout the project. It is important to note that new defects are continually being created during the project. The multiple filters help find defects close to when they are created. This adds great value to a project because the defects can be found sooner and fixed more cheaply than if left undetected until the end of the project.

One way to visualize entry and exit criteria is the workbench model. This model dates back more than 60 years, to when Dr. W. Edwards Deming taught the workbench to Japanese auto manufacturers. It was published in Deming’s book “Out of the Crisis” (1986) and was soon adopted by U.S. automakers seeking to compete with foreign automakers on the basis of cost and quality.

Figure 2 – The Deming Workbench

The workbench model applies very well to software projects, regardless of lifecycle approach. The above diagram comes from Tim R. Norton’s paper, “The Deming Process Workbench: An Application at MCI” (1995), in which the workbench was successfully applied to IT projects.1

Let’s Look At Entrance Criteria First

In their book, “Software Inspection”, Tom Gilb and Dorothy Graham write that entrance criteria should be applied to any work product considered for review. The entrance criteria serve as a way to ensure that the work product (requirement, design, code, etc.) is of sufficient quality to be reviewed without wasting the reviewers’ time.2

If an item has too many defects, the review is not productive because there are so many issues to identify and resolve. In cases where reviews are performed prematurely, it is not uncommon for the review leader to stop the review meeting and reject the item. The author must correct the obvious defects before re-submitting. That is the essence of entry criteria.

The same idea applies to software testing. If testing starts on an item too soon, defect levels run higher. At first, this might seem like a good thing, since the job of testers is to find defects. However, if the defects should have been found earlier in the project, testers are finding them too late. Testers have to report the defects, wait for them to be fixed (which in some cases can be a long wait), then re-test the fixes. This causes project delays and, many times, project failures. In addition, the cost of finding and fixing late-stage defects is 10 to 20 times higher than if the defects had been found and fixed as a result of requirement reviews (Figure 3).3

Figure 3 – The Relative Cost of Fixing a Defect

Some examples of entrance criteria for a software application to be system tested might be:

  • Component testing performed with 100% statement coverage and 100% decision coverage.
  • Integration testing performed on interacting components, to the extent that all pairs of related conditions are tested.
  • Performance testing of each system component must demonstrate that it meets or exceeds performance requirements.
  • Test cases have been defined for each requirement that adequately verify the conditions described in the requirements.
  • Project stakeholders have approved all test objectives.
  • A risk assessment has been performed on the system and the tests have been prioritized based on relative risks.
  • The test manager has approved promotion to the system test environment.

If the application meets these criteria, then it is deployed to the system test environment and system testing can begin. Ideally, the types of defects found will be those that prior levels of testing (such as component and integration testing) would not have been expected to find.

Exit Criteria

While the big question is “when should we stop testing?”, exit criteria should be applied before any work product (requirements, design, code, etc.) is used as a basis for other project tasks. Exit criteria should also be applied to the system or application before it is released to final levels of testing, or before it is released for general use by customers or users. Exit criteria are essentially a quality control check to ensure an item meets a given set of conditions.

From the ISTQB Advanced Test Manager syllabus, the following statement is made regarding exit criteria:

“As part of results reporting and exit criteria evaluation, the Test Manager can measure the degree to which testing is complete. This should include tracing test cases and discovered defects back to the relevant test basis. For example, in risk-based testing, as tests are run and defects found, testers can examine the remaining, residual level of risk. This supports the use of risk-based testing in determining the right moment to release. Test reporting should address risks covered and still open, as well as benefits achieved and not yet achieved.”4

The syllabus also goes on to state that after the project, exit criteria should be evaluated for their effectiveness:

“Finally, during test closure, the Test Manager should evaluate metrics and success criteria which are pertinent to the needs and expectations of the testing stakeholders, including the customers’ and users’ needs and expectations in terms of quality. Only when testing satisfies these needs and expectations can a test team claim to be truly effective.”5

The challenge with exit criteria is that despite all the good intentions, people on projects tend to find ways to justify releasing items before they are ready to be released. The most common driver of this effect is the release schedule.

While the release schedule is an important consideration for releasing software, it should not be the only criterion. Just because testing stops, that doesn’t mean defects will stop being discovered.

Many times, project leaders and stakeholders have the expectation that all problems can be fixed in production. That is not always the case. This is where project failure after launch occurs – and it is costly when it happens. In some cases, the costs to fix and remediate a defect may be low, but on average, the costs may be substantial and in some cases rise to millions of dollars.

Examples are:

  • The NASDAQ performance defect during the Facebook IPO. To date, that incident has cost NASDAQ over 72 million dollars in fines and restitution payments.6 Fixing the defect was easy and probably not very costly. Cleaning up after the defect wasn’t so easy or cheap.
  • “Erroneous trades, which disrupted trading activity on several options exchanges on Tuesday, could cause Goldman Sachs to lose as much as $100 million if they are not cancelled, sources told the Financial Times. The financial institution’s computer system, which normally takes expressions of client interest, had misfired, sending those expressions as actual orders to the exchanges.”7
  • “And just over a year ago, Knight Capital Group lost $461.1 million thanks to defective trading software, an episode that was a factor in the firm’s sale to Getco, a rival trading group. These issues will put the risks of the high-speed systems firms use to trade securities and derivative contracts under regulators’ microscopes.”8

Example exit criteria for a system or application under test could include:

  • 100% of the planned tests performed.
  • No critical defects remain unresolved.
  • All major project risks have been mitigated or have contingencies.
  • Technical support is trained and comfortable with being able to support the application in production.
  • The defect discovery rate is below two defects per test cycle.
  • Regression testing is performed on every test cycle, including the final test cycle.
  • All regression defects that are “Critical” or “Major” have been successfully resolved and re-tested.
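Because each of these criteria is measurable, meeting them can be checked mechanically at the end of a test cycle. A minimal sketch in Python (the criteria names, thresholds, and metric values below are invented for illustration, not taken from any real project):

```python
# Sketch: evaluating measurable exit criteria against current project metrics.
# All names and thresholds here are hypothetical examples.

def evaluate_exit_criteria(metrics, criteria):
    """Return (all_met, failures): check a dict of metric values
    against a dict of {criterion name: predicate} checks."""
    failures = [name for name, check in criteria.items() if not check(metrics)]
    return (len(failures) == 0, failures)

criteria = {
    "all planned tests run":      lambda m: m["tests_run"] >= m["tests_planned"],
    "no open critical defects":   lambda m: m["open_critical"] == 0,
    "discovery rate < 2/cycle":   lambda m: m["defects_last_cycle"] < 2,
}

metrics = {"tests_planned": 400, "tests_run": 400,
           "open_critical": 1, "defects_last_cycle": 3}

ok, failed = evaluate_exit_criteria(metrics, criteria)
print(ok)       # False
print(failed)   # the two criteria not yet met
```

The value of framing criteria this way is that release readiness becomes a yes/no answer with a named list of gaps, rather than a judgment call made under schedule pressure.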

Defining Entrance and Exit Criteria

Each organization will need to define its own sets of entrance and exit criteria. These are based on the level of risk and the maturity of the organization. Project stakeholders such as business management, end-users and so forth may also feel strongly that certain criteria be met.

Once the criteria are established, they can form the basis of criteria for future projects. It’s also common to adjust criteria from project to project. Sometimes it is necessary to strengthen or relax the criteria mid-project; however, doing so may introduce new project risks.

Metrics for Judging the Quality of Exit Criteria

One of the key things about both entrance and exit criteria is that they should be measurable or tangibly defined. There should be no doubt as to whether or not the criteria have been met.

Some metrics that are helpful are:

Defect Detection Percentage (DDP)

This is the number of defects found by your organization divided by the total number of defects found, including those found after release by the users. It is a purely historical metric, but it helps you know how effective your testing is and whether you are improving.

Defect Fixed Percentage (DFP) or Defect Removal Efficiency (DRE)

This is the number of defects found and fixed by your organization divided by the total number of defects found and fixed, including those found after release by the users. Like DDP, this is a hindsight metric, but it helps you know how effective your testing is, how well defects are being fixed, and whether you are improving overall at finding and fixing defects.
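Both DDP and DRE reduce to a simple ratio. A sketch, with invented counts:

```python
def ddp(found_internally, found_after_release):
    """Defect Detection Percentage: share of all known defects
    that were caught before release."""
    total = found_internally + found_after_release
    return 100.0 * found_internally / total

def dre(fixed_internally, fixed_total):
    """Defect Removal Efficiency: share of all found-and-fixed
    defects that were handled before release."""
    return 100.0 * fixed_internally / fixed_total

# Example: 170 defects found in-house, 30 reported by users after release.
print(round(ddp(170, 30), 1))    # 85.0
# Example: 90 of 100 found-and-fixed defects were fixed pre-release.
print(round(dre(90, 100), 1))    # 90.0
```

Note that both metrics can only be computed after users have had time to report field defects, which is why they measure past effectiveness rather than current status.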

Mean Time Between Failure (MTBF)

This metric can be applied at various levels of testing, but the idea is to know how long you test (as a team) before finding a defect. At first, the times are normally short, sometimes within minutes. Then, you measure when the next failure is seen. Once you get to the point where you are testing for days and finding only a few minor defects, your MTBF is probably a day or more and your tests have largely done their job. The defect discovery curve has leveled out. An example is seen in Figure 4.
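One way to track this is to log the cumulative test-clock time at each failure and average the gaps between them. A rough sketch (the failure times are invented):

```python
def mtbf(failure_times_hours):
    """Mean time between failures, given the cumulative test-clock
    times (in hours) at which successive failures were observed."""
    gaps = [b - a for a, b in zip(failure_times_hours, failure_times_hours[1:])]
    return sum(gaps) / len(gaps)

# Early testing: failures arrive minutes apart.
early = [0.1, 0.3, 0.6, 1.0]
# Late testing: a day or more between failures - the curve has leveled out.
late = [80.0, 110.0, 145.0]

print(round(mtbf(early), 2))   # 0.3
print(round(mtbf(late), 1))    # 32.5
```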

Figure 4 – The Plateauing of Discovered Defects

Percentage of defects resolved by severity level

This is a very important metric because it tells how many known critical defects have been resolved and re-tested.

Percentage of defects outstanding by type and by severity level

Likewise, this metric is important because it tells how many known critical defects have not been resolved. Ideally, this number should be zero. This is not to say you have zero defects in the application, but that you have no known critical defects.

Percentage of tests that eventually passed

You want all your tests to pass eventually. However, some tests that reveal minor defects may go unresolved before release. The danger here is “death by a thousand paper cuts” in which too many minor defects can have an overall devastating effect.

Percentage of tests that continue to fail

Once again, you must consider the severity level of the failures. So, it is good to know what percentage of continued failures are due to major defects versus minor ones.
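The severity-based percentages above can be computed directly from a defect log. A sketch, with invented defect records:

```python
from collections import Counter

def resolved_percentage_by_severity(defects):
    """For each severity level, the percentage of defects marked resolved.
    Each defect is a (severity, resolved) pair."""
    totals, resolved = Counter(), Counter()
    for severity, is_resolved in defects:
        totals[severity] += 1
        if is_resolved:
            resolved[severity] += 1
    return {s: 100.0 * resolved[s] / totals[s] for s in totals}

defects = [("critical", True), ("critical", True),
           ("major", True), ("major", False),
           ("minor", False), ("minor", False)]

print(resolved_percentage_by_severity(defects))
# {'critical': 100.0, 'major': 50.0, 'minor': 0.0}
```

Breaking the numbers out by severity keeps a pile of unresolved minor defects from hiding an unresolved critical one, and vice versa.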

Defect backlog

One of the best ways to tell whether a product or system is ready for release is to look at the size of the defect backlog. The defect backlog is the count of defects assigned to development or another area for resolution. In Figure 4, the defect backlog is shown by two dotted lines. The top dotted line shows how an out-of-control defect backlog might appear. In this case, the number of defects being addressed is constantly increasing, which is bad news both for testing and for releasing the application. When the defect backlog is constantly increasing, it means that testers are finding defects faster than they can be resolved. Plus, each resolved defect must be re-tested; in extreme cases, re-testing might mean repeating the entire test. When software is released in the face of a climbing defect backlog, the problems will be experienced by actual users. Although it is risky, some companies have released software in this condition. The result is almost always very costly and sometimes ends with backing out the release.

The lower dotted line is the ideal way a defect backlog should appear. The backlog never reaches an unmanageable level. One consideration to note is that in Figure 4, defect severity is not indicated in any of the defect metrics.
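The backlog curves in Figure 4 can be derived from per-cycle counts of defects found and defects resolved. A sketch, with invented counts:

```python
def backlog_over_time(found_per_cycle, resolved_per_cycle):
    """Running defect backlog: cumulative found minus cumulative resolved."""
    backlog, series = 0, []
    for found, resolved in zip(found_per_cycle, resolved_per_cycle):
        backlog += found - resolved
        series.append(backlog)
    return series

def backlog_is_climbing(series, window=3):
    """True if the backlog has strictly increased over the last few cycles,
    a sign it may be out of control."""
    tail = series[-window:]
    return all(b > a for a, b in zip(tail, tail[1:]))

# A healthy project: discovery tails off and the backlog is worked down.
found    = [20, 15, 10, 5, 2]
resolved = [10, 14, 12, 8, 5]
print(backlog_over_time(found, resolved))   # [10, 11, 9, 6, 3]
```

A steadily shrinking series matches the lower dotted line in Figure 4; a strictly increasing tail matches the upper one. As noted above, this simple count ignores severity, so it is best read alongside the severity metrics.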

Taken together, these metrics tell an important story about your project. A testing dashboard is an ideal way to measure and display this information in one place so everyone can see it.

Industry Averages

First, a disclaimer. Industry averages are very broad and may not apply well to a specific situation. They are interesting to consider as a secondary benchmark, but your best metrics will be based on your own history.

Best Practices for Releasing Software

Some people reject the notion of “best practices” because they say it’s impossible to declare one set of practices superior to another in all cases. I can agree with that, but I also think some practices have proven to work in a wide variety of situations.

That said, here are some things I consider as best practices for releasing software:

  1. Have a defined and orderly release process. This is even more essential when you have many people working in separate areas, all trying to get their work ready for release.
  2. Have well-defined exit criteria that align with the risk of the project.
  3. Perform a risk assessment just before release.
  4. Perform a pre-implementation review/walkthrough (this could be checklist-driven) to make sure everything is in place.
  5. Perform configuration testing in a test environment or environments that closely mirror the production environment.
  6. Deploy first to a smaller, low risk production environment as a pilot or beta project. This reduces the risk of a large-scale failure. If something does go wrong, the damage can be contained.

While I cannot recommend specific metrics because each situation is different, here are a few interesting industry metrics. The metric mentioned below by Capers Jones, Defect Removal Efficiency (DRE), has also been called Defect Detection Percentage (DDP). One point of confusion in the testing literature is whether these metrics measure test effectiveness or test efficiency. For example, a case could be made that while a set of tests might be very efficient in terms of resources and coverage, those tests may be ineffective at finding defects. I consider DRE and DDP to be measures of test effectiveness.

According to Capers Jones, who has studied thousands of projects in many industries,

“Testing has been the primary software defect removal method for more than 50 years. Unfortunately most forms of testing are only about 35% efficient or find only one bug out of three.

Defects in test cases themselves and duplicate test cases lower test defect removal efficiency. About 6% of test cases have bugs in the test cases themselves. In some large companies as many as 20% of regression test libraries are duplicates which add to testing costs but not to testing rigor.

Due to low defect removal efficiency at least eight forms of testing are needed to achieve reasonably efficient defect removal efficiency. Pre-test inspections and static analysis are synergistic with testing and raise testing efficiency.

Tests by certified test personnel using test cases designed with formal mathematical methods have the highest levels of test defect removal efficiency and can top 65%.”9

Another interesting finding by Jones is:

“Testing by itself without any pre-test inspections or static analysis is not sufficient to achieve high quality levels. The poor estimation and measurement practices of the software industry have long slowed progress on achieving high quality in a cost-effective fashion.

However modern risk-based testing by certified test personnel with automated test tools who also use mathematically-derived test case designs and also tools for measuring test coverage and cyclomatic complexity can do a very good job and top 65% in defect removal efficiency for the test stages of new function test, component test, and system test.”10

As far as United States metrics are concerned, Jones writes of the companies that release high-quality software:

“Successful quality control stems from a synergistic combination of defect prevention, pre-test defect removal, and test stages. The best projects in the industry circa 2012 combined defect potentials in the range of 2.0 defects per function point with cumulative defect removal efficiency levels that top 99%.

The U.S. average circa 2012 is about 5.0 bugs per function point and only about 85% defect removal efficiency. The major forms of overall quality control include:

  1. Formal software quality assurance (SQA) teams for critical projects
  2. Measuring defect detection efficiency (DDE)
  3. Measuring defect removal efficiency (DRE)
  4. Targets for topping 97% in DRE for all projects
  5. Targets for topping 99% in DRE for critical projects
  6. Inclusion of DRE criteria in all outsource contracts ( > 97% is suggested)
  7. Formal measurement of cost of quality (COQ)
  8. Measures of “technical debt” but augmented to fill major gaps
  9. Measures of total cost of ownership (TCO) for critical projects
  10. Monthly quality reports to executives for on-going and released software
  11. Production of an annual corporate software status and quality report
  12. Achieving > CMMI level 3”11

As for the big picture, Jones concludes:

“The software industry spends more money on finding and fixing bugs than for any other known cost driver. This should not be the case. A synergistic combination of defect prevention, pre-test defect removal, and formal testing can lower software defect removal costs by more than 50% compared to 2012 averages. These same synergistic combinations can raise defect removal efficiency (DRE) from the current average of about 85% to more than 99%.

Any company or government group that averages below 95% in cumulative defect removal efficiency (DRE) is not adequate in software quality methods and needs immediate improvements.

Any company or government group that does not measure DRE and does not know how efficient they are in finding software bugs prior to release is in urgent need of remedial quality improvements.

When companies that do not measure DRE are studied by the author during on-site benchmarks, they are almost always below 85% in DRE and usually lack adequate software quality methodologies. (Emphasis mine) Inadequate defect prevention and inadequate pre-test defect removal are strongly correlated with failure to measure defect removal efficiency.”12


Many people see the job of testers as that of finding defects. Actually, the job of testers is to find evidence of possible defects by causing the software to fail. The cause of the failure may or may not be a defect.

It is only when testers get involved in reviews and inspections that they have the opportunity to find defects because they can see the defects in the requirements, code, design, test cases, and other work products.

Having a series of defect filters is the most effective and least expensive way to ensure product and project quality. Relying on end-game testing is very risky, inefficient and ineffective.

Another major purpose of testing is measurement. Entry and exit criteria are two key ways to measure the quality of software at every stage of a project, such as applied in the Deming Workbench. Each organization must establish and evaluate its own criteria. Then, the project team must be accountable for how well it enforces those criteria. Otherwise, even though the criteria are defined, they have little overall effect in assuring quality releases.

About the Author

Randy Rice is a leading author, speaker, trainer and consultant in the field of software testing and software quality. He has over 30 years’ experience building and testing mission-critical projects in a variety of environments and has authored over 60 training courses in software testing and software engineering. Randy serves on the board of directors of the American Software Testing Qualifications Board (ASTQB). He is co-author, with William E. Perry, of the books Surviving the Top Ten Challenges of Software Testing and Testing Dirty Systems, published by Dorset House Publishing Co. Randy can be reached at

  2. Software Inspection, Gilb and Graham, 1994, Addison-Wesley Publishing
  3. Adapted from Barry Boehm, EQUITY Keynote Address, March 19th, 2007
  4. ISTQB Advanced Test Manager Syllabus, 2012 release, section 2.3
  5. ibid
  8. ibid
  9. Capers Jones, Software Defect Origins And Removal Methods,
  10. ibid
  11. ibid
  12. ibid


Keep Your Software Out of the Headlines

Our upcoming ASTQB Conference in Washington, D.C. is designed to help you stay out of trouble while driving down costs. Everyone involved in software testing, QA, security and application development, from tester to manager to CIO, should be at this conference September 14-16. This is your chance to develop your software QA and security expertise so you can keep your company - and yourself - out of the headlines.

The conference will include full day tutorials, keynotes, and breakout sessions. Every element of the event is designed to improve quality and security, while reducing the cost associated with expensive - and embarrassing - software defects.

The ASTQB Conference is unique. Deliberately intimate, our informal setting will allow our expert speakers to share their insights and connect with you to build a supportive software quality community.

Our line-up of speakers will soon be announced. Lock in these dates now: September 14-16 in Washington, D.C. Learn more about the ASTQB Conference.


Apply for an ASTQB Scholarship

ASTQB seeks to support the teaching and study of software testing within U.S. colleges and universities, so we are offering scholarships of $2500 to qualified undergraduates for the 2015-2016 academic year. Applicants must be students currently enrolled and majoring in Computer Science, Information Systems, or an equivalent subject at an accredited U.S. institution of higher education. Their academic program must include at least one course devoted to software testing.

To apply, submit the following application materials to no later than April 1, 2015:

  • Official transcript of undergraduate studies to date
  • List of departmental course offerings, including a description of the course(s) devoted to testing
  • Indication that a testing course will be taken during 2015-2016 (if not already completed)
  • At least one recommendation letter from a current faculty member
  • Brief (~500 word) essay on contribution of testing to software development
Learn more and apply today.


You Picked the Right Career!

Need a reason to smile today? We've seen three articles recently that prove you're brilliant for choosing software testing as your career!


Show Off Your Certification

Do your co-workers and boss know you are an ISTQB Certified Software Tester?

Do your career a favor, and give them a gentle reminder with your own ASTQB coffee mug from the ASTQB Store. The ASTQB Store also offers planners, mousepads, and cards.

It's always the perfect time to remind everyone of your ISTQB certification, so visit the store today.


Join the Official ASTQB Facebook Community

Join the wonderful community of ISTQB Certified Testers in the U.S. on the ASTQB Facebook page.

The ASTQB Facebook page offers the latest ISTQB news, special discounts for the ASTQB Store, and the chance to help your fellow software testers. "Like" us right now.


Why Should You Be ISTQB Certified?

It's easy to understand why ISTQB Certification is the most popular software tester certification in the world. For software testers, ISTQB Certification demonstrates your knowledge and provides a professional pathway. For companies and test managers, it can help to reduce costs and speed delivery.

Did you know that if you obtain your ISTQB Certification through ASTQB, there are even more benefits? Here are some examples:

Congratulations on making the brilliant choice to become ISTQB Certified through ASTQB! It keeps getting better every day.


Solve Problems with IQBBA Business Analyst Certification

What is at the root of most failed software projects? The development? The testing? No. Ask around and you will find that the most common answer is "the requirements". Does this mean that business analysts (BAs) are incapable of writing good requirements? If it were that simple, we could just do some quick requirement writing training and be done with it. In reality, problems with requirements are myriad. The problems generally start with a customer who doesn't really know what they want (but they know what they don't want when they see it!). From there the problems just cascade - schedules are too tight, priorities are shifting, changes are made without the BA being consulted, the requirements are not clear enough for implementation....

The IQBBA business analyst certification is designed to help address the problems with requirements gathering and implementation. It takes a full lifecycle approach and explains the BA's involvement throughout the project initiation, development, testing and deployment. Even if you're not a BA, you will find this information useful because everyone is affected by the requirements and the only way to have a good project is to have a good understanding of the requirements.

If you're Agile, you still need to document requirements in stories, develop them through the iterations, and track them. The tasks are the same, just with different tools and different levels of formality.

Take a look at IQBBA business analyst certification. Learn more about good requirements practices. While the requirements world will not transform magically overnight, implementing good practices can start now and those practices can evolve into a more complete requirements process. After all, we all want good requirements don't we?

Learn more about IQBBA business analyst certification offered in the U.S. by ASTQB.


News and Offers from ASTQB Accredited Course Providers

SQE Training: Special Offer - Get the skills you need to build better software with ISTQB® Foundation Level Certification, Agile Extension, and Advanced Level Certification courses through SQE Training and save $300. Through March 31, register for any public certification class here with promo code 300QB and save $300. Restrictions apply, only valid on new public registrations.

Rice Consulting Services: Rice Consulting Services is offering discounts on individual and team e-learning courses for ISTQB Foundation Level and Advanced Level training through March 20, 2015. To see the discounted pricing, visit

RBCS: In celebration of RBCS providing softcopy, downloadable, complete course note sets with ISTQB Certified Tester e-learning and live courses, take 15% off of any live or e-learning ISTQB Certified Tester course! Visit the RBCS Store today and download your materials to your PC, tablet or phone within 24-48 hours after purchase. Enter the code GREEN15 into the promo code field in your cart. Your discount will be reflected on your final receipt. Expires April 15, 2015. Cannot be combined with any other discount.

ALP International (ALPI): Let ALPI train & certify your test team in 2015 and SAVE with multi-person discounts at any of our 3 locations: Bethesda, MD, Denver, CO, or Virtual Live. Choose ISTQB Certification Training: Foundation Level, Agile Tester, Advanced Test Analyst, Advanced Technical Test Analyst, Advanced Test Manager. Choose Test Tools Training: HP (QTP, UFT, LR, QC) and Microsoft (MTM, CodedUI, LoadTest). Contact our Training & Education team at or by calling (301) 654-9200 ext. 404.


A Road Map for Your Career

Are you ready to take your software testing career to the next level with Foundation, Agile, Advanced and Expert certification? ISTQB Certification has a professional pathway for every tester at every level and stage of their career. See these helpful resources to map out your software testing career.


Stop by the ASTQB Booth and Say Hi

We would love to meet you in person! If you are attending any of these shows, stop by the ASTQB booth, say hello and learn about the latest ISTQB Certifications and benefits. Also keep in mind that many of these shows offer public ISTQB Certification exams, so be sure to check with the organizers for the date/time of the exam you wish to take.

  • STPCon Spring, March 30 - April 2
  • Mobile Test + Dev, April 15-16 (use code M15VE when registering to save up to $200)
  • STAREAST, May 5-7 (use code S15VE when registering to save up to $200)
  • Agile/Better Software West, June 9-11
  • ASTQB Conference - Washington, D.C., September 14-16
  • StarWest, September 29 - October 1
  • Agile/Better Software West, November 10-12
  • STPCon Fall, TBD


What Would You Like to Learn About?
As always, we welcome your feedback and criticism. Let us know what we can do to help make you and your company better at software testing at

About ISTQB Certification News
ISTQB Certification News is a free software testing newsletter from ASTQB providing news, analysis, and interviews for the software tester community. Feel free to forward to colleagues or ask them to subscribe at:

Non-profit, non-commercial publications and Web sites may reprint or link to articles if full credit is given. Publication, product, and company names may be registered trademarks of their companies.

Copyright 2015
American Software Testing Qualifications Board, Inc. (ASTQB)
15619 Premiere Drive, Suite 101
Tampa, FL 33624 USA
Phone 813.319.0890
Fax 813.968.3597

If you want to change your address, use this link: