Edible versions – Tips for implementation

So you read my Edible versions post and want to get the good stuff on how to make it happen in your organization. Well, to be honest, it's not that difficult once all the parties sit together, talk about their expectations and design the protocols between the groups. See my earlier post for some general pointers.

Having said that – maybe I CAN provide some tips that I've seen work in the past:

  • Ensure all content in a delivery is tracked as a change request (bug/feature/other) in an issue tracker.
  • Provide an “Impact Level” for each change, so QA can easily focus on the high impact changes first.
  • For complex changes or large builds, get used to holding delivery meetings where DEV and QA discuss the changes and exchange ideas on how to proceed with covering the build. Be effective – know when the changes are small and the process can be lighter.
  • Try to establish an environment that automatically generates release notes for a version. At a minimum, generate them as a report based on whatever the issue tracker says; if possible, base them on actual deliveries to the SCM system. Use something like the integration between JIRA and Quickbuild/LuntBuild – a rough sketch of the tracker-driven variant follows below.
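
To make the last bullet concrete, here is a minimal sketch of the tracker-driven variant, pulling the issues delivered in a version out of JIRA's REST search API and printing them as bare-bones release notes. The server URL, credentials, project key and fixVersion are placeholders, and the exact fields available depend on how your tracker is configured:

```python
# Sketch: generate bare-bones release notes from the issues delivered in a version.
# Assumes JIRA's REST search endpoint; URL, credentials and JQL values are placeholders.
import requests

JIRA_URL = "https://jira.example.com"        # hypothetical server
JQL = 'project = PROD AND fixVersion = "1.2.0" ORDER BY issuetype, priority DESC'

resp = requests.get(
    f"{JIRA_URL}/rest/api/2/search",
    params={"jql": JQL, "fields": "summary,issuetype", "maxResults": 500},
    auth=("reporter", "api-token"),          # replace with real credentials
    timeout=30,
)
resp.raise_for_status()

print("Release notes – version 1.2.0")
for issue in resp.json()["issues"]:
    fields = issue["fields"]
    print(f'- [{fields["issuetype"]["name"]}] {issue["key"]}: {fields["summary"]}')
```

If your build server (the Quickbuild/LuntBuild integration mentioned above) tags each delivery with the issues it contains, the same kind of report can be driven off actual deliveries to the SCM instead of off whatever the tracker claims.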

QA Effort Effectiveness

How do you know your QA effort is effective?

Based on the different stakeholders that require input from QA, a typical answer might be that the product quality is high when it is released to customers.

Assuming that is indeed more or less what someone expects (I'd say effective QA needs to meet some other requirements as well), how does one go about checking whether the product quality is indeed high?

Those who have reached a fairly intermediate level of QA understanding would easily point out that the percentage of "QA misses" (namely, the share of issues missed in QA and detected in the field) should be below a certain threshold. A high number here simply means that too many issues/bugs go undetected during the entire QA coverage, only to be embarrassingly discovered by a customer.
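
For concreteness, here is a minimal sketch of one common way to compute that ratio – field-found defects as a share of all defects found for the release. The function name and the exact denominator are my assumptions; your tracker and definitions may differ:

```python
def qa_miss_rate(found_in_field: int, found_in_qa: int) -> float:
    """Share of all known defects for a release that escaped QA and were found in the field."""
    total = found_in_field + found_in_qa
    return found_in_field / total if total else 0.0

# e.g. 12 field-reported defects vs. 300 found during QA -> roughly a 3.8% miss rate
print(f"{qa_miss_rate(12, 300):.1%}")
```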

If one naively optimizes just for this variable, the obvious result is a prolonged QA effort, aiming to cover everything and minimize the risk. If no reasonable threshold is set, there is a danger of procrastinating and avoiding the release.
See The Mismeasure of Man for a cool article on abusing measurements in the software world…

Of course, a slightly more "advanced" optimization is to open many, many bugs/issues so the miss ratio becomes smaller due to the larger number of bugs found in QA, not due to missing fewer bugs. This can result in a lot of overhead for the QA/PM/DEV departments as they work on analyzing, prioritizing and processing all those bugs.
Did I forget to factor in the work to “resolve/close” those issues? NO! Several of those issues might indeed be resolved and verified/closed, but those are probably issues that were not part of the optimization but part of a good QA process (assuming your PM process manages the product contents effectively and knows how to enforce a code-freeze…).

My point is that there are a lot of issues that are simply left there to rot as open issues, as their business priority is not high enough to warrant the time to fix them or the risk of introducing a fix into the version.

A good friend pointed this phenomenon out to me a couple of years ago, naming it "The Defect Junk Factory" (translated from Hebrew). He meant that bugs which are not fixed for the version they were opened on indicate that the QA effort was not focusing on the business priorities. The dangers of this factory are the time wasted processing these bugs, and the direct implication that either the QA effort took longer because it spent time on them, or that it missed higher-business-priority bugs while focusing on these easy ones.
It's a bit like the argument about speed cameras placed "under the streetlight" to easily catch speeding offenders (with doubtful effect on overall safety) while missing the really dangerous ones.

So what can be done? My friend suggested measuring the rate of defects that are NOT fixed for that version. The higher this number, the more your QA effort is focusing on the wrong issues.
Just remember that this is a statistical measure. Examining a specific defect might show that it was a good idea for QA to focus there, and the fix was avoided for other reasons. But when looking across a wide sample, it's unreasonable that a high number of defects are simply not relevant. If it's not a QA focus issue, something else is stinking, and it's worth looking at in any case.
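
A minimal sketch of that measure, assuming each defect record carries the version it was opened on and the version (if any) it was fixed for – the field names here are hypothetical:

```python
def junk_factory_rate(defects, version):
    """Share of defects opened against a version that were NOT fixed for that version.

    Each defect is a dict with hypothetical 'opened_on' and 'fixed_for' keys.
    """
    opened = [d for d in defects if d["opened_on"] == version]
    if not opened:
        return 0.0
    not_fixed = [d for d in opened if d.get("fixed_for") != version]
    return len(not_fixed) / len(opened)

defects = [
    {"id": 101, "opened_on": "V1", "fixed_for": "V1"},
    {"id": 102, "opened_on": "V1", "fixed_for": None},   # left to rot
    {"id": 103, "opened_on": "V1", "fixed_for": "V2"},   # deferred to the next version
]
print(f"{junk_factory_rate(defects, 'V1'):.0%}")          # prints 67%
```

The higher this number trends across releases, the stronger the hint that coverage is drifting toward the streetlight.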

Another factor of an effective QA effort is fast coverage. What is fast? I don't have a ratio of QA time to development time. It's probably a factor of the type of changes (infrastructure, new features, integration work) done in the new version, as each type has a different ratio of QA to DEV effort (e.g. a kernel upgrade usually requires much more QA effort compared to DEV effort).
Maybe one of the readers has a number he's comfortable with – I'd love to hear it.
What I do know is that version-to-version the coverage time should become shorter, and that the QA group should always aim to shorten this time further without significantly sacrificing overall quality. I expect QA groups to do risk-based coverage, automation for regression testing, and whatever other measures assist them in reducing the repeatable cost of QA coverage at the end of each version. The price/performance return on reducing the QA cycle is usually worth it, up to a point.

To sum up, a good QA effort should:

  • Minimize QA misses
  • Minimize the defect junk factory
  • Minimize QA cycle time without compromising quality

What do you think is a good QA effort? How are you measuring it?

Severity and Priority – The Debate

There are a couple of alternatives for managing severity and priority in the Issue Tracker.
Although there are many resources out there on this subject (see http://del.icio.us/yyeret/priority_severity), I'll try to consolidate them and provide my 2c on the matter, as I think it's an important subject.

Single-field Priority
The first, seemingly simpler, alternative is a single priority field representing customer impact.
The idea here is to have only a single priority/severity field. The reporter assigns it according to his understanding of the customer impact (severity, likelihood, scenario relevance, etc.). Product Management or any other business stakeholder can then shift the priority according to the current release state and their own understanding of the customer impact from the described scenario. The developers prioritize work accordingly.
The strength of this approach is in its simplicity, and the fact that several issue trackers adopt this methodology and therefore support it better “out of the box”.
The weakness is that once the issue is in the workflow, the original reasoning for the priority can get lost, and there is no way to discern between the customer impact and other considerations such as version stability, R&D preferences, etc.

An example of why this is bad? Let's say Keith opened bug #1031 with a Major priority. Julie the PM later decided that since there is some workaround and we are talking just about specific uses of a rarely used feature, the business priority is only Normal or Minor. Version1 is released with this bug unresolved. When planning Version2, Julie misses this bug since its priority is lower – even if the feature it's related to is now the main focus of the version. And even if it isn't missed, finding this bug and understanding its roots and history is very hard, especially considering the database structure of issue trackers. History is available, but it's not as easy to query as fields on the main issues table…

Brian Beaver provides a clear description of this approach at Categorizing Defects by Eliminating “Severity” and “Priority”:
I recommend eliminating the Severity and Priority fields and replacing them with a single field that can encapsulate both types of information: call it the Customer Impact field. Every piece of software developed for sale by any company will have some sort of customer. Issues found when testing the software should be categorized based on the impact to the customer or the customer’s view of the producer of the software. In fact, the testing team is a customer of the software as well. Having a Customer Impact field allows the testing team to combine documentation of outside-customer impact and testing-team impact. There would no longer be the need for Severity and Priority fields at all. The perceived impact and urgency given by both of those fields would be encapsulated in the Customer Impact field.

Johanna Rothman in Clarify Your Ranking for System Problem Reports talks about single-field risk/priority:

Instead of priority and severity, I use risk as a way to deal with problem reports, and how to know how to fix them. Here are the levels I choose:
o Critical: We have to fix this before we release. We will lose substantial customers or money if we don’t.
o Important: We’d like to fix this before we release. It might perturb some customers, but we don’t think they’ll throw out the product or move to our competitors. If we don’t fix it before we release, we either have to do something to that module or fix it in the next release.
o Minor: We’d like to fix this before we throw out this product.

Bugzilla-style Severity+Priority

Here, the idea is to use a severity field for the technical risk of the issue, and a priority field for the business impact. The Reporter assigns severity according to the technical description of the issue, and also provides all other relevant information – frequency, reproducibility, likelihood, and whether it's an important use-case/test-case or not. Optionally, the Reporter can suggest a priority based on the business impact of the issue on testing progress, e.g. a high priority if it blocks significant coverage, or a lower one if it looks like an isolated use case. A business stakeholder, be it PM, R&D Management, etc., assigns priority based on all technical and business factors, including the version/release plan. Developers work by priority. Severity can be used as a secondary index/sort only.
Developers/testers/everyone working on issues should avoid working on high-severity issues with an unset or low priority. This is core to the effectiveness of the Triage mechanism and the Issue LifeCycle Process (a small sketch of this check follows below).
Customers see descriptions in release notes, without priority or severity. The roadmap communicated to customers reflects the priority, though not in so many words.
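
As an illustration of that triage rule, here is a minimal sketch that flags issues nobody should be picking up yet – technically severe, but with no real business priority assigned. The field names and level values are hypothetical; adjust them to your tracker's scheme:

```python
HIGH_SEVERITIES = {"Blocker", "Critical", "Major"}       # hypothetical severity scheme
ACTIONABLE_PRIORITIES = {"Now", "P1", "P2"}              # hypothetical priority scheme

def should_wait_for_triage(issue: dict) -> bool:
    """True for issues that look technically severe but have no real business priority yet."""
    return (issue.get("severity") in HIGH_SEVERITIES
            and issue.get("priority") not in ACTIONABLE_PRIORITIES)

issues = [
    {"key": "PROD-1031", "severity": "Major", "priority": None},   # severe, not yet prioritized
    {"key": "PROD-1044", "severity": "Minor", "priority": "P1"},   # prioritized, go ahead
]
for issue in filter(should_wait_for_triage, issues):
    print(f'{issue["key"]}: wait for a business priority before anyone picks this up')
```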

Strengths of this approach are:
* Clear documentation of the business and technical risks, especially in the face of changing priorities.
* Better reporting on product health when the technical risk is visible and not hidden behind business-impact glasses.
* Less drive for reporters to push for high priority to signify they found a critical issue. It’s legitimate to find a critical issue and still understand that due to business reasons it won’t be high priority.
* Better accommodation of issues that transcend releases – where the priority might change significantly once in a new release.

The weaknesses are that it’s a bit more complex, especially for newbies, and might require some customization of your issue tracker, although if your tool cannot do this quite easily, maybe you have the wrong tool…
In addition, customers have trouble understanding the difference between their priority for the issue and the priority assigned within the product organization. The root cause here is probably the lack of transparency regarding the reasoning behind the business priority. I'd guess that if a significant part of the picture were shared, most customers would understand (even if not agree with) the priorities assigned to their issues. It's up to each organization to decide where it stands on the transparency issue. (See Tenets of Transparency for a very interesting discussion on the matter in the wonderful weblog of Eric Sink.)

To see how our example works here – Keith opens the bug, assigns a Major severity, and a low priority since the bug blocks just one low-priority test case. Julie the PM sees the bug and decides to keep the low priority, so the bug is left for future versions for all practical purposes. When planning V2, Julie goes over the high-severity issues related to the features under focus for the version, and of course finds this issue, as it carries a Major severity. The sketch below shows what that planning query might look like.
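
That planning step maps naturally onto a saved tracker query. A hypothetical sketch in JQL-like syntax (wrapped in Python only to keep it copy-pastable); the project key, component name and the existence of a severity field are assumptions, not a given in every tracker:

```python
# Hypothetical version-planning query: revisit every unresolved high-severity issue
# touching the features under focus for V2, regardless of the low priority it carried in V1.
PLANNING_JQL = (
    'project = PROD AND severity in (Critical, Major) '
    'AND component = "Feature-X" AND resolution = Unresolved '
    'ORDER BY severity DESC, created ASC'
)
print(PLANNING_JQL)
```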

See the following resources for this approach:
* http://c2.com/cgi/wiki?DifferentiatePriorityAndSeverity
* Priority is Business; Severity is Technical:

business priority: "How important is it to the business that we fix the bug?" technical severity: "How nasty is the bug from a technical perspective?" These two questions sometimes arrive at the same answer: a high severity bug is often also high priority, but not always. Allow me to suggest some definitions.

Severity levels:
o Critical: the software will not run
o High: unexpected fatal errors (includes crashes and data corruption)
o Medium: a feature is malfunctioning
o Low: a cosmetic issue

Priority levels:
o Now: drop everything and take care of it as soon as you see this (usually for blocking bugs)
o P1: fix before next build to test
o P2: fix before final release
o P3: we probably won’t get to these, but we want to track them anyway

* Corey Snow commented on Clarify Your Ranking for System Problem Reports:
Comment: Great subject. This is a perennial topic of debate in the profession. The question at hand is: Can a defect attribute that is ultimately irrelevant still serve an important function? Having implemented and/or managed perhaps a dozen different defect tracking systems over the years, I actually prefer having both Priority and Severity fields available for some (perhaps) unexpected reasons.
Priority should be used as the ‘risk scale’ that the author describes. 3 levels, 5 levels, or whatever. Priority is used as a measure of risk. How important is it to fix this problem? Label the field ‘Risk’ if that makes it more clear. Not so complicated, right?
So what good is Severity? Psychology! Its very existence makes the submitter pause to consider and differentiate between the Priority and Severity of the defect. In other words, without Severity, the submitter might be inclined to allow Severity attributes to influence the relative Priority value.
Example 1: Defect causes total system meltdown. Only users in Time Zone GMT +5.45 (Kathmandu) are affected on leap years. There is one user in that time zone, but there is a manual workaround, and a year to fix it besides. Priority=Super Low, Severity=Ultra High. Severity gives a place for the tester to ‘vent’ about their spectacular meltdown, without influencing the relative Priority rating.
Example 2: Defect is a minor typo. The typo is on the ‘Welcome to Our Product’ screen, which is the first thing every user will see. Priority=Ultra High, Severity=Super Low. Again, Severity gives a place for the tester to express how unimportant the defect is from a functional perspective, without clouding their Priority assessment.
I once managed a defect tracking system with only a Priority field. This frequently led to a great deal of wasted time in defect discussion meetings, as one side would argue about Severity attributes while another would argue about Priority attributes, but the parties were not even aware of the distinction that was actually dividing them. Having both fields serves to head off this communication problem, even if Severity is completely irrelevant when fix/no fix decisions are actually made. ~ Corey Snow (03/11/03)

Author’s Response: Corey, Great counterpoint to my argument. ~ Johanna Rothman (03/12/03)

Personal Favorite
As can probably be understood by now, my personal favorite is the Severity+Priority approach. I confess I don't have much experience with the single-priority approach, but I really feel the Severity+Priority way is very effective, without significant costs once every stakeholder understands it.

What is your favorite here?