Category Archives: Agile Testing

Collaborating with specialized roles using kanban classes of service

Estimated Reading Time: 2 minutes

 

I want to share a solution I came up with together with a performance / non-functional testing team working in a product group in a large enterprise. The solution deals with the challenge of bridging the principles of "Those who build the system test it" and "Non-functional testing is a collaboration role" with the fact that specialized roles such as performance testers are usually stretched to cover the demand for their services. 

This group wanted performance testing for the product, of course. What usually happened, though, was that the performance team only got usable features towards the end of the release (think waterfall-like behaviour). This usually meant serious waivers and compromises around performance testing. 

The first step the product group took was to work on breaking features more effectively into smaller viable features and stories. Once this was done, it was easier for the performance testing team to get involved throughout the version and achieve a more reasonable flow. 

Things should have been great by now. 

BUT then another problem surfaced, even while we were discussing how this would work. 

It was clear that the capacity of the performance testing team wasn't sufficient to really address everything. 

The naive policy meant that when a feature arrived at performance testing, they would decide whether they had time to cover it, do some risk management, and either test it or skip it. 

The downside of this is that it's a black/white decision. It misses the opportunity for the agile delivery teams to do at least SOME performance testing, even if they don't have the capabilities, tools, or expertise of the dedicated performance testing team. 

Our suggested solution was to use the concept of kanban Classes of Service to improve on this naive policy. 

Since we already know not every feature requires the same performance test attention, let's not wait until it arrives at the performance team to make this classification. Let's do it earlier, before we go into development/testing. 

With this classification, policies can be set up that involve both the performance testing team and the agile delivery teams in the work of performance / non-functional testing. 

We came up with a flag system:

  • Red – the performance team must be involved hands-on, probably by joining the Feature team working on this feature for the duration of the effort.

  • Yellow – the performance team advises/consults, but most of the work is done by the teams. A representative of the performance team will visit the Feature team quite often while the Feature is being worked on.

  • Green – no involvement from the performance team is needed.
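To make the policy explicit, here is a minimal sketch of how such flags could be captured as a class of service in code. This is not the group's actual tooling; the feature names, policy wording, and helper function are invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum


class PerfFlag(Enum):
    RED = "red"        # performance team joins the Feature team hands-on
    YELLOW = "yellow"  # performance team advises/consults; the Feature team does most of the work
    GREEN = "green"    # no dedicated performance team involvement needed


# Agreed policy per class of service (wording invented for the example)
POLICY = {
    PerfFlag.RED: "A performance engineer joins the Feature team for the duration of the feature.",
    PerfFlag.YELLOW: "A performance engineer visits the Feature team regularly and reviews its test plan.",
    PerfFlag.GREEN: "The Feature team runs its own basic performance checks.",
}


@dataclass
class Feature:
    name: str
    perf_flag: PerfFlag  # classified up front, before development/testing starts


def involvement(feature: Feature) -> str:
    """Return the agreed performance-testing policy for a feature."""
    return POLICY[feature.perf_flag]


if __name__ == "__main__":
    backlog = [
        Feature("New pricing engine", PerfFlag.RED),
        Feature("Report export tweak", PerfFlag.GREEN),
    ]
    for f in backlog:
        print(f"{f.name}: {f.perf_flag.value} -> {involvement(f)}")
```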

This system helps drive collective ownership of non-functional testing. One of the first things that happened was that the Feature teams told the performance testers there were some kinds of tests they could run on their own, even though they don't have the highly specialized performance tools.

We are also experimenting with systems like this for involving architecture teams, and it applies to most kinds of super-specializations that are not easily integrated into Feature teams. 

Managing the overall flow using kanban will also help see whether a bottleneck develops, and what can be done further to overcome it. 

How to use kanban and lean concepts to effectively manage agile testing end to end

Estimated Reading Time: 1 minute

 

I have recently been working on an agile testing training. Why is it relevant here? Because one of the modules I'm most fond of (not surprisingly) introduces kanban and CFDs (cumulative flow diagrams) as a way to deal with some common problems that occur around testing once you start to work feature-driven (agile). 
 

Any thoughts appreciated!
 
We are currently running this training course both publicly and internally. If you are interested, contact AgileSparks at [email protected] 

Lean/Agile Testing

Estimated Reading Time: 1 minute

I've been a bit quiet lately on the blog front (as well as on Twitter, for those who are following).

Mainly I've been busy preparing an Agilesparks Agile Testing training with Ronen Bar Nahor. While it has been a lot of work, it has also been a great experience. We tried to take some of the Lean/Kanban work we've been focused on lately and apply it to the testing domain. Those following my blog can see some of that thinking already. 

Applying concepts like limited WIP to defects and hardening, striving for DONE DONE, and instilling collective test ownership all make a lot of sense for Agile Testing in our view. 
It also helps that one of the teams that has progressed the most towards an Agile Testing approach is a Scrumban team…

The first round runs next week and is already sold out, and we are scheduled to run it both publicly and internally several times in the upcoming months, all in Israel (for the moment…).

If you are interested in hearing more about this, feel free to contact me or Agilesparks.

I hope in the upcoming days I'll return to blogging more regularly. My next area of focus is the synergy between Agile and the Theory of Constraints. Stay tuned…

Finding the right Dev to Test Ratio when working in Kanban

Estimated Reading Time: 4 minutes

In a previous post I started talking about the ratio between Dev and Test, and promised to revisit how it looks in an Agile/Kanban environment. Whenever I talk to teams/managers about Kanban, whether as part of a workshop or with a team actually practicing Kanban, the issue of testing as the bottleneck surfaces quite quickly.

A typical situation is a classic bottleneck in a kanban system. Testing is at its work-in-progress limit, meaning it cannot take on more work. Acceptance has no work in progress, what we call a "bubble", and Development is at its limit as well. When we take a closer look, we see even more indications that this is a bottleneck: nothing in Testing is DONE and waiting to be pulled, which explains why Acceptance has a bubble, and in Development a significant part of the work in progress is DONE, waiting to be pulled into Testing.

The implication of this situation is that Development will not be able to pull in new work and will have to look at what else they can do to help the flow of work. In theoretical discussions, workshop participants are quick to grasp that development now needs to go and see how they can help the testing. In real life, what you usually see at first is the developers seeming oblivious to this simple conclusion, and the testers starting to get defensive about blocking the flow. All of it quite natural…

I try to get teams to use the five focusing steps from TOC at this point.

IDENTIFY the system's constraint – our Kanban board found the bottleneck/constraint for us.
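To make this picture concrete, here is a minimal sketch, in Python, of how such a bottleneck shows up in board data. The column names, WIP limits, and item counts are invented for the illustration; this is not a real board integration.

```python
from __future__ import annotations

from dataclasses import dataclass
from typing import Optional


@dataclass
class Column:
    name: str
    wip_limit: int
    in_progress: int  # items actively being worked on in this column
    done: int         # items finished here, waiting to be pulled downstream


def find_bottleneck(columns: list[Column]) -> Optional[Column]:
    """Flag a column that is at its WIP limit, has nothing DONE for the next
    column to pull, and has DONE work queuing up in the column before it."""
    for upstream, column in zip(columns, columns[1:]):
        at_limit = column.in_progress + column.done >= column.wip_limit
        starving_downstream = column.done == 0
        queue_upstream = upstream.done > 0
        if at_limit and starving_downstream and queue_upstream:
            return column
    return None


# The situation described above: Testing at its limit, Acceptance with a "bubble",
# and Development holding DONE work it cannot push into Testing.
board = [
    Column("Development", wip_limit=4, in_progress=2, done=2),
    Column("Testing",     wip_limit=3, in_progress=3, done=0),
    Column("Acceptance",  wip_limit=2, in_progress=0, done=0),
]

bottleneck = find_bottleneck(board)
print("Bottleneck:", bottleneck.name if bottleneck else "none detected")
```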

Decide how to best EXPLOIT the constraint – here I ask teams to think about whether testing is as efficient as it can be, and whether there are ways the team can help it be more efficient. Some practical ways to do that are:

  • Break down the work of testing into smaller tasks. This will help the team identify tasks which can be offloaded from testing to other members of the team, or ideally automated. It also provides better visibility into where the majority of the bottleneck's time is spent.
  • Go see how testing spends their time and how much of it they're actually testing versus doing other things. You can get some nice ideas from the "TOC Blue Light" story. The idea here is that testing should be able to spend most of their time actually testing. If they're waiting for code, for developers to come by, for setups to happen, for data to populate, etc., then there are probably ways to help them be more efficient. As a manager you might need to ask your team questions to try and direct them towards exploring this issue.
  • Explore how much rework testing has to deal with. Rework comes in the form of defects they need to open, wait for resolution on, and verify; repeated testing due to repeated problems; and late changes to the implementation after a round of testing, when the implementation and the acceptance criteria / test plan are not aligned. Reduce rework to help exploit the testing constraint.
  • Practices such as ATDD help align requirements, acceptance criteria, and implementation (a loose sketch of an ATDD-style test follows this list). Other practices from the XP world, such as TDD, pair programming/code reviews, coding standards, and continuous integration, help increase the actual quality of the code that comes into testing.
  • Discussing and defining what it means for a story to be Ready for Testing (or, alternatively, Development's Definition of Done) is a very good way to reduce rework as well.
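As a loose illustration of the ATDD idea above, here is a minimal sketch of an acceptance test written together with (or before) the implementation. The story, the shipping rule, and the shipping_fee function are invented for the example; the point is only that acceptance criteria, tests, and implementation stay aligned, which cuts down the rework that hits the testing constraint.

```python
import pytest

# Acceptance criteria for the (hypothetical) story
# "Free shipping for orders of 50.00 or more":
#   - orders below 50.00 pay a flat 5.00 shipping fee
#   - orders of 50.00 or more ship for free


def shipping_fee(order_total: float) -> float:
    """Implementation the developers write (or refine) to make the acceptance tests pass."""
    return 0.0 if order_total >= 50.0 else 5.0


@pytest.mark.parametrize(
    "order_total, expected_fee",
    [
        (10.00, 5.00),   # small order pays the flat fee
        (49.99, 5.00),   # just under the threshold still pays
        (50.00, 0.00),   # the threshold itself ships free
        (120.00, 0.00),  # anything above ships free
    ],
)
def test_shipping_fee_matches_acceptance_criteria(order_total, expected_fee):
    assert shipping_fee(order_total) == expected_fee
```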

SUBORDINATE everything else to the above decision – Kanban's limited WIP inherently subordinates everyone to the constraint, providing them with slack time that can be used to help exploit the constraint once some potential areas for improvement have been identified. Another way to help may be to take on some of the testing work. This is a short-term solution though, both because developers don't really like to do testing for long and because it's not their strong suit.

ELEVATE the system's constraint – Sometimes exploiting will be enough. In other cases, the constraint is so strong that you will want to elevate it in a more strategic way. This is where actual investment and changes in structure come in. One alternative is to shift or share responsibilities – e.g. make test automation the baby of the entire team, not just the testers (see "Why test automation costs too much" for some related ideas). Sometimes elevating via changed responsibilities will not be enough. One other thing to look at, before concluding that you have the wrong ratio, is the strength of the developers and testers. I've seen many cases where the team has real stars on the dev side, but the engineers in testing are not up to their level. Especially in an Agile environment, it's quite important to have strong testers who are able to keep up with the pace of the team. It's also important that they have the respect of the developers, especially if you are going for something like ATDD where the testers actually participate in guiding the way of the team. If you have strong testers and they're still the constraint/bottleneck, maybe indeed your ratio is not a good one, and you need to consider what to do. But if you went through the focusing steps, tried to exploit the constraint, and have a kanban system where this process is visible and there is concrete data that shows the delivery implications of the constraint (ideally its economic cost as well), it will be easier to make, and get buy-in for, decisions that have economic outcomes (increasing headcount, moving headcount).

Summary: Your kanban system will surface any constraints related to different throughput of Dev and Test. Kanban, together with the TOC five focusing steps, helps the team make the best of what they currently have and improve their processes and tools, focusing on the areas which require the most improvement. If all of this is still not enough, you have a good story and substantiation with which to make farther-reaching decisions about the ratio. So what is the right ratio? The best answer would be to try using an Agile/Kanban system and find out. For those looking for specific numbers, I can say, based on what we've seen so far at Agilesparks, that a ratio of 2:1 has a good chance of working for a team that is willing to adopt Agile engineering practices, including test automation by the team and ATDD, and where the testers' strength is comparable to the developers'.