Collaborating with specialized roles using kanban classes of service

I want to share a solution I came up with together with a performance / non-functional testing team working in a product group at a large enterprise. It deals with the challenge of reconciling the principles "those who build the system test it" and "non-functional testing is a collaborative role" with the fact that specialized roles such as performance testers are usually stretched thin covering the demand for their services.

The group wanted performance testing for the product, of course. What usually happened, though, was that the performance team only got usable features towards the end of the release (think waterfall-like behaviour). This usually meant serious waivers and compromises around performance testing.

The first step the product group took was to break features more effectively into smaller viable features and stories. Once this was done, it was easier for the performance testing team to get involved throughout the release and achieve a more reasonable flow.

Things should have been great by now. 

BUT then another problem surfaced, even while we were discussing how this would work. 

It was clear that the capacity of the performance testing team wasn't sufficient to really address everything. 

The naive policy was that when a feature arrived at performance testing, the team would decide whether they had time to cover it, do some risk management, and either test it or skip it.

The downside of this is that it's a black/white decision. It misses the opportunity for the agile delivery teams to do at least SOME performance testing themselves, even if they don't have the capabilities, tools, or expertise of the dedicated performance testing team.

Our suggested solution was to use the concept of kanban Classes of Service to improve on this naive policy. 

Since we already know that not every feature requires the same performance testing attention, let's not wait until a feature arrives at the performance team to make this classification. Let's do it earlier, before it goes into development/testing.

With this classification in place, policies can be set up that involve both the performance testing team and the agile delivery teams in the work of performance / non-functional testing.

We came up with a flag system (a rough code sketch of it follows the list):

    Red – the performance team must be involved hands-on, probably by joining the Feature team working on this feature for the duration of the effort.

    Yellow – the performance team advises/consults, but most of the work is done in the teams. A representative of the performance team will visit the Feature team quite often while the feature is being worked on.

    Green – no involvement from the performance team is needed.
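
To make this concrete, here is a minimal sketch in Python of how such a flag classification might be encoded. The Flag values mirror the list above; classify, its criteria, and the sample feature are illustrative assumptions, not part of the actual system we used.

    from enum import Enum

    class Flag(Enum):
        RED = "red"        # performance team joins the Feature team hands-on
        YELLOW = "yellow"  # performance team advises; Feature team does most of the work
        GREEN = "green"    # Feature team handles it on its own

    def classify(feature):
        """Classify a feature BEFORE development starts, not when it
        arrives at the performance team. The criteria here are made up."""
        if feature.get("new_architecture") or feature.get("high_throughput"):
            return Flag.RED
        if feature.get("touches_shared_services"):
            return Flag.YELLOW
        return Flag.GREEN

    def involvement(flag):
        """Map a flag to the collaboration policy described above."""
        return {
            Flag.RED: "performance engineer joins the Feature team for the duration",
            Flag.YELLOW: "performance team consults; Feature team runs the tests",
            Flag.GREEN: "no performance team involvement needed",
        }[flag]

    feature = {"name": "bulk export API", "touches_shared_services": True}
    flag = classify(feature)
    print(feature["name"], "->", flag.value, "-", involvement(flag))

The point of the sketch is the timing: the flag is assigned when the feature is broken down, so both teams know the expected level of collaboration before any work starts.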

This system helps drive collective ownership of non-functional testing. One of the first things that happened was that the Feature teams told the performance testers that there are some kinds of tests they can run on their own, even though they don't have the highly specialized performance tools.

We are also experimenting with systems like this for involving architecture teams, and the approach applies to most kinds of super-specialization that are not easily integrated into Feature teams.

Managing the overall flow using kanban will also help us see whether a bottleneck develops and what more can be done to overcome it.
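
As a rough illustration of what spotting such a bottleneck could look like, here is a small Python sketch that compares per-column work-in-progress against WIP limits on a kanban board. The column names, limits, and counts are made-up assumptions for illustration only.

    # Spot a developing bottleneck from per-column WIP counts.
    wip_limits = {"development": 6, "performance testing": 2, "done": None}
    current_wip = {"development": 4, "performance testing": 5, "done": 12}

    for column, limit in wip_limits.items():
        if limit is not None and current_wip[column] > limit:
            print(f"Bottleneck forming at '{column}': "
                  f"{current_wip[column]} items against a WIP limit of {limit}")

If the performance testing column keeps blowing its WIP limit, that's a signal to revisit the flag policies, for example by shifting more features from Red to Yellow and growing the Feature teams' own testing capabilities.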