Making promises you can keep WITHOUT Scrum Sprint Commitment using Classes of Service

How can we make promises we can keep without a commitment to the sprint content?

So I convinced you that the Scrum Sprint Commitment is not such a great idea. I convinced you it is mainly there for learning. You want to move to a commitment to try to meet the forecast instead of committing to deliver the whole forecast. But your Product Owner has a real problem with this. He understands the learning rationale, but his stakeholders want to know whether he can promise them a certain delivery on a certain date. So can we make promises without the Sprint Commitment?

Making promises to deliver certain backlog items in this sprint

Sometimes a Scrum team is expected to deliver certain backlog items in a specific sprint. Examples include work other teams need to consume, a fixed-date commitment to clients, a regulatory requirement, etc. Such backlog items have a very high cost of delay, so when we promise to deliver them we really must deliver them. One way to make that more likely is to put them at the top of the Sprint Backlog. If the team works down the Sprint Backlog by priority (as they should), there is a higher chance they will deliver these backlog items.

But I believe we should be more explicit. We should have a clearer signal that these are special fixed-date items, and clearer policies for what to do to keep our promises around them. In Kanban, teams use classes of service for this purpose. I recommend Scrum Teams in such a context simply do the same. Mark these items with a special color in the Backlog. Establish policies such as "If they are in danger we make whatever effort is needed to deliver; Sustainable Pace will be put on hold". Visualizing that these items are different will earn them a different class of service from the team. It also means that normal items without a fixed-date commitment might be put aside to make the extra effort to deliver the fixed-date ones, even at the price of overall throughput. These items might also call for deeper estimation and planning up front than normal items.

One key point is to make sure that these fixed-date items are not the majority of items in the Sprint Backlog; otherwise they cannot rely on preferential service. If your whole workload is fixed-date driven, you need to be extra careful with planning and consider taking a time buffer to protect against the inherent variability in sprint results.

With time the Scrum Team and Product Owner will learn about their ability to deliver these items and might be able to make promises earlier before the Sprint Planning, knowing that the price will only be the effect on other normal items in the sprint.


Making promises to deliver on a bigger project across several sprints

I won’t go deeply into this aspect in this post. Normal Agile Release Planning, using a history of throughput/velocity and setting hard and soft commitments, is the way to make promises you can keep. This means that within each sprint there will be a certain level of hard commitment related to the overall project hard commitment. If that level of commitment is already a stretch for the team, then you have a dangerous project in which you cannot really expect safe-to-fail thinking or improvement, but rather a tight focus on meeting the commitment. Sometimes we have those projects. If you are always doing these kinds of projects, it is time to look in the mirror and discuss whether you are really setting the organization up with opportunities to improve and learn, or just constantly meeting commitments without any slack for improvement.
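As a minimal sketch of what throughput-based release planning can look like, here is a toy forecast that derives a hard (conservative) and a soft (average-pace) commitment from historical throughput. The numbers, the function name, and the safety factor are all hypothetical illustrations, not a prescribed method:

```python
import statistics

def sprints_to_deliver(backlog_size, throughput_history, safety_factor=0.8):
    """Estimate sprints needed to deliver `backlog_size` items.

    The hard commitment discounts average historical throughput by a
    safety factor; the soft commitment assumes average pace.
    """
    avg = statistics.mean(throughput_history)
    conservative = max(1, int(avg * safety_factor))
    # Ceiling division: -(-a // b) rounds up without importing math
    hard = -(-backlog_size // conservative)
    soft = -(-backlog_size // max(1, int(avg)))
    return hard, soft

# e.g. 60 backlog items, a team historically finishing 8-12 items per sprint
hard, soft = sprints_to_deliver(60, [8, 10, 12, 9, 11])
```

The gap between the hard and soft numbers is the slack that keeps the team out of permanent "stretch" mode.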

Do we need stack ranking in a kanban system?

I want to talk about a kanban system design issue today.

Do we need a way to portray full stack ranking of cards throughout a kanban system, or is it enough to see stack ranking per lane?

To elaborate – sometimes it feels necessary to convey the priority of cards and have that priority travel with the card throughout the system. One way to achieve this is to allocate running priorities such as 10, 20, 30, etc. to cards. Those of us with a background in the BASIC programming language will recognize the numbering scheme – it leaves gaps for inserting other cards between priority slots later on.
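For readers who never wrote line-numbered BASIC, a tiny sketch of the gap-numbering idea (the function name is mine, purely illustrative):

```python
def insert_between(lower, upper):
    """Return a priority slot between two gap-numbered priorities
    (BASIC-style 10, 20, 30 numbering), or None if the gap is
    exhausted and a renumbering pass is needed."""
    mid = (lower + upper) // 2
    return mid if lower < mid < upper else None

insert_between(10, 20)  # 15 - room to slot a card in between
insert_between(10, 11)  # None - adjacent slots, renumber first
```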

Since not many electronic kanban systems support this, I was forced to think again about whether it is actually necessary, and reached the conclusion that it is somewhat redundant if a few key assumptions hold.

Assuming all cards are of the same “class of service”, it is typically reasonable to adopt a “FIFO after pulling into WIP” policy – which means that after a card is pulled into work, we should make every effort to finish it, even if it originally had a slightly lower priority than other candidate cards. If we pulled it we had a good reason, and relative priority is no longer relevant – let’s just get it over with. If that is the case, after pulling the card all we care about is FIFO – so stack ranking within a lane, together with lane position, should be enough: just pull the topmost card from the rightmost lane possible to obey FIFO (or look at start dates).
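The "topmost card from the rightmost lane" rule can be sketched in a few lines. The board representation below (lanes ordered left to right, cards top to bottom) is my own simplification for illustration:

```python
def next_card_to_pull(board):
    """board: list of (lane_name, cards) pairs, leftmost lane first,
    cards ordered top (highest stack rank) to bottom.

    Scan lanes from rightmost to leftmost and take the topmost card of
    the first non-empty lane - approximating FIFO across the board,
    since rightmost cards were started earliest."""
    for lane_name, cards in reversed(board):
        if cards:
            return cards[0]
    return None  # board is empty

board = [
    ("Development", ["card-C", "card-D"]),
    ("Testing",     ["card-A", "card-B"]),  # rightmost lane, started earliest
]
```

Here the rule picks "card-A": it sits at the top of the rightmost non-empty lane.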

The plot thickens when there are more classes of service. The policies for Fixed Date and Expedite deal themselves with priorities and pull order so I won’t elaborate on them here as they’ve been covered elsewhere in Kanban literature in depth.

A class of service that is not often discussed is a “stretch” card from a release backlog – essentially a “wishlist” card that was not committed as part of the release, so we prefer not to pull it before committed scope. The policy we probably want is to always pull “scope” cards before “stretch” cards when possible. Even if a “stretch” card made it into WIP, we might want to bypass it if a “scope” card is catching up. (You might wonder why a “stretch” card would be pulled in the first place – the typical answer is resource capabilities: “scope” cards were skipped because of a lack of knowledge until a “stretch” card was reached. This is an indication of a lack of versatility in capabilities, and should be monitored and improved on, by the way.)
So basically all we need to support this policy is a visualization of the fact that a card is “stretch” scope. It is possible to use a “low priority” indicator for this, but it may be better to make it a distinct card type so it is easier to filter out when tracking release progress using a Cumulative Flow Diagram or burn-up chart. You can either display “stretch” cards as completed scope, or ignore them and track only the completion rate of “scope” cards; I would think the latter is preferable.
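The pull policies discussed so far boil down to a simple ordering: class of service first, FIFO (start date) within a class. A minimal sketch, with the card representation and class ranking being my own illustrative choices:

```python
# Lower rank pulls first; "scope" always beats "stretch".
CLASS_RANK = {"expedite": 0, "fixed-date": 1, "scope": 2, "stretch": 3}

def pull_order(cards):
    """Order candidate cards for pulling.

    Each card is a (class_of_service, start_day, name) tuple.
    Sort by class of service rank, then FIFO by start day within a class.
    """
    return sorted(cards, key=lambda card: (CLASS_RANK[card[0]], card[1]))

cards = [("stretch", 1, "S1"), ("scope", 3, "C1"), ("scope", 2, "C2")]
```

With this ordering, both "scope" cards come before the "stretch" card even though the stretch card was started first, which is exactly the bypass behavior described above.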

So bottom line seems like full board-wide stack ranking is not required in most cases. Classes of Service or a few priority pools can suffice for most reasonable pull policies.

What is your experience? Do you track priorities in a different way?

Collaborating with specialized roles using kanban classes of service


I want to share a solution I came up with together with a performance / non-functional testing team working in a product group at a large enterprise. It deals with the challenge of bridging the principles “those who build the system test it” and “non-functional testing is a collaboration role” with the fact that specialized roles such as performance testers are usually stretched thin covering the demand for their services.

This group wanted performance testing for the product, of course. What usually happened, though, is that the performance team only got usable features towards the end of the release (think waterfall-like behaviour). This usually meant serious waivers and compromises around performance testing.

The first step taken by the product group was to work on breaking the features more effectively into smaller viable features and stories. Once this was done, it was easier for the performance testing team to get involved throughout the release and achieve a more reasonable flow.

Things should have been great by now. 

BUT then another problem surfaced, even while we were discussing how this would work. 

It was clear that the capacity of the performance testing team wasn't sufficient to really address everything. 

The naive policy meant that when a feature arrived at performance testing, the team would decide whether they had time to cover it, do risk management, and either test it or skip it.

The downside of this was that it’s a black/white decision. It misses the opportunity for the agile delivery teams to do at least SOME performance testing, even if they don’t have the capabilities, tools, and expertise of the dedicated performance testing team.

Our suggested solution was to use the concept of kanban Classes of Service to improve on this naive policy. 

Since we already know not every feature requires the same performance testing attention, let’s not wait until a feature arrives at the performance team to make this classification. Let’s do it earlier, before we go into development/testing.

With this classification, policies can be setup that can involve both the performance testing team as well as the delivery agile teams in the work of performance / non-functional testing. 

We came up with a flag system:

Red – the performance team must be involved hands-on, probably by joining the Feature team working on the feature for the duration of the effort.

Yellow – the performance team advises/consults, but most of the work is done in the teams. A representative of the performance team will visit the Feature team quite often while the feature is being worked on.

Green – no involvement from the performance team is needed.
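The flag system is really just a small lookup from classification to involvement policy. A toy sketch (the wording of the policies paraphrases the flags above; the names are mine):

```python
# Hypothetical mapping from flag color to involvement policy.
POLICIES = {
    "red":    "performance team joins the feature team hands-on for the duration",
    "yellow": "performance team advises/consults; feature team does most of the work",
    "green":  "feature team handles performance checks on its own",
}

def involvement(flag):
    """Look up the agreed involvement policy for a feature's flag."""
    return POLICIES[flag.lower()]
```

The value of writing it down this explicitly is that the policy is decided up front, per feature, rather than improvised when the feature lands on the performance team's desk.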

This system helps drive collective ownership of non-functional testing. One of the first things that happened was that the Feature teams told the performance testers there are some kinds of tests they can run on their own, even though they don’t have highly specialized performance tools.

We are also experimenting with systems like this for involving architecture teams, and it applies to most kinds of super-specializations that are not easily integrated into Feature teams.

Managing the overall flow using kanban will also help see whether a bottleneck develops, and what can be done further to overcome it.