The more experience I get with Kanban, and the more I talk about it with people, the more I see that one of the main challenges is maintaining some form of goal-driven cadence that energizes the team.
If every one of your Kanban cards/stories is an independent goal (e.g. in a support environment), it's easy to connect to the business, and there's usually an SLA to energize you.
If you are working in an environment where the business goals are quite big and have been broken down in order to flow through your system, it's a different challenge.
I've lately been thinking more and more about how to use the MMF level to generate this cadence.
It actually started in a pure-Scrum environment, where a team frustrated with fixed-length sprints asked why not do a sprint aligned with the delivery of the feature they're currently working on. Later, as I dove deeper and deeper into Kanban, I started seeing teams that I think would benefit from a clearer, higher-level goal than delivering stories. It also makes a lot of sense to align the cadence with the higher-level activities, the "mini-projects" you are working on.
In parallel, there was some discussion over in kanbandev around improving the status reporting/visibility for features in a Kanban world. So I sat down with @AmitElad7, another Kanban freak on the Agilesparks team, to think about what we can experiment with here. Doing some research, I recalled that Scrum Type C is quite similar to what we are discussing, and I also came (again) across Kanban and the New New Product Development Game | AvailAgility, which discusses MMFs and Scrum Type C and talks more or less about our direction here.
So, down to business, what are we talking about?
The main thing we came up with is the understanding that, ideally, a team should be able to do MMF-based one-piece flow – meaning a WIP limit of one MMF per team.
What does this mean? Let's assume the team is currently working on an MMF. This MMF has some stories in progress, some done, and some identified but not yet started. This is similar to observing a team in the middle of a Scrum sprint: they are working on stories, doing a daily standup, reviewing each story with their customer/product owner as it materializes. Once all the stories are done (working, tested software; a potentially shippable product), this MMF is also done, and can be reviewed (at the MMF level) and retrospected. Then the team can start planning the next MMF – understanding the story and breaking it down into smaller bits that can flow through the team and that the team can swarm on. Once planning is done, the team can start working on this MMF-driven sprint.
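To make the one-piece-flow rule concrete, here is a toy sketch (purely illustrative – no real board or tool is implied, and all names are made up) of a board that refuses to start a new MMF while another is still in progress:

```python
# Toy model of an MMF-level WIP limit of one per team (illustrative only).

class TeamBoard:
    MMF_WIP_LIMIT = 1  # one-piece flow: one MMF per team

    def __init__(self):
        self.in_progress = []

    def start_mmf(self, name):
        # The board refuses new MMFs until the current one is finished.
        if len(self.in_progress) >= self.MMF_WIP_LIMIT:
            raise RuntimeError("WIP limit reached: finish the current MMF first")
        self.in_progress.append(name)

    def finish_mmf(self, name):
        # MMF-level review and retrospective would happen here,
        # before planning the next MMF.
        self.in_progress.remove(name)

board = TeamBoard()
board.start_mmf("Feature A")
board.finish_mmf("Feature A")
board.start_mmf("Feature B")  # allowed only after Feature A is done
```

The point is simply that the WIP limit is enforced at the MMF level, not the story level – stories still flow freely inside the current MMF.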
So far so good.
Now a few questions come up:
* What is the length of this MMF-based sprint? Is it pre-determined using Estimation/Commitment? Does it emerge as the stories are completed and progress is made?
* What happens if the team cannot effectively swarm on one MMF? Do we introduce another MMF? What happens to the cadence then? Do we run the cadence as one big team, or break into teamlets that run the cadence for their MMF separately?
* How do we deal with the issue of the first/last days of an MMF sprint? How do we use the slack there, and how do we deal with overloading testing at the end? Is it enough to say "break work into smaller stories, use much more automation" and the other typical recommendations for avoiding the ScrumFall? Since we follow more of the Kanban evolution model, shouldn't we allow a better way to deal with this than requiring a fast revolution?
* What kind of visibility/metrics can we align with this approach?
I will not answer all of these questions today, but will try to follow up in other posts soon. In the meantime we're also trying out these things in actual client deployments, which in true agile fashion will probably affect our thinking… 🙂
To sum up for today, it seems like what we are talking about is improving the ability of the MMF to be a boundary object between a team and the bigger project/release – emphasizing it as the main object for discussion, tracking, reviewing, delivering to the downstream parts of the workflow, and, ideally, releasing. This idea is sort of a mashup of ideas raised by others, and I suspect some Kanban practitioners are already doing this at some level, but it's not yet documented or fully supported by tooling at this point (as the discussion about feature-level burnups/CFDs/parking lots in kanbandev highlights).
It also might warrant a catchy name, which is not one of my strengths.
So what do you think? Are you actually doing this today and can share from your experience? Do you think it's a good/bad idea?
Lately I've been trying to understand how best to represent Known Unknowns (KU) stories/tasks in the burndown chart (BDC). I'm talking about things like support cases from customers – work that is essentially unrelated to the stories in the backlog, but that you know you will need to address.
It's the kind of stuff that "is not supposed to happen" as part of the sprint, but we're in the real world, and in the real world, it happens.
I’m familiar with two major ways to account for those:
- One is to remove them from the capacity. This removes them from visibility; OTOH, it makes for a pure burndown/capacity/velocity calculation and tracking.
- The other is to create stories for those "buckets". This way you can track them and have better visibility.
Assuming an organization is interested in tracking and measuring this part of the work, the team needs to integrate this into their burndown chart, velocity, etc.
The issue is that those tasks are a sort of "buffer for a rainy day", so the remaining effort on them as the sprint goes by is not necessarily related to the actual effort invested against the whole bucket, but rather to the time left in which the events might happen.
Think support cases from customers. Assume you have a 30-day sprint and you budgeted 20 days for this kind of work based on yesterday's weather. After 15 days, assuming you have spent 5 days on it already, what should the remaining effort be on the BDC?
Naively, 20-5=15 days. But realistically, if your yesterday's weather can be counted on, then after half the sprint the statistics say the remaining effort should be 20/2=10 days.
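The two calculations above can be sketched as follows (a minimal illustration, not from any tool; the function names are mine):

```python
# Two ways to compute remaining effort for a Known Unknowns (KU) bucket.
# Numbers match the example above: 30-day sprint, 20 days budgeted,
# 15 days elapsed, 5 days actually spent so far.

def naive_remaining(budget, spent):
    """Remaining effort as budget minus effort invested so far."""
    return budget - spent

def time_proportional_remaining(budget, sprint_length, days_elapsed):
    """Remaining effort proportional to the sprint time still left,
    assuming the budget ("yesterday's weather") can be counted on."""
    return budget * (sprint_length - days_elapsed) / sprint_length

budget, sprint_length, days_elapsed, spent = 20, 30, 15, 5

print(naive_remaining(budget, spent))                                    # 15
print(time_proportional_remaining(budget, sprint_length, days_elapsed))  # 10.0
```

The gap between the two numbers (15 vs. 10) is exactly the distortion a team sees when the tool only knows how to subtract invested effort.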
Is anyone aware of any tools that manage these kinds of KU effectively without requiring the team/SM to manually update the remaining effort?
Together with some other coaches, we raised the option of drawing the planned work as an internal burndown line BELOW the total burndown. Another alternative was to show a burnup of this kind of work separately, so the team has visibility.
The idea is to make sure the team knows whether they got lucky and have more capacity left for their planned work – and so should be more aggressive than their original plans – or the other way around. Without this kind of visibility, a critical aspect of the burndown chart is lost: teams don't trust it, and there is too much fog to see clearly where they SHOULD be, so they don't hold themselves accountable to where they ACTUALLY are.
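Here is a small sketch of the "internal burndown line" option – the lower line tracks remaining planned work only, while the total line stacks the time-proportional KU buffer on top. This is an illustration of the idea under my own assumptions, not a description of any existing tool:

```python
# Build the two burndown lines: planned work only, and planned work
# plus the time-proportional Known Unknowns (KU) buffer.

def burndown_lines(planned_remaining_by_day, ku_budget, sprint_length):
    """Return (planned_line, total_line), one value per sprint day."""
    planned_line = list(planned_remaining_by_day)
    total_line = [
        planned + ku_budget * (sprint_length - day) / sprint_length
        for day, planned in enumerate(planned_line)
    ]
    return planned_line, total_line

# Example: 10 days of planned work burned evenly over a 5-day sprint,
# with a 5-day KU budget on top.
planned, total = burndown_lines([10, 8, 6, 4, 2, 0], ku_budget=5, sprint_length=5)
print(planned)  # [10, 8, 6, 4, 2, 0]
print(total)    # [15.0, 12.0, 9.0, 6.0, 3.0, 0.0]
```

Plotting both lines lets the team compare their actual remaining planned work against the lower line, while the gap between the lines shows how much KU buffer is still "expected" to be consumed.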
Bonus question – do you have teams that count this work into velocity? using SP? How?
Note that the rationale for counting this into velocity is the “metric” side of velocity, not the planning aspect.
It's clear that this cannot be used to plan the release. It can, however, if you find the right way to account for it, reflect the productivity of the team around this kind of work.
A key theme in Agile is embracing change. We hear time and again that change is good, we should "welcome change", yada yada yada. Most teams I see are sick of change. On one hand, they are not fully exposed to the business case for change or to the sense of urgency around changing fast. On the other hand, change is HARD for them to absorb.
Effectively embracing change means overcoming the cost-of-change curve. Traditionally, we were taught that change costs more the later you learn about it. This is still true. The cost comes from the complexity of dealing with change when it is buried under layers of work done since the original development was finished.
In many systems the quality of the codebase makes it even harder – spaghetti code, copy pasting, and a bunch of other smells / technical debt aspects make time work against us.
So how can Agile Engineering practices help us deal with change?
First, since we anticipate that anything can change, it's important to avoid wasting work in areas that might change.
In general, it's important to do as much of our work "just in time" as possible, avoiding planning ahead based on wishful thinking about how the future will look. The XP principle of "Simple Design" helps us provide the simplest solution that can possibly work. Complexity and "fully featured, flexible solutions" are our enemy here: we have no way to know that they will actually be used, since we cannot really predict the path the system will take. "You Ain't Gonna Need It" – YAGNI – is another key XP phrase.
We also need to keep our systems at very high codebase quality to minimize the cost of changing when it does happen. Refactoring as part of the ongoing development cycle helps us achieve a Sustainable Pace where we might pay an ongoing small price, in order to be ready for whatever curve ball change throws at us. Think of exercising in order to keep our body healthy and able to cope with whatever life throws at us…
The air force uses AWACS planes and other types of long-range radar to get early warning.
In agile we use iterations, TDD and CI.
Iterations are a key factor here. Since we get feedback after every iteration, we hear about required changes earlier, and we also discover many defects thanks to the testing we do in order to achieve a potentially shippable product (PSP). This much can be achieved without any special engineering practices.
Test-Driven Development provides much earlier feedback about the behaviour of our units, modules, and systems, and makes sure that our system is REALLY designed to meet its specification.
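As a tiny, hypothetical illustration of that rhythm (the function and rule here are made up for the example): the test is written first and states the required behaviour, and the implementation exists only to make it pass.

```python
# TDD in miniature: the test IS the specification, written before the code.

def test_discount_is_capped_at_50_percent():
    # Required behaviour, stated up front: discounts never exceed 50%.
    assert apply_discount(price=100, percent=80) == 50

def apply_discount(price, percent):
    """Simplest implementation that can possibly work (and pass the test)."""
    return price * (100 - min(percent, 50)) / 100

test_discount_is_capped_at_50_percent()
print("test passed")
```

Every behaviour added this way leaves a regression test behind, which is exactly the safety net we lean on later when change arrives.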
Continuous Integration provides early warning for any integration issues, avoiding any form of “Big Bang” integration problems.
Once we do need to make a change, we rely on a high quality codebase, as well as several safety nets, in order to proceed quickly.
If we have used ongoing Refactoring, applied the relevant Design Patterns, and kept the design as simple as possible, the cost of making the change will be as low as it can be. Once we implement the change, Continuous Integration, together with the test suite we have accumulated using TDD, will catch any problem we create very quickly. Weeks of regression testing, or Dev/QA managers losing sleep over risky test coverage, will thankfully become a problem of the past.
Next in the series – how fast iterations strain the typical team starting with agile…
Welcome to a new blog for Agilesparks, our Agile/Scrum solutions company.
Last Thursday, we had the opportunity to present Agile/Scrum to a group of R&D leaders from various companies/organizations in Israel.
Danko provided his signature exciting intro to Agile, followed up by a customer success story, an intro to Engineering practices delivered by yours truly, a discussion of Agile Metrics and Measurements, and a Q&A panel.
Based on the initial feedback we got, it seems like we got people quite interested in what Agile/Scrum can provide…
If you would like to hear about Agile/Scrum, we have similar opportunities coming up. Check http://www.agilesparks.com for more details.
In the meantime, check out my presentation:
and some pictures from the event: