Tag Archives: MMF

Explaining MVPs, MVFs, MMFs via the Lean/Agile Requirements Dinosaur

Estimated Reading Time: 3 minutes

In the last few weeks I've been using a new visualization that people find useful for understanding the relationship between the various Lean/Agile requirement containers. Some people call the full model a dinosaur. Others are reminded of the snake that ate an elephant from "The Little Prince". (I'm sure there is a good connection to elephant carpaccio somewhere in here…)

The first step is to understand that a new product starts with a unique value proposition hypothesis. This is the area where your product/service will be unique.

The next step is creating a Minimum Viable Product (MVP) to test your hypothesis. This is focused on your unique value proposition, but typically also provides a few "table stakes" features just to make sure it is "Viable" as a product.

Your MVP is also a hypothesis. It might be good enough to find Product Market Fit, or it might not. A common case is that each potential customer you engage tells you "This is great, but in order for me to use it I need X" – and X is different for each customer/user. That tells you that you haven't reached Product Market Fit yet.

If, on the other hand, you are seeing more and more answers pointing to the SAME X, then it makes sense to revise your Customer/Problem/Solution hypothesis.

You are essentially executing a Pivot: building MVP2, focused on the new hypothesis, based on the Customer Development learning generated by the previous MVP.

Let's say MVP2 is successful and you are seeing real traction with early adopters. You want to increase growth, so you are looking for deeper penetration of your early adopters as well as for new clients, some of them beyond the early-adopter crowd. Based on the feedback you've been collecting and your product management research, you have a couple of areas that can potentially bring this growth. Some of them, by the way, extend your unique value proposition, and some make your current product more robust.

In the case of areas with a strong indication of value, you might go straight for Minimally Marketable Features (MMFs) – finding the minimum piece that can start bringing in growth. The aim of the MMF is to bring in value. It assumes high certainty that there is value in this area and that we know what the product needs to be to provide this value. The reason to break a big feature into smaller MMFs is mainly time to market and the ability to bring in value in many areas, always keeping the option to move to another area and provide value there rather than focusing for too long on a single direction. An indication that you are working on MMFs is that when one is being shipped you feel comfortable working on the next MMF in that area. If, on the other hand, you want to wait and see if your first MMF sticks…

…then you are back in hypothesis land. But now your hypothesis is centered on a feature rather than on your product. You have an area with high potential but also high uncertainty. The way to deal with it is to build a "pioneering" feature – the Minimum Viable Feature (MVF): the minimum feature that can still be viable for real use and for learning from real customers.

If you learn that the MVF has struck gold, you can develop more MMFs in that area to take advantage of it (if that makes sense). If not, you can pivot to another approach to that feature area, or at some point look for an alternative growth path. Essentially, the MVF is a mini-me version of the MVP.

There you have it – the full model. Essentially, my point is that you grow a product in uncertain markets by attempting various MVPs. Then, once you achieve Product Market Fit, you mix MMFs and MVFs depending on the level of Business/Requirements uncertainty in the areas you are focusing on.

While MVPs/MMFs/MVFs are atomic from a business perspective (you cannot deploy and learn from anything smaller), they might be quite big from an implementation perspective.

The dinosaur carpaccio now comes in: slicing each of those pieces into smaller slices aimed at reducing execution/technology risk (typically these are called User Stories). Some of those smaller slices might have tangible business value on their own; others might not. What matters more is that they provide early implementation and decision feedback along the way.
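If it helps to see the decision logic of the model compressed into something executable, here is a toy sketch in Python. The boolean inputs are my own simplification of "level of Business/Requirements uncertainty", not part of any formal definition:

```python
def next_container(value_confidence_high: bool, requirements_uncertainty_high: bool) -> str:
    """Pick the Lean/Agile requirement container for a growth area, per the model above."""
    if not value_confidence_high or requirements_uncertainty_high:
        # High uncertainty: build the minimum feature we can still learn from.
        return "MVF"
    # High certainty about the value and about what to build: go straight for value.
    return "MMF"

# Whichever container you pick, it then gets sliced into User Stories
# ("dinosaur carpaccio") to reduce execution/technology risk and get early feedback.
print(next_container(value_confidence_high=True, requirements_uncertainty_high=False))  # MMF
print(next_container(value_confidence_high=True, requirements_uncertainty_high=True))   # MVF
```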

Feel free to use this model. Let me know what you think about it and how I can improve it!

Encouraging Feature-level progress tracking in Kanban

Estimated Reading Time: 4 minutes

One of the key questions project managers and senior management in general ask themselves and their teams on an ongoing basis is: "Are we on track to deliver the scope we committed to, on time?" In some environments, "on budget" is added to the question.

If you are talking about a Release Scope, the answers are quite similar whether you're doing Scrum or Kanban. If you don't care too much about the budget aspects, a Release Burnup can show you the committed scope, the committed date, and the actual progress in working software towards that goal – Plan versus Actual. If you ARE interested in the budget picture – committed budget versus actual, and whether we are on track to finish the release within the budget we committed to – use AgileEVM on top of that (http://www.infoq.com/articles/agile-evm is a good place to start).

Basically, for all of this you are measuring the amount of done feature work compared to the amount of feature work originally planned. Whether it is sized using effort days, story points, or function points, the idea is the same.
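To make the Plan-versus-Actual idea concrete, here is a toy sketch of the kind of calculation an AgileEVM-style tracker does (Python, with made-up numbers – see the InfoQ article above for the real treatment):

```python
# Illustrative sketch of plan-vs-actual release tracking, AgileEVM-style.
# All numbers and names here are made up for the example.

planned_points = 200            # feature work committed for the release (story points)
completed_points = 90           # feature work done so far (working software only)
planned_sprints = 10            # release length in sprints
elapsed_sprints = 5
budget_at_completion = 500_000  # committed release budget
actual_cost = 280_000           # spent so far

# Earned value: how much of the committed scope is actually done.
earned_value = budget_at_completion * (completed_points / planned_points)
# Planned value: how much we expected to have done by now.
planned_value = budget_at_completion * (elapsed_sprints / planned_sprints)

schedule_performance_index = earned_value / planned_value  # <1.0 means behind schedule
cost_performance_index = earned_value / actual_cost        # <1.0 means over budget

print(f"SPI: {schedule_performance_index:.2f}, CPI: {cost_performance_index:.2f}")
```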

At a conference a couple of months ago I talked about Agile Release Management and covered this subject somewhat. You can check out the slides at http://www.slideshare.net/yyeret/managing-projectsreleases-using-leanagile-techniques

I would add that this expectation of management is what we call Predictability in the Kanban world, and based on some encounters I've had with senior management, we as the Agile community have not been doing a great job of connecting to that expectation. In many cases it's the opposite – we create the impression that Predictability is a lost cause because everything is Agile.

In Kanban we try to connect better to this expectation of Predictability/Commitment for the important things. Senior management doesn't care about committing to a sprint goal and meeting it. They care about meeting commitments to deliver a release on time, with the feature highlights communicated to the stakeholder community. They care about meeting commitments to deliver certain features on time to internal and external parties that count on those features in order to go on and do something else.

Predictability will continue to be important. The way it's measured might change. For now, most teams/projects are indeed evaluated based on the answer to "Are we on track to meet the release goal on time?" We should support those teams with an approach that complements their Kanban flow-based workflow. The methodology is all there if you connect the dots.
The room for improvement is mainly in connecting those dots and providing a structured methodology that can be applied as a framework, as well as better tool support.

What are the gaps?

First, the thinking around the CFD needs to expand from a purely historical view to a forward-looking, predictive chart as well. What do I mean?

Most CFDs you see today focus just on the operational view – what the current state is, plus the history, which can help you improve your process and operation.

What I'm missing is a view of the work needed by a certain date, and whether we are on track to achieve our commitments/goals. Tools that extend the CFD to a view that includes the current trend, the required trend to meet the goal, and the trend of requirements churn can answer this question – you see whether the DONE line is trending towards the overall committed scope in time or not.
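As a rough illustration of what such a forward-looking view would compute (a hypothetical sketch, not any specific tool's algorithm), here is the basic arithmetic in Python:

```python
from datetime import date, timedelta

# Hypothetical inputs: weekly snapshots of cumulative "done" work and total scope
# (scope can grow over time due to requirements churn).
done_history = [10, 18, 25, 34, 41]        # story points done, one sample per week
scope_history = [100, 100, 105, 110, 110]  # committed scope, including churn
committed_date = date(2012, 6, 1)
today = date(2012, 3, 1)

# Current trend: average weekly throughput over the recent samples.
weeks = len(done_history) - 1
throughput_per_week = (done_history[-1] - done_history[0]) / weeks

remaining = scope_history[-1] - done_history[-1]
weeks_needed = remaining / throughput_per_week
projected_date = today + timedelta(weeks=weeks_needed)

# Required trend: the throughput we would need in order to hit the committed date.
weeks_left = (committed_date - today).days / 7
required_throughput = remaining / weeks_left

print(f"Projected finish: {projected_date}, committed: {committed_date}")
print(f"Current throughput {throughput_per_week:.1f}/week vs required {required_throughput:.1f}/week")
```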

One more complication is that, of course, you sometimes want your board to reflect many releases, not just one. You're working to finish one release, and then you move on to another. In this case, you probably want this view per release on the board.

So we need visibility charts that can aggregate the status of several cards at the level of a Feature, Release, MMR, MMF – whatever you want to call it. In FDD, Parking Lot diagrams are a popular way to convey the status of the various features/aspects of a Project/Release. An extension of a Parking Lot diagram is to add a mini-burnup for each entity. So beyond just the status (which is basically the current point of a burnup), you can have a mini-graph showing the status of the entities comprising the feature. See below for a sketch of how this can look. (Note that the Warning Indicators box is taken straight from the organizational dashboard page of LeankitKanban. I recently started to explore the capabilities of this dashboard and find them quite useful for helping bring a process under control – the sort of stuff you might want to look at in an operational review.)

The color of each parking lot / feature can easily be derived from where the actual progress is relative to the expected progress curve. The expected curve can be defined as linear (yeah, right), S-curve based as David Anderson is fond of, or whatever you think the profile should look like. Once you are below the curve, you start to gain reddish colors; above it, you are green. With Agile approaches relying on Working Software as the measure of progress, you can really trust those colors… It's no longer a watermelon (green outside, red inside – courtesy of Kent Beck).
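For illustration, here is one hypothetical way to derive that color in code – the logistic S-curve and the 15% amber threshold are assumptions I picked for the sketch, not anyone's official formula:

```python
import math

def expected_progress(t: float, profile: str = "s-curve") -> float:
    """Expected fraction of work done at time fraction t (0..1)."""
    if profile == "linear":
        return t
    # Simple logistic S-curve, rescaled so it starts at 0 and ends at 1.
    raw = lambda x: 1 / (1 + math.exp(-10 * (x - 0.5)))
    return (raw(t) - raw(0)) / (raw(1) - raw(0))

def parking_lot_color(stories_done: int, stories_total: int, time_elapsed_fraction: float) -> str:
    """Green if at or above the expected curve, amber/red as we fall below it."""
    actual = stories_done / stories_total
    expected = expected_progress(time_elapsed_fraction)
    gap = expected - actual
    if gap <= 0:
        return "green"
    return "amber" if gap < 0.15 else "red"

# Example: 8 of 20 stories done, with 60% of the feature's time window elapsed.
print(parking_lot_color(8, 20, 0.6))  # well below the S-curve -> "red" here
```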

For those interested in the details, here is one way a CFD can be extended to provide burnup capabilities. 

With this in mind, the mini-burnup in the parking lot can be upgraded to a mini-CFD.

Now, with a CFD, some more intelligence can be applied to help determine the color/state of the Feature. A high level of WIP can be a leading indicator of a problem (though knowing Little's Law and what a CFD looks like, you probably know it will also show up as a burnup that goes quite flat…). I'm guessing that with time we will learn to study and identify progress risks using a CFD, beyond the basics we currently use.
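For reference, Little's Law is the relationship behind that remark – average lead time = average WIP / average throughput – so when WIP climbs and throughput doesn't, lead times stretch and the burnup flattens. A tiny illustration with made-up numbers:

```python
# Little's Law: average lead time = average WIP / average throughput.
avg_wip = 24               # stories in progress for this feature (made-up number)
throughput_per_week = 4.0  # stories finished per week (made-up number)

avg_lead_time_weeks = avg_wip / throughput_per_week
print(f"Expected average lead time: {avg_lead_time_weeks:.1f} weeks")  # 6.0 weeks
```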

Bottom line – my feeling is that in order for Kanban to cross the chasm into the majority of projects/development groups, which are quite concerned with delivering Releases and Features on schedule, not just with trusting the Flow, we will need to provide more tools and more focus to support this use case. The core thinking is there, and the hunger on the part of the IT world seems to be there as well, so let's go out there and make it happen. My 2c…

MMF driven sprints in a Kanban world

Estimated Reading Time: 3 minutes

The more experience I get with Kanban, and the more I talk about it with people, the more I see that one of the main challenges is maintaining some form of goal-driven cadence that energizes the team.

If every one of your Kanban Cards/Stories is an independent goal (e.g. in a support environment), it's easy to connect to the business, and there's usually an SLA to energize you.

If you are working in an environment where the business goals are quite big and have been broken down in order to flow through your system, it's a different challenge.

I've lately been thinking more and more about how to use the MMF level to generate this cadence.

It actually started in a pure-Scrum environment, where a team was frustrated with fixed-length sprints and asked why not do a sprint aligned with the delivery of the feature they're currently working on. Later, I started to dive deeper and deeper into Kanban, and I'm seeing teams that I think would benefit from a clearer, higher-level goal than delivering stories. It also makes a lot of sense to align the cadence with the higher-level activities, the "mini-projects" you are working on.

In parallel, there was some discussion over in kanbandev around improving the status reporting/visibility of features in a Kanban world. So I sat down with @AmitElad7, another Kanban freak on the Agilesparks team, to think about what we can experiment with here. Doing some research, I recalled that Scrum Type C is quite similar to what we are discussing. I also came (again) across "Kanban and the New New Product Development Game" on AvailAgility, which discusses MMFs and Scrum Type C and talks more or less about our direction here.

So, down to business, what are we talking about? 

The main thing we came up with is the understanding that, ideally, a team should be able to do MMF-based one-piece flow – meaning a WIP limit of one MMF per team.

What does this mean? Let's assume the team is currently working on an MMF. This MMF has some stories in progress, some done, and some identified but not yet started. This is similar to observing a team in the middle of a Scrum sprint: they are working on stories, doing a daily standup, and reviewing each story with their customer/product owner as it materializes. Once all the stories are done (working, tested software – a potentially shippable product), the MMF is also done, and can be reviewed (at the MMF level) and retrospected. Then the team can start planning the next MMF – understanding it and breaking it down into smaller bits that can flow through the team and that the team can swarm on. Once the planning is done, the team can start working on this MMF-driven sprint.
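Just to make the mechanics concrete, here is a hypothetical sketch of that rule in code (the MMF/Story types and the example stories are made up, not taken from any tool): the team's WIP limit is one MMF, and the MMF-level review/retrospective is triggered only once every story in it is done.

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    name: str
    done: bool = False  # "done" = working, tested, potentially shippable

@dataclass
class MMF:
    name: str
    stories: list[Story] = field(default_factory=list)

    def is_done(self) -> bool:
        # The MMF is done only when every story in it is done.
        return all(s.done for s in self.stories)

# One-piece flow at the MMF level: the team works on a single MMF at a time.
current_mmf = MMF("Export to PDF", [Story("Layout engine", done=True),
                                    Story("Font embedding", done=True),
                                    Story("Batch export", done=False)])

if current_mmf.is_done():
    print(f"Review and retrospect '{current_mmf.name}', then plan the next MMF")
else:
    remaining = [s.name for s in current_mmf.stories if not s.done]
    print(f"Keep swarming on '{current_mmf.name}': {remaining} still in progress")
```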

So far so good.

Now a few questions come up:

* What is the length of this MMF-based sprint? Is it pre-determined using Estimation/Commitment? Does it emerge as the stories are completed and progress is made? 

* What happens if the team cannot effectively swarm on one MMF? Do we introduce another MMF? What happens to the Cadence then? Do we do the Cadence as one big team, or break into teamlets that do the cadence for their MMF separately?

* How do we deal with the issue of the first/last days of an MMF sprint? How do we use the Slack there, and how do we avoid overloading Testing at the end? Is it enough to say "break work into smaller stories, use much more automation" and the other typical recommendations for avoiding ScrumFall? Since we are more in the Kanban evolution camp, shouldn't we allow a better way to deal with this than requiring a fast revolution?

* What kind of visibility/metrics can we align with this?

I will not answer all of these questions today, but I'll try to follow up in other posts soon. In the meantime we're also trying these things out in actual client deployments, which, in true agile fashion, will probably affect our thinking… 🙂

To sum up for today, it seems like what we are talking about is improving the ability of the MMF to act as a boundary object between a team and the bigger project/release – emphasizing it as the main object for discussion, tracking, reviewing, delivering to the downstream parts of the workflow, and ideally releasing. This idea is sort of a mashup of ideas raised by others, and I suspect some Kanban practitioners are already doing this at some level, but it's not yet fully documented or supported by tooling at this point (as the discussion about feature-level burnups/CFDs/parking lots in kanbandev highlights).

It might also warrant a catchy name, which is not one of my strengths.

Screatureban? (Scrum-Feature-Kanban…)

FeatureBan? 

So what do you think? Are you actually doing this today and able to share your experience? Do you think it's a good/bad idea?