Category Archives: Lean Startup

How to REALLY Add Learning to your Agile Marketing Flow

Estimated Reading Time: 4 minutes

Adding learning/feedback to the value stream is a core aspect of Agile “inspect and adapt” – especially on Agile Marketing teams.

Whenever I work with a team, program, value stream, or any other group of people trying to establish a leaner, more agile way of working that inspects and adapts along the way, I mention that feedback/learning should be part of their flow. (Eric Ries tells a good story about this in The Lean Startup, by the way.)

I’m working more and more with “Agile Marketing” teams and organizations, where this deserves even more emphasis than usual. Continue reading

Lean Startup Blues

Estimated Reading Time: 4 minutes

Yesterday I had an interesting session with a company that has been a big believer in Lean Startup and the MVP concept but has lately reduced the amount of MVP-driven product development dramatically. Not because there is less uncertainty to iterate through, but due to something we can call the “Lean Startup Blues” – a loss of faith that the MVP process works at scale over the long term.

The main challenge is that building MVPs is not enough. Some organizations don’t really close the learning loop as often as they would like. They build MVPs, but they don’t invest enough effort in learning whether each MVP is headed in the right direction. What typically happens is that they move on to another idea once the MVP goes to production.

Iterative development is dangerous when there isn’t a steady captain at the helm. The ability to shift direction brings back the classic dysfunction where the first phase of many ideas is abandoned or moved into maintenance mode, where it is hard to have a serious impact. It is actually the fear of this that drives many Product Managers and business stakeholders to specify everything they think they will need up front: they know they have a limited window in which to get it, and after that window closes, chances are their project will be abandoned.

Several difficult changes need to happen to overcome this problem. First, people need to trust that MVPs will lead to learning and to a wise decision between growing an MVP into a real feature/product and killing it. Then, people need to trust that the right prioritization decisions will be made in the future, and that if it is really a good idea to continue investing in an idea, this will happen. To make it harder, people need to think holistically – accepting the chance that their ideas will not turn out to be winners and will therefore be abandoned – rather than trying to force implementation of their ideas by going for a full implementation instead of an MVP.

Using a Kanban board that reflects reality can at least make the extent of this dysfunction visible, by showing the size of features and the number of MVPs that are left as-is without leveraging the learning to drive further development and business value in that direction. Steps like “Deployed”, “Validate/Learn”, and “Pursue” can help you see what is really going on. Having WIP limits on these stages will force a discussion about what to kill and what to pursue, and end the purgatory of the miserable MMFs.
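To make this concrete, here is a minimal sketch of such a board in Python. The stage names come from this post; the WIP limits, the data model, and the pull rule are illustrative assumptions, not a prescription.

```python
from dataclasses import dataclass, field

@dataclass
class Stage:
    """One column on the MVP portfolio board."""
    name: str
    wip_limit: int                      # hypothetical limits, tune to your context
    items: list = field(default_factory=list)

# Stage names from the post; "Validate/Learn" gets a tight limit on purpose,
# so items cannot pile up there without a kill/pursue discussion.
board = [
    Stage("In Progress",    wip_limit=4),
    Stage("Deployed",       wip_limit=3),
    Stage("Validate/Learn", wip_limit=2),
    Stage("Pursue",         wip_limit=2),
]

def pull(stage: Stage, item: str) -> bool:
    """Pull an item into a stage only if its WIP limit allows it."""
    if len(stage.items) < stage.wip_limit:
        stage.items.append(item)
        return True
    return False    # blocked: first decide the fate of something already in the stage
```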

Another challenge is actually learning whether the MVP hints at gold or not. Data-driven decision making is the holy grail, but it is very hard to get there. Some organizations give up on it and don’t do any learning/feedback at all. In my opinion, soft learning is better than no learning process at all.

The problem with MVP purgatory is not just that we lose the business benefits. Since an MVP is typically an experiment, it also typically leaves the product in a state of debt. Both technical debt – because we developed an MVP, we allowed some shortcuts in architecture/automation/clean code/etc. – and product debt – we added a feature that covers a certain set of use cases, but even if we did the MVP right, it is not whole. Far from it. It was actually painful to ship it in the first place, but since we were looking for the real minimum that enables learning, we did it. The assumption was that we would either follow through or kill it. Letting the MVP stay in the product as-is leads to usability issues and maintenance issues, and makes future development more difficult.

Leave enough MVPs in purgatory and people will simply stop using the MVP approach. They will prefer to skip the fast learning loop and get back to the familiar ground of developing something cleaner and more usable – even if it is not useful.

The way out of this mess is a clear policy that says you either kill it – really kill it – or you clean it – really clean it. Please decide. Limit the number of ideas that are in progress without a clear decision, and don’t allow a new MVP to start unless there is room for it. Make room by selecting another idea and killing it, cleaning it, or pivoting it. This is “Stop Starting, Start Finishing” applied at the MVP portfolio level.
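As a rough sketch of what such a policy could look like in code (the `undecided` state and the portfolio limit below are hypothetical illustrations):

```python
UNDECIDED_LIMIT = 3   # hypothetical cap on MVPs sitting without a kill/clean decision

def can_start_new_mvp(portfolio: list) -> bool:
    """Stop starting, start finishing: no new MVP unless there is room."""
    undecided = [idea for idea in portfolio if idea["state"] == "undecided"]
    return len(undecided) < UNDECIDED_LIMIT

def resolve(idea: dict, decision: str) -> None:
    """Force an explicit decision: kill it (really), clean it (really), or pivot."""
    if decision not in ("kill", "clean", "pivot"):
        raise ValueError(f"unknown decision: {decision!r}")
    idea["state"] = decision
```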

By the way, an MVP is just one option for how to approach building something. It is a good option when there is a lot of business/requirements uncertainty. If not, just build minimal slices of functionality that are marketable (also called Minimum Marketable Features, or MMFs) and release them. Don’t expect to do much learning, and don’t skimp on technical excellence while building them. With MVPs you keep the option to kill or grow the idea later, but that option has a price: refactoring to clean, or killing a feature, doesn’t come for free. If there isn’t a lot of uncertainty, that price might not be worth it. So define your policies for how to build ideas/features/products with different risk profiles. Assign the right profile to each idea, and feel free to move ideas between profiles along the way as more information becomes available. Make the team building the idea aware of its profile and explain the context; this will help them make the right tradeoffs along the way.
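One hypothetical way to encode such policies is a simple profile-to-approach mapping; the profile names and the sample idea below are made up for illustration.

```python
# Illustrative mapping from an idea's risk profile to the agreed build approach.
APPROACH_BY_PROFILE = {
    "high-uncertainty": "MVP",   # build to learn; accept debt, then kill or clean
    "low-uncertainty":  "MMF",   # build to earn; full technical excellence, little learning
}

def build_approach(idea: dict) -> str:
    """Look up the build approach for an idea's current risk profile."""
    return APPROACH_BY_PROFILE[idea["profile"]]

# Hypothetical idea; re-assign the profile as more information becomes available.
idea = {"name": "new feature area", "profile": "high-uncertainty"}
print(build_approach(idea))   # -> MVP
```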

Another interesting thing to look at is the “Killed Ideas/RIP” bucket. You would expect to see some ideas end up there; that is a good thing. But you should also expect the cycle time until ideas reach that bucket to grow shorter and shorter, meaning a faster learning loop. (Just be careful to avoid setting it as a target; otherwise people might avoid killing an item just to keep the time-to-kill metric from growing…)
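As an illustration, the metric could be as simple as the median cycle time from starting an MVP to the kill decision; the field names and sample dates below are made up.

```python
from datetime import date
from statistics import median

# Made-up sample data: when each killed idea's MVP started and when it was killed.
killed_ideas = [
    {"started": date(2012, 1, 9), "killed": date(2012, 4, 2)},
    {"started": date(2012, 2, 6), "killed": date(2012, 3, 12)},
    {"started": date(2012, 3, 5), "killed": date(2012, 3, 26)},
]

def median_time_to_kill(ideas):
    """Median cycle time, in days, from starting an MVP to the kill decision."""
    return median((idea["killed"] - idea["started"]).days for idea in ideas)

# Watch the trend (it should shrink as the learning loop speeds up), but don't
# set it as a target: that invites keeping items alive to game the metric.
print(median_time_to_kill(killed_ideas))
```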

To sum up, there is nothing wrong with the Lean Startup MVP concept. It is actually a great idea – done right. And doing it right requires discipline and process maturity: attention to the end-to-end flow from idea through validated learning all the way to the kill/grow decision. Enterprise Kanban, looking at the portfolio of MVPs/MVFs, is a great way to grow this discipline and maturity. It also requires strong Product Managers who can define effective MVPs, guide the learning process, and have the courage to hit the kill switch – or to stick with something even when there is a lot of pressure to “start the new thing”.

 

Explaining MVPs, MVFs, MMFs via the Lean/Agile Requirements Dinosaur

Estimated Reading Time: 3 minutes

In the last few weeks I’ve been using a new visualization that people find useful for understanding the relationship between the various Lean/Agile requirement containers. Some people call the full model a dinosaur. Others are reminded of the snake that ate an elephant from “The Little Prince”. (I’m sure there is a good connection to elephant carpaccio somewhere in here…)

 

[Image: IMG_0449]

 

The first step is to understand that a new product has a unique value proposition hypothesis. This is the area where your product/service will be unique.
[Image: IMG_0450]

The next step is creating a Minimum Viable Product (MVP) to test your hypothesis. It focuses on your unique value proposition but typically also provides a few “table stakes” features, just to make sure it is “Viable” as a product.

 

[Image: IMG_0451]

Your MVP is also a hypothesis: it might or might not be good enough to find Product Market Fit. The case where each potential customer you engage tells you “This is great, but in order for me to use it I need X”, with a different X for each customer/user, is shown below. It means you have not reached Product Market Fit yet.

 
[Image: IMG_0452]

If, on the other hand, you see more and more answers pointing to the SAME X, it makes sense to revise your Customer/Problem/Solution hypothesis.

[Image: IMG_0453]

 

You are essentially executing a pivot: building MVP2, focused on the new hypothesis, based on the Customer Development learning generated by the previous MVP.

[Image: IMG_0454]

 

Let’s say MVP2 is successful and you are seeing real traction with early adopters. You want to increase growth, and you are looking for deeper penetration of your early adopters as well as for new clients, some of them beyond the early-adopter crowd. Based on the feedback you have been collecting and on your product management research, you have a couple of areas that can potentially bring this growth. Some of them, by the way, extend your unique value proposition, and some make your current product more robust.

[Image: IMG_0455]

In areas with a strong indication of value, you might go straight for Minimum Marketable Features (MMFs): finding the minimum piece that can start bringing in growth. The aim of the MMF is to bring in value. It assumes high certainty that there is value in this area and that we know what the product needs to be to provide this value. The reason to break a big feature into smaller MMFs is mainly time to market, plus the ability to bring in value in many areas, always keeping the option to move to another area and provide value there rather than focusing for too long on a single direction. An indication that you are working on MMFs is that when one ships, you feel comfortable starting the next MMF in that area. If, on the other hand, you want to wait and see whether your first MMF sticks…

[Image: IMG_0456]

…then you are back in hypothesis land. But now your hypothesis is centered on a feature rather than on your product. You have an area with high potential but also high uncertainty. The way to deal with it is to build a “pioneering” feature – the Minimum Viable Feature (MVF): the minimum feature that is still viable for real use and for learning from real customers.

[Image: IMG_0457]

If you learn that the MVF has hit gold, you can develop more MMFs in that area to take advantage (if that makes sense). If not, you can pivot to another approach to that feature area or, at some point, look for an alternative growth path. Essentially, the MVF is a mini-me version of the MVP.

[Image: IMG_0458]

There you have it – the full model. Essentially, my point is that you grow a product in uncertain markets by attempting various MVPs. Then, once you achieve Product Market Fit, you mix MMFs and MVFs depending on the level of business/requirements uncertainty in the areas you are focusing on.

While MVPs/MMFs/MVFs are atomic from a business perspective (you cannot deploy and learn from anything smaller), they might be quite big from an implementation perspective.

This is where dinosaur carpaccio comes in: slicing each of those pieces into smaller slices aimed at reducing execution/technology risk (typically these are called User Stories). Some of those smaller slices might have tangible business value; others might not. It is more important that they provide early implementation-decision feedback along the way.

Feel free to use this model. Let me know what you think about it and how I can improve it!

Impressions from the Lean Software and Systems Conference 2012, Boston

Estimated Reading Time: 4 minutes

As I prepare to check out of the Boston Seaport Hotel, the venue of this year’s LSSC conference (which did a magnificent job hosting us!), here are my highlights and impressions from the conference.

The buzzword of the conference seems to have been “Lean Startup”. It permeated many talks (including mine) in two main ways. One was the classic product/customer-focused Lean Startup as an alternative narrative to Lean/Agile. The other was taking the idea of fast cycles of Validated Learning and adopting it as a narrative for the approach to change. This came up in Jeff Anderson’s ambitious and thought-provoking, even provocative, talk about the transition participation engine, as well as in my attempt to “fix” continuous improvement.

So what is Lean Startup for Change (LS4CHG)?

Estimated Reading Time: 4 minutes

Let’s try to summarize, real quick, what we came up with in a short lean coffee session about this today at LSSC12.

Lean Startup for Change is applying the concepts of Lean Startup to change programs. It comes in really handy when you have a complex environment where you don’t know exactly what will work for the organization’s context (market) and you want to test rigorously until you come up with an approach that can stick and grow effectively, in a way that creates a sustainable change program.

The flow can look something like this:

  • You identify a problem/dissatisfaction at the capabilities level of the organization. This can be: we are not able to meet demand, we need to scale without increasing overhead, we need to improve quality, etc.
  • You create a change model (maybe using a kind of canvas – riffing on the Business Model Canvas or Lean Canvas, or something closer to an A3, who knows – we need to find out what works best…).
  • You identify risky assumptions/hypotheses in this model and design experiments to test them. The risks can be value risks – whether the change we are introducing will actually bring value, ASSUMING it happens – or growth risks – we know it would work, but we don’t know IF it will happen/stick/grow. This is similar to the value/growth hypotheses and metrics in Lean Startup.
  • Sometimes the value hypothesis will be hard to test if there isn’t enough information/history about the system. In that case, a growth engine that brings the system to a level where it generates enough information (e.g., by having enough teams use Kanban to manage their work) might be the first priority, as a MEANS to enable the value experiment.
  • Starting from that hypothesis, design the Minimum Viable Change (MVC) that will test your assumption about how things will work. If you are aiming at growth, your MVC needs to focus on how you will grow the system, measuring things like teams infecting each other (virality), how many infected teams actually get interested (conversion), how many start using it (activation), and how many keep using it and infecting others (retention) – see the sketch of these funnel metrics after this list. You run the MVC and measure the results. (How? A separate and VERY tough question, but it comes down to looking at behavior in some way. It needs to be done in a reasonable and humane way, though – e.g., counting how many teams contacted the local coach about a practice they saw elsewhere, without being forced to start.) You then start to tune the engine: play with things that might affect growth, e.g. the way you train, the way you coach, the way you present things, who’s in charge, etc. When you see you have a working growth engine, you can stop experimenting and pursue the winning approach you found. If your experiments are not really getting anywhere, consider pivoting to another kind of growth engine altogether, or even to another kind of change engine.
  • When your growth engine is working well and you have a good population of people using the system, you can start looking at the value hypothesis. You now have enough of a sandbox to test your assumptions about what will bring value. You design MVCs aimed at bringing in value – improved quality, faster cycles, improved customer satisfaction, etc. You execute an MVC and then measure its effect. There is a challenge here, because some of these take time to show results (e.g., refactoring). Work really hard at defining the MINIMUM viable change. All these MVCs do not test whether the “change approach” is possible, just whether it is a good fit for the organization’s context (so basically not solution-space uncertainty but problem-space uncertainty – the natural habitat of MVPs…).
  • After tuning the MVC until you are meeting your value hypothesis and have a strong change engine, you can pursue it by deploying it elsewhere, and move on to testing new hypotheses that can improve performance further.
  • I feel quite comfortable thinking about this as an A3 for each MVC. Maybe we need to take some things from the Lean Canvas into this A3, but most of the stuff is already there, IMO.
  • Bottom line: you tested uncertainty in the problem space – whether we are indeed solving the right problem for the organization – as well as in the growth space – whether we are running the change in the most effective way.
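To make the funnel from the growth-engine step above a bit more tangible, here is a rough sketch. Every field name below is an assumption about what you might track, not a real measurement system, and the humane-measurement caveat from the list still applies.

```python
def funnel_metrics(teams):
    """Rough funnel ratios over teams touched by a Minimum Viable Change."""
    exposed    = [t for t in teams if t["heard_about_it"]]
    interested = [t for t in exposed if t["contacted_coach"]]     # conversion
    active     = [t for t in interested if t["started_using"]]    # activation
    retained   = [t for t in active if t["still_using_90d"]]      # retention
    infected   = sum(t["teams_infected"] for t in retained)       # virality, raw count

    def ratio(part, whole):
        return len(part) / max(len(whole), 1)

    return {
        "conversion": ratio(interested, exposed),
        "activation": ratio(active, interested),
        "retention":  ratio(retained, active),
        "virality":   infected / max(len(retained), 1),  # avg teams infected per retained team
    }
```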

One comment from the discussion was that several of us Lean/Kanban practitioners feel we don’t really have to validate the assumption that Kanban will provide value to the organization; we are quite sure it can serve as the diagnostic that stimulates improvement. What we are not sure about is how best to take on Kanban in an organization. That is why the growth engine/hypothesis is so important, and why it might seem we are focusing on a mechanical implementation rather than a value-driven one. We see Kanban as one way to enable the value-seeking MVC, via its fast method of installing a system that visualizes what is going on and invites potentially value-improving experiments.

Just initial thoughts from a very long day at LSSC12. We are keeping the discussion alive with the hashtag #LC4Chg on Twitter as well as on kanbandev. Join us and let us know what you think about this.