FAQ
Straight answers to common leadership questions
Practical answers to broad, cross-context questions about delivery friction, portfolio choices, and AI execution.
Common questions
Broad questions leaders ask me in AMAs, webinars, and conference Q&A, with direct answers and next steps.
We have been doing Agile for years, but delivery is still slow and unpredictable. Where should leadership look first?
I usually start by looking above the team level: portfolio load, decision flow, and how priorities change week to week. Most of the time, teams are not the core bottleneck. The real issue is too many parallel initiatives and unclear trade-offs. If you want more context, see why transformations stall and my Fixing Your Agility approach.
We are not a pure software company. Are Product-oriented ways of working still relevant for us?
Yes. I work with software-heavy, product-and-service, and even operations-heavy environments. Product-oriented ways of working are not about being a software company. They are about improving how you choose, run, and learn from strategic bets. A good place to start is When and Why You Need a Product Operating Model.
We added people, but throughput barely moved. Why does this keep happening?
Because adding capacity into a constrained system usually increases coordination tax before it increases outcomes. I typically focus first on reducing active WIP, shrinking dependency knots, and improving decision cadence. For practical guidance, see WIP Limits in Scrum with Kanban.
What does a Product Operating Model look like in practice, not just in deck language?
I look at it as the way strategy, product choices, funding, governance, and delivery behavior connect in day-to-day reality. In practical terms: clearer outcome ownership, better focus, healthier decision quality, and less coordination drag. Start with this overview and then explore the Solutions Page.
We have multiple AI pilots and demos, but business impact is weak. What is usually missing?
In my experience, teams over-index on experiments and under-index on ownership, sequencing, governance, and adoption signals. I treat AI as a product and portfolio challenge, not a demo challenge. See AI Transformation: Strategy to Execution and AI is not the problem, your operating system is.
If our pain spans strategy, product, and portfolio at the same time, where do you usually start?
I typically start with transparency. My go-to is creating a portfolio-level Kanban visualization focused on strategic initiatives. That quickly surfaces load, flow, and evidence gaps and enables better conversations without a heavy initial change load. For deeper context, see Actively Managing Portfolio Flow, the Solutions Page, and the Glossary.
Our OKRs became reporting theater. How would you reset them?
I reset OKRs by turning them back into steering logic for real work and real trade-offs, not quarterly reporting artifacts. I focus on outcome clarity, portfolio focus, and ownership. Start with Fix Your OKRs and then see my OKR support options.
We implemented SAFe but the benefits have been disappointing. Is the framework the problem?
Rarely. The framework is usually not the core problem — implementation depth and context-fit are. The most common failure modes are: SAFe adopted as ceremony without changing how decisions flow, PI Planning running but portfolio prioritization unchanged above it, and ART structures organized around functions rather than value streams. See Deconstructing SAFe Criticism for an honest breakdown and SAFe advisory options.
How do we know if an agile or operating model change is actually working?
I use a small set of leading indicators: cycle time trend for strategic initiatives, number of actively running work items at the portfolio level, and decision-to-delivery lag. Outcome metrics (revenue, retention, NPS) are the lagging proof, but these flow metrics show earlier whether the system is improving. See Actively Managing Portfolio Flow for the measurement approach I typically use.
Should we start with a big framework rollout or smaller targeted experiments?
Almost always start smaller. A targeted diagnostic sprint on your most visible constraint (portfolio load, decision speed, or team focus) will reveal whether a broader framework is needed and what to adapt, while building the leadership muscle to sustain it. Big framework rollouts work best when leadership alignment and organizational readiness are already in place, and a smaller experiment is how you build both. The Speed and Impact Breakthrough is designed precisely for this starting point.
Where should we go deeper before deciding whether to work together?
I recommend this path: browse Insights, review the Solutions Page, use the Glossary to align language, then check Work With Me and Results. If you want a practical push without a full engagement, I am happy to have an initial clarity conversation and give you pointers for making progress on your own: Book a Clarity Conversation.
Looking for a context-specific FAQ?
I keep page-specific questions on each relevant service page so answers stay grounded in that context.
Have a question you'd like my perspective on?
Send me your question and I will use it in future AMA content and FAQ updates.
Need context-specific guidance?
Bring the situation that feels most stuck. We will map the constraints and identify the highest-leverage next move.