Category: Plan

  • North Star Metrics in the Age of AI

    Grammarly had a problem. Their North Star Metric was “suggestions accepted” — a classic engagement metric that went up and to the right. Then they realized something uncomfortable: their best users had fewer suggestions remaining, not more accepted. The metric that made the dashboard look good was measuring the wrong thing.

    This is not just a Grammarly problem. Every product team that adds AI capabilities runs into the same trap. AI makes users faster, which means they spend less time in your product, which means your engagement metrics go down — even as the value you deliver goes up. I call this The Efficiency Paradox, and it breaks the fundamental assumption that North Star Metrics were built on.

    What Is a North Star Metric?

    A North Star Metric (NSM) is the single number that captures the core value your product delivers to customers. Sean Ellis, who coined the term, was clear: it is not a revenue metric. It is a value metric. Revenue is a lagging indicator of value delivered.

    Amplitude’s North Star Framework organizes products into three “games” — attention (time spent), transaction (volume completed), and productivity (tasks accomplished). Spotify plays the attention game: time spent listening. Airbnb plays the transaction game: nights booked. Slack discovered that teams crossing 2,000 messages retained at 93% — a value threshold, not a vanity number.

    The framework works well for traditional products. For AI products, it falls apart.

    The Efficiency Paradox

    Here is the core tension. Traditional NSMs assume that more engagement equals more value. But AI products are designed to reduce effort. When your AI feature works perfectly, the user finishes faster and leaves sooner.

    The data is striking. Irving Wladawsky-Berger’s research on the AI productivity paradox found that developers using AI complete 21% more tasks and merge 98% more pull requests — but PR review time increases 91%. The bottleneck does not disappear. It shifts.

    Even more surprising: developers take 19% longer to complete issues with AI, yet believe it sped them up by 20%. The perception gap is real, and it means your users will tell you AI is helping even when your metrics say otherwise.

    Elena Verna, who studies AI growth patterns, estimates that 60-70% of traditional growth tactics no longer apply to AI products. Time spent goes down. Daily active users may go down. But value goes up. If your NSM cannot see that distinction, you are optimizing for the wrong thing.

    What Smart Companies Measure Instead

    The companies getting this right have shifted from input metrics to outcome metrics.

    Grammarly pivoted from “suggestions accepted” to “suggestions remaining.” Fewer remaining errors is a better signal than more clicks on the accept button. The shift from measuring your product’s activity to measuring the user’s result is the pattern.

    GitHub Copilot tracks three metrics simultaneously: acceptance rate (27-30%), code retained after 30 days (88%), and PR cycle time (reduced from 9.6 to 2.4 days). No single number captures the value. Notably, acceptance rate is increasingly called a vanity metric — it does not capture thinking assistance, only typing assistance.

    Notion doubled AI feature adoption by bundling it into standard plans at no extra charge. Their metric is adoption rate (50%+), not usage volume. They bet that broad adoption creates stickiness.

    The Three-Layer Metric Stack

    Here is a framework I have found useful for thinking about AI product metrics. Instead of one North Star, AI products need a three-layer stack:

    Business layer. Revenue impact, cost reduction, customer lifetime value. This is what the board cares about. Example: “Reduced support costs by 20%.”

    Product layer. Adoption, activation, retention — but measured as outcomes, not engagement. Example: “50% of users complete their task in one session” instead of “average session length.”

    Model layer. Accuracy, latency, hallucination rate, and trust. This is unique to AI products. LangChain introduced a metric called CAIR — Confidence in AI Results — that measures user trust, not model accuracy. The distinction matters: accuracy is table stakes, but confidence determines whether users actually rely on the output.
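    To make the stack concrete, here is a minimal sketch of how a team might encode the three layers and check each metric against a target. The product, metric names, and values are hypothetical, chosen only to illustrate the structure (note that the model-layer metric is one where lower is better):

```python
# A minimal sketch of the three-layer metric stack.
# Product, metric names, and values are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    value: float
    target: float
    higher_is_better: bool = True

    def on_track(self) -> bool:
        if self.higher_is_better:
            return self.value >= self.target
        return self.value <= self.target

metric_stack = {
    "business": Metric("support_cost_reduction_pct", value=18.0, target=20.0),
    "product": Metric("tasks_completed_in_one_session_pct", value=54.0, target=50.0),
    "model": Metric("hallucination_rate_pct", value=1.2, target=2.0,
                    higher_is_better=False),
}

for layer, m in metric_stack.items():
    status = "on track" if m.on_track() else "behind"
    print(f"{layer:8s} {m.name}: {m.value} (target {m.target}) -> {status}")
```

    One dashboard row per layer is the point: the board sees the business line, the PM watches the product line, and the ML team owns the model line.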

    An Example: ProjectFlow Adds AI

    ProjectFlow, a fictional project management tool, adds an AI feature that auto-generates weekly status updates from team activity. Their existing NSM is “weekly active projects.”

    After launching the AI feature, weekly active projects stays flat. But something interesting happens: teams using the AI feature log in less frequently. The PM panics — engagement is down.

    Then they look at outcomes. Teams using AI status updates are completing projects 15% faster. Project leads are spending zero minutes writing status reports (down from 45 minutes per week). And satisfaction scores for the status update feature jumped from 3.2 to 4.6 out of 5.

    The old NSM could not see any of this. ProjectFlow shifts to a three-layer approach: business (projects completed per team per quarter), product (time-to-first-status-update under 5 minutes), and model (status accuracy rated by project leads). The dashboard gets simpler, but the signal gets clearer.

    Common Metric Mistakes for AI Products

    Measuring acceptance rate as your NSM. Acceptance rate tells you how often users click “accept” on an AI suggestion. It does not tell you whether the output was valuable, whether it saved time, or whether the user trusted it enough to keep it. Use retention of AI-generated output instead.
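    Retention of AI-generated output is straightforward to compute once you log what was accepted and what survived. A sketch, with hypothetical per-suggestion records of characters accepted versus characters still present 30 days later:

```python
# Sketch: measure retention of AI-generated output instead of acceptance rate.
# Records are hypothetical: (chars_accepted_from_ai, chars_still_present_30d).
suggestions = [
    (120, 110), (80, 0), (200, 180), (50, 45), (300, 240),
]

accepted = sum(a for a, _ in suggestions)
retained = sum(r for _, r in suggestions)
retention_rate = retained / accepted  # fraction of accepted output that survived

print(f"output retained after 30 days: {retention_rate:.0%}")
```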

    Treating daily active users (DAU) as a health metric. For AI productivity tools, fewer visits can mean the product is working. Track task completion per session instead of sessions per week.

    Ignoring the trust layer. One-third of generative AI users encountered incorrect or misleading answers in 2025. If you do not measure hallucination rate and user corrections, you are flying blind on the dimension that will determine long-term retention.

    How to Use With AI

    AI can help you find the right metrics faster than manual analysis.

    Identify leading indicators. Export your retention data alongside AI feature usage. Ask Claude: “Which behaviors in the first 7 days most strongly predict 30-day retention? Separate users who engaged with AI features from those who did not.” This surfaces candidate NSMs you would not find in a dashboard.

    Stress-test your current NSM. Describe your product and current NSM to an AI and ask: “If our AI feature works perfectly and users accomplish their goal in half the time, what happens to this metric? What would be a better metric that captures value delivered?” This is the Efficiency Paradox test.

    Build a metric scorecard. Ask an AI to generate a three-layer metric stack for your product type, with one metric per layer. Then validate each against the test: “If this metric changed 20% tomorrow, would we do something different?”

    The guardrail: AI can find correlations in your data and suggest metric frameworks. It cannot tell you which metrics align with your strategy. Choosing your NSM is a Plan-phase decision that shapes every team’s priorities. That judgment is yours.

    Why This Matters

    Choosing the wrong NSM for an AI product is not a measurement error. It is a strategy error. When Grammarly measured “suggestions accepted,” every team optimized for more suggestions. When they switched to “suggestions remaining,” every team optimized for better writing. Same company, same product, different metric, fundamentally different incentives.

    In the 5Ps framework, the NSM sits in the Plan phase alongside product vision and strategy. It is the number that translates your vision into daily decisions. Get it right and teams self-organize around value. Get it wrong and you spend your time chasing engagement while your users quietly get less from your product.

    The Efficiency Paradox is not going away. As AI gets better, the gap between engagement and value will widen. The PMs who recognize this early — and rebuild their metrics accordingly — will build products that last.

    What do you think? I would love to hear how you measure AI product value. Comments are welcome.

  • Technical Roadmap Planning

    Most product management advice assumes you ship every two weeks. Agile sprints. Continuous deployment. Ship, measure, iterate. And for many software products, that advice is excellent.

    But what if your development cycle is eighteen months? What if your “sprint” involves designing hardware, building tooling, running physical tests, and waiting for regulatory approval? What if changing direction mid-cycle isn’t a matter of rewriting code but of retooling a factory?

    I spent several years working on products where the roadmap stretched to three years and beyond. The experience taught me that the same strategic principles apply — you still need a clear vision and a strategy pyramid — but the roadmapping mechanics are fundamentally different. You can’t “fail fast” when each experiment costs months and significant capital.

    What Is a Technical Roadmap?

    A technical roadmap is a plan that sequences architectural and engineering investments over an extended time horizon — typically one to five years. Unlike a feature roadmap, which lists what users will see, a technical roadmap describes what the team will build beneath the surface to enable future capabilities.

    Think of it as the difference between planning a road trip (feature roadmap) and building the highway (technical roadmap). The highway needs to be designed before anyone can drive on it, and once you pour the concrete, changing the route is expensive.

    The Core Challenge: Uncertainty Over Long Horizons

    The fundamental tension in technical roadmap planning is that you are making binding decisions with incomplete information. In a two-week sprint, you can course-correct cheaply. In an eighteen-month development cycle, a wrong architectural bet can waste years of work.

    This means technical roadmapping isn’t really about predicting the future. It is about managing the cost of being wrong.

    The best technical roadmaps I have seen don’t try to be prophetic. They try to be resilient. They build in options — architectural choices that keep multiple futures open.

    The Components

    1. Anchor Investments

    These are the bets you are making with high confidence. They align directly with your strategy and are unlikely to change regardless of how the market evolves. Anchors go on the roadmap first and get the most resources.

    2. Option Investments

    These are smaller, deliberate investments that create future flexibility without committing you to a specific path. They cost something now but give you the right — not the obligation — to move in a direction later.

    A modular design is an option investment. It costs more upfront than a monolithic design, but it lets you adapt without redesigning the entire product.

    3. Sequencing Gates

    In long-cycle products, you need sequencing gates — decision points where you evaluate new information before committing to the next phase. The gate isn’t “did we hit a deadline?” It is “do we still believe this bet is correct given what we now know?”

    A Concrete Example: SynthLabs

    Imagine a company called “SynthLabs” that builds industrial automation hardware. Their product is a robotic arm with an eighteen-month development cycle from concept to production.

    SynthLabs faces a classic roadmap tension. Their current product serves automotive manufacturers. Sales wants to expand into food processing. But the two industries have different requirements: automotive needs precision and speed; food processing needs washdown compliance and gentler handling.

    Anchor Investments (High Confidence): Next-generation motor controller (needed regardless of industry). Safety certification for current platform. Core firmware upgrade.

    Option Investments (Creating Flexibility): Modular end-effector interface — a standardized mounting system that lets them swap grippers without redesigning the arm. Environmental sealing research — a small team investigates washdown-rated enclosures. Not a full commitment to food processing, but enough to make a credible proposal.

    Sequencing Gates: Gate 1 (Month 6): Review food processing market data. Gate 2 (Month 12): Prototype of modular interface tested. Gate 3 (Month 15): If food processing proceeds, commit to specialized certification.

    SynthLabs doesn’t commit to the food processing market on day one. They invest in optionality so that when Gate 1 arrives, they can decide with six months of data instead of guessing.

    The Platform Debt Trade-Off

    Long-cycle products accumulate “platform debt” — architectural choices that were right when made but now constrain future options. Every technical roadmap needs to budget time for paying down this debt.

    The instinct is to defer platform work in favor of customer-visible features. For one cycle, you can get away with it. But the compounding effect is vicious. Skip a platform investment in Year 1, and by Year 3, every new feature takes twice as long.

    The best approach I have seen is to reserve 20 to 30 percent of engineering capacity for platform work. This connects to the PRD process: every major architectural investment should have its own brief explaining the rationale and alternatives.

    Why This Matters

    Technical roadmap planning matters because the cost of getting it wrong is measured in years, not sprints. A bad feature in a SaaS app can be rolled back next week. A bad architectural bet in a long-cycle product means you live with it — or scrap months of work.

    But the discipline also applies to software teams building long-lived platforms. If you are building infrastructure that other products depend on, you face the same long time horizons and binding commitments.

    How to Use With AI

    AI is useful for the analytical work that supports technical roadmapping — particularly dependency analysis and scenario planning.

    1. Dependency Mapping

    Technical roadmaps have complex interdependencies. Paste your roadmap items and ask AI to identify hidden dependencies.

    Prompt: “Here are 15 items on our technical roadmap with estimated timelines. Identify any dependency chains where Item X must complete before Item Y can start. Flag items scheduled in parallel that appear to have dependencies.”
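    The mechanical part of this check is simple enough to run yourself. A sketch that flags roadmap items scheduled in parallel despite a dependency between them — item names, dates, and dependencies are hypothetical, loosely borrowed from the SynthLabs example:

```python
# Sketch: flag roadmap items scheduled in parallel that depend on each other.
# Item names, dates, and dependency pairs are hypothetical.
from datetime import date

# item -> (planned start, planned end)
schedule = {
    "motor_controller_v2": (date(2025, 1, 1), date(2025, 6, 30)),
    "core_firmware_upgrade": (date(2025, 4, 1), date(2025, 9, 30)),
    "safety_certification": (date(2025, 7, 1), date(2025, 12, 31)),
}
# (prerequisite, dependent)
dependencies = [
    ("motor_controller_v2", "core_firmware_upgrade"),
    ("core_firmware_upgrade", "safety_certification"),
]

conflicts = []
for prereq, dependent in dependencies:
    _, p_end = schedule[prereq]
    d_start, _ = schedule[dependent]
    if d_start < p_end:  # dependent starts before its prerequisite finishes
        conflicts.append((prereq, dependent))

for prereq, dependent in conflicts:
    print(f"Conflict: {dependent} starts before {prereq} completes")
```

    AI earns its keep on the harder half of the task: spotting dependencies nobody wrote down in the first place.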

    2. Scenario Planning

    Long-horizon roadmaps benefit from “what if” analysis.

    Prompt: “Here is our roadmap with anchor and option investments. Play out two scenarios: (1) the food processing market grows 30% and we enter, (2) it stays flat and we stay focused. For each, which option investments become anchors, and which become unnecessary?”

    3. Gate Criteria Definition

    When defining sequencing gates, AI can help articulate the criteria.

    Prompt: “We need to decide at Month 6 whether to pursue a new market variant. What data points should we collect before this gate? Suggest 5-7 specific, measurable criteria for a clear go/no-go decision.”

    Guardrail: AI can analyze dependencies and generate scenarios, but the strategic judgment about which bets to make belongs to humans who understand the market, the team, and the competitive dynamics.

    Conclusion

    Technical roadmap planning is a different discipline from agile feature prioritization. The time horizons are longer, the commitments are more binding, and the cost of changing direction is higher. But the core idea remains: sequence your investments so that you learn the most before committing the most.

    What do you think? Comments are welcome.

  • PRD Templates

    In my career, I’ve seen two types of PRDs. The first is a 40-page “Requirement Bible” that took three months to write and was obsolete the day it was finished. The second is a vague one-pager that says “Make it pop.” Neither works.

    The trick isn’t to write more; it’s to write the right things for your specific context. A B2C mobile app needs a completely different definition of success than a backend data pipeline.

    In this article, I provide a comprehensive list of PRD sections, and then curate specific templates for four common use cases: B2B Enterprise, B2C Consumer, Data Infrastructure, and AI Infrastructure.

    What is a PRD Template?

    A Product Requirements Document (PRD) template is just a checklist to ensure you haven’t forgotten anything critical. It is the container for your strategy. It aligns stakeholders, communicates vision, and provides a clear roadmap for engineering.

    But remember: The template is not the product. Filling out every section of a template doesn’t guarantee a good product. Use these templates as a starting point, not a straitjacket.


    1. B2B Enterprise (The “Heavy Lifter”)

    Best for: Complex SaaS, Workflow Tools, Regulated Industries

    This format is designed for products where “the buyer is not the user.” It separates the Problem Space (why we are doing this) from the Solution Space (what we are building) to prevent jumping to conclusions.

    Key Characteristic: Heavy emphasis on Business Logic, Permissions, and Integrations.

    Format: The Enterprise PRD

    Overview

    • Document Control: Product name, author, version, status, stakeholders.
    • Executive Summary: High-level overview of purpose and strategic context.
    • Background & Scope: Why now? What is in/out of scope?

    Problem Space

    • Customer Segments: Who are we solving for? (Buyer vs User personas).
    • User Personas: Detailed profiles of the target users (Goals, Frustrations, Behaviors).
    • Problem: Detailed articulation of the top 3 problems, ranked by severity.
    • Impact Analysis: What is the cost of inaction? (Revenue loss, churn risk).
    • Existing Alternatives: How do they hack this together today?

    Solution Space

    • Unique Value Proposition: The “Hook” – why is this different?
    • Solution Features: Top features mapped directly to the problems above.
    • User Stories: Functional requirements in story format (“As a [user], I want…”).
    • UX/Design Requirements: Wireframes, complex flows, state diagrams.
    • Technical Requirements: Security, compliance (SOC2/GDPR), integrations.
    • Unfair Advantage: What makes this defensible?

    Go-to-Market

    • Channels: Sales enablement, partner channels.
    • Revenue Model: Pricing strategy (per seat, per usage).
    • Cost Structure: CAC, implementation costs.

    Execution

    • Key Metrics: Adoption, retention, ACV impact.
    • Project Planning: Milestones, dependencies, risks.

    2. B2C Consumer (The “Growth Engine”)

    Best for: Mobile Apps, Social Networks, D2C Subscriptions

    In consumer products, utility is often less important than psychology and habit formation. Users don’t “have” to use your product; they have to want to. This template focuses less on functional requirements and more on the user journey and growth loops.

    Key Characteristic: Heavy emphasis on User Psychology, Virality, and Experimentation.

    Format: The B2C PRD

    Core Experience

    • Target Audience & Personas: Who is the primary user? What is their archetype?
    • The “Hook”: What is the single trigger that gets a user to try this?
    • Emotional Goal: How should the user feel after using this? (e.g., “Smart,” “Connected,” “Relieved”).
    • User Stories / Job Stories: Key scenarios (“When [situation], I want to [action], so I can [outcome]”).
    • The Core Loop: Trigger → Action → Reward → Investment. (Reference: Nir Eyal’s Hooked).

    Growth & Virality

    • Acquisition Channel: Organic search, paid ads, referral?
    • Viral Mechanism: How does one user bring in the next? (e.g., “Invite to collaborate,” “Share content”).
    • Monetization Moment: Where is the friction introduced? (Paywall, Ad).

    UX & Design (Critical)

    • Visuals: High-fidelity mockups are mandatory here. Text is insufficient.
    • Micro-interactions: Delightful animations or feedback loops.
    • Onboarding Flow: Step-by-step breakdown of TTTV (Time to Target Value).

    Experimentation

    • Hypothesis: “We believe that X will result in Y.”
    • A/B Test Variants: Variant A (Control) vs Variant B.
    • Success Criteria: Specific conversion rates (e.g., “Day 1 Retention > 40%”).
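    Success criteria like these are only meaningful with a significance check behind them. A sketch of a one-sided two-proportion z-test comparing Day 1 retention between variants — the counts are hypothetical:

```python
# Sketch: is Variant B's Day-1 retention significantly better than Variant A's?
# A two-proportion z-test on hypothetical counts.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # one-sided p-value: chance of a z this large if there is no real effect
    return z, 1 - NormalDist().cdf(z)

# 38.0% vs 43.2% Day-1 retention, 1,000 users per variant (hypothetical)
z, p = two_proportion_z(380, 1000, 432, 1000)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```

    The point of writing the hypothesis down first is that it forces you to pick the threshold before you see which variant “won.”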

    3. Data Infrastructure (The “Plumbing”)

    Best for: APIs, Data Pipelines, Platform Migrations

    Here, there are no “users” in the traditional sense. The “user” is another system or a developer. This PRD is technical, precise, and unforgiving. Ambiguity here causes outages.

    Key Characteristic: Heavy emphasis on SLAs, Schemas, and Migration Plans.

    Format: The Data Infra PRD

    Contract & Schema

    • Consumer Personas: Who are the downstream users? (e.g., Data Scientists, Dashboard Owners).
    • Data Dictionary: Exact field names, types, and definitions.
    • API Spec: Endpoints, request/response bodies (OpenAPI/Swagger link).
    • Usage Scenarios / Stories: Key access patterns and query types supported.
    • Data Freshness: Real-time vs. Batch? What is the maximum acceptable lag?

    Service Level Agreements (SLAs)

    • Availability: 99.9% vs 99.99%?
    • Latency: p95 and p99 requirements.
    • Throughput: Events per second (EPS) expectations (Peak vs Average).
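    Latency SLAs are defined on tail percentiles precisely because averages hide the slow requests. A sketch of the p95/p99 check against a hypothetical 200 ms target, using made-up samples:

```python
# Sketch: compute p95/p99 latency from request samples and check an SLA.
# The samples and the 200 ms p95 target are hypothetical.
from statistics import quantiles

latencies_ms = [12, 18, 25, 31, 44, 52, 60, 75, 90, 110,
                130, 150, 170, 190, 220, 240, 260, 300, 410, 650]

# n=100 yields 99 percentile cut points; index 94 is p95, index 98 is p99.
pct = quantiles(latencies_ms, n=100)
p95, p99 = pct[94], pct[98]

print(f"p95 = {p95:.0f} ms, p99 = {p99:.0f} ms")
print("SLA met" if p95 <= 200 else "SLA violated")
```

    Note how a handful of slow outliers blow the tail even when most requests are fast — which is exactly why the PRD should specify p95/p99, not the mean.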

    Consumers & Dependency

    • Downstream Consumers: Who breaks if this changes?
    • Upstream Dependencies: What source systems do we rely on?
    • Versioning Strategy: How do we handle breaking changes?

    Migration Plan

    • Backfill Strategy: How do we move historical data?
    • Cutover Plan: Dual-write period? Hard cutover?
    • Rollback Plan: “Break glass” procedure if data is corrupted.

    4. AI Infrastructure (The “Brain”)

    Best for: LLM Features, Recommendation Engines, Chatbots

    AI products are probabilistic, not deterministic. You cannot write a requirement like “The model must answer correctly 100% of the time.” This template focuses on evaluation and guardrails.

    Key Characteristic: Heavy emphasis on Evaluation (Evals), Context, and Safety.

    Format: The AI Infra PRD

    The Job to Be Done

    • User Personas: Who interacts with the model? (e.g., End User vs Admin).
    • User Intent: What is the user trying to achieve? (e.g., “Summarize text,” “Generate code”).
    • Interaction Stories: Key prompts and expected model behaviors.
    • Model Selection: Build vs Buy? (GPT-4, Claude, Llama, custom fine-tune?). Why?

    Data Strategy

    • Context Window: What data creates the prompt? (RAG strategy).
    • Training/Fine-tuning Data: Source, cleanliness, and bias checks.
    • Feedback Loop: How does user feedback (thumbs up/down) improve the model?

    Evaluation Framework (The most important part)

    • Golden Dataset: The “Test Set” of 50-100 examples we trust.
    • Success Metrics:
      • Quantitative: Latency, Tokens per second, Cost per query.
      • Qualitative: “Helpfulness,” “Factuality” (measured by human review or LLM-as-a-judge).
    • Acceptance Threshold: “Must be better than current baseline on 80% of Golden Set.”
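    The acceptance threshold translates directly into a ship gate in your eval harness. A sketch, with hypothetical per-example quality grades (e.g., from human review or an LLM-as-a-judge) for baseline and candidate on the same golden set:

```python
# Sketch of the acceptance gate: ship the candidate only if it beats the
# baseline on at least 80% of the golden set. Scores are hypothetical
# per-example quality grades on matched golden-set examples.
GOLDEN_SET_WIN_THRESHOLD = 0.80

baseline_scores  = [0.6, 0.7, 0.5, 0.9, 0.4, 0.8, 0.6, 0.7, 0.5, 0.6]
candidate_scores = [0.8, 0.9, 0.7, 0.8, 0.6, 0.9, 0.8, 0.9, 0.7, 0.8]

# Count golden-set examples where the candidate strictly beats the baseline.
wins = sum(c > b for c, b in zip(candidate_scores, baseline_scores))
win_rate = wins / len(baseline_scores)

ship = win_rate >= GOLDEN_SET_WIN_THRESHOLD
print(f"win rate: {win_rate:.0%} -> {'ship' if ship else 'hold'}")
```

    Running this gate on every model or prompt change is what keeps “the new version feels better” from becoming the de facto release criterion.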

    Safety & Guardrails

    • Refusals: What should the model refuse to do?
    • Hallucination Mitigation: How do we ground the answers?
    • Privacy: PII handling and data retention.

    5. PRD Section Buffet (The “Kitchen Sink”)

    This comprehensive format combines all unique sections from every PRD template. Use this as a menu to cherry-pick what you need.

    All Sections

    • Document Control: Product/feature name, author, version, status, last updated, key stakeholders
    • Overview & Context: Executive summary, strategic context, background/history, scope (in/out)
    • Problem Space: Problem definition, user pain points, impact analysis, market gaps, business case
    • Target Users: Personas, use cases, user journey maps, underserved needs
    • Solution: Product vision, value proposition, differentiators, alternatives rejected
    • Features & Requirements: Prioritized feature list (MoSCoW), user stories, acceptance criteria, MVP set
    • Technical Requirements: Architecture, performance specs, security, integration points, API contracts
    • UX/Design Requirements: Wireframes, mockups, user flows, design constraints
    • Go-to-Market: Launch strategy, release phases, channels, support requirements
    • Success Metrics: KPIs, pirate metrics (AARRR), measurement plan
    • Business Model: Revenue streams, pricing, cost structure, unfair advantage
    • Project Planning: Timeline, milestones, resource requirements, dependencies
    • Risk Management: Risk assessment, mitigation strategies, open questions
    • AI Specifics: Model selection, evaluation set, prompt strategy, safety guardrails

    Deep Dive: The Living Document

    A PRD is not a statue; it’s a living organism. The biggest mistake I see is teams treating the PRD as “Done” once development starts.

    The PRD should be the Single Source of Truth (SSOT). When requirements change (and they will), update the PRD. If you make a trade-off decision in a Slack thread, copy it back to the PRD. If the PRD drifts from reality, it becomes useless trash.

    Pro Tip: Add a “Decision Log” section at the bottom of your PRD to track why changes were made during development.


    Why This Matters

    Standardizing your PRD structure (or at least having a thoughtful one) reduces cognitive load.
    1. Speed: Stakeholders know exactly where to look for “Success Metrics” or “Risks.”
    2. Completeness: You won’t wake up 2 days before launch realizing you forgot to define the “Error States.”
    3. Alignment: It forces the hard conversations early, when they are cheap to fix, rather than in code, when they are expensive.


    How to Use With AI

    AI is the world’s best PRD drafter. It removes the “Blank Page Syndrome.” But you must drive it.

    Pro Tip: Store your chosen PRD template as a markdown file (e.g., prd_template.md) in a context/ or docs/ folder within your code repository. This keeps your requirements version-controlled alongside your code and makes it easy for engineers to reference the “Single Source of Truth” without leaving their IDE.

    1. Drafting: Don’t say “Write a PRD.”
      • Prompt: “Act as a Senior Product Manager. I am building a [B2C Fitness App] for [Busy Parents]. Based on the transcript of my user interviews below, draft the ‘Problem Space’ and ‘User Personas’ sections of the PRD. Focus on psychological triggers.”
    2. Critique: Use AI as a hostile stakeholder.
      • Prompt: “Act as a cynical Engineering Lead. Review this ‘Solution’ section for technical feasibility and edge cases. What am I missing? What is too vague?”
    3. Expansion: Use AI to generate edge cases.
      • Prompt: “List 10 potential error states or ‘unhappy paths’ for this user flow that I should account for.”

    Guardrail: Never let AI define your Strategy or Success Metrics. Those require human judgment and accountability.


    Conclusion

    There is no “Perfect PRD.” The best PRD is the one that gets your team to build the right product with the least amount of friction.

    If you are a 3-person startup, a bulleted list in Notion is fine. If you are building a banking platform, you better have that Data Infra template locked down.

    Pick the template that fits your stage, your user, and your risk profile. And then, get to work.

    What templates does your team use? Do you have a specific “AI” section yet? Comments are welcome.

  • Crafting a Product Vision That Inspires Action

    I have sat in dozens of product strategy meetings that started with someone saying, “We need a vision.” Then the room went quiet. Someone suggested a brainstorm. Someone else opened a Google Doc. Two hours later, we had a paragraph of corporate aspirations that could apply to any company in any industry. Nobody was inspired. Nobody changed what they were doing on Monday. And three months later, the team shipped a feature nobody wanted because the vision never told them what to focus on.

    The problem is not that teams lack ambition. The problem is that most product visions are too abstract to change behavior. “We want to be the leading platform for X” tells you nothing about what to build, what to say no to, or why your product matters more than the five alternatives the customer already has.

    A good product vision does three things: it clarifies the destination, it constrains the path, and it energizes the people doing the work. Or put differently: clarity of destination creates speed of execution. In my experience, getting all three right is harder than it sounds, but the payoff is enormous.

    What is a Product Vision?

    A product vision is a clear, compelling description of the future state your product will create for its customers. It is not a mission statement (that is the company’s “why”). It is not a strategy (that is the “how”). The vision is the “where” — the destination you are working toward.

    • Mission answers “Why do we exist?” Example: “Organize the world’s information”
    • Vision answers “Where are we going?” Example: “A computer on every desk and in every home”
    • Strategy answers “How will we get there?” Example: “Win through product-led growth in SMB”

    Think of it this way: if the mission is the reason your company exists, the vision is the picture of what the world looks like when you succeed. It sits right below the mission in the Strategy Pyramid and above the goals and strategy that drive daily work.

    The best product visions share a few qualities. They are concrete enough to guide decisions. They are ambitious enough to inspire effort. And they are short enough to remember. Microsoft’s original vision — “a computer on every desk and in every home” — is the classic example. It was specific. It was bold. And it told every engineer at the company exactly what they were building toward.

    The Components of a Strong Product Vision

    Getting from a vague aspiration to a vision that drives behavior takes work. Here is the process I have found useful.

    1. Start with the Customer Problem

    The vision is not about your product. It is about the change your product creates in your customer’s life. The most common mistake I see is teams writing a vision that describes what the product does (“the leading analytics platform”) instead of what it changes for the customer (“every owner decides in 5 minutes”). Before you write anything, answer: What does the world look like for your customer today, and what will it look like when your product succeeds?

    This is the hardest part. It requires genuine empathy and specificity. “Businesses will be more efficient” is not a vision. “Every small business owner closes their books in 10 minutes, not 10 hours” is.

    2. Define the Future State

    Write a description of the ideal future, 3-5 years out, as if your product has succeeded. Be concrete. Name the change. Describe what the customer is doing differently.

    Amazon uses the “press release from the future” technique, originally described by Ian McAllister. Before building anything, the team writes a mock press release announcing the finished product. This forces you to articulate the customer benefit in plain language before you get lost in technical details.

    3. Make It a Constraint

    A vision is only useful if it helps you say “no.” If your vision is so broad that it justifies every feature request, it is not doing its job. The best visions are specific enough to eliminate options.

    Spotify’s vision centered on “enabling human creativity to reach its full potential.” That guided them to build tools for podcast creators and musicians. It also guided them away from becoming a general social network, even though they had the user base for it.

    4. Choose a North Star Metric

    A North Star metric is the single number that best captures the core value your product delivers to customers. It connects the abstract vision to something you can measure every week.

    For a project management tool, the North Star might be “weekly active projects” — not revenue, not signups, but a measure of whether people are actually using the product to do meaningful work. For a marketplace, it might be “weekly transactions completed.” The metric operationalizes the vision. It turns aspiration into accountability.

    5. Evangelize Relentlessly

    A vision that lives in a slide deck nobody opens is not a vision. It is a document. In my experience, the PM’s most important job is making the vision a living part of how the team thinks and decides.

    This means repeating the vision in every kickoff, every sprint review, every roadmap discussion. It means connecting individual features back to the vision: “We are building this notification system because our vision says teams should ship with confidence, and right now they are missing critical updates.” It sounds repetitive. It is. That is the point.

    A Concrete Example: DataFirst

    To make the discussion more concrete, let’s pick a specific example. Imagine a startup called “DataFirst” that builds a data analytics platform for small e-commerce businesses. They are pre-product-market-fit, with 200 beta users and a team of 12.

    Their founder keeps saying the vision is “to democratize data.” That is a fine aspiration, but it does not help the team decide what to build next. Every analytics feature “democratizes data.”

    Here is how they might sharpen it:

    • Customer problem today: Small e-commerce owners spend hours in spreadsheets trying to figure out which products to restock, which ads to cut, and whether they are actually profitable. Most give up and go with gut feel.
    • Future state (3 years out): Every e-commerce owner with fewer than 50 employees makes confident, data-backed decisions about inventory, marketing, and profitability in under 5 minutes a day — without needing a data analyst.
    • Vision statement: “Every small e-commerce owner makes confident decisions in 5 minutes a day.”
    • North Star metric: Daily active decision-makers (users who take an action based on a DataFirst recommendation in a given day).
    • What it constrains: This vision says “no” to enterprise features, custom dashboards, and complex query builders. It says “yes” to opinionated recommendations, mobile-first design, and one-click actions.

    See how the specific vision is actually more useful than “democratize data”? It tells the engineering team to optimize for speed, not depth. It tells the designer to prioritize mobile. It tells the PM to build recommendations, not raw charts.

    Here is what changed in practice: when a customer asked for a custom SQL query builder, the team said no in five minutes instead of debating for two weeks. When an engineer proposed a complex analytics engine, the PM pointed to “5 minutes a day” and asked, “Will the average store owner with 15 employees use this?” The answer was no, so they built a simpler recommendation widget instead. That is the difference between a vague vision and a sharp one.
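    A North Star metric like “daily active decision-makers” is only useful if everyone agrees on exactly how it is counted. Here is a minimal sketch of that counting logic — all event names and fields are hypothetical, chosen just to illustrate the definition “users who take an action based on a DataFirst recommendation in a given day”:

    ```python
    from datetime import date

    # Hypothetical event log: each event records what a user did and on which day.
    events = [
        {"user_id": "owner_1", "action": "accepted_restock_recommendation", "day": date(2024, 5, 1)},
        {"user_id": "owner_2", "action": "paused_ad_campaign", "day": date(2024, 5, 1)},
        {"user_id": "owner_1", "action": "accepted_restock_recommendation", "day": date(2024, 5, 2)},
        {"user_id": "owner_1", "action": "viewed_dashboard", "day": date(2024, 5, 1)},  # passive: not counted
    ]

    # Only actions taken on a recommendation count as a "decision".
    DECISION_ACTIONS = {"accepted_restock_recommendation", "paused_ad_campaign"}

    def daily_active_decision_makers(events, day):
        """Count distinct users who acted on a recommendation on the given day."""
        return len({
            e["user_id"]
            for e in events
            if e["day"] == day and e["action"] in DECISION_ACTIONS
        })

    print(daily_active_decision_makers(events, date(2024, 5, 1)))  # 2
    ```

    Note the deliberate exclusion of passive events like viewing a dashboard: the metric rewards decisions made, not time spent, which is exactly the point of the vision.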

    The Vision-Strategy Gap

    I want to highlight something I see often: teams that have a clear vision and clear action plans, but nothing connecting them. The vision says “every team ships with confidence” and the sprint backlog says “fix the CSV export bug.” There is a gap in the middle.

    The product strategy is what fills this gap. It translates the aspirational language of the vision into the tactical language of the roadmap. Without it, the vision floats above the daily work like a poster on the wall — visible but irrelevant.

    A quick diagnostic: ask three people on your team to explain how their current sprint connects to the product vision. If they cannot draw a clear line in under 30 seconds, you have a vision-strategy gap.

    In the 5Ps framework, the vision lives in the Plan pillar. It feeds into the product strategy, which feeds into goals and initiatives, which feed into the features and sprints you work on every day. Each layer provides context for the layer below it.

    Why This Matters

    A strong product vision matters for three practical reasons.

    It gives the team autonomy. When people understand where you are going and why, they can make good decisions without asking permission. The designer does not need to ask the PM whether to simplify the onboarding flow — the vision already answers that question.

    It makes saying “no” easier. Every product team drowns in requests. A clear vision is the filter. “Does this move us toward our vision?” is the most powerful question a PM can ask.

    It attracts the right people. I once watched a candidate’s face light up during an interview when the hiring manager described the product vision in one sentence. The candidate said, “That is exactly the problem I want to spend the next three years solving.” A compelling vision is a recruiting tool. “We are building a world where every small business owner makes confident decisions in 5 minutes a day” is a lot more motivating than “we are building a dashboard.”

    How to Use With AI

    AI is surprisingly good at the parts of vision work that teams find tedious: synthesizing scattered inputs, generating options, and stress-testing language. Here is how I have seen PMs use AI effectively in this process.

    Draft a vision from messy inputs. Feed your AI tool (Claude, ChatGPT, or a product-specific tool like Productboard’s AI features) your last strategy memo, customer interview notes, and competitive analysis. Ask it to propose three candidate vision statements. For example: “Here are our last 3 strategy memos and 5 customer interview summaries. Propose 3 product vision statements that are specific enough to help us say no to feature requests. Each should be under 15 words.” You will probably not use any of them verbatim, but they compress the blank-page phase from hours to minutes.

    Write the “press release from the future.” Give the AI your target customer, the problem you are solving, and your rough timeline. Ask it to write a one-page press release announcing your product’s success. This is the technique Amazon uses internally, and AI produces a solid first draft that the team can then debate and refine.

    Stress-test for specificity. Paste your draft vision into the AI and ask: “What products or companies could this vision also describe?” If the answer is “dozens,” your vision is too generic. Ask the AI to suggest more specific language.

    Generate a North Star metric. Describe your product and vision, then ask the AI to propose five candidate North Star metrics with the trade-offs of each. Tools like Miro AI can help visualize how these metrics connect to your broader strategy on a shared canvas.

    Guardrail: Never let the AI make the final call on your vision. The vision is an identity choice — it reflects what your team believes and what you are willing to sacrifice. AI can synthesize and sharpen, but humans must choose. If DataFirst had let AI write their vision, it might have suggested the safe, generic “democratize data” — exactly the kind of vision that sounds good but constrains nothing.

    Conclusion

    A product vision is not a decoration. It is a decision-making tool. The best ones are specific enough to constrain, ambitious enough to inspire, and short enough to repeat in every meeting without boring your team.

    The process does not need to be complicated. Start with the customer problem. Describe the future state. Choose a North Star metric. Then repeat the vision until your team can recite it from memory.

    This is not a cookie-cutter template. You will need to adapt these specifics to your context. A business-to-business enterprise product needs a different vision than a consumer app. A startup needs a different level of ambition than a team inside a large company.

    But the underlying principle is the same: clarity of destination creates speed of execution.

    What do you think? Comments are very welcome.

  • Strategy Pyramid

    Strategy Pyramid

    Over the past 15 years, I have found that the biggest cause of friction in product teams isn’t a lack of talent or effort. It is a lack of context. It all starts with the Mission: Why does the company exist? When that answer isn’t clear, everything downstream suffers. Engineers argue about features because they don’t see the strategy. Product managers argue about strategy because they don’t see the vision. And executives wonder why the roadmap doesn’t move the needle on the company goals.

    We often talk about “alignment,” but alignment is impossible if we don’t have a shared map of where we are going and why.

    In developing the overall strategy for a company to provide context for the product strategy, one best practice is to express the components at different levels in a Strategy Pyramid. This simple visual framework is what has worked best for me to get everyone—from the CEO to the newest intern—on the same page.

    Critically, this framework is not meant to be onerous or slow things down. It is meant to provide clarity so you can move faster. The amount of detail should depend on your context. If you are a solopreneur, your entire Strategy Pyramid might be a half-page Google Doc. It is still helpful. At a larger company, it will need to be more detailed, with clear communication to the various teams.

    What is a Strategy Pyramid?

    A Strategy Pyramid lays out the key components of the strategic planning process used in the 5Ps of Product framework. The concept draws on the classic strategic planning hierarchy described by authors like Michael Porter and Roger Martin, adapted here to create a product-focused process.

    The idea is simple: strategy is a hierarchy. You can’t decide what to do today (Action Plans) if you don’t know how you plan to win (Strategy). And you can’t define a winning strategy if you don’t know why you exist (Mission).

    The Components

    Let’s walk through the pyramid from the top down. Each layer provides the constraints and context for the layer below it.

    1. Mission

    The “Why”

    At the very top is the Mission. Why does the company exist? This is your core purpose. It rarely changes. It is the North Star that guides the ship through storms and calm waters alike.

    Crucially, a Mission is not just “to make money.” Making money is a result, not a purpose. Your Mission is the positive change you want to bring to the world.

    If your mission is “to organize the world’s information” (Google), that tells you immediately that you probably shouldn’t be building a toaster (unless it’s a very smart toaster).

    2. Values

    The “How” (Principles)

    Right below the mission are your Values. These are the timeless guiding principles that dictate how you behave as you pursue your mission.

    Think of Amazon’s famous “Customer Obsession.” It isn’t just a slogan; it is a mechanism that allows them to make decisions like offering free returns, even when it costs them money in the short term.

    I have seen many companies treat values as posters on a wall. That is a mistake. Real values are decision-making tools. If one of your values is “Move Fast,” you might accept more bugs in production than a company whose value is “Reliability First.” Neither is wrong, but they lead to very different products.

    3. Vision

    The “Where”

    The Vision is a compelling image of the ideal future. If the Mission is the “Why,” the Vision is the “Where.” What does the world look like in 5 or 10 years if you succeed? I dive deeper into crafting a vision that actually drives decisions in a separate article.

    A classic example is Microsoft’s original vision: “A computer on every desk and in every home.” It was concrete, ambitious, and at the time, completely revolutionary. It painted a picture of the destination so everyone knew what they were aiming for.

    4. Goals

    The “What”

    Now we get specific. Goals are the key financial and operational metrics that tell us if we are making progress toward the vision. These are usually time-bound.

    The ultimate example is JFK’s goal for NASA: “Land a man on the moon and return him safely to the Earth before this decade is out.” It wasn’t vague. It was binary. You either did it or you didn’t.

    Goals ground the lofty Vision in reality. They give us a scorecard.

    5. Strategy (Business & Product)

    The “Game Plan”

    This is the pivot point of the pyramid. This is where many teams get stuck.

    • Business Strategy: This is the overall “game plan” for the company’s success. It includes sales strategy, marketing strategy, operational strategy, and yes, product strategy. Tesla’s “Secret Master Plan” is a perfect example: Build a sports car → Use that money to build an affordable car → Use that money to build an even more affordable car.
    • Product Strategy: This is the specific plan for the product’s future state and how it will help the business win. Think of the original iPhone launch: Drop the physical keyboard to enable a full-screen, adaptable interface. That was a strategic choice to win by changing the rules of the game.

    Notice that Product Strategy sits inside or alongside Business Strategy. It serves the business goals. A great product strategy that bankrupts the company is a bad business strategy.

    6. Initiatives

    The “How” (Tactics)

    Initiatives are operational tactics organized into key themes. These are the big rocks you are moving.

    When Netflix decided to pivot to original content (starting with House of Cards), that was a massive Initiative. It wasn’t just “buy more movies”; it was a fundamental shift in how they operated to support their strategy of becoming a global TV network.

    Initiatives group your efforts so you aren’t just reacting to random requests. They focus your energy on the areas that matter most for the Strategy.

    7. Action Plans

    The “Now”

    Finally, at the base of the pyramid, we have Action Plans. These are the detailed plans with owners, timelines, and Key Performance Indicators (KPIs). This is the roadmap. This is the sprint plan. This is the JIRA ticket—often written as a user story—you are working on today.

    For that Tesla engineer working on the Model S, the Action Plan wasn’t just “design a door handle.” It was “design a flush door handle that reduces drag (Strategy) to increase range (Goal) so we can prove electric cars are viable (Vision).”

    A Concrete Example: ProjectFlow

    To make the discussion more concrete, let’s pick a specific example. Imagine a company building a project management tool called “ProjectFlow.” They are a small startup trying to break into a crowded market.

    Here is how their Strategy Pyramid might look:

    • Mission: To help teams build better things together.
    • Values:
      • Simplicity over power: We will remove features if they make the product harder to learn.
      • Transparency by default: Everyone sees everything unless explicitly hidden.
    • Vision: A world where no project fails due to miscommunication. We want to be the default “operating system” for modern creative teams.
    • Goals: Reach 1,000 paying teams by the end of the year. This gives us the revenue to hire our next two engineers.
    • Business Strategy: Win the SMB market by undercutting enterprise tools on price and beating them on ease of use. We will rely on product-led growth (PLG) rather than a sales team. (This is where your Go-to-Market strategy lives.)
    • Product Strategy: Build the fastest, most intuitive interface in the market. Focus on “zero-setup” collaboration so a team can start working in seconds, not days. We explicitly choose not to build complex reporting or permissioning features yet.
    • Initiatives:
      • “Instant Onboarding”: Reduce time-to-first-project to < 1 minute.
      • “Mobile First”: Full feature parity on iOS/Android to support remote teams.
      • “Viral Loops”: Build “guest access” features that encourage users to invite external clients.
    • Action Plans:
      • PM: Write specs for “One-click Google Sign-in” to support Instant Onboarding.
      • Eng: Refactor the database for faster mobile sync to support Mobile First.
      • Design: Simplify the “New Project” modal to remove mandatory fields.

    See how it flows? If a designer proposes a complex, powerful feature like “Gantt Chart Dependencies” that requires manual setup, the PM can point to the Values (“Simplicity over power”) and the Strategy (“Zero-setup”) and say, “That’s a great idea, but it doesn’t fit our current strategy. We are optimizing for speed, not complexity.”
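    The value of the pyramid is that every piece of work can be traced upward through the layers. Here is a minimal sketch of that idea, using the ProjectFlow example (the structure and helper are illustrative, not a prescribed tool):

    ```python
    # The ProjectFlow pyramid as nested data: initiatives map to the action
    # plans that support them, and everything above them is shared context.
    PYRAMID = {
        "mission": "Help teams build better things together",
        "vision": "A world where no project fails due to miscommunication",
        "goal": "1,000 paying teams by the end of the year",
        "strategy": "Fastest, most intuitive zero-setup collaboration for SMBs",
        "initiatives": {
            "Instant Onboarding": ["One-click Google Sign-in"],
            "Mobile First": ["Refactor the database for faster mobile sync"],
            "Viral Loops": ["Guest access for external clients"],
        },
    }

    def trace(action_plan):
        """Trace an action plan up through the pyramid, or return None
        if it supports no initiative (a sign it may not fit the strategy)."""
        for initiative, plans in PYRAMID["initiatives"].items():
            if action_plan in plans:
                return [action_plan, initiative, PYRAMID["strategy"],
                        PYRAMID["goal"], PYRAMID["vision"], PYRAMID["mission"]]
        return None

    print(trace("One-click Google Sign-in")[1])  # Instant Onboarding
    print(trace("Gantt Chart Dependencies"))     # None: does not fit the current strategy
    ```

    A proposal that returns None is not necessarily bad — but it forces the conversation the pyramid exists to create: which initiative, strategy, and goal does this work actually serve?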

    The Product Strategy Bridge

    I want to highlight the Product Strategy component specifically. In my experience, this is often the missing link.

    Teams often have high-level Goals (“Make money”) and low-level Action Plans (“Build feature X”), but nothing in between. They lack the “connective tissue” that explains why feature X leads to making money.

    The Product Strategy provides that bridge. It translates the financial language of the Business Strategy (“Capture 20% market share”) into the language of the product (“Build a viral loop through free guest access”).

    Why This Matters

    Using a Strategy Pyramid isn’t just about filling out a template. It is about communication.

    When I join a new team, I often ask people to draw this pyramid for their product. Usually, the top is fuzzy (“Something about making the world better?”) and the middle is missing. Everyone knows the Action Plans (the bugs they are fixing today), but they have no idea how it connects to the top.

    By explicitly writing this down, you give your team a tool to make decisions. You empower them to say “no” to things that don’t fit the strategy. You give them the context they need to be autonomous.

    How to Use With AI

    Gen AI is surprisingly good at the part of strategy work that teams hate: turning scattered context into a clean draft. Use it like a tireless facilitator, not a CEO. Feed it your raw inputs, ask it to propose a first-pass Strategy Pyramid, then review it live with the team and force the hard choices in the open. The goal isn’t to let AI decide your mission. It’s to compress the “blank page” phase and spend more human time on the debates that actually matter.

    Draft the first pyramid from messy inputs: give it your last 3 strategy memos, a recent roadmap, and notes from customer calls, then ask for a one-page Mission → Values → Vision → Goals → Strategy → Initiatives → Action Plans draft.

    Stress-test alignment: ask it to flag contradictions (example: “Simplicity over power” vs “enterprise permissioning initiative”) and list the decisions you’re implicitly making.

    Generate a “strategy narrative” for comms: have it produce a one-minute version for execs, a one-pager for the org, and a team-level “what changes Monday” summary.

    Turn the pyramid into an operating cadence: ask for a quarterly review agenda and the top 10 questions each layer should answer.

    Guardrail: never accept AI wording for Mission/Values blindly. Those are identity choices. Use AI for synthesis and clarity, then make the final call as humans.

    Conclusion

    This framework is what has worked for me to bring order to chaos. It pushes you to be disciplined about your thinking. You can’t just hand-wave the strategy when you have to write it down in a box that sits between Goals and Initiatives.

    This is not a cookie-cutter template. You will need to adapt these specific labels to your context. Maybe you call “Initiatives” “Themes” or “Bets.” That’s fine. The important thing is the hierarchy of context.

    What do you think? Does your team have a clear path from Mission to Action? Comments are very welcome.

  • Why This New Framework? The Story Behind the 5Ps

    Why This New Framework? The Story Behind the 5Ps

    I have been building products for over 15 years. And for most of that time, I have been looking for a map.

    Not a process. Not a methodology. A map — something that shows the full terrain of product management in a way you can hold in your head while making decisions on a Tuesday afternoon. I never found one that worked for me. So I built my own.

    The Gap

    Product management knowledge is scattered everywhere. It lives in blog posts, in books, in conference talks, and in the heads of experienced PMs who never write anything down. You pick up prioritization frameworks from one source, discovery methods from another, go-to-market thinking from a colleague. Each piece is valuable. But there is no structure to hold it all together.

    I experienced this firsthand when I moved between PM roles. The companies were different, but the underlying questions were always the same: What are we building and why? Who is this for? How do we get it to market? How do we scale? I kept solving the same categories of problems with no shared vocabulary for those categories.

    The existing frameworks I tried were either too complex or too narrow. The Pragmatic Framework from Pragmatic Institute covers 37 activities but reads as a practitioner checklist, not a mental model. Teresa Torres’s Opportunity Solution Tree is excellent for structuring discovery but operates at the feature level. None of them gave me the truly end-to-end picture — from vision and idea all the way through to revenue and happy customers. And almost none addressed what happens after you ship: who builds the team, designs the organization, and makes sure the infrastructure can support the product at scale.

    And in a world where machine learning and AI are increasingly the core of what teams are building, the old playbooks fit even less. How do you define an MVP (minimum viable product) when the model needs training data before it can do anything? How do you find product-market fit — the point where your product meets real demand — when the product gets better over time? I needed a framework that covered the full lifecycle and made sense for AI-powered products too.

    The Five Ps

    The insight was simple: products have a natural lifecycle, and the big questions a PM faces follow a sequence.

    Plan is where it starts. Vision, mission, and product strategy. This is the “why” and the “where” — before you build anything, you need to know what game you are playing.

    Problem is where you get specific. Who are you building for, and what do they actually need? Not what they say they want. What they need. Customer interviews are the primary tool here. For AI products, this phase is especially critical — you need to understand whether machine learning is genuinely the right solution.

    Product is where you build. MVP development, finding product-market fit, pricing, and packaging. But without Plan and Problem, you are building in the dark.

    Promotion is where you scale demand. Go-to-market strategy, marketing, sales support, customer loyalty. Many PMs think their job ends when the feature ships. It does not.

    Platform is where you scale the organization. Team structure, hiring the right roles, leadership development. This is the least discussed phase in most PM frameworks, and in my experience, the one where companies struggle the most.

    Five phases. A natural sequence from strategy through scale. Simple enough to remember over coffee. And the alliteration is not an accident — mnemonics work. If a framework is easy to remember, people actually use it. Especially under pressure, when nobody is pulling up a slide deck.

    An Example: DataFirst

    Imagine a startup called DataFirst that builds an ML-powered tool for detecting fraudulent insurance claims.

    Their Plan: make fraud detection accessible to mid-size carriers who cannot afford in-house data science teams. Start with auto insurance, expand from there.

    Their Problem phase reveals a surprise — after interviewing 40 claims adjusters, they learn adjusters do not want automated fraud flags. They want a tool that surfaces suspicious patterns and lets them make the call. False positives damage customer relationships, and adjusters know it.

    This reshapes the entire Product from “AI that catches fraud” to “AI that makes adjusters smarter.” They build an MVP that presents confidence scores alongside evidence. Product-market fit arrives when adjusters start using it voluntarily.

    Promotion reveals that insurance conferences and peer case studies outperform traditional marketing. And in Platform, the ML team is burning out on retraining cycles — so DataFirst hires an ML ops engineer and restructures into separate squads. This is what lets them scale from 5 carrier clients to 50.

    Skip any one phase and you have problems. Without Problem, they build the wrong product. Without Platform, they stay small forever. You will need to adapt these specifics to your context, but the five categories remain the same.

    What This Framework Is Not

    The 5Ps are not a process — real product development is messy and you will jump between phases constantly. They are not comprehensive — each P could fill a book. And they are not original in their parts. I did not invent strategy or customer segmentation. The value is in the arrangement — a structure that works for traditional products and AI-powered products alike, from a simple mobile app to a complex ML pipeline.

    How to Use With AI

    Structured frameworks are exactly what AI tools need to be useful. If you ask an AI “help me with my product,” you get generic advice. If you ask “help me with the Problem phase — specifically, help me identify underserved customer segments for my B2B (business-to-business) analytics tool,” you get something actionable.

    Use the 5Ps as a diagnostic. Paste your product strategy into Claude or ChatGPT and ask: “Which of the five areas is weakest? Where are the gaps?” The AI cannot make strategic decisions for you, but it can identify blind spots.

    Pressure-test launch readiness. Walk an AI through each P in order: “Here is our Plan, our Problem definition, our Product, our Promotion plan. What are we missing in Platform?” The sequential structure forces you to check each phase.

    AI is a facilitator, not the CEO. It can spot gaps. The strategic judgment is always yours.

    Why Share This?

    I built the 5Ps for myself. I am sharing it because every PM I have mentored has described the same gap — scattered knowledge, no unifying structure. And the gap is growing as more teams build AI-native products that demand end-to-end thinking.

    If you are early in your career, the 5Ps give you a map before you have explored the territory yourself. If you are experienced, they give you a shared vocabulary for mentoring and cross-functional conversations.

    The best framework is ultimately the one you develop for yourself based on what works for your personality, the company culture, and the market context. The 5Ps are my map. I hope they help you find yours.

    What do you think? I would love to hear how you organize your PM knowledge. Comments are very welcome.

  • What Is Product Management?

    What Is Product Management?

    Early in my career, someone asked me what a product manager does. I gave a ten-minute answer that involved strategy, roadmaps, customer research, cross-functional alignment, stakeholder management, and agile ceremonies. When I finished, they said, “So… you go to meetings?” I laughed, but the question stuck with me. If I could not explain my own job in a sentence, maybe I did not actually understand it yet.

    It took me years to get to a simple answer. The core job of product management is to figure out the most important customer problem to solve — and then determine the best thing to build to solve it. That is it. Everything else — the roadmaps, the specs, the stakeholder updates — is scaffolding around that central act of judgment.

    What Is Product Management?

    Product management is the discipline of discovering what to build and why, so that engineering, design, and the rest of the organization can focus on building the right thing well.

    The PM does not write the code. The PM does not design the screens. The PM does not sell the product. But the PM is responsible for making sure the thing that gets built is worth building in the first place.

    Marty Cagan, in his book Inspired, frames this as the PM being responsible for ensuring that what gets built is valuable (customers want it), usable (they can figure it out), feasible (engineers can build it), and viable (the business can sustain it). Of those four, value and viability are squarely the PM’s territory.

    Ken Norton, in his influential essay How to Hire a Product Manager, described the PM as someone who combines elements of engineering, design, marketing, and business — a leader who earns authority through judgment rather than title. I think that captures something essential: you are responsible for the outcome, but you control almost none of the inputs.

    The Two Spaces

    The simplest way I have found to explain what a PM does day-to-day is to split the work into two spaces.

    The Problem Space

    This is where you figure out what problem to solve. You talk to customers. You study the data. You map the market. You understand what people need, what frustrates them, and what they are trying to accomplish. Teresa Torres calls this continuous discovery — the habit of maintaining weekly touch points with customers so you always have fresh data when decisions need to be made.

    The problem space is where most PMs underinvest. It is tempting to skip ahead to solutions because solutions feel productive. But a beautifully executed solution to the wrong problem is still a failure. Understanding who your customers actually are and what they struggle with is the foundation of everything that follows.

    The Solution Space

    This is where you figure out what to build. You write the spec. You work with design on the experience. You scope the work with engineering. You make trade-offs about what to include and what to cut.

    The solution space is where most people think the PM job lives. And it does — partly. But the solution space only works if the problem space did its job first. The order matters: problem first, then solution. Always.

    Cagan calls this “product discovery” — the process of testing whether your proposed solution actually solves the problem before you commit engineering resources to building it. The best PMs I have worked with spend at least half their time in the problem space, even when everyone around them is pressuring them to “just ship something.”

    The Core Responsibilities

    If the two spaces define what a PM thinks about, here is how they spend their time. Lenny Rachitsky, in his newsletter, breaks the PM role into three areas: shape the product, ship the product, and synchronize the people. I would frame it slightly differently.

    Decide What to Build

    This is the heart of the job. Given limited resources and unlimited possibilities, what should the team work on next? This requires understanding the customer, the business, the technology, and the competitive landscape — and then making a call. As Ben Horowitz wrote in his classic memo on product management, a good PM takes ownership of the product’s direction and makes decisions based on customer needs, not internal politics.

    Prioritize and Sequence

    Deciding what to build is hard. Deciding what to build first is harder. Every feature request sounds reasonable in isolation. The PM’s job is to see all of them together and choose the sequence that creates the most value fastest. This is where the Strategy Pyramid becomes essential — it gives you a framework to evaluate whether a given feature serves the strategy or just sounds like a good idea.

    Align the Team

    A PM who makes the right decision but cannot get the team to execute on it has accomplished nothing. You need engineering to understand why they are building this. You need design to understand the customer pain. You need leadership to understand the trade-offs. This is not about slides and status updates. It is about giving people enough context to make good decisions on their own.

    A Concrete Example: ProjectFlow

    To make the discussion more concrete, imagine a company called ProjectFlow that builds a project management tool for creative teams. They have 50 customers and a team of eight.

    The PM at ProjectFlow gets a request from their biggest customer: “We need Gantt chart dependencies.” The sales team is excited. The customer is excited. It seems obvious.

    But the PM steps back into the problem space. She asks: what problem are dependencies solving? After three customer interviews, she discovers the real issue is not dependencies — it is that team leads cannot see when one person’s delay will affect someone else’s deadline. The customer said “Gantt chart” because that is the tool they know. The problem is visibility into blockers.

    Now in the solution space, the PM works with design on three options: full Gantt dependencies (six weeks of engineering), a simpler “blocked by” tag on tasks (two weeks), or an automated daily digest showing at-risk deadlines (one week). She evaluates each against ProjectFlow’s product vision — “every team ships with confidence” — and their strategy of simplicity over power.

    She chooses the daily digest. It solves the real problem, ships in a week, and aligns with the strategy. The customer is happy. The sales team is happy. And the PM protected six weeks of engineering time that can now go toward the next most important problem.

    That sequence — hear the request, dig into the real problem, explore multiple solutions, choose based on strategy — is product management in practice.
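The trade-off the PM made can be sketched as a quick back-of-envelope score. Everything here is a hypothetical illustration — the fit scores and the scoring formula are assumptions I am layering on top of the story, not numbers from ProjectFlow:

```python
# Hypothetical comparison of the three ProjectFlow options.
# "problem_fit" and "strategy_fit" are illustrative 1-3 ratings;
# "weeks" is the engineering estimate from the story.
options = {
    "full Gantt dependencies": {"problem_fit": 3, "strategy_fit": 1, "weeks": 6},
    "'blocked by' tag":        {"problem_fit": 2, "strategy_fit": 2, "weeks": 2},
    "daily at-risk digest":    {"problem_fit": 3, "strategy_fit": 3, "weeks": 1},
}

def score(opt: dict) -> float:
    # Value delivered (fit with the problem and the strategy)
    # divided by cost (weeks of engineering time).
    return (opt["problem_fit"] + opt["strategy_fit"]) / opt["weeks"]

best = max(options, key=lambda name: score(options[name]))
print(best)  # -> daily at-risk digest
```

No PM decides by formula alone, and this one hides plenty of judgment inside the fit scores. The point of writing it down is the same as the PM's reasoning in the story: once cost is in the denominator, the cheap option that solves the real problem stops looking like the "small" choice and starts looking like the obvious one.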

    What Product Management Is Not

    One of the most persistent confusions is between product management and project management. The names are similar. The work sometimes overlaps. But they are fundamentally different disciplines.

    A project manager asks: How do we deliver this on time and on budget? A product manager asks: Should we be building this at all?

    Project management is about execution. Product management is about direction. Both are valuable. But confusing them leads to PMs who spend all their time tracking tickets and running standups, and none of their time talking to customers or questioning whether the roadmap is right.

    If you find yourself spending most of your week on timelines, status reports, and coordination — and almost none of it on customer problems and strategic choices — you might be doing project management with a product management title. That is not a judgment. It is a signal that the role might need to be redefined.

    Why This Matters

    Product management matters because someone has to own the “why.” Engineers own the “how.” Designers own the experience. Sales owns the revenue. But the PM owns the question that precedes all of those: why are we building this, and why now?

    Without that function, teams drift toward building what is easy, what the loudest stakeholder wants, or what competitors already have. They ship features without conviction. They fill roadmaps without strategy. And they discover, months later, that nobody wanted what they built.

    A good PM is the person who prevents that drift. Not by having all the answers, but by asking the right questions at the right time and making sure the team solves problems worth solving.

    How to Use With AI

    AI is useful for the parts of product management that eat up time without requiring deep judgment. Think of it as a research assistant that never gets tired.

    Synthesize customer feedback at scale: paste a batch of support tickets, NPS comments, or interview transcripts into an AI tool and ask it to group the top five themes with representative quotes. What used to take a full day of reading now takes minutes. The PM still decides which theme matters most — that is judgment — but the sorting is mechanical.
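The mechanical part of that workflow is just assembling the prompt. Here is a minimal sketch of what "paste a batch of tickets and ask for themes" looks like when you script it; the function name, prompt wording, and sample tickets are all my own illustrations, and the resulting string goes to whichever AI assistant or API you already use:

```python
def build_synthesis_prompt(feedback_items: list[str], n_themes: int = 5) -> str:
    """Combine raw feedback items into one theme-synthesis prompt."""
    numbered = "\n".join(f"{i}. {text}" for i, text in enumerate(feedback_items, 1))
    return (
        f"Group the following customer feedback into the top {n_themes} themes. "
        "For each theme, give a short name and one representative quote, "
        "citing the item number.\n\n"
        f"Feedback:\n{numbered}"
    )

# Hypothetical support tickets for illustration.
tickets = [
    "Exporting reports takes forever on large projects.",
    "I can't tell which tasks are blocking my teammates.",
    "The mobile app logs me out every day.",
]
prompt = build_synthesis_prompt(tickets, n_themes=3)
print(prompt)
```

Numbering the items matters: it lets the model cite its sources, so you can spot-check whether a "theme" is grounded in real quotes or invented. The sorting is mechanical; deciding which theme to act on is still yours.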

    Stress-test your prioritization: describe your top three priorities and ask the AI to argue the case for each one, then argue against each one. This surfaces blind spots you might miss when you are attached to a particular direction.

    Draft the spec, not the strategy: AI can write a solid first draft of a product requirements document given a clear problem statement and constraints. Use it to compress the blank-page phase, then edit for nuance and context that only a human with product judgment can add.

    Guardrail: Never let AI make the prioritization call. Deciding which problem to solve is the PM’s core judgment. AI can inform that decision with data, synthesis, and alternative perspectives. But the final call — what to build and what to say no to — must come from someone who understands the customer, the business, and the team.

    Conclusion

    Product management is not about having all the answers. It is about asking the right questions. What is the most important problem our customers have? What is the best thing we could build to solve it? And what should we do first?

    If you get those questions right, the roadmap writes itself. If you get them wrong, no amount of execution will save you.

    This is not a formula. You will need to adapt these ideas to your context — your company size, your market, your team. But the core remains: problem space first, solution space second, and always, always own the “why.”

    What do you think? Comments are always welcome.