Category: Platform

  • Platform Product Management

    About eight years ago, I joined a team building developer tools. On my first day, I did what I always did: I pulled up usage analytics, read support tickets, and started drafting a roadmap based on what end users were asking for. Within a week, the CTO pulled me aside. “You’re thinking about this wrong,” he said. “Our users don’t want features. They want capabilities. They want to build things we haven’t imagined yet.”

    That conversation changed how I think about product management. When your product is a platform — APIs, SDKs, developer tools — the rules shift. You are no longer building for the person who clicks buttons. You are building for the person who writes code on top of your product. And that difference touches everything: how you define success, how you build your roadmap, and how you talk to your customers.

    In the 5Ps framework, platform PM lives at the intersection of Platform and Product. Getting the strategy right still matters, but the strategy itself looks different when your customer’s customer is the real end user.

    What Is Platform Product Management?

    Platform product management is the practice of building and managing products that other developers build on top of. Your product isn’t the final experience. It is the foundation that enables others to create final experiences.

    This is fundamentally different from consumer or SaaS product management. In a typical SaaS product, you control the entire user journey. In a platform, you control the building blocks, but someone else assembles the house. Your job is to make those building blocks reliable, composable, and well-documented.

    The distinction matters because it changes your relationship with your user. A SaaS PM asks, “How do I get the user to complete this task?” A platform PM asks, “How do I give the developer everything they need to solve problems I haven’t anticipated?”

    The Core Components

    Platform PM has three pillars that differ from traditional product work.

    1. API Design Is Your Product

    In a consumer app, the UI is the product. On a platform, the API is. Every endpoint, every parameter name, every error message is a product decision. A confusing API is like a confusing checkout flow — developers will abandon it.

    This means your product spec isn’t a wireframe. It is an API contract. And you need to think about backward compatibility the way a consumer PM thinks about onboarding flows — break it, and you lose trust.
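A small sketch makes the contract point concrete. Nothing here is from a real platform; the endpoint and field names are invented to show why additive changes preserve the contract while renames break it:

```python
from dataclasses import dataclass, asdict
from typing import Optional

# Hypothetical response model for a GET /users/{id} endpoint.
# v1 clients rely on these exact field names.
@dataclass
class UserResponseV1:
    id: str
    display_name: str

# Backward-compatible evolution: add optional fields with defaults.
# Old clients simply ignore the new key; nothing they parse changed.
@dataclass
class UserResponseV2:
    id: str
    display_name: str
    avatar_url: Optional[str] = None  # new and optional, so non-breaking

# A breaking change would be renaming display_name to name:
# every integration that reads "display_name" fails at once.

old_payload = asdict(UserResponseV1(id="u1", display_name="Ada"))
new_payload = asdict(UserResponseV2(id="u1", display_name="Ada"))

# Every key a v1 client depends on is still present in v2.
assert set(old_payload) <= set(new_payload)
```

The discipline is the same one semantic versioning encodes: additions are cheap, removals and renames are expensive, and the spec review is where you catch the difference.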

    2. Documentation Is a Feature

    On a typical product, docs are a support function. On a platform, documentation is one of your most important features. If a developer can’t figure out how to integrate your SDK in an afternoon, you have a product problem, not a docs problem.

    3. Your Success Metric Is Their Success

    In SaaS, you measure adoption: DAUs, retention, NPS. On a platform, you measure what your developers build. Are they shipping integrations? Are those integrations retaining their own users? Your North Star isn’t “how many developers signed up.” It is “how many developers built something that works.”

    A Concrete Example: DevGrid

    To make this concrete, imagine a developer tools company called “DevGrid.” They provide infrastructure APIs that let other companies build real-time collaboration features — shared cursors, live editing, presence indicators.

    DevGrid’s early roadmap looked like a typical SaaS roadmap. They prioritized features that their biggest customer asked for: a specific data export format, a custom authentication flow, a dashboard for monitoring usage.

    The problem was that each of these was a one-off. Every feature served one customer’s needs but didn’t make the platform more capable for everyone.

    The turning point came when DevGrid’s PM team shifted to a platform mindset. Instead of building the custom export format, they built a flexible webhook system that let any developer pipe data wherever they wanted. Instead of the custom auth flow, they built a pluggable authentication layer with clear extension points.
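The text does not describe DevGrid's actual webhook design, but the usual pattern behind such a system is easy to sketch: the platform signs every delivery with a shared secret so receivers can verify the payload before acting on it. The event shape, names, and secret below are all hypothetical:

```python
import hashlib
import hmac
import json

# Hypothetical shared secret a developer configures when registering a webhook.
SECRET = b"whsec_demo_secret"

def sign_event(event: dict, secret: bytes = SECRET) -> tuple[bytes, str]:
    """Serialize an event and compute the signature header the
    platform would send alongside the delivery."""
    body = json.dumps(event, separators=(",", ":"), sort_keys=True).encode()
    signature = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return body, signature

def verify_event(body: bytes, signature: str, secret: bytes = SECRET) -> bool:
    """What the receiving developer runs before trusting the payload."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

body, sig = sign_event({"type": "cursor.moved", "doc_id": "d42"})
assert verify_event(body, sig)             # untampered payload verifies
assert not verify_event(body + b" ", sig)  # any modification fails
```

Real implementations usually also sign a timestamp to limit replay attacks, but the product insight is the same: one generic, verifiable delivery mechanism replaces an endless queue of custom export formats.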

    The result: DevGrid went from supporting 12 integration patterns to over 200. And the PM team stopped being a bottleneck for custom requests because developers could solve their own problems.

    The Platform Roadmap Trap

    Here is a nuance that catches many PMs moving into platform work. In consumer products, you can sequence features by user impact. “Feature A helps 80% of users, so we build it before Feature B, which helps 20%.”

    On a platform, the calculus is different. You often need to build horizontal capabilities — things like rate limiting, versioning, or error handling — that no individual developer asked for but that every developer needs. These don’t show up in customer interviews. Nobody emails you saying, “Please build better API versioning.” But if you skip them, you will hit a wall when you try to update your API without breaking existing integrations.
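Rate limiting is a good illustration of such a horizontal capability. A minimal token-bucket sketch, not any particular platform's implementation and with invented parameters:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: tokens accrue at `rate`
    per second up to `capacity`; each request spends one token."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Allow a burst of 3 requests, then refill at 1 request per second.
bucket = TokenBucket(rate=1.0, capacity=3)
results = [bucket.allow() for _ in range(5)]
```

Five back-to-back calls exhaust the burst: the first three succeed and the rest are rejected until tokens refill. A production version would track one bucket per API key in shared storage such as Redis, which is exactly the kind of invisible infrastructure no customer interview will ever request.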

    The best platform PMs I have worked with keep a dual roadmap: one track for developer-requested capabilities, and one track for platform infrastructure that enables future growth. The infrastructure track often feels thankless, but it is what separates platforms that scale from platforms that collapse under their own weight. This is similar to how a strong go-to-market strategy balances short-term wins with long-term positioning.

    Why This Matters

    Platform product management matters because the stakes are higher than in typical product work. When you break a consumer feature, your users are frustrated. When you break a platform API, you break every application built on top of it. Your mistakes cascade.

    But the upside is equally amplified. A well-designed platform creates leverage that a single product never can. You build one capability, and a thousand developers use it in ways you never imagined.

    Getting the team structure right is critical here. Platform teams need deep technical expertise and long time horizons. They can’t be organized like feature teams churning out quarterly deliverables.

    How to Use With AI

    AI can be a powerful tool for the analytical side of platform PM — the parts that involve synthesizing large amounts of developer feedback and spotting patterns across hundreds of API consumers.

    1. Developer Feedback Synthesis

    Platform teams get feedback from many channels: GitHub issues, support tickets, developer forum posts, Slack messages. Feed a batch of these into an LLM and ask it to cluster them by underlying need, not surface request.

    Prompt: “Here are 50 developer support tickets from the last month. Group them by the underlying capability gap, not the specific feature request. For each group, suggest whether the fix is a documentation improvement, an API change, or a new primitive.”

    2. API Design Review

    Before shipping a new endpoint, paste the spec and ask for a critique from the developer’s perspective.

    Prompt: “Review this API endpoint design. Act as a developer who has never seen our platform before. What is confusing? What naming conventions are inconsistent with REST best practices? What error cases are missing?”

    3. Breaking Change Impact Analysis

    When you need to deprecate or modify an existing API, use AI to scan your documentation and sample integrations for potential downstream impact.

    Prompt: “I need to change the response format of this endpoint. Here is the current spec and the proposed new spec. What migration steps would a developer need to take? Draft a migration guide.”

    Guardrail: AI is useful for finding patterns in developer feedback and stress-testing API designs. But the decision about what to build — and what to deliberately not build — requires human judgment about your platform’s strategic direction.

    Conclusion

    Platform product management is a different discipline from consumer or SaaS PM. It requires a shift in mindset: from controlling the user experience to enabling the developer experience. From measuring clicks to measuring what gets built. From building features to building capabilities.

    What do you think? Comments are most welcome.

  • AI-Native Product Management

    Andrew Ng said something in a widely cited 2025 interview that stopped me cold. One of his teams proposed flipping the traditional PM-to-engineer ratio on its head. Instead of one PM for every six engineers, they wanted one PM for every half an engineer (two PMs per engineer). The reason? Engineers using AI-assisted coding were generating work so fast that the PM couldn’t evaluate it all. The bottleneck had moved.

    I have been a product manager for over 15 years, and for most of that time the bottleneck was engineering capacity. There was never enough of it. PMs spent their days negotiating priorities, slicing scope, and horse-trading features across teams. That world is disappearing. And what’s replacing it demands a fundamentally different way of working.

    What Is AI-Native Product Management?

    There is an important distinction here that most people blur. Using ChatGPT to clean up your product requirements document (PRD) is “AI-enhanced” product management. You bolt a tool onto your existing workflow and keep doing what you were doing, a little faster. That’s fine. But it’s not what this article is about.

    AI-native product management means redesigning how your team discovers, decides, and ships with AI embedded at every stage. It’s not a tool upgrade. It’s a workflow redesign. The difference is like the difference between putting a motor on a horse cart and designing a car from scratch.

    O’Reilly’s Radar team identified three PM archetypes emerging from this shift. AI Builder PMs create AI-powered products. AI Experience PMs design how users interact with AI features. AI-Enhanced PMs use AI to do their existing job better. Most PMs I know are in that third category, which is a fine starting point. But the teams pulling ahead are the ones rethinking the work itself.

    In the 5Ps framework, this sits squarely in the Platform P — the people, process, and infrastructure layer. How you structure your team, what tools make up your operating stack, what skills PMs carry forward. This is the 2025-2026 evolution of the Platform P’s central question: what does a high-functioning product team actually look like?

    The AI Operating Stack

    Here is a term I find useful: the “AI Operating Stack.” It’s the deliberate set of AI tools and workflows a product team assembles at the Platform layer. Not a random collection of subscriptions. A connected system where each tool serves a specific stage of the product development cycle.

    In my experience, the stack has four layers:

    Discovery

    This is where you figure out what to build. Tools like Dovetail auto-transcribe user interviews, cluster themes, and surface patterns that would take a human researcher days to find. Perplexity pulls real-time competitive data from Reddit, review sites, and news faster than manual research; I run it before discovery calls so I walk in already knowing the frustrations. The PM’s job shifts from manually sorting feedback to directing the synthesis and questioning what the AI misses.

    Documentation

    Productboard’s Spark AI agent ingests signals from Slack, support tickets, and sales calls, then clusters them and drafts context-aware PRDs. Tools like ChatPRD generate structured first drafts from brief descriptions. The shift here is from authoring to editing. You spend less time staring at a blank page and more time sharpening what the AI produces.

    Prototyping

    This one surprised me the most. Figma’s Make feature converts text prompts into clickable prototypes in seconds. A PM can now build a proof-of-concept and put it in front of users the same day an idea surfaces. No design queue. No two-week wait. That compression of time-to-feedback changes the economics of experimentation.

    Collaboration

    Notion’s team has gone deep here. They’ve built over 2,800 internal AI agents using MCP integrations that connect to Linear, Figma, and HubSpot. Brian Lovin, a product designer at Notion, built a shared prototype playground using Claude Code and a Next.js environment where the design team turns Figma files into working, testable code without engineering hand-offs. That’s not “AI-enhanced.” That’s a fundamentally different way of working.

    What Changes About the PM Role

    Here is what surprised me most. People assume AI mostly automates the boring stuff — data pulling, formatting, status updates. Lenny Rachitsky’s research found the opposite: AI most disrupts the high-level PM skills like strategy, vision, and PRD writing. The things we thought were uniquely human.

    What becomes more valuable? The soft skills. Influence, product sense, stakeholder alignment, the ability to look at an AI-generated analysis and say “this is missing something.” Judgment about AI output turns out to be the durable PM skill.

    Marty Cagan at SVPG has been tracking this closely. He notes that engineering teams are shrinking from eight to five or six as AI-assisted development improves productivity by 20-30%. But the PM role becomes more essential and more difficult, not less. Product sense and judgment matter more when AI handles the analytical load. For delivery-oriented product owners who mostly coordinate and project-manage, AI may automate many of those tasks entirely.

    A Concrete Example: Ramp’s AI Agents

    To make the discussion more concrete, consider what Ramp has done. Ben Levick, their Head of Ops and Internal AI, built over 300 Notion Custom Agents that now handle product and operational questions every day. Onboarding queries, product FAQs, internal enablement questions, all handled by agents that free up PM bandwidth for the work that actually requires human judgment.

    This is not a theoretical exercise. It is a team that identified the repetitive, information-retrieval parts of PM work and deliberately moved them to AI. The PMs did not lose their jobs. They gained time for discovery, strategy, and the cross-functional alignment work that no agent can do.

    Why This Matters

    The numbers tell the story. McKinsey’s State of AI report found that 88% of organizations now deploy AI in at least one business function, up from 78% just a year prior. This is not a trend you can wait out.

    But here is the thing I keep coming back to: the risk is not that AI replaces PMs. The risk is that PMs who build an AI Operating Stack will consistently outpace those who don’t. Cagan is honest about this. He says virtually all PMs will need to become “AI PMs.” The only question is how quickly your team makes the transition.

    How to Use With AI

    If you want to start building your own AI Operating Stack, here is a workflow I have found useful:

    1. The Stack Audit

    Start by mapping your current product development cycle end to end. For each stage, ask: where am I spending time on synthesis, formatting, or information retrieval that AI could handle?

    Paste your actual weekly workflow into Claude or ChatGPT and ask it to identify every task involving synthesis, summarization, or information retrieval. For each one, ask for a specific AI tool that could handle 80% of it.

    2. The Discovery Synthesis Workflow

    Export your last 50 support tickets or sales call notes. Ask an AI to cluster them into 3-5 groups by underlying problem, naming the persona and frustration for each cluster. Then compare its clusters to your own intuition. Where they disagree is where the interesting insights live.

    3. The PRD Editing Workflow

    Stop writing PRDs from scratch. Give the AI a brief description and let it generate a first draft. Spend your time editing, questioning assumptions, and adding context only you have — competitive dynamics, internal politics, technical debt the AI doesn’t know about.

    The Guardrail: Your AI Operating Stack should serve your team’s actual workflow, not the other way around. If you find yourself adapting your process to fit the tool, that’s a signal to reassess. And the strategic choices (which segment to target, what to build next, what to kill) remain human decisions. AI can inform them. It cannot make them for you.

    Conclusion

    AI-native product management is not about using more AI tools. It is about deliberately redesigning how your team discovers, decides, and ships, with AI embedded in the workflow from the start. The teams that treat this as a Platform question — how do we structure ourselves to work this way? — will pull ahead. The teams that treat it as a tools question — which AI should I subscribe to? — will keep bolting motors onto horse carts.

    This is what I’m seeing work. Your context is different, and you will need to adapt these ideas to your team, your product vision, and your stage. But the direction feels clear to me: the PM role is becoming more about judgment and less about production. That is a good trade.

    What do you think? I’d love to hear how your team is approaching this. Comments are most welcome.

  • Product Management Roles Explained: PM, PMM, PO, and Beyond

    One of the most confusing things about product management is that the title “Product Manager” can mean completely different things at different companies. I have seen PMs who write SQL queries all day and PMs who never open a database. PMs who own pricing and PMs who have never seen a revenue number. Same title. Different jobs entirely.

    If you are trying to break into product management, this is disorienting. If you are a hiring manager, it is a recipe for mismatched expectations. This article maps the major product roles, what each one actually owns, and how they differ across company stages.

    Product Manager (PM)

    The core role. A PM owns the what and the why — deciding what to build, for whom, and why it matters. They do not own the how (that is engineering) or the when (that is a negotiation).

    At a startup, a PM does everything: customer interviews, writing specs, prioritizing the backlog, analyzing data, coordinating launches. At a large enterprise, the role is more specialized — a PM might own a single feature area and spend most of their time aligning stakeholders.

    The one constant across contexts: a PM is responsible for outcomes, not output. Shipping features is not the goal. Solving customer problems in ways that serve the business is the goal.

    Product Marketing Manager (PMM)

    PMMs own the story. While PMs figure out what to build, PMMs figure out how to explain it — positioning, messaging, competitive differentiation, and launch strategy.

    The handoff between PM and PMM is one of the most important (and most neglected) partnerships in a product organization. A PM who ships a feature without involving PMM gets a technically correct product that nobody understands. A PMM who positions a product without understanding the PM’s intent gets beautiful messaging that misrepresents the product.

    At smaller companies, the PM often handles both roles. This works until it does not — usually around the time you start losing deals because prospects cannot understand your value proposition in under 30 seconds.

    Product Owner (PO)

    The PO role comes from Scrum. In theory, the Product Owner manages the backlog, writes user stories, and ensures the development team always has clear work ahead of them.

    Marty Cagan at SVPG has written extensively about this confusion — his piece on Product Manager vs. Product Owner describes how organizations conflate two very different responsibilities. In practice, PO and PM overlap significantly, and the distinction varies by company. Some organizations use them interchangeably. Others draw a clear line: the PM sets the strategy and the PO executes it in sprint-level detail. Neither approach is wrong — the important thing is that everyone on the team knows who makes which decisions.

    In my experience, companies that have both a PM and a PO for the same product often struggle with unclear ownership. If you are setting up a product organization, pick one model and be explicit about decision rights.

    Technical Product Manager (TPM)

    Technical PMs own platform capabilities, APIs, infrastructure, and developer-facing products. They sit closer to engineering than to customers and often have an engineering background themselves.

    The distinguishing feature of a TPM is who their “customer” is. For a traditional PM, the customer is an end user. For a TPM, the customer might be internal engineering teams, third-party developers, or data scientists. The work is less about user experience and more about system architecture, API design, and technical scalability.

    If your company builds a platform that other products are built on, you need TPMs. If you are an aspiring PM with an engineering background, this is often the most natural entry point.

    Growth PM

    Growth PMs focus on acquisition, activation, and retention — the levers that drive user growth. They tend to be more data-driven than feature-oriented, running experiments at high velocity and making decisions based on statistical significance rather than customer interviews.

    The Growth PM role is most common at consumer companies and product-led growth B2B companies where small improvements in conversion rates translate directly to revenue. At a company where growth depends on enterprise sales, you are less likely to see this role.

    How Roles Change With Company Stage

    At a 10-person startup, one person is the PM, the PMM, the PO, and half the growth team. At a 1,000-person company, these are separate roles with separate career ladders.

    Understanding this is important for two reasons. First, if you are hiring, match the role to your stage. A PM from a large enterprise who is used to deep specialization may struggle at a startup that needs a generalist. The reverse is equally true.

    Second, if you are building a career, know which skills transfer across roles. Customer empathy, communication clarity, and data literacy matter in every product role. Sprint planning skills are more PO-specific. Messaging and positioning are PMM-specific.

    An Example: How DataBridge Structures Its Team

    DataBridge, a fictional mid-stage B2B analytics company (80 employees), has three product squads. Each squad has a PM who owns strategy and customer research, and a PO who translates that into sprint-ready work. They have one PMM who covers all three squads — positioning, launch plans, and sales enablement. Their platform team has a TPM who owns the API and data pipeline. There is no Growth PM because their sales motion is enterprise-driven.

    At 200 employees, they will probably split the PMM role into two and add a Growth PM for their self-serve tier. The structure evolves with the business.

    How to Use With AI

    AI can help you think through role design for your specific context.

    Design a role matrix. Describe your company stage, product, and team size to Claude or ChatGPT. Ask: “What product roles do I need, and what should each one own? Where will responsibilities overlap?” The AI will surface common patterns and potential conflicts.

    Write better job descriptions. Paste a draft JD into an AI and ask: “Does this describe a PM, a PO, or a PMM? Are the responsibilities clear and internally consistent?” Most job descriptions accidentally describe two different roles. AI is good at catching that.

    The guardrail: AI can suggest role structures based on patterns. It cannot account for your company’s culture, politics, or the specific humans involved. Org design is ultimately about people, not org charts.

    Why This Matters

    Getting product roles right is a Platform-phase decision in the 5Ps framework with Product-phase consequences. If you hire the wrong type of PM for your stage, you get misaligned expectations and underperformance — not because the person is bad, but because the role does not match the work.

    If you are an aspiring PM, understanding the full picture helps you target the right role for your strengths. If you are a hiring manager, it helps you write job descriptions that attract the right candidates. And if you are a PM trying to explain your job to your parents, you can now say “it depends on the company” with specificity.

    What do you think? I would love to hear how your company structures product roles. Comments are most welcome.

  • Product Team Structures

    You can hire the best engineers in the world. You can recruit the smartest product managers. You can buy the most expensive tools. But if your team structure is flawed, you will still move at a glacial pace.

    I have seen this pattern repeat in startups and enterprises alike. The symptoms are always the same: meetings multiply, decisions stall, and shipping even a simple feature feels like pulling teeth. The problem isn’t the people. It is the structure.

    There is a famous observation called Conway’s Law, which states that organizations design systems that mirror their own communication structure. If you have a database team, a backend team, and a frontend team, you will inevitably build a product that is stitched together from three separate pieces. And every time you want to release a new feature, you will need a meeting with all three teams to coordinate.

    In the 5Ps framework, team structure sits in the Platform phase. It is the foundation that allows—or prevents—your teams from doing their best work. Getting the strategy right means nothing if the structure won’t let your teams execute.

    Two Common Models

    There are many ways to organize product teams, but most fall into two main categories. Marty Cagan of SVPG famously distinguishes between “feature teams” and “component teams.”

    Component Teams

    This is the traditional IT approach. You group people by their technical skill or architectural layer. You have a “Mobile Team” (iOS/Android engineers), a “Backend Team” (Java/Go engineers), and a “QA Team.”

    On paper, this looks efficient. The mobile engineers sit together and learn from each other. The backend architecture stays clean because one team owns it.

    But in practice, it is a nightmare for speed. To ship even a simple feature—like adding a “Save to Wishlist” button—you need the Mobile Team to build the UI, the Backend Team to update the API, and the Database Team to modify the schema. If one team is busy, the whole feature waits.

    Feature Teams

    This is the modern product approach. You group people by the customer problem they are solving or the business outcome they are driving.

    A “Search Team” might include a product manager, a designer, two backend engineers, one frontend engineer, and a data scientist. They have everything they need to build, ship, and measure search features. They don’t need to ask permission from another team to change the database schema for their search index.

    A Concrete Example: ShopRight

    To make the discussion more concrete, let’s look at a hypothetical e-commerce company called “ShopRight.”

    The Old Way (Component Teams)

    ShopRight started with a classic structure:
    * Web Team: Owned the website.
    * App Team: Owned the iOS and Android apps.
    * Platform Team: Owned the APIs and database.

    When the Head of Product wanted to launch a “Buy Online, Pick Up in Store” feature, it was a disaster. The Web Team built the interface in two weeks. But the Platform Team was backed up with technical debt work and couldn’t touch the API for a month. The App Team realized halfway through that they needed a different API endpoint than the Web Team.

    The project took six months. By the time it launched, a competitor had already captured the market.

    The New Way (Feature Teams)

    ShopRight reorganized into “Squads” based on the customer journey:

    1. Discovery Squad: Focused on helping users find products (Search, Browse, Recommendations).
    2. Conversion Squad: Focused on the transaction (Cart, Checkout, Payments).
    3. Retention Squad: Focused on bringing users back (Loyalty, Wishlist, Notifications).

    Now, let’s look at that same “Buy Online, Pick Up in Store” feature. It falls clearly under the Conversion Squad. This team has its own backend engineers and mobile engineers. They sat down, designed the API changes they needed, updated the app and website, and shipped the first version in three weeks.

    They didn’t have to wait for the “Platform Team.” They were the team.

    Team Topologies

    While “Feature vs. Component” is a great starting point, the reality is often more complex. You can’t just have feature teams; otherwise, who maintains the shared infrastructure?

    This is where the work of Matthew Skelton and Manuel Pais in Team Topologies is incredibly useful. They identify four fundamental team types:

    1. Stream-aligned teams: These are your feature teams. They align to a stream of work (like a customer journey) and deliver value directly.
    2. Enabling teams: Specialists who help stream-aligned teams acquire missing capabilities (e.g., an accessibility expert who rotates between teams).
    3. Complicated-subsystem teams: Teams that manage highly specialized technical components (e.g., a video processing engine or a face recognition algorithm) that require deep expertise.
    4. Platform teams: Teams that provide internal services (like a deployment pipeline or a design system) to reduce the cognitive load on stream-aligned teams.

    This nuance is critical. You want most of your teams to be stream-aligned, but you support them with a strong platform team so they don’t have to reinvent the wheel.

    How to Choose

    So, which structure is right for you?

    If your goal is technical consistency and resource efficiency, component teams often win. It is cheaper to have one pool of DBAs than to embed a database expert in every squad.

    But if your goal is speed of delivery and customer value, feature teams are superior. The cost is redundancy—you might have two teams solving similar problems—but the benefit is autonomy.

    In my experience, startups should almost always use feature teams. Speed is your only advantage. As you scale, you will need to layer in platform teams to keep the chaos in check, but the core unit of delivery should remain the cross-functional squad.

    How to Use With AI

    Reorganizing a team is a sensitive human process, but AI can be a powerful tool for analyzing your structural logic before you move a single desk.

    1. The “Conway’s Law” Detector

    If you aren’t sure where your current structure is broken, paste a month’s worth of “blockers” from your status reports into an LLM.

    • Prompt: “Here is a list of reported blockers from our last 4 weeks of status updates. Analyze these to identify patterns of dependency. Which teams are most frequently waiting on other teams? Based on this, suggest which components might need to be moved into the blocked team’s ownership to increase autonomy.”
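As a sanity check on whatever clusters the LLM proposes, you can tally the dependency edges yourself with a few lines of scripting. The blocker lines and team names below are invented examples of the status-update format:

```python
import re
from collections import Counter

# Hypothetical status-report blockers; the real input would be pasted text.
blockers = [
    "Checkout: waiting on Platform Team for new payments endpoint",
    "Search: waiting on Platform Team for index schema change",
    "Mobile: waiting on Design Team for final mockups",
    "Checkout: waiting on Platform Team for webhook support",
]

# Count which teams appear as the blocking side of a dependency.
# Assumes every line follows the "waiting on <team> for <thing>" shape.
pattern = re.compile(r"waiting on (.+?) for")
dependency_counts = Counter(pattern.search(line).group(1) for line in blockers)

# The team blocking most often is a candidate for moving ownership,
# or for a cleaner platform interface, per Conway's Law.
most_blocking, count = dependency_counts.most_common(1)[0]
```

The point is not the script itself but the habit: dependency counts are evidence, and they tell you where autonomy is leaking before you redraw any boxes.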

    2. Stress-Testing Team Topologies

    When drafting new team boundaries, you can use AI to simulate the “cognitive load” on a proposed team—a key concept from Team Topologies.

    • Prompt: “I am designing a new ‘Checkout Squad’ that will own the cart, payment processing, and post-purchase email flows. Act as a devil’s advocate using the principles of Team Topologies. Is this scope too broad for one team (too high cognitive load)? Or is it appropriate? Identify potential fracture points.”

    Guardrail: AI sees logic, not relationships. It might suggest a mathematically perfect structure that fails because two key leaders hate each other. Use AI to check the theoretical soundness of your model, but make the final decisions based on your knowledge of the people involved.

    Conclusion

    Changing team structure is painful. It disrupts relationships and changes reporting lines. But it is also one of the highest-leverage changes you can make as a leader.

    If your team feels slow, don’t just ask them to work harder. Look at your org chart. Are you structured to ship value, or are you structured to ship code components? The difference is everything.

    What do you think? Have you seen a transformation from component to feature teams work well (or fail)? Comments are welcome.

  • Why This New Framework? The Story Behind the 5Ps

    Why This New Framework? The Story Behind the 5Ps

    I have been building products for over 15 years. And for most of that time, I have been looking for a map.

    Not a process. Not a methodology. A map — something that shows the full terrain of product management in a way you can hold in your head while making decisions on a Tuesday afternoon. I never found one that worked for me. So I built my own.

    The Gap

    Product management knowledge is scattered everywhere. It lives in blog posts, in books, in conference talks, and in the heads of experienced PMs who never write anything down. You pick up prioritization frameworks from one source, discovery methods from another, go-to-market thinking from a colleague. Each piece is valuable. But there is no structure to hold it all together.

    I experienced this firsthand when I moved between PM roles. The companies were different, but the underlying questions were always the same: What are we building and why? Who is this for? How do we get it to market? How do we scale? I kept solving the same categories of problems with no shared vocabulary for those categories.

    The existing frameworks I tried were either too complex or too narrow. The Pragmatic Framework from Pragmatic Institute covers 37 activities but reads as a practitioner checklist, not a mental model. Teresa Torres’s Opportunity Solution Tree is excellent for structuring discovery but operates at the feature level. None of them gave me the truly end-to-end picture — from vision and idea all the way through to revenue and happy customers. And almost none addressed what happens after you ship: who builds the team, designs the organization, and makes sure the infrastructure can support the product at scale.

    And in a world where machine learning and AI are increasingly the core of what teams are building, the old playbooks fit even less. How do you define an MVP (minimum viable product) when the model needs training data before it can do anything? How do you find product-market fit — the point where your product meets real demand — when the product gets better over time? I needed a framework that covered the full lifecycle and made sense for AI-powered products too.

    The Five Ps

    The insight was simple: products have a natural lifecycle, and the big questions a PM faces follow a sequence.

    Plan is where it starts. Vision, mission, and product strategy. This is the “why” and the “where” — before you build anything, you need to know what game you are playing.

    Problem is where you get specific. Who are you building for, and what do they actually need? Not what they say they want. What they need. Customer interviews are the primary tool here. For AI products, this phase is especially critical — you need to understand whether machine learning is genuinely the right solution.

    Product is where you build. MVP development, finding product-market fit, pricing, and packaging. But without Plan and Problem, you are building in the dark.

    Promotion is where you scale demand. Go-to-market strategy, marketing, sales support, customer loyalty. Many PMs think their job ends when the feature ships. It does not.

    Platform is where you scale the organization. Team structure, hiring the right roles, leadership development. This is the least discussed phase in most PM frameworks, and in my experience, the one where companies struggle the most.

    Five phases. A natural sequence from strategy through scale. Simple enough to remember over coffee. And the alliteration is not an accident — mnemonics work. If a framework is easy to remember, people actually use it. Especially under pressure, when nobody is pulling up a slide deck.

    An Example: DataFirst

    Imagine a startup called DataFirst that builds an ML-powered tool for detecting fraudulent insurance claims.

    Their Plan: make fraud detection accessible to mid-size carriers who cannot afford in-house data science teams. Start with auto insurance, expand from there.

    Their Problem phase reveals a surprise — after interviewing 40 claims adjusters, they learn adjusters do not want automated fraud flags. They want a tool that surfaces suspicious patterns and lets them make the call. False positives damage customer relationships, and adjusters know it.

    This reshapes the entire Product from “AI that catches fraud” to “AI that makes adjusters smarter.” They build an MVP that presents confidence scores alongside evidence. Product-market fit arrives when adjusters start using it voluntarily.

    Promotion reveals that insurance conferences and peer case studies outperform traditional marketing. And in Platform, the ML team is burning out on retraining cycles — so DataFirst hires an ML ops engineer and restructures into separate squads. This is what lets them scale from 5 carrier clients to 50.

    Skip any one phase and you have problems. Without Problem, they build the wrong product. Without Platform, they stay small forever. You will need to adapt these specifics to your context, but the five categories remain the same.

    What This Framework Is Not

    The 5Ps are not a process — real product development is messy and you will jump between phases constantly. They are not comprehensive — each P could fill a book. And they are not original in their parts. I did not invent strategy or customer segmentation. The value is in the arrangement — a structure that works for traditional products and AI-powered products alike, from a simple mobile app to a complex ML pipeline.

    How to Use With AI

    Structured frameworks are exactly what AI tools need to be useful. If you ask an AI “help me with my product,” you get generic advice. If you ask “help me with the Problem phase — specifically, help me identify underserved customer segments for my B2B (business-to-business) analytics tool,” you get something actionable.

    Use the 5Ps as a diagnostic. Paste your product strategy into Claude or ChatGPT and ask: “Which of the five areas is weakest? Where are the gaps?” The AI cannot make strategic decisions for you, but it can identify blind spots.

    Pressure-test launch readiness. Walk an AI through each P in order: “Here is our Plan, our Problem definition, our Product, our Promotion plan. What are we missing in Platform?” The sequential structure forces you to check each phase.

    AI is a facilitator, not the CEO. It can spot gaps. The strategic judgment is always yours.

    Why Share This?

    I built the 5Ps for myself. I am sharing it because every PM I have mentored has described the same gap — scattered knowledge, no unifying structure. And the gap is growing as more teams build AI-native products that demand end-to-end thinking.

    If you are early in your career, the 5Ps give you a map before you have explored the territory yourself. If you are experienced, they give you a shared vocabulary for mentoring and cross-functional conversations.

    The best framework is ultimately the one you develop for yourself based on what works for your personality, the company culture, and the market context. The 5Ps are my map. I hope they help you find yours.

    What do you think? I would love to hear how you organize your PM knowledge. Comments are welcome.