Your SaaS Product Is Becoming Liquid: The Bundle Is Unbundling
Two parallel shifts are converging: AI agents that generate interfaces on-demand, and LLMs that compile intent into code. Together, they're ending the era of fixed software. If the UI and the logic layer both go away, what's left of your SaaS product? Time for an uncomfortable exercise.
Two Rivers, One Sea
I spent the first decade of my career as an engineer. In that world, everything is Imperative. You write a specific line of code to solve a specific problem, you compile it, and you ship it. Then I moved into Product Management, where the world is Anticipatory. You try to guess every user need weeks or months in advance so you can design the right "fixed" flows.
But as someone who has lived on both sides, I'm seeing two parallel shifts converge on the same pattern:
Agent-to-User Interface: A new framework (like Google's A2UI) that stops treating the UI as a pre-built destination. Instead, it treats it as a "disposable" interface that an AI agent assembles on the fly to support a specific task. Google's already using this in products like Opal, where agents create modal dialogs, forms, and data visualizations without touching a line of code.
LLM-as-Compiler: The realization that LLMs aren't just "chatbots"; they are high-level transpilers. They take messy human intent and, in real-time, map it to a sequence of technical actions. Tools like Plang (natural language to code) and v0 (Vercel's intent-to-UI generator) are already showing us a world where "intent" creates working code instantly.
When you put these together, you realize the fundamental assumption of software—that it must be "frozen" before it reaches a user—is ending.
The "Fixed" Workflow Tax
Here's the traditional product development cycle:
- User expresses a need
- PM interprets and creates a spec
- Designer creates fixed UI flows
- Engineer implements those workflows
- User learns the workflow and adapts to it
- PM analyzes feedback, and the cycle repeats
But look at what's actually happening at each step:
The PM assumes what the user really needs, what problems they'll encounter, and which workflows will feel natural to them.
The designer assumes how users will navigate, what mental models they'll have, which information should be prominent, and what the "happy path" looks like.
The engineer assumes how the data will be structured, what edge cases matter, which validations are needed, and how different features will interact.
Even with frequent user check-ins, usability tests, and discovery sprints, we're still building static software for a dynamic world. We codify our best guesses into a fixed product, then expect every user—regardless of their context, expertise, or immediate goal—to use it the same way.
This creates the Workflow Tax: we force humans to adapt to rigid software, rather than software adapting to humans.
And yes, AI is collapsing development cycles—from three months to three weeks, even three days. But here's the catch: even instant development of fixed software is still the wrong model. You're just shipping obsolete predictions faster. The problem isn't speed; it's that you're building a fixed prediction at all.
From Paper Maps to GPS: The Liquid Stack
To understand the shift, think of the difference between a Paper Map and a GPS:
The Paper Map (Traditional SaaS): It is a fixed prediction of a route. It was printed months ago. If a road is closed or a new path opens, the map is wrong. The user has to do the hard work of "translating" their current reality into the map's frozen lines.
The GPS (Liquid Product): It doesn't predict your route; it responds to your location. When you miss a turn or hit traffic, it regenerates the path as you drive, in real-time, based on where you actually are, not where it thought you'd be.
We are moving to a "Liquid Stack": from software that is an Appliance (a fixed tool you have to learn) to software that is a Concierge (a system that learns you).
This magic happens by re-ordering how code is executed. The Liquid Stack requires three fundamental shifts:
Atomic Capabilities: Moving from monolithic features to small, independent building blocks that can be discovered and composed by AI agents. Instead of "the checkout flow," you have "Calculate Tax," "Verify Payment," "Update Inventory" as separate, callable capabilities.
Intent Resolution: A layer that translates messy human requests into sequences of capabilities. This is where LLMs act as semantic adapters—taking "I need to refund this broken item" and mapping it to the specific building blocks needed.
Trust Boundaries: A hard separation between what can be fluid (searching, showing, exploring) and what must be rigid (saving, changing, deleting). You query like water, but you write like stone.
These are the foundations for software that morphs to fit intent rather than forcing intent to fit the software.
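To make these three shifts concrete, here's a minimal sketch in TypeScript. Everything in it (the `Capability` interface, the `resolveIntent` stub, the `effect` flag) is hypothetical: one plausible way to wire the pieces together, not a reference implementation.

```typescript
// A hypothetical shape for the three shifts. Names, signatures, and the
// resolver stub are illustrative, not any real vendor's API.

// 1. Atomic Capabilities: small, independently callable building blocks.
interface Capability {
  name: string;                        // e.g. "CalculateTax"
  description: string;                 // what an agent reads to discover it
  effect: "read" | "write";            // the trust boundary lives here
  invoke(args: Record<string, unknown>): Promise<unknown>;
}

const registry: Capability[] = [
  {
    name: "CalculateTax",
    description: "Compute sales tax for a line item and region",
    effect: "read",
    invoke: async (args) => ({ tax: 0.08 * Number(args.amount) }),
  },
  {
    name: "UpdateInventory",
    description: "Decrement stock after a confirmed sale",
    effect: "write", // writes must pass a rigid, validated gate
    invoke: async (args) => ({ ok: true, sku: args.sku }),
  },
];

// 2. Intent Resolution: in practice an LLM call that maps messy intent
// to capability names; stubbed here with a trivial rule.
async function resolveIntent(intent: string): Promise<string[]> {
  return intent.includes("refund")
    ? ["CalculateTax", "UpdateInventory"]
    : ["CalculateTax"];
}

// 3. Trust Boundaries: reads flow freely, writes require explicit approval.
async function execute(intent: string, approved: boolean): Promise<void> {
  const plan = await resolveIntent(intent);
  for (const name of plan) {
    const cap = registry.find((c) => c.name === name);
    if (!cap) continue;
    if (cap.effect === "write" && !approved) {
      console.log(`Blocked ${name}: writes need human or policy approval`);
      continue;
    }
    console.log(name, await cap.invoke({ amount: 120, sku: "A-42" }));
  }
}

execute("I need to refund this broken item", false);
```

The design choice that matters here: the trust boundary is enforced in deterministic code, not left to the model's judgment.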
What This Actually Looks Like
Let's make this concrete with a story.
Sarah is a sales manager heading into a board meeting. She's walking to the conference room when she realizes the board will ask why Q4 enterprise deals are stuck. She pulls out her phone and asks her AI assistant:
"Why are our Q4 enterprise deals stalling in legal review?"
She doesn't open an app. She doesn't log into her CRM. The AI is part of her operating system—it already knows her context, her role, her access permissions.
What happens in the next 3 seconds:
The AI decomposes her intent: She needs deal pipeline data, contract metadata, stage analysis, and pattern recognition.
It identifies which atomic capabilities to orchestrate:
- Salesforce's `QueryDeals(filtered: Q4, Enterprise, Legal stage)`
- DocuSign's `AnalyzeContractPatterns(what's different about stalled deals?)`
- Her company's internal `LegalReviewTimeline` capability
- A visualization engine to make the data digestible
The AI generates an interface on her phone—not the Salesforce UI, not DocuSign's dashboard—a custom view built for this exact question:
A timeline showing 12 enterprise deals. Each one stuck on the same contract clause. The pattern is obvious: the standard IP ownership language is triggering 14-day delays with enterprise legal teams. Three deals have competing clauses highlighted in red.
There's a suggestion: "Legal recommends softening clause 7.3 for enterprise deals over $500K. Draft revision?"
Sarah taps "Yes." The AI generates revised contract language, sends it to Legal for approval, and adds a note to each stalled deal.
Total time: 8 seconds. No logins. No navigation. No searching.
By the time she walks into the boardroom, she has the answer, the pattern, and the fix already in motion.
What just happened architecturally:
The interface didn't exist until she asked the question. Her phone's OS-level AI orchestrated capabilities from Salesforce, DocuSign, and internal systems—none of which knew about each other. The AI generated the UI natively using her phone's design components. The data was queried fluidly. The contract change was written through rigid, validated gates (Legal approval required).
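For the architecturally curious, here's roughly what the agent's plan could look like under the hood. The capability names mirror the story, but the structure is a guess at how an OS-level orchestrator might represent it, and `DraftClauseRevision` is an invented name; none of these are real Salesforce or DocuSign endpoints.

```typescript
// Hypothetical plan an OS-level agent might build for
// "Why are our Q4 enterprise deals stalling in legal review?"
type Step = {
  provider: string;
  capability: string;
  args: Record<string, string>;
  effect: "read" | "write";
  requiresApproval?: boolean;
};

const plan: Step[] = [
  {
    provider: "salesforce",
    capability: "QueryDeals",
    args: { quarter: "Q4", segment: "Enterprise", stage: "Legal" },
    effect: "read",
  },
  {
    provider: "docusign",
    capability: "AnalyzeContractPatterns",
    args: { question: "what's different about stalled deals?" },
    effect: "read",
  },
  {
    provider: "internal",
    capability: "LegalReviewTimeline",
    args: { dealSet: "stalled-q4-enterprise" },
    effect: "read",
  },
  {
    provider: "internal",
    capability: "DraftClauseRevision",
    args: { clause: "7.3", scope: "enterprise deals over $500K" },
    effect: "write",        // the one rigid step:
    requiresApproval: true, // nothing changes until Legal signs off
  },
];

// The three reads run fluidly and feed the generated view; the single
// write waits for Sarah's tap and Legal's approval before executing.
```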
When Sarah closes the response, the interface disappears.
This is the liquid stack: the software morphs to fit the question, not the other way around.
Who Survives This Shift?
Not all SaaS dies in this world, but the survival criteria fundamentally change. The Workflow Tax drops to zero, and your current product becomes a commodity overnight—unless you have the right moat.
Some will survive: Stripe isn't just a payment form—it's fraud detection, compliance, and bank relationships that took decades to build. Salesforce isn't just a UI—it's the trusted system of record with validation rules and audit trails. These companies own capabilities that can't be compiled away.
But many won't: CRUD apps with nice interfaces, workflow automation that's mostly glue code, static dashboards, "All-in-One" platforms that hide value under navigation layers—these are all vulnerable when agents can generate the UI and orchestrate the logic on-demand.
The question every PM should be asking: If the interface and the logic both disappear, what's left of your product?
The Unsolved Product Challenges
This shift creates new problems we haven't solved yet:
The Liability Problem: If an AI agent generates an interface that pulls capabilities from your SaaS platform, and that interface confuses a user or causes them to make a mistake, who's liable? The SaaS provider? The agent platform? The user who crafted the prompt?
The Optimization Problem: Traditional product management relies on A/B testing interfaces and tracking user journeys. But when the interface is generated per-session, experimentation shifts to the orchestration layer: you're testing which capability compositions resolve intent most effectively, which atomic blocks have the highest fidelity, and where agent orchestration breaks down. The new metrics are intent resolution rate, capability accuracy, and orchestration efficiency—not button clicks and conversion funnels.
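As a sketch of what instrumenting that orchestration layer could look like, here's one hypothetical event shape and rollup; the field names and metric definitions are assumptions, not an established standard.

```typescript
// Hypothetical per-session telemetry emitted by the orchestration layer.
type OrchestrationEvent = {
  intentResolved: boolean;   // did the resolver produce a usable plan?
  capabilityCalls: number;   // total capability invocations in the session
  capabilityErrors: number;  // failed or low-fidelity capability results
  stepsReplanned: number;    // how often the agent had to backtrack
};

// Aggregate the new-style metrics instead of clicks and funnels.
function rollup(events: OrchestrationEvent[]) {
  const sessions = events.length;
  const resolved = events.filter((e) => e.intentResolved).length;
  const calls = events.reduce((n, e) => n + e.capabilityCalls, 0);
  const errors = events.reduce((n, e) => n + e.capabilityErrors, 0);
  const replans = events.reduce((n, e) => n + e.stepsReplanned, 0);
  return {
    intentResolutionRate: sessions ? resolved / sessions : 0,
    capabilityAccuracy: calls ? 1 - errors / calls : 1,
    orchestrationEfficiency: calls ? 1 - replans / calls : 1,
  };
}
```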
The Security Problem: When an LLM orchestrates capabilities on behalf of users, how do you prevent prompt injection attacks where users trick the AI into accessing things they shouldn't? How do you maintain role-based access control when the workflow is generated at runtime? How do you prove compliance with SOC2, GDPR, and HIPAA when the audit trail is "the AI decided to call these five functions"?
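One plausible mitigation pattern, again only a sketch with invented names, is to enforce role checks and write the audit record at the capability boundary itself, deterministically, regardless of what the prompt said. The trail then reads "which gates opened for whom", not "what the model decided".

```typescript
// Hypothetical guard at the capability boundary. The agent can ask for
// anything; permissions and auditing are enforced here, outside the LLM.
type Caller = { userId: string; roles: string[] };

const requiredRole: Record<string, string> = {
  QueryDeals: "sales:read",
  DraftClauseRevision: "legal:write",
};

const auditLog: object[] = [];

function invokeCapability(caller: Caller, capability: string, args: object) {
  const needed = requiredRole[capability];
  const allowed = needed !== undefined && caller.roles.includes(needed);

  // Every attempt is recorded, allowed or not, with the concrete inputs.
  auditLog.push({
    at: new Date().toISOString(),
    userId: caller.userId,
    capability,
    args,
    allowed,
  });

  if (!allowed) {
    throw new Error(`Denied: ${caller.userId} cannot call ${capability}`);
  }
  // ...dispatch to the real capability implementation here
  return { ok: true };
}
```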
But What About System Consistency?
Some will argue that this creates chaos—that users need fixed interfaces to build mental models, that brands need controlled experiences to maintain identity, that compliance teams need stable workflows to audit.
Liquid UIs don't mean chaos.
A well-designed liquid stack means the interface adapts to your role, task, and context while still feeling coherent.
Think of it this way: iOS maintains consistent system-level patterns (navigation gestures, notifications, security permissions), but every app inside has its own UI—DocuSign looks nothing like Salesforce, which looks nothing like Slack. Yet they all feel native to iOS because they follow the operating system's design principles.
In Sarah's example, her phone's AI orchestrated capabilities from Salesforce, DocuSign, and internal systems—but she never saw three different app interfaces. The AI generated one unified view using her phone's native design components. The DocuSign contract analysis and Salesforce deal data appeared in a single, contextual interface that felt like it belonged to her OS, not to any individual app.
That's what liquid UIs enable: adaptive interfaces that maintain system-level consistency while pulling from any underlying capability. The brand constraints remain at the OS level. The security boundaries and validation rules remain. What changes is the arrangement of components to fit what you're actually trying to do.
The companies that survive won't resist this shift—they'll be those that maintain trust and coherence while embracing it.
Your Next Move: Test Your Product's Moat
Most SaaS companies have been selling convenience: packaging capabilities into discoverable, usable bundles. Users paid because finding, integrating, and orchestrating those capabilities themselves was too hard.
But LLMs are getting good at compilation, and dynamic interfaces are making runtime generation practical. The bundle is unbundling—not into micro-SaaS, but into capabilities that agents compose on-the-fly.
So here's an exercise worth doing if you're building a SaaS product:
Strip away the UI. Strip away the pre-built workflows. Strip away anything an LLM could reasonably compile from a natural language description.
What's left?
If the answer is "not much," it's time to pivot toward infrastructure, data moats, or domain expertise that's genuinely hard to replicate.
If the answer is "actually, quite a lot," then you're probably building the right thing—you just need to think about exposing it as a protocol rather than hiding it behind a fixed interface. And if you're building something new? Design it as composable capabilities from day one, not as a monolith you'll later try to unbundle.
The companies that figure this out early will compound the advantage. The ones that wait will find themselves competing on price against commodity solutions.
Data and core capabilities have always been valuable—that's not new. What's changing is how that value gets accessed. In a world where agents orchestrate capabilities combinatorially—pulling your "Calculate Tax" alongside someone else's "Verify Address" and another's "Process Payment"—the moat becomes the fidelity of your atomic capabilities: how accurate, how reliable, and how trustworthy your building blocks are when they're composed in ways you never designed for.
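If "exposing it as a protocol" sounds abstract, here's a hedged sketch of what a capability manifest might declare. The field names are invented; the point is that the guarantees themselves become part of the contract agents compose against.

```typescript
// Hypothetical manifest an agent would discover instead of a UI.
// Field names are illustrative, not a real discovery standard.
const calculateTaxManifest = {
  name: "CalculateTax",
  version: "2.3.0",
  description: "Compute sales tax for a line item and destination region",
  input: {
    amountCents: "integer, > 0",
    region: "ISO 3166-2 code, e.g. US-CA",
  },
  output: {
    taxCents: "integer",
    breakdown: "array of { jurisdiction, rateBps, taxCents }",
  },
  effect: "read",                // no state change: safe for fluid queries
  guarantees: {
    accuracy: "rates refreshed daily from primary sources",
    latencyP99Ms: 150,
    idempotent: true,
    auditTrail: true,
  },
};
```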
The future of SaaS isn't about building better apps. It's about building better atomic capabilities that survive in a world where apps themselves are compiled on-demand. You win by becoming the Trust Anchor that the AI mesh depends on.
Think of it as moving down the stack. You're no longer selling "the experience of using our app." You're selling "the capability to do X reliably, at scale, with guarantees."
And that's a fundamentally different product challenge than what most of us signed up for.
The core PM responsibility—deciding what to build and why—doesn't change. What changes is the abstraction layer: you're building liquid capabilities, not fixed features.