From Screens to Intent
I've been thinking a lot about how much frontend work starts before the user even arrives.
We define layouts, flows, empty states, happy paths, edge cases, breakpoints, and every pixel in between. Then we polish the system until the product feels predictable and coherent.
I still think that model matters. But I don't think it is the whole picture anymore.
As AI becomes a real interaction layer, more users are starting from intent instead of navigation. They are not always moving through a fixed tree of screens. They are asking for an outcome, and increasingly they expect software to respond with the right interface for that moment.
The Old Contract Was Simple
Traditional product development gave us a pretty stable contract:
- designers define the experience
- frontend engineers implement the experience
- users discover the right screen and operate within it
The interface was the starting point. Even when it was dynamic, it was still assembled from a predefined set of design decisions.
Generative UI changes that contract in a way I find hard to ignore.
The user starts with a goal. The system decides what interface helps most. The UI becomes a response, not only a container.
That doesn't mean design disappears. It means design moves up a level. Instead of designing one static screen, we design the rules, components, and constraints that let the right screen emerge.
The Same Prompt Can Produce Different UI
One of the things I keep coming back to with Generative UI is that the same request can lead to different results depending on context.
Ask for "show me my project status" and the right interface depends on what the system knows:
- a founder might need a high-level summary with risks and momentum
- an engineer might need blocked tasks and failing checks
- a support lead might need customer impact and escalation signals
The prompt is the same. The interface is not.
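A toy sketch of that claim, with entirely hypothetical role and component names: one intent resolves to different compositions of the same primitives depending on who is asking.

```typescript
// One intent, different interfaces: composition is driven by context (here,
// just the role), not by the wording of the request. All names are invented.

type Role = "founder" | "engineer" | "support";

// Each role maps to a different arrangement of the same component vocabulary.
const LAYOUTS: Record<Role, string[]> = {
  founder: ["summary-card", "risk-list", "momentum-chart"],
  engineer: ["blocked-tasks", "failing-checks"],
  support: ["customer-impact", "escalations"],
};

function composeFor(intent: string, role: Role): string[] {
  if (intent !== "project-status") return ["not-found"];
  return LAYOUTS[role];
}

console.log(composeFor("project-status", "engineer")); // ["blocked-tasks", "failing-checks"]
console.log(composeFor("project-status", "founder"));  // ["summary-card", "risk-list", "momentum-chart"]
```

Real systems would obviously derive context from far more than a role string, but the shape of the decision is the same: intent plus context selects a composition.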
This is the shift that makes the topic so interesting to me as a frontend engineer. We are no longer only composing pages. We are building systems that can compose experiences from primitives.
That requires more than clever prompting. It requires strong components, clear semantics, predictable state transitions, and a way for models to request UI safely.
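"A way for models to request UI safely" can be made concrete with a small sketch. The idea, under assumed component names, is that the model never emits markup; it emits untrusted JSON that is validated against a closed set of known requests before anything renders.

```typescript
// A minimal sketch of safe model-requested UI: the model may only ask for
// components from a closed, typed set. All component names are hypothetical.

type UIRequest =
  | { component: "summary-card"; props: { title: string; risk: "low" | "medium" | "high" } }
  | { component: "task-list"; props: { filter: "blocked" | "all" } }
  | { component: "metric"; props: { label: string; value: number } };

const ALLOWED_COMPONENTS = new Set(["summary-card", "task-list", "metric"]);

// The model emits untyped JSON; we validate before anything renders.
function parseUIRequest(raw: unknown): UIRequest | null {
  if (typeof raw !== "object" || raw === null) return null;
  const candidate = raw as { component?: unknown; props?: unknown };
  if (typeof candidate.component !== "string") return null;
  if (!ALLOWED_COMPONENTS.has(candidate.component)) return null;
  if (typeof candidate.props !== "object" || candidate.props === null) return null;
  return candidate as UIRequest; // deeper per-component prop validation elided
}

// A request for anything outside the set is rejected, not rendered.
console.log(parseUIRequest({ component: "raw-html", props: { html: "<script>" } })); // null
console.log(parseUIRequest({ component: "metric", props: { label: "Open bugs", value: 3 } }));
```

In production you would validate props per component (a schema library is the obvious fit), but the safety property comes from the closed set itself.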
MCP Is the Missing Transport Layer
If Generative UI is the experience layer, MCP (the Model Context Protocol) is one of the protocols that make it feel practical.
MCP gives models a structured way to discover capabilities, call tools, access data, and coordinate with application logic. Instead of stuffing everything into text, it creates a cleaner contract between the model and the software around it.
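To make "a structured way to discover capabilities and call tools" tangible, here is an illustrative shape for that contract. This is not the actual MCP SDK, just plain TypeScript modeling the pattern: the server lists what it can do, and the model calls a capability with structured arguments instead of free text.

```typescript
// Illustrative only — not the real MCP SDK. This models the contract pattern:
// capability discovery, then structured calls. All names are hypothetical.

interface ToolDescriptor {
  name: string;
  description: string;
  // A JSON-Schema-like description of the expected input.
  inputSchema: { type: "object"; properties: Record<string, { type: string }> };
}

interface CapabilityServer {
  listTools(): ToolDescriptor[];
  callTool(name: string, args: Record<string, unknown>): unknown;
}

// A toy server exposing one capability a model could discover and invoke.
const projectServer: CapabilityServer = {
  listTools: () => [
    {
      name: "get_project_status",
      description: "Summarize project health for the current user",
      inputSchema: { type: "object", properties: { projectId: { type: "string" } } },
    },
  ],
  callTool: (name, args) => {
    if (name !== "get_project_status") throw new Error(`unknown tool: ${name}`);
    return { projectId: args.projectId, blockedTasks: 2, risk: "medium" };
  },
};

// Discovery first, then a typed call — no prompt parsing involved.
console.log(projectServer.listTools().map((t) => t.name)); // ["get_project_status"]
console.log(projectServer.callTool("get_project_status", { projectId: "apollo" }));
```

The point is the separation: the model reasons over descriptors, and the application stays in control of what each call actually does.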
That matters because UI generation only becomes useful when it is connected to real context:
- data about the user
- permissions and product state
- actions the system can actually perform
- design-system components that can be composed with confidence
Without that connection, generated UI is just a mockup. With it, the interface becomes operational.
What I like about MCP is that it shifts the conversation away from "can the model output JSX?" and toward a much better question: "what capabilities should the system expose so the right interface can be assembled for this situation?"
Frontend Systems Need to Evolve
This doesn't replace frontend engineering. If anything, it raises the bar for it.
In a world of Generative UI, the valuable frontend work moves toward:
- component systems with clean boundaries and strong defaults
- schemas and contracts that describe what can be rendered
- state models that remain reliable even when the entry point is open-ended
- guardrails that keep generated experiences accessible, consistent, and safe
The practical question is no longer "how do I render this page?"
It is closer to:
"What set of composable UI primitives, data contracts, and policies lets the system build the right interface repeatedly?"
That is a different mindset. It pushes us toward systems thinking. I also think it makes frontend work more strategic, not less.
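One slice of "schemas and contracts that describe what can be rendered" can be sketched as a registry where every component ships its contract: safe defaults plus a validator that coerces untrusted props. The registry and component names below are invented for illustration.

```typescript
// Hypothetical sketch: each registered component declares what props it
// accepts and what it renders with when input is bad, so generated output is
// always grounded in primitives the design system vouches for.

interface ComponentContract<P> {
  defaults: P;                  // strong defaults: renders sensibly with no input
  validate(props: unknown): P;  // coerce untrusted props into the contract
}

const registry: Record<string, ComponentContract<{ label: string; tone: string }>> = {
  "status-badge": {
    defaults: { label: "Unknown", tone: "neutral" },
    validate(props) {
      const p = (props ?? {}) as { label?: unknown; tone?: unknown };
      return {
        label: typeof p.label === "string" ? p.label : "Unknown",
        tone: p.tone === "danger" || p.tone === "success" ? (p.tone as string) : "neutral",
      };
    },
  },
};

// Malformed generated props degrade to defaults instead of breaking render.
console.log(registry["status-badge"].validate({ label: 42, tone: "sparkly" }));
// → { label: "Unknown", tone: "neutral" }
console.log(registry["status-badge"].validate({ label: "Deploy blocked", tone: "danger" }));
// → { label: "Deploy blocked", tone: "danger" }
```

Because the contract lives with the component, every caller — human-authored page or model-assembled view — gets the same guarantees.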
This Is Where Things Get Hard
The demos are easy to love. The production constraints are where the real work starts.
Generative UI introduces new failure modes:
- the wrong component can appear for the situation
- the model can ask for more than the user should be allowed to see
- the interface can become inconsistent across repeated requests
- accessibility can regress if generated output is not grounded in trusted primitives
This is why I don't think the answer is free-form generation everywhere.
The strongest implementations will likely feel constrained in the right ways. Models should orchestrate from a well-designed set of components, actions, and patterns. The system should know what is allowed, what is not, and how to degrade gracefully when confidence is low.
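"Degrade gracefully when confidence is low" might look something like the following sketch, where the threshold, the fallback view, and how confidence is estimated are all assumptions of mine rather than anything prescribed:

```typescript
// Sketch of confidence-gated orchestration: a generated layout is honored
// only if every component is allowed AND confidence clears a threshold;
// otherwise the user gets a predictable default. All names are hypothetical.

interface GeneratedLayout {
  components: string[];
  confidence: number; // 0..1, however the system estimates it
}

const FALLBACK = ["standard-dashboard"];
const THRESHOLD = 0.7;

function chooseLayout(generated: GeneratedLayout, allowed: Set<string>): string[] {
  const allAllowed = generated.components.every((c) => allowed.has(c));
  if (!allAllowed || generated.confidence < THRESHOLD) return FALLBACK;
  return generated.components;
}

const allowed = new Set(["summary-card", "task-list", "standard-dashboard"]);

console.log(chooseLayout({ components: ["summary-card"], confidence: 0.9 }, allowed));
// → ["summary-card"]
console.log(chooseLayout({ components: ["summary-card"], confidence: 0.4 }, allowed));
// → ["standard-dashboard"]  (low confidence)
console.log(chooseLayout({ components: ["raw-iframe"], confidence: 0.95 }, allowed));
// → ["standard-dashboard"]  (disallowed component)
```

The fallback path is the important part: the product always has a known-good answer, so a bad generation costs the user a less tailored view, never a broken one.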
Generative UI is not about surrendering the interface to the model. It is about creating a system where intent can shape the interface without breaking the product.
What I Find Most Interesting
The web taught us to think in pages. Design systems taught us to think in components. AI-native products will push us to think in intent.
That does not erase everything we already know. It reframes it.
We still need strong visual design. We still need accessibility. We still need hierarchy, motion, feedback, and trust. But now we also need interfaces that can adapt their shape when the context changes.
That is why I keep coming back to Generative UI. For me, it is not a novelty layer on top of chat. It is a new design and engineering problem space, and it is one we can already start building for today.