When AI Creates Everything, Who Decides What Matters?
A conversation between a Product Designer and Product Manager on identity, creativity, and the uncomfortable future of digital work in the era of AI.
There’s a question neither of us wanted to ask out loud.
We were both using AI tools daily. Both getting work done faster. Both creating things that would have taken weeks in just hours. And both feeling... something we couldn’t quite name.
It wasn’t guilt. It wasn’t even impostor syndrome. It was simpler and more unsettling than that.
If AI can do most of what we do, what exactly are we here for?
That’s the conversation that led to this article. Not a debate, but an honest examination of what’s happening to our roles as product builders in an age where machines can build too.
This article represents an ongoing conversation between a Product Designer and a Product Manager about where our roles are heading in an AI-driven world. Special thanks to Vadym for being willing to explore these uncomfortable questions publicly.
If you’re a product person wrestling with similar questions, subscribe to Product Release Notes for more honest conversations about AI, product development, and what happens next.
What You’ll Learn
Why efficiency isn’t the same as creativity (and why that distinction matters now)
The hidden cost of AI personalization in user experience
How product roles are shifting from makers to curators
What skills actually matter when AI handles execution
The uncomfortable truth about AI and taste
Why the future of product work is about saying no, not shipping faster
The Shift Nobody Prepared Us For
I remember the exact moment it hit me.
I was drafting a product spec for a new dashboard feature. Usual process: stare at blank page, outline the structure, fill in details, review, revise, send to team. Three hours minimum, usually more.
This time, I asked Perplexity to draft it. Thirty seconds later, I had a comprehensive spec. Proper structure, technical considerations, edge cases, acceptance criteria. Everything.
And it was... good. Better than my first drafts usually are.
That’s when I realized this wasn’t about saving time anymore. According to a BusinessWire report, 98% of product managers are already using AI at work. But here’s what the statistic doesn’t capture: we’re not just using AI as a tool. We’re using it to do the thinking we used to do ourselves.
The question isn’t “Will AI replace product managers?” anymore. It’s “What’s my role when AI can do 60% of what I used to spend my time on?”
I was designing an onboarding refresh, one of those seemingly simple yet messy flows. The work was all the usual designer stuff: map the steps, write microcopy, shape the hierarchy, and tune the visuals. Then comes the last part: iterating on variations until everything looks and feels good.
Out of curiosity, I fed the entire problem to ChatGPT. I provided context, constraints, the user’s intent, the business goal, and the tone. I asked for a direction. It gave me three. They were not random noise, but actual flows with rationale, copy options, UI patterns, accessibility notes, and edge cases to consider.
Then, I did the thing that changed everything: I asked it to argue with itself.
“Now critique these like a skeptical designer. Where would users drop off? What’s unclear? What would break on small screens? Rewrite the copy to inspire trust, not conversion.”

I remember being amazed by the quality of its answers. They were uncomfortably sharp. I caught myself thinking, “Wait... I didn’t just outsource a layout. I outsourced the first round of judgment.”
That’s when it stopped feeling like “faster design” and started feeling like “different design.” If a machine can generate competent screens on demand, then producing screens isn’t my value. My value is in framing the problem so that the right screens exist, setting boundaries, making trade-offs explicit, protecting users, and deciding what “good” means for this product, these people, and this context.
The scary part isn’t that AI can draw rectangles. It’s that it can imitate our process convincingly. If we’re not careful, we’ll let it imitate our thinking, too.
The Problem With Personalization
One of AI’s biggest promises is hyper-personalization. Every user gets a tailored experience. Sounds perfect, right?
Personalization is one of those ideas that automatically sounds like progress.
“Make it relevant.” “Make it tailored.” “Make it feel like it knows me.”
Sure, but that’s also the recipe for a very polite cage.
The problem isn’t personalization itself. It’s personalization without intention. When the product can adapt to you in real time, you’re not just designing an interface; you’re designing a steering mechanism. If you don’t clearly define your goal, you’ll end up optimizing the easiest metric: attention, clicks, purchases, or retention. At that point, “personal” becomes a prettier word for “predictable.”
I’ve seen this happen in two common ways.
First, personalization can kill discovery.
You open an app, and everything is “For you.” The feed is curated, the recommendations are safe, and the UI learns what you skip over. Gradually, you stop exploring, and you’re stuck in a never-ending loop of content you like. This happens not because you don’t want to, but because the product stops inviting you to explore. Productive friction is removed from browsing, along with the joy of finding something unusual and the little detours that develop taste. The system gets better at giving you what you already like and worse at helping you discover new things you’ll like.
Second, personalization creates echo chambers.
AI is great at pattern matching. If you click on two pieces of content about anxiety regarding the economy, it will happily build you a whole world where that anxiety is the central truth. The same goes for shopping. Same with news. Same with learning. Users think they’re being informed, but they’re actually being reinforced. This feedback loop is subtle because the content isn’t obviously extreme. It’s just consistently one-sided. The algorithm doesn’t need to radicalize you. It only needs to narrow your field of view.
Here’s the underlying design problem: personalization creates a power imbalance.
The system knows the user’s patterns. However, the user doesn’t know the system’s goals.
For me, the ethical line comes down to a simple question: Is this personalization serving or replacing the user’s intent?
There are good versions of personalization. In my opinion, they have a few traits (sketched in code after this list):
They’re legible, meaning the product can explain why something is shown.
They’re controllable: Users can adjust them, reset them, or turn them off.
They protect exploration. Alongside “For you,” there’s always “Surprise me,” “Outside your bubble,” “Popular with opposite preferences,” and “Editorial picks.” These options aren’t optimized purely for your past.
They avoid pressuring users to behave in ways they didn’t explicitly choose.
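To make these traits concrete, here’s a minimal sketch in TypeScript of what a feed assembler honoring them could look like. Everything here (the names, the types, the numbers) is a hypothetical illustration, not an API from any real product:

```typescript
// A minimal sketch of personalization that stays legible, controllable,
// and exploration-friendly. All names and thresholds are hypothetical.

interface FeedItem {
  id: string;
  title: string;
  reason: string; // Legibility: every item carries a human-readable "why you see this".
}

interface PersonalizationSettings {
  enabled: boolean;         // Controllability: users can turn personalization off entirely.
  explorationShare: number; // Share of the feed reserved for discovery (0..1).
}

function buildFeed(
  forYou: FeedItem[],        // ranked by the user's past behavior
  outsideBubble: FeedItem[], // editorial picks, surprises, opposite tastes
  settings: PersonalizationSettings,
  size: number
): FeedItem[] {
  // Respect the off switch: fall back to a non-personalized feed.
  if (!settings.enabled) {
    return outsideBubble.slice(0, size);
  }

  // Protect exploration: reserve slots that past behavior can never claim,
  // no matter how confident the ranking model is.
  const explorationSlots = Math.max(1, Math.round(size * settings.explorationShare));
  const personalizedSlots = size - explorationSlots;

  return [
    ...forYou.slice(0, personalizedSlots),
    ...outsideBubble.slice(0, explorationSlots),
  ];
}
```

The design choice that matters: the exploration share is a floor, not a fallback. The optimizer can never earn those slots back by performing well, which is exactly the kind of decision a metric will never make for you.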
Once AI personalization becomes sophisticated enough, UX stops being “user experience.” It becomes user trajectory. And if we don’t design that trajectory carefully, we’ll end up with products that feel effortless and make people feel smaller.
This reminds me of what’s happening in product strategy right now. We’re optimizing for engagement metrics that AI can measure, and losing sight of experiences that matter but can’t be quantified.
Spotify got this right with Discover Weekly. Yes, it learns your taste. But it also deliberately pushes you outside your comfort zone. That tension between comfort and surprise requires human judgment. AI would optimize you into listening to the same 20 songs forever because that maximizes immediate satisfaction.
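To see how that plays out mechanically, here’s a toy epsilon-greedy recommender in TypeScript. This is the textbook bandit pattern, not Spotify’s actual system; every name and number is made up for illustration:

```typescript
// Toy epsilon-greedy recommender. With epsilon = 0 it locks onto early
// winners forever; a small epsilon is serendipity as a deliberate choice.

function recommend(avgRating: number[], epsilon: number): number {
  if (Math.random() < epsilon) {
    // Exploration: recommend something regardless of past performance.
    return Math.floor(Math.random() * avgRating.length);
  }
  // Exploitation: recommend the item with the best average rating so far.
  let best = 0;
  for (let i = 1; i < avgRating.length; i++) {
    if (avgRating[i] > avgRating[best]) best = i;
  }
  return best;
}

function updateRating(avgRating: number[], playCount: number[], item: number, rating: number): void {
  playCount[item] += 1;
  avgRating[item] += (rating - avgRating[item]) / playCount[item]; // incremental mean
}

// Demo: with epsilon = 0, an item that got lucky early wins forever.
const avg = [0.5, 0.0, 0.0]; // item 0 happened to be rated first
const counts = [1, 0, 0];
for (let t = 0; t < 100; t++) {
  const pick = recommend(avg, 0);       // epsilon = 0: pure "immediate satisfaction"
  updateRating(avg, counts, pick, 0.5); // the user is merely satisfied
}
// counts is now [101, 0, 0]: items 1 and 2 were never tried once.
```

The loop never fails, never complains, and never discovers anything. Choosing a non-zero epsilon, and deciding how large it should be, is the human judgment call.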
The best products have always balanced relevance and serendipity. AI kills serendipity by design. It’s optimized to give you exactly what you expect, which creates experiences that feel safe but ultimately boring.
Exactly. And we’ve spent years treating every extra tap as a failure, so the idea of ‘zero-friction’ with AI seems like progress. However, a truly frictionless UX often turns into autopilot. Autopilot is efficient, but it can also be quietly controlling.
The rule is simple: AI can reduce effort, but it shouldn’t reduce agency.
So I ask (a rough code sketch follows the list):
If the user already knows what they want (e.g. pay, export or book), let the AI speed things up.
If they are forming intent (e.g. money, privacy, content or settings), add friction on purpose.
If it’s reversible, provide more guidance. If it’s costly or difficult to undo, slow down.
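Here’s a rough sketch of how those rules could live in code: a small policy that decides how much friction an AI-assisted action gets. The types, names, and categories are my assumptions for this example, not a real API:

```typescript
// Illustrative friction policy for AI-assisted actions.
// All names and categories here are assumptions, not a real API.

type Friction = "auto" | "confirm" | "pause";

interface ActionContext {
  userStatedIntent: boolean; // the user explicitly asked for this (pay, export, book)
  sensitiveDomain: boolean;  // money, privacy, content, settings
  reversible: boolean;       // can the user undo it cheaply?
}

function frictionFor(action: ActionContext): Friction {
  // Costly or hard to undo: slow down, always.
  if (!action.reversible) return "pause";

  // Intent is still forming in a sensitive domain: add friction on purpose.
  if (action.sensitiveDomain && !action.userStatedIntent) return "confirm";

  // The user knows what they want and it's reversible: speed things up.
  return "auto";
}

// Example: an AI proposes bulk-archiving the user's old projects.
// It touches their content, it's reversible, and they never asked for it.
const decision = frictionFor({
  userStatedIntent: false,
  sensitiveDomain: true,
  reversible: true,
}); // "confirm", not "auto"
```

The point isn’t the specific thresholds. It’s that “how much friction” becomes an explicit, reviewable decision instead of whatever the default happens to be.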
This concept is nothing new. NN/g published a perfect explanation of this principle six years ago, long before the AI hype. And to dig even deeper, I can highly recommend this article on IDF, which explains what we call ‘meaningful friction’: designed pauses at moments that require thought.
So the additional risk that AI brings today is the convenience of having choices made for you.
When Everyone’s Role Looks The Same
Here’s where it gets uncomfortable for both of us.
The lines between our roles are blurring in ways I didn’t anticipate. I’m using AI to generate specs, prototypes, even user research synthesis. Vadym is using AI to generate design variations, copy, and interaction flows. Engineering is using AI to write code.
At some point, aren’t we all just... prompting things into existence?
If the tool is the same and the output is similar, what makes my contribution different from a designer’s? Or an engineer’s?
I feel this blur too. A few months ago, I presented AI-generated wireframes in Lovable and an AI-generated research summary from ChatGPT. And the uncomfortable thought was: if we can all produce plausible artifacts now, what exactly is the purpose of each role?
So I think the boundaries won’t be defined by what you can produce. Instead, they’ll be defined by what you’re accountable for and the judgment you apply.
PM still owns intent. This includes what problem we solve, for whom, why now, what we will not do, what success means and what trade-offs we accept.
Design still owns the human contract. This involves understanding the user’s mental model, shaping behavior without manipulation, creating clarity and trust, and making the experience coherent across time, devices and edge cases.
Engineering still owns reality. This covers what is feasible, secure, scalable, and maintainable, and how the system behaves under stress, not just in demos.
I believe we’re all becoming generalists in a way. But not in the sense of “everyone doing everything”. It’s more like being a T-shaped operator: having a broader reach across the stack, but with deeper judgment in one domain. You might write a specification, sketch a workflow, and generate code. However, it is in moments of ambiguity and risk, when the right answer depends on principles, context, and consequences, that your value will show through.
Totally, I think the answer is context.
Last month I generated that product spec with AI. It was comprehensive, well-structured, technically sound. It also suggested features that would have taken six months to build and served 2% of our users.
Because the AI doesn’t know our engineering capacity. It doesn’t understand our strategic priorities. It can’t read the political dynamics of which stakeholders need to be brought along. It doesn’t know that our top competitor just launched something that changes the entire calculation.
I needed to edit that spec with context only I had. Then I needed a designer to edit it with UX context only they understood. Then engineering needed to edit it with technical reality only they knew.
The AI got us 60% there. Human collaboration got us the other 40%. And that 40% is everything. But people’s fear is: companies will look at that and say “60% is good enough, and it’s free.”
The Skills That Actually Matter Now
That fear is rational. Many companies will look at the ‘60%’ figure and consider the job done. The damage won’t be apparent in a demo; it’ll manifest later in the form of churn, increased support load, trust erosion, and brand decay.
That’s why I keep saying designers should focus on taste, synthesis and relationships.
Taste, not “pretty”. It’s making judgments within constraints. When AI can generate infinite options, taste means choosing what’s right for these users, this product, and this moment, as well as knowing what to remove. AI generates. Taste commits.
Synthesis: AI aggregates. Designers must connect the dots between messy research, behavior, business, technological limitations and ethics to create a clear direction and identify trade-offs. It’s about seeing the real problem behind the noise.
And relationships. As production speeds up, alignment becomes the bottleneck. This involves trust, influence, negotiation, de-escalation and stakeholder context — the human work that transforms a good draft into a good product. AI can’t do politics, credibility or empathy.
I’ve been thinking about this constantly. I keep coming back to three things that AI can’t fundamentally do, which align with what you said:
Strategic thinking beyond data. IBM reports that the global average cost of a data breach is $4.44 million, and shadow AI incidents add an average of $670,000 to breach costs. PMs need to think about second-order effects. Not just “Can we build this AI feature?” but “What happens when this AI feature fails in production? What’s our exposure? Who gets hurt?”
That requires synthesizing business context, technical architecture, user psychology, and regulatory environment. AI can inform each piece. Only humans can turn them into judgment calls.
Communication that builds trust. AI can draft messages. It cannot build relationships. The PM who can explain a difficult tradeoff in a way that brings stakeholders together will always be valuable. That requires empathy, emotional intelligence, and understanding human motivation in ways that can’t be automated.
Knowing what not to build. This might be the most valuable skill in an AI-driven world. Everyone wants to ship AI features fast. The valuable PM is the one who can say “We shouldn’t build this” and defend that decision with clarity.
The Uncomfortable Truth About Efficiency vs. Creativity
Let’s address what we’ve both been dancing around.
AI isn’t making us more creative. It’s making creativity optional.
We’re becoming more efficient, absolutely. What took three days now takes three hours. But efficiency and creativity aren’t the same thing.
Efficiency is doing things faster. Creativity is knowing which things matter.
AI can generate a hundred product ideas in seconds. It cannot tell you which one aligns with your company’s actual capabilities, serves your users’ real needs, and positions you strategically against competitors.
That curation, that judgment, that taste, it still requires humans. The question is whether companies will value it.
I agree — the cheapness of AI-generated output makes it easy to confuse creativity with productivity.
We can generate mockups, flows, copy, and ideas at a speed that would have sounded like science fiction two years ago. But most of that is execution. It’s variation. It’s remixing existing patterns.
My worry is that we’ll produce more and call it creativity.
To me, creative design has a very specific signature: it changes the way people understand the problem. It’s not just about how quickly we deliver a solution.
So if I remove the visuals and keep only the decision, is it still interesting? Does it offer a fresh perspective on the user’s needs? Does it make a smart trade-off that feels inevitable in hindsight? Does it create trust, not just conversion? Many other questions pop up in my mind.
AI can generate “cool.” It can generate “polished.” It can even generate “different.”
However, it struggles with “right” because “right” depends on context, values, and consequences, not just patterns.
Here’s my test for any AI-generated work: Can I defend this decision without referencing what AI suggested?
If I can’t explain why this product direction matters independent of what the AI told me, then I’m not doing product management. I’m just being a sophisticated prompt engineer.
The future belongs to PMs and designers who use AI as a tool for creativity, not a replacement for it.
What This Means For How We Work Together
I love your test, and I’d like to add one more: Can we defend it together as a product choice rather than as an AI output?
The old relay race, where the PM writes a spec, the designer turns it into screens, and the engineers ship it, was already fragile. Now it’s broken. With AI, anyone can generate “artifacts.” Therefore, collaboration can’t be about passing documents. It has to be about shared judgment.
Exactly, so I think we need to be co-creating much earlier in the process. The best products I’ve worked on recently came from tight collaboration between strategic thinking and user-centered design from day one.
When I’m using AI to explore product directions and you’re using AI to prototype concepts, we should be doing that together. Not in sequence. Because the magic happens in the conversation between business strategy and user experience, not in either one alone.
The teams that will win are the ones where PM and designer are almost indistinguishable in the early phases. We’re both asking: What should we build? Why does it matter? How do we know it’s working?
AI makes that collaboration more important, not less.
So in practice, it looks less like “handoffs” and more like a shared lab session, doesn’t it?
Before AI, we would spend days producing artifacts just to have a real conversation. Now, we can generate artifacts in minutes, so the conversation itself becomes the valuable part.
In the past, we followed this approach: “PM writes requirements, then designer mocks.” Now, however, we can work side-by-side.
The PM prompts for risks, edge cases, rollout strategy, and scope cuts.
I prompt for flows, information hierarchy, copy directions, and failure states.
Then, we compare our outputs and immediately perform the human part of the process: we choose what we believe, reject what’s wrong, and identify what we need to validate.
It feels like pairing in engineering. There are fast loops and lots of “yes” and “no, because.”
Looking Ahead: A Prediction We’ll Probably Regret
All right, let’s make some predictions so our readers can tell us how wrong we were in a few years.
I think by 2030, the term “product manager” will mean something completely different than it does today. The execution part, the documentation, the coordination: AI will handle most of that. What remains will be almost entirely judgment-based work.
The PMs who thrive will be the ones who got really good at:
Saying no with conviction
Synthesizing context AI can’t access
Building relationships across functions
Thinking in systems, not features
The PMs who struggle will be the ones who defined their value by output volume rather than decision quality.
The only safe prediction about 2030 is that whatever I say now will seem outdated by then, because AI evolves faster than I can update my website.
But okay, here’s my prediction.
By 2030, “designer” will mean less “person who makes screens” and more “person who shapes decisions through experience.” Execution will be inexpensive. Interfaces will be generated. Variations will be infinite. The differentiator will be the ability to consistently steer a product toward clear, trustworthy, and human-centered outcomes.
What changes?
Design craft becomes a baseline, not a differentiator. This is not because craft disappears, but because the first 80% is automated, allowing anyone to produce something “pretty.”
Design shifts upstream. More time is spent defining problems, setting principles, deciding on defaults, and designing interactions between systems, not just within screens.
Designers become guardians of trust. AI will optimize for what moves a metric. Designers will need to optimize for what protects users.
What stays the same?
Humans are still humans. Emotions like confusion, fear, impatience, curiosity, and pride won’t disappear. That’s why I wrote a book called Emotional UX. Even the practical parts (e.g., how to create attractive designs in Figma) are already outdated, but the book stands on the fundamentals of human psychology. Despite any technological advances, learning psychology is, was, and always will be extremely important for designers.
Taste and clarity still win. People don’t fall in love with features. They fall in love with experiences that make sense and feel fair.
Good design is still about making trade-offs. However, there is more temptation to pretend there are none.
Final Thoughts
We started this conversation trying to understand what our roles are becoming. I’m not sure we found a definitive answer. But maybe that’s the point.
The future of product work isn’t about having all the answers. It’s about being comfortable asking the hard questions. About slowing down when everyone else is shipping fast. About choosing creativity over efficiency when it matters.
AI is inevitable. But creativity, judgment, and taste? Those are choices we make every single day.
If there’s one thing I want people to take away from this, it’s this: AI will multiply your output. However, it won’t protect your judgment.
We’re entering a phase where it’s easy to appear competent. It’s easy to ship. It’s easy to generate artifacts that feel finished. The real skill lies in noticing when “good enough” becomes your standard and when speed makes decisions for you.
In the end, AI doesn’t have taste. It doesn’t have values. It doesn’t have to live with the consequences. But we, designers, do.
How are you navigating this shift in your own work? Are you feeling more creative or just more efficient? And what skills are you focusing on developing?
Drop your thoughts in the comments. Let’s figure this out together. 👇