Generative AI App Development Is 80% UX
Most generative AI apps don't fail because the model is weak. They fail because the UX is lazy.
That's the part people still get wrong about generative AI app UX design. Everyone talks about model quality, latency, and stack choices. Sure, those matter. But if users can't tell what the system is doing, can't refine the output, can't trust the result, and can't recover from bad generations, your app is already in trouble.
And the evidence is piling up. Developers are adopting AI fast, users are downloading these apps at absurd scale, and trust is still fragile. In this article, I'll break down the six UX decisions that separate a clever demo from a product people actually keep using.
What Generative AI App Development Really Means
Hottest take first: most teams obsess over the wrong thing. They argue about model choice, retrieval quality, latency budgets, context windows — all the stuff that looks smart on a whiteboard — and then act surprised when the product still feels shaky in a live demo.
I think that’s the central mistake. Users don’t care about your stack nearly as much as your team does. They care whether the thing behaves in a way they can read, trust, and recover from when it goes sideways.
A CTO said almost exactly that to me a few weeks ago: “The model is good, but the product still feels unreliable.” Not because the answers were nonsense. Because nobody could tell what the system was doing, why it stalled for a few seconds, whether an answer was confident or flimsy, or how to repair a bad output without scrapping everything and starting over. I’ve seen that movie before: 90 seconds of dead air, one blinking cursor, six people pretending not to notice. That’s the part people remember. Not your architecture diagram.
The demand is real. Base44 reported 1.7 billion generative AI app downloads and $1.9 billion in revenue in H1 2025. So this isn’t a market skepticism problem. People clearly want AI products. They just don’t want awkward chatbot UX wearing a fancy blazer and calling itself innovation.
Buried in the middle is the part that actually matters: generative AI app development isn’t just model integration. Sure, integration matters. It’s just often easier than building something a real person can use to finish a task without feeling like they’re negotiating with a moody intern.
Research points in the same direction. A 2025 TU Wien thesis found GenAI performs best in text-heavy UX work like questionnaires and transcript analysis. It struggles more with high-judgment tasks like wireframing and usability testing. That should change product scope on day one. If your system is strong at drafting, summarizing, extracting information, or suggesting next steps, present it that way. Don’t market it like an all-purpose expert if it plainly isn’t one.
And yes, tiny interface choices punch way above their weight. Groovy Web reported that perceived wait time drops by 55–70% when output streams in real time, even when total generation time stays exactly the same. Same backend speed. Different feeling. I’d argue that “different feeling” is half the product.
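If you want to see how small that change is in code, here's a rough sketch of streaming output into the page as it arrives. The endpoint and the idea of dumping text straight into a DOM element are placeholder assumptions, not a recommendation for any particular stack.

```typescript
// Minimal sketch: render text as it streams in, so perceived wait drops
// even though total generation time is unchanged. The endpoint is hypothetical.
async function streamCompletion(prompt: string, target: HTMLElement): Promise<void> {
  const response = await fetch("/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });

  if (!response.ok || !response.body) {
    // Fall back to a visible error state instead of a silent blinking cursor.
    target.textContent = "Generation failed. Try again or edit your request.";
    return;
  }

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  target.textContent = "";

  // Append each chunk the moment it arrives.
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    target.textContent += decoder.decode(value, { stream: true });
  }
}
```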
So what do you actually do differently?
- Pick the jobs AI should handle well, then hand off high-judgment moments to human-in-the-loop UX instead of pretending the model can do everything
- Give users a refinement flow so they can edit, steer, and improve weak outputs without re-prompting from scratch every single time
- Add trust signals people can actually read: status cues, scope limits, editable outputs, plain indicators of what’s happening now
If you want patterns for that kind of product thinking, this breakdown of AI Native Applications UX Design Patterns is worth your time.
That’s the real job: build something people can understand, steer, correct, and rely on. Not just something your team can say has a model attached to it.
The strange part is this: sometimes the smartest AI product decision isn’t making the model sound more human — it’s making the interface more honest. How often do teams admit that out loud?
Why Most GenAI Apps Fail on UX, Not AI
Everybody says the same thing when a genAI product flops: the model wasn't good enough. It hallucinated. It was too slow. It needed better retrieval, a bigger context window, another round of tuning. That's the standard postmortem.

And sure, sometimes that's true. If your chatbot invents a court case or freezes halfway through an answer, you've got an AI problem. Obvious stuff. But I think that's become a lazy explanation for products that were actually failing somewhere less glamorous and way more fixable.
We shipped one of those. On paper, it looked solid. Strong model, clean architecture, retrieval polished enough that engineers could sketch the whole flow on a whiteboard and feel great about it. Internal demos went well for weeks. Then actual users showed up and absolutely hated using it.
Not because it was dumb. Because it was slippery.
The app always sounded confident, even when it was drifting. It hid the prompt logic. It didn't show how it got to an answer. And when people wanted to recover from a bad turn, the grand solution was basically a “try again” button, which is product-speak for: good luck.
That's the missing piece people skip. Most generative AI apps don't lose users because every output has to be perfect. They lose users because people can't tell what's happening, can't guide the system without friction, and can't see when the machine is guessing instead of knowing.
That matters more now than it did even a year ago, because this isn't some tiny experimental category anymore. Base44, citing Sensor Tower, reported 1.7 billion generative AI app downloads and $1.9 billion in revenue in the first half of 2025 alone. If your app feels weird for five minutes, someone can bounce to three competitors before their coffee cools down.
People keep talking about intelligence like that's the whole game. It isn't.
A lot of teams are really shipping UX mistakes in an AI wrapper:
- Hidden prompts. This one drives me nuts. If better output depends on prompting, then don't make users poke at an empty box like they're trying to crack a safe at 11:40 p.m. Show assumptions. Let them edit inputs. Offer prompt suggestions tied to the job they're actually doing.
- Unclear confidence. Grounded output and pure speculation can't look equally polished. If both arrive in the same smooth tone with the same visual treatment, trust collapses fast. Transparency can't be decorative here; it has to signal what came from sources and what was inferred.
- No recovery path. People need version history, regeneration controls, editable sections, and a real way back after things go sideways. I've watched users abandon a task after one bad rewrite simply because there was no rollback.
- Fake autonomy. Teams love pitching “automatic” help, then quietly make users supervise every single step anyway. That mismatch kills agency fast.
I saw this in one workflow where a generated summary rewrote itself three times in under 90 seconds with no side-by-side comparison view at all. Users called it “possessed.” Honestly? They weren't wrong.
Jakob Nielsen's point about Generative UI is worth keeping in your head here: interfaces are moving away from static screens toward systems assembled in real time around user intent, context, and history. Fine. That's real. But if the interface keeps changing without telling people why, you haven't made anything smarter. You've just rearranged the furniture while someone's still sitting in the room.
The fix I come back to is dead simple: show, steer, save.
- Show what the AI is doing, which inputs or sources it used, and how certain it is.
- Steer with explicit controls inside the refinement flow instead of hiding everything behind yet another blank prompt field.
- Save users from irreversible mistakes with rollback, comparison views, and human-in-the-loop checkpoints.
This hits especially hard in UX for AI chatbots because chat can hide bad information architecture for months. A conversational shell can make a confused product feel smooth right up until somebody needs precision and auditability instead of vibes. If you want a more practical example of controllable behavior and stronger prompting UX, this piece on Generative AI Development Services Prompt Engineering gets into the mechanics.
One number should make this impossible to shrug off: Lyssna says 93% of designers are already using generative AI tools in their work heading into 2026. That's mainstream adoption, not experimentation. So no, the winners won't be the teams with the flashiest demo or the smug launch video. They'll be the ones that make AI behavior legible before trust breaks. If your product still can't do that when it gets something wrong, what exactly is your user supposed to believe?
UX Patterns That Give Users Control Over AI
Everybody says the same thing about AI UX: remove friction, make it feel magical, keep the interface clean. Sounds great in a pitch deck. It also falls apart fast when somebody clicks send on an AI-written sales email at 4:47 p.m. on a Friday and the model has quietly invented a policy detail that never existed.

That’s the part people skip. The polished draft. The tired team. The little moment where the app feels “probably fine.” I’ve seen this happen with support replies, contract summaries, outreach copy—the whole lot. One wrong source, one made-up clause, one summary that gets shared too early, and suddenly the sleek no-friction experience doesn’t look modern. It looks reckless.
I think the old idea is backwards. In generative AI app UX design, control isn’t clutter around the product. Control is the product. If users can’t shape outputs, limit sources, approve actions, or fix one bad section without starting from scratch, trust disappears the second something expensive happens.
The market already moved on, by the way. Base44 says 84% of developers use or plan to use AI in 2025. Once a number gets that high, nobody cares that your model can generate text. That’s table stakes now. The real question is simpler and harder: can people steer it without fighting it?
Microsoft’s guidance for custom copilot-style products lands in the same place, even if they say it in calmer language: pick the right UX framework, apply human-AI interaction principles, and get input and output design right from day one. Exactly. The prompt box isn’t the star of the show. What matters is everything that happens after the model answers.
Approval gates
Use them when a bad output actually costs money. Put an obvious review step before sending emails, updating records, approving claims, or kicking off workflows.
This isn’t optional in finance. It isn’t optional in healthcare. Legal ops definitely can’t pretend it’s optional. And internal knowledge systems need it too, even though teams usually don’t notice until a bad summary spreads across three departments before lunch. Quiet automation feels efficient right up until it makes one very loud mistake.
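For the skeptics who want to see the shape of it, here's a minimal sketch of an approval gate, assuming a hypothetical email draft, a review step, and a send action. The names are made up; the point is that nothing side-effecting runs until a human has approved, and possibly edited, the draft.

```typescript
// Sketch of an approval gate: the AI proposes, a human disposes.
// `requestHumanReview` and `sendEmail` are hypothetical stand-ins for
// whatever review UI and downstream action your product actually has.
interface DraftEmail {
  to: string;
  subject: string;
  body: string;
}

type ReviewDecision =
  | { approved: true; edited: DraftEmail } // reviewer may edit before approving
  | { approved: false; reason: string };

async function gatedSend(
  draft: DraftEmail,
  requestHumanReview: (draft: DraftEmail) => Promise<ReviewDecision>,
  sendEmail: (email: DraftEmail) => Promise<void>,
): Promise<void> {
  const decision = await requestHumanReview(draft);

  if (!decision.approved) {
    // Nothing left the building; keep the rejection so the team can see why.
    console.log(`Draft rejected: ${decision.reason}`);
    return;
  }

  // Only the human-approved (and possibly edited) version is ever sent.
  await sendEmail(decision.edited);
}
```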
Draft-first workflows
Best when AI should recommend something, not do it outright. Let the model produce a draft first, then make a human inspect it before anything gets filed, published, shared, or sent.
This is where enterprise tools either earn trust or torch it. Contract summaries. Sales outreach drafts. Policy responses. Support macros. Same pattern every time: AI makes the first pass, a person decides whether it becomes real. That’s human-in-the-loop UX doing its job instead of being treated like some ceremonial checkbox added after launch.
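One way to make that concrete, sketched below with illustrative names: generated content starts life as a draft and can only become published through an explicit human decision.

```typescript
// Sketch of a draft-first lifecycle: generated content starts as a draft and
// can only reach "published" through a named human reviewer. Field names
// are illustrative, not from any particular system.
type DraftStatus = "generated" | "in_review" | "approved" | "rejected" | "published";

interface ContractSummary {
  id: string;
  text: string;
  status: DraftStatus;
  reviewedBy?: string;
}

function approve(summary: ContractSummary, reviewer: string): ContractSummary {
  if (summary.status !== "in_review") {
    throw new Error(`Cannot approve a summary in status "${summary.status}"`);
  }
  return { ...summary, status: "approved", reviewedBy: reviewer };
}

function publish(summary: ContractSummary): ContractSummary {
  // Publishing is only legal after a human approved the draft.
  if (summary.status !== "approved") {
    throw new Error("Only approved summaries can be published");
  }
  return { ...summary, status: "published" };
}
```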
Editable outputs
Use this when people want to refine more often than rerun. Let them edit inline, rewrite one paragraph, shorten sections, or change tone without blowing up everything else.
This matters more than teams admit. In customer-facing products and consumer apps, editable meal plans beat static meal plans. Editable travel itineraries beat dead-end itineraries. Same for study guides and job application drafts. If one sentence is wrong, people shouldn’t have to roll dice on an entirely new answer just to fix it.
Regenerate controls
Variation helps, but vague variation doesn’t. A lonely “regenerate” button looks finished on a mockup and works terribly in real life.
Give people specific paths: “make shorter,” “use only cited sources,” “try a more formal version.” Now they’re refining intent through controls instead of guessing what to type next into an empty box. I’d argue this is one of the easiest misses in AI UX because generic regenerate feels neat and minimal while doing almost nothing useful for actual users.
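A rough sketch of what that looks like underneath, with hypothetical instruction text and a stand-in regenerate call: each control maps to one concrete, predictable instruction.

```typescript
// Sketch: each regenerate control maps to an explicit instruction instead of
// a blank "regenerate" roll of the dice. Instruction wording and the
// `regenerate` function are assumptions for illustration.
const refinements = {
  shorter: "Rewrite the answer in at most half the length, keeping all cited facts.",
  citedOnly: "Rewrite using only claims supported by the provided sources; drop anything else.",
  moreFormal: "Rewrite in a formal tone suitable for an external client email.",
} as const;

type Refinement = keyof typeof refinements;

async function refine(
  previousOutput: string,
  choice: Refinement,
  regenerate: (instruction: string, previous: string) => Promise<string>,
): Promise<string> {
  // The user picked a named intent; translate it into a concrete instruction.
  return regenerate(refinements[choice], previousOutput);
}
```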
Scope selection
This matters whenever source boundaries matter—which is more often than teams think. Let users choose what the model can pull from: this document only, my workspace, approved knowledge base, public web, last 30 days of tickets.
That pattern does two jobs at once. It gives people agency, and it cleans up your information architecture. In enterprise generative AI app development that’s huge; in practice it matters outside enterprise too. A support manager restricting answers to approved docs only isn’t asking for extra complexity—they’re asking not to spend Tuesday morning cleaning up hallucinated guidance sent to customers.
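Sketched out, scope selection is just a hard filter on retrieval that the user controls. The source names and the search call below are assumptions for illustration.

```typescript
// Sketch: scope selection as an explicit retrieval filter the user picks.
// Collection names and the `search` function are hypothetical.
type Scope =
  | { kind: "document"; documentId: string }
  | { kind: "workspace" }
  | { kind: "approvedKnowledgeBase" }
  | { kind: "recentTickets"; days: number };

interface RetrievedChunk {
  source: string;
  text: string;
}

async function retrieveWithinScope(
  query: string,
  scope: Scope,
  search: (query: string, filter: Record<string, unknown>) => Promise<RetrievedChunk[]>,
): Promise<RetrievedChunk[]> {
  // Translate the user's chosen scope into a hard filter on retrieval,
  // so the model literally cannot pull from anywhere else.
  switch (scope.kind) {
    case "document":
      return search(query, { documentId: scope.documentId });
    case "workspace":
      return search(query, { collection: "workspace" });
    case "approvedKnowledgeBase":
      return search(query, { collection: "kb", approved: true });
    case "recentTickets":
      return search(query, { collection: "tickets", newerThanDays: scope.days });
  }
}
```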
One boring detail gets overlooked all the time: accessible controls. Lyssna says 50% of designers are focusing on accessibility from the start in 2026. Good. Hidden chips, tiny secondary actions, fuzzy icons—those are annoying in any app. In UX for AI chatbots, they’re worse because they make oversight harder exactly where people need clarity most.
The weird thing is more control often makes AI feel faster. Not because generation got faster—it didn’t—but because uncertainty dropped. People move quicker when they know what the system can touch, what sources it can use, what happens next, and where they still get final say.
If your app treats control like friction instead of value, why would anybody trust it?
Output Refinement Interfaces That Improve Quality
Everyone says the hard part of generative AI is getting the model to write a good first draft. That's the line. Better model in, better output out. Sounds neat. It's also incomplete.

The uglier problem shows up after the draft is 85% right. That's where products usually crack. I watched a product manager in a staging build last month fight with an AI-generated release note that only had one broken sentence in it—about 12 words, give or take. The top half was fine. The ending was usable. One line in the middle went sideways, so she re-prompted the whole thing four times. Four. Each pass fixed one issue and introduced another, which is exactly the kind of nonsense that makes people stop trusting the tool. Then she said, “Why can't I just keep the top half and rewrite this one bit?” I'd argue that's the real product question.
One-shot generation kills in demos. Revision is what pays rent.
Nielsen Norman Group describes GenAI as a new paradigm for how people interact with data, users, and design problems. I think that's right, but most teams still design like they're shipping a single polished answer instead of an editable workspace. That's the missing piece: not smarter output alone, but an interface where people can trim, swap, reject, compare, and repair without feeling like every click is a coin toss.
Chat history isn't enough. Everybody knows it, but teams keep pretending endless vertical scroll counts as version control. It doesn't. In high-stakes work, side-by-side comparison wins because memory is terrible under pressure. Put Version A next to Version B. Let someone lift the intro from one draft, keep the closing from another, merge the middle, and move on in under 90 seconds instead of hunting through three prior turns for “the decent one.”
Buried in all this is the thing people miss: editing should start from the output itself, not from another blank prompt box.
Highlight a sentence. Hit “rewrite clearer.” Highlight a shaky claim. Hit “add evidence.” That's not prompting with prettier buttons; it's direct manipulation, and it gives users actual control over what changes and what stays put. Regenerate-all is lazy design dressed up as flexibility.
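Here's roughly what highlight-to-rewrite looks like in code, assuming a hypothetical rewrite call: only the selected span changes, and everything the user already liked stays untouched.

```typescript
// Sketch: refinement starts from a selected span of the output, not a new
// blank prompt. Offsets come from the user's selection; only that span is
// regenerated. The `rewrite` function is a hypothetical model call.
interface Selection {
  start: number; // character offset into the current output
  end: number;
}

async function rewriteSelection(
  output: string,
  selection: Selection,
  instruction: "rewrite clearer" | "add evidence" | "shorten",
  rewrite: (span: string, instruction: string, context: string) => Promise<string>,
): Promise<string> {
  const span = output.slice(selection.start, selection.end);
  const replacement = await rewrite(span, instruction, output);

  // Everything outside the selection stays exactly as the user left it.
  return output.slice(0, selection.start) + replacement + output.slice(selection.end);
}
```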
Same story with prompt chips. They can help, sure, but only if they're specific enough to map to an obvious intent shift. “Improve” is fluff. Nobody knows what that means in practice. “Shorter,” “more formal,” “use bullet points,” “explain for CFO,” “cite source”—those do real work because users can predict what each one will change. I've seen teams ship vague chips that looked smart in Figma on a Tuesday and were basically dead by the next sprint review.
The feedback loop matters more than most product roadmaps admit. Base44 reports that up to 30% of Python code is now AI-generated. That's not a cute stat; that's a lot of people spending real hours editing machine output instead of writing from zero. If someone's fixing inaccurate text, bloated wording, wrong tone, or an unsupported claim, let them say exactly that with structured feedback tied to an action. Don't make them climb back into an empty prompt field and explain everything again like the system learned nothing.
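A small sketch of structured feedback tied to an action, with illustrative category names: the complaint becomes an immediate refinement instruction instead of a fresh blank prompt.

```typescript
// Sketch: structured feedback mapped to a concrete follow-up, so a correction
// doesn't mean re-explaining everything. Categories and the logging call are
// illustrative assumptions.
type FeedbackCategory = "inaccurate" | "too_long" | "wrong_tone" | "unsupported_claim";

interface OutputFeedback {
  outputId: string;
  category: FeedbackCategory;
  note?: string; // optional free text, not required to act
}

const followUpInstruction: Record<FeedbackCategory, string> = {
  inaccurate: "Correct the factual errors and flag anything you cannot verify.",
  too_long: "Cut the length by half without dropping cited facts.",
  wrong_tone: "Rewrite in a neutral, professional tone.",
  unsupported_claim: "Remove or cite every claim that lacks a source.",
};

function handleFeedback(feedback: OutputFeedback): string {
  // Record it for the product team, and turn it into an immediate refinement.
  console.log(`feedback on ${feedback.outputId}: ${feedback.category}`);
  return followUpInstruction[feedback.category];
}
```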
Lyssna found that 73% of designers say AI as a design collaborator will have the most impact in 2026. The keyword isn't AI. It's collaborator. Collaborators stay present while you mark things up; they don't toss you one draft and vanish behind a curtain.
- Use compare views when drafts vary in meaningful ways and users need to judge tradeoffs side by side.
- Add inline edit handles at sentence, paragraph, and section level so fixes happen exactly where the problem lives.
- Support highlight-to-rewrite so refinement begins from selected output instead of another full re-prompt.
- Offer prompt chips for concrete shifts like tone, length, audience, structure, or sourcing.
- Capture structured feedback so correction is fast and trust grows because users can see the system respond.
If you want practical interaction patterns for this kind of product design, our guide to AI Native Applications UX Design Patterns goes deeper.
Your model can be brilliant and still feel dumb if every fix acts like a reset button. Funny how often “bad AI output” turns out to be bad editing UX instead—so why are so many teams still designing for generation first?
Trust-Building UX for Generative AI Interaction
Everybody says the same thing: the models are getting better, so trust will follow. GPT gets sharper, retrieval gets cleaner, latency drops, and somehow we're all supposed to believe users will stop hesitating. I don't buy that. Better output helps, sure. It just doesn't solve the moment when someone stares at a recommendation and asks the only question that matters: why this?

That gap isn't theoretical. In 2025, Base44 reported that 46% of developers don't fully trust AI-generated code. That's nearly half of a group that's unusually hard to fool because they check logs, read stack traces, compare outputs, and know when something smells off. If developers are still double-checking the machine, imagine what that means for a salesperson getting an account-risk alert or a support lead reading an auto-drafted reply at 4:47 p.m. before hitting send.
I've seen products that looked fantastic in a kickoff deck and then unraveled almost immediately in live use. Nice typography. Smooth microinteractions. A polished answer box that felt expensive. Then someone clicked into the output and wanted to know where the claim came from. No CRM note. No source paragraph. No retrieved record. Just vibes with a gradient.
That's where the usual advice is incomplete.
The missing piece is inspectability. Not decoration. Not a soft “AI-generated” badge floating in a corner like legal garnish. Actual ways to check the work while you're still in the flow.
- Citations and grounding. If a sales copilot says “the customer is at risk,” it should point to the exact evidence: the Salesforce CRM note, the Zendesk support ticket, the contract renewal date, the paragraph in a call transcript, the URL, the row in Snowflake. Same claim, totally different level of trust when people can inspect what it's based on.
- Provenance labels. Users should be able to tell instantly what was generated by AI, what was retrieved from a knowledge base, what came directly from a system of record, and what a human edited after the fact. If that distinction shows up only after three clicks and a hover state, you've already lost them.
- Uncertainty markers. Thin evidence should look thin. Confidence indicators matter. So does wording that admits ambiguity instead of pretending authority it hasn't earned. “I may be wrong” beats fake certainty almost every time because at least it tells the truth about the situation.
- Safe-fail states. Put review-before-send, revert-to-draft, and manual fallback options right where mistakes get expensive. Human-in-the-loop UX isn't there to make compliance happy. It's the emergency brake when an output is close enough to seem safe and wrong enough to cause damage.
I'd argue most teams also underbuild explanation quality by a mile. “AI-generated” tells me almost nothing. What helps is an explanation that shows which inputs shaped the output and which constraints were applied along the way. Was it based on last quarter's support tickets? A policy doc updated in March 2025? Only customer-visible notes? That's interaction design doing real work. It's also better information architecture because the evidence sits beside the claim instead of being buried in some side panel nobody opens.
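To make inspectability less abstract, here's a sketch of an answer payload that carries its own evidence. The field names and origin labels are assumptions; the point is that provenance and confidence live next to the claim so the UI can render them there.

```typescript
// Sketch: the answer payload carries its evidence, so provenance and
// confidence can sit beside the claim instead of in a side panel nobody opens.
// Field names and origin values are illustrative assumptions.
interface Citation {
  sourceId: string;     // e.g. a CRM note ID, ticket ID, or document URL
  excerpt: string;      // the exact passage the claim is based on
  retrievedAt: string;  // ISO timestamp of when the source was fetched
}

interface GroundedClaim {
  text: string;
  origin: "retrieved" | "generated" | "system_of_record" | "human_edited";
  confidence: "high" | "medium" | "low";
  citations: Citation[];
}

function renderTrustLabel(claim: GroundedClaim): string {
  if (claim.origin === "generated" && claim.citations.length === 0) {
    // Thin evidence should look thin in the UI.
    return "Generated without sources. Verify before acting.";
  }
  return `${claim.origin}, ${claim.confidence} confidence, ${claim.citations.length} source(s)`;
}
```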
BCG has been right to keep pushing customer journey mapping, rapid prototyping, and ongoing user testing in generative AI app development. Those basics still matter because outputs shift with context, prompt quality, and user intent more than teams want to admit. But happy-path workshop demos won't tell you much about trust. You have to test the suspicious moments: the pause before sending an AI-written email, the second read of a recommendation that sounds too neat, the screen where someone mutters “wait, is this right?” under their breath.
The funny part is this isn't really about perfection in the first place. It's about control, and that's where people still get turned around. Buried right in the middle of all this model talk is a simpler truth: users can live with imperfection if they can edit it, verify it, or back out cleanly when it goes sideways.
That's why design isn't becoming less important with AI products; it's becoming heavier. In Figma's 2025 AI report summary shared by Andrew Hogan on LinkedIn, 95% said design is just as important for AI products, and 52% said it's even more important. That sounds right to me. I've watched adoption stall over one bad output users couldn't recover from—not because the system was dumb overall, but because it gave them no clean way to question it without starting over.
If you want practical patterns for prompting UX and control logic, AI Native Applications UX Design Patterns is a useful reference. Still, none of this is mysterious anymore: show honest signals, make recovery easy, let people see what's driving the answer, give them enough agency to move without crossing their fingers.
People say trust grows when AI sounds smarter. Sometimes it does. More often it grows when doubting the system feels safe, fast, and built into the interface itself. If your product can't survive “why should I believe this?”, what exactly are users trusting?
How to Design a UX-Led GenAI Product Workflow
Everybody says the same thing when a GenAI product flops: users didn’t trust it, the team didn’t train people well enough, change management was weak.
Sure. Sometimes that’s true. But I think that explanation lets the product off the hook way too easily.
I’ve seen this movie already. A team launches an assistant with slick prompts, tidy orchestration, retrieval wired in, maybe even a tool call or two. It gets demo applause. People click around for a week. Then it dies quietly and ends up in that miserable QBR slide nobody wants to linger on.
The problem usually started earlier. Not at rollout. Not in onboarding. Back when the team mapped model flow instead of user flow.
That old way still sounds smart in meetings: prompt goes in, retrieval happens, draft comes out. Maybe someone circles “agentic” on the whiteboard and calls it innovation. Looks great. Tells you almost nothing about what a real person needed to finish at 4:47 p.m. on a Thursday when they were under pressure and needed something accurate enough to actually send.
That’s the missing piece: the workflow can’t end at generated output. It has to begin with human intent and finish with confident action.
People love saying “start with the user,” but by now that phrase is basically office wallpaper. In a GenAI product, the sequence is more concrete than that: intent, context, generation, refinement, review, release, measurement.
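Sketched as types, with every name an illustrative assumption, that sequence looks like this; the generated draft is a midpoint, not the finish line.

```typescript
// Sketch of the sequence as typed stages, so "generated output" is a midpoint
// rather than the end of the workflow. Every name here is an assumption for
// illustration, not a prescribed architecture.
interface Intent { userGoal: string; deadline?: Date }
interface Context { sources: string[]; retrievedNotes: string[] }
interface Draft { text: string }
interface Reviewed { text: string; approvedBy: string }

interface Workflow {
  captureIntent: () => Promise<Intent>;
  gatherContext: (intent: Intent) => Promise<Context>;
  generate: (intent: Intent, context: Context) => Promise<Draft>;
  refine: (draft: Draft) => Promise<Draft>;        // user edits, steers, compares
  review: (draft: Draft) => Promise<Reviewed>;     // human sign-off before release
  release: (reviewed: Reviewed) => Promise<void>;  // send, publish, or file
  measure: (intent: Intent) => Promise<void>;      // did prep time drop? did they return?
}

async function run(w: Workflow): Promise<void> {
  const intent = await w.captureIntent();
  const context = await w.gatherContext(intent);
  const draft = await w.refine(await w.generate(intent, context));
  const reviewed = await w.review(draft);
  await w.release(reviewed);
  await w.measure(intent);
}
```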
Take a sales leader getting ready for a customer call. The job isn’t “generate text.” It’s getting an account summary they can trust without reading fifty scattered updates across Salesforce, CRM notes, and recent email threads. The system pulls the relevant context, drafts the summary, and then the human does what humans are supposed to do: catch risk. Maybe there’s one shaky claim about renewal timing or contract scope. The rep edits it in the refinement step before it turns into an embarrassing mistake. A manager signs off before anything goes out. After that, the team measures what actually mattered: did prep time drop, did quality improve, and did people come back next week and use it again?
That’s a workflow. Not prompt-first theater.
I’d argue this is where smart teams lose months. They assume adoption will show up once the model gets good enough. It won’t. If the product doesn’t understand the job to be done, if the interaction design doesn’t support that job, and if there’s no room for review, correction, or controlled release, polished prose won’t save it.
And this matters even more now because the interface assumptions are already changing under everyone’s feet. Gartner said on July 2, 2025 that 80% of enterprise software will be multimodal by 2030, up from less than 10% in 2024. That’s not a cute trend line. That’s a warning that a single chat box can’t be your whole UX plan anymore.
People aren’t always going to type neat little requests into a blank field. They’ll speak while rushing between meetings. They’ll drop in a Salesforce screenshot. They’ll upload a PDF and mark it up. They’ll put two charts side by side and ask what changed between Q1 and Q2. Gartner’s point wasn’t just “more inputs.” It was that multimodal GenAI works better when people can express intent in whatever form matches the task in front of them.
Buzzi.ai starts there: behavior first. What does the user actually do? Where do they pause? Where does human-in-the-loop UX matter? What should never auto-publish? We connect those answers to business goals and adoption metrics before we touch workflows or models. If you want the mechanics behind that thinking, our guide to Generative AI Development Services Prompt Engineering ties prompt systems back to product behavior.
There’s another reason all this got harder fast: building got cheap. Base44 reports there are more than 180 million developers globally. That changes team behavior whether people admit it or not. When prototypes are easy, bad workflow decisions multiply faster. I once watched a client team put together a functioning prototype in 11 days using off-the-shelf APIs, only to realize they’d skipped the approval step users needed every single time before anything could go out.
So no, I don’t buy the idea that faster building means workflow matters less. It means workflow matters more.
Code is everywhere. Judgment isn’t.
The strange part is that the best UX for AI chatbots often makes the AI feel smaller, not bigger—less magic trick, more accountable teammate. That’s usually when daily use starts to happen. If your product can generate beautifully but can’t guide intent, support review, and earn release confidence, what exactly are users supposed to trust?
FAQ: Generative AI App Development Is 80% UX
What does generative AI app development actually mean?
It means building products where AI creates text, images, code, summaries, or recommendations in response to user input. But the hard part usually isn't the model choice. It's the generative AI app UX design that helps people prompt well, understand what the system is doing, and refine outputs without getting stuck.
Why do most generative AI apps fail on UX instead of AI?
Because a decent model can still feel useless inside a bad interface. Users need clear prompting UX, response editing, regeneration controls, and trust signals, or they won't know how to recover from weak outputs. That's why so many teams blame the model when the real problem is interaction design.
Does generative AI app development require different UX than traditional apps?
Yes, because generative systems don't return the same result every time and they often produce probabilistic answers instead of fixed outputs. Your UX has to support ambiguity, user control and agency, and iterative refinement. Traditional app flows optimize for completion, while UX for AI chatbots and copilots also has to support steering, checking, and correcting.
How can UX patterns give users more control over AI output?
The best pattern is simple: don't make the first answer feel final. Give users prompt suggestions, editable inputs, tone or format selectors, and clear actions like regenerate, shorten, expand, and rewrite. That kind of output refinement interface turns AI from a black box into something users can direct.
How do output refinement interfaces improve quality in generative AI apps?
They improve quality by making iteration cheap. Instead of forcing users to start over, you let them edit a sentence, compare versions, rate responses, or ask for a more grounded answer with one click. In practice, good output refinement interfaces reduce frustration because users can repair weak results instead of abandoning the workflow.
What trust-building UX elements should a generative AI app include?
You want visible trust and transparency UX, not vague promises. Show citations when possible, label AI-generated content, explain data handling, surface limitations, and make it obvious when a human review step exists. Confidence indicators can help too, but only if they're tied to something concrete like source quality or retrieval coverage.
Can human-in-the-loop UX improve generative AI results?
Absolutely. Human-in-the-loop UX works best when users can approve, edit, escalate, or reject outputs at key moments instead of only at the end. That's especially important in legal, healthcare, finance, and enterprise workflows where a fast wrong answer is worse than a slower reviewed one.
How do you reduce hallucinations through UX?
You won't eliminate hallucinations with interface design alone, but you can reduce the damage. Use grounding and citations, verification prompts, source previews, and warnings when the model is guessing or lacks enough context. Good generative AI app UX design helps users verify before they act, which is the part many teams miss.
How should a UX-led workflow for a generative AI product be structured?
Start with the user decision you're helping with, not the model you want to ship. Then design the flow around input guidance, generation, review, output refinement, feedback loops, and handoff to a human or downstream system. That's the difference between a demo and a product: the workflow keeps working after the first impressive response.
What accessibility issues matter most in AI interfaces and chat experiences?
Accessibility for AI interfaces starts with basics like keyboard navigation, screen reader support, color contrast, and clear focus states. But AI adds extra concerns, including streaming responses that don't overwhelm assistive tech, understandable confidence or error states, and controls that make response editing possible without precision clicking. If users can't steer the system easily, they don't really have control.