The Hidden Costs of Instruction

AI is often framed as an autonomous agent that reduces work, but in reality it shifts the burden onto users. Rather than performing tasks independently, LLMs rely on users to define goals, provide context, set constraints, and review results, turning what should be automation into an ongoing, labor-intensive process.

Conversational interfaces like chat compound this. They are often criticized for being as opaque as command line interfaces (CLIs) from the 60s, but they are actually worse. In CLIs and GUIs, every input has a predictable result. LLMs can't offer that. You type a prompt, hit enter, and hope the model understands it. If it doesn't, you have to rephrase, clarify, and iterate—slowly.

Interface problems are compounded by the fact that users have to:

Imagine the result before seeing it.

Define every detail in advance.

Verify the results and the assumptions behind them.

The more reasoning a query requires, the more effort it takes to make sure the result is correct. Even if models come to outperform humans at reasoning, the alignment between results and their underlying assumptions still needs to be verified. The illusion of autonomy masks the fact that you're still doing the heavy lifting, just in a different form.
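For contrast with the completion pattern discussed below, here is a minimal sketch of that chat loop in TypeScript. The `LLM` interface is a hypothetical stand-in for any chat-style API; the point is that the user must front-load every detail and carry the verification step alone.

```typescript
import * as readline from "node:readline/promises";
import { stdin, stdout } from "node:process";

type ChatTurn = { role: "user" | "assistant"; content: string };

// Hypothetical client interface; any chat-style API fits this shape.
interface LLM {
  complete(history: ChatTurn[]): Promise<string>;
}

async function chatLoop(llm: LLM): Promise<void> {
  const rl = readline.createInterface({ input: stdin, output: stdout });
  const history: ChatTurn[] = [];
  try {
    for (;;) {
      // Imagine + Define: goal, context, and constraints all go in up front.
      const prompt = await rl.question("> ");
      if (prompt.trim() === "") break;
      history.push({ role: "user", content: prompt });

      // One opaque step: no intermediate decisions are exposed.
      const answer = await llm.complete(history);
      history.push({ role: "assistant", content: answer });

      // Verify: checking the result and its assumptions falls to the user;
      // if it is wrong, the only recourse is to rephrase and try again.
      console.log(answer);
    }
  } finally {
    rl.close();
  }
}
```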

Interfaces That Amplify, Not Restrict

Rather than offloading tasks to an agent, AI works best when it augments human effort, proactively assisting within the workflow rather than waiting for instructions. Completion-based interfaces reduce friction by exposing the AI's decisions step by step, letting users refine and adjust in real time and keep control while benefiting from the model's assistance.

Completion flips the dynamic:

Proactive assistance rather than waiting for instructions.

Implicit assumptions and decisions get exposed step by step.

Real-time alignment instead of rewriting instructions.

This reduces friction and increases control. You don't start from scratch or struggle with phrasing prompts. The AI works within your context, adapts to it, and modifies it in collaboration with you.
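A rough sketch of what this loop could look like, assuming a hypothetical `suggestNextEdit` model call (the names here are illustrative, not a real SDK): each proposal is surfaced with its rationale, and the user accepts or rejects it before it lands.

```typescript
// Sketch of a completion-style flow. CompletionModel and suggestNextEdit
// are hypothetical placeholders for whatever model backend is in use.
type Doc = { text: string; cursor: number };
type Suggestion = { insert: string; rationale: string };

interface CompletionModel {
  // Proposes one small step given the live context and prior rejections.
  suggestNextEdit(doc: Doc, rejected: Suggestion[]): Promise<Suggestion | null>;
}

async function assist(
  doc: Doc,
  model: CompletionModel,
  review: (s: Suggestion) => Promise<boolean>, // surfaces the rationale to the user
): Promise<Doc> {
  let current = doc;
  const rejected: Suggestion[] = [];
  for (;;) {
    const suggestion = await model.suggestNextEdit(current, rejected);
    if (suggestion === null) return current; // nothing left to propose

    // Each implicit decision is exposed before it lands, so alignment
    // happens in real time instead of through re-prompting.
    if (!(await review(suggestion))) {
      rejected.push(suggestion); // the model adapts on the next pass
      continue;
    }

    current = {
      text:
        current.text.slice(0, current.cursor) +
        suggestion.insert +
        current.text.slice(current.cursor),
      cursor: current.cursor + suggestion.insert.length,
    };
  }
}
```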

With completion, the user remains the agent. The LLM becomes an enabler, not a bottleneck.

When we designed Canvas at Mistral AI, our goal was to create a shared context between the user and the model. By allowing direct reference to elements on the screen, we reduced the need for prompts by 50%, and users achieved the same results in half the time. I believe there are many more opportunities to integrate faster and more effective model reasoning into our interfaces.
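As a rough illustration of the shared-context idea (a hypothetical sketch, not how Canvas itself is implemented), the request can carry the on-screen elements the user pointed at, so a short instruction replaces a long descriptive prompt:

```typescript
// Hypothetical sketch of a shared-context request; ElementRef and
// buildRequest are illustrative names, not Mistral's actual API.
type ElementRef = {
  id: string;
  kind: "paragraph" | "code" | "chart";
  content: string;
};

interface SharedContextRequest {
  instruction: string;      // stays short: the referenced context does the heavy lifting
  referenced: ElementRef[]; // elements the user pointed at directly on screen
}

function buildRequest(instruction: string, selected: ElementRef[]): SharedContextRequest {
  return { instruction, referenced: selected };
}

// "Tighten this" plus a direct reference replaces paragraphs of description.
const request = buildRequest("Tighten this paragraph.", [
  { id: "p-2", kind: "paragraph", content: "AI is often framed as..." },
]);
```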