The correct translation

Apr 8, 2025

There is a specific failure mode in automated translation that professional translators call a correct wrong answer. The sentence is grammatically sound. The vocabulary is accurate. Every word maps to an accepted equivalent in the target language. And the translated sentence means something subtly but completely different from what the original intended.

The machine did not make an error by any measure available to it. But a human reader of both languages, familiar with the culture the sentence was written inside, would catch it immediately. Because the correct answer requires knowing something that was never in the text.

I keep coming back to that failure mode as I watch what is happening to design tools right now.

Figma's AI features can now generate complete interface layouts from a text prompt. The outputs are technically competent. They follow grid systems, apply typographic hierarchy, maintain consistent spacing. Show the output to someone who does not design for a living and they will often say it looks professional. Show it to a senior designer with ten years of experience shipping products and they will tell you, usually within thirty seconds, what is wrong with it.

Not that it looks bad. But that it is optimised for legibility at first glance rather than usability over time. That it has solved the visual problem without engaging with the interaction problem underneath it. That it is the correct wrong answer.

This distinction is subtle and it is everything.

I was working with a team last year that was using AI tools to accelerate early-stage design exploration. The tools were genuinely useful for generating layout options quickly, for getting to a range of directions without spending three days on each one. The speed was real and the efficiency gains were real.

But something started happening around the third week. The team was moving fast through options but converging slowly on decisions. The AI could generate ten directions in two hours. But evaluating those ten directions, understanding which ones were actually solving the right problem and which ones only looked like they were, that work was taking longer than it had before.

The generation had accelerated. The judgment had not.

In the end, the project took roughly the same amount of time it would have taken without the tools. The shape of the work had changed. But the total cognitive load had not moved.

What AI tools accelerate in design is the production of surface. Layout, visual treatment, component selection, spacing systems. These are real skills and they take real time, and getting faster at them has genuine value. Nobody who has spent an afternoon wrestling with an auto-layout bug will be nostalgic for the hours those bugs consumed.

But surface is not the job. Surface is the visible layer of a decision that happened somewhere earlier and deeper. Why this information hierarchy and not another. Why this interaction model, given what we know about how this specific user population reads and clicks and makes errors under pressure. Why simplicity here and detail there, and what the user needs to trust before taking the next step.

Those decisions require context that was never in the prompt. They require the scar tissue of watching users fail at things that seemed obvious. They require the specific knowledge of what went wrong the last time someone tried this approach in this category.

No tool has that. The designer does.

The conversation about automation and craft in design keeps getting framed as a binary. Either AI replaces design work, or craft is safe. Both positions are wrong and both produce the wrong response.

The work that is being automated is real work. It took real skill. Designers who built professional identities primarily around production speed are going to feel this change acutely, because the thing they were faster at is now less differentiated.

But the work that is not being automated is also real work. And it is, arguably, the more important work. The framing of the problem. The recognition that the generated output is technically correct but contextually wrong. The judgment that the brief itself needs to be challenged before the solution gets built.

A translator who catches the correct wrong answer is not doing less work than the machine. They are doing different work. Harder work. Work that requires something the machine structurally cannot have: the knowledge of what was meant.

That is still the designer's job. But only if they decide it is.
