Can Ceylan · Vienna-based, globally curious.

I Asked an AI to Fix My Build. It Took Two Hours and Taught Me Something Uncomfortable.

A post-mortem on a two-hour AI debugging session that should have taken ten minutes — and what it says about how I work with these tools.

2026-04-13 · 4 min read


This week I watched an AI agent spend two hours — and a meaningful chunk of my monthly token budget — fixing a build error that had a ten-minute solution.

I'm not here to blame the AI. I'm here to document what actually happened, because I think it says something honest about where these tools break down. And partly because I did the same thing when I was running operations teams: I let a problem compound because I didn't slow down to reproduce it first.

TL;DR: A dependency shipped syntax the bundler couldn't parse. A misplaced API client constructor crashed the build. Both bugs were pre-existing. Neither produced a clear error message. The fix was three lines. Getting there took seven wrong turns.

What actually broke

Two separate bugs, both already in the codebase before I touched anything:

Bug 1: A hero image upload route imported a package that internally depends on undici. The version installed contained a private class field expression written as !#P in this — syntactically invalid JavaScript. Next.js 14's build worker tried to bundle it, hit the syntax error, and died. Exit code 1. No useful message in the Vercel log.
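The construct belongs to the family of ES2022 private-field "brand checks". A generic sketch of the valid forms (my own illustration, not undici's actual code) shows why the negated variant is so easy to get wrong:

```javascript
// ES2022 "brand check": a bare private name on the left side of `in`
// tests whether an object was constructed by this class.
class Client {
  #key = "secret";

  static isClient(obj) {
    // Valid: the private identifier appears directly before `in`.
    return #key in obj;
  }

  static isNotClient(obj) {
    // The negated form requires parentheses. Writing `!#key in obj`
    // is a syntax error, because `!#key` is not a valid expression.
    return !(#key in obj);
  }
}
```

Parsers that implement the spec strictly reject the unparenthesized negation outright, which is exactly the kind of failure that surfaces as a bare exit code 1 instead of a readable error.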

Bug 2: All four AI generation routes had new Anthropic() sitting at the top of the file, outside any function. At build time, Next.js imports every route module. The Anthropic client constructor reads ANTHROPIC_API_KEY from the environment on construction. Vercel's build environment doesn't have that key — only the runtime environment does. So it threw.
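The failure mode is easy to reproduce without the real SDK. Here's a minimal sketch — FakeAnthropic is a stand-in I made up, mimicking a client that reads its key in the constructor:

```javascript
// Stand-in for an SDK client that reads its API key from the
// environment in the constructor and throws if the key is missing.
class FakeAnthropic {
  constructor() {
    this.apiKey = process.env.ANTHROPIC_API_KEY;
    if (!this.apiKey) throw new Error("ANTHROPIC_API_KEY is not set");
  }
}

// Broken pattern: module scope. This line would run the moment the
// module is imported -- e.g. during `next build`, where the key
// does not exist -- and crash the build.
// const client = new FakeAnthropic();

// Fixed pattern: construct inside the handler, so the key is only
// read at request time, where runtime secrets are available.
function handler() {
  const client = new FakeAnthropic();
  return client;
}
```

The three-line fix mentioned above is essentially this move: delete the module-scope constructor, re-create the client inside each handler.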

Neither bug produced a message that said "here is the problem." Both looked identical in the Vercel output: Command 'npm run build' exited with 1.

Why it took two hours

The fastest path was: reproduce locally, read the webpack output, find the actual crashing file. That would have been ten minutes.

Instead, the first attempts were speculative — trying serverExternalPackages, then experimental.serverComponentsExternalPackages, then webpack config.externals. All legitimate approaches for bundling issues in Next.js. All wrong for this specific case, because Route Handler bundles are compiled before any of those config options apply.

Each wrong attempt cost a build cycle. Build cycles on Vercel aren't fast.

Then there was the linter. Every time the Anthropic client was moved inside the handler, an ESLint auto-fix rule moved it back out on the next save. That problem went undetected for several cycles before it was fixed with a direct file write.

Then the remote got ahead. A new article was committed to main while the fix was mid-flight, triggering a rebase conflict at the worst possible moment.

The AI was not the bottleneck. The absence of a reproduction step was.

What I should have done differently

On my side:

I let a broken build sit in main without noticing. The hero image route had been there for a while. There was no CI step running npm run build on push — so the broken code accumulated silently and I only discovered it when the next unrelated change triggered a deploy.

I also pushed a new article directly to main while the build was actively being fixed, which created the rebase conflict. When a build is broken, main is frozen until it's green again. I ignored that.

On the AI side:

The right call at minute five was: reproduce locally, run npm run build, read the full webpack output. That step was skipped in favour of reasoning about likely causes and trying fixes speculatively. That's a pattern I recognise from managing teams — the instinct to start fixing before you've confirmed the diagnosis. It's faster in the short run and more expensive when you're wrong.

What to actually do

  • Add a GitHub Actions build check. One YAML file. Runs npm run build on every push. Catches this entire class of error before it reaches Vercel. Takes ten minutes to set up.
  • Never install a new npm package without a local build test. Any dependency that pulls in native node internals or compiled modules can break Next.js bundling silently. npm run build locally first.
  • API client instantiation always goes inside the request handler. new SomeClient() at module top level reads environment variables at import time. Build environments don't have runtime secrets. One rule, no exceptions.
  • Freeze main when the build is red. Nothing merges until it's green. This is basic trunk-based development hygiene and I wasn't following it.
  • Add ANTHROPIC_API_KEY=dummy to Vercel build environment variables. This eliminates the constructor crash at build time without any code change. Thirty seconds in the dashboard.
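The first and last items above fit in one workflow file. A minimal sketch (file name and Node version are my choices, not from the original setup):

```yaml
# .github/workflows/build.yml -- run the build on every push
name: build
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      # Dummy key so module-scope client constructors don't crash the build
      - run: npm run build
        env:
          ANTHROPIC_API_KEY: dummy
```

With this in place, both bugs described above would have failed the push that introduced them, not the unrelated deploy weeks later.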

The uncomfortable part

I use AI tools daily. I've written about them through a practical lens, not a hype one. And this week an AI spent two hours on a ten-minute problem because the workflow around it was broken.

The tool didn't fail. The process did. No reproduction step. No CI. No branch discipline. No local build verification before shipping dependencies.

These are not AI problems. They're the same problems that made deployments painful before AI existed. The AI just made the cost of not having them more visible, faster.

I find that clarifying.

