Can Ceylan · Vienna-based, globally curious.

Never use Promise.all() with the GitHub Contents API

The GitHub Contents API requires each file commit to complete before the next one starts. Parallel commits produce 409 conflicts, and the error message doesn't make it obvious why.

2026-04-18 · 2 min read · intermediate

The failure mode

You're saving an article and its associated metadata file. Natural instinct: commit both in parallel.

await Promise.all([
  commitFile("content/article.mdx", mdxContent, "save article"),
  commitFile("content/article.meta.json", metaContent, "save meta"),
]);

This produces an HTTP 409 Conflict roughly half the time. Sometimes it works, sometimes it fails. GitHub's error body is just "message": "conflict", with no detail about what conflicted or why.

Why it happens

The GitHub Contents API's PUT /repos/{owner}/{repo}/contents/{path} endpoint requires a sha parameter, the current blob SHA of the file you're updating. This is how GitHub prevents lost updates.
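As a concrete sketch (the helper name, options object, and injectable fetchImpl are assumptions for illustration, not the article's actual code), a commitFile built on this endpoint has to read the current SHA before every write:

```javascript
// Hypothetical commitFile sketch — repo, token, and fetchImpl are
// illustrative assumptions; fetchImpl is injectable so the flow can be
// exercised without the network.
async function commitFile(path, content, message, { repo, token, fetchImpl = fetch }) {
  const url = `https://api.github.com/repos/${repo}/contents/${path}`;
  const headers = {
    Authorization: `Bearer ${token}`,
    Accept: "application/vnd.github+json",
  };

  // Read the current blob SHA; a 404 means the file doesn't exist yet,
  // in which case no sha is sent and GitHub creates the file.
  const head = await fetchImpl(url, { headers });
  const sha = head.status === 200 ? (await head.json()).sha : undefined;

  // Write, echoing the SHA back so GitHub can reject stale updates (409).
  const res = await fetchImpl(url, {
    method: "PUT",
    headers,
    body: JSON.stringify({
      message,
      content: Buffer.from(content).toString("base64"),
      ...(sha ? { sha } : {}),
    }),
  });
  if (!res.ok) throw new Error(`commit failed: ${res.status}`);
  return res.json();
}
```

The read-then-write pair is exactly where the race lives: any other commit that lands between the GET and the PUT invalidates the SHA you are about to send.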

When you fire two commits simultaneously:

  1. Request A reads the current tree SHA, calculates blob SHAs
  2. Request B reads the same tree SHA
  3. Request A commits, the tree SHA advances
  4. Request B tries to commit with the now-stale tree SHA → 409

The conflict isn't between the two files; it's between the second request and the new state of the repository that the first request just created.
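The interleaving above can be reproduced with a toy in-memory model (a deliberate simplification: one head SHA stands in for the whole repository state):

```javascript
// Toy model of optimistic concurrency — a single head SHA stands in for
// the repository state that GitHub tracks.
class FakeRepo {
  constructor() { this.headSha = "sha-1"; this.version = 1; }
  read() { return this.headSha; }
  async commit(baseSha) {
    await new Promise((r) => setTimeout(r, 10)); // simulated network latency
    if (baseSha !== this.headSha) {
      const err = new Error("conflict");         // GitHub's terse 409 body
      err.status = 409;
      throw err;
    }
    this.headSha = `sha-${++this.version}`;      // head advances on success
    return this.headSha;
  }
}

// Both "requests" read the same head before either commits — the exact
// interleaving from the numbered steps: one lands, the other gets a 409.
async function raceTwoCommits() {
  const repo = new FakeRepo();
  const base = repo.read();
  return Promise.allSettled([repo.commit(base), repo.commit(base)]);
}
```

Running raceTwoCommits shows the first commit fulfilled and the second rejected with status 409, no matter how small the latency is.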

The fix: always await each commit

await commitFile("content/article.mdx", mdxContent, "save article");
await commitFile("content/article.meta.json", metaContent, "save meta");

Sequential. The second commit always sees the repository state left by the first.

This applies regardless of how many files you're committing. If you have five files to save, commit them one by one with await between each.
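For the many-files case, a plain for...of loop with await inside keeps everything strictly ordered (commitFile here stands for whatever single-file helper you already have):

```javascript
// Commit files strictly one at a time; each commit sees the repository
// state the previous one left behind.
async function commitAll(files, commitFile) {
  const results = [];
  for (const { path, content, message } of files) {
    results.push(await commitFile(path, content, message));
  }
  return results;
}
```

Note that files.map(f => commitFile(...)) followed by Promise.all would start every request before the first finishes, which is the same race, just better hidden.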

The broader principle

Any API that returns a version identifier (SHA, ETag, sequence number, revision) and requires you to pass it back on writes is signalling that concurrent writes will conflict. The API is not broken; it's enforcing optimistic concurrency control.

The GitHub Contents API is explicit about this. Others are not. Watch for:

  • Responses that include a sha, etag, or version field
  • Write endpoints that require you to pass one of these back
  • 409 or 412 errors on concurrent writes

These are all the same pattern: the API is telling you that write ordering matters and it's your responsibility to maintain it.
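The same contract generalizes beyond GitHub. A sketch of the read-modify-write loop such APIs expect (the shape of read and write, the version field, and err.status are assumptions; substitute your API's equivalents):

```javascript
// Generic optimistic-concurrency update: read the current version, write
// it back alongside the change, and re-read on a 409/412 rejection.
async function updateWithRetry(read, write, modify, maxAttempts = 3) {
  let lastErr;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const { value, version } = await read();
    try {
      return await write(modify(value), version); // echo the version back
    } catch (err) {
      if (err.status !== 409 && err.status !== 412) throw err;
      lastErr = err; // stale version: loop around, re-read, retry
    }
  }
  throw lastErr;
}
```

The retry is what makes the pattern safe: a conflict just means someone else won the race, so you re-read their result and apply your change on top of it.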

Don't reach for locks

The temptation when you see this is to add a mutex or a queue. That's overengineering for most content management use cases. Sequential await chains are simpler, easier to reason about, and fast enough; GitHub commits take under a second each.

Only reach for a queue if you genuinely need concurrent write throughput, which content management almost never does.
