Can Ceylan · Vienna-based, globally curious.

Batch email sends before rate limits look like caps

A newsletter send to 13 people reported 5 accepted and 8 failed. It looked like a hidden recipient cap. The real problem was parallel API calls hitting a provider rate limit.

2026-05-08 · 3 min read · intermediate

What happened

A small newsletter campaign was sent to 13 subscribers.

The result came back in a strangely neat shape: 5 accepted, 8 failed.

At first glance, that looks like a hidden provider cap. Maybe the free tier only allows five recipients. Maybe the campaign endpoint has a quiet limit. Maybe something is wrong with the subscriber list after the first few addresses.

The number felt meaningful, and it was. Just not in the way it first appeared.

Root cause

The code was batching conceptually, but not at the HTTP request level.

It sliced subscribers into a "batch" and then used Promise.all() to send one request per recipient inside that batch. So 13 subscribers still became 13 simultaneous API calls.

The email provider enforced a default request-rate limit. The first few requests were accepted; the rest were rejected by the throttle. The UI then surfaced the result as "5 sent, 8 failed", which made the failure look like a recipient limit instead of a request-rate problem.

That is the trap: a loop named "batch" is not the same thing as using a batch API.
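A minimal sketch of that trap, with hypothetical names (a stubbed `sendOne` stands in for the provider's per-message endpoint): the "batch" is just a local array slice, so every recipient still becomes its own simultaneous request.

```typescript
// Instrumentation to show what the provider actually sees.
let inFlight = 0;
let peakInFlight = 0;

// Stand-in for a per-recipient API call (assumption, not a real provider SDK).
async function sendOne(recipient: string): Promise<void> {
  inFlight++;
  peakInFlight = Math.max(peakInFlight, inFlight);
  await new Promise((resolve) => setTimeout(resolve, 10)); // simulated network latency
  inFlight--;
}

const subscribers = Array.from({ length: 13 }, (_, i) => `user${i}@example.com`);

// "Batching" in name only: Promise.all fires one request per recipient at once.
async function sendCampaign(): Promise<number> {
  const batch = subscribers.slice(0, 13); // a conceptual batch, not a batch request
  await Promise.all(batch.map(sendOne));
  return peakInFlight; // all 13 requests were in flight in the same instant
}
```

Because `map` starts every promise before any of them resolves, the peak concurrency equals the list size. That burst is exactly what a request-rate limit counts.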

Why it was non-obvious

The failure did not look like a classic rate-limit bug.

There was no slow ramp-up, no obvious retry storm, and no broken email template. The campaign worked for test sends. It worked for small recipient counts. It failed only when the list was large enough to cross the provider's request ceiling in a single burst.

The misleading part was that "five successful sends" looked like a business rule.

But the system was not limited to five recipients. It was limited to roughly five requests in the same small time window.

The fix

Use the provider's real batch-send endpoint.

Instead of making one API request per subscriber, build one payload containing many recipient-specific messages and submit it as a single batch request. Keep the per-recipient details where they matter:

  • individual unsubscribe URLs
  • recipient-specific headers or metadata
  • per-message errors returned by the provider
  • a clear count of accepted and failed deliveries

For small newsletters, that can turn 13 simultaneous requests into one request. For larger lists, chunk into the provider's documented batch size and send those chunks deliberately.
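The shape of the fix can be sketched like this. The endpoint name, batch size, and response shape are assumptions, not a specific provider's API; the point is that chunking happens at the HTTP level, with per-recipient details carried inside one payload.

```typescript
interface Message {
  to: string;
  unsubscribeUrl: string; // per-recipient detail preserved inside the batch
}

// Split a list into provider-sized chunks (assume a documented max per batch).
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Hypothetical batch endpoint: one HTTP request carries many messages and
// returns per-message results, so accepted/failed counts stay meaningful.
async function sendBatch(
  messages: Message[],
): Promise<{ accepted: number; failed: number }> {
  // In real code: POST the whole array to the provider's batch-send endpoint.
  return { accepted: messages.length, failed: 0 };
}

async function sendCampaign(subscribers: string[]) {
  const messages: Message[] = subscribers.map((to) => ({
    to,
    unsubscribeUrl: `https://example.com/unsubscribe?u=${encodeURIComponent(to)}`,
  }));
  let accepted = 0;
  let failed = 0;
  // 13 subscribers with a batch size of 500 means exactly one request.
  for (const part of chunk(messages, 500)) {
    const result = await sendBatch(part);
    accepted += result.accepted;
    failed += result.failed;
  }
  return { accepted, failed };
}
```

The sequential `for...of` over chunks is deliberate: even for large lists, the request count is `ceil(n / batchSize)` and the requests are paced one at a time rather than fired in a burst.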

Reusable rule

When an API exposes a batch endpoint, use it for fan-out workflows.

Promise.all() is useful when independent work should happen concurrently. It is dangerous when every item calls the same rate-limited external service at once.

A practical checklist:

  1. If a job fans out to many recipients, listings, files, or webhook targets, check provider rate limits before shipping it.
  2. If the provider offers batch operations, prefer them over local concurrency.
  3. If batching is not available, add an explicit queue, throttle, or retry strategy.
  4. In the UI, report provider errors in a way that distinguishes recipient failures from request-rate failures.
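For the no-batch-endpoint case (point 3), here is one minimal throttling sketch: a bounded worker pool that runs at most `limit` calls concurrently instead of firing everything with `Promise.all`. The limit value is an assumption — it should come from the provider's documented rate limit.

```typescript
// Run fn over items with at most `limit` calls in flight at once.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  // Each worker pulls the next unclaimed index until the list is exhausted.
  // Safe without locks: the index is claimed synchronously, before any await.
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }
  const workers = Array.from(
    { length: Math.min(limit, items.length) },
    () => worker(),
  );
  await Promise.all(workers);
  return results;
}
```

A retry-with-backoff layer can wrap `fn` inside this same structure, but the core change is the one that matters: the burst size becomes a parameter you chose, not a side effect of the list length.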

The general lesson is simple: local concurrency can accidentally turn a small feature into a traffic spike.

The fix is not always to send slower. Sometimes it is to send in the shape the provider designed for.

