Can Ceylan · Vienna-based, globally curious.

Sequential vs Parallel Execution: when faster is the wrong answer

The instinct is always to run things in parallel. Here's why that instinct can get you banned, blocked, or left with corrupted data.

2026-04-13 · 2 min read · beginner

Every developer's first instinct when they see a loop is: can I run this in parallel? Parallel means faster. And faster is better. Right?

Not always.

What the terms actually mean

Sequential execution means tasks run one after another — task B starts only when task A finishes. Think of a single cashier at a supermarket.

Parallel execution means tasks run simultaneously — A and B both run at the same time. Think of ten cashiers working at once.

Parallel is faster. Sequential is safer and more predictable. Choosing between them is a tradeoff, not a free upgrade.
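The speed difference is easy to see with a toy benchmark. This sketch simulates an I/O-bound task with `time.sleep` (a stand-in for a network call, not a real one) and times four such tasks run one after another versus all at once:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(task_id):
    """Simulate an I/O-bound task (e.g. a network call) taking 0.2 seconds."""
    time.sleep(0.2)
    return task_id

# Sequential: each task waits for the previous one to finish.
start = time.perf_counter()
sequential_results = [fetch(i) for i in range(4)]
sequential_time = time.perf_counter() - start  # roughly 4 x 0.2s

# Parallel: all four tasks run at once in worker threads.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel_results = list(pool.map(fetch, range(4)))
parallel_time = time.perf_counter() - start  # roughly 0.2s

print(f"sequential: {sequential_time:.1f}s, parallel: {parallel_time:.1f}s")
```

Same work, same results, a quarter of the wall-clock time. Which is exactly why the parallel version is so tempting.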

The hidden cost of parallel

When you run things in parallel, you introduce a new problem: shared state. What happens when two tasks try to write to the same database row at the same time? What happens when two requests share the same session cookie?

The short answer: unpredictable things. Race conditions. Corrupted data. Bans.
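A race condition is easiest to understand in miniature. In this sketch, several threads read a shared counter, yield to each other, and write back a value based on a possibly stale read, so increments get lost; a second counter does the same work behind a lock and stays correct:

```python
import threading
import time

counter = 0          # shared state, written without any coordination
lock = threading.Lock()
safe_total = 0       # same workload, but guarded by a lock

def unsafe_increment():
    global counter
    for _ in range(1000):
        current = counter      # read shared state
        time.sleep(0)          # yield to other threads, widening the race window
        counter = current + 1  # write back a value based on a stale read

def safe_increment():
    global safe_total
    for _ in range(1000):
        with lock:             # only one thread can read-modify-write at a time
            safe_total += 1

threads = [threading.Thread(target=f)
           for f in (unsafe_increment, safe_increment)
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter, safe_total)  # counter usually ends up below 4000; safe_total is exactly 4000
```

Four threads each adding 1000 should give 4000. The unguarded counter almost never gets there, and the exact number changes on every run. That unpredictability is the point.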

This isn't theoretical. When building a price monitoring tool for a European consumer marketplace, the temptation was to parallelize keyword requests — hit 10 searches at once instead of one every 3 seconds. It would have been 10x faster. It also would have triggered the platform's anti-bot system within minutes.

Most large European consumer platforms use bot-detection layers that monitor for non-human traffic patterns. A human browsing a marketplace would never fire 10 requests per second. The moment your code does, you're flagged.

The rule

Sequential is the right default when:

  • You're hitting an external service with rate limiting or anti-bot (most scrapers, payment APIs, social APIs)
  • Tasks share a stateful session — cookies, auth tokens, database connections
  • Order matters — step 2 depends on the result of step 1
  • You're writing to a resource that doesn't support concurrent writes

Parallel is the right choice when:

  • Tasks are truly independent with no shared state
  • You're hitting services you control (your own API, your own database)
  • The service explicitly supports high concurrency (most cloud APIs with documented rate limits)
  • Throughput is the priority and you've verified safety
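When those conditions hold, a bounded worker pool is the boring, safe way to fan out. A minimal sketch, assuming `process` is a hypothetical task that is truly independent: no shared state, no external rate limits, no ordering requirement:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def process(item):
    """Hypothetical independent task: touches nothing shared, hits nothing rate-limited."""
    return item * item

items = range(10)
with ThreadPoolExecutor(max_workers=4) as pool:
    # submit() fans the work out; as_completed() collects results as they finish,
    # in whatever order they complete, because order doesn't matter here.
    futures = {pool.submit(process, i): i for i in items}
    results = {futures[f]: f.result() for f in as_completed(futures)}

print([results[i] for i in items])
```

Note the `max_workers=4`: even when parallel is safe, an unbounded fan-out rarely is. Cap the pool at something the downstream resource can actually absorb.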

In practice

A scraper that respects these rules runs sequentially with a fixed pause between requests:

import time

for search in searches:
    results = scrape_keyword(search.keyword)
    save_results(results)
    time.sleep(3)  # politeness delay

No asyncio.gather(). No ThreadPoolExecutor. The delay is the feature, not a bug.

If the platform starts returning errors, the fix is to increase the delay — not to add clever workarounds or reduce it.
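That "increase the delay" rule can itself be automated. A minimal sketch of the loop above with backoff added, treating `scrape_keyword` and `save_results` as the article's hypothetical helpers and any exception as a "slow down" signal:

```python
import time

def scrape_with_backoff(keywords, scrape_keyword, save_results,
                        base_delay=3.0, max_delay=60.0):
    """Sequential scraping loop that slows down when the platform pushes back.

    `scrape_keyword` and `save_results` are assumed helpers; on any error
    the pause doubles (capped at max_delay), and on success it returns to
    the polite baseline.
    """
    delay = base_delay
    for keyword in keywords:
        try:
            results = scrape_keyword(keyword)
            save_results(results)
            delay = base_delay                  # success: back to the baseline
        except Exception:
            delay = min(delay * 2, max_delay)   # error: double the pause, capped
        time.sleep(delay)
```

The cap matters: without it, a long outage would balloon the delay indefinitely instead of settling at a ceiling you chose deliberately.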

The professional term

This tradeoff is often called the throughput vs. safety tradeoff in distributed systems. In scraping contexts, the pause between requests is called a politeness delay, and the client-side practice of pacing your own requests is request throttling. The server-side enforcement of request limits is rate limiting.

When you see these terms in documentation, they're telling you: this service expects you to be sequential.
