Can Ceylan · Vienna-based, globally curious.

Scheduled publishing without a cron: runtime-evaluated date filters

You don't need a cron job to make content appear on schedule. Evaluate the scheduled date at request time and the content becomes visible automatically — no deployment, no job, no database update required.

2026-04-18 · 3 min read · beginner

The cron-dependent approach and its failure mode

The standard approach to scheduled publishing: a cron job runs at the scheduled time, updates a published flag in the database, and the content becomes visible.

The failure mode: the cron job misses its window. The newsletter fires at 07:00. The cron job that was supposed to set published: true at 06:55 didn't run — server restart, network issue, timing drift. The newsletter goes out with a link to a 404.

The cron approach has a single point of failure between "content ready" and "content visible."

Runtime evaluation: the always-consistent alternative

Instead of updating a flag, evaluate visibility at request time:

export function getAllArticles(): ArticleMeta[] {
  const now = new Date();
  return readAllMdxFiles()
    .filter(article => {
      if (article.published) return true;            // manual override wins
      if (!article.scheduledDate) return false;      // unscheduled draft
      return new Date(article.scheduledDate) <= now; // timed release
    })
    .sort((a, b) => /* by date */);
}

An article with scheduledDate: "2026-04-20T10:00:00Z" is invisible until that moment, then visible to every request after it — without any cron, any database update, any deployment.
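The rule is easy to exercise with in-memory data and an injectable clock; `isVisible` and the sample articles below are illustrative, not part of the module above:

```typescript
type ArticleMeta = { title: string; published: boolean; scheduledDate?: string };

// Same visibility rule as getAllArticles, with the clock injectable for testing.
function isVisible(article: ArticleMeta, now: Date = new Date()): boolean {
  if (article.published) return true;            // manual override wins
  if (!article.scheduledDate) return false;      // unscheduled draft
  return new Date(article.scheduledDate) <= now; // timed release
}

const drafts: ArticleMeta[] = [
  { title: "Live now", published: false, scheduledDate: "2026-04-20T10:00:00Z" },
  { title: "Still hidden", published: false, scheduledDate: "2026-05-01T10:00:00Z" },
  { title: "Always on", published: true },
];

const visibleAt = (now: Date) =>
  drafts.filter(a => isVisible(a, now)).map(a => a.title);
```

Calling `visibleAt` with different clocks shows the flip: before 2026-04-20T10:00:00Z only the manually published article appears; after it, the scheduled one joins, with no state changed anywhere.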

The MDX frontmatter pattern

---
title: "My scheduled article"
scheduledDate: "2026-04-20T10:00:00Z"
published: false
---

published: false plus a future scheduledDate means: draft, not yet visible.
published: false plus a past scheduledDate means: automatically visible now.
published: true means: always visible, regardless of date.

The two fields serve different purposes. published is manual override. scheduledDate is automatic timed release.
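A minimal sketch of extracting that frontmatter follows; a real site would use a library such as gray-matter, and this hand-rolled parser only handles the flat key/value shape shown above:

```typescript
// Extract `key: value` pairs from a leading --- ... --- frontmatter block.
// Illustration only: no nesting, no arrays, no multi-line values.
function parseFrontmatter(mdx: string): Record<string, string | boolean> {
  const match = mdx.match(/^---\n([\s\S]*?)\n---/);
  const meta: Record<string, string | boolean> = {};
  if (!match) return meta;
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx === -1) continue;
    const key = line.slice(0, idx).trim();
    const raw = line.slice(idx + 1).trim().replace(/^"|"$/g, "");
    meta[key] = raw === "true" ? true : raw === "false" ? false : raw;
  }
  return meta;
}

const mdx = `---
title: "My scheduled article"
scheduledDate: "2026-04-20T10:00:00Z"
published: false
---
Article body…`;

const meta = parseFrontmatter(mdx);
```

Note that `scheduledDate` stays a string here; the visibility check converts it with `new Date(...)` at comparison time.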

The dual-mechanism for newsletter integration

Runtime evaluation handles visibility. It does not handle triggered actions — like sending a newsletter when an article goes live.

For that, you still need a cron. But now the cron has one job: check if any article became visible in the last N minutes and fire the newsletter. It no longer needs to update the database first.

// Cron at 07:00 UTC
const now = new Date();
const windowStart = new Date(now.getTime() - 60 * 60 * 1000); // last hour

const recentlyPublished = getAllArticles()
  .filter(a => {
    const pub = new Date(a.scheduledDate ?? a.date);
    return pub >= windowStart && pub <= now;
  });

for (const article of recentlyPublished) {
  await sendNewsletter(article);
  // Optionally: commit published: true to MDX to prevent re-sending
}

The key difference: the article is already visible before the cron runs. If the cron misses its window, the article is still live — only the newsletter is delayed, not the publication.
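To make the optional "prevent re-sending" step explicit, the send loop can be made idempotent with a record of already-announced slugs; `announce`, `sentLog`, and the placeholder `sendNewsletter` below are assumptions for illustration, not the original code:

```typescript
type Meta = { slug: string; scheduledDate?: string };

// Placeholder send function for illustration.
async function sendNewsletter(article: Meta): Promise<void> { /* ... */ }

// Slugs already announced, so overlapping cron windows can't re-send.
// In real code this would be persisted (file, KV store, committed flag).
const sentLog = new Set<string>();

async function announce(recentlyPublished: Meta[]): Promise<string[]> {
  const sentNow: string[] = [];
  for (const article of recentlyPublished) {
    if (sentLog.has(article.slug)) continue; // already announced, skip
    await sendNewsletter(article);
    sentLog.add(article.slug);
    sentNow.push(article.slug);
  }
  return sentNow;
}
```

With the log in place, a cron that runs twice over the same window sends each article's newsletter once.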

The consistency guarantee

Runtime evaluation gives you a simple invariant: an article with a past scheduledDate is always visible, on every server, in every region, with no state to sync. There's no "published in one region but not another" problem because there's no state — just a comparison against the current time.

This makes it particularly well-suited for server-rendered and edge-rendered content, where flipping a database flag would otherwise need extra infrastructure. One caveat: in a fully static build, the filter runs at build time, so a scheduled article still needs a rebuild or incremental revalidation before it appears.

Trade-offs

What you gain: simplicity, consistency, no failure mode from missed cron jobs, works on read-only filesystems.

What you give up: instant unpublishing (you'd need to remove the file or set scheduledDate to a future date), and the ability to see exactly which articles are "live" without querying at a specific time.

For most publishing workflows, the gains outweigh the trade-offs.
