How This Blog Actually Works

After years of WordPress pain, I rebuilt this blog as a hybrid static-dynamic system. Here's how Hugo, FrankenPHP, RethinkDB, and Kubernetes come together to create something that doesn't suck.

So, I left WordPress a while back.

The short version: Gutenberg blocks are overengineered rubbish, WordPress doesn’t scale by default, and the ecosystem has become a graveyard of closed-source plugins masquerading as “open source.”

But here’s the thing… I didn’t just want to replace WordPress with another CMS. I wanted to build something that’s actually nice to work with, from both a development and a writing standpoint, and from an operational one too.

This post is about what I built, why I built it that way, and how the whole thing works together.

The Core Idea

Most blogs are either:

  1. Fully static (Hugo, Jekyll, etc.) — fast as hell but no dynamic features
  2. Fully dynamic (WordPress, Ghost, etc.) — flexible but slow and complicated

I wanted both. Static content delivery with dynamic features when I need them.

So I built a hybrid:

  • Hugo generates the static content
  • FrankenPHP serves it with Caddy’s speed
  • PHP handles the dynamic bits (subscriptions, notifications)
  • RethinkDB stores subscriber data
  • Kubernetes keeps it all running

The result: static-site performance with dynamic capabilities when needed, no WordPress bloat required.

The Static Layer: Hugo

Hugo’s great at what it does: it turns Markdown into HTML absurdly fast.

My content structure is pretty straightforward:

hugo/content/
├── posts/          # All blog posts
├── series/         # Multi-part series
├── garden/         # Digital garden experiments
└── about/          # About page

Each post is just Markdown with front-matter:

---
title: Some Title
date: 2025-11-02
series:
  - a-series
tags:
  - databases
---

Content goes here...

Hugo processes this and outputs multiple formats:

  • HTML for the website
  • RSS for feed readers
  • JSON (/index.json) for programmatic access
  • Email-friendly HTML for sending post notifications

That last one is kind of clever. I defined a custom Hugo output format specifically for emails. When a new post goes live, the deployment pipeline fetches this email version and sends it to subscribers. No need to maintain separate email templates.
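In Hugo terms, that's just a custom output format in the site config. A minimal sketch of the idea (my actual config has a bit more to it, and names may differ):

[outputFormats.email]
  mediaType = "text/html"
  baseName = "index"
  path = "email"          # renders to /email/index.html
  isHTML = true

[outputs]
  home = ["HTML", "RSS", "JSON", "email"]

A matching layout (layouts/index.email.html, in Hugo's lookup order) then renders email-friendly markup for the latest post.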

The Dynamic Layer: FrankenPHP + PHP

Here’s where it gets interesting.

FrankenPHP is basically Caddy (the web server) with PHP baked in. It serves static files with Caddy’s performance but can also handle PHP requests in the same process.

My Caddyfile looks roughly like this:

{
    frankenphp
    auto_https off
}

:80 {
    root * /app/public

    # Cache static assets aggressively
    @static {
        path *.css *.js *.woff2 *.png *.jpg
    }
    header @static Cache-Control "public, max-age=604800, immutable"

    # Cache HTML briefly
    @html {
        path *.html
    }
    header @html Cache-Control "public, max-age=300, must-revalidate"

    # API requests go to PHP
    route /api* {
        php_server
    }

    # Everything else: try static file, then 404
    file_server
}

This setup means:

  • Caddy serves static files instantly
  • API requests hit PHP when needed
  • No separate nginx + php-fpm nonsense

The Subscription System

The subscription API lives at /api and handles three things:

1. Subscribing (with Proof-of-Work Rate Limiting)

Instead of a traditional CAPTCHA or rate limiting by IP (which doesn’t work well at the edge), I use proof-of-work to prevent spam. When someone tries to subscribe, they first get a challenge:

// Step 1: Issue a proof-of-work challenge
function issueProofChallenge(string $email): array {
    $difficulty = 22; // Number of leading zero bits required
    $issuedAt = time();
    $expiresAt = $issuedAt + 300; // 5 minute TTL
    $salt = base64UrlEncode(random_bytes(16));

    $payload = [
        'salt' => $salt,
        'difficulty' => $difficulty,
        'issuedAt' => $issuedAt,
        'expiresAt' => $expiresAt,
        'emailHash' => hash('sha256', $email),
        'algorithm' => 'sha256-leading-zero-bits',
    ];

    $serialized = json_encode($payload);
    $signature = hash_hmac('sha256', $serialized, SECRET, true);

    return [
        'token' => base64UrlEncode($serialized) . '.' . base64UrlEncode($signature),
        'salt' => $salt,
        'difficulty' => $difficulty,
        'algorithm' => 'sha256-leading-zero-bits',
    ];
}
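This (and the verifier in step 2) leans on a couple of helpers I haven't pasted: URL-safe base64 and the leading-zero-bit counter. A minimal sketch of what they might look like, assuming hash()'s default hex output:

function base64UrlEncode(string $bytes): string {
    return rtrim(strtr(base64_encode($bytes), '+/', '-_'), '=');
}

function base64UrlDecode(string $text): string {
    return base64_decode(strtr($text, '-_', '+/'));
}

function countLeadingZeroBits(string $hexHash): int {
    $bits = 0;
    foreach (str_split($hexHash) as $hexDigit) {
        $value = hexdec($hexDigit);
        if ($value === 0) {
            $bits += 4; // a zero nibble contributes four zero bits
            continue;
        }
        // count the leading zero bits of the first non-zero nibble, then stop
        $bits += 3 - (int) floor(log($value, 2));
        break;
    }
    return $bits;
}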

The client (JavaScript) must then solve the puzzle by finding a nonce that, when hashed with the salt and email, produces a hash with at least 22 leading zero bits. On an average computer, this takes 1–3 seconds—trivial for a human, expensive for a bot.
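Here's the same loop sketched in PHP for illustration; the real solver is the JavaScript equivalent of this, running in the browser:

// Brute-force a nonce until the hash clears the difficulty bar.
// (Illustrative only: in production this runs client-side in JavaScript.)
function solveProofChallenge(string $salt, string $email, int $difficulty): string {
    $nonce = 0;
    while (true) {
        $hash = hash('sha256', $salt . '|' . $email . '|' . $nonce);
        if (countLeadingZeroBits($hash) >= $difficulty) {
            return (string) $nonce;
        }
        $nonce++;
    }
}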

// Step 2: Verify the proof-of-work solution
function verifyProofAnswer(array $proof, string $email): bool {
    // Unpack the signed token issued in step 1
    $parts = explode('.', $proof['token'] ?? '', 2);
    if (count($parts) !== 2) {
        return false;
    }
    $payloadJson = base64UrlDecode($parts[0]);
    $signatureRaw = base64UrlDecode($parts[1]);

    // Verify signature (prevents tampering)
    $expectedSignature = hash_hmac('sha256', $payloadJson, SECRET, true);
    if (!hash_equals($expectedSignature, $signatureRaw)) {
        return false;
    }

    $payload = json_decode($payloadJson, true);

    // Check expiration
    if ($payload['expiresAt'] < time()) {
        return false;
    }

    // Verify the solution produces enough leading zero bits
    $hash = hash('sha256', $payload['salt'] . '|' . $email . '|' . $proof['solution']);
    return countLeadingZeroBits($hash) >= $payload['difficulty'];
}

// Step 3: If proof is valid, create the subscription
if (verifyProofAnswer($subscription['proof'], $email)) {
    $nonce = bin2hex(random_bytes(32));

    r\table('subs')->insert([
        'id' => r\uuid($email),
        'email' => $email,
        'name' => $name,
        'nonce' => $nonce,
        'confirmed' => null, // set when the subscriber clicks the confirmation link
        'createdAt' => r\now(),
    ])->run();

    // Send confirmation email via Postmark
    $postmark->sendEmailWithTemplate(
        $email,
        'welcome',
        ['action_url' => "https://withinboredom.info/api?action=confirm&nonce=$nonce"]
    );
}

This approach:

  • No CAPTCHA annoyance for legitimate users
  • No IP-based rate limiting (which breaks with CDNs/proxies)
  • Computationally expensive for bots trying to spam
  • Stateless (the challenge is signed, no server-side storage needed)
  • Time-limited (challenges expire after 5 minutes)

2. Confirming

$nonce = $_GET['nonce'] ?? '';

// Update the matching subscriber record. If the nonce matches nothing, this
// is a no-op, but we redirect either way so the endpoint never leaks which
// nonces are valid.
r\table('subs')
    ->filter(['nonce' => $nonce])
    ->update(['confirmed' => r\now()])
    ->run();

// Redirect to the thank-you page
header('Location: /thank-you/');

3. Notifying Subscribers

This one’s only called by the deployment pipeline:

// $postId and $postTitle come in with the deployment pipeline's request

// Skip posts we've already announced; the 'pubs' table tracks what was sent
if (r\table('pubs')->get($postId)->run() !== null) {
    exit;
}

// Fetch email-formatted HTML from Hugo output
$emailHtml = file_get_contents('/app/public/email/index.html');

// Get all confirmed subscribers
$subscribers = r\table('subs')
    ->filter(r\row('confirmed')->ne(null))
    ->run();

// Send via Postmark
foreach ($subscribers as $sub) {
    $postmark->sendEmail(
        'noreply@withinboredom.info',
        $sub['email'],
        'New Post: ' . $postTitle,
        $emailHtml
    );
}

// Track notification to prevent duplicates
r\table('pubs')->insert([
    'id' => $postId,
    'sent' => r\now()
])->run();

Why RethinkDB?

Most people would reach for Postgres or MySQL here. I chose RethinkDB for a few reasons:

  1. JSON-native: No ORM needed, I just store documents
  2. I’m already using it for some other stuff
  3. I maintain the semi-official PHP library for it
  4. It scales so well it is almost hilarious

Plus, for a subscription database with maybe a few thousand records, literally any database works. Might as well use one I already have that will keep on working if a disk catches on fire.

Image Optimization

This part’s actually pretty neat.

During the Docker build, there’s a dedicated “optimizer” stage:

FROM debian:bookworm-slim AS optimizer

# Install optimization tools
RUN apt-get update && apt-get install -y \
    jpegoptim \
    optipng \
    pngquant \
    gifsicle \
    findutils \
    coreutils

# Copy static files
COPY --from=builder /src/hugo/public /app/public

# Run optimization script with cache mount
RUN --mount=type=cache,target=/opt/image-cache \
    /optimize-images.sh /app/public

This means:

  • Each image is only optimised once across all builds
  • BuildKit’s cache mount persists between builds
  • Rebuilds are fast because unchanged images use cached versions
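I won't paste the whole script, but the core of it is content-hashing each image and reusing prior results. A simplified sketch (the real script covers more formats, per the tools installed above):

#!/bin/sh
# optimize-images.sh (sketch): cache results by content hash so each
# unique image is only ever optimized once across builds.
CACHE=/opt/image-cache

find "$1" -name '*.jpg' -o -name '*.png' | while read -r img; do
    hash=$(sha256sum "$img" | cut -d' ' -f1)
    if [ -f "$CACHE/$hash" ]; then
        cp "$CACHE/$hash" "$img"      # reuse the cached, optimized copy
    else
        case "$img" in
            *.jpg) jpegoptim --strip-all "$img" ;;
            *.png) optipng -o2 "$img" ;;
        esac
        cp "$img" "$CACHE/$hash"      # store the result for future builds
    fi
done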

Deployment Pipeline

The whole thing runs on Kubernetes, deployed via GitHub Actions.

When I push to main:

  1. Build the image:

    - uses: docker/build-push-action@v5
      with:
        push: true
        tags: ghcr.io/bottledcode/withinboredom:latest
        cache-from: type=gha
        cache-to: type=gha,mode=max
  2. Update Kubernetes manifest with the new image digest
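     Roughly: resolve the tag to a digest, then pin the manifest to that digest. A sketch using crane (any tool that can resolve a digest works; the sed pattern assumes a standard image: line):

    DIGEST=$(crane digest ghcr.io/bottledcode/withinboredom:latest)
    sed -i "s|image: ghcr.io/bottledcode/withinboredom.*|image: ghcr.io/bottledcode/withinboredom@${DIGEST}|" manifest.yaml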

  3. Deploy to Kubernetes:

    kubectl apply -f manifest.yaml
    kubectl rollout status deployment/withinboredom-info
  4. Check for new posts:

    # Fetch current posts
    NEW_POSTS=$(curl -s https://withinboredom.info/index.json)
    
    # Compare with cached version
    DIFF=$(diff <(echo "$OLD_POSTS") <(echo "$NEW_POSTS"))
    
    # If new posts found, trigger notification
    if [ -n "$DIFF" ]; then
        curl -X PUT https://withinboredom.info/api/notify \
            -d "post_id=$NEW_POST_ID"
    fi

The pipeline automatically detects new posts and sends notifications to subscribers. No manual intervention is required.

Kubernetes Architecture

The Kubernetes setup is pretty standard:

  • 3 replicas by default (for redundancy)
  • HorizontalPodAutoscaler: scales 1-5 pods based on CPU
  • Anti-affinity rules: spreads pods across nodes
  • PodDisruptionBudget: ensures at least 1 pod during updates
  • Ingress: TLS via Let’s Encrypt (cert-manager)
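For the curious, the autoscaler is a bog-standard autoscaling/v2 resource; a trimmed-down sketch (the CPU target here is approximate):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: withinboredom-info
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: withinboredom-info
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80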

The whole blog uses a fraction of a single CPU core and barely any RAM. Static sites are efficient.

What I Learned

1. Hybrid architectures are underrated

Most people think you have to choose between static and dynamic. You don’t. Serve static files for 99% of requests, use dynamic code for the 1% that need it.

2. Build-time optimisation is free performance

Optimising images during the Docker build means zero runtime overhead. Users get smaller images, I use less bandwidth, everyone wins.

3. Content-addressable deployments are bulletproof

By using image digests instead of tags, I know exactly what’s deployed. No “latest tag ambiguity”, no surprises.

4. PHP is genuinely great for this

FrankenPHP makes PHP feel like a first-class citizen again. No php-fpm configuration, no FastCGI nonsense: you write code, and it works.

What I’d Do Differently

1. Add comment support from day one

Right now, there are no comments. I’m planning to add them via a simple PHP endpoint + WebMention support, but I should’ve built that earlier.

The Big Picture

This setup gives me:

  • Static-site performance: Most requests never touch PHP
  • Dynamic features: Subscriptions, notifications, eventually comments
  • Trivial scaling: Kubernetes handles it automatically
  • Cheap hosting: Runs on a single node most of the time
  • No WordPress: The best feature of all

If you’re thinking about leaving WordPress or just want to understand how a modern blog can be built, hopefully this gives you some ideas.

And if you want to see how it all comes together, the whole thing is coming to GitHub … 🔜.


Want to know when I write more posts like this? Subscribe below, and you’ll get an email whenever I publish something new.
