Bashar Ayyash

Posted on • Originally published at yabasha.dev

The "Cache Handshake": How Laravel Events Control Next.js 16 ISR

As a Tech Lead building my own platform, I live by an old craftsman proverb: "Measure twice, cut once." We spend weeks architecting client systems with precision, yet when it came to my own portfolio at Yabasha.dev, I was brute-forcing cache invalidation like a junior developer running php artisan cache:clear on a cron job. The irony wasn't lost on me.

I built Yabasha.dev as a living showcase — not just static pages, but a dynamic playground where I could demonstrate real full-stack architecture. The stack seemed obvious: Laravel 12 for its elegant API and Filament admin panel, Next.js 16 for blistering performance with ISR, and Redis as the connective tissue. What wasn't obvious was how to make these two beasts talk to each other without me playing telephone operator every time I published a new article.

The Decoupling Dilemma

The architecture is clean on paper. Laravel manages content. Next.js renders it. ISR promises the best of both worlds: static speed with dynamic freshness. But here's the rub — ISR is a black box. Next.js holds all the cards for revalidation, and Laravel has no native way to whisper "hey, that blog post changed" across the wire.

My first iteration was naive: a simple webhook from Laravel to Next.js's /api/revalidate. It worked until it didn't. A 500 error during deployment meant stale content for hours. No retry logic. No idempotency. No visibility. I was flying blind, hoping my cache invalidated properly. That's not engineering; that's wishful thinking.

The Hybrid Power Stack

I chose this specific combination for brutal efficiency:

  • Backend: Laravel 12 API with Filament admin
    • Why: Developer experience matters. Filament gives me a production-ready admin in hours, not days. Laravel's event system is my nervous system.
  • Frontend: Next.js 16 App Router with ISR
    • Why: I'm optimizing for Web Vitals and user experience. ISR lets me regenerate pages on-demand without rebuilding the entire site. The App Router's granular caching is chef's kiss.
  • Orchestration: Redis + Laravel Queues
    • Why: I need atomic operations and guaranteed delivery. Redis Streams would be overkill; simple lists with rpush/blpop give me the reliability without the ceremony.

The Silent Failure Mode

Cache invalidation is computer science's second hardest problem, and my setup had four critical failure modes:

  1. No Acknowledgment: Laravel would fire a webhook and pray. Next.js might receive it, might process it, might fail silently. I had zero observability.
  2. Race Conditions: If I updated a post twice in quick succession, two revalidation requests would race. The loser would sometimes revalidate stale data, creating a cache inconsistency nightmare.
  3. Deployment Windows: During a Next.js deployment, the revalidation endpoint would be down. Laravel's webhook would fail, and I'd have no retry mechanism.
  4. Cascading Invalidations: When I updated a category, I needed to revalidate the category page, all posts in that category, and the homepage. My naive webhook couldn't express this graph of dependencies.

I was spending more time manually verifying cache state than actually writing content.

The Solution: A Cache Handshake Protocol 🎛️

The breakthrough was treating cache invalidation like a distributed transaction. I built a Cache Handshake Protocol — a two-phase commit between Laravel and Next.js with Redis as the referee.

Phase 1: Intent & Queuing

When content changes, Laravel emits a PostUpdated event with a unique revalidation_id. A listener queues a job, but here's the key: the job doesn't call Next.js directly. It writes to a Redis list called revalidation:queue and creates a hash revalidation:{id}:status with initial state pending.

// app/Events/PostUpdated.php
class PostUpdated
{
    use SerializesModels;

    public function __construct(
        public Post $post,
        public string $revalidationId
    ) {}
}

// app/Listeners/QueueRevalidation.php
class QueueRevalidation implements ShouldQueue
{
    public function handle(PostUpdated $event): void
    {
        $payload = [
            'revalidation_id' => $event->revalidationId,
            'type' => 'post',
            'slug' => $event->post->slug,
            'dependencies' => [
                'blog?category=' . $event->post->category->slug,
                'blog',
                '' // homepage
            ]
        ];

        Redis::rpush('revalidation:queue', json_encode($payload));

        // Create handshake record
        Redis::hmset("revalidation:{$event->revalidationId}:status", [
            'state' => 'pending',
            'attempts' => 0,
            'created_at' => now()->timestamp
        ]);
    }
}

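For completeness, firing the event is the easy part: any trigger works, whether a model observer or a Filament hook. A minimal sketch with a hypothetical observer and a UUID as the revalidation ID (neither is shown in the original code):

// app/Observers/PostObserver.php (hypothetical wiring)
use App\Events\PostUpdated;
use App\Models\Post;
use Illuminate\Support\Str;

class PostObserver
{
    public function saved(Post $post): void
    {
        // Any unique token works as the revalidation ID; a UUID is the simplest choice.
        event(new PostUpdated($post, (string) Str::uuid()));
    }
}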

Phase 2: Processing & Acknowledgment

A Laravel queue worker runs every 10 seconds, pulling jobs with blPop for atomicity. It calls Next.js's revalidation API with a signed JWT containing the revalidation_id.

// app/Jobs/ProcessRevalidation.php
class ProcessRevalidation implements ShouldQueue
{
    public function handle(): void
    {
        $job = Redis::blPop(['revalidation:queue'], 5);

        if (!$job) return;

        $payload = json_decode($job[1], true);
        $revalidationId = $payload['revalidation_id'];

        // Increment attempt counter
        Redis::hIncrBy("revalidation:{$revalidationId}:status", 'attempts', 1);

        try {
            // Sign the request
            $token = JWT::encode([
                'sub' => $revalidationId,
                'exp' => now()->addMinutes(5)->timestamp
            ], config('app.revalidation_secret'), 'HS256');

            $response = Http::withToken($token)
                ->post(config('app.nextjs_url') . '/api/revalidate', $payload);

            if ($response->failed()) {
                throw new RevalidationFailedException(
                    "Next.js returned {$response->status()}"
                );
            }

            // Move to 'acknowledged' state
            Redis::hset(
                "revalidation:{$revalidationId}:status",
                'state',
                'acknowledged'
            );

        } catch (\Exception $e) {
            $attempts = Redis::hget(
                "revalidation:{$revalidationId}:status",
                'attempts'
            );

            if ($attempts < 3) {
                // Re-queue for another attempt (a delay/backoff could be added here)
                Redis::rpush('revalidation:queue', $job[1]);
                Redis::expire("revalidation:{$revalidationId}:status", 3600);
            } else {
                Redis::hset("revalidation:{$revalidationId}:status", 'state', 'failed');
                Log::error("Revalidation {$revalidationId} failed permanently");
            }
        }
    }
}

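One straightforward way to get that every-10-seconds cadence is the Laravel scheduler. A minimal sketch (the scheduling setup is my assumption; it isn't shown above):

// routes/console.php (illustrative scheduling)
use App\Jobs\ProcessRevalidation;
use Illuminate\Support\Facades\Schedule;

// Dispatch the job every ten seconds; blPop inside the job then blocks
// for up to 5 more seconds waiting for queued revalidations.
Schedule::job(new ProcessRevalidation)->everyTenSeconds();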

Phase 3: Completion & Verification

Next.js receives the request, revalidates the paths, then calls back to Laravel with the same revalidation_id to complete the handshake.

// app/api/revalidate/route.ts
import { revalidatePath } from 'next/cache';
import { NextRequest, NextResponse } from 'next/server';
import { Redis } from '@upstash/redis';

const redis = new Redis({
  url: process.env.UPSTASH_REDIS_URL!,
  token: process.env.UPSTASH_REDIS_TOKEN!,
});

export async function POST(request: NextRequest) {
  const payload = await request.json();
  const { revalidation_id, slug, dependencies } = payload;

  // Verify the signed JWT here (verification omitted for brevity), then revalidate
  try {
    // Revalidate primary path
    revalidatePath(`/blog/${slug}`);

    // Revalidate dependencies in parallel
    await Promise.all(
      dependencies.map((path: string) => revalidatePath(`/${path}`))
    );

    // Write completion marker to Redis
    await redis.set(
      `revalidation:complete:${revalidation_id}`,
      '1',
      { ex: 3600 }
    );

    // Callback to Laravel
    await fetch(`${process.env.LARAVEL_URL}/api/revalidation/complete`, {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${process.env.REVALIDATION_SECRET}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ revalidation_id }),
    });

    return NextResponse.json({ success: true });

  } catch (error) {
    console.error('Revalidation failed:', error);
    return NextResponse.json(
      { error: 'Revalidation failed' },
      { status: 500 }
    );
  }
}


The final piece: Laravel marks the handshake complete when it receives the callback, giving me full observability.

// routes/api.php
Route::middleware('auth:sanctum')->post('/revalidation/complete', function (Request $request) {
    $revalidationId = $request->input('revalidation_id');

    Redis::hset(
        "revalidation:{$revalidationId}:status",
        'state',
        'completed'
    );

    // Log success, emit metrics, etc.
    Log::info("Cache handshake completed", [
        'id' => $revalidationId,
        'duration' => now()->timestamp - Redis::hget(
            "revalidation:{$revalidationId}:status",
            'created_at'
        )
    ]);

    return response()->json(['status' => 'acknowledged']);
});


Dynamic Model Selection & Circuit Breakers

Here's where this gets interesting. Not all revalidations are equal. Updating an old blog post doesn't need the urgency of fixing a typo on my homepage. I implemented a revalidation priority model that adjusts worker count and retry logic based on path patterns.

// app/Services/RevalidationStrategy.php
class RevalidationStrategy
{
    public function getPriority(string $path): array
    {
        return match(true) {
            $path === '' => ['workers' => 5, 'timeout' => 10, 'retry' => 5],
            str_starts_with($path, 'blog/') => ['workers' => 2, 'timeout' => 30, 'retry' => 3],
            default => ['workers' => 1, 'timeout' => 60, 'retry' => 2],
        };
    }

    public function shouldCircuitBreak(string $revalidationId): bool
    {
        $failures = Redis::get("circuit:failures:nextjs") ?? 0;

        if ($failures > 10) {
            // Stop hammering a potentially down service
            Redis::setex("circuit:open:nextjs", 300, '1');
            Log::critical("Circuit breaker opened for Next.js revalidation");
            return true;
        }

        return false;
    }
}

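The class above only decides when to trip the breaker; here's a sketch of how it could plug into ProcessRevalidation (the Redis keys match the ones above, but the placement and decay window are illustrative, not the original wiring):

// Inside ProcessRevalidation::handle() (illustrative wiring)

// Before calling Next.js: if the breaker is open, park the job and bail out.
if (Redis::exists('circuit:open:nextjs')) {
    Redis::rpush('revalidation:queue', $job[1]);
    return;
}

// In the catch block: count the failure so shouldCircuitBreak() can trip.
Redis::incr('circuit:failures:nextjs');
Redis::expire('circuit:failures:nextjs', 600); // let failure counts decay after 10 minutes

// After a successful acknowledgment: reset the counter.
Redis::del('circuit:failures:nextjs');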

I also added a dry-run mode that simulates revalidations during Next.js deployments. When a preview deployment is built, the app runs with APP_ENV=preview, and my Laravel listener dumps revalidation payloads to the logs instead of calling the API. No more surprise failures during deploy windows.
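The guard itself is tiny. A minimal sketch of that branch inside QueueRevalidation::handle(), just before the Redis write (the exact log message is illustrative):

// app/Listeners/QueueRevalidation.php (illustrative dry-run guard)
if (app()->environment('preview')) {
    Log::info('Dry-run revalidation (preview deploy)', $payload);
    return; // skip the Redis write and the downstream Next.js call entirely
}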

The Workflow Transformation

The impact was immediate and profound:

  1. I create content in Filament — write a technical deep-dive, hit "Publish."
  2. Laravel emits PostUpdated — the event carries the slug and category graph.
  3. The handshake protocol activates — Redis queues the job, worker picks it up in ~5 seconds.
  4. Next.js revalidates atomically — all paths regenerate in parallel.
  5. Acknowledgment returns to Laravel — I see a green checkmark in Filament's activity log.
  6. Metrics populate in Grafana — I can track revalidation latency, success rates, and path patterns (both read the status hash sketched below).
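A minimal lookup of that hash (illustrative; not the actual Filament or Grafana integration):

// Read the handshake state for display or metrics (field names match the
// hash written in QueueRevalidation; the surrounding integration is omitted).
$status = Redis::hgetall("revalidation:{$revalidationId}:status");
// e.g. ['state' => 'completed', 'attempts' => '1', 'created_at' => '1730000000']

$latencySeconds = now()->timestamp - (int) $status['created_at'];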

Before: I'd manually hit revalidation endpoints, check the logs, sometimes forget, and have stale content for hours.

After: I literally don't think about caching. It's a solved problem.

The cognitive load drop is the real win. I can focus on building features instead of babysitting cache state. The system is observable, resilient, and most importantly — boring. Boring infrastructure is good infrastructure.

Build Systems That Disappear

The Cache Handshake taught me a broader lesson: the best automation isn't flashy. It's invisible. It handles edge cases you haven't thought of yet. It fails gracefully. It provides observability when you need it and gets out of the way when you don't.

This pattern isn't just for Laravel and Next.js. The core idea — treat cross-system cache invalidation as a distributed transaction with acknowledgment — applies to any decoupled architecture. The Redis-backed state machine, the signed JWTs, the circuit breakers... these are the details that separate demo-grade from production-ready.

My portfolio isn't just a showcase of UI polish. It's a demonstration that I think in systems, tolerate complexity where it matters, and eliminate it everywhere else.

If you're wrestling with ISR reliability or event-driven architectures, you can see this system running live on Yabasha.dev where the content is always fresh.


I'm obsessed with systems that amplify human creativity. If you're building something similar — or just want to argue about cache invalidation strategies — let's chat. I'm always down for a spirited architecture debate.

Top comments (11)

david duymelinck

I'm more backend orientated, so I had to find out what Next ISR was. As I understand it, it refreshes only the changed pages/parts. What happens if a page is added or removed and is in the main menu? Does it update all the pages?
It feels like it needs at least two events, one for the page and one for the menu change.

Looking at your site, I don't get why you need to decouple the backend from the frontend; most pages are static. So by using all those extra patterns and abstractions you are making it more difficult than it needs to be.

Bashar Ayyash

Hey! Great questions — you're hitting on the exact nuances that make ISR tricky in practice. Let me clarify a few things:

On menu updates and ISR: You're absolutely right that it needs coordination. ISR revalidates pages, not components, so if your main menu data changes, you must explicitly revalidate every page that renders that menu. That's what the dependencies array in my PostUpdated event handles — when I update a category, the handshake triggers revalidation for the category page, all post pages within it, and the homepage (which contains the main nav). Without this, you'd serve stale navigation links, which is a silent UX killer.

You're spot-on about needing at least two events in theory, but I collapse them into one atomic operation: the content change event carries the full invalidation graph. This avoids race conditions where the menu revalidates before the new page exists.

Why decouple a "mostly static" portfolio? Fair challenge. Yabasha.dev is a showcase, but it's also a production testbed for patterns I use in client systems with 10k+ dynamic pages, multi-tenant APIs, and separate backend teams. The decoupling lets me:

  • Scale the Laravel API independently during heavy admin sessions
  • Run multiple frontends (I have a mobile app prototype consuming the same API)
  • Swap Next.js for something else later without rewriting the backend
  • Demonstrate real-world architecture, not just toy examples

If I hard-coupled them, I'd be showing devs how to build a demo, not a system. The Cache Handshake looks verbose until you're debugging why a webhook failed at 3 AM during a deployment — then that "extra abstraction" becomes the difference between sleep and a mobile alert.

You're right that a simple site doesn't need this. But my goal is showing production-grade resilience: idempotency, retry logic, circuit breakers, observability. Clients don't pay me for code that works on sunny days; they pay for systems that fail gracefully when things go sideways.

Happy to dive deeper into any of these points — especially the ISR mechanics if you're exploring it for your own backend work!

david duymelinck

"Yabasha.dev is a showcase"

I can understand that.

I don't agree with the sentiment that only backend-frontend separated websites scale or can be multi-tenant.
Separated sites are harder to maintain because they involve more coordination, as your post shows.

The argument of separate frontend and backend teams is a false argument because Next is a full-stack framework, so backend knowledge is needed anyway.
In my opinion there should never be separated teams based on the technical layers of a product. The only thing that causes is division, and that is the worst thing for a product.

Bashar Ayyash

You're absolutely right that separation adds coordination overhead, and for many projects, a monolithic Next.js app is simpler and better. My approach targets scenarios where backend and frontend must scale independently — multiple API consumers (mobile apps, partners), or when teams genuinely need autonomy.

You're correct that Next.js is full-stack, but "full-stack" doesn't mean "one-size-fits-all." The moment you have non-Next.js consumers or business logic that transcends UI concerns, a dedicated API layer becomes necessary.

The real enemy isn't decoupling — it's accidental complexity. I built this system to showcase intentional architecture for complex scenarios, not to advocate separation for every project. For a simple portfolio? You're spot on: keep it simple.

Thanks for the pushback — it's exactly the kind of healthy skepticism that keeps us from over-engineering!

david duymelinck

"backend and frontend must scale independently"

You don't need Laravel and Next. Just add static generation functionality to your Laravel/Next application.
To deploy the static site you can use git. Any update -> create commit -> push to repository -> let static hosting(s) listen to the repository. Then there are surgical updates, because of how git works.

"multiple API consumers (mobile apps, partners)"

Just use specific API endpoints when the data for the different tenants is too different from the main API

"teams genuinely need autonomy"

Why would a frontend/backend team need autonomy working on the same application?
Sure there can be teams that develop a frontend/backend product that can be used by other teams. But then that product has a whole different lifecycle than a website.

"full-stack" doesn't mean "one-size-fits-all.". The moment you have non-Next.js consumers or business logic that transcends UI concerns, a dedicated API layer becomes necessary.

I agree with the first part.
If you create an API with Next it is just JSON, like any other backend framework. With that logic none of the frameworks could be used for both a website and an API. Most APIs I've created don't follow the website output, so should I have used multiple frameworks?

"The real enemy isn't decoupling — it's accidental complexity"

You agreed that decoupling has overhead, so it is creating complexity.
Decoupling can be a valid solution, but it is more niche than how people use it now.
Another example is SPA. It is a valid solution, but it is not the right fit for all websites.

After all of that I do understand you want to showcase you can create more complex solutions with a solid foundation.

Bashar Ayyash

You're right—static generation covers most cases, and my portfolio uses ISR. The Laravel layer isn't for rendering; it's the AI agent's brain handling LLM benchmarking and metadata generation. That logic stays out of Next.js.

When mobile apps need different auth flows and data shapes, those "specific endpoints" become spaghetti. BFF pattern exists for a reason.

You're correct—small teams don't need separate pipelines. But 15+ AI builds/day while keeping the frontend stable? Decoupling becomes sanity, not overhead.

Not showcasing complexity; containing it. For a standard blog? Overkill. For autonomous content agents? Necessary.

david duymelinck

"The Laravel layer isn't for rendering; it's the AI agent's brain handling LLM benchmarking and metadata generation."

This sentence doesn't make sense.

those "specific endpoints" become spaghetti.

Explain how that is possible? If endpoints are used for multiple tenants, then they can become overly complex. When you have for example website/page, mobile/page and whatever else that is too different, then there are defined outcomes that stay in their own lane.

"BFF pattern"

Backend for frontend is the worst named pattern. Is there a backend for backend pattern?
Backends are used to communicate with different systems, they don't care if it is for a human interface or for another machine.

"15+ AI builds/day"

Not sure what you mean by AI builds? If you mean 15+ changes, that doesn't require a complex system.

"Not showcasing complexity; containing it. For a standard blog? Overkill. For autonomous content agents? Necessary."

Your post isn't about AI, why even drag content agents into the conversation?
And autonomous agents? Do you really want AI to change the content of your site unsupervised?

I don't have a blog in mind when I'm giving my examples. I just used it because you are using your blog as a showcase.

Bashar Ayyash

You're operating from a traditional web dev playbook that was already aging five years ago. Let me clarify why your way of thinking doesn't align with modern full-stack architecture, especially when AI enters the picture.

"This sentence doesn't make sense."

If you read my AI agent post, you'd understand: the Laravel layer isn't serving HTML—it's an AI agent's runtime environment. It benchmarks LLM outputs, generates structured metadata, and orchestrates multi-step content pipelines. This isn't a CRUD blog; it's an autonomous system. The "backend" is literally the brain, not just a data layer. That's why your confusion about "endpoints for multiple tenants" misses the point—the complexity isn't in the endpoints, but in the orchestration logic that ensures the content doesn't break in production.

"Endpoints become spaghetti."

You're thinking of simple multi-tenant routing. I'm talking about semantic divergence. When new content is added, the same endpoint /api/content must serve:

  • The canonical version for web
  • A condensed version for mobile
  • An SEO-optimized version for crawlers
  • A metadata-rich version for internal analytics

If you funnel all that through one endpoint, you get conditionals scattered everywhere—that's spaghetti. The BFF pattern solves this by letting each consumer have its own tailored backend, even if they share underlying services. It's about contract stability per consumer.

"Backend for frontend is the worst named pattern."

Names are semantics. The pattern exists because "one backend to rule them all" breaks down when UIs have conflicting needs. My mobile app needs aggressive caching; my web app needs real-time updates. A single API can't optimize for both without becoming a monstrous if (client === 'mobile') switchboard. BFF isn't "backend for backend"—it's consumer-oriented backends. It's the difference between "build an API" and "build APIs for specific experiences."

"15+ AI builds doesn't require a complex system."

You're conflating "changes" with "builds." An AI build is a full pipeline run: human draft → benchmark → refine → human review → schedule. Each step touches multiple services: LLM APIs, vector DB for context, Redis for locking, queue workers for async processing. Orchestrating that without a decoupled architecture means your web server blocks while waiting for OpenAI. That's not complexity for its own sake—it's necessary concurrency.

"Why drag AI agents into this?"

Because this entire architecture exists to support them. The cache handshake isn't for manual blog posts—it's so an automation process can publish at 3 AM without me waking up to clear caches. The decoupling isn't premature optimization; it's defensive design against emergent behavior. When an agent decides to A/B test two content variants simultaneously, you don't want it fighting itself for cache locks.

"Do you really want AI changing content unsupervised?"

This reveals the deepest gap: you think AI agents are scripts. Modern agents are collaborators. They draft, I approve. They schedule, I supervise. They don't run unsupervised—they run semi-autonomously, with guardrails, by design. That's the whole point of the Laravel layer: it's a supervisor, not just a backend.

Your mental model is "backend serves frontend." Mine is "backend orchestrates autonomous systems that include frontend delivery." That's not division—it's compositional architecture. And in 2025, it's the difference between a site and a platform.

david duymelinck • Edited

Your last comment reads like a mix of insults, assumptions and mixed messages.

Because I don't decouple backend and frontend when it is not necessary, I'm writing bad code?
Because I don't use AI in my application, I'm obsolete?

You assume how I think about AI agents and what my mental model is?

You write one endpoint should have different outputs, but then you negate it by saying it shouldn't be a single endpoint.
You mention autonomous multiple times, but you supervise and intervene. That doesn't match the definition of autonomous.

Using terms like semantic divergence and compositional architecture doesn't prove your logic is better. I don't play buzzword bingo.

In any year, the simplest code will provide the most value, especially when the requirements are complex.

Bashar Ayyash

On decoupling & code quality: I didn't suggest your code is "bad." I explained why my architecture fits my requirements—AI-driven content pipelines with multiple consumers. If you're building a simpler system, monolithic is absolutely the right choice. The "most simple code" principle you cite is correct in most cases.

On AI & obsolescence: Traditional web dev isn't obsolete. But when your product is AI agents (as mine is), the architecture must accommodate non-deterministic, long-running processes that traditional patterns weren't designed for. My frustration wasn't with you, but with the limitation of tools that assume predictable request/response cycles.

On endpoint contradictions: Let me be concrete: /api/content can serve multiple clients, but when an AI generates variants (canonical vs. mobile vs. SEO), the response shapes diverge so much that conditional logic becomes spaghetti. The BFF pattern says: don't put that complexity in one endpoint—use separate, optimized backends per consumer. That's not contradiction; it's avoiding conflation.

On "autonomous": You're right—my usage is imprecise. I mean "semi-autonomous with guardrails." The AI drafts and schedules; I approve and supervise. True autonomy would be reckless. But the capability to operate overnight means I need async orchestration, retry logic, and state management—hence the Cache Handshake.

On complexity: We agree that simple code delivers value. But "simple" doesn't mean "naive." A retry queue with Redis isn't complex—it's boring infrastructure that prevents 3 AM pages. The complexity isn't for its own sake; it's for reliability at the edge cases.

I respect your focus on simplicity. If you're building systems without AI agents or multi-tenant concerns, skip my patterns. But if your requirements are complex (as mine are), the "simple" solution is often the one that fails silently at scale.
