The Stack Behind Vibetoolstack 2026: What's Live, What's Next, What's the Bet

Written by Paul · Last updated: May 8, 2026 · Tags: build-log, stack, astro, sanity, cloudflare, claude-code, mcp

TL;DR

  • Live stack today (May 2026): Astro 5.x · Sanity (project s8j37thx, production dataset) · Cloudflare Workers · Claude Code in the terminal · 8-12 MCP servers wired in. 61 pages live on vibetoolstack.com.
  • Adding next: Paperclip-agent for SEO outreach + link-building (the same Paperclip Greg Isenberg covered on YouTube, 318k views), MCP-driven content-prep agents, and Hermes / OpenClaw as a research layer.
  • The bet: depth-first cluster content can compound faster with an agent layer than it did manually at another live publication, which hit healthy MRR in 12 months in a tighter niche, fully hand-built.
  • What's vapor, labeled vapor: the full agent layer isn't running yet. The page you're reading was written by Paul, not an agent. The roadmap is honest about which pieces ship by end of 2026 and which are still placeholder.
  • Why publish this at all: transparency about which tools actually run VTS is part of the deal. If the recommendations on this site come from someone, you should know what that someone runs in production.

Why I'm Publishing This

Most "my stack 2026" posts are flexes. A list of logos. No version numbers. No deploy times. No mention of what broke. The reader gets a vibe, not a working blueprint.

This isn't that.

Every tool in the stack below is something I run today (or a planned addition I'll mark as planned). Every claim is dated. Every deploy time is real. The version numbers are pinned because a year from now, half of this stack will have moved, and a stack-tour post that doesn't tell you when it was written is worse than no post.

The other reason: Vibetoolstack reviews tools. The credibility check on a tool reviewer is "what do they actually use?" If I tell you Sanity is the right CMS for an indie operator and I run VTS on Sanity, that lands. If I told you Sanity was great while running VTS on Webflow, you'd close the tab. Fair.

So here's the working stack, dated May 2026, version-pinned, with what I'd add and why.

1. The Live Stack (Running Today, May 2026)

The site you're reading right now is rendered from these tools:

Layer · Tool · Version · What it does
Site generator · Astro · 5.x · Static + selective hybrid. Builds 61 pages in ~6s locally.
CMS · Sanity · Studio v3 · Tool reviews, stack guides, custom schemas
Hosting · Cloudflare Workers · current · Workers static deploy via wrangler
Editor / agent · Claude Code · Sonnet 4.6 · Terminal-native pair-coder
Wire layer · MCP servers · 8-12 active · Sanity-MCP, file-MCP, topic-specific rotation
CI · GitHub Actions · current · Drip-publish workflow + lint on push
Email · KIT · current · Same platform the other site runs on. Beehiiv migration test planned for Q3.

Here's what one deploy looks like:

$ bash deploy-live.sh
> astro build → 61 pages, 6.2s
> wrangler deploy → uploaded, version v60c23017
> backup-sanity.mjs → exported production dataset (37s)
> sync-dev.mjs → mirrored to development dataset
> done in 1m 14s

That's the actual time. Not a vibes-time. The whole "edit content in Sanity Studio → ship to vibetoolstack.com" loop is under 90 seconds end-to-end on a normal-sized commit.

What's NOT in the live stack yet:

  • The Paperclip-agent for SEO outreach. Planning to wire in Q3. Currently the SEO/keyword research is run via DataForSEO Python scripts I wrote (also in the repo).
  • Plausible / GA4. No analytics yet; the site is still pre-marketing launch. GSC is the only thing connected.
  • Newsletter. KIT account exists for that other site; VTS list will be set up when the first 8-10 cluster pages are filled.

2. Why This Stack Over Alternatives

Every tool here was a real decision against a named alternative. The short reasoning per layer:

Astro over Next.js. Astro 5.x build times beat Next 15 by 2-3x on a content-heavy site like this. Static-first by default, selective islands. View transitions native. Less framework-overhead for what is essentially a publication. Next.js wins for app-shaped products. VTS is a publication, not an app.

Sanity over Notion-as-CMS, Webflow CMS, and Payload. Sanity's structured-content model fits tool reviews with 40+ fields per document better than Notion's database-on-rails. Webflow CMS is great for visual editing but bad for structured data and content-graph references. Tried it on another live publication for two years, hit walls on auto-generated comparison pages. Payload is the legitimate self-hosted alternative; I'd consider it next time, but Sanity's hosted API + Studio UX wins for solo-operator velocity.

Cloudflare Workers over Vercel. Vercel's pricing changed in March 2026 and made the math worse for high-page-count static sites. Cloudflare Workers' Pages-style deploys are faster, the WAF is included, and the egress isn't metered. The dev experience on Vercel is still nicer; the Cloudflare gap closed enough that it's not worth the markup.

Claude Code over Cursor. Cursor wins for small visual UI work. Claude Code wins for terminal-native multi-file refactors, plan-mode workflows, and any project where MCPs are central. I run both: Cursor when I'm pushing pixels, Claude Code for actual building. For VTS specifically, Claude Code is in the terminal 90% of the time.

MCP over custom integrations. Two years ago, wiring Claude into Sanity meant a 200-line custom adapter. With MCP, it's a config file. Same story for GitHub, Linear, anything else. The Anthropic-shipped MCP standard saved me probably 40 hours of glue-code already, and it composes: running 8 MCPs in parallel doesn't break anything.
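
For scale, here is what the "it's a config file" claim looks like in practice: a minimal sketch of a Claude Code .mcp.json. The package names and env vars below are illustrative placeholders, not the exact servers VTS runs.

```json
{
  "mcpServers": {
    "sanity": {
      "command": "npx",
      "args": ["-y", "some-sanity-mcp-server"],
      "env": {
        "SANITY_PROJECT_ID": "s8j37thx",
        "SANITY_DATASET": "production"
      }
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "./content"]
    }
  }
}
```

Each entry is one wired-in server; adding or swapping one is a few lines here, which is the whole point versus a 200-line adapter.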

The bottom line: every choice is documented, dated, and replaceable. If a better tool ships next quarter, the swap is one config away. Stacks shouldn't be religious.

3. The Content-Ops Layer (Where the Real Leverage Sits)

The stack above is the infrastructure. What actually produces output is the content-ops layer on top.

Sanity tool-schema. Each tool review is a Sanity document with ~40 structured fields: pricing tiers, features array, alternatives refs, common comparisons, methodology block, update log, FAQ, SEO. Auto-rendered into a /tools/[slug] page plus 5+ auto-generated sub-pages (/tools/[slug]/pricing, /alternatives/[slug], /compare/[slug]-vs-[other]). One deep review → 6+ ranked URLs without re-writing content.
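
The fan-out from one structured document to 6+ URLs is mechanical once the fields exist. A toy sketch of the expansion logic; the function and field names are hypothetical, not the actual VTS code:

```javascript
// Expand one structured tool review into the URL set it renders to.
// Sketch only: `slug` and `comparisons` mirror the schema described
// above; the real site derives these inside Astro's routing layer.
function expandToolRoutes(tool) {
  const base = `/tools/${tool.slug}`;
  return [
    base,                          // main review page
    `${base}/pricing`,             // auto-generated pricing sub-page
    `/alternatives/${tool.slug}`,  // alternatives page
    ...tool.comparisons.map(
      (other) => `/compare/${tool.slug}-vs-${other}`
    ),
  ];
}

const routes = expandToolRoutes({
  slug: "cursor",
  comparisons: ["claude-code", "windsurf", "zed"],
});
console.log(routes.length); // 6 URLs from one document
```

One deep review with three named comparisons yields six ranked URLs; every extra comparison ref in the document is another URL for free.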

Slop-filter as pre-publish gate. Mandatory. Every article runs through a 27-pattern AI-slop filter (em-dashes, "not only... but also", hype superlatives, etc.) plus 10 tech-slop patterns. Fails the filter → rewrite. Ships only after passing. This is the single biggest reason VTS content reads differently from generic SEO listicles.
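
Mechanically, a gate like this is just a pattern table plus a scan. A toy sketch with three illustrative patterns; the real filter's 27 + 10 patterns are not reproduced here:

```javascript
// Toy pre-publish slop filter: scan a draft against a pattern table
// and fail the gate on any hit. Three illustrative patterns only.
const SLOP_PATTERNS = [
  { name: "em-dash", re: /\u2014/ },
  { name: "not-only-but-also", re: /not only\b[\s\S]{0,80}\bbut also\b/i },
  { name: "hype-superlative", re: /\b(game-changing|revolutionary|cutting-edge)\b/i },
];

function slopCheck(draft) {
  const hits = SLOP_PATTERNS
    .filter((p) => p.re.test(draft))
    .map((p) => p.name);
  return { pass: hits.length === 0, hits };
}

// A hype superlative trips the gate, so this draft gets a rewrite:
console.log(slopCheck("This revolutionary tool changes everything."));
```

Fail means rewrite, pass means ship; the useful part is that the returned hit names tell you exactly which habit to fix.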

vts-blog-article skill. A Claude Code skill (markdown file with structured prompts) that handles the full article workflow: required-inputs check, pre-writing research protocol, structure template, FAQ patterns, disclosure-block insertion. Pulls from the brand-identity skill for voice rules and from the slop-filter for the gate.

Where Claude Code accelerates today:

  • First-draft research synthesis (web fetches, doc parsing, comparison-table builds)
  • Markdown-to-Sanity-import conversion (Portable Text + structured fields)
  • Schema work and TypeScript on the Astro side
  • Refactors across the content + code layers in one session
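
The Markdown-to-Sanity conversion step, in miniature: a toy paragraphs-only converter into Portable Text shape. The real importer handles marks, links, and images; this sketch is illustrative, not the VTS pipeline.

```javascript
// Toy markdown-paragraph -> Portable Text conversion. Each blank-line-
// separated paragraph becomes one Portable Text block with a single
// plain span. Real conversion also handles marks, links, and images.
function paragraphsToPortableText(markdown) {
  return markdown
    .split(/\n{2,}/)
    .filter((p) => p.trim().length > 0)
    .map((p, i) => ({
      _type: "block",
      _key: `block-${i}`,
      style: "normal",
      markDefs: [],
      children: [
        { _type: "span", _key: `span-${i}`, text: p.trim(), marks: [] },
      ],
    }));
}

const blocks = paragraphsToPortableText("First para.\n\nSecond para.");
console.log(blocks.length); // 2 blocks, ready for a Sanity import
```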

Where it doesn't:

  • Voice. Every article gets a human pass for tone. Claude can match the slop-filter, but the voice-marker ("would Paul actually say this?") is a human check.
  • Methodology decisions. Picking which 6 tools belong in a comparison and what the angle is. The "what's the article" call is human.
  • Pre-publish brand-audit. Slop-filter passes are necessary but not sufficient.

4. The Agent Layer I'm Adding Next

This is the part that's mostly planned, not running. Marking honestly.

Paperclip for SEO outreach + link-building. The same Paperclip tool Greg Isenberg covered on his YouTube channel (318k views). The pitch: AI agents that handle outreach, broken-link building, listing submissions, and review-site outreach as a productized service. I already run Paperclip in a parallel agency-side setup; the VTS internal use is the next step. Target wire-in: Q3 2026.

MCP-driven content-prep agents. Claude Skills (the Anthropic feature that turns markdown skill files into reusable agent capabilities) for: keyword-cluster expansion, comparison-table research, FAQ pattern generation, internal-link suggestion. Most of these are scripted today; converting to skill-based agents is the lift.

Hermes / OpenClaw as a research layer. Both have been on my eval list since they shipped. The use-case is "agent that crawls 10 competitor articles, extracts the comparison-data structure, surfaces the gaps." Currently this is manual work I do during pre-writing recon. If the agents handle 60% of that surface, the time-per-article drops materially.

What's vapor (labeled):

  • Full autonomous content-generation. I don't believe in it for this brand at this voice slider. The slop-filter is too tight, the voice marker is too specific. AI-assisted, never AI-authored.
  • Multi-agent orchestration for content scheduling. Conceptually interesting, operationally premature. Single-operator + Claude Code is faster than 5 agents arguing.

5. What I'm Betting On

The bet isn't a number. It's a thesis.

Thesis 1: Depth-first cluster content compounds. Another live publication of mine proved the playbook in a tight niche. 1,100+ articles, DR 47, real recurring revenue after 12 months. The question for VTS is whether the same playbook works in a much bigger and faster-moving niche (AI-native tooling) where Greg, AI Jason, Riley Brown, and a dozen others are putting up huge engagement numbers on similar topics. The bet: depth on the page beats velocity on YouTube for the long-tail SEO + AI-citation pickup that compounds.

Thesis 2: Agents reshape the cost-per-article curve. That site was hand-built. Manual research, manual outreach, freelance-driven backlink work. With Paperclip-agent for outreach + AI-assisted research + the slop-filter pipeline, the marginal cost per quality article drops materially. Not zero. Not "fully automated." But low enough that the depth-first playbook scales 3-5x faster than the manual version.

Thesis 3: Brand-first publisher beats personal-brand creator. A site called Vibetoolstack outlasts a single operator's attention. The brand can hire contributors, get acquired, run as an asset. A personal-brand timeline can't. Same reason that other site is a brand site, not a Paul-McSomething.com.

What kills the bet:

  • If Google + AI-search shifts attention away from depth-pages toward video summaries, the SEO compounding thesis weakens.
  • If the AI-tool category mints 50 new "Vibetoolstack" competitors in 6 months, the differentiation gets harder.
  • If the agent layer turns out to add 30% efficiency instead of 3-5x, the velocity thesis breaks.

I think all three are tractable. The playbook on the other site held against similar shifts in its niche. The voice + methodology + EU-stack-niche angles are real differentiators that template-driven competitors can't replicate fast. And the agent layer is real-enough already to be worth betting on.

That's the bet.

FAQ

Why didn't you build VTS on Webflow like that other site?

That other site launched on Webflow when it was the right call for that project: visual editor, fast iteration on landing-style pages, decent CMS for under 50 firms. That stopped being right around year two, when the structured-content needs (40+ fields per firm, auto-generated comparison pages, programmatic sub-pages) hit the limits. VTS started fresh with that learning baked in. Sanity + Astro fits "60+ articles per tool, 8 hubs, auto-rendered cluster pages" much better than Webflow ever could.

Why Sanity over Notion or Payload?

Notion-as-CMS is fine for a personal blog. It breaks at the structured-content boundary. You can fake a tool review as a Notion database entry, but you can't reliably reference five other tools, auto-generate comparison pages, and version-control schema changes. Payload is the legitimate self-hosted alternative; if I were starting today and had the engineering bandwidth, I'd evaluate it seriously. Sanity wins for solo-operator velocity because the Studio UX is mature, the GROQ query language is fast to write, and the hosted API removes one whole infrastructure category.

Why Cloudflare Workers over Vercel?

Pricing math, mostly. After Vercel's March 2026 pricing changes, the dollars-per-page-view at the page-counts I'm planning got worse. Cloudflare Workers' deploy story is now solid enough (wrangler is mature, the Pages-Workers convergence is happening), the WAF is included rather than a $20/mo add-on, and egress isn't metered. Vercel still wins on dev experience polish. The gap closed enough that it's not worth the markup for a content-heavy publication.

Is the agent layer ready?

No. The honest answer is "Paperclip wired in for SEO outreach is Q3, MCP-content-prep agents are mid-2026, Hermes/OpenClaw eval is parallel." The page you're reading was written by Paul, not an agent. The slop-filter is human-run. The methodology is operator-decided. AI-assisted, not AI-authored. The "agent layer" is a planned extension, not what runs today.

How much time does this stack take to maintain?

Light. Astro + Sanity + Cloudflare auto-deploy on push. The actual time goes into content (5-10 hours per pillar-length tool review) and into research / pre-writing recon (1-2 hours per piece). Maintenance of the stack itself is maybe an hour a month: version bumps, minor schema additions, occasional MCP server swap. The compounding leverage is in the content layer, not the infrastructure.

What's missing that you haven't picked yet?

A few things I'm still evaluating: (1) which analytics, Plausible vs GA4 vs both, decision postponed until first traffic worth measuring; (2) which OG-image generator, testing two options for per-page covers; (3) whether to add Algolia or Sanity's built-in search for site-search at 200+ pages; (4) the exact MCP-content-prep agent setup. I'd rather pick late and right than early and wrong.

Last checked: May 8, 2026 · Stack version: production v60c23017 · Methodology: hands-on, every tool above runs in this exact configuration today (or is labeled as planned). Update Log will capture the agent-layer additions as they land.