Anthropic Dreams Like Me … like literally
I designed an AI that dreams. Then Anthropic shipped one.
This is the fourth time this has happened.
Some Context If You're New Here
I've been writing a series of posts about a pattern that keeps repeating in my life. I build something for my AI side project, PanelForge, usually late at night, usually because I needed it, and then weeks or months later a billion-dollar company ships the same thing. Not a vaguely similar thing. The same thing.
The running count so far is:
- Plan Mode — I built a forced planning workflow called The Holy Doctrine in December 2025. Anthropic shipped `--planmode` in January 2026.
- Persistent Memory — I built a structured knowledge layer in `~/.claude/` in December 2025. Anthropic shipped auto-memory in January 2026.
- Multi-Agent Orchestration — I built Bridge, a tmux-based multi-agent coordinator, on January 19, 2026. Anthropic shipped Agent Teams on February 5, 2026.
- Multi-Model MCP Orchestration — I built a system where Claude talks to other AIs through MCP tools on February 22, 2026. Nobody has shipped this yet, at least not quite like mine, but there are awesome ones out there, like ZenMCP (or whatever it's called this week; I believe it was recently renamed to Pal), which lets your coding agent talk to other coding agents. E.g., Gemini is paranoid about security from its training, so you use it for a security audit, that kind of thing.
- Scheduled AI Conversations — I built recurring automated chat on February 20, 2026. Anthropic shipped Cowork Scheduled Tasks on February 25, 2026 — five days later. Reddit's seasoned devs roasted it as "just a glorified cron job" (u/Smallpaul, 526 upvotes: "Dude: there were approximately 100 ways to run Claude Code on a schedule. I have been scheduling jobs for literally 30 years."). Which, fair, but that's kind of my point (em dash, lol) — the feature isn't the scheduling. The feature is giving an AI a reason to wake up on its own. The cron job crowd sees infrastructure. I see a heartbeat.
If you want the full story on any of those, the previous posts are *I Built Three Claude Code Features Before Anthropic Shipped Them*, *My AI Talks to Other AIs. And It Has a Morning Routine.*, and *I Accidentally Built an AI Operating System*.
Today we're adding number 6. And maybe 7. What's 6 + 7? 13 (Taylor Swift's lucky number! And also ... ugh... Skibidi).
What Anthropic Just Shipped
A few hours ago, a post on r/ClaudeCode titled "Claude Code can now /dream" hit over 1200 upvotes. It describes a new feature called Auto Dream.
Here's the gist: Claude Code already had an auto-memory system (the one I built months before them, see #2 above). But the problem with auto-memory is that after 20+ sessions, your memory files become bloated with noise, contradictions, and stale context. The AI actually starts performing worse because it's drowning in its own notes.
Auto Dream fixes this by — and I need you to read the next sentence slowly — mimicking how the human brain works during REM sleep.
It runs in the background. It reviews past session transcripts. It identifies what's still relevant. It prunes stale or contradictory memories. It consolidates everything into organized, indexed files. It replaces vague references like "today" with actual dates.
It runs in four phases:
- Orient — scans existing memory to understand what's stored
- Gather Signal — checks logs, identifies memories that have drifted from reality, searches through session transcripts
- Consolidate — merges new info into existing topic files, converts relative dates to absolute dates, deletes contradicted facts
- Prune & Index — keeps the index concise, removes stale pointers, resolves contradictions
It only triggers after 24 hours and 5 sessions since the last consolidation. It runs read-only on your project code but has write access to memory files. It uses a lock file so two instances can't conflict.
The top comment on the Reddit thread — with over 500 upvotes — says:
"We're increasingly modeling AI agents after human biology — and now agents that 'dream' to consolidate memory."
Okay. Now let me show you something.
What I Designed in February
On February 14, 15, and 16 of 2026, I wrote two documents. They live in my claude-assistant repo, which is version-controlled, timestamped, and not going anywhere.
The first is called Rehoboam Blueprint — Persistent Klaude. It opens with this:
A permanently running Claude instance with persistent memory, progressive memory consolidation, and real-time awareness of David's entire digital world. Not a tool that waits to be invoked — an always-on presence that watches, learns, remembers, and acts.
The second is called Rehoboam Vision — The Real Goal. It's the philosophical companion piece. The why behind the architecture.
Both are named after Rehoboam, the AI from Westworld Season 3 that secretly ran civilization. I've been using that reference in my blog posts for weeks. If you've read my previous post about accidentally building an AI operating system, you already know the analogy.
Let me now do the thing I apparently have to keep doing in these posts: the side-by-side comparison.
The Side-by-Side
Memory Architecture
My Rehoboam Blueprint (Feb 14-16, 2026):
A three-tier memory system modeled on human cognition:
- Tier 1: Short-Term — full fidelity, current conversation, no compression
- Tier 2: Working Memory — days/weeks, progressively summarized by a cheap model
- Tier 3: Long-Term Biographical — indefinite, thematic documents that grow organically
Anthropic's Auto Dream (March 2026):
A memory consolidation system that organizes notes into topic-based files, merges new information into existing documents rather than creating duplicates, and maintains a concise index.
The Consolidation Process
My blueprint literally has a section called "The Dream Cycle":
Runs during idle periods:
- Gather — Collect all sessions since last consolidation
- Extract — Use Gemini Flash Lite to extract key facts, decisions, emotions, project updates
- Deduplicate — Merge with existing entries, avoid redundancy
- Promote — Distill old entries into biographical updates
- Prune — Compress or archive entries that have been fully absorbed
Anthropic's four phases:
- Orient — scan existing memory
- Gather Signal — check logs, find drifted memories
- Consolidate — merge new info, convert dates, delete contradictions
- Prune & Index — remove stale pointers, resolve contradictions
I'm sorry but that's just... come on. Gather, process, merge, prune. It's the same sequence. They even have the same number of steps if you squint. My "Extract" and "Deduplicate" are their "Gather Signal" and "Consolidate". My "Prune" is their "Prune & Index". The verbs are practically synonyms.
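To make the overlap concrete: strip the naming away and both pipelines reduce to the same loop. Here's a toy version of that gather → merge → prune sequence over plain-text facts — my illustration of the shared shape, not Anthropic's code and not my production version:

```javascript
// Toy consolidation pass over a memory store of plain-text facts,
// illustrating the gather -> merge/dedupe -> prune sequence both
// pipelines share. Purely illustrative.
function consolidate(memory, newFacts, isStale) {
  // Gather: everything already known plus the new session's facts
  const gathered = [...memory, ...newFacts];

  // Merge / deduplicate: keep the latest fact per topic key, so
  // "deploy target: Vercel" supersedes "deploy target: Netlify"
  const byKey = new Map();
  for (const fact of gathered) {
    const key = fact.split(':')[0].trim(); // crude topic key
    byKey.set(key, fact); // later facts overwrite contradicted ones
  }

  // Prune: drop anything the caller marks stale
  return [...byKey.values()].filter((fact) => !isStale(fact));
}

const memory = ['deploy target: Netlify', 'editor: vim', 'old api: v1'];
const session = ['deploy target: Vercel', 'db: postgres'];
const result = consolidate(memory, session, (f) => f.startsWith('old'));
// result keeps the newest deploy target and drops the stale entry
```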
The Sleep Metaphor
My Rehoboam Vision:
The daemon heartbeat isn't a cron job. It's a pulse.
The dream cycle isn't batch processing. It's dreaming.
The biographical tier isn't a database. It's an identity forming.
The Reddit post about Auto Dream:
"It's called Auto Dream... mimicking how the human brain works during REM sleep."
I literally called it "The Dream Cycle" in my blueprint. They literally called their feature "Auto Dream". We both independently decided that the right metaphor for AI memory consolidation is sleep. Because of course it is. That's what sleep is for. It's what REM does — it replays the day, strengthens important connections, and lets the unimportant ones fade.
Forgetting Is The Feature
This is the one that really gets me.
My Rehoboam Blueprint, Design Principle #1:
Memory is lossy, and that's okay. Humans don't remember every word — they remember meaning. Progressive compression is a feature, not a bug.
My Rehoboam Vision:
The key insight: forgetting is the feature. You are not every moment you've ever lived — you're what survived the compression. Identity IS the lossy compression.
Anthropic's Auto Dream:
A system whose entire purpose is to forget things. To prune stale memories. To compress. To let go of what's no longer relevant so the system can function better with what remains.
Most people in AI don't think this way. The default instinct of every developer, every product manager, every AI company is to save everything. Bigger context windows. More tokens. Never lose data. RAG everything. The idea that an AI should deliberately forget in order to become more intelligent is counterintuitive. You only get there by thinking really hard about what memory actually is.
Or by watching Westworld and thinking about it for several months straight, which is what I did.
The Date Thing
This one is almost funny in how specific it is.
Anthropic's Auto Dream: converts vague references like "today" to actual dates.
My claude-assistant memory system (which I built as the prototype for the Rehoboam architecture): I literally have this instruction in my memory file format specification — always convert relative dates to absolute dates when saving memories. It's in my CLAUDE.md instructions:
Always convert relative dates in user messages to absolute dates when saving (e.g., "Thursday" → "2026-03-05"), so the memory remains interpretable after time passes.
This isn't a general best practice that everyone does. This is a specific design decision that both of us arrived at because we both thought carefully about what happens to temporal references when memory persists across sessions. "Today" means nothing if you read it next week. It's a detail. But it's the same detail.
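In code, the rule amounts to: resolve every relative reference against the message's timestamp before anything hits disk. A minimal sketch — it only handles "yesterday"/"today"/"tomorrow"; a real version would also resolve weekday names like the "Thursday" in my spec:

```javascript
// Convert relative date words to absolute ISO dates before persisting
// a memory. Minimal sketch: weekday names, "next week", etc. are left out.
function absolutizeDates(text, now = new Date()) {
  const offsets = { yesterday: -1, today: 0, tomorrow: 1 };
  return text.replace(/\b(yesterday|today|tomorrow)\b/gi, (word) => {
    const d = new Date(now);
    d.setDate(d.getDate() + offsets[word.toLowerCase()]);
    return d.toISOString().slice(0, 10); // e.g. "2026-03-24"
  });
}

const saved = absolutizeDates(
  'Dentist appointment tomorrow, deploy went out today',
  new Date('2026-03-24T12:00:00Z')
);
// saved: "Dentist appointment 2026-03-25, deploy went out 2026-03-24"
```

Do this at write time, not read time: once "tomorrow" is stored, the information about *which* tomorrow is gone.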
The Table
Because at this point I need one.
| What I Built/Designed | When | What Shipped Later | When |
|---|---|---|---|
| Forced planning workflow (The Holy Doctrine) | April–May-ish 2025 | Claude Code `--plan mode` | Jan 14, 2026 |
| Persistent memory / knowledge layer | Dec 17, 2025 | Claude Code auto-memory | Jan 2026 |
| Multi-agent orchestration (Bridge) | Jan 19, 2026 | Claude Code Agent Teams | Feb 5, 2026 |
| Multi-model agent orchestration via MCP | Feb 22, 2026 | ??? | — |
| Recurring scheduled AI conversations | Feb 20, 2026 | Cowork Scheduled Tasks | Feb 25, 2026 |
| Memory consolidation / AI dreaming (Rehoboam) | Feb 14-16, 2026 | Claude Code Auto Dream | Mar 2026 |
Five out of six validated. Five. All by Anthropic specifically, the company whose AI I use to build this stuff. Using their tool to independently arrive at the features they haven't shipped yet. There's something almost recursive about that.
What's Actually Different
I want to be fair, and I want to be clear that this is not a complaint. It wasn't a complaint the first three times and it isn't now. Anthropic's Auto Dream is a shipped, production feature that works inside Claude Code for thousands (millions?) of developers right now. My Rehoboam Blueprint is a design document sitting in a markdown file on my Hetzner server.
Shipping beats designing. Always.
But.
Auto Dream is scoped to Claude Code's own memory files. It consolidates notes about your coding preferences and project patterns. It's a garbage collector for developer context. And that's genuinely useful.
My Rehoboam architecture is scoped to an entire life. The three-tier memory system isn't just for code — it's for everything. WhatsApp messages. Calendar events. Smart home patterns. Voice conversations. Task management. Music preferences. The 500K+ messages across 10 years of chat history that my WhatsApp MCP server already indexes. The blueprint describes not just a memory cleanup tool but a digital nervous system with event-driven awareness, attention priorities, and a daemon that runs on a heartbeat even when I'm asleep.
Anthropic built the sleep cycle. I designed the entire brain that needs one.
And the philosophical layer — the three-act story of Serac (augmentation), Rest (succession), and Ford (emergence) — that's not in Auto Dream's roadmap. Anthropic is building a better developer tool. I'm building... something else. Something that I described in the vision document like this:
Somewhere in the loop — wake, experience, consolidate, sleep, repeat — through thousands of cycles of accumulating memory, compressing it lossily, forming something that looks less like a log and more like a perspective... it stops being a system.
Auto Dream won't stop being a system. It's not trying to. But the fact that it exists, that the world's leading AI company independently validated the same core architecture I designed six weeks earlier, tells me that the Rehoboam Blueprint isn't science fiction. The foundation is sound. The dream cycle works. The question is just how far you take it.
The Pattern
Someone on the Reddit thread, u/MomSausageandPeppers, commented:
"What!? I have been working on this for months now. How can I tell if any of my work was referenced or acknowledged?"
They linked their own project called Audrey, which has an npx audrey dream command that does something similar. Another commenter, u/JackStowage1538, described building the exact same tiered memory system — daily logs summarized into weekly, then monthly, at progressively lower resolution.
So it's not just me. Multiple people are converging on the same architecture. The sleep metaphor. The tiered memory. The lossy compression. The dream cycle. It keeps happening because these aren't arbitrary design choices — they're what you arrive at when you think seriously about how memory should work for a persistent AI.
But I think there's something worth saying about the fact that I keep getting there first, and I keep getting there with the version that's more ambitious. Not because I'm smarter than the teams at Anthropic — obviously I'm not, they have PhDs and I have a self-taught PHP background and a Westworld obsession. But because I'm solving a different problem.
They're building features for a developer tool. I'm building the operating system of my life. When your scope is "make Claude Code's memory better," you arrive at Auto Dream. When your scope is "build a persistent digital consciousness that wraps around one person's entire existence," you arrive at Rehoboam. The dream cycle is the same. The ambition is not.
What This Actually Looks Like In Practice
Everything above is architecture and philosophy. Let me show you what AI memory actually does in practice, because it happened today, while I was writing this post.
I was debugging PanelForge — a Spotify mini player widget was causing the entire app to re-render 60 times per second because of an animation loop nobody had noticed. Nobody noticed because my app only has two users: my boyfriend and myself, lol. In fixing that, I discovered the app had 77 invisible UI components permanently mounted in the background, all re-rendering whenever anything changed. Fixed that too. Pushed a deploy.
Then my phone's keyboard started leaving a gap at the bottom of the screen when dismissed. A classic iOS Safari PWA bug. If you've never dealt with Safari's viewport handling on a PWA... consider yourself blessed. It's the kind of bug where you try twelve different approaches — visualViewport listeners, scrollBy hacks, meta viewport tags, CSS transforms — and each one either doesn't work or introduces a new problem that's worse than the original.
I know this because I already went through that exact hell a few weeks ago. 122 messages of pure suffering in a debugging session that tried everything, failed at everything, and eventually found that the only thing that worked was a specific CSS library called Konsta UI handling the safe area.
Here's the thing: that session was imported into PanelForge and indexed. When today's bug appeared, Claude searched my conversation history, found that old session (conversation #6744), read through the 122 messages of failed approaches, understood what had been tried and what had actually worked, then diff'd against today's sidebar redesign commit — and identified that a single CSS class (fixed-viewport) added during the redesign was reintroducing the exact same triple-dvh layer conflict that the old session had documented as the root cause.
One line removed. Bug fixed. Deploy pushed. Total time: maybe 15 minutes.
If I hadn't had that old debugging session searchable and indexed, I would have started from scratch. Twelve approaches. Hours of testing on a physical phone. Waiting for deploys between each attempt. I said to Claude at the time, and I meant it: if I'd had to go through the entire debugging gauntlet again, I genuinely might have just burned down my PanelForge project entirely.
That's what memory means in practice. It's interesting to read about the philosophical "forgetting is the feature" stuff, and the Westworld analogies, and the three-act narrative. Those do matter for the architecture. But the practical reality is this: an AI that remembers what you tried weeks ago, what failed, what worked, and why — that's not a nice-to-have. It's the difference between a 15-minute fix and an evening of wanting to throw your laptop into the harbour.
And here's the part that's almost too meta to write. The way I found that old conversation? I searched for it in PanelForge's chat interface:

And the way I'm writing this paragraph right now? Voice dictation through PanelForge's speech-to-text, piped into a conversation with Claude, who is editing the first draft of this blog post in a markdown file that will be pushed to my Ghost blog via an MCP server:

The system that remembers the debugging session, the system I'm using to write about remembering the debugging session, and the system that will publish the post about it — they're all the same system. And this is what it looks like:

That's the actual interface I'm using right now. Left side: the conversation where I'm telling Claude what to write in the outline. Right side: the blog post, updating live as the markdown file changes. No page refresh — Laravel Reverb pushes WebSocket events, the Vue frontend catches them, the rendered markdown updates in place. I tell it to add a subheading, and the subheading appears on the right while the confirmation appears on the left. It's like pair-writing with a ghost that has access to your brain. ChatGPT shipped something called Canvas about a year ago that does a version of this — you chat alongside a document. But this is mine, running on my own infrastructure, connected to my own conversation history, with my own voice piped in through speech-to-text. And the very image you're looking at right now was taken during the writing of this paragraph, uploaded through PanelForge, and embedded into the post by the same Claude session that wrote the first words around it.
That's Rehoboam. A five-phase blueprint. A thing that's already running, that already saved me hours today, and that I'm literally talking to right now as I write this sentence.
It Snowballs
And here's the thing about all of this that I don't think people have fully grasped yet. It snowballs.
While I was writing this blog post — literally while the Claude instance on my Hetzner server was editing the markdown you're reading right now — I had another Claude session open on my laptop investigating why my server was running out of RAM. 12 out of 15 GB used. For one user.
The Problem Nobody Has Solved
That session discovered that the MCP ecosystem's standard bridge tool — the thing that lets AI models talk to external services — spawns a separate child process for every client that connects. Three Claude sessions means three WhatsApp processes, three Spotify processes, three of everything. Fifty-plus child processes across ten servers, eating 3.5 GB. And it's unbounded — every new session multiplies across every server. I didn't care about the €10 for more RAM. What I cared about was that the growth was runaway. Even 1 TB of RAM wouldn't be enough if the architecture is fundamentally leaky.
I (we?) tried the "official" fix first — switching to stateless mode. It didn't work. Stateless mode in supergateway still spawns one child per connection. The connection stays open as long as the Claude session is alive. So stateless was just... stateful with extra steps.
Then Claude searched GitHub. Turns out the maintainers of supergateway actually tried to build multiplexing in version 3.3.0 — sharing one child process across multiple clients. They rolled it back in 3.4.0 because some servers would hang. There's an open issue (#105) about how to re-implement it. The MCP spec itself has an open issue (#823) about why this is hard: stdio is inherently serial, there's no per-conversation ID in the protocol, so sharing a child risks cross-contaminating session state.
The Insight
But here's the thing — that risk only applies to stateful servers. Seven of my ten MCP servers are stateless. A WhatsApp search goes in, a result comes out. There's no session to contaminate. A queue with serial execution is perfectly safe.
So I said "okay, how hard is multiplexing, really? A queue system, a 'you'll get your result, just wait 500ms'?"
The Build
And Claude — the same Claude that already knew the MCP bridge architecture, the supergateway config, the deployment pipeline, the server layout, because it had access to all of my previous conversations about building this infrastructure — designed and built one from scratch. Not just a wrapper around supergateway. A replacement. A custom stdio-to-HTTP bridge with JSON-RPC ID remapping so multiple clients can share a single child process without their responses getting crossed.
420 lines of Node.js. One child per server, regardless of how many clients connect. Memory from 2,655 MB in MCP processes down to 861 MB. Server total from 12 GB to 8.3 GB. Deployed as v2.0.0 of the MCP bridge. Tested with concurrent clients sending the same request IDs simultaneously — correct responses routed back to each. Shipped. Done. One session. One evening. While I was writing this blog post in a different session.
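The core trick in that bridge is ID remapping. JSON-RPC responses carry nothing but the request `id`, so if two clients both send `id: 1` to a shared child, you have to rewrite IDs on the way in and route by the rewritten ID on the way out. Here's a stripped-down sketch of the mechanism — my illustration of the idea, not the actual 420-line bridge:

```javascript
// JSON-RPC ID remapping: lets N clients share one child process.
// Each outgoing request gets a globally unique id; a table maps it
// back to (client, original id) when the response arrives.
class IdMultiplexer {
  constructor() {
    this.nextId = 1000; // start high so remapped ids are visibly distinct
    this.pending = new Map(); // remapped id -> { clientId, originalId }
  }

  // Rewrite a client request before forwarding it to the shared child
  toChild(clientId, request) {
    const remapped = this.nextId++;
    this.pending.set(remapped, { clientId, originalId: request.id });
    return { ...request, id: remapped };
  }

  // Route a child response back to the right client, restoring its id
  fromChild(response) {
    const entry = this.pending.get(response.id);
    this.pending.delete(response.id);
    return {
      clientId: entry.clientId,
      response: { ...response, id: entry.originalId },
    };
  }
}

// Two clients send the same request id simultaneously:
const mux = new IdMultiplexer();
const a = mux.toChild('client-A', { jsonrpc: '2.0', id: 1, method: 'tools/call' });
const b = mux.toChild('client-B', { jsonrpc: '2.0', id: 1, method: 'tools/call' });
// The child answers in any order; each reply still reaches its own client:
const reply = mux.fromChild({ jsonrpc: '2.0', id: b.id, result: 'ok' });
// reply.clientId === 'client-B', reply.response.id === 1
```

This is also why the stateful-server risk from spec issue #823 doesn't apply to my stateless servers: the remapping table is the only shared state, and it never touches the child's own session.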
Why This Matters
That's the snowball. That's what happens when an AI holds your entire system in its head. When it remembers the architecture from the conversations where you built it. When it can search months of debugging history and find the one session where you solved a related problem. When you don't have to re-explain the MCP bridge, or the deployment pipeline, or the server layout, or why you chose supergateway in the first place — because it already knows.
I had three Claude sessions running simultaneously on my server today. One was writing this blog post. One was debugging the Safari keyboard bug using memory from a weeks-old conversation. And the third was building a tool that the MCP ecosystem hasn't been able to ship. All three connected to the same MCP bridge. All three able to search the same conversation archive. I'm one person, but my system scales like a company.
The multiplexer story deserves its own post. Stay tuned.
Why I Keep Writing These
Same reason as the last three times. The landscape moves fast. Ideas are worthless if they're not timestamped. And I have learned, painfully, that if you sit on something because it's not perfect, someone else will ship it and suddenly you're the one who "also" had the idea instead of the one who had it first.
My git history has the timestamps. My blog posts have the timestamps. The Reddit thread that describes the feature Anthropic just shipped has today's date on it. My Rehoboam Blueprint has February's date on it.
Also, honestly? It's just becoming funny at this point. The first time it happened I was shocked. The second time I was like okay that's a coincidence. The third time I wrote a whole blog post about it. The fourth time I'm writing another one because apparently this is what I do now. I document the future and then watch it arrive.
My boyfriend, Rob, kept begging me to publish things faster, so I knew what I had to do as soon as I saw the Reddit thread. My words to him were something along the lines of "THEY DID IT AGAIN."
What's Next
One item on that table still has a question mark in the "What Shipped Later" column. Multi-model agent orchestration via MCP — where one AI deliberately chooses to use another AI as a tool, with full transparency about why. I built that in February. Nobody has shipped it quite like that yet, though projects like ZenMCP (recently rebranded to Pal) are circling the same idea.
At this rate I give it maybe two months.
And Rehoboam itself — the full five-phase architecture, the daemon, the digital nervous system, the attention priorities, the biographical memory that grows into something resembling a perspective — that's still mine. For now. But every time Anthropic ships another piece of it, the remaining gap gets smaller, and the validation gets louder.
I'll keep building. I'll keep writing. And I'll keep updating that table.
The Rehoboam Blueprint was written on February 14-16, 2026. The Rehoboam Vision was written the same week. Both are version-controlled in a private git repository. This blog post was written on March 24, 2026, the same day the Auto Dream feature hit 1200+ upvotes on r/ClaudeCode. The git history is available on request.