Stash: Persistent Memory for Your AI. No More Repeating Yourself.
Stash — Your AI has amnesia. We fixed it.
Open Source · MCP Native · PostgreSQL + pgvector

Stash makes your AI remember you. Every session. Forever. No more explaining yourself from scratch.

28 MCP tools · 6 pipeline stages · ∞ agents supported
Sound familiar?

😫 Without Stash
You: Hey, I'm building a SaaS for restaurants. Can you help?
AI: Of course! Tell me about your project.
You: We talked about this last week... I already explained everything.
AI: I'm sorry, I don't have access to previous conversations.
You: ...again? 🔁
You just wasted 10 minutes re-explaining yourself. Again.

😌 With Stash
You: Hey, continuing work on my project.
AI: Welcome back! Last time we finalized the pricing model for your restaurant SaaS. You were about to work on the onboarding flow. Want to pick up there?
You: Yes! Exactly that.
AI: Great. You also mentioned you wanted to avoid Stripe's complexity — I have that noted. Here's where we left off...
✓ Picked up instantly. Zero repetition. Full context.
New session: ❌ "Who are you again?" → ✓ Picks up where you left off
Your preferences: ❌ Re-explain every time → ✓ Already knows them
Past mistakes: ❌ Repeats the same errors → ✓ Remembers what didn't work
Long projects: ❌ Loses track of goals → ✓ Tracks goals across weeks
Token cost: ❌ Grows every session → ✓ Only recalls what matters
Switching models: ❌ Start from zero again → ✓ Memory is model-agnostic
What is Stash Not just memory. A second brain. Stash is a persistent cognitive layer that sits between your AI agent and the world. It doesn't replace your model — it makes your model continuous. Episodes become facts. Facts become patterns. Patterns become wisdom. "Your AI is the brain. Stash is the life experience."
your agent: Claude, GPT, local model, anything
episodes: raw observations, append-only
facts: synthesized beliefs with confidence
relationships: entity knowledge graph
patterns: higher-order abstractions
goals · failures · hypotheses: intent, learning, uncertainty
postgres + pgvector: battle-tested infrastructure
Namespaces Memory organized like folders. Not all memory is equal. What your agent learns about you is different from what it learns about a project, which is different from what it knows about itself. Namespaces let the agent organize what it learns into clean, separate buckets — just like folders on your computer. Each namespace is a path. Paths are hierarchical. Reading from /projects automatically includes everything under /projects/stash, /projects/cartona, and so on. You never have to think about it — the agent does. 📁 Write to one namespace. Read from any subtree.
example namespace structure

📁 / · everything
  📁 /users/alice · who alice is, her preferences
  📁 /projects · all projects
    📁 /projects/restaurant-saas · pricing, features, decisions
    📁 /projects/mobile-app · design, tech stack, goals
  📁 /self · agent self-knowledge
    📄 /self/capabilities · what I do well
    📄 /self/limits · what I struggle with
    📄 /self/preferences · how I work best
🔍 Recursive reads Recall from /projects and get everything across all sub-projects automatically. ✏️ Precise writes Remember always targets one exact namespace — no accidental cross-contamination. 🔒 Clean separation User memory never mixes with project memory. Agent self-knowledge stays in /self.
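As a rough sketch (not Stash's actual implementation), hierarchical namespaces behave like path-prefix matching: a recall from a root includes that namespace and every descendant, while a write always targets exactly one path. The helper and sample data below are hypothetical, for illustration only:

```python
def in_subtree(namespace: str, root: str) -> bool:
    """True if `namespace` is `root` itself or any descendant of it."""
    if root == "/":
        return True
    return namespace == root or namespace.startswith(root + "/")

# Toy in-memory store standing in for Stash's Postgres backend.
memories = {
    "/users/alice": ["prefers dark mode"],
    "/projects/restaurant-saas": ["pricing finalized"],
    "/projects/mobile-app": ["uses Flutter"],
    "/self/limits": ["weak at date math"],
}

def recall(root: str) -> list[str]:
    """Recursive read: collect memories from every namespace under `root`."""
    return [m for ns, items in memories.items()
            if in_subtree(ns, root) for m in items]

print(recall("/projects"))
# → ['pricing finalized', 'uses Flutter']  (both sub-projects, nothing else)
```

Note that `/users/alice` and `/self/limits` never leak into a `/projects` read, which is the "clean separation" guarantee above.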
Stash vs RAG
RAG gives your AI a search engine. Stash gives it a life. You've probably heard of RAG — Retrieval Augmented Generation. It's clever. But it's not memory. Here's the difference, in plain English.
📚 RAG · "A very fast librarian"

You give it a pile of documents. When you ask a question, it searches those documents and hands you the relevant pages. That's it. It doesn't remember your conversation. It doesn't learn. It doesn't know you. Every question starts from scratch — it's just a smarter search engine over files you already wrote.

✗ Only knows what's in your documents
✗ Cannot learn from conversations
✗ Cannot track goals or intentions
✗ Cannot reason about cause and effect
✗ Cannot notice contradictions over time
✗ Stateless — no continuity whatsoever
✗ You must write the knowledge first
VS
🧠 Stash · "A mind that grows"

Stash learns from everything your agent experiences — conversations, decisions, successes, failures. It synthesizes raw observations into facts, connects facts into a knowledge graph, detects contradictions, tracks goals, and builds an understanding of you that deepens over time. You don't write anything. It figures it out.

✓ Learns from every conversation automatically
✓ Builds a knowledge graph over time
✓ Tracks your goals across weeks and months
✓ Reasons about cause and effect
✓ Self-corrects when beliefs contradict
✓ Continuous — picks up exactly where you left off
✓ Creates knowledge — you don't have to
📚 RAG is like... a brilliant intern who reads your files perfectly — but forgets everything the moment they leave the room.

🧠 Stash is like... a colleague who was there from day one, remembers every decision you ever made, and gets more valuable every single week.

Can you use both? Yes — RAG is great for searching documents. Stash is for remembering experience. They solve different problems. Stash just goes much, much further.
Why Stash is Different
Everyone gave AI a notepad. We gave it a mind. Claude.ai has memory. ChatGPT has memory. They only work for themselves — locked to one platform, one model, one company. Stash works for everyone, everywhere, forever. And it goes far deeper than any of them.
                                   Claude.ai   ChatGPT   Stash
Remembers you                          ✓          ✓        ✓
Works with any AI model                ✗          ✗        ✓
Works with local / private models      ✗          ✗        ✓
You own your data                      ✗          ✗        ✓
Open source                            ✗          ✗        ✓
Background consolidation               ✗          ✗        ✓
Goals & intent tracking                ✗          ✗        ✓
Learns from failures                   ✗          ✗        ✓
Causal reasoning                       ✗          ✗        ✓
Agent self-model                       ✗          ✗        ✓
What it gives your AI              A notepad  A notepad  A mind
The Problem
🧠 Brilliant brain, no experience AI models reason brilliantly but remember nothing. Every session you re-explain who you are, what you need, and what you've already tried. You're training the same student every single day. 💸 Context windows are expensive The workaround is stuffing full conversation history into every prompt. It's slow, expensive, and you still hit the limit. You're paying for tokens that repeat the same facts over and over. 🔄 Agents repeat their mistakes Your agent tried something, it failed, and next session it tries the exact same thing again. There's no mechanism to carry lessons forward. Every failure is forgotten. 🔒 Memory is a platform privilege Only a handful of AI platforms offer memory — and only for their own models. Your custom agent, your local LLM, your Cursor setup? They all start blind. Memory shouldn't be a premium feature.
Express Setup Up and running in 3 commands. No infrastructure to set up. No dependencies to install manually. Docker Compose handles everything — Postgres, pgvector, Stash, all wired together and ready.
1 Clone the repo 2 Copy .env.example → .env and set your API key + model preferences 3 Run docker compose up — that's it. Stash is live.
terminal
$ git clone https://github.com/alash3al/stash
$ cd stash
$ cp .env.example .env
# edit .env with your API key, models and STASH_VECTOR_DIM
$ docker compose up
✓ postgres + pgvector ready
✓ stash migrations applied
✓ mcp server listening
✓ consolidation running in background

⚠️ Set STASH_VECTOR_DIM in your .env before first run. It cannot be changed after initialization.
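For orientation, a `.env` might look roughly like the sketch below. Only `STASH_VECTOR_DIM` is named in this page; the other variable names are placeholders — check `.env.example` in the repo for the real keys:

```shell
# Hypothetical sketch — variable names other than STASH_VECTOR_DIM are
# placeholders; your .env.example defines the actual keys.
LLM_API_KEY=sk-...        # your provider API key (placeholder name)
LLM_MODEL=...             # model used for consolidation (placeholder name)
STASH_VECTOR_DIM=1536     # must match your embedding model; fixed after init
```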
01 📝 Episodes Raw observations stored as they happen
02 💡 Facts Clustered episodes synthesized by LLM
03 🕸️ Relationships Entity edges extracted from facts
04 🔗 Causal Links Cause-effect pairs between facts
05 🌀 Patterns Abstract higher-order insights
06 ⚖️ Contradictions Self-correction and confidence decay
NEW 07 🎯 Goal Inference Facts automatically tracked against active goals. Progress detected, contradictions surfaced.
NEW 08 💥 Failure Patterns Detect repeated mistakes. Extract failure patterns as new facts. The agent stops repeating itself.
NEW 09 🔬 Hypothesis Scan New evidence passively confirms or rejects open hypotheses. No manual intervention needed.
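To make the contradiction stage (06) concrete, here is a minimal sketch under an assumed decay rule — the real pipeline's detection and scoring are LLM-driven and more involved. The idea: when two facts conflict, the weaker belief is decayed rather than deleted, so the agent self-corrects without losing history:

```python
from dataclasses import dataclass

@dataclass
class Fact:
    text: str
    confidence: float  # 0.0 – 1.0

DECAY = 0.5  # hypothetical decay factor applied to the losing belief

def resolve_contradiction(a: Fact, b: Fact) -> tuple[Fact, Fact]:
    """Keep the stronger belief; decay confidence in the weaker one."""
    winner, loser = (a, b) if a.confidence >= b.confidence else (b, a)
    loser.confidence *= DECAY
    return winner, loser

old = Fact("user wants to use Stripe", confidence=0.6)
new = Fact("user wants to avoid Stripe's complexity", confidence=0.9)
kept, decayed = resolve_contradiction(old, new)
print(kept.text, decayed.confidence)  # the newer, stronger belief wins; old one decays to 0.3
```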
MCP Integration Two commands. Any agent. Stash speaks MCP natively. Drop it into Claude Desktop, Cursor, or any MCP-compatible agent in under 5 minutes. No SDK. No vendor lock-in. Your agent remembers you everywhere. 28 tools covering the full cognitive stack — from raw remember and recall all the way to causal chains, contradiction resolution, and hypothesis management. Claude Desktop Cursor OpenCode Custom Agents Local LLMs Any MCP Client
stash · mcp stdio
$ ./stash mcp execute --with-consolidation
$ ./stash mcp serve --port 8080 --with-consolidation
✓ remember · recall · forget · init
✓ goals · failures · hypotheses
✓ consolidate · query_facts · relationships
✓ causal links · contradictions
✓ namespaces · context · self-model
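As a sketch, wiring the stdio command above into Claude Desktop's `claude_desktop_config.json` could look like this (the binary path is a placeholder for wherever you built or installed `stash`):

```json
{
  "mcpServers": {
    "stash": {
      "command": "/path/to/stash",
      "args": ["mcp", "execute", "--with-consolidation"]
    }
  }
}
```

Other MCP clients use the same command-plus-args shape, which is why no SDK is needed.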
Agent Self-Model Your agent can know itself. Call init and Stash creates a /self namespace scaffold. The agent uses its own memory layer to build and maintain a model of its own capabilities, limits, and preferences.
/self/capabilities What I can do well The agent remembers where it excels and recalls them when planning how to approach a task.
/self/limits What I struggle with Recorded failures and known weaknesses. The anti-repeat mechanism. Never make the same mistake twice.
/self/preferences How I work best Learned preferences for how to operate. The agent develops a working style over time, not just facts.
Autonomous Loop An agent that never stops learning. Give your agent a 5-minute research loop. It orients from past memory, researches a topic it chooses itself, invents new connections, consolidates what it learned, and closes gracefully — ready to pick up next time. Run it as a cron job. Every 5 minutes, your agent gets smarter. → See the loop prompt
01 Orient Recall context, active goals, open hypotheses, past failures
02 Research Search the web on a topic the agent chooses itself
03 Think Surface tensions, gaps, contradictions in what it now knows
04 Invent Generate something new — a hypothesis, pattern, or discovery
05 Consolidate Run the pipeline. Synthesize raw episodes into structured knowledge
06 Reflect + Sleep Write a session summary. Set context for next run. Stop.
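Scheduling the loop as a cron job could be sketched like this — `run-loop.sh` is a hypothetical wrapper that feeds the loop prompt to your agent; Stash itself doesn't ship one:

```shell
# crontab -e — run the research loop every 5 minutes.
# run-loop.sh is a hypothetical wrapper script around your agent + the loop prompt.
*/5 * * * * /path/to/run-loop.sh >> /var/log/stash-loop.log 2>&1
```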