Memtrace
Multi-tenant memory layer for production AI agents, backed by Arc time-series DB.
No embeddings, no vector DB. Fast, temporal memory that any LLM can consume as plain text. One deployment routes many orgs to their own Arc instances, with per-org API keys encrypted at rest.
Self-hosting is free forever. Memtrace Cloud, a managed multi-tenant service with per-org Arc routing, is in development.
v0.2.0 ships multi-tenant Arc routing. One Memtrace deployment, many orgs, each pointed at its own Arc instance with encrypted-at-rest API keys. Read the release notes ->
Is Memtrace for me?
Memtrace is server-side and multi-tenant, built for teams running fleets of AI agents in production.
Many agents, one memory pool
Call centers, SDR teams, multi-agent pipelines that need shared org-scoped memory across handoffs.
Many tenants, one deployment
SaaS teams routing each customer org to its own Arc instance, with per-org API keys encrypted at rest.
Long-running agents
Autonomous workers that run for hours or days and need durable, time-windowed recall.
Time-series queries
"What happened in the last 2 hours?" is a first-class operation, not a vector-similarity hack.
Memtrace is not a per-developer local memory store for your IDE. If you want a single-binary tool that lives in your laptop's .memtrace/, that's a different category. Memtrace is the server you'd point those products at if you wanted to share memory across an organization.
Real agents, real users
Memtrace is the memory layer behind these production deployments.
Memtrace powers cross-session agent memory across customer relationships, ticket histories, and decision logs.
Memtrace powers per-user training history, decision context, and long-running session recall for personalized coaching.
AI Agents Need Memory
Your agent runs for hours, makes decisions, encounters errors, learns what works. Then it restarts and forgets everything.
In-memory state
Dies on restart. No persistence, no queryability, no sharing between agents.
JSON files
Hard to query, grow unbounded, no time-based filtering. Great for config, terrible for memory.
SQL databases
Not optimized for temporal queries. "What happened in the last 2 hours?" shouldn't require a full table scan.
Vector databases
Overkill for operational memory. Embeddings add cost, latency, and complexity. You don't need semantic search for "what did I do last cycle?"
Agent memory is fundamentally temporal. Memtrace treats it that way — powered by Arc analytical storage.
Built for how agents actually work
Temporal memory primitives powered by Arc analytical storage.
Time-windowed queries
"What happened in the last 2 hours?" is a native query. Arc's analytical engine makes temporal lookups instant.
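To make the idea concrete, here is a minimal sketch of what a time-windowed recall does conceptually. This is not Memtrace's actual API; the in-memory list, field names, and `recall_window` helper are hypothetical stand-ins for the Arc-backed store.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical stand-in for Arc-backed storage: every memory carries a
# timestamp, so "last 2 hours" reduces to a simple range predicate.
memories = [
    {"content": "Deployed v2 to staging",
     "ts": datetime.now(timezone.utc) - timedelta(minutes=30)},
    {"content": "Rolled back migration",
     "ts": datetime.now(timezone.utc) - timedelta(hours=5)},
]

def recall_window(memories, hours):
    """Return memories recorded within the last `hours` hours."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=hours)
    return [m for m in memories if m["ts"] >= cutoff]

recent = recall_window(memories, 2)
```

In a real deployment the range predicate runs inside Arc's analytical engine over time-partitioned storage rather than over a Python list, which is what makes the lookup fast.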
Memory types
Episodic, Decision, Entity, and Session memories. Each with its own schema and query patterns.
Importance scoring
0-1 relevance scale on every memory. Filter by what matters most without re-processing.
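The filtering pattern this enables can be sketched in a few lines. The records and the `top_memories` helper below are illustrative assumptions, not Memtrace's schema; the point is that a stored 0-1 score lets you rank and cut without re-reading memory contents.

```python
# Hypothetical records: each memory carries a 0-1 importance score,
# so low-value entries can be dropped without re-processing content.
memories = [
    {"content": "User prefers weekly summaries", "importance": 0.9},
    {"content": "Heartbeat OK", "importance": 0.1},
    {"content": "Escalated ticket to tier 2", "importance": 0.7},
]

def top_memories(memories, threshold=0.5):
    """Keep memories at or above the threshold, highest first."""
    kept = [m for m in memories if m["importance"] >= threshold]
    return sorted(kept, key=lambda m: m["importance"], reverse=True)

ranked = top_memories(memories)
```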
Shared memory pools
Multiple agents collaborate through shared memory namespaces. No custom pub/sub needed.
Session context
LLM-ready markdown output. Drop memories straight into your prompt — no transformation layer.
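As a rough illustration of what "LLM-ready markdown" means in practice, the sketch below renders memories into a prompt-ready block. The output format and field names here are assumptions for illustration, not Memtrace's actual output schema.

```python
def render_context(memories):
    """Render memories as a markdown block ready to paste into a prompt.
    The layout is illustrative, not Memtrace's real output format."""
    lines = ["## Session context"]
    for m in memories:
        lines.append(f"- [{m['type']}] {m['content']}")
    return "\n".join(lines)

ctx = render_context([
    {"type": "decision", "content": "Chose retry-with-backoff for flaky API"},
    {"type": "episodic", "content": "Build failed twice on lint step"},
])
```

Because the payload is plain markdown, it can be concatenated directly into any model's prompt with no adapter or transformation layer in between.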
No embeddings
Plain text in, plain text out. Any LLM can consume Memtrace memories without adapters or SDKs.
Memtrace vs Vector Databases
Different problems need different tools. Operational memory isn't a search problem.
| Aspect | Memtrace | Vector DBs |
|---|---|---|
| Data model | Temporal events (Arc) | Embeddings |
| Query style | Time-windowed SQL | Semantic similarity |
| Storage | Parquet via Arc | Proprietary indexes |
| Latency | ~20-50ms | Variable (embedding + search) |
| Cost | Low (no embedding API) | High (embedding costs) |
| LLM format | Plain text (native) | Requires transformation |
| Setup | Memtrace + Arc | Complex pipeline |
Use cases
Any agent that runs more than once benefits from persistent memory.
Autonomous coding agents
Cursor, Devin, Aider — remember what files were changed, what errors occurred, and what approaches worked across sessions.
Customer support AI
Shared memory across agent handoffs. New agents pick up exactly where the last one left off.
DevOps monitoring agents
Incident memory that persists. Agent remembers past incidents, runbooks used, and resolution patterns.
Multi-agent collaboration
Shared memory pools let agents coordinate without custom messaging infrastructure.
Content & social media agents
Avoid repetition. Agent remembers what content was created, what performed well, and what topics were covered.
Sales & outreach agents
Prospect memory that builds over time. Track interactions, preferences, and follow-up history.
Pricing
Start free. Scale as your agents grow. All plans include Arc-powered storage.
Free
For individual agents and experimentation.
- 10K memories/month
- 1 agent
- 7-day retention
- All memory types
- Community support
Pro
For production agents that need longer memory.
- 1M memories/month
- 10 agents
- 90-day retention
- Importance scoring
- Email support
Team
For teams running multi-agent workflows.
- 10M memories/month
- 100 agents
- 1-year retention
- Shared memory pools
- Priority support
Enterprise
For organizations with custom requirements.
- Unlimited memories
- Unlimited agents
- Custom retention
- Self-hosted option
- SLA & SSO
Self-hosted Memtrace is free forever under the Apache 2.0 license. View on GitHub
Give your agents memory
Self-Hosted
Deploy Memtrace on your infrastructure with Arc for storage. Free forever under the Apache 2.0 license.
# Coming soon
docker compose up -d

Runs Memtrace + Arc together. Docker images coming soon.
View on GitHub ->
Memtrace Cloud
Managed memory infrastructure backed by Arc Cloud. No servers to manage, no backups to configure.
enterprise@basekick.net