Open-source memory engine for AI applications and agents.
Docker-deployable memory backend with durable context, semantic retrieval, and memory mutation (AUDN: Add, Update, Delete, No-op).
- Semantic ingest — extract structured facts from conversations with contradiction detection
- Hybrid retrieval — vector similarity + BM25/FTS with RRF fusion
- AUDN mutation — Add, Update, Delete, No-op decisions with fail-closed integrity
- Claim versioning — temporal lineage tracking with supersession and invalidation
- Tiered context packaging — L0/L1/L2 compression for token-efficient retrieval
- Entity graph — spreading activation over extracted entities
- Pluggable embeddings — `openai`, `openai-compatible`, `ollama`, `transformers` (local WASM)
- Docker-deployable — one-command deployment with Postgres + pgvector
- Not a benchmark suite — eval harnesses live in atomicmemory-research
- Not an SDK or client library — this is the server/backend
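The RRF step in hybrid retrieval can be sketched as follows. This is a minimal illustration of reciprocal rank fusion only — the engine's actual scoring constant, tie-breaking, and score normalization are not specified here:

```typescript
// Minimal reciprocal-rank-fusion sketch (illustrative; not the engine's code).
// Each input is a ranked list of document ids, best first.
function rrfFuse(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, rank) => {
      // Standard RRF: each list contributes 1 / (k + rank), rank starting at 1.
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}

// A doc ranked well in both lists beats one that is strong in only one list.
const fused = rrfFuse([
  ["doc1", "doc2", "doc3"], // vector-similarity ranking
  ["doc2", "doc3", "doc1"], // BM25/FTS ranking
]);
```

Because RRF works on ranks rather than raw scores, the vector and BM25 result lists can be fused without calibrating their score scales against each other.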
Run with Docker:

```shell
git clone https://github.com/atomicmemory/Atomicmemory-core.git
cd Atomicmemory-core
cp .env.example .env
# Edit .env with your OPENAI_API_KEY and DATABASE_URL
docker compose up --build
```

Or run locally for development:

```shell
npm install
cp .env.example .env
# Edit .env — requires a running Postgres instance with pgvector
npm run migrate
npm run dev
```

Health check: `curl http://localhost:3050/health`
| Method | Path | Description |
|---|---|---|
| GET | `/health` | Health check |
| POST | `/memories/ingest` | Full ingest with extraction and AUDN |
| POST | `/memories/ingest/quick` | Fast ingest (embedding dedup only) |
| POST | `/memories/search` | Semantic search with hybrid retrieval |
| POST | `/memories/search/fast` | Fast vector-only search |
| GET | `/memories/list` | List memories with optional filters |
| GET | `/memories/:id` | Get a single memory |
| DELETE | `/memories/:id` | Soft-delete a memory |
| POST | `/memories/consolidate` | Consolidate and compress memories |
See docs/api-reference.md for full endpoint documentation.
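As an example of calling the API, here is a hypothetical helper for `POST /memories/ingest`. The endpoint path and default port come from this README, but the request-body field names (`content`, `sessionId`) are illustrative assumptions — consult `docs/api-reference.md` for the real schema:

```typescript
// Hypothetical request builder for POST /memories/ingest.
// Field names are illustrative, not the documented schema.
interface IngestRequest {
  content: string;
  sessionId?: string;
}

function buildIngestRequest(baseUrl: string, req: IngestRequest) {
  return {
    url: `${baseUrl}/memories/ingest`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(req),
    },
  };
}

const { url, init } = buildIngestRequest("http://localhost:3050", {
  content: "The user prefers dark mode.",
  sessionId: "demo",
});
// Against a running server: await fetch(url, init)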
| Variable | Description |
|---|---|
| `DATABASE_URL` | Postgres connection string (must have the pgvector extension) |
| `OPENAI_API_KEY` | OpenAI API key (when using the `openai` embedding/LLM provider) |
| `PORT` | Server port (default: 3050) |
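A minimal `.env` sketch — all values below are placeholders, and `.env.example` remains the authoritative list of options:

```
DATABASE_URL=postgres://postgres:postgres@localhost:5432/atomicmemory
OPENAI_API_KEY=sk-...
PORT=3050
```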
Set `EMBEDDING_PROVIDER` to choose your embedding backend:

| Value | Description |
|---|---|
| `openai` | OpenAI Embeddings API (default) |
| `openai-compatible` | Any OpenAI-compatible API (recommended for self-hosters) |
| `ollama` | Local Ollama instance |
| `transformers` | Local WASM/ONNX inference via `@huggingface/transformers` |
For self-hosted deployments, `openai-compatible` is the recommended choice: it works with any service that exposes an OpenAI-compatible embeddings API.
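A self-hosted setup might look like the fragment below. `EMBEDDING_PROVIDER` is documented above; the base-URL and model variable names are hypothetical placeholders — check `.env.example` for the actual keys:

```
EMBEDDING_PROVIDER=openai-compatible
# Hypothetical keys — verify the real names in .env.example:
EMBEDDING_BASE_URL=http://localhost:8080/v1
EMBEDDING_MODEL=nomic-embed-text
```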
See .env.example for the full list of configuration options.
See deploy/ for platform-specific configs (Railway, etc.). Copy the relevant config to your project root before deploying.
```shell
docker compose up --build
```

The compose file includes Postgres with pgvector. The app container runs migrations on startup, then starts the server.
```
src/
  routes/     # Express route handlers
  services/   # Business logic (extraction, retrieval, packaging)
  db/         # Repository layer, schema, migrations
  adapters/   # Type contracts for external integrations
  config.ts   # Environment-driven configuration
  server.ts   # Express app bootstrap
```
Storage: Postgres + pgvector. Retrieval: hybrid (vector + BM25/FTS). Mutation: contradiction-safe AUDN with claim versioning.
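The AUDN decision space can be modeled roughly as below. This is a toy sketch: the type and function names are assumptions, and the real engine decides via extraction, embeddings, and contradiction detection rather than the boolean flags used here:

```typescript
// Toy model of AUDN (Add / Update / Delete / No-op) with claim versioning.
// Names and logic are illustrative assumptions, not the project's actual types.
type AudnOp = "add" | "update" | "delete" | "noop";

interface Claim {
  id: string;
  text: string;
  version: number;
  supersededBy?: string; // set when a later version replaces this claim
}

function decide(
  existing: Claim | undefined,
  incoming: string,
  signals: { contradicts: boolean; retraction: boolean },
): AudnOp {
  if (signals.retraction) return existing ? "delete" : "noop"; // nothing stored → nothing to delete
  if (!existing) return "add";                   // first time we see this fact
  if (existing.text === incoming) return "noop"; // duplicate, keep current version
  if (signals.contradicts) return "update";      // supersede: bump version, record lineage
  return "add";                                  // unrelated new fact
}
```

The "fail-closed" framing means ambiguous cases should resolve to the safest outcome (here, `noop`) rather than mutating stored claims.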
See docs/memory-research/architecture-overview.md for detailed architecture documentation.
```shell
npm test                   # Run unit tests
npm run test:deployment    # Deployment config tests
npm run test:docker-smoke  # Docker smoke test
npm run migrate:test       # Run migrations against the test DB
```

See CONTRIBUTING.md for setup, workflow, and code style expectations.