MemOS

https://github.com/MemTensor/MemOS

MemOS: Separating AI Memory and Knowledge Base Long Before Claude

Reuters reported yesterday that Claude’s “Knowledge Base + Memory” feature is set for an update—essentially addressing the same age-old issue:

Native AI treats every conversation as a first encounter. The model doesn’t recall your past actions, decision-making processes, or preferences.

To make AI seem memory-equipped, we have to reinsert historical context, user data, and document content all at once with every request.

While browsing GitHub, I found an open-source project named MemOS, boasting over 5,000 stars.

Instead of merely expanding the context window, it treats memory and knowledge base as first-class citizens, specializing in long-term memory management, contextual continuity, cross-conversation state reuse, and knowledge base support.

More importantly—

These capabilities are available right now.

MemOS already supports integrating files, URLs, and other content into its knowledge base, updating memory continuously during conversations so the model never starts from scratch. Actively maintained and rapidly iterated, it requires no wait for next-gen models or “future features.”

If you’ve been focusing on Claude, Coworks, knowledge base solutions, or AI memory mechanisms lately,

MemOS is a ready-to-use tool you can try today.

As the industry’s first AI Memory Operating System, MemOS is more than a storage tool. Its pioneering three-layer memory architecture equips AI Agents with a “hippocampus,” granting AI truly growable and manageable memory capabilities.

Open Source Project Overview

Without MemOS, memory management for an AI app typically means handling document splitting, vectorization, retrieval logic, and runaway context length yourself. MemOS takes a different approach: it manages memory as a system resource, much as an operating system manages RAM and disk. It offers a full framework enabling your AI to:

  • Remember long-term interactions: Not just a few dialogues, but long-term user preferences and experiences.
  • Self-evolve: Its understanding of you updates dynamically with more conversations, instead of remaining static.
  • Multimodal memory: A standout feature—supporting image, document/chart understanding and memory, beyond plain text.
  • Knowledge base capabilities: Directly integrate files into MemOS for searchable, reusable long-term knowledge assets. Features include file upload/URL auto-parsing, shared knowledge bases across projects, and dynamic memory updates/revisions during conversations.

Key Difference from RAG

The core flaw of traditional RAG pipelines is statelessness: each query retrieves from a static index, and nothing learned in one conversation carries into the next. MemOS, by contrast, maintains dynamic long-term memory that updates as conversations progress.
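To make the contrast concrete, here is a toy sketch (not MemOS code): a stateless RAG retriever answers every query from a fixed index, while a stateful store keeps accumulating per-user facts across turns. The class and method names are invented for illustration.

```python
class StatefulMemoryStore:
    """Toy illustration (not MemOS code): per-user memory that grows
    with every conversation, unlike a stateless lookup over a fixed
    document index."""

    def __init__(self):
        self.memories = {}  # user_id -> list of remembered facts

    def remember(self, user_id, fact):
        # Each new turn can add to the user's memory.
        self.memories.setdefault(user_id, []).append(fact)

    def recall(self, user_id, query):
        # Naive keyword match; a real system would use vector retrieval.
        words = query.lower().split()
        return [m for m in self.memories.get(user_id, [])
                if any(w in m.lower() for w in words)]


store = StatefulMemoryStore()
store.remember("alice", "prefers answers in Python")
store.remember("alice", "works on a recommender system")
print(store.recall("alice", "python style"))  # only the first fact matches
```

A stateless retriever has no `remember` step: the index it searches never changes, which is exactly the gap MemOS targets.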

MemOS Architecture

MemOS’s architecture is innovative. Beyond data storage/retrieval, it introduces scheduling and classifies memory into multiple types:
  • Textual Memory: Stores textual content (e.g., user inputs).
  • Activation Memory: Accelerates reasoning, saving tokens and boosting efficiency.
  • Parametric Memory: Stores LoRA weights.
  • Tool Memory: Records Agent tool usage traces. If an Agent failed with a tool before, it remembers to adjust its approach—true intelligence, not repeating mistakes.
  • Redis Streams Scheduler: An engineering highlight—using Redis Streams to build a multi-level queue for high concurrency handling.
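The memory taxonomy above can be captured in a small data model. The enum values and field names below are illustrative assumptions, not MemOS's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class MemoryType(Enum):
    # The four memory classes described above (illustrative labels).
    TEXTUAL = "textual"        # textual content such as user inputs
    ACTIVATION = "activation"  # cached state that speeds up reasoning
    PARAMETRIC = "parametric"  # LoRA weights and other parameters
    TOOL = "tool"              # traces of an Agent's tool usage


@dataclass
class MemoryRecord:
    """Illustrative record shape -- not MemOS's real schema."""
    user_id: str
    mem_type: MemoryType
    payload: dict
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))


# A tool-memory record is what lets an Agent avoid repeating a failed call:
record = MemoryRecord(
    user_id="test_user_001",
    mem_type=MemoryType.TOOL,
    payload={"tool": "web_search", "outcome": "timeout", "retries": 2},
)
```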

How to Use MemOS

① Create a Knowledge Base

  1. Log into the MemOS console and create a new knowledge base.
  2. Upload files (PDF, Word, etc.) to the knowledge base.
  3. MemOS automatically handles storage, parsing, segmentation, and memory generation. Wait until the document status shows “Available.”
  4. Copy the new knowledge base ID from the list (for later code use).

② Prepare the Environment

Run the following command in your Python environment to install the required library (the OpenAI client; datetime ships with the standard library and needs no install):

pip install openai

Prepare API Keys

  • MemOS API Key: Register on the website below, then access the API Key console to generate and copy your key: URL: https://memos-dashboard.openmem.net/cn/quickstart/?source=landing
  • OpenAI API Key: Prepare your LLM provider key.

③ Implement Core Logic in Code

Write a Python script to configure environment variables with your API keys and knowledge base ID:

import os
import json

os.environ["MEMOS_API_KEY"] = "your_memos_key"
os.environ["OPENAI_API_KEY"] = "your_openai_key"
os.environ["MEMOS_BASE_URL"] = "https://memos.memtensor.cn/api/openmem/v1"
os.environ["KNOWLEDGE_BASE_IDS"] = json.dumps(["your_knowledge_base_id"])

Call MemOS’s three core APIs (search memory, add message, get message) to equip your AI with a preference-remembering, evolving knowledge base.
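The later steps reference a KnowledgeBaseAssistant class; a minimal sketch of it is below. Note that the endpoint paths (/search/memory, /add/message), payload fields, and the gpt-4o-mini model name are assumptions for illustration, and only the search-memory and add-message calls are shown (get message would fetch stored conversation history). Consult the MemOS API docs for the real contract.

```python
import json
import os
import urllib.request


class KnowledgeBaseAssistant:
    """Minimal sketch wiring MemOS-style memory APIs into a chat flow.
    Endpoint paths and payload fields are illustrative assumptions."""

    def __init__(self):
        self.base_url = os.environ["MEMOS_BASE_URL"].rstrip("/")
        self.api_key = os.environ["MEMOS_API_KEY"]
        self.kb_ids = json.loads(os.environ["KNOWLEDGE_BASE_IDS"])

    def _post(self, path, payload):
        # Small JSON-over-HTTP helper using only the standard library.
        req = urllib.request.Request(
            self.base_url + path,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Authorization": f"Bearer {self.api_key}",
                     "Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())

    def search_memory(self, user_id, query):
        # (1) Search memory: fetch memories relevant to this query.
        body = {"user_id": user_id, "query": query,
                "knowledge_base_ids": self.kb_ids}
        return self._post("/search/memory", body).get("memories", [])

    def add_messages(self, user_id, messages):
        # (2) Add message: persist the turn so memory keeps evolving.
        self._post("/add/message", {"user_id": user_id,
                                    "messages": messages})

    @staticmethod
    def build_prompt(memories, user_input):
        # Fold retrieved memories into the system prompt.
        context = "\n".join(f"- {m}" for m in memories)
        return [
            {"role": "system",
             "content": "Known about this user:\n" + context},
            {"role": "user", "content": user_input},
        ]

    def chat(self, user_id, user_input):
        from openai import OpenAI  # requires `pip install openai`
        memories = self.search_memory(user_id, user_input)
        answer = (OpenAI().chat.completions
                  .create(model="gpt-4o-mini",  # assumed model name
                          messages=self.build_prompt(memories, user_input))
                  .choices[0].message.content)
        self.add_messages(user_id, [
            {"role": "user", "content": user_input},
            {"role": "assistant", "content": answer},
        ])
        return answer
```

The key design point is the round trip in chat(): retrieve memories before calling the LLM, then write the new turn back so the next conversation starts with an updated picture of the user.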

④ Instantiate the Assistant

ai_assistant = KnowledgeBaseAssistant()
user_id = "test_user_001"  # Define a user ID

⑤ Start the Conversation Loop

Create a while loop to receive user input, call ai_assistant.chat() for responses, and print them:
while True:
    user_input = input("You: ")
    if user_input.lower() in ["exit", "quit"]:
        break
    response = ai_assistant.chat(user_id, user_input)
    print("AI Assistant:", response)
For more use cases, visit the project homepage: https://github.com/MemTensor/MemOS

Call Center AI

Place a phone call from an AI agent with a single API call, or dial the bot directly at its configured phone number.

Vibe Kanban: A Kanban Tool for the AI Era

The project name perfectly fits the current Vibe Coding trend. Simply put, it’s a kanban tool designed for the AI era.

Write a Resume with YAML

The biggest hassle when writing a resume is formatting. RenderCV takes an interesting approach: it allows you to craft your resume content using a YAML file.