MantleMemo turns AI conversations into reusable intelligence on Mantle. Memory stays off-chain, while Mantle enforces ownership and payments—making intelligence trustless and composable.
Instead of fine-tuning models or rebuilding prompts, MantleMemo captures long-term agent memory and exposes it as permissioned, composable intelligence.
- Wraps any stateless LLM with persistent memory
- Each LLM can have multiple isolated chats (memory capsules)
- Memory is reused across apps via API or SDK
- Intelligence stays off-chain; trust, access, and payments live on Mantle
Mantle provides low-cost, high-frequency coordination for agent identity, access control, staking, and usage tracking.
This makes intelligence portable, trustless, and composable across applications.
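MantleMemo's actual contracts and event formats are not shown in this document. As one illustration of the off-chain/on-chain split described above, a usage event can be committed on-chain as a hash while the raw query and memory stay off-chain (a sketch of the pattern, not the project's actual scheme; `usage_record_hash` is a hypothetical helper):

```python
import hashlib
import json

def usage_record_hash(agent_id: str, chat_id: str, query: str) -> str:
    """Hash a usage event so only the digest needs to be anchored on-chain.

    The raw query and memory stay off-chain; the chain sees an opaque
    commitment that can later be audited against off-chain logs.
    """
    record = json.dumps(
        {"agent_id": agent_id, "chat_id": chat_id, "query": query},
        sort_keys=True,
    )
    return hashlib.sha256(record.encode("utf-8")).hexdigest()

digest = usage_record_hash("agent_123", "governance_chat", "Analyze proposal X")
print(digest)  # 64-char hex commitment, suitable for an on-chain event log
```

Because the hash is deterministic, anyone holding the off-chain record can recompute it and verify it against the on-chain commitment without the raw data ever touching the chain.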
- Agent: Persistent identity that owns intelligence
- Chat (Capsule): Isolated memory scope within an agent
- Memory: Semantic knowledge extracted from conversations
- Runtime: Injects memory into LLM calls without modifying the model
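The concepts above can be sketched in miniature. This is an illustrative model, not the MantleMemo implementation: `Capsule` and `Runtime` are hypothetical names, the memory store is a plain list standing in for semantic retrieval, and `llm` is any stateless completion function:

```python
from dataclasses import dataclass, field

@dataclass
class Capsule:
    """An isolated memory scope ('chat') owned by one agent."""
    memories: list[str] = field(default_factory=list)

@dataclass
class Runtime:
    """Injects capsule memory into each LLM call without touching the model."""
    llm: callable
    capsules: dict[str, Capsule] = field(default_factory=dict)

    def chat(self, chat_id: str, prompt: str) -> str:
        capsule = self.capsules.setdefault(chat_id, Capsule())
        # Prepend retrieved memories as context for the stateless model
        context = "\n".join(capsule.memories)
        reply = self.llm(f"{context}\n\nUser: {prompt}")
        # Persist new knowledge so future calls in this capsule can reuse it
        capsule.memories.append(f"Q: {prompt} -> A: {reply}")
        return reply

# Stub LLM: reports how much context it was handed
runtime = Runtime(llm=lambda p: f"({len(p)} chars of context seen)")
runtime.chat("governance_chat", "Analyze proposal X")
runtime.chat("governance_chat", "Summarize prior analysis")
```

The second call sees the memory written by the first, while a different `chat_id` would start from an empty capsule: isolation between capsules, persistence within one.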
```python
from mantlememo import Agent

# Bind to an agent identity and one isolated chat capsule
agent = Agent(
    agent_id="agent_123",
    chat_id="governance_chat",
)

# The runtime injects the capsule's memory into the LLM call
response = agent.chat("Analyze proposal X")
```

* Frontend: React
* Runtime: FastAPI
* Chats & Messages: Redis
* Agents & Capsules: PostgreSQL
* Semantic Memory: mem0 (off-chain)
* Coordination Layer: Mantle
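One way this storage split could be laid out (a hypothetical schema and key scheme, not the project's actual one): hot, append-only message logs keyed per capsule in Redis, and durable agent/capsule ownership records in PostgreSQL:

```python
def redis_message_key(agent_id: str, chat_id: str) -> str:
    """Key of the Redis list holding one capsule's ordered message log."""
    return f"agent:{agent_id}:chat:{chat_id}:messages"

# Durable ownership records live in PostgreSQL (illustrative DDL)
CAPSULE_TABLE_DDL = """
CREATE TABLE IF NOT EXISTS capsules (
    chat_id  TEXT PRIMARY KEY,
    agent_id TEXT NOT NULL,          -- owning agent identity
    created  TIMESTAMPTZ DEFAULT now()
);
"""

print(redis_message_key("agent_123", "governance_chat"))
```

Keeping the message log in Redis keeps per-turn reads and appends cheap, while PostgreSQL holds the slower-changing identity and ownership rows that the coordination layer references.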
* ❌ No model fine-tuning
* ❌ No raw data on-chain
* ❌ No prompt marketplace
* ❌ No AI inference on-chain
MVP live with SDK-based access to persistent agent intelligence.
Hackathon Progress: Built the MantleMemo runtime with persistent agent memory, multi-chat capsules, and a read-only SDK for reuse across apps. Integrated off-chain semantic memory with on-chain coordination for identity, access, and usage. Delivered a working UI and API demonstrating reusable intelligence without fine-tuning.

Future Scope: Enable staking, reputation, and marketplace discovery for high-quality intelligence. Expand SDKs across languages and support multi-agent workflows. Add privacy and verification layers to make intelligence trustless and composable at ecosystem scale.
Pitched MantleMemo to Cerebras and had early discussions with Solana teams. Received interest from teams exploring adoption, including Bytex for LLM inference integration.