# Coding Agent Integration

Integrate Hebbrix with your coding agent using MCP (Model Context Protocol) or direct API calls.
## MCP Integration (Recommended)

The fastest way to add memory to Claude, Cline, or any other MCP-compatible agent:
```shell
# Install the Hebbrix MCP server
cd mcp
./quick_setup.sh
```

Then add the server to your Claude Desktop config:

```json
{
  "mcpServers": {
    "hebbrix": {
      "command": "python",
      "args": ["/path/to/hebbrix/mcp/server.py"],
      "env": {
        "HEBBRIX_API_KEY": "your_api_key"
      }
    }
  }
}
```

### Available MCP Tools
| Tool | Description |
|---|---|
| `hebbrix_remember` | Store a new memory from the conversation |
| `hebbrix_search` | Search for relevant memories |
| `hebbrix_recall` | Get context for the current conversation |
| `hebbrix_forget` | Delete specific memories |
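Under the hood, MCP tools are invoked over JSON-RPC 2.0. As a rough sketch, a client calling `hebbrix_search` sends a `tools/call` request shaped like the one below (the argument names `query` and `limit` are assumptions for illustration, not a documented schema):

```python
import json

# JSON-RPC 2.0 "tools/call" request, as an MCP client would send it to the
# Hebbrix MCP server. Argument names are illustrative assumptions.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "hebbrix_search",
        "arguments": {"query": "API authentication", "limit": 5},
    },
}

print(json.dumps(request, indent=2))
```

Your MCP client library builds and sends this message for you; the sketch only shows what crosses the wire.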
## Direct API Integration

For custom agents, use the API directly:
```python
import asyncio

from hebbrix import MemoryClient

async def main():
    # The official Hebbrix Python SDK is async-first.
    async with MemoryClient(api_key="mem_sk_...") as client:
        collection = await client.collections.create(name="my_agent")

        # Store conversation context
        await client.memories.create(
            collection_id=collection["id"],
            content="User asked about API authentication",
            importance=0.7,
        )

        # Retrieve context for the next response (6-layer hybrid search)
        results = await client.search(
            query="How do I authenticate?",
            collection_id=collection["id"],
            limit=5,
        )

        # Use the results to enhance your agent's response
        context = "\n".join(r["content"] for r in results)
        # response = your_llm.generate(context + user_message)

asyncio.run(main())
```

## LangChain Integration
Hebbrix has no dedicated LangChain package; the REST API is the integration surface. Wrap it as a LangChain tool with the standard `@tool` decorator, then let your chain call it for retrieval and storage across runs.
```python
import os

import requests
from langchain.agents import AgentExecutor, create_openai_tools_agent
from langchain.tools import tool
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

HEBBRIX_API_KEY = os.environ["HEBBRIX_API_KEY"]
COLLECTION_ID = "my_agent"  # any collection you own
BASE = "https://api.hebbrix.com/v1"
H = {"Authorization": f"Bearer {HEBBRIX_API_KEY}"}

@tool
def remember(content: str) -> str:
    """Store a new memory for later retrieval."""
    r = requests.post(
        f"{BASE}/memories/raw",
        headers=H,
        json={"content": content, "collection_id": COLLECTION_ID},
    )
    r.raise_for_status()
    return "Stored."

@tool
def recall(query: str) -> str:
    """Search relevant memories for a query and return the top matches."""
    r = requests.post(
        f"{BASE}/search",
        headers=H,
        json={"query": query, "collection_id": COLLECTION_ID, "limit": 5},
    )
    r.raise_for_status()
    hits = r.json().get("results", [])
    return "\n".join(f"- {h['content']}" for h in hits) or "(no memories)"

prompt = ChatPromptTemplate.from_messages([
    ("system", "You have tools to remember and recall long-term facts about the user."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

llm = ChatOpenAI(model="gpt-4o-mini")
agent = create_openai_tools_agent(llm, [remember, recall], prompt)
executor = AgentExecutor(agent=agent, tools=[remember, recall], verbose=True)

# The agent can now persist and retrieve memories across runs
executor.invoke({"input": "Remember user preference: dark mode."})
executor.invoke({"input": "What UI theme does the user prefer?"})  # → "dark mode"
```

## Coding-Agent API Endpoints
Hebbrix ships a dedicated set of endpoints for coding agents at /v1/coding-agent/*. They let the agent record code changes, track errors, surface past fixes, and pull historical context — all scoped to the authenticated user.
| Method | Endpoint | Purpose |
|---|---|---|
| POST | `/v1/coding-agent/track-code-change` | Record a diff / file modification (task type, language, commit hash) |
| POST | `/v1/coding-agent/check-error` | Check whether an error has been seen before and how it was previously fixed |
| POST | `/v1/coding-agent/get-fix-suggestions` | Ranked list of fix strategies from past successful resolutions |
| POST | `/v1/coding-agent/track-error` | Persist an error encountered during a run, with stack trace and code context |
| POST | `/v1/coding-agent/track-fix` | Record an applied fix (before/after code, success flag, tests-passed flag) |
| POST | `/v1/coding-agent/coding-history` | Retrieve chronological history of changes, errors, and fixes (filter by file / language / task type) |
| GET | `/v1/coding-agent/health` | Health check for the coding-agent API surface |
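As a sketch of how an agent might call these endpoints with plain `requests`: the example below records an applied fix via `track-fix`. The request-body field names (`error_message`, `code_before`, `code_after`, `success`, `tests_passed`) are assumptions inferred from the table above, not the documented schema.

```python
import requests

BASE = "https://api.hebbrix.com/v1"
HEADERS = {"Authorization": "Bearer mem_sk_..."}

def build_fix_payload(error_message: str, before: str, after: str,
                      tests_passed: bool) -> dict:
    """Assemble a track-fix request body.

    Field names are illustrative assumptions, not a documented schema.
    """
    return {
        "error_message": error_message,
        "code_before": before,
        "code_after": after,
        "success": tests_passed,
        "tests_passed": tests_passed,
    }

def track_fix(payload: dict) -> dict:
    """POST an applied fix so later runs can surface it via check-error."""
    r = requests.post(
        f"{BASE}/coding-agent/track-fix",
        headers=HEADERS,
        json=payload,
    )
    r.raise_for_status()
    return r.json()
```

Keeping payload construction separate from the HTTP call makes the body easy to unit-test without hitting the network.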
## Example Projects

Check out the `/examples` folder for:
- LangChain integration
- Automatic memory chatbot (3 lines of code!)
- Custom coding agent setup
