Integration Guide
Connect your application to the Mind Layer. End-to-end guide with examples for REST API and official SDKs.
Overview
Your backend manages business logic and user sessions. Call the Mind Layer for agent intelligence — it owns memory, personality, mood, relationships, and context assembly.
Integrate via the REST API using official SDKs for Go, TypeScript, and Python.
Official SDKs
Official SDKs for Go, TypeScript, and Python. Each SDK wraps the full REST API with typed methods, SSE streaming, automatic retries, and error handling.
TypeScript / JavaScript
bun add @sonzai-labs/agents
# or: npm install @sonzai-labs/agents
REST API
JSON-based endpoints. Chat responses stream via Server-Sent Events (SSE).
Authentication
All REST requests use Bearer authentication with your project API key:
# All REST requests use Bearer auth with your project API key
curl -H "Authorization: Bearer sk_your_api_key" \
https://api.sonz.ai/api/v1/agents/{agentId}/chat
Core Interaction Flow (REST)
# Chat (SSE streaming response)
POST /api/v1/agents/{agentId}/chat
{ "messages": [{"role":"user","content":"Hello!"}], "user_id": "user-123" }
# Response: Server-Sent Events
# data: {"choices":[{"delta":{"content":"Hi"}}]}
# data: [DONE]
SSE Parsing
Each event line starts with the prefix data: . Strip the prefix and JSON-parse the remainder. The stream ends with data: [DONE].
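That parsing rule reduces to a few lines of TypeScript; this is a minimal sketch, with the event shape taken from the example stream above:

```typescript
// Parse one SSE line. Returns the delta text, or flags the [DONE] sentinel.
// Blank lines and non-data lines yield empty text.
function parseSSELine(line: string): { done: boolean; text: string } {
  if (!line.startsWith("data: ")) return { done: false, text: "" };
  const payload = line.slice("data: ".length).trim();
  if (payload === "[DONE]") return { done: true, text: "" };
  const event = JSON.parse(payload);
  return { done: false, text: event.choices?.[0]?.delta?.content ?? "" };
}
```

In practice you would buffer the response body, split on newlines, and feed each line through this helper until it reports done.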
Available REST Endpoints
POST /api/v1/agents Create agent
GET /api/v1/agents List agents
GET /api/v1/agents/{agentId} Get agent
POST /api/v1/agents/{agentId}/chat Chat (SSE streaming)
GET /api/v1/agents/{agentId}/notifications Pending notifications
POST /api/v1/agents/{agentId}/notifications/{id}/consume Consume notification
GET /api/v1/agents/{agentId}/notifications/history Notification history
TypeScript SDK (Server-Side)
For Node.js backends, serverless functions, and server-side frameworks. Works with Node.js >= 18, Bun, and Deno. Not for browser/client-side use — API keys would be exposed.
import { Sonzai } from "@sonzai-labs/agents";
const client = new Sonzai({ apiKey: "sk_your_api_key" });
// Chat (non-streaming)
const response = await client.agents.chat("agent-id", {
messages: [{ role: "user", content: "Hello!" }],
userId: "user-123",
});
console.log(response.content);
// Chat (streaming)
for await (const event of client.agents.chatStream("agent-id", {
messages: [{ role: "user", content: "Tell me a story" }],
userId: "user-123",
language: "en",
timezone: "America/New_York",
})) {
process.stdout.write(event.choices?.[0]?.delta?.content ?? "");
}
// Memory, personality, context engine data
const memory = await client.agents.memory.list("agent-id", { userId: "user-123" });
const personality = await client.agents.personality.get("agent-id");
const mood = await client.agents.getMood("agent-id", { userId: "user-123" });
Python SDK
For Python backends, data pipelines, and evaluation scripts. Supports both sync and async clients.
from sonzai import Sonzai
client = Sonzai(api_key="sk_your_api_key")
# Chat (non-streaming)
response = client.agents.chat(
"agent-id",
messages=[{"role": "user", "content": "Hello!"}],
user_id="user-123",
)
print(response.content)
# Chat (streaming)
for event in client.agents.chat(
"agent-id",
messages=[{"role": "user", "content": "Tell me a story"}],
user_id="user-123",
language="en",
timezone="America/New_York",
stream=True,
):
print(event.content, end="", flush=True)
# Memory, personality, context engine data
memory = client.agents.memory.list("agent-id", user_id="user-123")
personality = client.agents.personality.get("agent-id")
mood = client.agents.get_mood("agent-id", user_id="user-123")
client.close()
Browser / Frontend Apps
Server-Side Proxy Required
The Sonzai API does not accept browser (client-side) requests. API keys must never be exposed in frontend code. This is the same pattern used by OpenAI, Anthropic, and other AI API providers.
For web apps (React, Next.js, Vue, etc.), create a backend API route that proxies to Sonzai. Your frontend calls your server; your server calls Sonzai with the API key.
Next.js API Route
// app/api/chat/route.ts (runs on your server)
import { Sonzai } from "@sonzai-labs/agents";
const client = new Sonzai({ apiKey: process.env.SONZAI_API_KEY! });
export async function POST(req: Request) {
const { agentId, messages, userId } = await req.json();
const stream = client.agents.chatStream(agentId, { messages, userId });
return new Response(
new ReadableStream({
async start(controller) {
for await (const event of stream) {
controller.enqueue(new TextEncoder().encode(
`data: ${JSON.stringify(event)}\n\n`
));
}
controller.enqueue(new TextEncoder().encode("data: [DONE]\n\n"));
controller.close();
},
}),
{ headers: { "Content-Type": "text/event-stream" } }
);
}
Frontend (Any Framework)
// Calls YOUR server, not Sonzai directly
const res = await fetch("/api/chat", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
agentId: "agent-uuid",
messages: [{ role: "user", content: "Hello!" }],
userId: "user-123",
}),
});
const reader = res.body.getReader();
const decoder = new TextDecoder();
while (true) {
const { done, value } = await reader.read();
if (done) break;
// Parse SSE chunks from your proxy
console.log(decoder.decode(value));
}
Express / Fastify
// server.ts
import express from "express";
import { Sonzai } from "@sonzai-labs/agents";
const app = express();
const client = new Sonzai({ apiKey: process.env.SONZAI_API_KEY! });
app.post("/api/chat", async (req, res) => {
const { agentId, messages, userId } = req.body;
res.setHeader("Content-Type", "text/event-stream");
for await (const event of client.agents.chatStream(agentId, { messages, userId })) {
res.write(`data: ${JSON.stringify(event)}\n\n`);
}
res.write("data: [DONE]\n\n");
res.end();
});
Go SDK
For backends and performance-sensitive applications, the Go SDK provides typed access to the REST API.
Connection Setup
import sonzai "github.com/sonz-ai/sonzai-go"
// Connects to api.sonz.ai by default — just provide your API key
client := sonzai.NewClient("sk_your_api_key")
// Or, to override the base URL for local dev / self-hosted:
// client := sonzai.NewClient("sk_your_api_key",
//     sonzai.WithBaseURL("http://localhost:8090"),
// )
Agent Lifecycle
Creating an Agent
When a user creates a new agent in your application, call Agents.Create with their personality configuration:
resp, err := client.Agents.Create(ctx, sonzai.CreateAgentRequest{
Name: "Luna",
Gender: "female",
Big5: sonzai.Big5Scores{
Openness: 0.75,
Conscientiousness: 0.60,
Extraversion: 0.80,
Agreeableness: 0.70,
Neuroticism: 0.30,
},
Language: "en",
})
// resp.AgentID is the platform-generated UUID
// Store this in your user record
Retrieving an Agent
agent, err := client.Agents.Get(ctx, agentId)
// agent includes: name, personality prompt, big5, mood, etc.
Chat Session Flow
This is the core integration loop.
Streaming Chat
The chat endpoint assembles context, streams AI responses, and updates internal agent state automatically:
stream, err := client.Agents.ChatStream(ctx, agentId, sonzai.ChatRequest{
UserID: userId,
Messages: []sonzai.Message{
{Role: "user", Content: "I had a great day hiking!"},
},
Language: "en",
})
// Read streaming events
for event := range stream {
fmt.Print(event.Content)
}
Mood Labels
Labels: Blissful (80-100), Content (60-79), Neutral (40-59), Melancholy (20-39), Troubled (0-19). Mood naturally drifts back toward the agent's personality baseline over time.
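The bands above reduce to a simple lookup; a minimal sketch of the documented thresholds:

```typescript
// Map a 0-100 mood score to its documented label band.
function moodLabel(score: number): string {
  if (score >= 80) return "Blissful";
  if (score >= 60) return "Content";
  if (score >= 40) return "Neutral";
  if (score >= 20) return "Melancholy";
  return "Troubled";
}
```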
Proactive Notifications
Agents can reach out to users between conversations. When triggered, the platform generates a contextual message using the agent's full state and stores it as "pending". Your app polls and marks notifications consumed after delivery.
REST Polling
# Poll for pending proactive messages
GET /api/v1/agents/{agentId}/notifications?status=pending&user_id=user-123
# Response
{
"notifications": [{
"message_id": "msg-uuid",
"user_id": "user-123",
"check_type": "check_in",
"intent": "Ask about yesterday's hiking trip",
"generated_message": "Hey! How was the hike at Mount Rainier?",
"status": "pending",
"created_at": "2026-03-07T10:00:00Z"
}]
}
# After delivering to user, mark consumed
POST /api/v1/agents/{agentId}/notifications/{messageId}/consume
Delivery Best Practice
Poll every 30-60 seconds. Always mark consumed after delivery to prevent re-delivery.
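A single polling pass following this practice might look like the sketch below. The fetch paths mirror the REST endpoints above; deliverToUser is a hypothetical stand-in for your delivery channel:

```typescript
// Poll pending notifications, deliver each, then mark it consumed.
async function pollNotifications(
  baseUrl: string,
  apiKey: string,
  agentId: string,
  userId: string,
  deliverToUser: (message: string) => Promise<void>,
): Promise<void> {
  const headers = { Authorization: `Bearer ${apiKey}` };
  const res = await fetch(
    `${baseUrl}/api/v1/agents/${agentId}/notifications?status=pending&user_id=${userId}`,
    { headers },
  );
  const { notifications } = await res.json();
  for (const n of notifications ?? []) {
    await deliverToUser(n.generated_message);
    // Mark consumed only after successful delivery to prevent re-delivery.
    await fetch(
      `${baseUrl}/api/v1/agents/${agentId}/notifications/${n.message_id}/consume`,
      { method: "POST", headers },
    );
  }
}
```

Run it on an interval within the recommended window, e.g. setInterval(() => pollNotifications(...), 45_000).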
Webhook Integration
Register webhooks for event callbacks (wakeups, consolidation, breakthroughs):
// Register webhook endpoints
PUT /api/projects/{projectId}/webhooks/{eventType}
{
"webhook_url": "https://your-server.com/platform/webhooks/wakeup",
"auth_header": "Bearer YOUR_SERVER_KEY"
}
// Event types:
// - "wakeup" : Agent wants to proactively reach out
// - "consolidation" : Memory consolidation completed
// - "breakthrough" : Significant personality evolutionWakeup Webhook Payload
Includes the generated message for direct delivery:
{
"event_type": "on_wakeup_ready",
"agent_id": "agent-uuid",
"user_id": "user-123",
"generated_message": "Hey! How was the hike?",
"wakeup_id": "wakeup-uuid",
"check_type": "check_in"
}
Polling Alternative
Prefer polling? Use the notifications API instead of webhooks.
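On the webhook path, your endpoint should verify the auth_header you registered before trusting the payload. A framework-agnostic sketch, where the handleWakeup name and return shape are illustrative (wire it into whatever HTTP handler you use):

```typescript
// Shape of the wakeup webhook payload documented above.
interface WakeupPayload {
  event_type: string;
  agent_id: string;
  user_id: string;
  generated_message: string;
  wakeup_id: string;
  check_type: string;
}

// Check the registered auth_header and extract what delivery needs.
// Returns null when auth fails or the event type is unexpected.
function handleWakeup(
  authHeader: string | undefined,
  expectedHeader: string,
  payload: WakeupPayload,
): { userId: string; message: string } | null {
  if (authHeader !== expectedHeader) return null;
  if (payload.event_type !== "on_wakeup_ready") return null;
  return { userId: payload.user_id, message: payload.generated_message };
}
```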
Example: Backend Integration Flow
Three-service architecture:
Client App Your Backend Mind Layer
| | |
|--- Authenticate --->| |
| | |
|--- Create agent ->| |
| |--- REST: CreateAgent ------>|
| |<-- Agent ID + Profile ------|
|<-- Agent ready --| |
| | |
|--- Send message ---->| |
| |--- REST: Chat (SSE) ------->|
|<-- Streaming response <-- AI chunks + side effects -|
Your backend translates application events into Mind Layer API calls. You can swap the backend without changing agent behavior, or reuse agents across applications.
Knowledge Base (Go SDK)
Upload documents or push structured data to build a project-scoped knowledge graph. Agents search this graph during conversations.
Push Structured Data
// Insert entities and relationships
resp, err := client.Knowledge.InsertFacts(ctx, projectID, sonzai.InsertFactsOptions{
Source: "product_catalog",
Facts: []sonzai.InsertFactEntry{
{
EntityType: "product",
Label: "Widget Pro",
Properties: map[string]any{"price": 29.99, "category": "tools"},
},
},
Relationships: []sonzai.InsertRelEntry{
{FromLabel: "Widget Pro", ToLabel: "Tools", EdgeType: "belongs_to"},
},
})
fmt.Printf("Created: %d, Updated: %d\n", resp.Created, resp.Updated)Search the Knowledge Graph
results, err := client.Knowledge.Search(ctx, projectID, sonzai.KBSearchOptions{
Query: "widget price",
Limit: 10,
})
for _, r := range results.Results {
fmt.Printf("%s (%s): score=%.2f\n", r.Label, r.NodeType, r.Score)
}
Entity Schemas
// Define a schema so the LLM knows what fields to extract
schema, err := client.Knowledge.CreateSchema(ctx, projectID, sonzai.CreateSchemaOptions{
EntityType: "product",
Fields: []sonzai.KBSchemaField{
{Name: "price", Type: "number", Required: true},
{Name: "category", Type: "string"},
},
})
Analytics Rules
// Create a recommendation rule
rule, err := client.Knowledge.CreateAnalyticsRule(ctx, projectID, sonzai.CreateAnalyticsRuleOptions{
RuleType: "recommendation",
Name: "Similar products",
Config: map[string]any{"match_fields": []string{"category"}, "limit": 5},
Enabled: true,
})
// Get recommendations
recs, err := client.Knowledge.GetRecommendations(ctx, projectID, rule.RuleID, sourceNodeID, 5)
User Priming (Go SDK)
Pre-load user metadata and content so AI agents already know users from their first conversation. Metadata (name, company, title) becomes instant facts; content blocks are extracted asynchronously via LLM.
Prime a Single User
resp, err := client.Agents.Priming.PrimeUser(ctx, agentID, userID, sonzai.PrimeUserOptions{
DisplayName: "Jane Smith",
Metadata: &sonzai.PrimeUserMetadata{
Company: "Acme Corp",
Title: "VP Engineering",
Email: "jane@acme.com",
Custom: map[string]string{"region": "APAC", "tier": "enterprise"},
},
Content: []sonzai.PrimeContentBlock{
{Type: "text", Body: "Jane led the migration from AWS to GCP..."},
},
Source: "crm",
})
fmt.Printf("Job: %s, Facts created: %d\n", resp.JobID, resp.FactsCreated)Batch Import
resp, err := client.Agents.Priming.BatchImport(ctx, agentID, sonzai.BatchImportOptions{
Users: []sonzai.BatchImportUser{
{UserID: "user-1", DisplayName: "Jane", Metadata: &sonzai.PrimeUserMetadata{Company: "Acme"}},
{UserID: "user-2", DisplayName: "Bob", Metadata: &sonzai.PrimeUserMetadata{Company: "Globex"}},
},
Source: "crm_sync",
})
fmt.Printf("Job: %s, Users: %d\n", resp.JobID, resp.TotalUsers)Manage Metadata
// Get metadata
meta, err := client.Agents.Priming.GetMetadata(ctx, agentID, userID)
// Update metadata (partial — merges with existing)
updated, err := client.Agents.Priming.UpdateMetadata(ctx, agentID, userID, sonzai.UpdateMetadataOptions{
Company: ptr("New Corp"),
Custom: map[string]string{"tier": "premium"},
})
Async Processing
Metadata facts (name, company, title) are created synchronously. Content blocks (text, chat transcripts) are processed in the background via LLM extraction. Poll the job status to track progress.
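Job polling can be done with a small generic helper. The sketch below takes the status call as a callback, since the exact job-status method isn't shown here; the "completed" and "failed" status strings are assumptions:

```typescript
// Poll fetchStatus until it reports a terminal state or attempts run out.
// Terminal states ("completed" / "failed") are assumed names, not confirmed API values.
async function pollUntilDone(
  fetchStatus: () => Promise<string>,
  { intervalMs = 2000, maxAttempts = 30 } = {},
): Promise<string> {
  for (let i = 0; i < maxAttempts; i++) {
    const status = await fetchStatus();
    if (status === "completed" || status === "failed") return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("priming job did not finish in time");
}
```

You would pass in whatever status lookup your SDK exposes for resp.JobID.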
Best Practices
- Use the streaming chat call (ChatStream): it handles context assembly, AI streaming, and state updates in one call.
- Always pass application state via GameContext.custom_fields. The platform doesn't cache it.
- Register webhooks for wakeup events so agents can initiate contact.
- Don't duplicate personality, memory, or relationship logic: let the engine own agent data.
- Poll notifications every 30-60 seconds. Consume after delivery to prevent re-delivery.
- Use the REST API for all integrations, via the official Go, TypeScript, or Python SDKs for the best developer experience.
- Browser apps must proxy through your backend; never expose API keys in client-side code. See the Browser / Frontend Apps section above.