SISU Framework
A TypeScript framework for reliable AI agents. Compose middleware like Express, trace every decision, and swap providers in one line. No magic. No surprises.
Get started
pnpm add @sisu-ai/core @sisu-ai/adapter-openai \
  @sisu-ai/mw-register-tools @sisu-ai/mw-tool-calling \
  @sisu-ai/mw-conversation-buffer @sisu-ai/mw-trace-viewer \
  @sisu-ai/mw-error-boundary zod dotenv
Everything you need to know.
Four ideas that explain how Sisu works — and why it's different.
Everything is Middleware
Compose your agent pipeline like an Express app. Each middleware does one thing well — error handling, tool registration, conversation memory, tracing. Stack them in any order.
No hidden framework magic. Every behaviour is explicit, composable, and testable in isolation.
const app = new Agent()
.use(errorBoundary()) // catch errors
.use(traceViewer()) // record every step
.use(registerTools([...])) // expose tools to the model
.use(conversationBuffer({ window: 8 }))
.use(toolCalling); // handle tool call loop
await app.handler()(ctx);
// Every middleware follows this signature
(ctx, next) => {
ctx.messages.push(...) // modify state before
await next() // run the rest of the pipeline
console.log(ctx.result) // react after
}
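That signature is the whole contract. A minimal sketch of the onion model behind it, using the classic Koa-style compose pattern (this is an illustration, not Sisu's actual internals):

```typescript
type Ctx = { messages: string[]; result?: string };
type Middleware = (ctx: Ctx, next: () => Promise<void>) => Promise<void> | void;

// compose() turns a middleware stack into a single handler; each
// middleware receives a next() that runs the rest of the stack.
function compose(mws: Middleware[]) {
  return async (ctx: Ctx): Promise<void> => {
    const dispatch = async (i: number): Promise<void> => {
      if (i < mws.length) await mws[i](ctx, () => dispatch(i + 1));
    };
    await dispatch(0);
  };
}

const app = compose([
  async (ctx, next) => {
    ctx.messages.push("before"); // modify state on the way in
    await next();                // run the rest of the pipeline
    ctx.messages.push("after");  // react on the way out
  },
  async (ctx) => {
    ctx.result = "done";         // innermost middleware
  },
]);

const ctx: Ctx = { messages: [] };
app(ctx).then(() => console.log(ctx.messages, ctx.result));
// logs: [ 'before', 'after' ] done
```

Because each middleware wraps the rest of the stack, the first one to run is also the last one to finish, which is why an error boundary belongs at the top.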
One Context, Zero Magic
Everything flows through a single typed ctx object. No hidden state, no side channels, no implicit globals. What you see is exactly what runs.
Typed Tools
Zod schemas validate tool inputs automatically. Define a tool once — the schema, description, and handler together — and use it safely everywhere, from LLM calls to unit tests.
const weather: Tool<{ city: string }> = {
name: "getWeather",
description: "Get weather for a city",
schema: z.object({ city: z.string() }),
handler: async ({ city }) => ({
city,
tempC: 21,
summary: "Sunny",
}),
};
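Because a tool bundles its schema and handler, you can unit-test it with no LLM in the loop. A dependency-free sketch of that pattern (zod is swapped for a hand-rolled parse() so the snippet runs standalone; a real zod schema exposes the same parse() method):

```typescript
interface Tool<A> {
  name: string;
  description: string;
  schema: { parse: (input: unknown) => A }; // zod schemas have this shape too
  handler: (args: A) => Promise<unknown>;
}

const weather: Tool<{ city: string }> = {
  name: "getWeather",
  description: "Get weather for a city",
  schema: {
    parse: (input) => {
      const { city } = input as { city?: unknown };
      if (typeof city !== "string") throw new Error("city must be a string");
      return { city };
    },
  },
  handler: async ({ city }) => ({ city, tempC: 21, summary: "Sunny" }),
};

// Unit test: validate the raw input, then call the handler directly.
weather.handler(weather.schema.parse({ city: "Helsinki" }))
  .then((r) => console.log(r));
// logs: { city: 'Helsinki', tempC: 21, summary: 'Sunny' }
```

The same parse-then-handle sequence is what the tool-calling middleware performs on the model's arguments, so a tool that passes this test behaves identically inside the pipeline.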
Control Flow is Just Code
Sequence, branch, loop, run in parallel, or define a DAG. No DSL to learn, no YAML to write. Readable, testable TypeScript all the way down.
import { sequence, branch, parallel } from '@sisu-ai/mw-control-flow';

app.use(sequence([
  classifyIntent,
  branch({
    'search': searchPipeline,
    'chat': conversationPipeline,
    'rag': ragPipeline,
  }),
]));
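Loops are the same story: since a middleware is just a function, a loop combinator is just a while loop. A sketch under that signature (loopUntil is a hypothetical name for illustration, not a published Sisu API):

```typescript
type Ctx = { attempts: number; done: boolean };
type Mw = (ctx: Ctx, next: () => Promise<void>) => Promise<void>;

// Re-run a body pipeline until a condition on ctx holds, then continue.
const loopUntil = (cond: (ctx: Ctx) => boolean, body: Mw): Mw =>
  async (ctx, next) => {
    while (!cond(ctx)) {
      await body(ctx, async () => {}); // body sees the same ctx each pass
    }
    await next();
  };

const step: Mw = async (ctx) => {
  ctx.attempts += 1;
  if (ctx.attempts >= 3) ctx.done = true; // e.g. the model finally produced a valid answer
};

const ctx: Ctx = { attempts: 0, done: false };
loopUntil((c) => c.done, step)(ctx, async () => {
  console.log(`finished after ${ctx.attempts} attempts`);
});
// logs: finished after 3 attempts
```

Because the combinator is ordinary TypeScript, you can unit-test the loop condition and the body independently, with no agent run at all.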
Swap providers in one line.
Provider-agnostic by design. The OpenAI adapter also works with any compatible API — LM Studio, vLLM, OpenRouter, and more.
const model = openAIAdapter({ model: "gpt-4o-mini" });
// const model = anthropicAdapter({ model: 'claude-sonnet-4' });
// const model = ollamaAdapter({ model: 'llama3.1' });
// Works with any OpenAI-compatible endpoint:
const local = openAIAdapter({
model: "gpt-4o-mini",
baseUrl: "http://localhost:1234/v1",
});
OpenAI
@sisu-ai/adapter-openai
Supports tools, streaming, and vision. Works with any OpenAI-compatible endpoint.
Anthropic
@sisu-ai/adapter-anthropic
Full Claude model support with tools, streaming, and vision capabilities.
Ollama
@sisu-ai/adapter-ollama
Run agents fully locally with llama3.1, qwen, and any Ollama-compatible model.
See everything. Debug nothing.
Every run auto-generates an interactive HTML trace and structured CLI output. Never parse a wall of JSON again.
HTML Trace Viewer
An interactive HTML report for every run: token usage and costs, tool calls with timing, full conversation history, and error details when things break.
CLI Trace Logs
Structured, color-coded terminal output. Every middleware step, tool call, and token is visible in your console.
Compose the pipeline you need.
Each package does one thing. Combine them in any order.
Ready-to-use capabilities.
Drop-in tools for the most common agent tasks. Web search, cloud storage, terminal, RAG, and more.
Web
webFetch · webSearch (Google, DuckDuckGo, OpenAI) · wikipedia
Cloud
AWS S3 · Azure Blob Storage integration out of the box.
Dev
terminal · github-projects — for agents that work alongside developers.
Data & RAG
ragTools · extractUrls · summarizeText with Chroma and Vectra vector adapters.
Retrieval-Augmented Generation,
done right.
Sisu keeps RAG split into small, composable layers — backend code, vector mechanics, tool-calling, and middleware stay separate.
@sisu-ai/vector-core
Defines the VectorStore contract. Swap backends without changing your agent.
@sisu-ai/vector-chroma / vector-vectra
Implement the contract for Chroma (server) and Vectra (local file-backed indexes).
@sisu-ai/rag-core
Handles chunking, record prep, and direct store/retrieve helpers.
@sisu-ai/tool-rag
Exposes model-facing retrieval and storage tools for LLM tool-calling.
@sisu-ai/mw-rag
Supports deterministic middleware-driven RAG flows in your pipeline.
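A sketch of why the contract layer matters. The interface below is an assumed minimal VectorStore shape for illustration (the real contract lives in @sisu-ai/vector-core and will differ), backed here by an in-memory cosine-similarity store you could swap for Chroma or Vectra without touching the agent:

```typescript
interface VectorRecord { id: string; vector: number[]; text: string }

// Assumed minimal contract: anything that can upsert and query.
interface VectorStore {
  upsert(records: VectorRecord[]): Promise<void>;
  query(vector: number[], topK: number): Promise<VectorRecord[]>;
}

// In-memory backend: cosine similarity over a plain array.
class MemoryStore implements VectorStore {
  private records: VectorRecord[] = [];
  async upsert(records: VectorRecord[]) { this.records.push(...records); }
  async query(vector: number[], topK: number) {
    const cos = (a: number[], b: number[]) => {
      let dot = 0, na = 0, nb = 0;
      for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2; }
      return dot / (Math.sqrt(na) * Math.sqrt(nb));
    };
    return [...this.records]
      .sort((x, y) => cos(y.vector, vector) - cos(x.vector, vector))
      .slice(0, topK);
  }
}

async function demo() {
  const store: VectorStore = new MemoryStore();
  await store.upsert([
    { id: "a", vector: [1, 0], text: "sisu docs" },
    { id: "b", vector: [0, 1], text: "unrelated" },
  ]);
  const hits = await store.query([0.9, 0.1], 1);
  console.log(hits[0].text); // logs: sisu docs
}
demo();
```

Everything above the store (chunking in rag-core, the model-facing tools in tool-rag, the middleware in mw-rag) only ever sees the interface, which is what makes the backends swappable.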
25+ runnable examples.
All examples live in the GitHub repo, covering streaming, vision, RAG, control flow, orchestration, guardrails, and more.
# Clone the repo first
git clone git@github.com:finger-gun/sisu.git
cd sisu
pnpm install
# OpenAI
cp examples/openai-hello/.env.example examples/openai-hello/.env
pnpm run ex:openai:hello
open examples/openai-hello/traces/trace.html
# Orchestration
pnpm run ex:openai:orchestration
pnpm run ex:openai:orchestration-adaptive
# Ollama — no API key needed
ollama serve && ollama pull llama3.1
pnpm run ex:ollama:hello
Zero config to start,
full control when needed.
Configure providers, logging verbosity, and trace output via environment variables.
# LLM Providers
API_KEY=sk-...
BASE_URL=http://localhost:11434
MODEL=gpt-4o-mini
# Logging
LOG_LEVEL=info # debug | info | warn | error
DEBUG_LLM=1 # log adapter requests on errors
# Tracing
TRACE_HTML=1 # auto-generate HTML traces
TRACE_JSON=1 # auto-generate JSON traces
TRACE_STYLE=dark # light | dark
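Under the hood, a zero-config-with-overrides setup is just environment-variable reads with defaults. A sketch of such a loader (hypothetical code, not Sisu's actual config handling; the variable names come from the table above):

```typescript
interface Config {
  model: string;
  baseUrl?: string;
  logLevel: "debug" | "info" | "warn" | "error";
  traceHtml: boolean;
}

function loadConfig(env: Record<string, string | undefined> = process.env): Config {
  const levels = ["debug", "info", "warn", "error"] as const;
  return {
    model: env.MODEL ?? "gpt-4o-mini",    // sensible default when unset
    baseUrl: env.BASE_URL,                 // undefined = provider's own default
    logLevel: levels.find((l) => l === env.LOG_LEVEL) ?? "info",
    traceHtml: env.TRACE_HTML === "1",     // flags are opt-in
  };
}

console.log(loadConfig({ MODEL: "llama3.1", TRACE_HTML: "1" }));
```

Unset variables fall back to defaults, and unrecognized values (say, LOG_LEVEL=bogus) degrade to "info" rather than crashing the run.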