SISU CLI
Your configurable agent CLI for daily terminal work. Run interactive chat, control tools and sessions, extend with skills, and discover Sisu framework packages when needed. No API key required to get started with Ollama.
Run it right now.
Use npx to try any command instantly, or install globally for the short sisu alias.
Via npx (no install)
$ npx @sisu-ai/cli list tools
$ npx @sisu-ai/cli info vector-vectra
$ npx @sisu-ai/cli create chat-agent my-app
$ npx @sisu-ai/cli install skill
$ npx @sisu-ai/cli chat
After global install
$ npm install -g @sisu-ai/cli
# Then use the short alias:
$ sisu list tools
$ sisu chat
$ sisu --version
Everything you need, one command away.
Use the CLI as your configurable terminal agent: discover framework packages, scaffold agents, manage skills, and run chat with explicit control.
Discover
Explore every maintained Sisu package by category before writing a line of code.
$ sisu list tools
$ sisu list middleware
$ sisu list adapters
$ sisu list skills
$ sisu info mw-rag
Categories: libraries · middleware · tools · adapters · vector · skills · templates
Scaffold
Generate a working starter project from a maintained template in seconds. Three templates ready to go.
# create chat agent
$ sisu create chat-agent my-app
# single-shot CLI agent
$ sisu create cli-agent my-app
# local RAG with Vectra
$ sisu create rag-agent my-app
Install Skills
Install official or custom skills to your project or global profile.
$ sisu install skill
$ sisu install-skill @sisu-ai/skill-debug --project
$ sisu install-skill @sisu-ai/skill-repo-search --global
$ sisu list-official skills
Chat
Interactive AI chat in your terminal. Auto-detects local Ollama models — no API key needed.
$ sisu chat
$ sisu chat --prompt "show me git status"
$ sisu chat --session <session-id>
$ echo "hello" | sisu chat
A proper AI shell for daily work.
sisu chat is a first-class interactive chat mode for terminal-native agent workflows. Start it, and it auto-detects your local Ollama models — no API key, no config, no fuss.
In-chat commands
/provider
Switch AI provider interactively (ollama, openai, anthropic)
/model
Pick a model from an interactive list
/tools
List tools and their enabled state
/skills
List skills and their enabled state
/enable <id>
Enable a capability with session/project/global scope
/sessions
List persisted sessions; resume or delete them
/branch <id>
Fork a new session from any prior message
/search
Search conversation history
/new · /exit
Start a fresh session or close chat
Ctrl+O
Open options menu: new session, switch session, branch, help, exit.
Shift+S
Open settings: switch provider, model, or session.
Shift+Enter
Insert a newline for multiline messages. Ctrl+J as terminal fallback.
↑ / ↓ / Esc
Navigate and close menus. Markdown-aware assistant output rendering.
Policy-gated tool execution.
Every tool call is checked against a policy before execution. High-impact commands require confirmation by default. Denied and completed actions are persisted in session records.
Allowed — Command runs immediately, no prompt.
Confirm — Explicit user approval is required before execution.
Denied — Command is blocked with a reason and logged to the session record.
Deterministic precedence. Settings are resolved in layers, each overriding the one before it:
Built-in defaults
Sensible starting point with balanced tool policy.
Global profile
~/.sisu/chat-profile.json — your personal defaults across all projects.
Project profile
<project>/.sisu/chat-profile.json — overrides global per project.
Session overrides
In-memory updates from interactive /provider, /model, and /enable commands.
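The four layers above can be illustrated with plain shell parameter expansion — a toy model of the assumed merge semantics, not the CLI's actual code:

```shell
# Toy model of profile precedence: the last non-empty layer wins.
default_model="qwen3.5:9b"     # built-in default
global_model=""                # ~/.sisu/chat-profile.json (nothing set here)
project_model="llama3.1"       # <project>/.sisu/chat-profile.json
session_model=""               # in-chat /model override (none this session)

# Session beats project beats global beats built-in defaults:
model=${session_model:-${project_model:-${global_model:-$default_model}}}
echo "$model"
```

Here the project profile sets a model, so it wins over the built-in default; an in-chat /model command would override it for the current session only.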
Example chat profile.
{
  "name": "default",
  "provider": "ollama",
  "model": "qwen3.5:9b",
  "theme": "auto",
  "toolPolicy": {
    "mode": "balanced",
    "requireConfirmationForHighImpact": true,
    "allowCommandPrefixes": ["echo", "ls", "git status", "pnpm test"]
  },
  "capabilities": {
    "tools": { "enabled": ["terminal"], "disabled": [] },
    "skills": {
      "directories": ["./.sisu/skills", "~/.sisu/skills"]
    },
    "middleware": {
      "pipeline": [
        { "id": "error-boundary", "enabled": true },
        { "id": "invariants", "enabled": true },
        { "id": "register-tools", "enabled": true },
        { "id": "tool-calling", "enabled": true },
        { "id": "conversation-buffer", "enabled": true },
        { "id": "skills", "enabled": true }
      ]
    }
  }
}
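For illustration, here is a rough sketch of how a prefix allow-list like allowCommandPrefixes could be evaluated — an assumption about the semantics, not the CLI's actual implementation:

```shell
# Hypothetical prefix matcher: a command is allowed if it equals a
# configured prefix exactly, or starts with that prefix plus a space.
is_allowed() {
  cmd=$1
  for prefix in "echo" "ls" "git status" "pnpm test"; do
    case "$cmd" in
      "$prefix" | "$prefix "*) return 0 ;;
    esac
  done
  return 1
}

is_allowed "git status --short" && echo "allowed"
is_allowed "rm -rf tmp" || echo "needs confirmation"
```

Requiring an exact match or a trailing space keeps "ls" from accidentally allowing "lsof", which is why the sketch matches on "$prefix " rather than a bare substring.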
Provider notes
ollama — no API key, auto-detects local models.
openai — set OPENAI_API_KEY.
anthropic — set ANTHROPIC_API_KEY.
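For the hosted providers, export the key in your shell before launching chat. The values below are placeholders for illustration; substitute your real credentials:

```shell
# Placeholder keys for illustration only — use your real credentials.
export OPENAI_API_KEY="sk-your-key-here"
export ANTHROPIC_API_KEY="sk-ant-your-key-here"
```

With the variables set, start sisu chat and switch providers with the in-chat /provider command.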
Ollama defaults
When no provider is configured, the CLI auto-detects local Ollama models.
Preferred: qwen3.5:9b → llama3.1 → llama4 → qwen3.5:0.8b.
Open config
Edit profiles directly in your $EDITOR from inside chat.
Every conversation saved.
Resume anytime.
Sessions are persisted locally — messages, run state, tool lifecycle records, events. Resume a session, branch from any message, or search your history.
- ✓ Deterministic restart and resume behavior
- ✓ Session search and retrieval with /search
- ✓ Branch-from-message lineage workflows with /branch
- ✓ Persist command allow-list by scope: session, project, global
# Resume a known session
$ sisu chat --session abc123
# Inside chat:
/sessions
→ abc123 2 hours ago
→ def456 yesterday
/resume def456
/branch msg-789
/search "git deploy"
/delete-session def456
Extend the CLI with skills.
Skills are reusable agent capabilities. Install official @sisu-ai/* skills or load your own from any directory.
skill-code-review
AI-powered code review with inline comments and suggestions.
skill-debug
Structured debugging: reproduce, trace, hypothesize, fix.
skill-deploy
Guided deployment workflows for common cloud targets.
skill-explain
Explain any piece of code, error, or concept in plain language.
skill-repo-search
Semantic search across your codebase using local embeddings.
skill-test-gen
Generate unit and integration tests from function signatures.
# Install to project
$ sisu install-skill @sisu-ai/skill-debug --project
# Install globally
$ sisu install-skill @sisu-ai/skill-repo-search --global
# Install from local path
$ sisu install-skill ./my-custom-skill --dir ~/.sisu/skills
# Enforce official @sisu-ai/* namespace
$ sisu install-skill @sisu-ai/skill-debug --project --official