
How AI Works in TreeOS

Three zones. Per-node control. Replaceable everything.

The AI in TreeOS is not one thing. It is a stack of layers, each customizable without touching the one below it. Users control what the AI can do at every node. Builders create new AI modes and tools. Developers replace the entire conversation flow.

Three Zones

Where you are determines what the AI can do. There is no mode menu. No settings panel. You navigate, and the AI adapts. cd / and the AI becomes a system operator. cd ~/ and it becomes your personal assistant. cd MyTree and it works the tree with you. The tools, the context, the behavior, all change automatically.

Land /

The root of everything. Here the AI manages the land itself: install extensions, configure settings, monitor users and peers, run diagnostics. With the shell extension, it can execute server commands. This is your operations center. God-tier access required.

Home ~

Your personal space. The AI helps you organize raw ideas, review notes across all your trees, browse your chat history, and reflect on contributions. It knows your context across the whole land without being inside any specific tree.

Tree /MyTree

Inside a tree, the AI operates through three strict contracts: Chat reads and writes (full conversation). Place adds content silently (no response). Query reads only (changes nothing). The orchestrator classifies your intent and routes accordingly.
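The three contracts can be pictured as a small dispatch table. This is an illustrative sketch, not the real orchestrator (which classifies intent with an LLM); the handler shapes are assumptions:

```javascript
// Hypothetical sketch of the three tree contracts.
// The contract names (chat/place/query) come from TreeOS;
// the handler signatures are illustrative only.
const contracts = {
  chat:  (tree, msg) => { tree.push(msg); return `reply to: ${msg}`; }, // reads and writes
  place: (tree, msg) => { tree.push(msg); return null; },               // writes silently
  query: (tree, msg) => tree.length,                                    // reads only
};

function route(intent, tree, msg) {
  const handler = contracts[intent];
  if (!handler) throw new Error(`unknown intent: ${intent}`);
  return handler(tree, msg);
}

const tree = [];
const reply = route("chat", tree, "hello"); // full conversation
route("place", tree, "a note");             // no response
const count = route("query", tree, "");     // changes nothing; count === 2
```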

Per-Node Customization

This is the most powerful feature in the kernel and the one most people will miss. Every single node in your tree can have different AI capabilities. Not per-tree. Per-node. Different branches, different tools, different thinking, same tree.

Tools: What AI Can Do

Each node inherits tools from its parent. Add tools to specific branches. Block tools on others. A DevOps branch gets shell access. An archive branch loses delete. A training branch gets only read tools.

tools-allow execute-shell
tools-block delete-node-branch
tools # see effective tools

Modes: How AI Thinks

Override which AI mode handles each intent at any node. A research branch uses a formal academic mode. A journal branch uses a reflective mode. A creative branch uses freeform. Same tree, different personalities.

mode-set respond custom:formal
mode-set navigate custom:guided
modes # see overrides

Extensions: What Capabilities Exist

Block entire extensions at any node. A knowledge tree blocks Solana, scripts, and shell. A shared tree blocks dangerous extensions at the root so contributors can't use them. Blocked extensions lose their hooks, tools, modes, and metadata writes at that node and all children. Navigate somewhere and the world reshapes.

ext-block solana scripts shell
ext-scope # see what's active here
ext-scope -t # tree-wide block map

All three layers inherit parent to child. Set a block on the root and every node in the tree inherits it. Override on a branch and only that branch changes. The kernel walks from the current node up to the root, merging at each level. One tree can have a branch with shell access, another that's read-only, and another that can't even see certain extensions exist. No code changes. Just metadata.
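The merge walk can be sketched in a few lines. This is a simplified model of the idea, with assumed node and field names; the kernel's actual merge logic may differ:

```javascript
// Simplified sketch of per-node inheritance: collect the chain
// from the current node to the root, apply root first, and let
// each closer node's allow/block settings override.
function effectiveConfig(node) {
  const chain = [];
  for (let n = node; n; n = n.parent) chain.push(n);
  const tools = new Set();
  for (const n of chain.reverse()) {
    for (const t of n.allow ?? []) tools.add(t);
    for (const t of n.block ?? []) tools.delete(t);
  }
  return [...tools];
}

const root    = { allow: ["read", "write", "delete"] };
const archive = { parent: root, block: ["delete"] };      // archive loses delete
const devops  = { parent: root, allow: ["execute-shell"] }; // devops gains shell

effectiveConfig(archive); // ["read", "write"]
effectiveConfig(devops);  // ["read", "write", "delete", "execute-shell"]
```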

The AI Stack

Five layers. Each one is customizable independently. Most people use the defaults. Power users adjust the top layers. Developers replace the bottom ones.

1. Per-Node Config (no code)

Tools, modes, extensions, and timeouts set through metadata on any node. CLI commands or API calls. Block an extension at the root and it disappears from the entire tree. Tools, modes, hooks, metadata writes, all filtered by position. The most accessible layer.

2. Custom Modes (extension)

Build a new AI mode with its own system prompt, its own tool set, and its own behavior. Register it during your extension's init(). Now any node on any tree can use it via mode-set. A mode is a personality for the AI at a specific point in its workflow.
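A registration might look like the sketch below. The core.modes API shape is an assumption (only init() registration and mode-set are documented here), and the kernel stand-in exists purely to make the example self-contained:

```javascript
// Hypothetical sketch: a custom mode registered during an
// extension's init(). The core.modes API shape is assumed;
// the mode name matches the mode-set example above.
function init(core) {
  core.modes.register("custom:formal", {
    systemPrompt: "You are a formal academic assistant.",
    tools: ["read-node", "search-tree"], // the mode's own tool set
  });
}

// Minimal stand-in for the kernel, for illustration only.
const core = {
  modes: {
    registry: new Map(),
    register(name, mode) { this.registry.set(name, mode); },
  },
};
init(core);
// Nodes can now opt in: mode-set respond custom:formal
```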

3. Custom Tools (extension)

Register MCP tools that the AI can call. A web scraper. A code executor. A database query. A physical device controller. Any function your extension provides becomes a capability the AI can use. Tree owners control which nodes have access via per-node tool config.
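A tool registration might look like this sketch. The core.tools API shape and the tool itself are assumptions for illustration; in practice the handler would usually be async:

```javascript
// Hypothetical sketch: an extension registers an MCP-style tool
// during init(). The core.tools API shape is an assumption.
function init(core) {
  core.tools.register({
    name: "count-words",
    description: "Count the words in a piece of text",
    handler: ({ text }) => text.trim().split(/\s+/).filter(Boolean).length,
  });
}

// Minimal kernel stand-in, for illustration only.
const core = {
  tools: {
    registry: new Map(),
    register(tool) { this.registry.set(tool.name, tool); },
  },
};
init(core);
const n = core.tools.registry.get("count-words").handler({ text: "hello tree os" });
// n === 3; owners still gate access per node: tools-allow count-words
```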

4. Custom Orchestrator (extension)

Replace the entire conversation flow. The built-in tree-orchestrator does: classify intent, plan, navigate, execute, respond. Your orchestrator can do anything. Multi-agent debate. Parallel research. Code review pipeline. The kernel just dispatches to whatever orchestrator is registered for the zone. One extension. One init() call. The whole AI changes.

Building a Custom Orchestrator

This is the most ambitious thing you can build on TreeOS. A custom orchestrator replaces how the AI thinks about and responds to every message in a zone. The built-in tree-orchestrator is 2500 lines of intent classification, navigation, planning, and execution. Yours can be 50 lines or 50,000. The kernel does not care.

my-orchestrator/index.js
// Register your orchestrator for the tree zone
export async function init(core) {
  core.orchestrators.register("tree", {
    async handle({ message, userId, rootId, ... }) {
      // Your entire AI flow goes here
      // Use core.llm.processMessage() for LLM calls
      // Use core.llm.switchMode() to change modes
      // Use OrchestratorRuntime for chain tracking
      return { answer: "response" };
    }
  });
}

Install the extension. Restart. Every chat, place, and query in every tree now goes through your orchestrator. The built-in one is just the default: install yours and it takes over; uninstall yours and the kernel falls back to the built-in. Hot-swappable AI brains.

Two Ways to Talk to AI

Whether you are building a chat interface or a background pipeline, the kernel gives you one function call. No MCP wiring. No session management. No cleanup code.

runChat

Single message, persistent session. For user-facing conversations in any mode. Same tree keeps the same conversation. Switch trees, start fresh.

const { answer } = await core.llm.runChat({
  userId, username, message,
  mode: "tree:structure",
  res, // auto-abort on disconnect
});

OrchestratorRuntime

Multi-step pipeline with managed lifecycle. For background jobs: dream cycles, understanding runs, cleanup passes. Each step switches mode, calls the LLM, and tracks the chain.

const rt = new OrchestratorRuntime({ ... });
await rt.init("Starting pipeline");
const { parsed } = await rt.runStep(mode, {
  prompt: "Analyze this tree"
});
await rt.cleanup();

Reliability Built In

LLM Failover

Stack backup LLM connections. When your primary hits a rate limit or goes down, the kernel tries the next one automatically.
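The failover idea can be sketched as a simple ordered fallthrough. This is an illustrative model, not the kernel's implementation; the connection objects are stand-ins:

```javascript
// Sketch of LLM failover: try each configured connection in
// order until one succeeds. Connection shape is illustrative.
function callWithFailover(connections, prompt) {
  let lastError;
  for (const conn of connections) {
    try {
      return conn.complete(prompt); // first healthy backend wins
    } catch (err) {
      lastError = err; // rate limit, outage, etc.: fall through
    }
  }
  throw lastError ?? new Error("no LLM connections configured");
}

const primary = { complete: () => { throw new Error("429 rate limited"); } };
const backup  = { complete: (p) => `echo: ${p}` };
const answer = callWithFailover([primary, backup], "hi"); // "echo: hi"
```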

Model Agnostic

Any OpenAI-compatible endpoint. Ollama, OpenRouter, Together, Anthropic, local models. Per-tree and per-mode assignments.

Configurable

11 kernel tunables from land config. Timeouts, retries, context window, tool iterations, session limits. No code changes needed.

8 Lifecycle Hooks

Extensions modify kernel behavior without touching kernel code. Register a handler. The kernel fires it at the right moment. Before hooks can cancel operations. After hooks react. Enrich hooks inject data into AI context.

beforeNote: Modify note data before save
afterNote: React after note create/edit/delete
beforeContribution: Tag audit log entries
afterNodeCreate: Initialize extension data
beforeStatusChange: Validate or intercept
afterStatusChange: React to status changes
beforeNodeDelete: Clean up extension data
enrichContext: Inject data into AI context

Extensions can also fire their own hooks. The hook system is an open bus. Any hook name is valid. core.hooks.run("my-ext:afterProcess", data) and other extensions can listen.
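The open bus can be sketched in a few lines. This models the idea only; the kernel's core.hooks is richer (before-hook cancellation, enrich results, and so on):

```javascript
// Minimal sketch of an open hook bus: any hook name is valid,
// listeners register by name, run() fires them in order.
class HookBus {
  constructor() { this.handlers = new Map(); }
  on(name, fn) {
    if (!this.handlers.has(name)) this.handlers.set(name, []);
    this.handlers.get(name).push(fn);
  }
  run(name, data) {
    for (const fn of this.handlers.get(name) ?? []) fn(data);
  }
}

const hooks = new HookBus();
const seen = [];
// Another extension listens for a custom hook name...
hooks.on("my-ext:afterProcess", (d) => seen.push(d.id));
// ...and the owning extension fires it.
hooks.run("my-ext:afterProcess", { id: 42 });
// seen is now [42]
```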

The kernel handles the plumbing. You build the intelligence.

MCP connections, session persistence, AIChat tracking, abort handling, chain indexing, tool resolution, mode switching, hook firing, LLM failover. All automatic. Your extension calls one function and the rest happens.
