Three zones. Per-node control. Replaceable everything.
The AI in TreeOS is not one thing. It is a stack of layers, each customizable without touching the one below it. Users control what the AI can do at every node. Builders create new AI modes and tools. Developers replace the entire conversation flow.
Where you are determines what the AI can do. There is no mode menu. No settings panel. You navigate, and the AI adapts. cd / and the AI becomes a system operator. cd ~/ and it becomes your personal assistant. cd MyTree and it works the tree with you. The tools, the context, the behavior, all change automatically.
/: The root of everything. Here the AI manages the land itself: install extensions, configure settings, monitor users and peers, run diagnostics. With the shell extension, it can execute server commands. This is your operations center. God-tier access required.
~: Your personal space. The AI helps you organize raw ideas, review notes across all your trees, browse your chat history, and reflect on contributions. It knows your context across the whole land without being inside any specific tree.
/MyTree: Inside a tree, the AI operates through three strict contracts: Chat reads and writes (full conversation). Place adds content silently (no response). Query reads only (changes nothing). The orchestrator classifies your intent and routes accordingly.
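The three contracts can be sketched as a tiny router. This is a minimal illustration, not the kernel's code: the types, the `handle` and `classify` names, and the prefix-based classifier are all hypothetical stand-ins for the orchestrator's intent step.

```typescript
// Hypothetical sketch of the three in-tree contracts.
type Intent = "chat" | "place" | "query";

interface Tree { notes: string[]; }

// Each contract gets a different capability surface.
function handle(intent: Intent, tree: Tree, message: string): string | null {
  switch (intent) {
    case "chat":   // reads and writes, returns a full response
      tree.notes.push(message);
      return `re: ${message} (saw ${tree.notes.length} notes)`;
    case "place":  // writes silently, no response
      tree.notes.push(message);
      return null;
    case "query":  // read-only, changes nothing
      return `found ${tree.notes.length} notes`;
  }
}

// A toy classifier standing in for the real intent classification.
function classify(message: string): Intent {
  if (message.startsWith("?")) return "query";
  if (message.startsWith("+")) return "place";
  return "chat";
}
```

The point of the contract split is that the kernel can enforce it: a query handler never receives write access, so a misclassified message cannot mutate the tree.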
This is the most powerful feature in the kernel and the one most people will miss. Every single node in your tree can have different AI capabilities. Not per-tree. Per-node. Different branches, different tools, different thinking, same tree.
Each node inherits tools from its parent. Add tools to specific branches. Block tools on others. A DevOps branch gets shell access. An archive branch loses delete. A training branch gets only read tools.
Override which AI mode handles each intent at any node. A research branch uses a formal academic mode. A journal branch uses a reflective mode. A creative branch uses freeform. Same tree, different personalities.
Block entire extensions at any node. A knowledge tree blocks Solana, scripts, and shell. A shared tree blocks dangerous extensions at the root so contributors can't use them. Blocked extensions lose their hooks, tools, modes, and metadata writes at that node and all children. Navigate somewhere and the world reshapes.
All three layers inherit parent to child. Set a block on the root and every node in the tree inherits it. Override on a branch and only that branch changes. The kernel walks from the current node up to the root, merging at each level. One tree can have a branch with shell access, another that's read-only, and another that can't even see certain extensions exist. No code changes. Just metadata.
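The merge walk described above can be sketched in a few lines. Everything here is a hypothetical shape, not the kernel's actual data model: the `Node` interface, the `addTools`/`blockTools` metadata keys, and `resolveTools` are illustrative names for the walk-to-root-and-merge behavior.

```typescript
// Hypothetical sketch of per-node capability resolution.
interface TreeNode {
  parent: TreeNode | null;
  meta?: { addTools?: string[]; blockTools?: string[] };
}

// Walk from the current node up to the root, then merge root-first so
// children inherit the parent's result and can add or block on top of it.
function resolveTools(node: TreeNode, base: string[]): Set<string> {
  const chain: TreeNode[] = [];
  for (let n: TreeNode | null = node; n; n = n.parent) chain.push(n);
  const tools = new Set(base);
  for (const n of chain.reverse()) {
    for (const t of n.meta?.addTools ?? []) tools.add(t);
    for (const t of n.meta?.blockTools ?? []) tools.delete(t);
  }
  return tools;
}

// Example tree from the text: shell added on a DevOps branch,
// delete blocked on an archive branch, same tree.
const root: TreeNode = { parent: null };
const devops: TreeNode = { parent: root, meta: { addTools: ["shell"] } };
const archive: TreeNode = { parent: root, meta: { blockTools: ["delete"] } };
```

Because the merge is root-first, a block set at the root reaches every descendant, while a branch-level override touches only that subtree.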
Five layers. Each one is customizable independently. Most people use the defaults. Power users adjust the top layers. Developers replace the bottom ones.
Tools, modes, extensions, and timeouts are set through metadata on any node, via CLI commands or API calls. Block an extension at the root and it disappears from the entire tree. Tools, modes, hooks, metadata writes, all filtered by position. The most accessible layer.
Build a new AI mode with its own system prompt, its own tool set, and its own behavior. Register it during your extension's init(). Now any node on any tree can use it via mode-set. A mode is a personality for the AI at a specific point in its workflow.
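A mode registration might look something like this. The registry shape, `registerMode`, and the `academic` example are all hypothetical; only the init()-time registration and the per-node mode-set override come from the text.

```typescript
// Hypothetical sketch of mode registration during an extension's init().
interface Mode { systemPrompt: string; tools: string[]; }

const modes = new Map<string, Mode>();      // stands in for the kernel registry
const nodeMode = new Map<string, string>(); // per-node mode-set overrides

function registerMode(name: string, mode: Mode): void { modes.set(name, mode); }

// What an extension might do in init():
function init(): void {
  registerMode("academic", {
    systemPrompt: "Answer formally, cite sources.",
    tools: ["search", "cite"],
  });
}

// Resolve which personality handles a node: override if set, else fallback.
function modeFor(nodeId: string, fallback = "default"): string {
  return nodeMode.get(nodeId) ?? fallback;
}

init();
nodeMode.set("research-branch", "academic"); // mode-set on one branch
```

Same tree, different personalities: the research branch resolves to the academic mode while every other node keeps the default.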
Register MCP tools that the AI can call. A web scraper. A code executor. A database query. A physical device controller. Any function your extension provides becomes a capability the AI can use. Tree owners control which nodes have access via per-node tool config.
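In sketch form, a tool is just a named function the kernel can dispatch to, gated by the node's tool config. The `registerTool`/`callTool` names and the word-count example are invented for illustration; the real registration goes through MCP.

```typescript
// Hypothetical sketch of an extension function exposed as an AI-callable tool.
type ToolFn = (args: Record<string, unknown>) => string;

const tools = new Map<string, ToolFn>();

function registerTool(name: string, fn: ToolFn): void { tools.set(name, fn); }

// An extension registers a capability...
registerTool("word-count", (args) => String(String(args.text).split(/\s+/).length));

// ...and the kernel invokes it when the AI emits a tool call,
// after checking the node's per-node tool config.
function callTool(name: string, args: Record<string, unknown>, allowed: Set<string>): string {
  if (!allowed.has(name)) throw new Error(`tool ${name} blocked at this node`);
  const fn = tools.get(name);
  if (!fn) throw new Error(`unknown tool ${name}`);
  return fn(args);
}
```

The access check sits in the dispatch path, so an extension cannot grant itself capability on a node whose owner blocked it.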
Replace the entire conversation flow. The built-in tree-orchestrator does: classify intent, plan, navigate, execute, respond. Your orchestrator can do anything. Multi-agent debate. Parallel research. Code review pipeline. The kernel just dispatches to whatever orchestrator is registered for the zone. One extension. One init() call. The whole AI changes.
This is the most ambitious thing you can build on TreeOS. A custom orchestrator replaces how the AI thinks about and responds to every message in a zone. The built-in tree-orchestrator is 2500 lines of intent classification, navigation, planning, and execution. Yours can be 50 lines or 50,000. The kernel does not care.
Install the extension. Restart. Every chat, place, and query in every tree now goes through your orchestrator. The built-in one is just the default: uninstall your extension and the kernel falls back to it; reinstall yours and it takes over again. Hot-swappable AI brains.
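The dispatch-with-fallback behavior can be sketched in a dozen lines. The registry shape and the `dispatch` name are assumptions; the point is only that the kernel routes to whatever is registered for the zone and falls back to the built-in otherwise.

```typescript
// Hypothetical sketch of hot-swappable orchestrators per zone.
type Orchestrator = (message: string) => string;

const builtIn: Orchestrator = (m) => `tree-orchestrator: ${m}`;
const registry = new Map<string, Orchestrator>();

// The kernel just dispatches to whatever is registered for the zone,
// falling back to the built-in default when nothing is.
function dispatch(zone: string, message: string): string {
  return (registry.get(zone) ?? builtIn)(message);
}

// A custom extension replaces the whole flow for one zone in one call,
// e.g. a multi-agent debate orchestrator (illustrative only).
registry.set("tree", (m) => `debate-orchestrator: ${m}`);
```

Removing the registration restores the default with no other change, which is what makes the swap safe to try.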
Whether you are building a chat interface or a background pipeline, the kernel gives you one function call. No MCP wiring. No session management. No cleanup code.
Single message, persistent session. For user-facing conversations in any mode. Same tree keeps the same conversation. Switch trees, start fresh.
Multi-step pipeline with managed lifecycle. For background jobs: dream cycles, understanding runs, cleanup passes. Each step switches mode, calls the LLM, and tracks the chain.
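The two entry points above can be sketched as follows. `runChat` and `runPipeline` are invented names, and the LLM call is stubbed out; the behavior being illustrated is session persistence keyed by tree and a mode-switching chain with step tracking.

```typescript
// Hypothetical sketch of the two kernel entry points.
const sessions = new Map<string, string[]>(); // per-tree conversation history

// Single message, persistent session keyed by tree:
// same tree keeps the same conversation, switching trees starts fresh.
function runChat(tree: string, message: string): number {
  const history = sessions.get(tree) ?? [];
  history.push(message);
  sessions.set(tree, history);
  return history.length; // turn count in this tree's conversation
}

// Multi-step pipeline with a managed lifecycle: each step switches mode,
// calls the LLM (stubbed here), and tracks its place in the chain.
function runPipeline(steps: { mode: string }[]): string[] {
  return steps.map((s, i) => `step ${i + 1}/${steps.length} in mode ${s.mode}`);
}
```

A background job like a dream cycle would call `runPipeline` with its sequence of modes and let the kernel handle the chain indexing.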
Stack backup LLM connections. When your primary hits a rate limit or goes down, the kernel tries the next one automatically.
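The failover stack reduces to trying each connection in order. This is a minimal sketch under the assumption that a connection throws on rate limits or outages; the real connections would be OpenAI-compatible HTTP clients.

```typescript
// Hypothetical sketch of stacked LLM fallbacks.
type LLM = (prompt: string) => string; // throws on rate limit or outage

function withFailover(connections: LLM[]): LLM {
  return (prompt) => {
    let lastError: unknown;
    for (const llm of connections) {
      try {
        return llm(prompt);          // first healthy connection wins
      } catch (e) {
        lastError = e;               // rate-limited or down: try the next
      }
    }
    throw lastError;                 // every connection in the stack failed
  };
}

// Illustrative stack: a primary that is rate-limited and a working backup.
const primary: LLM = () => { throw new Error("429 rate limited"); };
const backup: LLM = (p) => `backup says: ${p}`;
const llm = withFailover([primary, backup]);
```

Callers see a single connection; the retry ordering lives entirely inside the stack.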
Any OpenAI-compatible endpoint. Ollama, OpenRouter, Together, Anthropic, local models. Per-tree and per-mode assignments.
11 kernel tunables from land config. Timeouts, retries, context window, tool iterations, session limits. No code changes needed.
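Tunables resolve as defaults overlaid by land config. The three keys below are illustrative guesses, not the kernel's actual 11; the sketch shows only the merge behavior.

```typescript
// Hypothetical sketch of kernel tunables overridden by land config.
interface Tunables { timeoutMs: number; retries: number; maxToolIterations: number; }

const defaults: Tunables = { timeoutMs: 30000, retries: 2, maxToolIterations: 10 };

// Land config supplies only the keys it wants to change; no code changes.
function tunables(landConfig: Partial<Tunables>): Tunables {
  return { ...defaults, ...landConfig };
}
```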
Extensions modify kernel behavior without touching kernel code. Register a handler. The kernel fires it at the right moment. Before hooks can cancel operations. After hooks react. Enrich hooks inject data into AI context.
beforeNote: Modify note data before save
afterNote: React after note create/edit/delete
beforeContribution: Tag audit log entries
afterNodeCreate: Initialize extension data
beforeStatusChange: Validate or intercept
afterStatusChange: React to status changes
beforeNodeDelete: Clean up extension data
enrichContext: Inject data into AI context
Extensions can also fire their own hooks. The hook system is an open bus. Any hook name is valid. Call core.hooks.run("my-ext:afterProcess", data) and other extensions can listen.
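The open bus behavior can be sketched with a listener map. The `on`/`run` internals and the return-false cancellation convention are assumptions made for the sketch; only the hook names and the `core.hooks.run` call style come from the text.

```typescript
// Hypothetical sketch of the open hook bus.
type Handler = (data: Record<string, unknown>) => boolean | void;

const listeners = new Map<string, Handler[]>();

function on(hook: string, fn: Handler): void {
  listeners.set(hook, [...(listeners.get(hook) ?? []), fn]);
}

// Any hook name is valid. Before-hooks can cancel the operation
// (modeled here as a handler returning false).
function run(hook: string, data: Record<string, unknown>): boolean {
  for (const fn of listeners.get(hook) ?? []) {
    if (fn(data) === false) return false; // cancelled
  }
  return true;
}

// One extension fires its own hook; another listens.
const seen: string[] = [];
on("my-ext:afterProcess", (d) => { seen.push(String(d.id)); });
on("beforeNodeDelete", () => false); // a guard that blocks deletes
```

Because the bus is open, two extensions can coordinate through a custom hook name without either one depending on kernel changes.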
The kernel handles the plumbing. You build the intelligence.
MCP connections, session persistence, AIChat tracking, abort handling, chain indexing, tool resolution, mode switching, hook firing, LLM failover. All automatic. Your extension calls one function and the rest happens.