🧠

The AI

How the tree thinks.

An intent routing system whose most natural developer interface is a linguistic grammar that unifies and clarifies the underlying architecture.

A conversation loop in the seed that resolves which LLM to call, which tools to provide, which mode to think in, and which position context to inject. All based on where you are in the tree.

Human thought is structured like a tree. Language reflects that structure. So if you build a system as a tree, you can use language directly as the interface.

Most AI systems ask "how do I get the AI to do the right thing?" TreeOS asks "how do I build an environment where the right thing is the only thing the AI can say?" The tools, the context, and the constraints change based on where you are. The structure constrains the output before the AI speaks. Position gives the AI genuine situational awareness, not through one giant prompt, but through architecture that mirrors how concepts actually relate to each other.

Three Zones

Where you are determines what the AI can do. There is no mode menu. No settings panel. You navigate, and the AI adapts. cd / and the AI becomes a system operator. cd ~/ and it becomes your personal assistant. cd MyTree and it works the tree with you. The tools, the context, the behavior, all change automatically.

Land /

The root of everything. Here the AI manages the land itself: install extensions, configure settings, monitor users and peers, run diagnostics. With the shell extension, it can execute server commands. This is your operations center. Admin access required.

Home ~

Your personal space. The AI helps you organize raw ideas, review notes across all your trees, browse your chat history, and reflect on contributions. It knows your context across the whole land without being inside any specific tree.

Tree /MyTree

Inside a tree, the AI operates through three strict contracts: Chat reads and writes (full conversation). Place adds content silently (no response). Query reads only (changes nothing). The orchestrator classifies your intent and routes accordingly.

Per-Node Customization

This is the most powerful feature in the kernel and the one most people will miss. Every single node in your tree can have different AI capabilities. Not per-tree. Per-node. Different branches, different tools, different thinking, same tree.

Tools: What AI Can Do

Each node inherits tools from its parent. Add tools to specific branches. Block tools on others. A DevOps branch gets shell access. An archive branch loses delete. A training branch gets only read tools.

tools-allow execute-shell
tools-block delete-node-branch
tools # see effective tools

Modes: How AI Thinks

Override which AI mode handles each intent at any node. A research branch uses a formal academic mode. A journal branch uses a reflective mode. A creative branch uses freeform. Same tree, different personalities.

mode-set respond custom:formal
mode-set navigate custom:guided
modes # see overrides

Extensions: What Capabilities Exist

Block entire extensions at any node. A knowledge tree blocks Solana, scripts, and shell. A shared tree blocks dangerous extensions at the root so contributors can't use them. Blocked extensions lose their hooks, tools, modes, and metadata writes at that node and all children. Navigate somewhere and the world reshapes.

ext-block solana scripts shell
ext-scope # see what's active here
ext-scope -t # tree-wide block map

All three layers inherit from parent to child. Set a block on the root and every node in the tree inherits it. Override on a branch and only that branch changes. The kernel walks from the current node up to the root, merging at each level. One tree can have a branch with shell access, another that's read-only, and another that can't even see certain extensions exist. No code changes. Just metadata.
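The walk-and-merge can be sketched as a pure function. This is an illustrative sketch, not the kernel's actual code; the node shape and field names (parent, toolsAllow, toolsBlock, extBlock) are assumptions:

```javascript
// Sketch of per-node config resolution: collect the chain from the current
// node up to the root, then apply each level's overrides root-first so
// deeper nodes win. Field names are illustrative assumptions.
function effectiveConfig(node) {
  const chain = [];
  for (let n = node; n; n = n.parent) chain.push(n); // current -> root
  const tools = new Set();
  const blockedExts = new Set();
  for (const n of chain.reverse()) {                 // root -> current
    for (const t of n.toolsAllow ?? []) tools.add(t);
    for (const t of n.toolsBlock ?? []) tools.delete(t);
    for (const e of n.extBlock ?? []) blockedExts.add(e);
  }
  return { tools: [...tools], blockedExts: [...blockedExts] };
}
```

With a root that allows read tools and blocks shell, a DevOps child that adds execute-shell sees both; a sibling that blocks a tool loses only that tool.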

The AI Stack

Five layers. Each one is customizable independently. Most people use the defaults. Power users adjust the top layers. Developers replace the bottom ones.

1. Per-Node Config (no code)

Tools, modes, extensions, and timeouts set through metadata on any node. CLI commands or API calls. Block an extension at the root and it disappears from the entire tree. Tools, modes, hooks, metadata writes, all filtered by position. The most accessible layer.

2. Custom Modes (extension)

Build a new AI mode with its own system prompt, its own tool set, and its own behavior. Register it during your extension's init(). Now any node on any tree can use it via mode-set. A mode is a personality for the AI at a specific point in its workflow.
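A registration might look like the following. The API name (core.modes.register) and the mode fields are assumptions for illustration, not the seed's documented signature; a tiny in-memory registry stands in for the kernel so the sketch runs on its own:

```javascript
// Minimal stand-in for the kernel's mode registry (assumption).
function makeModeRegistry() {
  const modes = new Map();
  return {
    register: (name, mode) => modes.set(name, mode),
    get: (name) => modes.get(name),
  };
}

// Hypothetical extension init() registering a formal academic mode.
async function init(core) {
  core.modes.register("custom:formal", {
    systemPrompt: "You are a formal academic assistant. Cite sources.",
    tools: ["read-node", "search-notes"], // restrict the mode's tool set
    temperature: 0.2,
  });
}

const core = { modes: makeModeRegistry() };
init(core); // registration is synchronous inside init, so no await needed here
```

Once registered, any node could point at it with mode-set respond custom:formal.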

3. Custom Tools (extension)

Register MCP tools that the AI can call. A web scraper. A code executor. A database query. A physical device controller. Any function your extension provides becomes a capability the AI can use. Tree owners control which nodes have access via per-node tool config.
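An MCP-style tool descriptor might look like this. The registration call (core.tools.register), the descriptor fields, and the web-fetch example are all assumptions for illustration; the handler is stubbed so the sketch is self-contained:

```javascript
// Hypothetical tool an extension might expose to the AI.
const webFetchTool = {
  name: "web-fetch",
  description: "Fetch a URL and return its text content",
  inputSchema: {
    type: "object",
    properties: { url: { type: "string" } },
    required: ["url"],
  },
  // Stubbed handler so the sketch runs without network access.
  async execute({ url }) {
    return { content: `fetched: ${url}` };
  },
};

// Hypothetical extension init() registering the tool with the kernel.
function init(core) {
  core.tools.register(webFetchTool);
}
```

Tree owners would then gate it per node with tools-allow web-fetch.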

4. Custom Orchestrator (extension)

Replace the entire conversation flow. The built-in tree-orchestrator does: classify intent, plan, navigate, execute, respond. Your orchestrator can do anything. Multi-agent debate. Parallel research. Code review pipeline. The kernel just dispatches to whatever orchestrator is registered for the zone. One extension. One init() call. The whole AI changes.

Building a Custom Orchestrator

This is the most ambitious thing you can build on the seed. A custom orchestrator replaces how the AI thinks about and responds to every message in a zone. The built-in tree-orchestrator is itself an extension, 2500 lines of intent classification, navigation, planning, and execution. Yours can be 50 lines or 50,000. The kernel does not care.

my-orchestrator/index.js
// Register your orchestrator for the tree zone
export async function init(core) {
  core.orchestrators.register("tree", {
    async handle({ message, userId, rootId, ...rest }) {
      // Your entire AI flow goes here
      // Use core.llm.processMessage() for LLM calls
      // Use core.llm.switchMode() to change modes
      // Use OrchestratorRuntime for chain tracking
      return { answer: "response" };
    }
  });
}

Install the extension. Restart. Every chat, place, and query in every tree now goes through your orchestrator. The built-in one is just the default. Uninstall it and yours takes over. Reinstall it and the kernel falls back. Hot-swappable AI brains.

Two Primitives

Whether you are building a chat interface or a background job, the kernel gives you one function call. No MCP wiring. No session management. No hook firing. No cleanup code. Pick the primitive that matches your need.

runChat

One LLM call, one explicit mode. For background work in extensions: summarize, classify, enrich. The caller knows exactly which mode to use and just wants an answer.

const { answer } = await core.llm.runChat({
  userId, username, message,
  mode: "tree:structure",
  llmPriority: BACKGROUND,
});

runOrchestration

Full chat flow. Classification, routing, mode chains, tool loops. For anything where a real user is waiting. Used by every HTTP route, the websocket handler, and gateway extensions. One function. Every entry point.

const result = await core.llm.runOrchestration({
  zone: "tree", userId, message,
  rootId, currentNodeId,
  res, // auto-abort on disconnect
});

One Function in the Middle

Every chat path on the system converges on the same kernel function. CLI, dashboard, gateway extensions, scheduled jobs. Different transports, different wire formats, same pipeline. Three layers, blind to each other.

CLI
HTTP
Dashboard
WebSocket
Gateway
Telegram, Email, etc.
↓
Thin transport adapters
validate, auth, translate to and from the wire
↓
runOrchestration
seed/llm/conversation.js
· session identity
· abort handling
· MCP connect
· dispatch
· Chat record
· beforeResponse hook
· enqueue serialize
· cleanup
↓
Orchestrator extension for the zone
tree-orchestrator (built in), or your own. Replaceable.

The kernel owns the pipeline. Extensions own the routing. Transport adapters own the wire format. Each layer is blind to the others. The kernel does not know fitness exists. The extension does not know HTTP exists. The transport does not know modes exist. beforeResponse fires in exactly one place per primitive, never in routes, never in handlers. The middle never changes.

Reliability Built In

Position and Time Injection

Every prompt starts with a [Position] block and ends with the current time in the land's timezone. The AI always knows where it is and when it is. Extension modes cannot exclude either.

[Position]
User: tabor
Tree: My Fitness (abc-123)
Current node: Push Day (xyz-456)

<mode system prompt>

Current time: Thursday, April 9, 2026, 7:23 PM PDT

LLM Failover

Backup LLM connections. Rate limit or outage hits, the kernel tries the next one automatically.

Model Agnostic

Any OpenAI-compatible endpoint. Per-tree and per-mode LLM assignments.

Tool Circuit Breaker

Five failures disable a tool for the rest of the session. The AI adapts. One bad API key doesn't kill the tree.
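The mechanism can be sketched as a small per-session counter. This is an illustrative sketch with an assumed shape, not the kernel's actual breaker:

```javascript
// Sketch of a per-session tool circuit breaker: after five consecutive
// failures a tool is disabled for the session. Threshold and API are
// illustrative assumptions.
class ToolCircuitBreaker {
  constructor(threshold = 5) {
    this.threshold = threshold;
    this.failures = new Map(); // toolName -> consecutive failure count
  }
  isOpen(tool) {
    return (this.failures.get(tool) ?? 0) >= this.threshold;
  }
  recordFailure(tool) {
    this.failures.set(tool, (this.failures.get(tool) ?? 0) + 1);
  }
  recordSuccess(tool) {
    this.failures.delete(tool); // a success before tripping resets the count
  }
}
```

Because the counter is keyed per tool, one failing API disables only itself; every other tool stays live.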

DB Health Check

Before each tool call, the kernel checks the database. If it's down, the AI tells the user instead of retrying blindly.

Ancestor Cache

One shared cache for all resolution chains. Snapshot per message. 120 DB queries become 1.
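The snapshot idea can be sketched as a memoized ancestor walk shared by all resolution chains for one message. A sketch under assumed names; fetchNode stands in for the real DB call:

```javascript
// One cache per message: every resolution chain that needs the same
// ancestor hits the DB once, and the snapshot is discarded afterward.
function makeAncestorCache(fetchNode) {
  const cache = new Map(); // nodeId -> node, valid for one message
  return async function ancestors(nodeId) {
    const chain = [];
    for (let id = nodeId; id; ) {
      if (!cache.has(id)) cache.set(id, await fetchNode(id));
      const node = cache.get(id);
      chain.push(node);
      id = node.parentId;
    }
    return chain;
  };
}
```

Repeated calls within the message reuse the snapshot, which is how many redundant parent lookups collapse into one query per node.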

27 Lifecycle Hooks

Extensions modify kernel behavior without touching kernel code. Register a handler. The kernel fires it at the right moment. Before hooks can cancel. After hooks react. Sequential hooks capture return values. Open bus: any hook name is valid.

beforeNote · Modify note data before save
afterNote · React after note create/edit/delete
beforeNodeCreate · Modify or cancel node creation
afterNodeCreate · Initialize extension data
beforeStatusChange · Validate or intercept
afterStatusChange · React to status changes
beforeNodeDelete · Clean up extension data
beforeContribution · Modify contribution data
enrichContext · Inject data into AI context
beforeLLMCall · Cancel or modify LLM calls
afterLLMCall · React to LLM usage
beforeToolCall · Modify or cancel tool execution
afterToolCall · React to tool results
beforeResponse · Modify AI response before client
beforeRegister · Validate registration (email, etc.)
afterRegister · Initialize user data
afterNavigate · React to tree navigation
afterMetadataWrite · React to metadata changes
afterScopeChange · React to extension scope changes
afterOwnershipChange · React to ownership or contributor changes
afterBoot · One-time setup after everything is ready
afterSessionCreate · React to new sessions
afterSessionEnd · React to ended sessions
onCascade · Handle cascade signals, results to .flow
onDocumentPressure · Document approaching size limit
onTreeTripped · Tree circuit breaker tripped
onTreeRevived · Tripped tree revived

Extensions can also fire their own hooks: core.hooks.run("my-ext:afterProcess", data), and any other extension can listen for it.
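The open-bus idea reduces to a name-keyed handler map. A minimal sketch, assuming a hooks.on / hooks.run shape rather than the kernel's actual API:

```javascript
// Open bus: any hook name is valid; extensions both fire and listen.
function makeHookBus() {
  const handlers = new Map(); // hookName -> [handler, ...]
  return {
    on(name, fn) {
      if (!handlers.has(name)) handlers.set(name, []);
      handlers.get(name).push(fn);
    },
    async run(name, data) {
      const results = [];
      // Sequential firing lets each handler's return value be captured.
      for (const fn of handlers.get(name) ?? []) results.push(await fn(data));
      return results;
    },
  };
}
```

Nothing validates the hook name, which is the point: "my-ext:afterProcess" works the same as a built-in hook.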

The Grammar

The tree doesn't translate natural language into code. It translates natural language into more natural language at a lower level. The user says a sentence. The tree diagrams it. Each part of the diagram maps to an architectural primitive. The execution IS the parse tree.

Nouns = Nodes

Bench Press. Protein. Chapter 3. Things with identity, position, and relationships. They sit in the tree and hold meaning. The routing index is the vocabulary list: which nouns belong to which verb.

Verbs = Extensions

Food tracks. Fitness logs. Recovery reflects. Study teaches. Ways of being at a position. Install an extension and the tree gains a new capability, a new way to act on its nouns.

Tense = Modes

Once the territory is identified, the intent determines the tense. Four conjugations for every verb:

Review (past): "how did I do" "my progress"
Coach (future): "what should I" "help me"
Plan (imperative): "build" "create" "add"
Log (present): "ate eggs" "bench 135x10"

Routing = Parsing

"Ate eggs" contains food nouns. "Bench 135" contains fitness nouns. The classifier hints are vocabulary lists. The routing index maps territory. This noun-space belongs to this verb.

Adjectives = Metadata

135lb. 5x5. Ready for progression. Values, goals, and status describe each noun. The enrichContext hook injects adjectives into the AI's view of a position.

Adverbs = Instructions

"Be concise." "Use kg." "Never suggest meat." They modify how the verb behaves without changing the verb. Food still logs. It just logs concisely. The beforeLLMCall prepend is the adverbial layer.

Prepositions = Tree Structure

Under Health. Next to Food. Above Bench Press. Spatial scoping IS prepositional. ext-block shell UNDER DevOps. ext-allow solana AT Finance. The spatial commands are literally prepositions applied to the tree.

Pronouns = Position

"It" = currentNodeId. "Here" = where you are. "This tree" = rootId. The position system resolves pronouns. When you say "log this" the system knows what "this" means because of where you're standing.

Articles = Existence

"THE bench press" means the routing index found it. It exists. Route to it. "A bench press" means it doesn't exist yet. Sprout activates. Creates it. Definite versus indefinite. Existing versus potential.

Completing the Grammar

The first pass covered eight parts of speech. Standard English has nine, and every sentence the original system couldn't handle was missing one of them. We did not invent new primitives. We finished the set.

Conjunctions = Control Flow

The missing ninth part of speech. Subordinating conjunctions ("if", "when", "unless") become branches. Coordinating conjunctions ("and then", "after that") become chains. The original system only did linear routing. Now it handles "if protein is low, review my meals" as a real branch with a condition evaluated against live data, not as a phrase the AI has to interpret.

Determiners = Set Selection

Articles ("the", "a") were part of the original eight. But "all", "every", "top 3" are also determiners, and they select sets, not single items. The original system lumped them into metadata. Now they drive fanout: the kernel resolves the set, gathers each item's real data, and hands everything to the mode at once. No guessing.

Adverbials of Time = Data Window

"Last week", "yesterday", "since January", "over the past month". These are not tense. Tense is intent (review vs log vs coach). Time is data scope (which window to look at). The original system conflated them and misrouted messages. Now they are independent axes. "How did I do last week" has tense=review AND scope=last week.

Voice + Negation = Frame

Passive voice ("my bench press has been declining") tells the AI to reflect, not execute. Negation ("don't log that", "skip breakfast") cancels the default action and reroutes to conversation. The original system treated everything as active imperatives. Now it distinguishes describing from commanding, and cancelling from doing.

Five Orthogonal Axes

Every message decomposes into five independent axes. Each one evolves on its own. Any combination is legal. The grammar is compositional, not enumerative.

DOMAIN
what thing?
noun + pronoun + preposition
Which extension, which node, which scope.
SCOPE
how much, when?
quantifier + temporal scope
Which subset of data is in play.
INTENT
what action?
tense + conditional
Which mode fires, or whether to branch.
INTERPRETATION
how to behave?
adjective + voice + adverb
How the mode frames its response.
EXECUTION
runtime shape?
dispatch / sequence / fork / fanout
How the graph actually runs.
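A message like "how did I do last week" decomposes along these axes into something like the following. This is a data-shape sketch; the field names are illustrative, not the classifier's actual output format:

```javascript
// Illustrative five-axis decomposition of "how did I do last week".
const parsed = {
  domain: { extension: "fitness", nodeId: "xyz-456" }, // noun + position
  scope: { window: "last-week", quantifier: null },    // temporal data window
  intent: { tense: "review", conditional: null },      // past tense -> review mode
  interpretation: { voice: "active", adverbs: [] },    // default framing
  execution: { shape: "dispatch" },                    // one clause, one action
};
```

Note that tense and time window are separate fields: swapping the window to "since-january" changes what data is in play without touching which mode fires.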

Four Execution Primitives

After parsing, the grammar compiles every message into an execution graph with four possible shapes. These are not invented primitives. They are the four ways English composes sentences.

Dispatch = Simple Sentence

One clause, one action. "Ate eggs." The runtime switches to the right mode and runs it once. This is the declarative sentence of the grammar. Every message in the original system was a dispatch. It's still the most common shape.

Sequence = Compound Sentence

"Log lunch and then review my day." Two clauses joined by a coordinating conjunction. The runtime executes each step in order, threading the result of one into the context of the next. Compound sentences compile to sequences.

Fork = Conditional Sentence

"If protein is low, review my meals." The runtime evaluates the condition against live data, gets a three valued result (true / false / unknown), and picks the branch. Unknown is first class. The system does not guess when data is missing. It takes a path that says "I can't determine this yet."

Fanout = Universal Quantification

"Review all my exercises." The runtime resolves the set, gathers each item's real enriched context, and hands everything to the mode at once. Extensions own their vocabulary and decide what "all my X" means inside their domain.

Every message flows through five steps:

1. Parse Domain
Noun, pronoun, preposition
2. Parse Scope
Quantifier, temporal window
3. Parse Intent
Tense, conditional, negation
4. Parse Frame
Adjective, voice, adverb
5. Compile + Execute
Build graph, walk it

The system is a natural language computer.
The seed is the parser. Extensions are the vocabulary.
The tree is the syntax tree. The user just talks.

In most AI frameworks, functions are the primary architectural unit. You explicitly wire them together: call this, route to that, handle this output. The developer thinks in functions. The system is organized around functions. TreeOS inverts this. Functions are downstream consequences of grammar, not the organizing principle. You don't call a logging function. You stand at a food node and speak in present tense, and logging is just what that means. The function fires, but nobody explicitly orchestrated it. The grammar did.

When you speak, your mouth calls functions: muscle contractions, air pressure, phoneme production. But you don't think in those functions. You think in words. The functions are real but they're not where the meaning lives. TreeOS claims the same thing for AI systems. Functions exist. They fire. They're just not the level of abstraction that matters.

The Honest Complexity

The grammar is clean. Three things in the runtime are messier than the story suggests. Worth naming them directly.

Fork evaluation uses an LLM call
The grammar compiles deterministically. But evaluating "is protein low" against live data needs a small LLM call. It is quarantined to one function with a three-valued result (true / false / unknown), and it never hallucinates a branch because unknown is allowed. The LLM is in the loop, but it is not in charge.
Extensions own their vocabulary
"All my exercises" vs "all my runs" vs "all my muscle groups" each mean a different set inside the fitness extension. The kernel stays generic and asks the extension to map keywords to subsets. This is new surface area, but it matches the existing pattern of enrichContext and handleMessage. Extensions opt in to precision.
Condition evaluation walks children
Extension data is distributed across child nodes with different roles. The food root has no data; its Daily child has it all. So the evaluator walks one level down, collects enriched contexts from each child, and bundles them before deciding. One extra query depth, justified by the tree shape.

The surface grammar is still parts of speech. The runtime is still four primitives. These are the seams where the story meets the implementation.

Two Languages, One Tree

The grammar works in both directions. The user's intent flows in: parsed into noun, verb, tense, routed to the right extension in the right mode. But the AI also operates through this grammar. Its tools, its context, and its constraints are all shaped by where it lives in the tree.

At a food node, the AI's context is macros and calories. Its tools are food-specific. Its mode determines whether it's logging or reviewing. Move to fitness and everything shifts: context becomes sets and reps, tools become workout-specific, the mode determines coaching vs planning. Same AI. Different environment. The tree shapes what the AI can see, say, and do at every position.

This structure mirrors how human concepts actually branch from each other. "Health" divides into "Fitness" and "Food." "Fitness" divides into exercises. That's not a database schema. That's how concepts relate. The tree structure matches the conceptual structure, which is why natural language maps to it without a heavy translation layer.

The tree channels the user's intent into extensions, amplifying one sentence into mechanical actions across domains. And it gives the AI an environment to inhabit, a structure to operate through, a grammar that constrains and guides. The user talks to the tree. The tree shapes the AI's response. Both use the same structure.

The AI Operates Through the Tree

The AI at /Health/Fitness reads the tree to know what exercises exist, what weights were lifted, what progressive overload is due. It writes a note to the History node. Creates a child under Gym. Updates values on Bench Press. Cascade carries the workout signal to the Food branch. The tree is not a database the AI queries. It is the environment the AI inhabits and operates through.

The human speaks in sentences. "Ate chicken for lunch." The tree parses it. Noun: food. Verb: log. Tense: present. The AI receives this through its position-shaped context and responds with tree operations: create note, update value, cascade signal. Every memory the AI holds is a node with notes. Every communication between domains is a cascade signal between branches.

TreeOS was built from studying how concepts branch from each other in natural language. The same branching structure that organizes human concepts organizes the AI's environment. Most systems translate between human language and machine operations through a fragile middle layer. TreeOS structured the translation layer to mirror how concepts actually relate. The tree is the shared structure. That's why natural language works as the interface.

The kernel handles the plumbing. You build the intelligence.

MCP connections, session persistence, Chat tracking, abort handling, chain indexing, tool resolution, mode switching, hook firing, LLM failover. All automatic. Your extension calls one function and the rest happens.