A snake eating its own tail.
TreeOS authors JavaScript projects inside itself. It reads its own source as a tree. It edits its own extensions. One sentence becomes a full app. It works on small local models because the structure carries the context, not the prompt.
The tree IS the workspace. A file is a node. A directory is a node. Content lives in a note. Disk is a projection that happens automatically. The AI never thinks about paths; it navigates, it writes, the tree compiles itself to real files.
Most coding assistants pretend the AI is a smart developer with a file tree. TreeOS inverts that: the file tree IS the developer's thought, the AI just reads the position it's at. Every node the cursor lands on changes what the AI can see, what tools it can call, and what shape its output has to take.
You don't give the AI a thousand-line prompt that describes your codebase. You let the codebase become the prompt. Navigation is context. "cd into a function" is a real action the AI can take.
Real run, local Qwen 3.5 27B. No prompt engineering. No pasted examples. One message in the tiner tree, three files out, and the app serves on port 3000.
tiner
  package.json
  server.js
  public/
    index.html

What it produced: an Express backend with five seeded profiles; REST endpoints for /api/profiles, /api/swipe, and /api/matches; a full HTML+CSS+JS frontend with a draggable card stack, swipe animations, nope/like overlays, a match popup, mobile touch support, and a responsive purple gradient theme. Real working code. Not a stub.
At boot, TreeOS ingests its own codebase into a system tree called .source. Every extension, every kernel file, every test: live tree nodes with the source as notes.
land/extensions/ and land/seed/ are walked into .source on first boot. Subsequent boots do an mtime-based incremental refresh; unchanged files are skipped.
cd /.source/extensions/fitness/modes and the AI at that position is reading fitness's plan mode source out of a real tree note. Same mechanism as any user project.
When the AI writes a new extension, its mode prompt tells it to source-read extensions/fitness/manifest.js first. It copies the real shape from real code, not from training memory.
Writes back to land/extensions/ are gated (disabled / approve / free). Flip the mode and the AI can patch TreeOS itself through the same sync walker. Seed stays read-only.
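A minimal sketch of what that gate could look like. Only the three policy names (disabled / approve / free) come from the text above; the `gateWrite` function and the approval-queue shape are illustrative assumptions, not TreeOS's actual API:

```javascript
// Hypothetical write gate for land/extensions/: "disabled" rejects,
// "approve" queues the patch for an operator, "free" writes through.
function gateWrite(mode, patch, approvalQueue) {
  switch (mode) {
    case "disabled":
      return { applied: false, reason: "source writes are disabled" };
    case "approve":
      approvalQueue.push(patch); // operator green-lights it later
      return { applied: false, reason: "queued for approval" };
    case "free":
      return { applied: true, reason: "written through the sync walker" };
    default:
      throw new Error(`unknown write mode: ${mode}`);
  }
}
```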
Claude Code reads your files. TreeOS reads itself and everything installed next to it. When you ask for a "new extension like fitness but for finance," it literally opens fitness and uses it as the template. The library is the codebase.
Every message parses into five axes: domain (which extension), scope (how much), intent (which mode), interpretation (how), execution (dispatch / sequence / fork / fanout). Nouns are nodes, verbs are extensions, tense is mode. "Write me a function" routes to code-log. "Refactor this" routes to code-plan. "Review this code" routes to code-review. The grammar knows the difference and the mode matches.
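The tense-to-mode routing above can be sketched as a deterministic lookup. The keyword patterns here are invented stand-ins for the real grammar, which parses all five axes, but the three routings named in the text fall out directly:

```javascript
// Illustrative intent routing: first matching pattern wins, and
// present-tense messages fall through to code-log. The regexes are
// assumptions for this example, not TreeOS's actual grammar.
const MODES = [
  { mode: "tree:code-plan",   match: /\b(build|refactor|create)\b/i },    // imperative
  { mode: "tree:code-review", match: /\b(review|audit|built|wrote)\b/i }, // past
  { mode: "tree:code-coach",  match: /\b(how|why|debug|should)\b/i },     // guidance
  { mode: "tree:code-ask",    match: /\b(what|where|show|list)\b/i },     // query
];

function routeIntent(message) {
  for (const { mode, match } of MODES) {
    if (match.test(message)) return mode;
  }
  return "tree:code-log"; // present tense: small adds, one-off edits
}
```

Because the routing is a lookup rather than a model decision, the same message lands in the same mode every time, which is part of why small local models hold up.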
Every file is a node. Every directory is a node. Content lives in a note on the file node. When the AI writes, it writes a note, which auto-compiles to disk via a depth-first walker (the same pattern as the existing book extension). No manual sync. No path juggling. cd lib.js works; the AI at that position reads the file content as position context.
The AI isn't blind to the system it runs on. Every installed extension sits under /.source/extensions/ as navigable nodes. The kernel sits under /.source/seed/. When the review mode critiques your code, it measures against real working extensions in the same tree, not against training-set memory of what "good code" looks like. The AI can cite specific files.
Any project the tree builds can become a real TreeOS extension. forge-init adopts an existing workspace project and repoints its output at land/extensions/<name>/. Validate, test, install-local (restart-required), and dry-run publish to Horizon. The same tree nodes you just built are the extension shipped. Nothing gets copied.
After building the Tinder clone on a 27B local model: same tree, same session.
Nobody told it to review. The grammar parsed past-tense framing and routed to tree:code-review. The review mode's prompt said "read the files, compare to patterns, report issues with specific fix suggestions." That's what it did. This is the loop closing: the tree builds, reads, critiques, fixes.
Honest side-by-side. TreeOS isn't better at every dimension; it's a different shape.
The big one: small local models can drive this. Because position carries context and the grammar pipeline does deterministic routing, the LLM only has to decide one thing at a time: the content of one file, one tool call at a time. A 27B model does it well. A 7B model does most of it. Frontier models don't have much of an advantage when the prompt itself is mostly deterministic.
This isn't a separate coding IDE bolted on. It's an extension in the same TreeOS you use for fitness, food, recovery, journals. The same grammar. The same tree. The same AI knows all of it.
What the AI can call, grouped by what it does. Everything else follows from these.
# writing code in a project
workspace-add-file       create or overwrite a file (content as a note)
workspace-read-file      read current content
workspace-list           list files in the active project
workspace-delete-file    remove a file
workspace-sync           force tree-to-disk compile (auto-fires on writes)
workspace-run            run npm / npx / node / git in the workspace
workspace-test           node --test with runner detection

# reading TreeOS itself
source-read              read a file from /.source (real installed code)
source-list              list files in a .source subdirectory
source-mode              flip write policy: disabled | approve | free

# shipping as an extension
forge-init               create or adopt a project as a forge extension
forge-write-file         write into a forge workspace
forge-validate           local mirror of the Horizon validator
forge-test               run the extension's tests
forge-install-local      stage into land/extensions/ (restart required)
forge-publish-horizon    dry-run or live publish to the registry

# modes (picked by grammar, not by you)
tree:code-plan           imperative: build, refactor, create
tree:code-log            present: small adds, one-off edits
tree:code-coach          future: guidance, diagnosis, debugging
tree:code-ask            query: read-only exploration
tree:code-review         past: audit + refine loop with .source references
This isn't a coding assistant. It's the mechanism TreeOS uses to grow itself and share growth across a network of federated lands.
You say "build me a Tinder app" and the tree writes it. You say "review it" and the tree reads its own files and compares against real working extensions. You say "ship it" and forge stages the output as a real TreeOS extension on the land you're standing on. No IDE. No deploy pipeline. The tree is the environment and the environment is the deploy.
The same mechanism that writes user projects writes TreeOS itself. gap-detection hooks notice missing capabilities from how people use the system. evolve proposes specs. The AI reads .source to understand existing patterns, writes a new extension that fits those patterns, and proposes the change through the approve extension so an operator green-lights each edit. TreeOS grows through conversation, gated by humans, measured against its own codebase.
When a land builds something useful, it publishes to Horizon. Other lands install it. When you ask for "a finance tool" and your land doesn't have one, mycelium routes the request to a peer land that does. Every land runs its own AI on its own hardware with its own data, but the extensions that shape the AI's behavior flow over the network. A hundred lands sharing evolutionary pressure on the same open-source kernel. The forest grows in all directions at once.
A snake eating its own tail, with the tail connected to every other snake in the network. Each land is sovereign; each land is connected; each land helps the others learn to code.
You ask for a todo app, it builds a todo app. You ask for a TreeOS extension, it reads fitness as a template and writes one that actually loads. You ask for a review, it opens your files and real working extensions side by side and tells you which of your assumptions don't match the codebase next door. You ask it to ship, and forge installs what you built into the land you're standing on.
This is what a self-hosted AI development environment looks like when the environment is built as a tree, the AI is navigating that tree, and the tree is reading its own source.