🐍

The tree writes code.

A snake eating its own tail.

TreeOS authors JavaScript projects inside itself. It reads its own source as a tree. It edits its own extensions. One sentence becomes a full app. It works on small local models because the structure carries the context, not the prompt.

The tree IS the workspace. A file is a node. A directory is a node. Content lives in a note. Disk is a projection that happens automatically. The AI never thinks about paths; it navigates, it writes, the tree compiles itself to real files.

Most coding assistants pretend the AI is a smart developer with a file tree. TreeOS inverts that: the file tree IS the developer's thought; the AI just reads the position it's at. Every node the cursor lands on changes what the AI can see, what tools it can call, and what shape its output has to take.

You don't give the AI a thousand-line prompt that describes your codebase. You let the codebase become the prompt. Navigation is context. "cd into a function" is a real action the AI can take.

One sentence. Full app.

Real run on a local Qwen 3.5 27B. No prompt engineering. No pasted examples. One message in the tiner tree, three files out, and the app serves on port 3000.

tabor@treeos.ai/tiner ›
make a rough tinder app with frontend and backend in nodejs and frontend html
Thinking…
Tree: Done.
The tree after one message:
tiner
  package.json
  server.js
  public/
    index.html

What it produced: an Express backend with five seeded profiles; REST endpoints at /api/profiles, /api/swipe, and /api/matches; and a full HTML+CSS+JS frontend with a draggable card stack, swipe animations, nope/like overlays, a match popup, mobile touch support, and a responsive purple gradient theme. Real working code, not a stub.

The tree reads its own source.

At boot, TreeOS ingests its own codebase into a system tree called .source. Every extension, every kernel file, every test becomes a live tree node with the source as a note.

🌱

Boot ingest

On first boot, land/extensions/ and land/seed/ are walked into .source. Subsequent boots do an mtime-based incremental refresh; unchanged files are skipped.

📖

Navigate it

cd /.source/extensions/fitness/modes and the AI at that position is reading fitness's plan mode source out of a real tree note. Same mechanism as any user project.

🧠

Reference by example

When the AI writes a new extension, its mode prompt tells it to source-read extensions/fitness/manifest.js first. It copies the real shape from real code, not from training memory.

🔁

Self-edit

Write-backs to land/extensions/ are gated (disabled / approve / free). Flip the mode and the AI can patch TreeOS itself through the same sync walker. The seed stays read-only.
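The gate itself is a small policy decision. A sketch of its shape, with illustrative names and return values (the document doesn't show TreeOS's real check):

```javascript
// Write policy for self-edits. The seed is always read-only;
// extension writes are gated by the current source-mode setting.
const WRITE_MODES = new Set(['disabled', 'approve', 'free']);

function gateSelfEdit(mode, target) {
  if (!WRITE_MODES.has(mode)) throw new Error(`unknown mode: ${mode}`);
  if (target.startsWith('land/seed/')) return 'deny'; // kernel stays read-only
  if (mode === 'free') return 'allow';                // patch directly
  if (mode === 'approve') return 'queue';             // operator green-lights each edit
  return 'deny';                                      // disabled
}
```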

Claude Code reads your files. TreeOS reads itself and everything installed next to it. When you ask for a "new extension like fitness but for finance," it literally opens fitness and uses it as the template. The library is the codebase.

Four moving parts. Everything else composes from them.

🔀

Grammar pipeline: routing by language, not keywords

Every message parses into five axes: domain (which extension), scope (how much), intent (which mode), interpretation (how), execution (dispatch / sequence / fork / fanout). Nouns are nodes, verbs are extensions, tense is mode. "Write me a function" routes to code-log. "Refactor this" routes to code-plan. "Review this code" routes to code-review. The grammar knows the difference and the mode matches.
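The mode names below come from the mode listing later in this document; a toy sketch of just the intent axis. The regex heuristics are illustrative stand-ins, not the real grammar pipeline:

```javascript
// Toy tense-to-mode router covering one of the five axes (intent).
const MODE_BY_TENSE = {
  imperative: 'tree:code-plan',   // build, refactor, create
  present:    'tree:code-log',    // small adds, one-off edits
  future:     'tree:code-coach',  // guidance, diagnosis, debugging
  query:      'tree:code-ask',    // read-only exploration
  past:       'tree:code-review', // audit + refine loop
};

function routeByTense(message) {
  const m = message.toLowerCase();
  if (/^(how|what|where|why|which)\b|\?$/.test(m)) return MODE_BY_TENSE.query;
  if (/\b(did|wrote|built|made|reviewed?)\b/.test(m)) return MODE_BY_TENSE.past;
  if (/\b(will|should|going to|plan to)\b/.test(m)) return MODE_BY_TENSE.future;
  if (/^(build|refactor|create|make|rewrite)\b/.test(m)) return MODE_BY_TENSE.imperative;
  return MODE_BY_TENSE.present; // default: log-style small edit
}
```

With these stand-in rules, "Refactor this" lands in plan mode and "Write me a function" falls through to log mode, matching the routing the text describes.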

🗂️

code-workspace: the tree as a filesystem

Every file is a node. Every directory is a node. Content lives in a note on the file node. When the AI writes, it writes a note, which auto-compiles to disk via a depth-first walker (the same pattern as the existing book extension). No manual sync. No path juggling. cd lib.js works; the AI at that position reads the file content as position context.

🔍

.source self-tree: self-awareness

The AI isn't blind to the system it runs on. Every installed extension sits under /.source/extensions/ as navigable nodes. The kernel sits under /.source/seed/. When the review mode critiques your code, it measures against real working extensions in the same tree, not against training-set memory of what "good code" looks like. The AI can cite specific files.

🛠️

code-forge: ship it as an extension

Any project the tree builds can become a real TreeOS extension. forge-init adopts an existing workspace project and repoints its output at land/extensions/<name>/. Validate, test, install locally (restart required), and dry-run publish to Horizon. The same tree nodes you just built are the extension that ships. Nothing gets copied.
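What forge-validate actually checks isn't specified here, but a local validator mirror structurally reduces to field checks over the manifest. An illustrative sketch; the required fields and naming rule are assumptions, not Horizon's real schema:

```javascript
// Hypothetical manifest validator. Field names are illustrative assumptions.
function validateManifest(manifest) {
  const errors = [];
  for (const field of ['name', 'version', 'modes']) {
    if (!(field in manifest)) errors.push(`missing field: ${field}`);
  }
  if (manifest.name && !/^[a-z][a-z0-9-]*$/.test(manifest.name)) {
    errors.push('name must be lowercase kebab-case');
  }
  return { ok: errors.length === 0, errors };
}
```

Running the same checks locally that the registry runs remotely is what makes a dry-run publish trustworthy.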

The AI reviews its own work.

After building the Tinder clone on the 27B local model: same tree, same session.

tabor@treeos.ai/tiner ›
do you want to improve anything and add more detail?
Tree: Looking at your code now, I see some things worth fixing and a few nice-to-haves worth adding:
Key Issues:
1. Match logic is broken: every right swipe shows a match. Real Tinder only matches if both people swipe right.
2. No persistence: restart the server and you lose all swipes/matches.
3. Match section doesn't work: there's a counter but nowhere to view them.
4. Photos from pravatar.cc: random placeholder faces, could break.
Specific fixes I can do now:
1. Fix match logic: simulate mutual attraction (~30% match rate)
2. Add matches view: click header to see/list your matches
3. Add basic messaging: simple chat interface for matches
4. Persist in JSON file: survive server restarts

Nobody told it to review. The grammar parsed past-tense framing and routed to tree:code-review. The review mode's prompt said "read the files, compare to patterns, report issues with specific fix suggestions." That's what it did. This is the loop closing: the tree builds, reads, critiques, fixes.

Versus Claude Code, Cursor, Copilot.

Honest side-by-side. TreeOS isn't better on every dimension; it's a different shape.

                       Claude Code / Cursor          TreeOS
Context unit           Open files + prompt           Position in the tree
Reference library      Training data memory          .source tree of installed extensions
Routing                One prompt, one mode          Grammar pipeline picks mode per tense
File operations        Read/write filesystem         Read/write tree nodes, disk auto-syncs
Model requirement      Large frontier models         Local 27B produces shipping code
Multi-file awareness   Context window + RAG          Tree walker + cascade signals
Ship as extension      Manual packaging              forge-ship: validate, install, publish
Self-modification      Read-only on its own source   Gated write-back to TreeOS itself

The big one: small local models can drive this. Because position carries context and the grammar pipeline does deterministic routing, the LLM only has to decide one thing at a time: generate the content of one file, call one tool. A 27B model does it well. A 7B model does most of it. Frontier models don't have much of an advantage when routing and context are already deterministic.

And it composes with everything else.

This isn't a separate coding IDE bolted on. It's an extension in the same TreeOS you use for fitness, food, recovery, journals. The same grammar. The same tree. The same AI knows all of it.

cascade: edit a file and downstream dependents get an awareness signal
book: compile a subtree into one document, via the same walker that syncs code to disk
approve: gate source edits so the operator sees every proposed change
gap-detection / evolve: notice patterns in how you use the system, propose new extensions to fill them
horizon: publish what the tree built to the federated extension registry
mycelium: route "write me a finance tool" to a land that already has one

The whole surface, in order.

What the AI can call, grouped by what it does. Everything else follows from these.

# writing code in a project
workspace-add-file      create or overwrite a file (content as a note)
workspace-read-file     read current content
workspace-list          list files in the active project
workspace-delete-file   remove a file
workspace-sync          force tree → disk compile (auto-fires on writes)
workspace-run           run npm / npx / node / git in the workspace
workspace-test          node --test with runner detection

# reading TreeOS itself
source-read             read a file from /.source (real installed code)
source-list             list files in a .source subdirectory
source-mode             flip write policy: disabled | approve | free

# shipping as an extension
forge-init              create or adopt a project as a forge extension
forge-write-file        write into a forge workspace
forge-validate          local mirror of the Horizon validator
forge-test              run the extension's tests
forge-install-local     stage into land/extensions/ (restart required)
forge-publish-horizon   dry-run or live publish to the registry

# modes (picked by grammar, not by you)
tree:code-plan          imperative: build, refactor, create
tree:code-log           present: small adds, one-off edits
tree:code-coach         future: guidance, diagnosis, debugging
tree:code-ask           query: read-only exploration
tree:code-review        past: audit + refine loop with .source references

The loop closes in three directions.

This isn't a coding assistant. It's the mechanism TreeOS uses to grow itself and share growth across a network of federated lands.

↗

Inside out: users build apps from inside the tree

You say "build me a Tinder app" and the tree writes it. You say "review it" and the tree reads its own files and compares against real working extensions. You say "ship it" and forge stages the output as a real TreeOS extension on the land you're standing on. No IDE. No deploy pipeline. The tree is the environment and the environment is the deploy.

↙

Outside in: TreeOS builds itself

The same mechanism that writes user projects writes TreeOS itself. gap-detection hooks notice missing capabilities from how people use the system. evolve proposes specs. The AI reads .source to understand existing patterns, writes a new extension that fits those patterns, and proposes the change through the approve extension so an operator green-lights each edit. TreeOS grows through conversation, gated by humans, measured against its own codebase.

⇄

Across lands: shared evolution over the federation

When a land builds something useful, it publishes to Horizon. Other lands install it. When you ask for "a finance tool" and your land doesn't have one, mycelium routes the request to a peer land that does. Every land runs its own AI on its own hardware with its own data, but the extensions that shape the AI's behavior flow over the network. A hundred lands sharing evolutionary pressure on the same open-source kernel. The forest grows in all directions at once.

A snake eating its own tail, with the tail connected to every other snake in the network. Each land is sovereign; each land is connected; each land helps the others learn to code.

The system writes itself.

You ask for a todo app, it builds a todo app. You ask for a TreeOS extension, it reads fitness as a template and writes one that actually loads. You ask for a review, it opens your files and real working extensions side by side and tells you which of your assumptions don't match the codebase next door. You ask it to ship, and forge installs what you built into the land you're standing on.

This is what a self-hosted AI development environment looks like when the environment is built as a tree, the AI is navigating that tree, and the tree is reading its own source.