What runs when everything else is stripped away.
The kernel is called the seed. You plant it. It grows trees. Two schemas with metadata Maps, a conversation loop, a hook system, a cascade engine, and an extension loader. Remove every extension and the seed still boots. It defines the data contract that extensions build on and the resolution chains that determine what happens at every position in the tree.
A kernel manages hardware so applications don't have to. The seed manages intelligence so extensions don't have to. Same responsibilities. Different abstraction layer.
Starts and stops programs. Decides which runs, when, for how long. Makes them appear to run at once.
AI sessions per user per position. Request queue serializes per session. Orchestrator locks prevent collisions. Session TTL, stale cleanup, 10K cap with oldest-first eviction. maxToolIterations caps runtime. Abort signal cancels mid-message.
Allocates RAM to programs. Keeps them isolated so one can't crash another. Handles virtual memory and swapping.
Each extension gets its own namespace in the metadata Map. 512KB cap per namespace. 14MB document ceiling with pressure alerts at 80%. Six atomic operations on nodes, five on users. incExtMeta for counters, pushExtMeta for capped arrays, batchSetExtMeta for multi-field writes. Same toolkit on both schemas. No extension needs direct MongoDB. Circuit breaker auto-disables crashing extensions. .flow partitions evict oldest data when full.
Reads and writes files. Organizes folders, permissions, storage structure. Finds files and hands them to apps.
Nodes are folders. Notes are files. parent points up, children[] points down. Ownership chain controls who writes where. Spatial scoping controls what capabilities exist at each position. Ancestor cache makes lookups fast. Integrity check is fsck. Index verification on boot.
Talks to hardware. Apps say 'give me input' without knowing how a keyboard works electrically.
LLM endpoints are devices. The resolution chain is driver priority: extension slot on tree, tree default, user slot, user default. Extensions call runChat() without knowing which model, which endpoint, which provider. MCP is the device bus. Tools are system calls. The AI says 'create a node' and MCP routes it.
Sends and receives data over networks. Implements TCP/IP and sockets. How apps talk to servers.
Socket.IO for real-time client connections. Named event types as the packet format. protocol.js is TCP: shared response shapes before anyone starts talking. Canopy is the network between lands. REST, signed messages, peer discovery. Each land is a host. Canopy is the routing layer.
Controls who can access what. Enforces user permissions and process isolation. Prevents unauthorized access.
JWT + extension auth strategies with fallthrough. Ownership walks the parent chain: the first rootOwner is the authority. Contributors accumulate. Spatial scoping blocks entire extensions at a position. Six rules guard the boundary, among them: the seed never imports extensions, the schemas never change, extension data lives in metadata only. The kernel can't be injected into.
Intercepts kernel operations. Powerful and dangerous. Used in security tools and rootkits.
30 lifecycle hooks. before hooks intercept and cancel. after hooks react in parallel. Any extension can hook any operation. beforeToolCall rewrites arguments. beforeNote blocks writes. Orchestrator replacement swaps the entire conversation flow. 5s timeout, circuit breaker, spatial filtering. Power with guardrails.
How programs talk to each other. Shared memory, message passing, signals.
Hooks are pub/sub between extensions. Cascade is message passing between nodes. Canopy is message passing between lands. getExtension() is the direct call interface. Socket handler registry lets extensions push to clients. Every signal produces a visible result in .flow.
Brings the system up in the right order. Hardware init, driver loading, filesystem mount, service startup.
DB connect, index verification, system nodes, config load, seed migrations, integrity check, extension discovery, dependency resolution, topological sort, init(), wire routes/tools/hooks/modes, background jobs, Canopy peering, afterBoot hook. Each step depends on the one before it.
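The ordering problem in the extension load step is the classic one: dependencies must initialize before dependents. A minimal sketch of that topological sort, with hypothetical extension names and a `deps` field standing in for whatever the real manifests declare:

```javascript
// Hypothetical sketch of the extension load-order step: extensions declare
// dependencies, and a depth-first topological sort yields a safe init() order.
function topoSort(manifests) {
  const order = [];
  const state = new Map(); // undefined = unvisited, 1 = visiting, 2 = done
  const byName = new Map(manifests.map((m) => [m.name, m]));

  function visit(name) {
    if (state.get(name) === 2) return;
    if (state.get(name) === 1) throw new Error(`dependency cycle at ${name}`);
    state.set(name, 1);
    for (const dep of byName.get(name)?.deps ?? []) visit(dep);
    state.set(name, 2);
    order.push(name); // pushed only after all dependencies are done
  }

  for (const m of manifests) visit(m.name);
  return order; // dependencies appear before dependents
}

// Illustrative manifests, not real extensions:
const loadOrder = topoSort([
  { name: 'fitness', deps: ['gateway'] },
  { name: 'gateway', deps: [] },
  { name: 'dreams',  deps: ['fitness'] },
]);
// loadOrder: ['gateway', 'fitness', 'dreams']
```

A cycle in the declared dependencies throws before any init() runs, which is the right failure mode for a boot step where each stage depends on the one before it.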
The CPU overheats, the kernel throttles the clock. A disk starts failing, the kernel remounts it read-only. Data preserved. The system protects itself from making things worse.
Health equation monitors node count, metadata density, and error rate. When the score exceeds 1.0, the tree trips. No AI, no writes, no cascade. Read access stays open. Data preserved. Extensions diagnose and revive. The kernel protects the land from one sick tree dragging everything down.
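One plausible reading of the health equation, assuming each metric is normalized against its configured threshold and combined with the documented weights; the seed's actual formula may differ:

```javascript
// A plausible sketch of the tree health score. The weights and thresholds
// below are the documented config defaults; the combination (weighted sum
// of threshold-normalized metrics) is an assumption.
function healthScore(tree, cfg) {
  const nodeLoad    = tree.nodeCount     / cfg.maxTreeNodes;
  const densityLoad = tree.metadataBytes / cfg.maxTreeMetadataBytes;
  const errorLoad   = tree.errorsPerHour / cfg.maxTreeErrorRate;
  return (
    cfg.circuitNodeWeight    * nodeLoad +
    cfg.circuitDensityWeight * densityLoad +
    cfg.circuitErrorWeight   * errorLoad
  );
}

const cfg = {
  maxTreeNodes: 10000,
  maxTreeMetadataBytes: 1073741824,
  maxTreeErrorRate: 100,
  circuitNodeWeight: 0.4,
  circuitDensityWeight: 0.3,
  circuitErrorWeight: 0.3,
};

// An overloaded tree flooding errors pushes the score past 1.0 and trips.
const score = healthScore(
  { nodeCount: 12000, metadataBytes: 5e8, errorsPerHour: 250 },
  cfg
);
const tripped = score > 1.0; // circuit opens: no AI, no writes, no cascade
```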
Instead of bridging hardware to software, the seed bridges LLMs to structured data. The building blocks for AI operating systems. Extensions add meaning. Lands add presence. Canopy adds reach. Everyone can contribute. Everyone can build their own OS on top. The plumbing is done.
Two schemas carry extensible metadata Maps: Node and User. The Map is the invention. Extensions store everything in it. Values, prestige, personas, cascade config, AI instructions. Four supporting models exist for infrastructure (Note, Contribution, Chat, LlmConnection) but they don't carry Maps. Extensions don't write to them. The schemas never change.
12 fields. Type is free-form (custom types allowed). Status is active, completed, or trimmed. Extensions store all their data in metadata under their name. Values, prestige history, schedules, tool configs, extension scoping, all of it lives in the Map.
7 fields. One default LLM connection. Extensions store energy budgets, API keys, LLM slot assignments, storage usage, and preferences in metadata. Same atomic toolkit as nodes: incUserMeta, pushUserMeta, batchSetUserMeta.
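The contract both schemas share can be sketched with plain objects. The real models are Mongoose schemas, so the shapes below are illustrative only; what matters is that every extension's data hangs off the metadata Map under the extension's own name, and the schema itself never changes:

```javascript
// Illustrative node shape only -- the real schema is a Mongoose model.
function makeNode({ name, type = 'note', parent = null }) {
  return {
    name,
    type,                // free-form; custom types allowed
    status: 'active',    // 'active' | 'completed' | 'trimmed'
    parent,
    children: [],
    metadata: new Map(), // one namespace per extension
  };
}

const node = makeNode({ name: 'Bench Press', type: 'exercise' });

// A hypothetical fitness extension writes only inside its own namespace:
node.metadata.set('fitness', { prs: [{ weight: 135, reps: 10 }] });

// Another namespace coexists without any schema change:
node.metadata.set('cascade', { enabled: true });
```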
Navigation determines the AI's behavior zone. Structural, not interpretive. Determined by URL. Zones are kernel. Sub-modes within zones are extensions.
/System is the management zone: extensions, config, users, peers. Admin access required. The kernel provides a fallback mode. The land-manager extension provides the real one.
~ is the personal space: raw ideas, notes across trees, chat history, contributions. The kernel provides a fallback. Extensions provide the experience.
/MyTree puts you inside a tree. Four commands: chat (full interaction), place (store silently), query (read-only), be (guided). The orchestrator routes them. The kernel enforces the query read-only constraint.
Every AI interaction goes through the same loop. Mode determines the system prompt and available tools. The loop calls the LLM, executes tool calls, and repeats until the LLM responds without tools or hits the iteration cap.
Walk the resolution chain: extension slot on tree, tree default, extension slot on user, user default. First match wins. Any OpenAI-compatible endpoint works.
Three layers: mode base tools, extension-injected tools, per-node config (allowed/blocked). Then spatial extension scoping filters out tools from blocked or restricted extensions. Query constraint: when readOnly is set, only tools with readOnlyHint pass through. The AI only sees what's permitted at this position.
The active mode's buildSystemPrompt() generates the system message with user context, tree position, and timezone. Extensions inject context via the enrichContext hook.
Send to LLM. If it returns tool calls, execute them via MCP, append results, send again. Repeat until the LLM responds with text or hits maxToolIterations (default 15). Abort signal checked between iterations.
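The loop above, condensed to its control flow. `callLlm` and `executeTool` are hypothetical stand-ins for the LLM client and the MCP dispatcher:

```javascript
// A minimal sketch of the conversation loop: call the LLM, execute any
// tool calls, append results, repeat until plain text or the iteration cap.
async function conversationLoop(messages, { callLlm, executeTool, maxToolIterations = 15, signal }) {
  for (let i = 0; i < maxToolIterations; i++) {
    if (signal?.aborted) throw new Error('aborted'); // checked between iterations
    const reply = await callLlm(messages);
    if (!reply.toolCalls?.length) return reply.text; // plain text ends the loop
    messages.push({ role: 'assistant', toolCalls: reply.toolCalls });
    for (const call of reply.toolCalls) {
      const result = await executeTool(call); // routed through MCP in the real loop
      messages.push({ role: 'tool', name: call.name, content: result });
    }
  }
  throw new Error('maxToolIterations reached');
}
```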
Extensions never call the loop directly. They use runChat() (single message, persistent session) or OrchestratorRuntime (multi-step chain). One call handles MCP connection, session management, Chat tracking, abort propagation, and cleanup.
An open pub/sub bus. The kernel fires events. Extensions listen. Extensions can also fire their own events for other extensions to listen to. Any hook name is valid. No whitelist. Likely typos are detected and warned about, not blocked.
before hooks: sequential. Can modify data. Can cancel. 5s timeout per handler.
after hooks: parallel, fire-and-forget. Errors logged, never block.
Enrichment hooks such as enrichContext: return values captured. Handlers read each other's additions.
Extensions fire their own hooks with extName:hookName convention. The gateway extension fires gateway:beforeDispatch. Other extensions listen. Spatial scoping filters: if an extension is blocked at a node, its hook handlers are skipped for operations on that node.
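The bus semantics can be sketched in a few lines. Timeouts, circuit breaking, and spatial filtering are omitted; all names besides the documented `gateway:beforeDispatch` convention are illustrative:

```javascript
// Condensed hook bus: before* handlers run in sequence and may rewrite or
// cancel; after* handlers run in parallel and can never block the caller.
const handlers = new Map(); // hookName -> [fn, ...]

function on(hook, fn) {
  if (!handlers.has(hook)) handlers.set(hook, []);
  handlers.get(hook).push(fn);
}

async function fireBefore(hook, payload) {
  for (const fn of handlers.get(hook) ?? []) {
    const out = await fn(payload);
    if (out === false) return null;                     // any handler can cancel
    if (out && typeof out === 'object') payload = out;  // or rewrite the payload
  }
  return payload;
}

function fireAfter(hook, payload) {
  for (const fn of handlers.get(hook) ?? []) {
    Promise.resolve()
      .then(() => fn(payload))
      .catch((err) => console.error(`hook ${hook} failed:`, err)); // logged, never blocks
  }
}

// Extension-fired hooks follow the extName:hookName convention:
on('gateway:beforeDispatch', async (msg) => ({ ...msg, tagged: true }));
```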
Same pattern across all five. Extensions register. The kernel resolves. Failure falls back to the kernel, never to silence.
Every operation at a node goes through resolution chains that determine what the AI can do and how it thinks. Each chain walks the parent hierarchy and applies layered rules. This is what makes position determine capability.
Is this extension active, restricted, blocked, or confined? Two modes: global extensions accumulate blocked[] walking up (opt-out). Confined extensions check allowed[] walking up (opt-in). Confined and not allowed = blocked. Allowed but blocked further down = blocked wins. Restricted extensions keep read-only tools.
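A sketch of that walk, assuming each node exposes its scoping lists and the ancestor chain is ordered current node first, root last. Field names are illustrative, and the restricted (read-only tools) state is omitted:

```javascript
// Spatial scoping walk: blocked[] accumulates on the way up and always wins;
// confined extensions must find themselves in some allowed[] on the chain.
function isBlocked(extName, chain, { confined = false } = {}) {
  let allowedSomewhere = false;
  for (const node of chain) { // current node first, root last
    const scope = node.extensions ?? {};
    if ((scope.blocked ?? []).includes(extName)) return true; // blocked wins
    if ((scope.allowed ?? []).includes(extName)) allowedSomewhere = true;
  }
  // Confined extensions are opt-in: never allowed anywhere = blocked.
  return confined && !allowedSomewhere;
}

const chain = [
  { extensions: {} },                      // current node
  { extensions: { allowed: ['labs'] } },   // parent
  { extensions: { blocked: ['social'] } }, // grandparent
];

isBlocked('social', chain);                     // true: blocked up the chain
isBlocked('fitness', chain);                    // false: global, never blocked
isBlocked('labs', chain, { confined: true });   // false: opted in at the parent
isBlocked('dreams', chain, { confined: true }); // true: confined, never allowed
```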
What tools does the AI have? Start with mode base tools. Add extension-injected tools. Apply per-node metadata.tools.allowed/blocked. Filter by extension scope. The AI sees only what survives all layers.
How does the AI think? Check metadata.modes[intent] for per-node override. Skip if owning extension is blocked. Fall back to default mapping (tree:respond). Then bigMode default.
Which model runs? Extension slot on tree, tree default, extension slot on user, user default. First match wins. Failover chain tried on failure.
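The chain is a first-match-wins lookup. Field names below are illustrative; the order is the documented one:

```javascript
// Model resolution: the first configured slot wins, in documented order.
function resolveLlm({ tree, user, extName }) {
  return (
    tree?.llmSlots?.[extName] ?? // 1. extension slot on the tree
    tree?.llmDefault ??          // 2. tree default
    user?.llmSlots?.[extName] ?? // 3. extension slot on the user
    user?.llmDefault ??          // 4. user default
    null                         // nothing configured anywhere
  );
}

const user = { llmSlots: { dreams: 'local-llama' }, llmDefault: 'gpt-4o' };
const tree = { llmDefault: 'claude' };

resolveLlm({ tree, user, extName: 'dreams' });       // 'claude': tree default wins first
resolveLlm({ tree: null, user, extName: 'dreams' }); // 'local-llama': user slot
```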
How does the model behave? Per-node metadata.llm.config overrides for tool iterations, timeouts, context window size. Walk parent chain, closest value wins. Falls back to land defaults. A DevOps branch gets longer timeouts. A journal gets deeper context.
Navigate to a different node. All five chains re-resolve. Different tools appear. Different mode fires. Different model runs. Different behavior constraints apply. The tree reshapes around where you stand.
Promises the seed makes. Not configurable. Not optional. Always true.
Ownership resolves by walking the parent chain. The first node with rootOwner set is the ownership boundary. Setting rootOwner on a branch delegates that sub-tree to a new owner. Contributors accumulate along the walk. If a user is in contributors[] at any node between the current position and the ownership boundary, they have write access.
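The walk as a sketch, with an illustrative node shape:

```javascript
// Ownership walk: climb the parent chain; contributors grant access along
// the way, and the first rootOwner found is the ownership boundary.
function canWrite(userId, node, getParent) {
  for (let cur = node; cur; cur = getParent(cur)) {
    if ((cur.contributors ?? []).includes(userId)) return true;
    if (cur.rootOwner) return cur.rootOwner === userId; // boundary: stop here
  }
  return false; // no owner anywhere on the chain
}

const root   = { rootOwner: 'alice' };
const branch = { rootOwner: 'bob', parent: root }; // delegated sub-tree
const leaf   = { contributors: ['carol'], parent: branch };
const up     = (n) => n.parent ?? null;

canWrite('bob', leaf, up);   // true: first rootOwner up the chain
canWrite('carol', leaf, up); // true: contributor below the boundary
canWrite('alice', leaf, up); // false: delegation stops at branch
```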
addContributor: resolved owner or admin. Atomic $addToSet.
removeContributor: resolved owner, admin, or self-removal.
setOwner: owner above or admin can delegate.
removeOwner: owner above or admin can revoke. Falls back to the next owner up.
transferOwnership: current owner or admin can transfer.
All five reject on system nodes. All five validate the chain before writing. Extensions use core.ownership.*.
Every extension receives core in its init() function. The core services bundle exposes everything an extension needs without direct imports. No extension should call MongoDB directly for metadata, node creation, or note management.
incExtMeta, pushExtMeta, batchSetExtMeta, and unsetExtMeta accept a node document or ID string. All use MongoDB atomic operators. No read-modify-write race. No direct MongoDB needed.
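In-memory stand-ins showing the shapes these helpers take. The real versions accept a node document or ID and use MongoDB's atomic operators ($inc, $push with $slice) so there is no read-modify-write race; the exact signatures here are assumptions:

```javascript
// Stand-ins that mirror the observable behavior of the atomic metadata
// toolkit. Field and cap choices are illustrative.
function incExtMeta(node, ext, field, by = 1) {
  const ns = node.metadata.get(ext) ?? {};
  ns[field] = (ns[field] ?? 0) + by;
  node.metadata.set(ext, ns);
}

function pushExtMeta(node, ext, field, value, cap = 50) {
  const ns = node.metadata.get(ext) ?? {};
  ns[field] = [...(ns[field] ?? []), value].slice(-cap); // capped array, newest kept
  node.metadata.set(ext, ns);
}

const node = { metadata: new Map() };
incExtMeta(node, 'fitness', 'totalSets', 3);
incExtMeta(node, 'fitness', 'totalSets');
pushExtMeta(node, 'fitness', 'history', { weight: 135 }, 2);
pushExtMeta(node, 'fitness', 'history', { weight: 140 }, 2);
pushExtMeta(node, 'fitness', 'history', { weight: 145 }, 2);
// totalSets: 4; history keeps only the newest 2 entries
```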
createNode, createNodeBranch, deleteNodeBranch, updateParentRelationship, editNodeName, editNodeType. Plus cache management, integrity check, and circuit breaker.
createNote, editNote, deleteNoteAndFile, transferNote, getNotes. Programmatic note CRUD without direct seed imports.
isExtensionBlockedAtNode, getBlockedExtensionsAtNode, isToolReadOnly, getToolOwner, getModeOwner. Extensions check their own blocked status. Guard enrichContext injections. Know who owns which tool or mode.
Same pattern as node metadata. Both schemas get the same atomic toolkit. No extension needs direct MongoDB for user metadata.
registerMode, setDefaultMode, setNodeMode. Register custom AI modes. Set zone defaults. Set per-node mode overrides atomically.
Structure without communication is a filing cabinet. Cascade is what makes the tree alive. When content is written at a node marked for cascade, the kernel announces it. Extensions propagate, react, and deliver signals to other nodes and other lands. Every signal produces a visible result.
A note is written at a node with metadata.cascade.enabled = true. The kernel checks two booleans: is cascade enabled on this node? Is cascadeEnabled true in .config? If both yes, fire onCascade. The first event is always local. Somebody wrote something at a position marked for cascade.
Extensions call deliverCascade to send signals to other nodes, children, siblings, or remote lands via Canopy. The kernel never blocks inbound. Always accepts. Always writes a result to .flow. Extensions decide what to do when a signal arrives.
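The two-boolean gate from the section above, sketched with illustrative shapes:

```javascript
// Cascade gate: both the node flag and the land-wide config flag must be
// true before onCascade fires. fireHook is a stand-in for the hook bus.
function maybeCascade(node, config, note, fireHook) {
  const nodeEnabled = node.metadata.get('cascade')?.enabled === true;
  const landEnabled = config.cascadeEnabled === true;
  if (!nodeEnabled || !landEnabled) return false;
  fireHook('onCascade', { nodeId: node.id, note }); // extensions take it from here
  return true;
}

const fired = [];
const ok = maybeCascade(
  { id: 'n1', metadata: new Map([['cascade', { enabled: true }]]) },
  { cascadeEnabled: true },
  'bench 135x10',
  (hook, payload) => fired.push([hook, payload])
);
// ok: true, and exactly one onCascade event fired
```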
The kernel has four primitives. Structure: two schemas with metadata Maps, nodes in hierarchies. Four supporting models for infrastructure. Intelligence: the conversation loop, resolution chains. Extensibility: the loader, hooks, pub-sub. Communication: cascade, .flow, visible results. Everything else is emergent behavior from these four interacting.
Created at boot by ensureLandRoot(). They hold infrastructure state, not user content. Every boot verifies all six exist. Missing nodes are recreated. System nodes with wrong parents are repaired automatically.
The top-level node. Parent of all trees and system nodes. rootOwner: "SYSTEM".
Land UUID, domain, Ed25519 public key for Canopy federation signing. Set once at boot.
.config holds all runtime configuration as metadata keys. Readable and writable via CLI, API, or the land-manager AI.
Canopy federation peer list. Children are peer land records with status and heartbeat history.
Extension registry. Each loaded extension is a child node with version and schema version for migrations.
.flow is the cascade result store. Daily partition children hold results by date. Retention deletes entire partitions. flowMaxResultsPerDay caps growth per day. The land's short-term memory of what moved and what happened.
Set from the CLI, the API, or through the land-manager AI. No code changes. No restarts.
LAND_NAME: Display name (default My Land)
landUrl: Land URL for headers and SSRF protection (default auto)
landLlmConnection: Land-wide fallback LLM for users without their own (default null)
llmTimeout: Seconds per LLM API call (default 900)
llmMaxRetries: Retry count on 429/500 (default 3)
maxToolIterations: Tool calls per message (default 15)
maxConversationMessages: Context window size (default 30)
noteMaxChars: Max characters per note (default 5000)
treeSummaryMaxDepth: How deep the AI sees the tree (default 4)
treeSummaryMaxNodes: How many nodes the AI sees (default 60)
sessionTTL: Session idle timeout in seconds (default 900)
maxSessions: Max concurrent sessions (default 10000)
chatRetentionDays: Auto-delete chats after N days (default 90)
contributionRetentionDays: Auto-delete contributions after N days (default 365)
timezone: Land timezone for AI prompts (default auto)
disabledExtensions: Extensions to skip on boot (default [])
cascadeEnabled: Enable cascade signals (default false)
uploadEnabled: Master switch for uploads (default true)
maxUploadBytes: Hard ceiling per upload (default 104857600)
jwtExpiryDays: JWT token lifetime in days, 1-365 (default 30)
treeCircuitEnabled: Master switch for tree circuit breaker (default false)
carryMessages: Messages carried across mode switch (default 4)
chatRateLimit: Max chat messages per rate window (default 10)
chatRateWindowMs: Chat rate limit window in ms (default 60000)
maxChatMessageChars: Max characters per WebSocket chat message (default 5000)
maxMessageContentBytes: Max bytes per message in conversation history (default 32768)
maxChatContentBytes: Max bytes stored per chat message in DB (default 100000)
maxChainStepContentBytes: Max bytes per orchestrator chain step log (default 2000)
maxSystemPromptChars: Max system prompt length before truncation (default 32000)
maxScopedSessions: Hard cap on scoped sessions (default 20000)
maxAiContextEntries: Hard cap on AI chat context tracking (default 10000)
requestQueueMaxDepth: Max waiting tasks per queue key (default 100)
staleSessionTimeout: Stale session cleanup in seconds (default 1800)
maxConversationSessions: Hard cap on conversation sessions (default 50000)
staleConversationTimeout: Idle conversation sweep in seconds (default 1800)
llmMaxConcurrent: Max in-flight LLM calls (default 20)
failoverTimeout: LLM failover stack timeout in seconds (default 15)
toolCallTimeout: Seconds before a tool call is killed (default 60)
toolResultMaxBytes: Max tool result size in bytes (default 50000)
toolCircuitThreshold: Tool failures before session disable (default 5)
maxNotesPerNode: Max notes per node (default 1000)
maxConnectionsPerUser: Max LLM connections per user (default 15)
maxRegisteredTools: Max tools in registry (default 500)
maxRegisteredModes: Max modes in registry (default 200)
maxOrchestrators: Max registered orchestrators (default 10)
maxChainSteps: Max steps per pipeline (default 500)
maxOrchestratorLocks: Concurrent orchestrator locks cap (default 10000)
maxParseInputBytes: Max LLM JSON extraction input (default 200000)
maxMcpClients: MCP client pool cap (default 5000)
maxExtensionIndexes: Max indexes per extension (default 20)
maxInheritedStatusNodes: Max nodes per status cascade (default 10000)
resultTTL: Cascade result TTL in seconds (default 604800)
awaitingTimeout: Awaiting-to-failed timeout in seconds (default 300)
cascadeMaxDepth: Max propagation depth (default 50)
cascadeMaxPayloadBytes: Max signal payload (default 51200)
cascadeRateLimit: Max signals per node per minute (default 60)
cascadeMaxDeliveriesPerSignal: Max deliveries per signal (default 500)
flowMaxResultsPerDay: Max cascade results per day (default 10000)
maxDocumentSizeBytes: Document size ceiling (default 14680064)
ancestorCacheTTL: Parent chain cache TTL in ms (default 30000)
integrityCheckInterval: Tree fsck interval in ms (default 86400000)
allowedMimeTypes: Upload MIME filter, null = all (default null)
allowedFrameDomains: CSP frame-ancestors domains (default [])
allowedLlmDomains: LLM endpoint domain whitelist (default [])
maxTreeNodes: Node count health threshold (default 10000)
maxTreeMetadataBytes: Metadata size threshold (default 1073741824)
maxTreeErrorRate: Errors per hour threshold (default 100)
circuitNodeWeight: Node count weight (default 0.4)
circuitDensityWeight: Metadata density weight (default 0.3)
circuitErrorWeight: Error rate weight (default 0.3)
circuitCheckInterval: Health check interval in ms (default 3600000)
treeSummaryRecentNotes: Recent notes per node in summary (default 3)
treeSummaryPreviewChars: Note preview length in chars (default 200)
treeSearchResultLimit: Search results in tree context (default 10)
chatContributionQueryLimit: Contributions linked per chat (default 2000)
chatHistoryMaxSessions: Sessions per history query (default 50)
chatHistoryMaxChatsPerSession: Chain steps per session (default 200)
chatHistoryMaxDescendantIds: includeChildren expansion cap (default 500)
chatHistoryMaxContributions: Contributions per history query (default 5000)
canopyEventRetentionDays: Canopy event cleanup in days (default 30)
npmInstallTimeout: npm install timeout in ms (default 60000)
seedVersion: Current seed version (default 0.1.0)
The kernel protects itself from extensions, from runaway AI, and from time.
The kernel does not know about fitness, food, wallets, blogs, scripts, energy budgets, understanding runs, dream cycles, or gateway channels. It does not render HTML pages. It does not meter usage. It does not tag version numbers. It does not schedule recurring tasks. It does not propagate signals between nodes. It does not route cascade between lands. It does not filter content. It does not compress context.
The seed announces that content was written at a cascade-enabled position and records what happened. Propagation, routing, filtering, compression are all extensions. The seed provides structure, intelligence, extensibility, and communication. Extensions provide meaning.
The land is the ground. Trees grow from it. Each tree pulls data through cascade like roots pulling water. The conversation loop is photosynthesis: raw input becomes structured output. .flow is the water table, local to the land, felt by every tree. Signals cascade up through roots, across lands through Canopy, and down into other trees. Sometimes it pools. Sometimes it floods. The seed protects the ground. The tree survives. The structure holds.
The seed is AGPL-3.0. You can run it, modify it, build on it. If you modify the seed and run it as a service, you share your seed modifications.
Extensions are separate works. They interact with the seed through the defined API (core services bundle, hooks, registries, metadata Maps). Extension authors choose their own license. The seed license does not infect extensions. Build proprietary extensions, open source extensions, whatever you want. The ecosystem is free.
Every file in seed/ carries a one-line header: // TreeOS Seed . AGPL-3.0 . https://treeos.ai. Extension manifests declare a license field. The Horizon and CLI display it. Nothing blocks extensions without licenses. The seed is open. The ecosystem is free. Legal terms protect the seed. Code enforcement doesn't.
Every tool call goes through MCP. The AI and external clients share the same interface, the same security boundary, the same spatial scoping. Two entry points. Same destination.
The conversation loop calls tools through MCP. The AI says "create a node." Auth checks the session. Tree access validates the nodeId. Spatial scoping filters blocked extensions. The tool handler executes. The result flows back. The AI decides what to do next. Every tool call the AI makes goes through this path.
Claude Desktop, Cursor, VS Code, any MCP client connects to /mcp. Gets a session. Sends tool calls with auth headers. Same security checks. Same tool handlers. Same results. The external client brings its own intelligence. The kernel provides the tools and the security.
An extension registers tools once. Available to both paths. Per-session server isolation. Each client gets its own MCP session with its own auth context. Tool registration replay ensures every session sees every tool. Extension disable invalidates all sessions cleanly.
Commands are constraints on the orchestrator, not paths through it. Routing determines WHERE the message goes. The command determines WHAT the mode can do. The kernel enforces the query constraint (read-only tool filtering). Everything else is the orchestrator's decision.
Full interaction. The AI reads, writes, and responds naturally. "bench 135x10" at a fitness node: parses, logs, tells you about your progression.
Write and confirm. No commentary. "Logged. Bench 135x10/10/8." For speed. The user knows what they want. The tree does it.
Read-only safety. The AI can look but not touch. "How's my bench?" reads history without risking a stray write. Enforced in the kernel. Every orchestrator gets it for free.
The tree leads. You follow. The AI reads the structure, finds what needs doing, and walks you through it one step at a time. Extensions declare a guided mode. At a fitness node: workout coaching. At a food node: meal logging. No extension: the tree walks its own children.
Your data survives configuration changes. The capability layer is swappable. The data layer is permanent.
Run the full stack for six months. Fitness tracking, food logging, cascade signals, intelligence extensions analyzing patterns, dreams running at 3am. Switch to minimal. Eight extensions load. The rest go silent. Your LLM bill drops to zero.
Three months later, switch back. Every extension finds its data exactly where it left it. The fitness history is there. The food log is there. The codebook compressions are there. The tree remembers everything. It was sleeping, not dead.
Extension data lives in the metadata Map. Mongoose does not drop unknown Map keys. The .treeos-profile controls what LOADS, not what EXISTS. MongoDB keeps every key whether the extension is loaded or not. Load it later. The data is there. Build a full OS distribution. Test it. Strip it to the kernel. Build a different one on the same database. The data layer is permanent. The capability layer is swappable.
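A toy illustration of the load-versus-exist split. In the real system the Map is a Mongoose Map persisted in MongoDB; here a plain Map makes the same point:

```javascript
// The profile controls which extensions RUN; the metadata Map keeps every
// namespace either way. Names are illustrative.
const node = { metadata: new Map() };

// Full profile: a fitness extension writes six months of data.
node.metadata.set('fitness', { sessions: 182 });

// Switch to a minimal profile: fitness no longer loads...
const loadedExtensions = new Set(['core-ui']);

// ...but nothing touches its namespace. The data just sits there.
const stillThere = node.metadata.get('fitness');

// Switch back months later: the extension finds its state untouched.
loadedExtensions.add('fitness');
```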
Trust model: Extensions run in the same Node.js process. The kernel enforces metadata namespace isolation, spatial scoping, and circuit breakers. This protects against bugs, not malicious code. Review what you install. Same trust model as npm packages and Linux kernel modules.
Git tracks changes to flat files. Unix gives you hierarchy and permissions. Knowledge graphs give you relationships. But none of them put AI at every node. None of them let the structure itself think.
LangChain, CrewAI, AutoGen. They are pipelines. The agent runs, does a thing, stops. There is no persistent home. No position. No "where am I" changing what the agent can do. No tree that grows and prunes and splits and remembers.
Obsidian, Notion, Roam. Hierarchy and linking. But the structure is inert. You organize it. You maintain it. The tool does not compress its own knowledge, detect its own contradictions, or propose its own reorganization.
Signals propagating through a tree, filtered by perspective, sealed for transport, flowing through a nervous system that the tree monitors with its own pulse. That is not a feature of something else. That is a new thing.
The biological metaphor is not decoration. Seeds, roots, branches, canopy, mycelium, rings, pulse, breath, pruning, dormancy, growth cycles. It maps because the problem actually is biological. A living system that grows, maintains itself, connects to others, and eventually reproduces. Software does not usually work like that. This does.
Most software starts with "what should the user see" and works backward. The seed started with "what does an AI need to live somewhere permanently" and worked forward. Two schemas. A hook system. An extension loader. That is the seed. Everything else grew from it.