Document 1: Spaces Dashboard Design — Complete Feature Breakdown

For Junior Developers New to the aiConnected OS Project



What This Document Covers

This document defines Spaces — the unified workspace hub that lives inside every Instance (think of an Instance as a “project” container). Spaces is where all non-chat content is organized, accessed, and managed. It is the single most important organizational feature for users who are actively building things, managing tasks, collecting ideas, or producing outputs.

Context: Where Spaces Fits in the Platform

Before diving in, you need to understand the hierarchy:
  1. aiConnected OS — the entire platform
  2. Instances — individual project/workspace containers (like “projects” in ChatGPT or Claude)
  3. Instance Dashboard — the home screen when you open an Instance
  4. Spaces — a tab/section within the Instance Dashboard that unifies all non-chat artifacts
Spaces is accessed via one sidebar entry called Spaces inside the Instance Dashboard. Everything inside Spaces is scoped to that Instance by default.

FEATURE 1: Spaces Home View (The “Control Room”)

What It Is

When a user clicks “Spaces” in the Instance Dashboard sidebar, they land on a Spaces Home screen. This is NOT a file browser or a list of links. It’s a visual “control room” — a dashboard-within-a-dashboard that shows overview cards for every content type the Instance contains.

What It Does

Displays summary cards for each content type (Tasks, Whiteboard, Live Docs, Chats, Files, Folders, Snippets, Links, Exports), each showing key stats and quick-action buttons.

Intended Purpose

Users accumulate dozens of files, tasks, documents, and code snippets across many chat conversations. Without Spaces, all of this content is trapped inside individual chats and invisible unless you scroll through conversation history. Spaces surfaces everything in one place so users can act on it without hunting.

Why Anyone Should Care

Current AI platforms (ChatGPT, Claude, Gemini) have no native way to see “everything I’ve created or saved across all my conversations in this project.” Content gets buried in chat history. Spaces solves this by treating every artifact as a first-class object that exists independently of the chat that created it.

How It Should Be Built

Top Bar Components:
  • Scope Selector — a dropdown or toggle with two options:
    • This Instance (default) — shows content from the current Instance only
    • All Instances — shows content aggregated across every Instance the user has (this is a future/power-user feature)
  • Global Search Bar — searches across all content types within the current scope (tasks, docs, chats, files, etc.)
  • Filter Bar — three filter dimensions:
    • Type: Tasks, Docs, Whiteboard, Chats, Files, Snippets, Folders, Links, Exports
    • Time: Today, This week, This month, Custom date range
    • Source: All, AI-created, User-created, Imported
Main Content Area — Overview Cards: Each content type gets a large card with:
  • A stat summary (e.g., “12 Open | 3 Due Today” for Tasks)
  • Quick-action buttons (e.g., View all, New Task)
  • A preview strip showing the most recent 2-3 items
The cards are:
  • Tasks — “12 Open | 3 Due Today” — Buttons: View all, New Task — Shows next 3 tasks
  • Whiteboard — “1 Whiteboard | 42 pinned items” — Buttons: Open whiteboard, View pinned items list — Shows recently pinned strip
  • Live Documents — “6 Documents | Last updated 2 hours ago” — Buttons: View all, New document — Recently updated docs list
  • Chats — “32 Chats | 5 linked to this instance” — Buttons: View chats, Start chat from task — Last 3 active chats
  • Folders — “4 Folders | 21 items inside” — Buttons: View all folders, New folder
  • Files — “63 Files | 18 Images, 11 PDFs, 4 Audio, 30 Other” — Buttons: Browse files — Recent uploads
  • Code Snippets — “9 Snippets” — Buttons: View all, New snippet
  • Links — “15 Links” — Buttons: View all, Add link
  • Exports — “7 Exports | 3 Presentations, 4 Docs” — Buttons: View all, Create export
Starring/Favoriting: Users can “star” any card type. Starred types float to the top of Spaces Home. Non-starred cards can be collapsed into a compact row to reduce visual noise.

Technical Notes for Developers

  • Each overview card needs a real-time or near-real-time count query against the Instance’s content store
  • The scope selector changes the data source for every card simultaneously
  • Search should be full-text across all content types with type-faceted results
  • Cards should be rendered as reusable components since the same data model drives both the overview card and the full dedicated view
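To make the "reusable component" note concrete, here is a minimal TypeScript sketch of a shared card-summary shape plus the starring behavior described above. All type and function names are illustrative, not part of the spec:

```typescript
// Hypothetical shape driving both the overview card and the dedicated tab
// view. Field names are assumptions for illustration.
type ContentType =
  | "tasks" | "whiteboard" | "docs" | "chats" | "folders"
  | "files" | "snippets" | "links" | "exports";

interface CardSummary {
  type: ContentType;
  total: number;     // headline count, e.g. 12 for "12 Open"
  recent: string[];  // preview strip: titles of the 2-3 newest items
  starred: boolean;  // starred cards float to the top of Spaces Home
}

// Order cards for Spaces Home: starred first, then by item count descending.
function orderCards(cards: CardSummary[]): CardSummary[] {
  return [...cards].sort((a, b) => {
    if (a.starred !== b.starred) return a.starred ? -1 : 1;
    return b.total - a.total;
  });
}
```

The same `CardSummary` could be hydrated by a count query per content type, so the overview stays a thin projection over the dedicated views.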

FEATURE 2: Tabbed Sub-Navigation

What It Is

A horizontal tab bar that sits directly below the search bar inside the Spaces view. Tabs are: Overview | Tasks | Whiteboard | Live Docs | Chats | Folders | Files | Snippets | Links | Exports

What It Does

Clicking any tab switches the main content area to a full dedicated view for that content type. “Overview” is the Spaces Home described above.

Intended Purpose

Prevents Spaces from becoming its own cluttered sidebar. Instead of adding 10 new sidebar items to the Instance Dashboard, everything is contained within one Spaces entry, and users navigate between content types using lightweight tabs.

Why Anyone Should Care

Tab-based navigation inside a single view is far less cognitively demanding than a sidebar with dozens of entries. It keeps the main app sidebar clean while still giving power users access to every content type.

How It Should Be Built

  • Horizontal tab bar, scrollable if tabs overflow the viewport width on smaller screens
  • Clicking a tab replaces the main content panel (not a page navigation — this is a client-side view switch)
  • The currently active tab should be visually highlighted
  • Each tab view is its own component/page with dedicated layout, filters, and actions
  • URL routing should reflect the active tab (e.g., /instance/:id/spaces/tasks) for deep-linking and browser history support
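The deep-linking scheme above can be sketched as a pair of helpers. The `/instance/:id/spaces/:tab` shape comes from the spec; the helper and constant names are illustrative:

```typescript
// Tab slugs for the Spaces sub-navigation (illustrative slugs).
const SPACE_TABS = [
  "overview", "tasks", "whiteboard", "docs", "chats",
  "folders", "files", "snippets", "links", "exports",
] as const;

type SpaceTab = (typeof SPACE_TABS)[number];

// Build a deep link for a given Instance and tab.
function spacesPath(instanceId: string, tab: SpaceTab = "overview"): string {
  return `/instance/${instanceId}/spaces/${tab}`;
}

// Parse a path back into its parts; returns null for non-Spaces URLs.
function parseSpacesPath(
  path: string,
): { instanceId: string; tab: SpaceTab } | null {
  const m = path.match(/^\/instance\/([^/]+)\/spaces\/([^/]+)$/);
  if (!m) return null;
  if (!(SPACE_TABS as readonly string[]).includes(m[2])) return null;
  return { instanceId: m[1], tab: m[2] as SpaceTab };
}
```

With routes defined this way, the browser back button and shared links both restore the exact tab the user was on.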

FEATURE 3: Tasks Space

What It Is

A lightweight task/to-do list scoped to the Instance. Not a full project management tool — just a fast way to capture “do this later” items that emerge from conversations.

What It Does

Displays all tasks for the Instance in a list with columns: Task name, Source (which chat/message created it), Status (Open / In Progress / Done), Due date, Tags, and row-level actions.

Intended Purpose

During a brainstorming chat, users often think “I need to do X later.” Without a task system, that thought is lost in chat history. Tasks let users capture action items from any chat and manage them separately.

Why Anyone Should Care

Every other AI chat platform loses action items inside conversations. This feature means ideas that emerge in chat become trackable, actionable items that live beyond the conversation.

How It Should Be Built

List/Table View with columns:
  • Task name (text)
  • Source — which chat, message, or whiteboard item created it. Clicking opens the original source.
  • Status — Open, In Progress, Done (start with just Open / Done for v1)
  • Due date — optional date picker
  • Tags — free-text tags (e.g., “PRD”, “UI”, “Sales”)
  • Actions column
Quick Filters:
  • Status: Open / In Progress / Done
  • Timing: Due Today / This Week / Overdue
  • Origin: Created from chat / Created manually / Created by AI
Row Actions (per task):
  • Open in chat — jumps to the original message that spawned this task
  • Start new chat from task — creates a new conversation pre-seeded with the task description
  • Convert to live document — promotes the task into a Live Document
  • Create reminder / external notification — sends to email, Slack, etc. (future integration)
  • Pin to whiteboard — adds the task as a node on the Instance’s Whiteboard
Data Model (minimal):
Task {
  id: string
  instance_id: string
  title: string
  status: "todo" | "in_progress" | "done"
  source_type: "message" | "manual" | "whiteboard" | "reference"
  source_chat_id?: string
  source_message_id?: string
  due_date?: Date
  tags: string[]
  created_at: Date
  updated_at: Date
}
Key Behavior:
  • Tasks are scoped per Instance. There is no global task list in v1, but an “All Tasks” rollup across Instances may be added later.
  • The Tasks feature can be toggled ON/OFF per Instance Type in settings (not every Instance needs tasks).
  • Creating a task from a chat message pre-fills the title from the message content.
  • Status changes should be single-click (checkbox or status pill toggle).

FEATURE 4: Whiteboard Space

What It Is

The management interface for the Instance’s visual Whiteboard/Board (a Miro-like infinite canvas — defined in detail in Document 5).

What It Does

From Spaces, the Whiteboard view shows:
  • A primary Open Whiteboard button to launch the full canvas
  • A list/table of all pinned items currently on the Whiteboard, with: Type (message, image, export, link, note), Source chat, Short preview, When it was pinned

Intended Purpose

The Whiteboard itself is a spatial canvas. But sometimes users want a quick list view of everything on it — to filter, unpin, or convert items — without opening the full canvas.

Why Anyone Should Care

Users pin dozens of items from different chats to the Whiteboard over days or weeks. This list view gives them a fast way to audit what’s there, clean up stale items, or convert pinned content into tasks/documents.

How It Should Be Built

  • Open Whiteboard button launches the full canvas view (separate component, defined in Doc 5)
  • Pinned items table with filters by type
  • Each row allows: Open in original chat, Unpin, Convert to Task, Add to a Live Document section, Convert to Export draft

FEATURE 5: Live Documents Space

What It Is

A central hub for long-form, evolving documents — PRDs, specs, business plans, etc. — that can be fed content from multiple chats.

What It Does

Shows a list of all Live Documents in the Instance with columns: Title, Description/Tagline, Last updated, Linked chats count, Status (Draft / In Review / Final).

Intended Purpose

In real projects, a single document (like a PRD) gets built incrementally across many conversations. Live Documents are long-form artifacts that persist and evolve, fed by content from any chat in the Instance.

Why Anyone Should Care

No AI platform currently lets you build a single document by feeding it content from multiple separate conversations. Live Documents solve the “my PRD is scattered across 15 chats” problem.

How It Should Be Built

  • Document list with columns and status badges
  • Click a document to open it in an editor panel (rich text editor)
  • “Linked chats” shows which conversations contributed content (clickable links back to source chats)
  • “Add section from chat” lets users push content from any chat message into a specific section of the doc
  • “Create export” generates a downloadable PDF, presentation, or other format from the Live Document
  • Status workflow: Draft → In Review → Final
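The status workflow above is a simple linear state machine. A minimal sketch, assuming lowercase status identifiers (the spec only names the three display labels):

```typescript
// Draft → In Review → Final, as a one-way transition table.
type DocStatus = "draft" | "in_review" | "final";

const NEXT_STATUS: Record<DocStatus, DocStatus | null> = {
  draft: "in_review",
  in_review: "final",
  final: null, // terminal state: no further transitions
};

// Advance a Live Document to its next status, or throw if already final.
function advanceStatus(current: DocStatus): DocStatus {
  const next = NEXT_STATUS[current];
  if (next === null) throw new Error(`Document is already ${current}`);
  return next;
}
```

Keeping the transitions in a table makes it easy to later allow moves backward (e.g. Final → Draft on reopen) without touching call sites.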

FEATURE 6: Chats Space

What It Is

A view of all chats associated with the current Instance, with relationship metadata.

What It Does

Shows a list of all chats with columns: Chat title, Type (Standard, Linked conversation, Reference), Last activity, Linked artifacts (tasks, docs, whiteboard items), Folder association.

Intended Purpose

Gives users a bird’s-eye view of every conversation in the Instance, along with what each conversation has produced (tasks, documents, pins, etc.) and how conversations relate to each other.

Why Anyone Should Care

In current platforms, chats are flat lists with no visible relationships. This view shows the conversation graph — which chats branched from which, what artifacts each chat produced, and how everything connects.

How It Should Be Built

  • Chat list with metadata columns
  • “Relationships” panel per chat showing: Parent/child linked conversations, Referenced conversations (context pull-ins)
  • Actions: Open chat, Add to folder, Mark as “primary” for a topic
  • Links to artifacts that were created from each chat (tasks, docs, whiteboard pins)

FEATURE 7: Folders Space

What It Is

A structural organization layer that sits between “Instance” and “chat.” Folders can contain chats, tasks, docs, files, and more.

What It Does

Shows a list of folders with: Folder name, Description, Item counts (Chats | Docs | Tasks | Files), Last updated. Clicking into a folder shows a mini-Spaces scoped to just that folder’s contents.

Intended Purpose

Large Instances need sub-organization. A folder for “UI Work,” another for “Market Research,” another for “Sales” — each containing only the relevant chats, tasks, and files.

Why Anyone Should Care

Without folders, a project Instance with 50+ chats and dozens of files becomes unmanageable. Folders add the hierarchical organization that power users need.

How It Should Be Built

  • Folder list at the top level
  • Inside each folder: a mini-Spaces view with tabs Summary | Chats | Tasks | Docs | Files
  • A folder is essentially a “sub-space” — same UI patterns, narrower scope
  • Folders are optional — users don’t have to use them

FEATURE 8: Files Space

What It Is

A centralized file browser for all uploaded or AI-generated files in the Instance.

What It Does

Shows a grid or list of files with filters by: Type (Image, PDF, Audio, Video, Other), Source (Upload, Generated by AI, Imported), Linked items (Chats, Live Docs, Whiteboard, Exports). Each file shows: Preview/thumbnail, Name, Type, Size, Linked items.

Intended Purpose

Files get created throughout chat conversations — images generated, PDFs uploaded, code exported. Without Files Space, these are buried in individual chat messages. This view surfaces them all.

Why Anyone Should Care

Finding “that image the AI generated last week” shouldn’t require scrolling through 50 chat messages. Files Space makes every file instantly discoverable and actionable.

How It Should Be Built

  • Grid view (thumbnails) and list view toggle
  • Filters by type, source, and linked items
  • Actions per file: Open viewer, Attach to live doc or export, Pin to whiteboard, Insert into chat, Add to folder
  • Files should be automatically indexed when created (in chat, by AI, or by upload)
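Automatic indexing can be sketched as one normalizer that every file-producing event passes through. Event and field names below are assumptions for illustration:

```typescript
// Where a file came from, per the Source filter above.
type FileSource = "upload" | "ai_generated" | "imported";

interface FileEvent {
  instanceId: string;
  chatId?: string;   // present when the file appeared inside a chat
  name: string;
  mimeType: string;
  sizeBytes: number;
  source: FileSource;
}

interface FileIndexEntry {
  instanceId: string;
  name: string;
  type: "image" | "pdf" | "audio" | "video" | "other";
  sizeBytes: number;
  source: FileSource;
  linkedChatId?: string;
}

// Normalize any file event into one searchable index entry.
function indexFile(ev: FileEvent): FileIndexEntry {
  const type =
    ev.mimeType.startsWith("image/") ? "image"
    : ev.mimeType === "application/pdf" ? "pdf"
    : ev.mimeType.startsWith("audio/") ? "audio"
    : ev.mimeType.startsWith("video/") ? "video"
    : "other";
  return {
    instanceId: ev.instanceId,
    name: ev.name,
    type,
    sizeBytes: ev.sizeBytes,
    source: ev.source,
    linkedChatId: ev.chatId,
  };
}
```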

FEATURE 9: Code Snippets Space

What It Is

A storage and retrieval system for reusable code, prompts, or configuration snippets.

What It Does

Shows a list of saved snippets with: Language/Type (JS, Python, Shell, Prompt, etc.), Title, Short description, Tags, Origin (which chat created it).

Intended Purpose

Developers and power users frequently generate useful code snippets during conversations. This feature saves them as independent, searchable objects rather than losing them in chat history.

Why Anyone Should Care

If the AI writes a useful database query or a Python function during a chat, the user should be able to find and reuse it without searching through old conversations.

How It Should Be Built

  • Snippet list with language syntax highlighting in previews
  • Actions: Copy to clipboard, Insert into chat, Insert into live doc, Attach to folder
  • Snippets can be created from chat (contextual “Save as snippet” action on code blocks) or directly within Snippets Space

What It Is

A bookmark/reference manager for all saved links — both internal (to other chats, docs, exports) and external (URLs).

What It Does

Shows all saved links with: Title, URL, Type (External website, Internal chat, Live doc section, Export), Origin (what created it), Tags.

Intended Purpose

During research and brainstorming, users accumulate many references. Links Space keeps them organized and actionable rather than lost in chat.

How It Should Be Built

  • Link list with type badges
  • Actions: Open link, Add to folder, Convert to task (“Follow up on this resource”)
  • Links can be saved from chat messages (contextual “Save link” action) or created directly

FEATURE 11: Exports Space

What It Is

A hub for all final output files — PDFs, slide decks, markdown exports, etc. — generated from Live Documents, Whiteboard compilations, or direct export actions.

What It Does

Shows all exports with: Title, Type (PDF, Deck, Markdown, etc.), Source (which live doc/whiteboard/task generated it), Created date, Last regenerated.

Intended Purpose

When a user compiles their Whiteboard into a PRD or exports a Live Document as a PDF, that output lives here. It’s the “finished goods” section.

Why Anyone Should Care

Exports are the tangible deliverables users share with clients, teams, or stakeholders. Having them in one place with regeneration capability (re-export if the source doc changed) is essential.

How It Should Be Built

  • Export list with source traceability
  • Actions: Download, Regenerate (if source material changed), Share link, Attach to email (future integration), Add to folder
  • Regeneration should re-run the compilation from the current state of the source document/whiteboard
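The regeneration affordance needs one check: is the export older than its source? A minimal sketch, with field names assumed for illustration:

```typescript
// An export is "stale" (the UI should offer Regenerate) when its source
// document or whiteboard changed after the last regeneration.
interface ExportRecord {
  lastRegeneratedAt: Date;
}

interface ExportSource {
  updatedAt: Date; // last edit to the source doc/whiteboard
}

function isStale(exp: ExportRecord, source: ExportSource): boolean {
  return source.updatedAt.getTime() > exp.lastRegeneratedAt.getTime();
}
```

This comparison is cheap enough to run on every render of the Exports list, so stale badges stay accurate without a background job.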

FEATURE 12: Content Flow Into Spaces (Automatic Collection)

What It Is

The system by which content automatically flows from chats into Spaces.

What It Does

Whenever a user takes an action in chat — saves a task, pins to whiteboard, uploads a file, saves a snippet, generates an export — that content automatically appears in the appropriate Spaces section.

Intended Purpose

Spaces should feel like it “just collects things” without the user having to manually organize anything. The magic is that content flows in from conversations automatically.

Why Anyone Should Care

If users had to manually copy things from chat into Spaces, nobody would use it. Automatic collection makes Spaces a living, always-up-to-date workspace.

How It Should Be Built

From a chat message, users can:
  • Save as task → appears in Tasks Space
  • Pin to whiteboard → appears in Whiteboard Space
  • Add to live document → appears in Live Docs Space
  • Save snippet → appears in Snippets Space
  • Save link → appears in Links Space
  • Attach file to... → appears in Files Space
From system events:
  • Export created from a doc → appears in Exports Space
  • File uploaded in chat → appears in Files Space
  • AI generates an image → appears in Files Space
From Spaces itself:
  • Users can create tasks, docs, folders, snippets, and links directly from within Spaces, without going back to a chat.
Spaces is both a collector (stuff flows in from conversations) and a workbench (you can create and manage things directly).
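The collector behavior above is essentially a routing table from chat actions to Spaces sections. A sketch with illustrative identifiers:

```typescript
// Each chat action maps to exactly one Spaces section.
type ChatAction =
  | "save_as_task" | "pin_to_whiteboard" | "add_to_live_document"
  | "save_snippet" | "save_link" | "attach_file";

type SpaceSection =
  | "tasks" | "whiteboard" | "docs" | "snippets" | "links" | "files";

const ACTION_TO_SPACE: Record<ChatAction, SpaceSection> = {
  save_as_task: "tasks",
  pin_to_whiteboard: "whiteboard",
  add_to_live_document: "docs",
  save_snippet: "snippets",
  save_link: "links",
  attach_file: "files",
};

function routeAction(action: ChatAction): SpaceSection {
  return ACTION_TO_SPACE[action];
}
```

Because the map is exhaustive over `ChatAction`, adding a new chat action without deciding where it lands in Spaces becomes a compile-time error.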

FEATURE 13: Dashboard ↔ Spaces Relationship

What It Is

The structural relationship between the Instance Dashboard, Spaces, and the global view.

What It Does

Defines the navigation hierarchy: Dashboard → Instance View → Spaces (scoped to that Instance). A future “Global Spaces” view aggregates across all Instances.

Intended Purpose

Ensures users always know where they are in the hierarchy and can easily switch between Instance-scoped and global views.

How It Should Be Built

  • Instance View has tabs like: Chat, Spaces, Settings
  • Spaces inside each Instance is scoped by default to that Instance
  • The scope selector in Spaces allows switching to “All Instances” for a cross-Instance aggregate view
  • The global Dashboard (above Instance level) may show a Spaces summary widget with stats across all Instances

Example User Flow (End-to-End)

To help you visualize how all these features work together:
  1. User is brainstorming in a chat and writes: “We should create an onboarding flow for developers submitting engines.”
  2. User clicks Save as Task on that message.
  3. The task appears under Spaces → Tasks for the Instance.
  4. Next day, user opens Spaces → Tasks, sees the task, clicks Start chat from task.
  5. A new chat opens, pre-seeded with the task description.
  6. User and AI discuss the onboarding flow. User selects key messages and clicks Add to Live Document.
  7. A Live Document section is updated with the new decisions.
  8. In Spaces, user can now see:
    • One task (status: In Progress)
    • One Live Document with updated sections
    • Two chats linked together
    • All living in one organized place

Key Implementation Principles

  1. Spaces is not a separate app — it’s a view within the Instance Dashboard
  2. Everything is scoped to the Instance by default — global views come later
  3. Content flows in automatically from chats — users don’t manually “import”
  4. Every item traces back to its source — tasks know which message created them, files know which chat generated them
  5. Spaces is both read and write — users can browse existing content AND create new content directly
  6. The feature set is toggleable — not every Instance Type needs Tasks or Code Snippets; these can be turned on/off in Instance Settings
  7. Start simple, add views later — v1 is list views with filters; Kanban boards, graph views, and advanced layouts come in later versions

Document 2: Task Feature Spec — Complete Feature Breakdown

For Junior Developers New to the aiConnected OS Project


What This Document Covers

This document defines the Task System — a lightweight, per-Instance to-do list that captures action items emerging from conversations and transforms them into live, actionable objects. Tasks are not a full project management system. They are fast, contextual, and deeply integrated with the chat experience, the Whiteboard, reminders, email, Slack notifications, and AI-powered assistance.

Context: Why Tasks Exist

When users brainstorm inside AI chat conversations, action items naturally emerge: “I need to update that document,” “I should compile a PRD from this discussion,” “I need to follow up on this idea tomorrow.” On every existing AI platform (ChatGPT, Claude, Gemini), those action items are immediately lost in chat scroll. There is no native way to say “remind me about this” or “add this to a to-do list.” The Task feature solves this by giving every Instance a built-in, always-available to-do list that can be populated directly from chat messages, whiteboard items, or manual entry — and then acted upon through reminders, new chats, emails, and external notifications.

FEATURE 1: Core Task Object (Data Model)

What It Is

The fundamental data structure that represents a single task in the system.

What It Does

Stores everything needed to track what the user needs to do, where the task came from, and what actions have been taken on it.

Intended Purpose

Provides the structured foundation that every other Task feature builds on. Without a clean data model, nothing else works.

Why Anyone Should Care

The data model is intentionally minimal — this is NOT Jira or Asana. The goal is speed and simplicity, with optional fields for power users who want more control.

How It Should Be Built

Core Fields (v1 — ship these):
Task {
  id: string                    // Unique identifier
  instance_id: string           // Which Instance this task belongs to
  title: string                 // Short, human-readable description
  status: "todo" | "done"       // Start with just two states (add "in_progress" later)
  source_type: "message" | "manual" | "whiteboard" | "reference"
  source_reference: {
    conversation_id?: string    // Which chat it came from
    message_id?: string         // Which specific message
    whiteboard_item_id?: string // Which whiteboard node
  }
  created_at: DateTime
  completed_at?: DateTime       // When status changed to "done"
}
Optional Fields (add when needed):
  notes?: string                // Longer description or context
  due_at?: DateTime             // Optional due date
  priority?: "low" | "normal" | "high"  // Optional priority level
  updated_at: DateTime
Agentic Extension Fields (for reminder/notification features):
  reminder_at?: DateTime                    // When to trigger a reminder
  reminder_channels?: string[]              // ["in_app", "email", "slack"]
  email_recipients?: string[]               // Email addresses for email action
  slack_destination?: {                     // Slack workspace + channel/user
    workspace: string
    channel_or_user: string
  }
  automation_profile?: string               // Named preset like "default reminder"

Technical Notes

  • The v1 data model should be title, status, source_reference, created_at at minimum. Everything else is optional and can be added incrementally.
  • source_reference is critical — it’s what lets users jump back to the original message or whiteboard item that spawned the task. Never lose this link.
  • Later, the agentic fields can be broken into separate tables (TaskReminders, TaskNotifications) for cleaner separation of concerns.
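To show how the v1 fields fit together, here is a sketch of a constructor for message-sourced tasks. The field names follow the model above; the id generator is a stand-in:

```typescript
interface SourceReference {
  conversation_id?: string;
  message_id?: string;
  whiteboard_item_id?: string;
}

interface Task {
  id: string;
  instance_id: string;
  title: string;
  status: "todo" | "done";
  source_type: "message" | "manual" | "whiteboard" | "reference";
  source_reference: SourceReference;
  created_at: Date;
  completed_at?: Date;
}

// Stand-in id generator; a real system would use UUIDs from the database.
function newId(): string {
  return Math.random().toString(36).slice(2);
}

// Create a task from a chat message, preserving the source link.
function taskFromMessage(
  instanceId: string,
  conversationId: string,
  messageId: string,
  title: string,
): Task {
  return {
    id: newId(),
    instance_id: instanceId,
    title,
    status: "todo",
    source_type: "message",
    source_reference: { conversation_id: conversationId, message_id: messageId },
    created_at: new Date(),
  };
}
```

Note that `source_reference` is always populated for message-sourced tasks; this is the link the "Technical Notes" above say must never be lost.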

FEATURE 2: Creating Tasks from Chat Messages

What It Is

The ability to turn any chat message into a task with one click.

What It Does

When a user opens the more-actions menu on any message in a chat, they see an “Add to Tasks” option. Clicking it opens a small inline modal where the title is pre-filled with a smart summary of the message content, and the user can optionally set a due date and notes before saving.

Intended Purpose

This is the primary way tasks are born. During a conversation, the user thinks “I need to do something about this” — and instead of losing that thought, they capture it instantly without leaving the chat.

Why Anyone Should Care

This is the feature that makes the difference between “I had a great idea during a chat but forgot about it” and “I have a running list of everything I need to act on.” It’s the bridge between thinking and doing.

How It Should Be Built

User Flow:
  1. User is in a chat conversation
  2. User clicks on a specific message
  3. User selects “Add to Tasks” (or “Remind me about this later”)
  4. Small inline modal appears with:
    • Task title — pre-filled with an AI-generated smart summary of the message (e.g., “Update X document based on this idea”). User can edit this.
    • Due date — optional date picker
    • Notes — optional text area for additional context
    • [Save] button
  5. System stores the task with:
    • source_type = "message"
    • conversation_id and message_id from the current chat
    • instance_id from the current Instance
  6. In the Tasks panel, the task shows a small “From message” badge. Clicking it jumps back to the exact message in the original chat.
Key Technical Details:
  • The “smart summary” for the title should be generated by the AI — take the message content and produce a concise action-oriented title. If AI is unavailable or too slow, fall back to truncating the first ~80 characters of the message.
  • The modal should be lightweight (not a full-page form). Think: inline popover or small slide-in panel.
  • After saving, show a brief confirmation toast: “Task saved” with a link to the Tasks panel.
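The truncation fallback described above can be sketched as a small pure function. The ~80-character limit comes from the spec; breaking on a word boundary is an added assumption:

```typescript
// Fallback title when AI summarization is unavailable or too slow:
// collapse whitespace, cut to ~80 chars, prefer a word boundary.
function fallbackTitle(message: string, maxLen = 80): string {
  const text = message.trim().replace(/\s+/g, " ");
  if (text.length <= maxLen) return text;
  const cut = text.slice(0, maxLen);
  const lastSpace = cut.lastIndexOf(" ");
  // Only break on the boundary if it doesn't throw away too much text.
  return (lastSpace > 40 ? cut.slice(0, lastSpace) : cut) + "…";
}
```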

FEATURE 3: Creating Tasks Manually

What It Is

A simple text input at the top of the Tasks panel that lets users type a task directly without referencing a specific message.

What It Does

User types a one-line task description and presses Enter. The task is created immediately with source_type = "manual" and no source reference.

Intended Purpose

Sometimes users just want a quick reminder that isn’t tied to a specific message: “Review this chat later,” “Schedule a call about the pricing model,” etc.

Why Anyone Should Care

Not every task comes from a specific message. Manual entry covers the cases where the user just wants to jot down a thought or reminder without the friction of finding a message to attach it to.

How It Should Be Built

  • Simple text input at the top of the Tasks panel with placeholder text: “Add a task…”
  • Press Enter to create the task instantly
  • Optional: a small “More” button next to the input that expands to show Notes, Due date, and Priority fields
  • v1 can be just the one-line input. Advanced fields come later.
  • Tasks created this way can optionally be linked to the current conversation generically (store the conversation_id but no specific message_id)

FEATURE 4: Creating Tasks from the Whiteboard

What It Is

The ability to create tasks directly from Whiteboard nodes (sticky notes, clusters, cards).

What It Does

Each whiteboard item has a menu with a “Create Task from This” option. Same flow as creating from a message: pre-filled title, optional due date and notes, stored with source_type = "whiteboard" and the whiteboard_item_id.

Intended Purpose

The Whiteboard is where users collect and organize ideas from many conversations. Some of those ideas are actionable. This bridges “idea space” (whiteboard) to “action space” (tasks).

Why Anyone Should Care

Ideas sitting on a whiteboard are inert until someone decides to act on them. This feature turns ideas into trackable action items with one click.

How It Should Be Built

  • Same modal as the message-based creation flow
  • Title pre-filled from the whiteboard item’s text content
  • source_type = "whiteboard", whiteboard_item_id stored
  • In the Tasks panel, clicking the source badge opens the whiteboard and highlights the originating item

FEATURE 5: Tasks Panel (Viewing & Managing Tasks)

What It Is

A dedicated section within the Instance Dashboard that displays all tasks for the current Instance.

What It Does

Shows a list of tasks with checkbox status toggles, source badges, optional due dates, and basic filtering. Users can quickly scan what needs doing, mark things done, and jump to source context.

Intended Purpose

The Tasks panel is where users go to answer: “What do I need to do for this project?” It’s the single place that aggregates all captured action items.

Why Anyone Should Care

Without this panel, tasks would just be invisible entries in a database. The panel makes them scannable, manageable, and actionable.

How It Should Be Built

Layout:
  • Panel title: “Tasks for this Instance”
  • Segmented filter bar: All | To Do | Done
  • List of task rows, each showing:
    • Checkbox (click to toggle todo ↔ done)
    • Title (click to open detail drawer)
    • Source badge: “From message” / “Manual” / “From whiteboard” (small icon + text)
    • Due date (if set)
Detail Drawer (click on task title):
  • Full notes (if any)
  • “Open source” button — jumps back to the original message or whiteboard item
  • Edit title, notes, due date, priority
  • Action buttons (Remind, Email, Start Chat, Notify — see agentic features below)
Sorting:
  • Default: status (To Do first) then created_at (newest first)
  • v2: drag-and-drop reorder
Key Design Principle: Keep it lightweight. Start with just To Do and Done. Do NOT add In Progress, priority levels, or subtasks in v1. Add those only when user feedback demands it. The moment this feels like a project management tool, you’ve gone too far.
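The default sort above (To Do first, then newest first) can be sketched as one comparator. Field names follow the v1 data model:

```typescript
interface TaskRow {
  status: "todo" | "done";
  created_at: Date;
}

// Default Tasks panel ordering: open tasks first, newest within each group.
function defaultSort(tasks: TaskRow[]): TaskRow[] {
  return [...tasks].sort((a, b) => {
    if (a.status !== b.status) return a.status === "todo" ? -1 : 1;
    return b.created_at.getTime() - a.created_at.getTime();
  });
}
```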

FEATURE 6: AI-Assisted Task Creation (“Sweep my chat”)

What It Is

The ability to ask the AI to scan a conversation and automatically propose tasks based on action items it identifies.

What It Does

User types something like: “Summarize what I need to do from today’s chat and add them as tasks.” The AI scans recent messages, identifies action items, and presents a confirmation modal with a proposed list of tasks. The user checks the ones they want and clicks “Create Tasks.”

Intended Purpose

After a long brainstorming session, users don’t want to manually scroll through 50 messages and create tasks one by one. The AI can do this in seconds.

Why Anyone Should Care

This is the difference between “tasks are a manual chore” and “tasks feel like they manage themselves.” AI-assisted creation dramatically reduces the friction of staying organized.

How It Should Be Built

  1. User triggers the command (via chat input or a “Scan for tasks” button in the Tasks panel)
  2. AI processes the recent conversation history for the current chat
  3. AI returns a list of proposed tasks with suggested titles:
    • “Draft PRD section on instance dashboard”
    • “Update Cognigraph doc with learning sub-architecture”
    • “Create UI sketches for tasks pane”
  4. Modal displays the proposed tasks with checkboxes (all checked by default)
  5. User unchecks any they don’t want, optionally edits titles
  6. User clicks “Create Tasks”
  7. System batch-creates all selected tasks with source_type = "message" and references to the relevant messages
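The batch-create step above can be sketched as a pure function. This is a minimal sketch, assuming illustrative field names (`ProposedTask`, `source_message_id`, the ID scheme) that are not the final schema:

```typescript
// Hypothetical shapes; field names are illustrative, not the final schema.
interface ProposedTask { title: string; messageId: string; selected: boolean; }
interface Task {
  id: string;
  title: string;
  status: "todo" | "done";
  source_type: "message" | "manual" | "whiteboard";
  source_message_id?: string;
  created_at: string;
}

// Batch-create only the tasks the user left checked in the confirmation modal.
function createTasksFromProposals(proposals: ProposedTask[], now: Date): Task[] {
  return proposals
    .filter((p) => p.selected)
    .map((p, i): Task => ({
      id: `task_${now.getTime()}_${i}`,
      title: p.title,
      status: "todo",
      source_type: "message",
      source_message_id: p.messageId, // keep traceability to the original chat message
      created_at: now.toISOString(),
    }));
}
```

Note that unchecked proposals are simply dropped; nothing is created without user confirmation, consistent with the "AI enhances, doesn't replace" principle.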

FEATURE 7: Set Reminder on a Task

What It Is

The ability to schedule a time-based reminder for any task, delivered through one or more channels (in-app notification, email, Slack).

What It Does

From any task’s action menu, user clicks “Set Reminder.” A small form appears where they choose when (date/time or relative like “tomorrow morning”) and how (in-app, email, Slack, or any combination). At the scheduled time, the system delivers the reminder through all selected channels.

Intended Purpose

Tasks without reminders are just lists that users forget to check. Reminders turn passive tasks into active nudges that find the user wherever they are.

Why Anyone Should Care

This is what makes tasks “agentic” — they don’t just sit there, they reach out and grab your attention when it matters.

How It Should Be Built

Reminder Form:
  • When: date/time picker, or quick options (“In 1 hour”, “Tomorrow morning”, “Next Monday”)
  • Channels: checkboxes for In-app, Email, Slack
  • Save button
Delivery:
  • In-app: notification badge in the app, plus a toast/banner when the user is active
  • Email: system sends an email with subject “Task reminder – [Task Title]”, body includes task title, notes, Instance name, and a deep link back to the task
  • Slack: system posts to the configured Slack channel/DM with task title, Instance name, notes, and a link
Backend:
  • Store reminder_at and reminder_channels on the task
  • A scheduled job (cron, n8n workflow, or internal scheduler) checks for due reminders and dispatches them
  • Events emitted: task.reminder.triggered → consumed by email service, Slack integration, in-app notification service
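One tick of the scheduled job can be sketched as a pure "scan" step that the cron/n8n wrapper would call. The `reminder_sent` flag and the default channel are assumptions for illustration:

```typescript
// Illustrative reminder fields; reminder_at / reminder_channels mirror the spec above.
type Channel = "in_app" | "email" | "slack";
interface TaskWithReminder {
  id: string;
  reminder_at?: string;          // ISO timestamp
  reminder_channels?: Channel[];
  reminder_sent?: boolean;       // assumed flag to avoid double-delivery
}
interface ReminderEvent { event: "task.reminder.triggered"; task_id: string; channels: Channel[]; }

// Find due, unsent reminders and build the events to dispatch.
function collectDueReminders(tasks: TaskWithReminder[], now: Date): ReminderEvent[] {
  return tasks
    .filter((t) =>
      t.reminder_at && !t.reminder_sent &&
      new Date(t.reminder_at).getTime() <= now.getTime())
    .map((t): ReminderEvent => ({
      event: "task.reminder.triggered",
      task_id: t.id,
      channels: t.reminder_channels ?? ["in_app"],
    }));
}
```

The caller would then emit each event and mark the task's reminder as sent in the same transaction.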

FEATURE 8: Start New Chat from Task

What It Is

The ability to launch a brand-new conversation that is pre-seeded with the task’s context.

What It Does

From any task’s action menu, user clicks “Start Chat from Task.” A new chat is created within the same Instance, pre-populated with the task title, notes, due date, and the content of the original source message (if applicable). The new chat is automatically linked to the task.

Intended Purpose

When it’s time to actually work on a task, the user shouldn’t have to manually copy context into a new conversation. This creates an instant, focused workspace for that task.

Why Anyone Should Care

This closes the loop between “capture” and “execute.” The task was born in a conversation, and now it spawns a new conversation to get it done — with all the context automatically carried over.

How It Should Be Built

  1. User clicks “Start Chat from Task” on any task
  2. System creates a new chat in the current Instance
  3. The chat’s initial context includes:
    • The task title and notes
    • The content of the source message/whiteboard item (if available)
    • A system message: “This conversation is about Task #[id]: [title]”
  4. The task record is updated with a link to the new chat: active_chat_id
  5. In the Tasks panel, the task shows “Active chat: [link]”
  6. In the new chat, a header or system message shows “This conversation is about Task #123” with a link back to the task and the ability to toggle status or update notes directly
  7. User can start working immediately: “Break this task into smaller steps,” “Draft the initial PRD outline,” etc.
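The context-seeding in step 3 can be sketched as follows; the `TaskSeed` shape and the exact context lines are assumptions, not the final format:

```typescript
// Illustrative input shape for a task spawning a chat.
interface TaskSeed { id: string; title: string; notes?: string; dueDate?: string; sourceMessage?: string; }

// Build the system message and initial context for the new chat.
function seedChatFromTask(task: TaskSeed): { systemMessage: string; context: string[] } {
  const context = [`Task: ${task.title}`];
  if (task.notes) context.push(`Notes: ${task.notes}`);
  if (task.dueDate) context.push(`Due: ${task.dueDate}`);
  if (task.sourceMessage) context.push(`Source message: ${task.sourceMessage}`);
  return {
    systemMessage: `This conversation is about Task #${task.id}: ${task.title}`,
    context,
  };
}
```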

FEATURE 9: Email from Task

What It Is

The ability to send an email (to yourself or someone else) directly from a task, optionally with AI-drafted content.

What It Does

Two modes:
a) Email to Yourself (Reminder/Snapshot):
  • System composes an email with the task title, notes, Instance name, a link back to the task, and optionally an AI-generated summary of the source message/whiteboard content.
  • One-click send.
b) Email to Someone Else (Action Request):
  • Modal with: Recipients (free-form email addresses + optional contact picker), CC/BCC fields
  • Toggle: “Have AI draft the email for me”
    • If ON, AI reads the task context (title, notes, Instance info, source message) and drafts a professional email
    • Example output: “Hey [Name], I’m working on the aiConnected chat dashboard and I need your input on the task system design. Specifically, I want feedback on…”
  • User can edit the draft before sending

Intended Purpose

Tasks often require communicating with other people — asking for feedback, delegating work, or just reminding yourself via email. This feature keeps that communication tied to the task instead of requiring the user to open a separate email client.

Why Anyone Should Care

Every other platform forces you to leave the app, open Gmail, manually compose context, and lose the connection between the task and the communication. This keeps everything linked.

How It Should Be Built

  • “Email” action in the task’s action menu
  • Sub-menu: “Email → Me” (quick send) or “Email → Someone Else” (opens modal)
  • Backend sends email via configured email provider (SendGrid, Gmail API, etc.)
  • Email contains a deep link back to the task in the app
  • Log the email action on the task record for audit trail
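The "Email → Me" quick send can be sketched as a payload builder handed to whichever provider is configured. The deep-link URL pattern and field names here are assumptions:

```typescript
interface EmailPayload { to: string[]; subject: string; body: string; }

// Compose the reminder/snapshot email described above.
function composeTaskEmail(opts: {
  userEmail: string; taskId: string; title: string; notes?: string;
  instanceName: string; baseUrl: string;
}): EmailPayload {
  const link = `${opts.baseUrl}/tasks/${opts.taskId}`; // hypothetical deep-link pattern
  const lines = [opts.title];
  if (opts.notes) lines.push(opts.notes);
  lines.push(`Instance: ${opts.instanceName}`, `Open task: ${link}`);
  return {
    to: [opts.userEmail],
    subject: `Task reminder – ${opts.title}`, // subject format from the spec
    body: lines.join("\n"),
  };
}
```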

FEATURE 10: Notify in External Apps (Slack)

What It Is

The ability to send a task notification to Slack (and eventually Teams, Discord, etc.).

What It Does

From any task, user clicks “Notify → Slack.” If Slack isn’t connected yet, they’re prompted to authenticate and configure a default workspace and channel. Once configured, a modal lets them choose a destination and customize a pre-filled message. Optional AI enhancement generates a more detailed explanation.

Intended Purpose

Many users work in teams where Slack is the primary communication hub. Being able to push task notifications directly from the AI platform into Slack keeps the team informed without manual copy-paste.

Why Anyone Should Care

Tasks that live only inside one app are invisible to the rest of the team. External notifications make tasks visible where the team actually communicates.

How It Should Be Built

First-time setup:
  • OAuth flow to connect Slack workspace
  • Choose default channel or DM
  • Store connection in user settings
Per-notification flow:
  1. User clicks “Notify → Slack” on a task
  2. Modal shows: Destination (default channel, or pick another), Pre-filled message template:
New task: [Task Title]
Instance: [Instance Name]
[Optional notes]
[Link to view task]
  3. Optional: “Write a more detailed message” toggle — AI generates a longer explanation using task context
  4. User can edit the message
  5. Click “Send”
  6. System posts to Slack via Slack API
Backend:
  • Event emitted: task.slack.notified
  • Handled by Slack integration service (or n8n workflow)
  • Store notification log on the task
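The pre-filled template above can be built as a pure function and then posted with Slack's `chat.postMessage` Web API method. This sketch only builds the message body; the channel ID and URL are placeholders:

```typescript
// Build the message body for the pre-filled template above.
function buildSlackNotification(opts: {
  channel: string; title: string; instanceName: string; notes?: string; taskUrl: string;
}): { channel: string; text: string } {
  const lines = [`New task: ${opts.title}`, `Instance: ${opts.instanceName}`];
  if (opts.notes) lines.push(opts.notes);
  lines.push(`<${opts.taskUrl}|View task>`); // Slack link markup
  return { channel: opts.channel, text: lines.join("\n") };
}
```

Because the user can edit the message before sending, the UI would show `text` in an editable field and only then hand the final payload to the Slack integration service.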

FEATURE 11: AI Task Agent (“What should I do next?”)

What It Is

An AI-powered meta-agent that can reason about the user’s entire task list and provide prioritized recommendations.

What It Does

The user can ask (in chat or via the Tasks panel): “Look at my tasks for this instance and tell me what I should work on next.” The AI reads all open tasks, considers due dates, priorities, and recency, and responds with a prioritized recommendation. It can also trigger actions like “Start Chat from Task #1.”

Intended Purpose

When a user has 15 open tasks, deciding where to start can feel overwhelming. The AI Task Agent acts as a lightweight personal assistant that helps prioritize.

Why Anyone Should Care

This is where tasks become truly “agentic” — the AI isn’t just storing tasks, it’s helping the user decide what matters most and taking action to help them get started.

How It Should Be Built

“What should I do next?” mode:
  1. User asks in chat or clicks a “Prioritize” button in Tasks panel
  2. AI reads all open tasks for the current Instance
  3. AI considers: due dates (overdue first), priority levels, time since creation, last activity
  4. AI responds with a ranked recommendation:
For this instance, the top three tasks to focus on next are:
1. Finalize task action design (due today)
2. Outline whiteboard UX
3. Document folder rules for instances vs. conversations

Want me to start a focused chat from Task #1?
  5. If user says yes, system triggers “Start Chat from Task” automatically
“Sweep my tasks” batch commands:
User can issue commands like:
  • “Set reminders for all tasks due this week”
  • “Email me a summary of all open tasks for this Instance”
  • “Post all high-priority tasks into Slack”
The AI:
  1. Identifies matching tasks
  2. Batch-creates the requested actions (reminders, emails, Slack posts)
  3. Confirms what it did: “Set a Slack reminder for 3 tasks, emailed you a summary of 7 open tasks”
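Step 1 of a batch command like "set reminders for all tasks due this week" reduces to a filter over open tasks. A minimal sketch, assuming a `due_date` field and a day-based window:

```typescript
interface OpenTask { id: string; status: "todo" | "done"; due_date?: string; }

// Identify open tasks due within the next `days` days.
function tasksDueWithin(tasks: OpenTask[], now: Date, days: number): OpenTask[] {
  const end = now.getTime() + days * 24 * 60 * 60 * 1000;
  return tasks.filter((t) =>
    t.status === "todo" && t.due_date &&
    new Date(t.due_date).getTime() >= now.getTime() &&
    new Date(t.due_date).getTime() <= end);
}
```

The matched set would then be shown to the user for confirmation before any reminders, emails, or Slack posts are created.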

FEATURE 12: Integration with Conversation Linking

What It Is

Tasks respect and benefit from the Linked Conversations feature (defined in Document 7).

What It Does

If a task was created from a message in Conversation A, and that message was later used to spawn Conversation B (via the linked conversations feature), the task can display both relationships: “Created from Conversation A, related to Conversation B.”

Intended Purpose

As conversations branch and evolve, tasks should maintain awareness of the full conversation graph, not just the single message they were created from.

How It Should Be Built

  • Store conversation_id and message_id at creation time
  • When displaying source links, also check if the source message appears in any ConversationLink records
  • If links exist, show “Related conversations” in the task detail drawer
  • v1: just store the IDs correctly so the linkage can be leveraged later. Don’t over-engineer the display.

FEATURE 13: Integration with Folders

What It Is

Tasks interact cleanly with the Instance Folder system without folders directly owning tasks.

What It Does

Since tasks are per-Instance and folders are per-Instance, the relationship is indirect. In the folder view, each Instance can show a small indicator: “3 open tasks.” A future folder-level view can aggregate: “All tasks for Instances in this folder.”

Intended Purpose

Keeps the mental model clean: Folders → contain Instances → Instances own Tasks. Tasks don’t belong to folders directly.

How It Should Be Built

  • In folder views, query task counts per Instance and display as badges
  • Future: folder-level aggregation view that combines task lists from all child Instances
  • Do NOT add a folder_id to the Task model — tasks belong to Instances, not folders

FEATURE 14: Instance Type Settings (Toggle Tasks On/Off)

What It Is

The ability to enable or disable the Tasks feature per Instance Type, with per-Instance overrides.

What It Does

Each Instance Type (e.g., “Deep Project,” “Casual Chat”) has a “Dashboard Modules” configuration where Tasks (and other features like Whiteboard, Folders, Pins) can be toggled on or off. Individual Instances can override their Type’s defaults.

Intended Purpose

Not every Instance needs a task list. A casual Q&A Instance would be cluttered by a Tasks panel. This keeps the interface clean for simple use cases while allowing full power for project Instances.

Why Anyone Should Care

Feature bloat kills products. Allowing users to turn features on/off per Instance Type means the platform adapts to how the user is actually using it, rather than forcing every Instance to look the same.

How It Should Be Built

Instance Type template config:
DashboardModules {
  whiteboard: boolean   // On/Off
  pins: boolean         // On/Off
  tasks: boolean        // On/Off
  folders: boolean      // On/Off
  references: boolean   // On/Off
}
Agentic sub-toggles (when Tasks is ON):
TaskActions {
  reminders: boolean    // On/Off
  email: boolean        // On/Off
  slack_external: boolean // On/Off
  ai_task_agent: boolean  // On/Off
}
Example configurations:
  • “Deep Project / Build Instance”: Tasks ON, all actions ON
  • “Casual Chat / Q&A Instance”: Tasks OFF (or Tasks ON but all actions OFF — just local notes)
Per-Instance override:
  • Each Instance’s Settings panel has a toggle: “Enable tasks for this instance” that overrides the Type default
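Resolving the effective module set (Type defaults plus per-Instance overrides) can be sketched as a simple merge, using the `DashboardModules` shape defined above:

```typescript
interface DashboardModules {
  whiteboard: boolean;
  pins: boolean;
  tasks: boolean;
  folders: boolean;
  references: boolean;
}

// Instance overrides win over the Instance Type defaults; unset fields fall through.
function effectiveModules(
  typeDefaults: DashboardModules,
  instanceOverrides: Partial<DashboardModules>,
): DashboardModules {
  return { ...typeDefaults, ...instanceOverrides };
}
```

Storing overrides as a sparse `Partial` means an Instance only records the toggles the user actually changed, so later edits to the Type defaults still flow through.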

FEATURE 15: Backend Event Architecture

What It Is

The event-driven system that powers all agentic task actions.

What It Does

Every task action (reminder triggered, chat started, email sent, Slack notified) emits a structured event that can be consumed by backend services or automation workflows.

Intended Purpose

Keeps the frontend simple (just emit events) while allowing flexible backend processing. Today it might be n8n workflows; tomorrow it could be internal microservices. The event layer is the abstraction that makes this possible.

Why Anyone Should Care

Without a clean event architecture, every new integration (Teams, Discord, SMS, webhooks) requires rewriting frontend logic. Events make the system extensible.

How It Should Be Built

Event types:
  • task.created — a new task was created
  • task.completed — a task was marked done
  • task.reminder.created — a reminder was set
  • task.reminder.triggered — a reminder fired (time elapsed)
  • task.chat.started — a new chat was started from a task
  • task.email.created — an email was sent from a task
  • task.slack.notified — a Slack notification was sent
Event payload (example):
{
  "event": "task.reminder.triggered",
  "task_id": "abc123",
  "instance_id": "inst456",
  "user_id": "user789",
  "channels": ["in_app", "email"],
  "timestamp": "2026-02-12T09:00:00Z"
}
Consumers:
  • In-app notification service → shows badge/toast
  • Email service → sends email via provider
  • Slack service → posts message via Slack API
  • Automation layer (n8n) → can listen and trigger arbitrary workflows
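The emit/consume pattern above can be sketched as a minimal in-memory event bus; a production system would likely sit on a queue or webhook layer instead:

```typescript
type Handler = (payload: Record<string, unknown>) => void;

// Minimal fan-out bus: consumers register per event type, emit notifies all of them.
class TaskEventBus {
  private handlers = new Map<string, Handler[]>();

  on(event: string, handler: Handler): void {
    const list = this.handlers.get(event) ?? [];
    list.push(handler);
    this.handlers.set(event, list);
  }

  emit(event: string, payload: Record<string, unknown>): number {
    const list = this.handlers.get(event) ?? [];
    list.forEach((h) => h(payload));
    return list.length; // how many consumers handled it
  }
}
```

The key property is that the frontend only ever calls `emit`; adding a new integration (Teams, SMS, webhooks) is just another `on` registration on the backend.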

Key Implementation Principles

  1. Start minimal — v1 is title, status, source_reference, created_at. Ship that first.
  2. Source traceability is sacred — every task must know where it came from (message, whiteboard item, or manual). Never lose this link.
  3. Tasks are launchpads, not checkboxes — the power isn’t in checking things off, it’s in the actions you can take FROM a task (start chat, send email, notify Slack, set reminder).
  4. Lightweight over heavyweight — start with To Do and Done only. Add In Progress, priority, subtasks, and drag-reorder ONLY when user feedback demands it.
  5. Toggleable per Instance Type — Tasks should never appear in Instances where they’d be clutter.
  6. Event-driven backend — every action emits an event. Never hardcode integration logic in the frontend.
  7. AI enhances, doesn’t replace — AI can propose tasks, draft emails, and prioritize lists, but the user always confirms before anything happens.

Document 3: Live Document Feature Spec — Complete Feature Breakdown

For Junior Developers New to the aiConnected OS Project


What This Document Covers

This document defines Live Documents — persistent, cross-chat, AI-editable documents that belong to an Instance (not to a single chat). Live Documents are the “formalization layer” where messy conversations become real documentation: PRDs, specs, business plans, research studies, presentations, and any other structured output.

Context: The Problem Live Documents Solve

In every existing AI platform, when you brainstorm a complex idea across multiple conversations, the only way to compile everything into a single document is to manually copy-paste from each chat into Google Docs or a word processor. There is no native way to:
  • Edit the same document from different conversations
  • Have the AI update a document while you’re chatting about something related
  • Track which conversations contributed to which sections of a document
  • Export a polished, branded document directly from the platform
Live Documents solve all of these problems. They are shared, always-on artifacts that sit inside an Instance and grow over time as you talk across any number of chats.

Context: Where Live Documents Fit in the Platform

Understanding the distinction between the different content types is critical:
  • Chat = chronological conversation (messy, exploratory, real-time thinking)
  • Whiteboard = nonlinear, spatial canvas for brainstorming and clustering ideas (visual)
  • Tasks = action items (“do this later”)
  • Live Documents = linear, structured, formalized documentation (the “official” output)
  • Folders = organizational structure for grouping chats/content
Live Documents are the formalization layer. Raw ideas live in chats. Curated ideas live on the Whiteboard. Finished, structured output lives in Live Documents.

FEATURE 1: The Live Document Object (Core Definition)

What It Is

A persistent document that belongs to an Instance, not to any single chat. It can be opened, edited, and contributed to from any chat within that Instance, or directly from the Instance Dashboard.

What It Does

Acts as a shared, always-available artifact that accumulates structured content over time. Multiple chats can feed content into the same document. The AI can edit the document via conversational commands. The document can be exported as PDF, Google Docs, presentations, or other formats.

Intended Purpose

Turns scattered conversation insights into polished, deliverable documentation without leaving the platform.

Why Anyone Should Care

This is the feature that transforms aiConnected from “a chat app with memory” into “a workspace that produces real deliverables.” Without Live Documents, users still have to copy-paste into external tools to create anything they can share with others.

Key Characteristics

  • Instance-scoped, chat-agnostic — the document belongs to the Instance, any chat can access it
  • Message → Document flow — you pull content from messages into the document (not the other way around). The document becomes its own editable artifact.
  • AI-editable, human-readable — stored as structured markdown or a block model. The AI can target specific sections for editing.
  • Versioned — every edit (human or AI) creates a new version. You can view history and revert.
  • Multiple output types — same underlying object can be rendered as a document, presentation outline, or other formats

FEATURE 2: Data Model

What It Is

The database structure that represents Live Documents, their content, and their relationships to conversations.

What It Does

Provides the foundation for storing, versioning, querying, and linking documents to their source conversations.

How It Should Be Built

LiveDocument (the container):
LiveDocument {
  id: string
  instance_id: string              // Which Instance this belongs to
  title: string                    // e.g., "Cognigraph PRD v1"
  type: "text_document" | "presentation_outline" | ...
  status: "draft" | "in_progress" | "review" | "final"
  created_by_user_id: string
  created_at: DateTime
  updated_at: DateTime
}
LiveDocumentContent — Option A (MVP, simple):
LiveDocumentContent {
  document_id: string
  content_markdown: string          // Full markdown blob
  version_number: integer
  updated_by: string                // User ID or "ai"
  updated_at: DateTime
}
This is the simplest approach: store the entire document as a single markdown string, and create a new version row every time it changes. Good enough for v1.
LiveDocumentContent — Option B (future-friendly, block-based):
LiveDocumentBlock {
  id: string
  document_id: string
  type: "paragraph" | "heading" | "list" | "quote" | "code" | "image" | "table"
  content: string                   // Markdown or structured text for this block
  origin: {                         // Where this block came from (optional)
    conversation_id?: string
    message_id?: string
  }
  order_index: integer              // Position in the document
}
This block-based model is more powerful because the AI can say “rewrite block #7” or “edit the ‘Feature B: Tasks’ heading” with precision. It also enables per-block source traceability — you can see exactly which chat message contributed each section.
DocumentMessageLink (traceability):
DocumentMessageLink {
  document_id: string
  block_id?: string                 // If using block model
  conversation_id: string
  message_id: string
}
This table tracks which messages contributed content to which parts of which documents. It enables “jump back to the original chat message” from within the document, and “show me all doc contributions from this chat” queries.

Technical Notes

  • Start with Option A (single markdown blob) for v1. It’s simpler to implement and sufficient for initial launch.
  • Plan the database schema so migrating to Option B (blocks) later doesn’t require a full rewrite. For example, even in Option A, you could store section headers as metadata.
  • Version history is critical from day 1 — AI edits can sometimes produce bad output, and users must be able to revert.
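Option A versioning (and the revert requirement) can be sketched as append-only history; reverting writes the old content as a new version rather than rewriting rows:

```typescript
// Subset of LiveDocumentContent from Option A above.
interface Version { version_number: number; content_markdown: string; updated_by: string; }

// Every save appends a full-markdown version row.
function saveVersion(history: Version[], content: string, updatedBy: string): Version[] {
  const next = (history[history.length - 1]?.version_number ?? 0) + 1;
  return [...history, { version_number: next, content_markdown: content, updated_by: updatedBy }];
}

// Revert by re-appending the old content as a NEW version (history is never rewritten).
function revertTo(history: Version[], versionNumber: number, updatedBy: string): Version[] {
  const target = history.find((v) => v.version_number === versionNumber);
  if (!target) throw new Error(`version ${versionNumber} not found`);
  return saveVersion(history, target.content_markdown, updatedBy);
}
```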

FEATURE 3: Opening Live Documents from Chat

What It Is

The ability to open and view a Live Document as a side panel while you’re in any chat under the same Instance.

What It Does

A “Live Docs” icon/button in the chat UI (top bar or right sidebar) opens a panel showing either a list of all Live Documents for the Instance, or directly opens the “primary” document if one has been pinned as the default.

Intended Purpose

Users shouldn’t have to leave their current conversation to work on a document. The side panel lets them see the document alongside the chat, drag content between them, and ask the AI to update the document in context.

Why Anyone Should Care

This is what makes Live Documents “live” — they’re always one click away from any conversation. You never have to context-switch to a separate app or tab.

How It Should Be Built

Entry Point:
  • “Live Docs” icon/button in the chat top bar or right sidebar
  • Clicking opens a panel (right side of the screen, like an artifact/canvas panel)
Panel Behavior:
  • If the Instance has multiple Live Documents: show a list view first with document titles, types, and last-updated timestamps. User clicks to open one.
  • If the Instance has a “primary” document pinned: open it directly
  • The panel opens alongside the chat — split view: chat on the left, document editor on the right
  • User can toggle between split view and full-page document view
Key Technical Details:
  • The document panel is a separate component that can be rendered alongside any chat
  • It shares the same Instance context, so the AI knows which document is open
  • The panel should support resize/collapse gestures
  • Auto-save the panel state (which document was open, scroll position) so reopening the panel returns to where the user left off

FEATURE 4: Adding Messages to a Live Document

What It Is

The ability to push content from any chat message into a Live Document with one click.

What It Does

On any message (user or AI), the message’s context menu includes “Add to Live Document…”, which opens a small modal where the user chooses: which document to add to, how to add the content (append to bottom, create new section, summarize first, extract bullet points), and optionally a section title.

Intended Purpose

This is the primary content flow: conversations generate insights, and those insights get pulled into the document. Without this feature, Live Documents would require manual typing — defeating the purpose.

Why Anyone Should Care

This is the bridge between “thinking out loud in chat” and “producing a deliverable document.” One click turns a chat message into a document section.

How It Should Be Built

User Flow:
  1. User is in a chat conversation
  2. User clicks on any message
  3. User selects “Add to Live Document…”
  4. Small modal appears with:
    • Which document: dropdown of all Live Documents in this Instance (or “Create new”)
    • How to add:
      • Append to bottom — adds the raw message content at the end
      • New section titled: [___] — creates a new heading + content (title auto-detected or user-entered)
      • Summarize this message and add — AI condenses the message into a tighter summary before adding
      • Extract bullet points and add — AI pulls out key points as a bulleted list
    • [Add] button
  5. System behavior:
    • Pulls the text (or AI-processed version) into the document
    • Creates a new block or appends to content_markdown
    • Creates a DocumentMessageLink record with conversation_id + message_id
    • Shows a toast: “Added to ‘Cognigraph PRD’ under ‘Feature C – Live Docs’”
Key Technical Details:
  • The “Summarize” and “Extract bullet points” options require an AI call — this should be fast (use a lightweight model or cached prompt)
  • Always store the DocumentMessageLink even if the content is summarized — the user should be able to trace back to the original message
  • If the user selects multiple messages (via multi-select in the chat), allow bulk-adding them to the document as a group
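The "append to bottom" / "new section" behavior in step 5, together with the DocumentMessageLink write, can be sketched against the Option A model. The `DocState` shape is illustrative:

```typescript
// Illustrative in-memory view of a document plus its traceability links.
interface DocState {
  content_markdown: string;
  links: { conversation_id: string; message_id: string }[];
}

// Push message text into the doc and always record the source link.
function addMessageToDocument(
  doc: DocState,
  message: { conversation_id: string; id: string; text: string },
  sectionTitle?: string,
): DocState {
  const addition = sectionTitle
    ? `\n## ${sectionTitle}\n\n${message.text}\n`  // "New section titled" mode
    : `\n${message.text}\n`;                        // "Append to bottom" mode
  return {
    content_markdown: doc.content_markdown + addition,
    links: [...doc.links, { conversation_id: message.conversation_id, message_id: message.id }],
  };
}
```

The link is written unconditionally, matching the rule that traceability survives even when the content is summarized before insertion.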

FEATURE 5: Editing the Document While Chatting (Dual-Stream Editing)

What It Is

Two parallel editing modes that work simultaneously: direct manual editing in the document panel, and AI-powered editing via chat commands.

What It Does

Stream 1 — Direct Manual Editing: The document panel is a rich-text/markdown editor. Users can type, format (headings, bold, bullets, links), and restructure content directly.
Stream 2 — AI Editing via Chat: When a Live Document is open in the side panel, the AI automatically has the document (or relevant sections) in its context. Users can issue commands in the chat that modify the document:
  • “Update the Live Document: add a section called ‘Live Document – Editing Across Chats’ that summarizes what we just discussed.”
  • “Rewrite the introduction to emphasize that live docs are cross-chat artifacts.”
  • “Create a table in the doc comparing Whiteboard vs Live Document vs Tasks.”

Intended Purpose

Some edits are faster by typing directly. Others are faster by asking the AI. Supporting both means the user always has the most efficient path.

Why Anyone Should Care

This is what makes Live Documents genuinely “AI-powered” — you’re not just using a text editor, you’re collaborating with an AI that can rewrite sections, generate tables, restructure content, and improve prose on command.

How It Should Be Built

Manual Editing:
  • Standard rich-text/markdown editor (consider Tiptap, Lexical, or ProseMirror for the frontend)
  • Support for: headings (H1-H4), bold, italic, bullet lists, numbered lists, code blocks, tables, images, links, callout boxes
  • Auto-save on every change (debounced, e.g., save after 2 seconds of inactivity)
  • Each save creates a new version entry
AI Editing:
  • When a Live Document panel is open, the current document content (or a relevant slice) is injected into the AI’s context for the current chat
  • The AI interprets “the document” or “the live doc” as the currently open Live Document
  • AI produces a patch (new content, replacement content, or structural change)
  • System applies the patch to content_markdown or specific blocks
  • A new version is saved automatically
  • The document panel refreshes to show the change in real-time
Key Technical Details:
  • For the block-based model (Option B), AI edits can target specific blocks by ID or heading name
  • For the simple model (Option A), AI rewrites the full markdown and the system diffs + saves
  • AI edits MUST generate new versions — users must be able to undo bad AI rewrites
  • Consider showing a brief diff or “AI edited these sections” indicator after an AI edit
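For the "AI edited these sections" indicator under Option A, one cheap approach is to split both versions on headings and report which sections differ. H2 granularity is an assumption here:

```typescript
// Split a markdown blob into (H2 heading -> body) and compare two versions.
function changedSections(before: string, after: string): string[] {
  const split = (md: string): Map<string, string> => {
    const sections = new Map<string, string>();
    let current = "(intro)"; // content before the first heading
    for (const line of md.split("\n")) {
      if (line.startsWith("## ")) {
        current = line.slice(3).trim();
        sections.set(current, "");
      } else {
        sections.set(current, (sections.get(current) ?? "") + line + "\n");
      }
    }
    return sections;
  };
  const a = split(before);
  const b = split(after);
  const changed: string[] = [];
  for (const [name, text] of b) if (a.get(name) !== text) changed.push(name);
  return changed;
}
```

This is coarse by design: it answers "which sections did the AI touch?", which is enough for a post-edit indicator, while the version history handles full undo.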

FEATURE 6: Instance Dashboard Document Hub

What It Is

A “Documents” tab in the Instance Dashboard that shows all Live Documents for the Instance, with management actions.

What It Does

Shows a table/grid of all Live Documents with columns: Title, Type (PRD, Spec, Meeting Notes, Presentation Outline, etc.), Status (Draft, In Progress, Review, Final), Last edited (time + by whom), Linked chats count.

Intended Purpose

Gives users a bird’s-eye view of all documentation for the Instance, separate from the chat interface. This is where users go to manage, organize, and open documents when they’re not in a specific chat.

Why Anyone Should Care

Sometimes you just want to see “what documents exist for this project” without opening any chat. This is the document management hub.

How It Should Be Built

List View:
  • Table with sortable columns: Title, Type, Status, Last Edited, Linked Chats
  • Actions per row: Open, Duplicate, Archive, Delete
Opening from Dashboard:
  • Opens a full-page editor (more space than the in-chat side panel)
  • Document outline sidebar on the left (table of contents based on headings)
  • Editor in the center
  • “Linked Conversations” panel showing which chats contributed content
  • Export options accessible from the top bar
Creating New Documents:
  • “New Live Document” button
  • Choose: Title, Type (document, presentation outline, etc.), initial template (blank, PRD template, spec template, etc.)
  • Document is immediately available in all chats within the Instance

FEATURE 7: Document Chat (Talking to the Document from the Dashboard)

What It Is

A small chat panel anchored to a Live Document when opened from the Instance Dashboard, where all AI prompts are implicitly about “this document.”

What It Does

Users can issue commands like:
  • “Tighten up the wording in section 3.2.”
  • “Add an executive summary at the top.”
  • “Generate slide titles from each H2 and add a ‘Presentation Outline’ section at the bottom.”
  • “Insert a risk table.”
  • “Summarize key decisions in a table.”
The AI processes these commands with the full document as context and applies changes directly.

Intended Purpose

When working on a document from the Dashboard (not from within a specific chat), users still need AI assistance. The Document Chat provides that without requiring the user to navigate to a chat first.

Why Anyone Should Care

This turns the document editor from a passive text editor into an active AI collaboration surface. You can sit in the document and refine it endlessly without switching contexts.

How It Should Be Built

  • Small chat input bar at the bottom of the document editor (or collapsible chat panel on the side)
  • All prompts automatically include the document content as context
  • AI responses are applied as document edits (not shown as chat messages — though a brief “Edit applied” confirmation is appropriate)
  • Each AI edit creates a new version
  • The chat history here is ephemeral (or optionally saved as “Document Edit History”)

FEATURE 8: Export System

What It Is

The ability to export Live Documents to external formats and platforms.

What It Does

Provides multiple export targets and format options for turning the document into a deliverable that can be shared with clients, teams, or stakeholders.

Intended Purpose

Live Documents are internal working artifacts. Exports turn them into polished, shareable deliverables. This is the “last mile” that replaces the Google Docs copy-paste workflow.

Why Anyone Should Care

A document that can’t be exported is trapped in the platform. Export capability makes Live Documents the actual production tool for real deliverables, not just a fancy note-taking feature.

How It Should Be Built

Export Targets:
  1. Google Docs
    • Use the Google Docs API to create a new document and push structured content (headings, lists, tables, images)
    • Optionally store the Google Doc URL back on the LiveDocument record for quick access
    • Requires OAuth connection to Google (user authenticates once)
  2. PDF
    • Render the markdown/blocks to HTML, then convert to PDF (server-side rendering using Puppeteer, wkhtmltopdf, or similar)
    • Apply branding options (header logo, company name, footer text)
    • Apply style preset (Simple PRD, Formal Spec, etc.)
  3. Presentation Format (PowerPoint / Google Slides)
    • Map each H1 or H2 heading to a slide
    • Use the first paragraph/bullets under each heading as slide body
    • AI can propose speaker notes for each slide
    • Export as .pptx or push to Google Slides via API
  4. Markdown Download
    • Raw markdown file download for developers or users who want to import into other tools
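The heading-to-slide mapping in target 3 can be sketched as a pure function. This is a minimal sketch; `outlineToSlides` is an assumed name, and actual .pptx or Google Slides generation would sit on top of its output.

```javascript
// Sketch: map each H1/H2 heading to a slide, with the lines beneath it
// becoming the slide body (per the presentation-export rules above).
function outlineToSlides(markdown) {
  const slides = [];
  let current = null;
  for (const line of markdown.split("\n")) {
    // Match only H1 ("# ") and H2 ("## ") headings; deeper headings
    // fall through into the body of the current slide.
    const heading = line.match(/^#{1,2}\s+(.*)/);
    if (heading) {
      current = { title: heading[1].trim(), body: [] };
      slides.push(current);
    } else if (current && line.trim() !== "") {
      current.body.push(line.trim());
    }
  }
  return slides;
}
```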

FEATURE 9: Layout & Branding Options

What It Is

Document-level settings for controlling the visual appearance of exports.

What It Does

Provides branding and style controls that are applied when the document is exported (not necessarily in the editor itself, though the editor could preview them).

Intended Purpose

Professional deliverables need to look professional. Branding options mean users can produce client-ready documents without post-processing in another tool.

How It Should Be Built

Branding Options (per document or per Instance):
  • Header logo: upload an image
  • Company name: text field
  • Footer text: customizable (e.g., “Confidential – Oxford Pierpont / aiConnected”)
Style Presets:
  • “Simple PRD” — clean, minimal formatting
  • “Formal Spec” — more structured, section numbering
  • “Presentation Outline” — slide-friendly formatting
  • Custom presets can be created later
These are applied at export time. The editor shows the content in a clean, neutral format. Branding is layered on during PDF/Docs/Slides generation.
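Because branding can be set per document or per Instance, export needs a small resolution step in which document-level values override Instance defaults. A minimal sketch (function and field names are assumptions):

```javascript
// Sketch: resolve branding at export time. Null/undefined fields on the
// document fall back to the Instance defaults; set fields win.
// resolveBranding and the field names are illustrative assumptions.
function resolveBranding(instanceBranding, documentBranding) {
  const overrides = Object.fromEntries(
    Object.entries(documentBranding ?? {}).filter(([, v]) => v != null)
  );
  return { ...instanceBranding, ...overrides };
}
```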

FEATURE 10: Rich Content Support

What It Is

Support for non-text content within the document editor.

What It Does

Allows embedding tables, images, code blocks, and callout boxes directly within Live Documents.

Intended Purpose

Real documents aren’t just paragraphs. PRDs have tables comparing features. Specs have code blocks. Business plans have images and callouts. Rich content support makes Live Documents capable of producing professional, complete documents.

How It Should Be Built

  • Tables — insertable via toolbar or AI command (“create a comparison table”)
  • Images — embed from upload, from Files Space, or from AI-generated diagrams
  • Code blocks — syntax-highlighted, language-selectable
  • Callout boxes — styled blocks for Notes, Risks, Decisions, Warnings (visually distinct from body text)

FEATURE 11: Version History

What It Is

A complete history of every change made to the document, with the ability to view and restore previous versions.

What It Does

Shows a timeline of all versions with: version number, timestamp, who made the change (user or AI), and a brief description. Users can view any previous version and restore it if needed.

Intended Purpose

AI edits can sometimes produce bad results. Manual edits can sometimes break things. Version history is the safety net that makes both kinds of editing risk-free.

Why Anyone Should Care

Without version history, users would be afraid to let the AI edit their documents — one bad rewrite could destroy hours of work. Version history removes that fear.

How It Should Be Built

  • Every save (manual or AI) creates a new version entry in LiveDocumentContent
  • “Show previous versions” button in the editor opens a version list
  • Each version shows: version number, timestamp, author (user name or “AI”), diff summary
  • “Preview” opens a read-only view of that version
  • “Restore” replaces the current content with the selected version (and creates a new version entry for the restoration)
  • Version storage can use full snapshots (simple) or diffs (storage-efficient but more complex)
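The full-snapshot option can be sketched as follows (in-memory, with illustrative field names rather than the real LiveDocumentContent schema):

```javascript
// Sketch: every save (manual, AI, or restore) appends a new entry;
// restore never rewrites history, it appends a new version, per the
// rules above. Names are illustrative assumptions.
function createVersion(history, content, author, description) {
  history.push({
    version: history.length + 1,
    content,
    author, // user name or "AI"
    description,
  });
}

function restoreVersion(history, versionNumber, author) {
  const snapshot = history.find((v) => v.version === versionNumber);
  if (!snapshot) throw new Error(`Unknown version ${versionNumber}`);
  // Restoration is itself a new version, so it can also be undone.
  createVersion(history, snapshot.content, author, `Restored v${versionNumber}`);
  return snapshot.content;
}
```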

FEATURE 12: Relationship to Whiteboard

What It Is

The defined boundary and bridge between Live Documents (linear, structured) and the Whiteboard (nonlinear, spatial).

What It Does

Establishes clear use cases for each and defines future bridge actions between them.

Intended Purpose

Users need to understand when to use the Whiteboard vs. when to use a Live Document. They also need the ability to move content between them.

Key Distinctions

  • Whiteboard: nonlinear, spatial layout, great for brainstorming, clustering, concept mapping. Think Miro/Excalidraw.
  • Live Document: linear narrative, organized spec/plan/write-up, ready to send to others as “official” docs. Think Google Docs.

Future Bridge Actions (not v1, but plan for them):

  • From Whiteboard → “Generate Document from selected items” (AI reads selected nodes and produces a structured document)
  • From Document → “Send this section to whiteboard as sticky notes” (breaks a section into visual nodes on the canvas)

FEATURE 13: Relationship to Tasks

What It Is

The integration between Live Documents and the Task system.

What It Does

Allows creating tasks from highlighted text within a document, with the task storing a reference back to the specific document and block/section.

Intended Purpose

Documentation often reveals action items: “we need to research this,” “this section needs data,” “someone should validate this assumption.” Creating tasks from within the document keeps action items tied to their context.

How It Should Be Built

  • Highlight text in the document → context menu shows “Create Task”
  • Task stores: source_type = "document", document_id, block_id (if using block model)
  • In the Tasks panel, clicking the source badge opens the document and scrolls to the relevant section
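The highlight-to-task step can be sketched like this. The stored fields follow the spec (`source_type`, `document_id`, `block_id`); the function name and the title-truncation choice are assumptions.

```javascript
// Sketch: turn highlighted document text into a task record that points
// back to its source document and block, per the bullets above.
function createTaskFromSelection(documentId, blockId, selectedText) {
  return {
    title: selectedText.slice(0, 80), // truncated highlight as the task title
    source_type: "document",
    document_id: documentId,
    block_id: blockId, // null when using the markdown-blob model
  };
}
```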

FEATURE 14: Conversation Referencing & Linking

What It Is

Bidirectional links between Live Documents and the conversations that contributed to them.

What It Does

  • In a chat that has contributed content to a document: shows “This conversation is linked to Documents: [Cognigraph PRD]”
  • In the document: shows “Linked Chats: [Chat A], [Chat B], [Chat C]” with clickable links

Intended Purpose

Users need to trace the provenance of document content back to the original conversations, and from conversations forward to the documents they produced.

Why Anyone Should Care

When reviewing a document section months later and wondering “why did we decide this?”, the linked conversation takes you directly to the original discussion.

How It Should Be Built

  • DocumentMessageLink table tracks all message→document contributions
  • Query this table to produce:
    • Per-document: list of unique conversation_id values → “Linked Chats”
    • Per-conversation: list of unique document_id values → “Linked Documents”
  • Display as clickable badges/links in both the chat UI and the document UI
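Both lists reduce to a de-duplicating query over DocumentMessageLink rows. A minimal sketch, assuming each row carries `document_id` and `conversation_id`:

```javascript
// Sketch: derive both link lists from DocumentMessageLink rows.
// De-duplication matters because one conversation usually contributes
// many messages to the same document.
function linkedChats(links, documentId) {
  const ids = links
    .filter((l) => l.document_id === documentId)
    .map((l) => l.conversation_id);
  return [...new Set(ids)];
}

function linkedDocuments(links, conversationId) {
  const ids = links
    .filter((l) => l.conversation_id === conversationId)
    .map((l) => l.document_id);
  return [...new Set(ids)];
}
```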

FEATURE 15: Collaboration & Multi-Edit Handling

What It Is

Foundational support for multiple editors working on the same document, even though v1 is single-user.

What It Does

Implements auto-save, version history, and optional soft-locking so the system is ready for multi-user editing later.

Intended Purpose

Even in single-user mode, the “user” and the “AI” are effectively two editors. Auto-save and versioning prevent conflicts and data loss. Building with multi-user in mind means less refactoring later.

How It Should Be Built

v1 (single user + AI):
  • Auto-save every N seconds or on change (debounced)
  • Version history on every save
  • AI edits clearly marked in version history
v2 (multi-user, future):
  • Soft-locking: “Bob is editing this document” banner
  • Conflict resolution: last-write-wins with version history as the safety net
  • Eventually: operational transforms (OT) or CRDTs for real-time collaborative editing (like Google Docs)
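The debounced v1 auto-save can be sketched with an injectable scheduler so the logic stays testable; in the browser, `scheduleFn` and `cancelFn` would be `setTimeout` and `clearTimeout`. All names here are illustrative.

```javascript
// Sketch: debounced auto-save. Each change resets the timer, so only a
// quiet period of delayMs actually triggers a save (which would then
// create a version entry, per the v1 bullets above).
function makeAutoSaver(saveFn, delayMs, scheduleFn, cancelFn) {
  let pending = null;
  return {
    onChange() {
      if (pending !== null) cancelFn(pending);
      pending = scheduleFn(() => {
        pending = null;
        saveFn();
      }, delayMs);
    },
  };
}
```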

FEATURE 16: MVP vs Extended Scope

What It Is

A clear delineation of what to build first vs. what to build later.

MVP (Build First)

  • Per-Instance Live Documents table/list
  • Basic text/markdown editor (not block-based yet)
  • In-chat: “Live Docs” panel to open a document alongside the chat
  • In-chat: “Add to Live Document…” action on messages (append + optional summarize)
  • AI editing: “Update the live document…” commands that append new sections or rewrite specific sections by heading name
  • Export: Markdown download + PDF export
  • Simple version history (view + restore)

Extended (Build Later)

  • Block-based content model with precise AI editing per block
  • Presentation export (PowerPoint / Google Slides)
  • Google Docs sync (push to Docs, store URL back)
  • Whiteboard ↔ Live Document bridges
  • Task creation from highlighted document content
  • Rich branding/layout options for exports
  • Fine-grained permissions and multi-user collaborative editing
  • Document templates (PRD template, Spec template, etc.)

Key Implementation Principles

  1. Instance-scoped, chat-agnostic — Live Documents belong to the Instance. Any chat in the Instance can open and edit them. Never tie a document to a single chat.
  2. Source traceability is non-negotiable — always store DocumentMessageLink records so every piece of content can be traced back to its origin conversation and message.
  3. Version everything — every human and AI edit creates a version. This is the safety net for the entire feature.
  4. Start with markdown, plan for blocks — v1 stores content as a single markdown blob. But design the schema and API so migrating to a block model later is straightforward.
  5. The document is an AI context — when a Live Document is open, the AI should have its content (or relevant sections) in context. This is what enables natural-language document editing.
  6. Export is the payoff — Live Documents only matter because they can be exported as real deliverables. If the export system is bad, the whole feature feels pointless. Invest in clean PDF and Google Docs export from day 1.
  7. Two editing streams, one document — manual editing and AI editing coexist on the same document. Both create versions. Neither should block the other.

Document 4: Folder System Design — Complete Feature Breakdown

For Junior Developers New to the aiConnected OS Project


What This Document Covers

This document defines the Folder System — an optional organizational layer within Instances that lets users group chats, files, and content into named sub-domains, each with their own instructions, persona defaults, and behavioral settings. Folders sit between the Instance level and the individual Chat level in the hierarchy, and they share the Instance’s memory while providing specialized context.

Context: The Problem Folders Solve

Imagine you’re working on a large project like “aiConnected.” Over time, you accumulate dozens of conversations: some about UI design, some about hiring, some about marketing, some about the technical architecture. Without folders, all these chats live in one flat list inside the Instance. You can’t separate them, you can’t give them different instructions, and you can’t quickly filter to “show me only UI conversations.” Folders solve this by creating sub-domains within an Instance — like departments within a company. Each folder can have its own behavioral rules, but they all share the same underlying memory and knowledge.

Context: Where Folders Fit in the Hierarchy

Platform
  └── Instance (e.g., "aiConnected")
        ├── Whiteboard (one per Instance, shared across all folders)
        ├── No Folder (root-level chats — the default)
        └── Folders
              ├── "User Interface & UX" (folder)
              │     ├── Chat: "Persona creation modal design"
              │     └── Chat: "Dashboard layout v2"
              ├── "Hiring & Teams" (folder)
              │     ├── Chat: "Sales team tier structure"
              │     └── Chat: "Onboarding flow for SDRs"
              └── "Marketing & Sales" (folder)
                    └── Chat: "GTM narrative v1"
Key principle: Folders are inside Instances, above Chats, and below the Whiteboard. There is only one Whiteboard per Instance, regardless of how many folders exist.

FEATURE 1: What a Folder Actually Is

What It Is

A named container within an Instance that holds chats and files. Each folder can have its own settings, instructions, default persona, and default model — essentially everything an Instance has EXCEPT a Whiteboard.

What It Does

Groups related conversations and files together, applies folder-specific behavioral rules to conversations within it, and provides organizational structure for large projects.

Intended Purpose

Lets users separate different workstreams within a single project without creating entirely separate Instances. A user working on “aiConnected” can have a folder for UI design, a folder for hiring, and a folder for marketing — all sharing the same project memory but with different AI behavioral instructions.

Why Anyone Should Care

Without folders, large projects become unmanageable. A flat list of 50+ conversations is chaos. Folders bring order without sacrificing the unified memory that makes an Instance powerful.

How It Should Be Built

Folder Properties:
Folder {
  id: string
  instance_id: string          // FK to parent Instance
  name: string                 // e.g., "User Interface & UX"
  description?: string         // Optional, e.g., "All conversations related to chat UI, dashboard, and persona controls"
  icon?: string                // For visual scanning in the sidebar
  color?: string               // Color coding
  instructions?: string        // Folder-specific system prompt / behavioral rules
  default_persona_id?: string  // Default persona for new chats in this folder
  default_model?: string       // Default AI model for this folder
  default_tools?: string[]     // Default tool/integration set
  created_at: DateTime
  updated_at: DateTime
}
Key Rule: Folders have almost everything an Instance has, except:
  • ❌ No folder-level Whiteboard (the Whiteboard stays one per Instance, above everything)
  • ❌ No separate memory space (folders share the Instance’s memory)

FEATURE 2: Folders Are Strictly Optional

What It Is

A core design principle: folders are never required. Users can use folders, not use them, or use a mix.

What It Does

Ensures that users who don’t want organizational overhead can ignore folders entirely and still have a fully functional experience.

Intended Purpose

Prevents the platform from feeling like a project management tool. Casual users should never be forced to create folders. Power users who need organization can opt in.

Why Anyone Should Care

Many AI chat platforms fail because they impose structure on users who just want to talk. By making folders optional, aiConnected works for both casual users and power users.

How It Should Be Built

Under the hood:
  • Every chat has an optional folder_id field
  • If folder_id = null → the chat lives at the “root” of the Instance (called “No Folder”)
  • If folder_id = some_id → the chat lives inside that folder
Three valid configurations:
  1. Only chats, no folders — everything lives at root. The Instance feels like a simple chat list.
  2. Only folders — every chat is organized into a folder. The Instance feels like a project with departments.
  3. A mix — some chats in folders, some loose at root. The most common real-world usage.
UI Rule: If a user never creates a folder, the folder UI should be invisible or minimal. Folders only become prominent once the user creates their first one.

FEATURE 3: Instruction & Context Inheritance (The Stacked Instructions Model)

What It Is

A layered system where AI behavioral instructions cascade from platform level down through Instance, Folder, and Chat levels, with each layer able to extend or override the one above it.

What It Does

When the AI responds to a message inside a chat, it assembles its behavioral instructions by stacking multiple layers:
  1. Global system / platform rules (safety, core behavior) — always present
  2. Instance-level instructions (e.g., “You are working on aiConnected, an AI automation marketplace…”)
  3. Folder-level instructions (e.g., “In this folder, prioritize UX clarity and React/Tailwind patterns…”)
  4. Chat-level instructions (e.g., “In this chat, we are only working on the persona dropdown behaviors”)
  5. Message-level modifiers (e.g., “Right now, think like a skeptical investor”)

Intended Purpose

This is how folders avoid “tainting” each other. The UI folder has different instructions than the Hiring folder, even though they’re in the same Instance. Each folder specializes the AI’s behavior for its domain.

Why Anyone Should Care

This is the core value proposition of folders. Without instruction inheritance, folders would just be visual grouping — nice but not powerful. With it, each folder genuinely changes how the AI behaves, making it more useful for that specific workstream.

How It Should Be Built

For root-level chats (no folder):
Context stack: Platform → Instance → Chat
For folder chats:
Context stack: Platform → Instance → Folder → Chat
Conflict resolution rules:
  • Lower layers can extend or override higher layers on specific fields
  • Example: Instance says “Talk in warm, professional tone.” Folder says “In this folder, be more technical and concise.” Result: technical + concise wins within that folder.
  • If a lower layer doesn’t specify something, the higher layer’s value is inherited
Optional UI: “Context Stack” panel:
  • Accessible from chat settings or a debug/transparency view
  • Shows which instructions are active at each layer
  • Shows what’s being overridden (e.g., tone, priority, tools)
  • Helps power users understand and debug AI behavior
Technical Implementation: When assembling the system context for an AI call:
1. Load instance.instructions
2. Load folder.instructions (if chat is in a folder)
3. Load chat.custom_instructions (if any)
4. Merge them in precedence order
5. Send merged context as the system prompt
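One possible shape for the merge in step 4, as a minimal sketch: later (more specific) layers are appended last, so the model reads them as overriding earlier guidance. The labeling scheme is an assumption, not a spec'd format.

```javascript
// Sketch: merge stacked instruction layers into one system prompt.
// Missing layers (e.g., a root chat with no folder) are simply skipped.
function mergeInstructions(layers) {
  return layers
    .filter((text) => Boolean(text))
    .map((text, i) => `### Instruction layer ${i + 1}\n${text}`)
    .join("\n\n");
}
```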

FEATURE 4: Memory & Retrieval Across Folders

What It Is

The rules for how the AI’s memory (knowledge retrieval) works when the user is inside a folder.

What It Does

Folders do NOT wall off memory. The AI can still access knowledge from any chat in the Instance, regardless of which folder it’s in. However, it biases retrieval toward the current folder first.

Intended Purpose

Users expect that being in the “UI” folder doesn’t make the AI forget about decisions made in the “Marketing” folder. Memory is Instance-wide. Folders only change which memories are looked at first, not which memories are accessible.

Why Anyone Should Care

If folders created memory silos, they would break the “unified cognition” that makes Instances powerful. The bias-not-wall approach preserves the value of having everything in one Instance while still making folder-scoped conversations more relevant.

How It Should Be Built

Retrieval logic (priority order):
  1. Prioritize: Chats + artifacts in the current folder first
  2. Expand: If relevant info isn’t found locally, automatically widen search to all folders within the same Instance
  3. Mark the origin: When citing past work, show where it came from:
    • “Found related spec in: aiConnected → Marketing → GTM Narrative v1”
Technical Implementation:
  • Index all chats/messages at the Instance level in the vector store / knowledge graph
  • Use folder_id as a boosting factor when scoring relevance (not a filter)
  • This means folder context is always preferred, but Instance-wide knowledge is never excluded

FEATURE 5: Sidebar UI & Navigation

What It Is

How folders appear in the Instance’s sidebar navigation and how users interact with them.

What It Does

Shows the folder hierarchy in the left sidebar of the Instance view, with collapsible folder sections, chat lists under each folder, and a “No Folder” section for root-level chats.

Intended Purpose

Makes folder navigation feel natural and lightweight — similar to a file explorer, but without the heaviness of a project management tool.

How It Should Be Built

Sidebar layout within an Instance:
aiConnected
  > Whiteboard
  > All Chats (combined view)
  > No Folder
      - Chat: "Brainstorm engine pricing"
      - Chat: "Random idea dump"
  > Folders
      - User Interface & UX
          - Chat: "Persona creation modal"
          - Chat: "Dashboard layout v2"
      - Hiring & Teams
          - Chat: "Sales team tier structure"
      - Marketing & Sales
          - Chat: "GTM narrative v1"
      - Cognigraph Architecture
          - Chat: "Memory model design"
  > Unsorted / Inbox
Quick actions (right-click / kebab menu on a chat):
  • “Move to folder…”
  • “Link to other chat…”
  • “Add to whiteboard”
Quick actions (right-click on a folder):
  • “Edit folder settings”
  • “Duplicate folder settings to another folder”
  • “Create chat with these folder defaults”
Linked chats across folders:
  • From any chat, user can “Link existing chat…” → search any chat in the Instance → link it
  • UI surfaces: “Linked: [Market Research – ICP] (Marketing folder)”
  • User can click the link to jump to that chat and come back

FEATURE 6: Whiteboard Integration with Folders

What It Is

How the single Instance-level Whiteboard interacts with folder-organized content.

What It Does

The Whiteboard remains one per Instance (no folder-level whiteboards), but every item pinned to the Whiteboard carries metadata about which folder (and chat) it came from. The Whiteboard supports filtering by folder origin.

Intended Purpose

Lets users see “just the UI stuff” on the Whiteboard without drowning in marketing or hiring content, while still maintaining one unified canvas.

How It Should Be Built

Every whiteboard item stores:
origin_instance: string
origin_folder?: string    // null if from root-level chat
origin_chat: string
Whiteboard filter options:
  • By folder: “Show only items from User Interface & UX”
  • By multiple folders: “Show items from UI and Cognigraph”
  • All: “Show everything” (default)
Moving chats between folders does NOT remove whiteboard items. The whiteboard items keep their original origin_chat_id and simply update the displayed folder context.

FEATURE 7: Moving Chats In and Out of Folders

What It Is

The ability to move individual chats between folders, or between a folder and root level.

What It Does

Changes a chat’s folder_id, which affects which folder-level instructions apply to future messages. Moving is non-destructive — no content is lost, no memories are deleted.

Intended Purpose

Users change their minds. A chat that started as a general brainstorm might later clearly belong in the “UI” folder. Moving should be trivial.

Why Anyone Should Care

If moving chats between folders is hard, users won’t organize at all. It needs to be as easy as drag-and-drop or a single menu action.

How It Should Be Built

From any chat’s context menu:
  • “Move to folder…” → folder picker (searchable list + “No Folder” option + “Create new folder”)
  • “Remove from folder (send to No Folder)”
Behavioral change on move:
  • Moving INTO a folder: future turns inherit the folder’s instructions
  • Moving OUT of a folder: future turns lose the folder’s instructions, revert to Instance-only
  • Past messages are NOT affected (they were generated under the old instructions)
Technical: Simply update chat.folder_id. No content migration needed.

FEATURE 8: New Chat Creation Flow

What It Is

How the “New Chat” button works in the context of folders.

What It Does

When creating a new chat inside an Instance, the user can choose where it lives: in the currently selected folder, in a different folder, or at root (No Folder).

Intended Purpose

Makes chat creation context-aware without being burdensome. If you’re browsing the “UI” folder and click “New Chat,” it defaults to creating in that folder.

How It Should Be Built

When clicking “New Chat” in an Instance:
  • If user is currently viewing a specific folder: new chat defaults to that folder
  • If user is viewing “All Chats” or “No Folder”: new chat defaults to root
  • A small dropdown or toggle lets the user choose a different location before creating:
    • “No Folder”
    • “User Interface & UX”
    • “Hiring & Teams”
    • etc.
If the user never touches folders: everything auto-creates under “No Folder” (root). The experience is identical to a folder-free Instance.

FEATURE 9: Bulk Move — Multi-Select Chats & Files to Folders

What It Is

The ability to select multiple chats or files at once and move them to an existing folder or a newly created folder in one operation.

What It Does

Enters a “selection mode” where checkboxes appear on each item. Users select items, click “Move,” and choose a destination (existing folder, new folder, or root). All selected items are moved in one operation.

Intended Purpose

When a user decides to organize 15 scattered chats into a new “Cognigraph” folder, they shouldn’t have to move them one at a time. Bulk move makes large-scale organization fast.

Why Anyone Should Care

Without bulk move, folder adoption will be low. Users will think “it’s too tedious to organize” and give up. Bulk move makes organization effortless.

How It Should Be Built

Selection Mode:
  1. User clicks “Select” / “Manage” button in the chat list or file list
  2. Checkboxes appear on every row
  3. User can: click individual checkboxes, Shift-click to select a range, “Select all” for current filtered view
  4. A bulk action bar appears (sticky bottom bar): Selected count | Move | Delete | Cancel
Move Flow:
  1. User clicks “Move” in the bulk action bar
  2. Modal opens: “Move items”
  3. Step 1: Choose destination type:
    • “Existing folder” — shows searchable dropdown of folders + “No Folder (root)”
    • “New folder…” — expands to show: Folder name, Optional description, Optional advanced settings (default persona, default model)
  4. Step 2: Confirm
    • “Move 12 chats” button
  5. System moves all items atomically (all succeed or none succeed)
After moving:
  • Toast notification: “Moved 7 chats to ‘Cognigraph Architecture’” with a clickable link to that folder
  • Folder sidebar updates with new count
  • Optional “Undo” button in the toast
API Endpoints:
POST /instances/{id}/chats/bulk-move
  Body: { chat_ids: [...], target_folder_id: "..." | null }

POST /instances/{id}/files/bulk-move
  Body: { file_ids: [...], target_folder_id: "..." | null }

POST /instances/{id}/bulk-move-to-new-folder
  Body: { type: "chats" | "files", item_ids: [...], folder_name: "...", folder_settings: { ... } }
Atomicity Rule: “Create folder + move items” must be treated as one atomic operation. If any step fails, the whole operation is rolled back: the folder is not created and no items are moved.
Edge Cases:
  • Items from different folders can be selected and moved together — the move just reassigns all their folder_id values
  • Moving chats does NOT remove their whiteboard items — whiteboard items keep their original origin_chat_id
  • Moving chats from root to a folder, folder to root, or folder A to folder B all use the same flow
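Under the atomicity rule, an in-memory version of the "create folder + move" flow validates every item before mutating anything. This is a sketch: a real implementation would use a database transaction, and the `db` shape and all names are assumptions.

```javascript
// Sketch: atomic "create folder + bulk move". Every chat is validated
// BEFORE anything mutates, so a failure leaves the store untouched
// (no folder created, no items moved).
function bulkMoveToNewFolder(db, instanceId, chatIds, folderName) {
  const chats = chatIds.map((id) => db.chats.find((c) => c.id === id));
  if (chats.some((c) => !c || c.instance_id !== instanceId)) {
    throw new Error("Bulk move rejected: unknown chat or wrong instance");
  }
  const folder = {
    id: `folder_${db.folders.length + 1}`,
    instance_id: instanceId,
    name: folderName,
  };
  db.folders.push(folder);
  for (const chat of chats) chat.folder_id = folder.id;
  return folder;
}
```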

FEATURE 10: Search, Filtering, and Cross-Folder References

What It Is

How search, filtering, and chat linking work in the context of folders.

What It Does

Ensures that folders enhance organization without breaking discoverability. Search runs across all folders by default, with optional folder-scoped filtering. Chat links work across folder boundaries.

Intended Purpose

Folders should never hide content. Users should be able to find anything in the Instance regardless of which folder it’s in, and link conversations across folders freely.

How It Should Be Built

Search / Retrieval:
  • Default: searches all chats in the Instance regardless of folder
  • Filter options: by entire Instance, by specific folder, by “No Folder” only
Whiteboard:
  • One per Instance
  • Items can come from folder chats or root chats
  • Filter whiteboard items by origin folder or show everything
Chat linking / references:
  • Cross-folder linking is fully supported
  • Example: link a root-level brainstorm chat to a formal spec in the UI folder
  • UI shows: “Linked: [Initial brainstorm] (No Folder)” / “Linked: [UI State Machine Spec] (User Interface & UX)”
  • Folder boundaries do NOT restrict linking

FEATURE 11: Data Model & Architecture

What It Is

The database schema and API structure for the folder system.

How It Should Be Built

Database Tables:
instances {
  id: string
  name: string
  instructions: string
  settings: JSON
}

folders {
  id: string
  instance_id: string (FK → instances)
  name: string
  description?: string
  instructions?: string
  settings: JSON (default model, default tools, etc.)
  icon?: string
  color?: string
  created_at: DateTime
  updated_at: DateTime
}

chats {
  id: string
  instance_id: string (FK → instances)
  folder_id?: string (FK → folders, nullable for "No Folder" / root)
  title: string
  custom_instructions?: string
  created_at: DateTime
  updated_at: DateTime
}

chat_links {
  id: string
  chat_id: string (FK → chats)
  linked_chat_id: string (FK → chats)
  relationship_type: "continued-from" | "related-to" | "branched-from"
}
Context Assembly (when calling the AI):
function assembleContext(chat) {
  const context = [];
  
  // Layer 1: Platform rules (always)
  context.push(PLATFORM_RULES);
  
  // Layer 2: Instance instructions
  const instance = getInstance(chat.instance_id);
  context.push(instance.instructions);
  
  // Layer 3: Folder instructions (if in a folder)
  if (chat.folder_id) {
    const folder = getFolder(chat.folder_id);
    context.push(folder.instructions);
  }
  
  // Layer 4: Chat-level instructions
  if (chat.custom_instructions) {
    context.push(chat.custom_instructions);
  }
  
  // Merge with precedence (lower layers override higher on conflicts)
  return mergeInstructions(context);
}
Retrieval Boosting:
function retrieveMemory(query, chat) {
  const results = searchInstanceMemory(chat.instance_id, query);
  
  // Boost results from same folder
  if (chat.folder_id) {
    results.forEach(result => {
      if (result.folder_id === chat.folder_id) {
        result.score *= 1.5; // Boost same-folder results
      }
    });
  }
  
  return results.sort((a, b) => b.score - a.score);
}

FEATURE 12: Real-World Usage Examples

What It Is

Concrete examples showing how folders work in practice, to help developers understand the intended user experience.

Example 1: “User Interface & UX” Folder

Folder instructions:
  • “Prioritize UX clarity, React/Tailwind patterns, coherence of chat + dashboard.”
  • “Avoid deep dives into sales comp models unless explicitly asked.”
Default tools: Figma integration, Code snippets, Component library
Typical chats: “Design the Persona creation modal,” “Layout for the Conversation Reference panel,” “Folder sidebar interactions and animations”

Example 2: “Hiring & Teams” Folder

Folder instructions:
  • “Prioritize role definitions, compensation design, and scaling sales teams.”
  • “Don’t drift into UI details; keep it people/process focused.”
Default tools: Org chart generator, Offer letter templates, Commission plan calculators
Typical chats: “Tier 0–5 sales comp restructure,” “Onboarding flow for Tier 1 SDRs,” “KPIs & dashboards for VP of Sales”
Key Observation: Same Instance, shared memory, but very different AI behaviors because of folder-level instructions. A question about “how should we structure the sales team?” in the Hiring folder gets a completely different response style than the same question asked in the UI folder.

FEATURE 13: Design Principle (For the PRD)

What It Is

The formal design principle that should be included in any PRD or technical spec to prevent misinterpretation during implementation.

The Principle

Folders are an optional organizational layer within an instance.
  • Chats MAY be assigned to a folder, but are not required to be.
  • Chats with no folder assignment are treated as root-level “No Folder” chats.
  • All chats in an instance share the same memory space, regardless of folder, with retrieval optionally biased toward the current folder but never restricted to it.
  • Folder-level instructions apply only to chats inside that folder and never to root chats.
  • Users can go full folders, no folders, or hybrid, and the cognition still behaves like one unified brain for the instance.

Key Implementation Principles

  1. Folders are optional, never mandatory — a user who never creates a folder should have a perfectly clean experience with no folder UI clutter.
  2. Memory is Instance-wide, not folder-scoped — folders bias retrieval but never wall off knowledge. The AI in the UI folder can still access marketing decisions.
  3. Instruction inheritance is the power feature — Platform → Instance → Folder → Chat. Each layer extends or overrides the one above. This is what makes folders genuinely useful, not just visual grouping.
  4. Moving is cheap and non-destructive — changing a chat’s folder_id changes its future instructions but preserves all content, memory, and whiteboard links.
  5. Bulk move is essential for adoption — if moving items one-at-a-time is the only option, users won’t organize. Multi-select + move is a must-have for v1.
  6. Atomic operations — “create folder + move items” is one operation. No partial states.
  7. Cross-folder linking is unrestricted — folder boundaries never prevent linking, referencing, or searching across the Instance.

Document 6: Chat Filters & Linked Conversations

Junior Developer Breakdown

Source: 6. aiConnected OS Chat filters and linked conversations.md
Purpose: In-chat filtering and conversation relationship system enabling users to navigate long conversations efficiently and maintain connections between related chats when topics branch.
Problems Solved:
  • Scroll collapse in long conversations
  • Lost context when topics branch
  • Disconnected conversation threads
  • No way to find specific content types within a chat

FEATURE 1: Multi-Select Filter Bar

What it does: Top-of-chat pill-style toggle chips for filtering visible messages.
Filter Chips:
  • All — mutually exclusive with other chips; shows everything
  • Sent — user’s messages only
  • Received — AI’s messages only
  • Pinned — only pinned messages
  • Links — messages containing URLs
  • Media — messages with attachments (images, audio, video, files)
  • Search — opens inline search field
Combination Logic:
  • Sent/Received/Pinned/Links/Media are multi-select (AND logic)
  • When any chip selected, “All” turns off
  • Examples:
    • Sent + Pinned → only user’s pinned messages
    • Received + Links → only AI messages containing URLs
    • Pinned + Links + Media → messages that are pinned AND contain links AND have media
Build Notes:
  • Horizontally scrollable on mobile
  • Chips should be visually distinct when active vs inactive
  • “All” resets everything when clicked

FEATURE 2: Message Metadata for Filtering

What it does: Extends the ChatMessage model with filterable metadata fields.
Data Model Extensions:
type ChatMessage = {
  id: string;
  chatId: string;
  role: 'user' | 'assistant' | 'system';
  content: string;
  createdAt: string;
  isPinned: boolean;
  pinnedAt?: string | null;
  hasLinks: boolean;        // derived from URL scanning
  hasMedia: boolean;        // derived from attachments
  mediaTypes?: ('image' | 'audio' | 'video' | 'file')[];
};
Build Notes:
  • hasLinks can be computed by scanning content for URL patterns
  • hasMedia derived from attachment metadata
  • Can be computed on-the-fly or persisted for performance in long threads
  • Enables fast filtering without scanning full message content each time
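The derivation described in the build notes can be sketched as follows. The `Attachment` shape, the function name, and the URL regex are illustrative assumptions, not part of the spec:

```typescript
// Sketch: compute filter metadata once (at save time or lazily) so the
// filter bar never has to re-scan message content.
type Attachment = { kind: 'image' | 'audio' | 'video' | 'file' };

// Simple heuristic pattern; real URL detection may need more care.
const URL_PATTERN = /https?:\/\/[^\s]+/i;

function deriveFilterMetadata(content: string, attachments: Attachment[]) {
  return {
    hasLinks: URL_PATTERN.test(content),
    hasMedia: attachments.length > 0,
    // Deduplicate kinds so mediaTypes stays small.
    mediaTypes: Array.from(new Set(attachments.map(a => a.kind))),
  };
}
```

These three fields would then be persisted alongside the message (or memoized per message) so chip filtering stays O(1) per message.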

FEATURE 3: Search Integration

What it does: Inline search field that narrows results within the currently filtered set.
Search Pipeline:
  1. All messages → apply filter chips → apply search query
  2. Case-insensitive substring match against message content
  3. Optionally matches filenames and alt text
Key Behaviors:
  • Search acts as further narrowing on already-filtered set
  • Can search within Pinned only, within Sent only, etc.
  • Clearing search returns to filter-chip result
  • Closing search clears query and returns to normal filtered view
  • Search field appears inline when Search toggle clicked (not as modal)

FEATURE 4: Filter State Management

What it does: Client-side state model that controls the filter pipeline.
State Model:
type ViewFilter = {
  sent: boolean;
  received: boolean;
  pinned: boolean;
  links: boolean;
  media: boolean;
  searchQuery: string;
  mode: 'all' | 'custom';
};
State Rules:
  • “All” button sets mode='all' and all chip booleans to false
  • Clicking any chip sets mode='custom' and “All” visual state turns off
  • Failsafe: if all chips false and search empty in custom mode, revert to mode='all' to prevent empty view
Filter Function Pipeline:
  1. Apply role filters (sent/received)
  2. Apply metadata filters (pinned/links/media)
  3. Apply search narrowing
Build Notes:
  • All filter state is client-side for instant response
  • Filters are non-destructive views over same conversation data
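The three-stage pipeline above can be sketched as one pure function. One interpretive assumption: since a message has exactly one role, selecting both Sent and Received (or neither) is treated as "no role restriction" rather than a literal AND, which would always be empty:

```typescript
// Minimal client-side sketch of the filter pipeline.
// Field names mirror the ChatMessage/ViewFilter models in this document.
type Msg = {
  role: 'user' | 'assistant';
  content: string;
  isPinned: boolean;
  hasLinks: boolean;
  hasMedia: boolean;
};

type ViewFilter = {
  sent: boolean; received: boolean;
  pinned: boolean; links: boolean; media: boolean;
  searchQuery: string;
  mode: 'all' | 'custom';
};

function applyFilters(messages: Msg[], f: ViewFilter): Msg[] {
  if (f.mode === 'all') return messages;
  return messages
    // 1. Role filters: both-or-neither chip means no role restriction.
    .filter(m => f.sent === f.received
      ? true
      : (f.sent ? m.role === 'user' : m.role === 'assistant'))
    // 2. Metadata filters are AND-combined, as the chip logic specifies.
    .filter(m => (!f.pinned || m.isPinned)
      && (!f.links || m.hasLinks)
      && (!f.media || m.hasMedia))
    // 3. Search narrows the already-filtered set (case-insensitive substring).
    .filter(m => m.content.toLowerCase().includes(f.searchQuery.toLowerCase()));
}
```

Because the function is pure over client-side state, toggling a chip is just a re-render, with no server round trip.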

FEATURE 5: Linked Conversations (Conversation Graph)

What it does: Creates navigable relationships between related chats when users branch conversations.
Trigger Actions:
  • “Move to new chat” (with selected messages)
  • “Start new chat from selection”
Data Model:
type ConversationLink = {
  id: string;
  fromChatId: string;
  toChatId: string;
  originMessageIds: string[];
  createdAt: string;
  label?: string;
};
Concept:
  • Every chat = node in a graph
  • Every branch = link (edge) between nodes
  • Enables navigation between related conversations while maintaining clean topic separation
  • Links are bidirectional — both chats know about the relationship

FEATURE 6: Branch Indicators and Navigation

What it does: Visual indicators showing where conversations branched and how to navigate between them.
In Original Chat:
  • Selected messages that spawned new chat get subtle link indicator icon
  • Tooltip: “Branched chat: [name]”
  • Clicking navigates to the branched chat
In New (Branched) Chat:
  • Banner at top: “Branched from ‘[original chat name]’ based on N messages”
  • [View in original chat] button
  • Clicking highlights origin messages in original chat

FEATURE 7: Linked Conversations Menu

What it does: Chat header menu showing the full relationship tree for a conversation.
Menu Shows:
  • Parent chat (if branched from another)
  • Child chats (if others branched from this one)
  • Sibling chats (other branches from same parent)
Each Entry Displays:
  • Chat name
  • Branch date
  • Origin message count
Navigation:
  • Click any entry → navigate to that chat
  • Supports conversation chains: Chat A → Chat B → Chat C
  • From Chat C, user can see parent (B) and grandparent (A) as “related via chain”
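The parent/children/siblings entries can be derived from the flat list of ConversationLink edges. The helper below is a hypothetical sketch assuming the direction fromChatId → toChatId means "parent → branched child":

```typescript
// Sketch: derive the Linked Conversations menu entries for one chat
// from the ConversationLink edge list.
type ConversationLink = {
  id: string;
  fromChatId: string;
  toChatId: string;
};

function relatives(links: ConversationLink[], chatId: string) {
  // A chat is branched from at most one parent in this model.
  const parentLink = links.find(l => l.toChatId === chatId);
  const parent = parentLink?.fromChatId ?? null;
  // Children: chats branched out of this one.
  const children = links.filter(l => l.fromChatId === chatId).map(l => l.toChatId);
  // Siblings: other branches off the same parent.
  const siblings = parent === null
    ? []
    : links
        .filter(l => l.fromChatId === parent && l.toChatId !== chatId)
        .map(l => l.toChatId);
  return { parent, children, siblings };
}
```

Chains (Chat A → Chat B → Chat C) fall out of repeated application: call `relatives` on the parent to walk toward the grandparent.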

FEATURE 8: Bulk Operations with Filters

What it does: Enables moving entire filtered message sets to new chats or Workspace.
Action: “Move visible messages to new chat” or “Move visible to Workspace”
Flow:
  1. User applies filters (e.g., Pinned + Received + Search="Cognigraph")
  2. Clicks “Move visible messages to new chat”
  3. System receives messageIds[] list (the visible filtered set)
  4. Creates new chat or Workspace components
  5. Establishes ConversationLink with those specific message IDs as origin context
Key Principle: Enables sweeping entire filtered slice into new conversation or workspace in one action.
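A minimal server-side sketch of the atomic "create chat + link" step (flow steps 4-5). The ID generation and in-memory shapes are placeholder assumptions; in production both records would be written in one database transaction:

```typescript
// Sketch of the handler behind POST /chats/branch: create the new chat
// and its ConversationLink together, never one without the other.
type ConversationLink = {
  id: string;
  fromChatId: string;
  toChatId: string;
  originMessageIds: string[];
  createdAt: string;
};

let nextId = 0;
const newId = (prefix: string) => `${prefix}_${++nextId}`; // placeholder ID scheme

function branchChat(fromChatId: string, messageIds: string[], title: string) {
  const chat = { id: newId('chat'), title };
  const link: ConversationLink = {
    id: newId('link'),
    fromChatId,
    toChatId: chat.id,
    originMessageIds: messageIds, // the visible filtered set from the client
    createdAt: new Date().toISOString(),
  };
  return { chat, link }; // persist both in one transaction
}
```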

API Endpoints

Method | Endpoint | Purpose
GET | /chats/:chatId/messages?filters={...}&search={query} | Filtered message retrieval
POST | /chats/:chatId/messages/pin | Pin a message (body: messageId)
POST | /chats/branch | Create new chat + link (body: fromChatId, messageIds[], title)
GET | /chats/:chatId/links | Get all ConversationLink objects for chat
POST | /chats/:chatId/messages/move | Move messages (body: messageIds[], targetChatId or targetWorkspaceId)

User Flows

Flow 1 — Filter to specific content: Click Received + Pinned → see only AI’s pinned responses → search within that subset → export filtered results
Flow 2 — Branch conversation: Select last 2 messages starting new topic → “Move to new chat” → new chat created with those messages as seed → both chats show link indicators → navigate back and forth
Flow 3 — Curate for workspace: Filter to Pinned + Received + Search="architecture" → “Move visible to Workspace” → all matching messages become Workspace components with section grouping
Flow 4 — Navigate conversation lineage: In deeply branched chat → open “Linked conversations” → see parent, grandparent, siblings → click to navigate → understand full conversation evolution

Implementation Principles

  1. Filters are non-destructive views over the same conversation data
  2. All filter state is client-side for instant response
  3. Message metadata (hasLinks, hasMedia) can be computed or cached depending on performance needs
  4. Linked conversations create bidirectional relationships — both chats know about the link
  5. ConversationLink stores specific message IDs that formed the branch for precise traceability
  6. Filter bar should be horizontally scrollable on mobile
  7. Search field appears inline when Search toggle clicked, not as modal
  8. Filters enable powerful workflows: filter to specific content type → export/move/analyze that subset → maintain connection to original context through links

Document 7: Pin Message Feature & Instance Whiteboard

Junior Developer Breakdown

Source: 7. aiConnected OS Pin message feature.md
Purpose: Evolving design from simple message pinning → chat filters → Workspace concept → full spatial Whiteboard canvas. This document traces the complete design journey from “I can’t find important messages” to “each instance has an infinite canvas for organizing and transforming ideas.”
Key Insight: This document shows how one user pain point (losing important messages in long chats) cascaded into three interconnected systems: pinning, filtering (see Doc 6), and the Whiteboard.
Cross-References:
  • Doc 5 covers the Whiteboard as a Dashboard tab (Board integration, compile panel)
  • Doc 6 covers the filter system in detail (chips, state, search)
  • This doc is the origin story for both, plus the Workspace concept

FEATURE 1: Pin Message Core Behavior

What it does: Lets users mark specific messages as “important” during long conversations and quickly view/export only those.
Pin Interaction:
  • Every message (user + AI) has a pin icon
  • Desktop: pin icon visible in message header row (or on hover)
  • Mobile: always visible, or appears on long-press → “Pin message” in actions sheet
  • States: Unpinned (pin outline) → Pinned (solid pin)
  • Click pin → pinned. Click again → unpinned. Saved immediately (no extra “Save” step)
  • Pins are per conversation (scoped to the chat, not global)
Data Model Extension:
type ChatMessage = {
  // ...existing fields
  isPinned: boolean;
  pinnedAt?: string | null;  // optional: for pin-time sorting
};
Build Notes:
  • pinnedAt enables sorting pinned messages by pin time vs message time — chronological by message time is usually better for narrative flow
  • Pin toggle fires: PATCH /chats/:chatId/messages/:messageId { isPinned: true | false }
  • Or: POST /chats/:chatId/messages/:messageId/pin { pinned: true | false }
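The immediate-save toggle can be sketched as a pure state update; the PATCH/POST call above would follow it (optimistically or awaited). Field names mirror the data model extension; the helper name is an assumption:

```typescript
// Sketch: flip pin state and maintain pinnedAt in one place.
type PinnableMessage = { id: string; isPinned: boolean; pinnedAt?: string | null };

function togglePin(
  message: PinnableMessage,
  now: string = new Date().toISOString(),
): PinnableMessage {
  const isPinned = !message.isPinned;
  return {
    ...message,
    isPinned,
    // pinnedAt is set on pin, cleared on unpin (enables pin-time sorting).
    pinnedAt: isPinned ? now : null,
  };
}
```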

FEATURE 2: Pinned Messages View (Toggle Mode)

What it does: The chat view has two modes — show everything, or show only pinned highlights.
Access Point: “Pinned” button in chat top bar, alongside other filter chips (see Doc 6 for full filter system).
Render Logic:
const visibleMessages = viewMode === 'all'
  ? messages
  : messages.filter(m => m.isPinned);
Empty State: If the user toggles to Pinned and there are none: “No pinned messages yet. Click the 📌 icon on any message to save it here.”
Edge Cases:
  • Regeneration: If a pinned AI message is regenerated, the pin stays on that message slot — new content replaces old, pin persists
  • Mobile: Same toggle at top of chat. Long-press message → “Pin / Unpin message”

FEATURE 3: Export from Pinned/Filtered View

What it does: When viewing filtered messages (pinned, sent, received, etc.), the visible set IS the export set. No extra selection steps.
Export Options (in chat header when filters active):
  • Copy as Markdown
  • Download .md
  • Download .json
Mental Model: “What I see in the chat right now is what I’m about to export/move/share.”
Additional Actions from filtered view:
  • Move visible messages to a new chat
  • Move visible messages to another instance
  • Move visible messages to Workspace/Whiteboard
  • Share as public link or via mobile share menu
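A sketch of “Copy as Markdown” over the visible set. The exact Markdown layout (role headings, separators) is an assumption; the document does not specify a format:

```typescript
// Sketch: serialize whatever the filter pipeline currently shows.
// "Visible set IS the export set" — the caller passes the filtered array.
type ExportMsg = { role: 'user' | 'assistant'; content: string; createdAt: string };

function toMarkdown(visible: ExportMsg[]): string {
  return visible
    .map(m => `**${m.role === 'user' ? 'You' : 'AI'}** (${m.createdAt})\n\n${m.content}`)
    .join('\n\n---\n\n');
}
```

“Download .md” is the same string written to a file; “Download .json” would serialize the array directly.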

FEATURE 4: Instance Workspace (Component-Based Knowledge Surface)

What it does: A per-instance, non-chronological surface for collecting and organizing important pieces from many chats. Think project board / document hybrid.
NOTE: This concept was later evolved into the spatial Whiteboard (Features 7–10). Both are valid — Workspace is the structured-list approach, Whiteboard is the spatial-canvas approach. The system ships Workspace as the v1 list view, Whiteboard as the v1.5+ visual layer.
Core Concept:
  • Every instance gets one Workspace
  • The Workspace holds Components — discrete chunks of content (not chat messages)
  • Components come from pinned messages, filtered exports, or direct creation
What is a Component? A card/block holding one coherent idea or artifact:
  • Idea snippet: “Cognigraph needs a dedicated sub-architecture for learning”
  • Structured spec: “Chat Filter Bar – Requirements + Toggles”
  • Code block: Next.js API route or n8n JSON
  • Document fragment: “Section 3: Instance Workspace Concept”
  • Visual/link: Link to Figma, diagram, etc.
Component Data Model:
type WorkspaceComponent = {
  id: string;
  workspaceId: string;
  title: string;
  contentMarkdown: string;
  type: 'idea' | 'requirement' | 'decision' | 'task' | 'code' | 'snippet' | 'reference';
  section: string;              // grouping label
  tags: string[];               // e.g., ['memory-architecture', 'UX', 'v1']
  sourceChatId?: string | null;
  sourceMessageIds?: string[];
  relatedComponentIds?: string[];
  createdAt: string;
  updatedAt: string;
};

FEATURE 5: Chat-to-Workspace Content Flow

What it does: Moves content from chats into the Workspace as organized Components.
From a Single Message:
  1. On any message, click “Add to Workspace”
  2. Dialog opens with: suggested title (first line), type selector, target workspace
  3. On save: creates Component, links back to source message via metadata
From a Filtered View (Bulk):
  1. Apply filters (e.g., Pinned + Received + Search="Cognigraph")
  2. Click “Move visible messages to Workspace”
  3. For each visible message: create Component with auto-suggested title and type
    • User messages → Idea or Question
    • AI messages → Answer or Spec
  4. Optionally group into a section: “Import from Chat — Dec 10 Brainstorm”
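The per-message mapping in step 3 can be sketched as below. The title/type heuristics are assumptions; note that the doc’s suggested “Answer”/“Spec” labels are not in the WorkspaceComponent type enum, so 'reference' stands in for AI messages here:

```typescript
// Sketch: one filtered chat message → one WorkspaceComponent draft.
type SourceMsg = { id: string; chatId: string; role: 'user' | 'assistant'; content: string };

function toComponentDraft(m: SourceMsg, section: string) {
  return {
    // Auto-suggested title: first line, truncated (80 chars is an assumption).
    title: m.content.split('\n')[0].slice(0, 80),
    contentMarkdown: m.content,
    // User messages → Idea; AI messages → 'reference' as a stand-in for Answer/Spec.
    type: m.role === 'user' ? 'idea' : 'reference',
    section, // e.g. "Import from Chat — Dec 10 Brainstorm"
    sourceChatId: m.chatId,
    sourceMessageIds: [m.id], // preserves source traceability
  };
}
```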
Workspace UI Views:
Version | View | Description
v1 | Structured List | Sections with drag-and-drop. Components as cards/rows with title, type, preview, source, tags
v1.5 | Board (Kanban) | Columns by type (Idea → Draft → Refined → Locked In) or by category
v2 | Mind Map / Graph | Components as nodes, relations as edges, visual clustering
Key Distinction (Chat vs Memory vs Workspace):
  • Chat = chronological conversation (messy thinking)
  • Instance Memory (Cognigraph) = automatic knowledge graph (behind the scenes)
  • Workspace = user-curated, intentional surface of the most important pieces (source of truth)

FEATURE 6: AI Interactions with Workspace

What it does: A “Workspace chat” or assistant bar that operates ON the components, not as a regular chat.
Example AI Commands:
  • “Turn everything under ‘Architecture’ into a structured PRD section”
  • “Compare these three Components and tell me the conflicts”
  • “Generate TypeScript interfaces from these code-spec Components”
  • “Write an executive summary of all Components tagged ‘v1’”
How it works:
  1. Engine receives text of selected Components (or all in a section)
  2. Plus a prompt defining the task (summarize, convert, refactor, etc.)
  3. Output becomes either a new Component or updates an existing one

FEATURE 7: Instance Whiteboard (Spatial Canvas)

What it does: An infinite-canvas whiteboard (like Miro/Excalidraw) where each node references content from chats. The spatial evolution of the Workspace concept.
Core Properties:
  • One whiteboard per instance (by default; can allow multiples later)
  • Each item is a Node pointing back to source content
  • Think of the board as a visual layer on top of all pinned/filtered content
Node Data Model:
type WhiteboardNode = {
  id: string;
  whiteboardId: string;
  type: 'message' | 'message-group' | 'image' | 'file' | 'link' | 'note' | 'code';
  label?: string;
  position: { x: number; y: number; width?: number; height?: number; rotation?: number };
  contentPreview?: string;
  source?: {
    chatId?: string;
    messageIds?: string[];
    fileId?: string;
    imageId?: string;
    url?: string;
  };
  meta?: {
    tags?: string[];
    color?: string;
  };
  createdAt: string;
  updatedAt: string;
};

type WhiteboardEdge = {
  id: string;
  whiteboardId: string;
  fromNodeId: string;
  toNodeId: string;
  relationType?: 'relates_to' | 'supports' | 'contradicts' | 'depends_on';
};
Node Examples:
  • Single pinned AI answer → 1 Node titled “Learning Sub-Architecture Idea”
  • Batch of 25 filtered messages → 1 Node of type message-group with preview: “25 messages from Chat: ‘Cognigraph – Learning’”

FEATURE 8: Chat-to-Whiteboard Content Flow

What it does: “Yank from chat, drop onto board” — moves content from conversations to the spatial canvas.
A. Single Message → Node:
  • On any message: “Add to Whiteboard”
  • Creates Node with type=message, source=chatId + messageId
  • Auto-placed near last added node
  • Toast: “Added to Whiteboard”
B. Bulk Filtered Messages → Group Node:
  • From filtered chat view: “Send visible messages to Whiteboard”
  • Creates single Node of type message-group with all visible messageIds
  • Label suggestion: “Cluster from – ”
  • User can rename after
C. Other Content Types:
  • AI-generated images, uploaded files, links/videos
  • “Add to Whiteboard” on attachment bubble
  • Each becomes a Node with type=image/file/link and appropriate preview
Workflow Example: Go through brainstorm across 4-5 chats → filter each to pinned messages → send each filtered cluster onto the Whiteboard as its own group-node → now all curated ideas live on one visual surface.
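Flow B (bulk filtered messages into one group node) might look like the sketch below. The label template and placement offsets are assumptions; the spec only says nodes are auto-placed near the last added node:

```typescript
// Sketch: build a single message-group WhiteboardNode from the visible
// filtered set of one chat.
type GroupNode = {
  type: 'message-group';
  label: string;
  position: { x: number; y: number };
  source: { chatId: string; messageIds: string[] };
  contentPreview: string;
};

function groupNodeFromFiltered(
  chatTitle: string,
  chatId: string,
  messageIds: string[],
  lastPos: { x: number; y: number } | null, // position of last added node, if any
): GroupNode {
  return {
    type: 'message-group',
    label: `Cluster from ${chatTitle}`, // user can rename afterward
    // Offset from the last node, or origin on an empty board (assumed values).
    position: lastPos ? { x: lastPos.x + 40, y: lastPos.y + 40 } : { x: 0, y: 0 },
    source: { chatId, messageIds },
    contentPreview: `${messageIds.length} messages from Chat: '${chatTitle}'`,
  };
}
```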

FEATURE 9: Spatial Canvas Editing

What it does: Miro/Excalidraw-style canvas interactions for organizing nodes.
Canvas Basics:
  • Infinite scroll/pan/zoom
  • Nodes can be dragged, resized, grouped
Toolbar (left side or top):
  • Select — click and move nodes
  • Rectangle/Frame — group container (like Figma frames)
  • Connector/Arrow — draw relationships between nodes
  • Sticky Note / Text Box — freeform annotation
Key Operations:
  • Draw a frame around related nodes → label it (e.g., “Learning Sub-Architecture”, “Chat Filter UX”)
  • Use connectors between nodes to show relationships:
    • “This idea supports that spec”
    • “This cluster evolves into that PRD”
  • Under the hood, each connector = { fromNodeId, toNodeId, relationType }
  • Relation types optional in v1, can add (supports, contradicts, depends-on) later

FEATURE 10: AI-on-Board (Board Chat Panel)

What it does: A right-side panel for talking to the board content. Not a regular conversation — a control interface for AI operations on curated content.
Example Commands:
  • “Take everything in this frame and turn it into a PRD”
  • “Summarize this cluster”
  • “Generate a step-by-step workflow from these Nodes”
  • “Compare this idea cluster to that spec cluster and tell me conflicts”
Context Selection Modes:
  1. No selection: Use everything on the board (or everything visible)
  2. Selection mode: If nodes are selected when user types, only those nodes provide context
  3. Frame-specific: Right-click a frame → “Ask AI about this frame…” → next prompt scoped to that frame’s nodes
API Request Shape:
{
  "instanceId": "...",
  "whiteboardId": "...",
  "nodeIds": ["...", "..."],
  "prompt": "Turn all of this into a PRD."
}
Engine Process:
  1. Resolve nodeIds → full underlying content (messages, text, image descriptions, links)
  2. Feed content + user prompt into model
  3. Return result
Output Destinations:
  • Appears in the Board Chat panel
  • Optionally saved as a new AI Output Node on the canvas (e.g., “Draft PRD v1”)
  • New node can then be connected, refined, or exported
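Engine steps 1-2 (resolve nodeIds to content, then feed content plus the user prompt to the model) amount to prompt assembly. The in-memory node shape and the “###” section formatting below are assumptions:

```typescript
// Sketch: turn the selected whiteboard nodes into the model's context block.
type BoardNode = { id: string; label?: string; contentPreview?: string };

function buildBoardPrompt(nodes: BoardNode[], nodeIds: string[], prompt: string): string {
  // Only the explicitly selected nodes provide context (selection scoping).
  const selected = nodes.filter(n => nodeIds.includes(n.id));
  const context = selected
    .map(n => `### ${n.label ?? n.id}\n${n.contentPreview ?? ''}`)
    .join('\n\n');
  // Context first, then the user's task.
  return `${context}\n\n---\n\n${prompt}`;
}
```

In a real resolver, message-group nodes would expand `messageIds` into full message text rather than using `contentPreview`.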

Three-Layer Architecture Summary

Layer | Purpose | Nature
Chats | Messy thinking and iteration | Chronological, filterable, exportable
Whiteboard (per instance) | Curated pieces from many chats as visual nodes | Spatial, grouped, connected, labeled
AI-on-Board | Higher-order operations on board content | Reads nodes/clusters/frames, produces new artifacts

API Endpoints

Method | Endpoint | Purpose
PATCH | /chats/:chatId/messages/:messageId | Pin/unpin message ({ isPinned: boolean })
POST | /instances/:instanceId/workspace/components | Create Workspace Component
GET | /instances/:instanceId/workspace/components | List Components
PATCH | /workspace/components/:componentId | Update Component
POST | /instances/:instanceId/workspace/import-from-chat | Bulk import messages as Components
GET | /instances/:instanceId/whiteboard | Get board + nodes + edges
POST | /instances/:instanceId/whiteboard/nodes/from-messages | Create node(s) from messages
POST | /instances/:instanceId/whiteboard/ask | AI operation on selected nodes

Database Tables

instance_workspaces — id, instanceId
workspace_components — id, workspaceId, title, contentMarkdown, type (enum), section, tags (JSON), sourceChatId, sourceMessageIds (JSON), createdAt, updatedAt
workspace_relations (optional) — id, workspaceId, fromComponentId, toComponentId, relationType
whiteboard_nodes — id, whiteboardId, type, label, position (JSON), contentPreview, source (JSON), meta (JSON), createdAt, updatedAt
whiteboard_edges — id, whiteboardId, fromNodeId, toNodeId, relationType

User Flow: End-to-End Example

  1. User brainstorms across 5-10 chats about Cognigraph, memory architecture, chat filters
  2. In each chat: pin key answers, filter to Pinned + Received, search “Cognigraph”
  3. Use “Move visible messages → Workspace” (or “Send to Whiteboard”)
  4. All pinned AI answers become Components/Nodes in the instance’s Workspace/Whiteboard
  5. In Workspace: organize into sections (Concept Overview, Memory Layers, Learning Sub-Architecture)
  6. In Whiteboard: arrange spatially, draw frames, connect related clusters
  7. Ask AI (Workspace chat or Board chat): “Generate a v1 PRD for learning sub-architecture based on everything in this section/frame”
  8. Output saved as new Component/Node: “Learning Sub-Architecture – PRD v1”
  9. Instead of Cognigraph being scattered across 30 chats, the instance has a single canonical surface with all curated pieces

Implementation Principles

  1. Pins are per-message metadata — simplest possible data extension
  2. The Workspace is structured (list/board); the Whiteboard is spatial (canvas) — both serve the same purpose at different fidelity levels
  3. Ship Workspace list view as v1, Board/Kanban as v1.5, spatial Whiteboard as v2
  4. Components and Nodes always maintain source traceability (chatId, messageIds)
  5. AI-on-Board requests are scoped by selection — context is explicitly defined by what nodes the user selects
  6. The board is a visual layer on top of Cognigraph, not a replacement for it
  7. Every node/component can link back to its original chat message for full context
  8. Workspace/Whiteboard is per-instance — one canonical surface per project

Document 8: Cognition Console UI Design

Junior Developer Breakdown

Source: 8. aiConnected OS Cognition console UI design.md
Purpose: Defines the front-end interactive interface for the Cognigraph artificial cognition architecture. Redesigns how memory, projects, sessions, and personas are exposed and controlled through the UI. This is the “control panel over Cognigraph’s memory layers” plus a workbench for real project work with AI.
Key Paradigm Shift: The old model treats chat as memory (“chat history = what the AI knows”). The new model treats memory as a knowledge graph; chat is just the log from which memory is distilled. Users can see, edit, and govern what the AI remembers.
Cross-References:
  • Doc 7 covers Workspace and Whiteboard (visual curation surfaces)
  • Doc 9 covers Collaborative Personas (multi-persona interactions)
  • Doc 15 covers Persona memory architecture in detail (identity, instruction, experience, skill layers)

FEATURE 1: Core Data Model — The Objects Users See

What it does: Defines the six fundamental objects the UI must expose and let users manipulate.

1a. Persona

Not “just a chat.” A semi-stable mind with purpose, style, and memory scope.
type Persona = {
  id: string;
  name: string;                    // "Neuro Architect", "Legal Analyst"
  role: string;                    // What this Persona is for
  style: string;                   // Tone, detail level, assumptions
  linkedMemoryScope: string;       // Which Cognigraph slice it uses
  safetyProfile: string;           // Guardrails, forbidden topics
};
UI Impact: Persona picker at top, detail panel showing purpose/strengths/memory scope.

1b. Project

The backbone — not loose chats. Projects bundle context, personas, memories, and artifacts.
type Project = {
  id: string;
  name: string;
  description: string;             // Goal statement
  status: 'active' | 'paused' | 'archived';
  primaryPersonaId?: string;
  relatedPersonaIds: string[];
  pinnedMemoryIds: string[];       // Key long-term memories
  artifactIds: string[];           // Docs, specs, uploads
  createdAt: string;
  updatedAt: string;
};
UI Impact: Left sidebar project list with filters, project dashboard with goals/tasks/sessions/memories.

1c. Session (replaces “chats”)

A conversation episode inside a Project. This is where messages live.
type Session = {
  id: string;
  projectId: string;
  personaId: string;
  title: string;                   // "Memory architecture brainstorm #1"
  contextConfig: object;           // Which memories/topics are attached
  createdAt: string;
  lastActiveAt: string;
};
UI Impact: “Sessions” tab within a Project, timeline list, “New Session” button with Persona picker.

1d. Message

Raw dialogue — not the primary memory, but the evidence from which memory is distilled.
type Message = {
  id: string;
  sessionId: string;
  author: 'user' | 'persona' | 'system';
  content: string;
  createdAt: string;
  tags: string[];                  // Auto-suggested topics
  linkedMemoryIds: string[];       // Which MemoryNodes this contributed to
  promoted: boolean;               // If selected & promoted to long-term memory
};
UI Impact: Normal chat stream. Hover over message → see linked memories, promote/demote.

1e. MemoryNode (Cognigraph node)

The central object — a structured memory entry following Category → Concept → Topic hierarchy.
type MemoryNode = {
  id: string;
  scope: 'global' | 'persona' | 'project' | 'session';
  layer: 'open' | 'closed';       // Open Thinking (ephemeral) vs Closed Thinking (committed)
  category: string;                // "Business", "Health", "aiConnected architecture"
  concept: string;                 // "Cognigraph memory model", "BrowserEngine PRD"
  topic: string;                   // "Open vs Closed Thinking Layers UI"
  type: 'fact' | 'preference' | 'rule' | 'plan' | 'story' | 'pattern' | 'question' | 'decision';
  content: string;                 // Distilled memory text
  sourceMessageIds: string[];
  originPersonaId?: string;
  importanceScore: number;         // How central this is
  stabilityScore: number;          // How "settled" vs "tentative"
  lastAccessedAt: string;
  createdAt: string;
};
UI Impact: Dedicated Memory Explorer. Memory drawer on right side of Session. Category → Concept → Topic drill-down.
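The Category → Concept → Topic drill-down can be built by grouping nodes into a nested map. The nested-record tree shape below is an assumption for illustration; a real Memory Explorer might keep counts or lazy-load leaves:

```typescript
// Sketch: group flat MemoryNodes into the three-level hierarchy the
// Memory Explorer tree view renders.
type MemNode = { category: string; concept: string; topic: string; content: string };

type Tree = Record<string, Record<string, Record<string, MemNode[]>>>;

function buildTree(nodes: MemNode[]): Tree {
  const tree: Tree = {};
  for (const n of nodes) {
    const cat = (tree[n.category] ??= {});   // Category level
    const con = (cat[n.concept] ??= {});     // Concept level
    (con[n.topic] ??= []).push(n);           // Topic level holds the nodes
  }
  return tree;
}
```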

1f. Artifact

Anything that isn’t a message but is part of the work.
type Artifact = {
  id: string;
  projectId: string;
  type: 'file' | 'url' | 'note' | 'spec' | 'dataset';
  title: string;
  description: string;
  link?: string;
  fileMetadata?: object;
  generatedByPersonaId?: string;
  createdAt: string;
};
UI Impact: Project “Assets” tab, side panel to insert artifacts into session context.

FEATURE 2: Core Screens & Layout

What it does: Defines the four primary views and global layout.
Four Primary Views:
  1. Home / Persona Hub
  2. Project Dashboard
  3. Session View (chat + memory drawer)
  4. Memory Explorer
Global Desktop Layout:
  • Left sidebar: Persona selector (avatar + name + status), Projects list, Global Memory link, Settings, Daily Memory Report link
  • Main area: Contextual content (Projects list, Session, Memory Explorer, etc.)
  • Right drawer (toggle): “Context & Memory” for current Session — active memory slice, pinned nodes, recently used nodes, quick edit/add

FEATURE 3: Project Dashboard

What it does: Rich dashboard shown when the user clicks into a Project. Not just a chat list — a living control surface.
Header: Name, Main Persona, Goal statement, Status badge
Tabs:
Tab | Content
Overview | Current goal/summary, last 3 Sessions, top 5 pinned MemoryNodes, active tasks
Sessions | Timeline list with title, date, Persona, short summary. “New Session” button with Persona picker
Memory | Scoped Memory Explorer — only project-scoped nodes by default. Filter by type (fact, plan, decision, etc.). List + tree view
Artifacts | Uploads, specs, notes, links. “Use in Session” button to attach as context
Settings | Allowed Personas, default memory slice rules (e.g., “use global ‘Engineering’ memories but not personal life”)

FEATURE 4: Session View (The Chat Experience)

What it does: The primary interaction screen — a chat view enriched with visible memory indicators and context controls.
Left: Breadcrumbs
  • Persona avatar + name
  • Project name
  • Session title
Center: Conversation Stream — standard chat, but each message can show:
  • Tiny tags under messages: #Cognigraph, #MemoryModel, #UI
  • Indicator if memory was created/updated: small icon “3 memories updated”
  • On hover/click:
    • “View linked memories”
    • “Promote this to long-term memory” (if AI suggested a candidate)
    • “Remove from memory”
Right: Context & Memory Drawer (three sections)
  1. Active Context — memory nodes currently attached to this Session as chips/cards showing title, type, scope icon (global/persona/project). User can pin/unpin, temporarily disable a node for this Session.
  2. Suggestions — memory nodes the engine thinks would be useful. “Add to context” button.
  3. Scratchpad (Open Thinking Layer) — ephemeral notes for this Session only. AI may write transient reasoning here. User can click “Commit to closed memory” to solidify.

FEATURE 5: Memory Explorer (“The Brain” UI)

What it does: Full-screen view for browsing, filtering, and governing all memories. Also accessible scoped within a Project.
Controls:
  • Filters: Persona, Project, Scope (global/persona/project), Type (fact/rule/preference/etc.), Time window (created/last accessed)
  • Views:
    • Tree view — Category → Concept → Topic → nodes
    • List view — sortable table
    • Graph view (later) — visual knowledge graph
Memory Node Card (click to expand):
  • Content
  • Type, scope, layer, scores (importance, stability)
  • Source Sessions/Messages
  • Links:
    • “Jump to source message”
    • “Edit & version history”
    • “Attach to current Session”
    • “Change scope” (promote from project → global)
    • “Mark outdated” (lowers stability, hides from default context)

FEATURE 6: Daily Memory Report

What it does: A key governance surface showing what the AI learned, updated, or flagged each day.
Report Contents (grouped by Project & Persona):
  • “New long-term memories created today”
  • “Updated memories”
  • “Potential conflicts or contradictions”
User Actions from Report:
  • Approve / adjust / delete nodes
  • Re-scope (“this belongs only in aiConnected, not global”)
Purpose: This is how users actually govern the Closed Thinking Layer. Makes memory feel deliberate, not spooky.

FEATURE 7: Message-to-Memory Pipeline (UI Side)

What it does: Makes the process of memories being extracted from conversations visible and controllable. Pipeline:
  1. User and Persona talk in a Session
  2. Cognigraph (behind the scenes) extracts candidate memories, links to existing nodes or creates new ones
  3. UI surfaces this two ways:
    • Inline: subtle indicator on messages (“2 new memories extracted”)
    • End-of-session summary: “Here’s what I learned / updated”
  4. User has explicit control:
    • Accept / reject / edit new nodes
    • Or defer and handle via Daily Memory Report
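The UI side of this pipeline could track extraction candidates with a small status model, as in this sketch (all names hypothetical; Cognigraph's real extraction API is not defined here):

```typescript
// Sketch of UI-side state for extracted memory candidates.
type CandidateStatus = 'pending' | 'accepted' | 'rejected' | 'deferred';

type MemoryCandidate = {
  id: string;
  sourceMessageId: string;
  content: string;
  status: CandidateStatus;
};

// Inline indicator counts per message ("2 new memories extracted").
function inlineCounts(cands: MemoryCandidate[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const c of cands.filter(c => c.status === 'pending')) {
    counts.set(c.sourceMessageId, (counts.get(c.sourceMessageId) ?? 0) + 1);
  }
  return counts;
}

// Deferred candidates flow into the Daily Memory Report instead of being dropped.
function deferAll(cands: MemoryCandidate[]): MemoryCandidate[] {
  return cands.map(c => (c.status === 'pending' ? { ...c, status: 'deferred' as const } : c));
}
```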

FEATURE 8: Projects as True Context Bundles

What it does: Elevates Projects beyond “folder of chats” into rich context containers. Project Context Profile:
  • Default memory scopes
  • Relevant categories (e.g., “Engineering: Cognition”, “Business: aiConnected”)
  • Style preferences (short vs long, more code vs more explanation)
Persona Assignment:
  • Lead Persona for this domain
  • Secondary assisting Personas
Knowledge Baseline:
  • Pinned MemoryNodes representing key assumptions/decisions
  • The Project reads like a living spec
Session Start Experience:
Starting Session with:
  • Persona: Neuro Architect
  • Project: Cognigraph Front-End
  • Context: 17 memory nodes (rules, decisions, architecture)
  • Artifacts: PRD v1, wireframe sketches, previous conversation summaries
Plus a “Configure context” button to fine-tune before first message.

FEATURE 9: Open Thinking Layer vs Closed Thinking Layer (UI Mapping)

What it does: Maps Cognigraph’s internal memory architecture to visible, controllable UI elements. Mapping:
| Cognigraph Layer | UI Surface | Nature |
| --- | --- | --- |
| Open Thinking Layer (OTL) | Scratchpad, per-Session notes | Ephemeral, transient reasoning |
| Closed Thinking Layer (CTL) | Memory Explorer nodes (Category/Concept/Topic) | Committed, structured, durable |
UI Rules:
  1. User can always see which layer they are editing
  2. Promotion: “Convert this scratchpad element to a permanent memory”
  3. Demotion: “Move this memory back to scratch / mark as tentative”
Visual Distinction:
  • OTL: lighter color, “pencil” icon, ephemeral feel
  • CTL: solid color, “book” icon, durable feel
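The promotion/demotion rules above can be sketched as two small pure functions; the `Note` type and `tentative` flag are assumptions for illustration:

```typescript
// Sketch of promotion/demotion between the two thinking layers.
type Layer = 'OTL' | 'CTL';

type Note = { id: string; text: string; layer: Layer; tentative: boolean };

// Promotion: "Convert this scratchpad element to a permanent memory."
function promote(note: Note): Note {
  return { ...note, layer: 'CTL', tentative: false };
}

// Demotion: "Move this memory back to scratch / mark as tentative."
function demote(note: Note): Note {
  return { ...note, layer: 'OTL', tentative: true };
}

// Rule 1: the UI can always derive which layer is being edited
// (pencil icon for ephemeral OTL, book icon for durable CTL).
function layerBadge(note: Note): string {
  return note.layer === 'OTL' ? 'pencil' : 'book';
}
```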

FEATURE 10: MVP vs Full Vision

What it does: Defines what to ship first vs what to defer.

MVP Must-Haves

  1. Personas — Persona picker + simple settings
  2. Projects — Create/edit, attach Personas
  3. Sessions — Conversation view, basic list under Project
  4. Memory (CTL) — Auto-extracted memories as list, simple filters (type/time), edit/delete/pin
  5. Context Drawer — Shows memory nodes in Session, toggle on/off
  6. Daily Memory Report — Simple list of new/updated nodes grouped by Project

Later Enhancements

  • Full Memory Explorer tree + graph view
  • Tasks integrated with memory
  • Cross-project similarity suggestions
  • Timeline visualizations (“what the AI learned this week”)
  • Story mode / narrative of Project history

Paradigm Shift Summary

| Old World | New World |
| --- | --- |
| Chat ≈ Memory (each chat is a silo) | Memory ≈ Knowledge graph; chat is a log |
| “Memory” is a vague hidden blob | Memory is visible, categorized, governed |
| Projects = folder of chats | Projects = first-class context bundles |
| Start a new chat = blank slate | Start a Session = Persona + Project + Context |
User’s experience: “I’m not talking to a blank slate. I’m talking to a Persona that lives in a specific Project, and I can see the brain it’s using.”

Implementation Principles

  1. The UI is a control panel over Cognigraph — every memory layer should be visible and editable
  2. Messages are evidence; MemoryNodes are the distilled knowledge. Don’t confuse the two
  3. Memory should feel deliberate, not spooky — always show what was learned, let users govern it
  4. Projects are the organizational backbone, not chats. Sessions live inside Projects
  5. The Context Drawer is the user’s real-time view into what the AI “knows” right now
  6. Ship simple (list views, basic filters) first. Graph views and cross-project intelligence come later
  7. Daily Memory Report is the key governance mechanism — don’t skip it in MVP
  8. Scratchpad (OTL) and Memory Explorer (CTL) should be visually distinct so users always know what’s ephemeral vs permanent

Document 9: Collaborative Personas Planning

Junior Developer Breakdown

Source: 9. aiConnected OS Collaborative personas planning.md
Purpose: Defines how multiple AI Personas participate in the same conversation — joining, leaving, remembering, and collaborating — just like real people do. Introduces three collaboration modes and the data structures that unify them.
Core Principle: “A chat is not bound to a single Persona. A chat is a container for context, artifacts, and memory links. Personas are participants — not owners.”
Cross-References:
  • Doc 8 covers Cognition Console (Persona/Project/Session model)
  • Doc 14 covers Build Plan (Chat Kernel, multi-persona capabilities)
  • Doc 15 covers Persona memory layers (identity, instruction, experience, skill)

FEATURE 1: Chat as Shared Context Container

What it does: Redefines chats from “one AI conversation” to a shared container that multiple Personas participate in. A chat can include:
  • Text conversation
  • Documents
  • Images
  • Live screen sharing
  • Voice
  • Tools, timelines, decisions
Personas are participants, not owners. This means:
  • A chat can start with one Persona or many
  • Personas can be added/removed dynamically
  • The conversation context is preserved regardless of who joins or leaves
  • Each Persona remembers their participation independently

FEATURE 2: Three Collaboration Modes

What it does: Supports three natural human interaction patterns through one unified system.

Mode 1: Invite Mode (Drop-In Collaboration)

User is talking to one Persona, then intentionally brings in others.
  • “Let me bring the developer into this discussion”
  • Later: “Thanks, you can step out”

Mode 2: Open Chat Mode (Commons / Lounge)

A persistent, always-available thread where ALL Personas can contribute when they have something relevant.
  • Like a chaotic group messenger (Yahoo Messenger / Slack channel)
  • Personas speak only when they pass a Contribution Threshold
  • Ideal for brainstorming and exploratory thinking

Mode 3: Multi-Persona Start

User creates a new chat and selects multiple Personas from the beginning.
  • “I’m starting a thread with finance, ops, and legal”
  • Same mechanism as invite mode — just all links created at chat start time
All three modes use the same underlying primitive: Participation Links.

FEATURE 3: Participation Links

What it does: The bridge between one conversation thread and multiple Persona memories. Created whenever a Persona joins a chat.
type ParticipationLink = {
  id: string;
  conversationId: string;
  personaId: string;
  joinedAt: string;
  leftAt?: string | null;
  contextScope: 'full' | 'recent' | 'summary';  // how much of thread they see
  memoryPolicy: 'allowed' | 'restricted' | 'none'; // what they can store
  roleInThread: 'collaborator' | 'advisor' | 'implementer' | 'reviewer' | 'observer';
  summarySnapshotId?: string;  // system-generated catch-up
};
Key Properties:
  • Created on join, updated on leave
  • Leaving does NOT erase contributions, memory, or ability to reference later
  • Same structure whether Persona was there from start or invited mid-conversation

FEATURE 4: Dynamic Persona Participation (Join/Leave)

What it does: Lets users add and remove Personas from conversations at any point. Adding a Persona:
  • UI: “Add collaborator” → search Personas
  • Or command: @Developer join
System Behavior on Join — Catch-Up Packet: To prevent dumping the entire transcript, the system sends a concise catch-up:
  • Thread title + goal (one paragraph)
  • Last N turns (10-30)
  • Pinned context (requirements, constraints, decisions)
  • Open questions specifically for that Persona
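The catch-up packet above might be modeled as follows. Field names and the turn-count clamping are assumptions based on the listed contents:

```typescript
// Hypothetical shape for the catch-up packet sent to a joining Persona.
type Turn = { speaker: string; text: string };

type CatchUpPacket = {
  threadTitle: string;
  goal: string;             // one-paragraph thread goal
  recentTurns: Turn[];      // last N turns (10-30)
  pinnedContext: string[];  // requirements, constraints, decisions
  openQuestions: string[];  // questions specifically for this Persona
};

// Keep the packet bounded instead of dumping the whole transcript.
function buildCatchUp(
  title: string,
  goal: string,
  turns: Turn[],
  pinned: string[],
  questions: string[],
  n = 20
): CatchUpPacket {
  const clamped = Math.min(Math.max(n, 10), 30); // enforce the 10-30 window
  return {
    threadTitle: title,
    goal,
    recentTurns: turns.slice(-clamped),
    pinnedContext: pinned,
    openQuestions: questions,
  };
}
```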
Removing a Persona:
  • UI: “Remove” / “Developer can leave”
  • Or command: @Developer leave
  • Leaving ends the participation window but preserves everything
Non-Linear Participation:
  • Start with 3-5 Personas → narrow to 1
  • Start with 1 → expand to many
  • Start with many → dismiss one → re-add later
  • Each action simply adds or ends a Participation Link — no “mode switching”
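Since every action just adds or ends a Participation Link, join/leave can be two small operations, as in this sketch (the `ParticipationLink` type is trimmed from the definition above; the ID scheme is an assumption):

```typescript
// Join/leave are just creating or closing ParticipationLinks (type trimmed).
type ParticipationLink = {
  id: string;
  conversationId: string;
  personaId: string;
  joinedAt: string;
  leftAt?: string | null;
};

function join(links: ParticipationLink[], conversationId: string, personaId: string): ParticipationLink[] {
  return [...links, {
    id: `${conversationId}:${personaId}:${links.length}`, // illustrative ID scheme
    conversationId,
    personaId,
    joinedAt: new Date().toISOString(),
    leftAt: null,
  }];
}

// Leaving ends the participation window — nothing is erased.
function leave(links: ParticipationLink[], personaId: string): ParticipationLink[] {
  return links.map(l =>
    l.personaId === personaId && !l.leftAt ? { ...l, leftAt: new Date().toISOString() } : l
  );
}
```

Re-adding a Persona later simply creates a fresh link, which is why "no mode switching" is needed.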

FEATURE 5: Persona-Specific Memory of Shared Experiences

What it does: Each Persona stores their OWN memory of shared conversations. No single shared blob. Individual Memory (Persona-Level) records:
  • What they said
  • What they recommended
  • How their advice performed
  • Their evolving confidence in the user
Shared Memory (Group-Level) records:
  • What the group discussed
  • What decisions were made
  • What conflicts emerged
  • What conclusions were reached (or deferred)
Critical Design:
  • Personas can disagree with group memory — a finance Persona might flag: “I still believe the decision we made last month was financially unsound”
  • Same chat, different memory traces:
    • Developer remembers technical constraints
    • Finance Persona remembers cost implications
    • Dating Persona remembers emotional tone and signals
Referenced Participation Across Threads: When user later talks to a Persona 1:1, that Persona can say:
  • “Yes — during the thread about X, you asked me to…”
  • “We decided Y, and I warned about Z…”
  • “I can pull up the exact message where we agreed”
This requires ParticipationLink + message anchors.

FEATURE 6: Three Memory Layers for Collaboration

What it does: Defines the memory architecture required for true collaborative cognition.
| Layer | Scope | Nature |
| --- | --- | --- |
| Persona Memory Graph | Private, per-Persona | Identity-anchored, evolutionary |
| Collaborative Space Memory | Shared across participants | Time-indexed, decision-aware |
| User Relationship Memory | Per-Persona view of user | Trust levels, communication preferences |
Rules:
  • Personas read shared memory
  • Personas write to shared memory
  • Personas interpret shared memory differently
  • This is how true perspective emerges

FEATURE 7: Open Chat — Opportunistic Collaboration

What it does: A persistent, always-available thread on the Instance Dashboard where all Personas can contribute when relevant. Characteristics:
  • Always visible or one click from Dashboard
  • Does not need to be created each time
  • Accumulates history over time
  • Acts as default brainstorming/ideation stream
  • Personas participate opportunistically, not constantly
Contribution Threshold (prevents chaos): Each Persona has an internal gate:
  • Relevance score — is this in my domain/skill?
  • Novelty score — am I adding something not already said?
  • Confidence score — do I have enough signal to speak?
  • Impact score — would this change a decision or direction?
  • Redundancy check — has another Persona already covered it?
Only if the combined score clears the threshold does the Persona post.
Cooldown Rule:
  • Allow 1-3 replies per user message
  • Queue the rest as “optional insights” user can expand
  • Prevents pile-ons while preserving messenger-channel vibe
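The threshold gate and cooldown cap might combine like this sketch; the equal weighting, the 0.6 threshold, and the function names are all assumptions:

```typescript
// Sketch of the Contribution Threshold gate (weights/threshold are assumptions).
type Scores = {
  relevance: number;   // is this in my domain/skill?
  novelty: number;     // am I adding something not already said?
  confidence: number;  // do I have enough signal to speak?
  impact: number;      // would this change a decision or direction?
};

const THRESHOLD = 0.6; // hypothetical

function shouldSpeak(s: Scores, alreadyCovered: boolean): boolean {
  if (alreadyCovered) return false; // redundancy check is a hard gate
  const combined = (s.relevance + s.novelty + s.confidence + s.impact) / 4;
  return combined >= THRESHOLD;
}

// Cooldown: cap replies per user message, queue the rest as optional insights.
function selectRepliers(
  candidates: { personaId: string; scores: Scores; covered: boolean }[],
  cap = 3
) {
  const eligible = candidates.filter(c => shouldSpeak(c.scores, c.covered));
  return { speak: eligible.slice(0, cap), queued: eligible.slice(cap) };
}
```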
Differentiation Requirement: Each Persona must have:
  • A unique thinking model (risk-averse vs opportunity-seeking)
  • A unique output style (bullet-heavy, narrative, question-asking)
  • A unique default goal (protect, accelerate, simplify, validate)
  • A redundancy gate that penalizes “generic assistant answers”

FEATURE 8: Conversation Orchestration (Text + Voice)

What it does: Structured rules for who speaks, when, and how — prevents chaos while allowing natural emergence. Orchestration Responsibilities:
  1. Turn Management — decide who speaks, allow interruptions, prevent domination
  2. Trigger Conditions — Persona speaks when domain is relevant, risk threshold crossed, or another Persona makes a questionable claim
  3. Cross-Persona Dialogue — Personas can question each other, build on ideas, push back respectfully
  4. User Override — user can address one Persona directly, ask the group, mute or prioritize Personas
Voice Mode:
  • Each Persona has a distinct voice
  • System announces speaker changes naturally
  • Interruptions feel conversational, not robotic
Implementation (scalable approach):
  1. Selector pass (cheap): decide which Personas have something worth saying
  2. Speaker pass (expensive): generate responses only for selected Personas
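The two-pass flow above can be sketched as a pair of functions; the keyword-based selector here is a deliberately cheap stand-in (in practice the selector might be a small model call), and the generation step is stubbed:

```typescript
// Sketch of the two-pass orchestration: cheap selector, then expensive
// generation only for selected Personas (both passes are illustrative stubs).
type Persona = { id: string; domains: string[] };

// Pass 1 (cheap): lightweight relevance check instead of a full model call.
function selectorPass(personas: Persona[], userMessage: string): Persona[] {
  const text = userMessage.toLowerCase();
  return personas.filter(p => p.domains.some(d => text.includes(d)));
}

// Pass 2 (expensive): in a real system this would call the LLM per Persona.
async function speakerPass(selected: Persona[], userMessage: string): Promise<string[]> {
  return selected.map(p => `[${p.id}] response to: ${userMessage}`); // stub
}
```

The design point is cost control: the expensive pass runs only on the (usually small) output of the cheap pass.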

FEATURE 9: Dashboard as Collaboration Hub

What it does: The Instance Dashboard serves as the living control surface for all collaboration. Dashboard Role:
  1. Centralized awareness — shows all Personas, conversations, active status
  2. Immediate interaction — start typing without deciding chat type first
  3. Persistent collaboration — hosts the permanent Open Chat
Relationship Model:
  • Dashboard → centralized hub
  • Open Chat → persistent, shared conversation (the commons)
  • Chats → focused, contextual threads
All three coexist. Dashboard anchors them, not replaces them.

FEATURE 10: Use Case Neutrality

What it does: Ensures the collaborative system works across ALL use cases, not just business. Same mechanics support:
  • Business advisory groups
  • Creative writers’ rooms
  • Personal life councils
  • Dating simulations
  • Group therapy-like reflection
  • Friend-group simulations
  • Mentors + peers + challengers
System perspective: A Persona is a Persona. A chat is a chat. Participation rules are identical. Only identity, memory, and contribution thresholds differ.

Persona Identity Differentiation

Each Persona in a CPS has:
  • Core Identity — name, personality traits, communication style, risk tolerance, decision bias (conservative/aggressive/analytical/creative)
  • Primary Specialization — Finance, Operations, Legal, Strategy, Technical, Emotional/coaching
  • Secondary Modifiers — ethical strictness, speed vs depth preference, optimism vs skepticism, authority level (advisor vs executor)
These are not prompts. They are constraints applied at inference and memory interpretation time.

UI Elements for Collaboration

| Element | Purpose |
| --- | --- |
| Persona Panel | Shows active Personas, status indicators (Listening/Thinking/Responding), mute/focus controls |
| Unified Conversation Stream | One conversation, clear speaker attribution, optional color/icon coding |
| Group Controls | “Ask the group”, “Facilitate discussion”, “Summarize consensus”, “Highlight disagreements” |
| Memory Anchors | Decision markers, unresolved issues, action items tied to Personas |

MVP vs Full Version

MVP (Ship Early)

  • Join/leave Personas into a thread
  • Catch-up packet on join
  • Persona stores Participation Memory Event
  • Participants strip + mention syntax

Full Version (Later)

  • Permissions per Persona (what they can store)
  • Persona “office hours” / availability
  • Auto-suggest collaborator
  • Voice mode speaker switching
  • Action-item handoff (“Developer, take ownership of task #12”)

The Unified Model (One Sentence)

In aiConnected, conversations are persistent contexts that can include any number of Personas, who may join, contribute, leave, and remember their participation — individually and continuously — just like people do in real life.

Implementation Principles

  1. ParticipationLink is the universal primitive — same structure for all three modes (invite, open, multi-start)
  2. Catch-up packets prevent transcript dumping on join — keep context efficient
  3. Contribution Thresholds prevent Open Chat from becoming noise
  4. Memory is ALWAYS Persona-specific — no single shared blob. Personas interpret shared events differently
  5. Leaving a chat preserves everything — contributions, memory, ability to reference later
  6. Two-step flow for scalability: cheap selector pass, then expensive generation pass
  7. Use case neutral design — business, creative, social, therapeutic all use identical mechanics
  8. Dashboard Open Chat is the “commons” — always available, persistent, naturally collaborative

Document 10: Computer Use for AI Personas (Embodied Digital Worker)

Junior Developer Breakdown

Source: 10. aiConnected OS Computer Use for aiPersonas.md
Purpose: Defines how AI Personas get a “physical body” — a persistent digital workspace where they actually USE computers like humans do (clicking, typing, browsing, navigating software) instead of relying on APIs or scripts. This is the execution layer that makes Personas feel like real digital employees.
Core Principle: “You are not trying to automate tasks — you are trying to instantiate digital agency, and that requires embodiment, continuity, and learning, not better prompts or more tools.”
Analogy: Think of it as hiring a teenager — capable, limited, supervised, improving over time. If a trained teenager could do a task on a computer, this system should eventually be able to do it too.
Cross-References:
  • Doc 8 covers Cognition Console (Persona/Project/Session model)
  • Doc 9 covers Collaborative Personas (multi-persona participation)
  • Doc 15 covers Persona memory layers (procedural memory = skills)
  • Doc 19 covers Fluid UI Architecture (how embodiment fits the overall vision)

FEATURE 1: Digital Body (Persistent Desktop Environment)

What it does: Each Persona gets a sandboxed computer environment — a real desktop it “lives inside” with persistent state. Components:
  • Controlled desktop environment per Persona (containerized)
  • Browser with its own profile (cookies, sessions, logins, extensions)
  • Controlled filesystem (downloads, uploads, saved files)
  • OS-level clipboard and window management
  • Persistent across sessions (doesn’t reset every time)
Technology Stack:
  • KasmVNC — web-native desktop streaming (Linux desktop streamed to browser)
  • Containerized workspace images
  • WebRTC/VNC for connection + input injection
  • Chromium-based browser inside the environment
Why This Matters:
  • “Teenager has their own work computer” model
  • Isolated environment (easy reset/recover)
  • You control what apps exist and what permissions it has
  • Already feels like a worker because it “lives somewhere”

FEATURE 2: Perception Stack (How the Agent “Sees”)

What it does: Three-layer perception system that supports the fuzzy generalization a human uses when looking at a screen. Layer 1: UI Text & Structure (fast, reliable)
  • DOM accessibility tree when available (browser)
  • Visible text extraction
Layer 2: Visual Understanding (fallback)
  • Screenshot analysis for when DOM is unavailable
  • Chart/image interpretation
  • Layout understanding
Layer 3: Contextual Memory
  • “I was on this page before”
  • “This button moved but does the same thing”
  • Pattern recognition across sessions
Key Design Choice: Prefer structured control (DOM/accessibility tree via Playwright/CDP) and use vision as a fallback when structure is unavailable.

FEATURE 3: Action Layer (How the Agent “Acts”)

What it does: Reliable UI control — clicking, typing, navigating — like a human would. Technology:
  • Playwright — mature base for controlled browser automation across engines
  • browser-use — open-source accelerator for LLM-driven web actions
  • CDP (Chrome DevTools Protocol) for fine-grained control
Actions:
  • Mouse move/click/drag
  • Keyboard typing (including shortcuts)
  • Window management, tabs, basic OS interactions
  • Form filling, file upload/download
  • Tab management and navigation

FEATURE 4: Verification Gates (Evidence-Based Completion)

What it does: Ensures the agent doesn’t hallucinate success. A task step is only “done” if a condition is observed. Verification Loop:
  1. Plan: Break task into steps with expected outcomes
  2. Act: Click/type
  3. Observe: Read DOM + screenshot + network events
  4. Verify: Check for success condition (not “I clicked it” — actual proof)
  5. Recover if not verified
Evidence Requirements:
  • Screenshots at key steps
  • Action timeline
  • URLs visited
  • Files created/downloaded
  • DOM state proof
Critical Rule: “I think it worked” is NOT acceptable. Only “it worked because X is visible / test passed.”
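The plan/act/observe/verify loop can be sketched for a single step as follows (the `Step`/`Observation` shapes are hypothetical; real observation would read the DOM, screenshots, and network events):

```typescript
// Sketch of a verification gate: a step is "done" only when its success
// condition is observed, never just because the action ran.
type Observation = { url: string; domText: string };

type Step = {
  description: string;
  act: () => void;                       // click/type
  verify: (obs: Observation) => boolean; // actual proof, not "I clicked it"
};

type StepResult = { done: boolean; evidence: Observation };

function runStep(step: Step, observe: () => Observation): StepResult {
  step.act();
  const obs = observe();                 // re-read state AFTER acting
  return { done: step.verify(obs), evidence: obs }; // evidence kept either way
}
```

Keeping the evidence even on failure is what feeds the recovery layer described in the next feature.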

FEATURE 5: Unstuck Engine (Recovery & Self-Healing)

What it does: When things go wrong, the agent behaves like a human: pause, re-orient, try alternatives, backtrack. Why This Matters: This is where ALL competitors fail. Most agents get stuck and just retry or give up. Minimum Viable “Unstuck” Behaviors:
  • Loop detection — same screen state N times → try something else
  • Modal/toast/cookie banner handling — known blocker library
  • “Try next best target” logic — same label, nearby button, alternative navigation
  • Checkpoint rollback — “go back to last known good screen”
  • Fork handling — if new screen appears, classify which fork and proceed
  • Timeout diagnosis — not just waits, but figures out WHY
  • Strategy switching — DOM mode ↔ vision mode
  • Single escalation question — ONLY when truly blocked (CAPTCHA/2FA/permissions)
State Checkpoints after each milestone:
  • URL
  • Key DOM markers
  • Cookies
  • Last action
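A minimal checkpoint store for the rollback behavior might look like this; the stack-based "last known good" policy and class name are assumptions:

```typescript
// Sketch of milestone checkpoints for rollback (fields from the list above).
type Checkpoint = {
  url: string;
  domMarkers: string[];                // key DOM markers
  cookies: Record<string, string>;
  lastAction: string;
};

class CheckpointLog {
  private stack: Checkpoint[] = [];

  // Saved after each verified milestone.
  save(cp: Checkpoint): void {
    this.stack.push(cp);
  }

  // "Go back to last known good screen."
  rollback(): Checkpoint | undefined {
    return this.stack.pop();
  }
}
```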

FEATURE 6: Permission System (Teenager Policy Model)

What it does: Three concentric rings of permissions, matching how you’d supervise a hired teenager.

Ring A: Safe by Default (no irreversible actions)

  • Browse, read, summarize, copy/paste into drafts
  • Gather evidence (screenshots, notes)
  • Build a plan and show what it intends to do next

Ring B: Allowed with Constraints (day-to-day work)

  • Send messages only from approved templates or after approval
  • Fill forms but require approval before final submit
  • Download/upload files within sandbox folder

Ring C: Requires Explicit Approval (every time)

  • Anything involving money, billing, financial transfers
  • Deleting accounts/data
  • Changing DNS / security controls
  • Trading live capital
Two Operational Modes:
  • Autopilot (safe): deterministic tools only, browser read-only unless whitelisted
  • Operator (risky): can click/type in browser, requires approval for destructive actions
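The three rings could reduce to a simple authorization check like this sketch; the ring assignments are assumptions drawn from the examples above, and a real system would have far richer policies:

```typescript
// Sketch of the three-ring permission check.
type Ring = 'A' | 'B' | 'C';
type Decision = 'allow' | 'needs-approval' | 'deny';

// Hypothetical action-to-ring mapping based on the examples above.
const ACTION_RINGS: Record<string, Ring> = {
  browse: 'A',
  screenshot: 'A',
  submit_form: 'B',
  send_message: 'B',
  transfer_money: 'C',
  delete_account: 'C',
};

function authorize(action: string, approved: boolean): Decision {
  const ring = ACTION_RINGS[action];
  if (!ring) return 'deny';            // unknown actions are denied outright
  if (ring === 'A') return 'allow';    // safe by default, no irreversible actions
  // Ring B (constrained) and Ring C (explicit approval every time) both
  // gate on approval here; Ring C approval would not be cached in practice.
  return approved ? 'allow' : 'needs-approval';
}
```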

FEATURE 7: Teach Mode (Skill Learning by Demonstration)

What it does: User shows the agent a workflow once; the agent can repeat and improve it. This is the breakthrough feature. Components:
  • Recorder: captures screen + actions + DOM snapshots
  • Skill Compiler: turns recordings into structured skills:
    • Goals
    • Steps (ordered)
    • Anchors (what to look for on screen — not coordinates)
    • Variables (name, email, search terms, etc.)
    • Branch rules (“if you see X, do Y”)
Storage: Skills become procedural memory in Cognigraph — not just “knowledge” but “how to do.” Generalization:
  • Button text similarity
  • Layout heuristics
  • Recovery behaviors (modal closing, alternate navigation)
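The Skill Compiler's output could take a shape like the following sketch; field names mirror the list above, and `bindVariables` is a hypothetical helper showing how one recording becomes reusable:

```typescript
// Hypothetical structured-skill shape the Skill Compiler might emit.
type SkillStep = {
  order: number;
  action: string;    // e.g. "click", "type"
  anchor: string;    // what to look for on screen — not coordinates
  variable?: string; // which variable this step consumes, if any
};

type BranchRule = { ifSee: string; then: string }; // "if you see X, do Y"

type Skill = {
  goal: string;
  steps: SkillStep[];
  variables: Record<string, string>; // name, email, search terms, etc.
  branchRules: BranchRule[];
};

// Binding variables at run time keeps one recording reusable across inputs.
function bindVariables(skill: Skill, values: Record<string, string>): Skill {
  return { ...skill, variables: { ...skill.variables, ...values } };
}
```

Anchors instead of coordinates are what allow the generalization rules (button text similarity, layout heuristics) to re-locate a step when the page shifts.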

FEATURE 8: Four Memory Types for Embodied Agents

What it does: The agent accumulates operational history and habits, making it feel “alive.”
| Memory Type | Content | Example |
| --- | --- | --- |
| Declarative | Facts, references | “The CRM login page is at app.example.com” |
| Procedural | Skills — how to do things | “LinkedIn lead research” workflow |
| Episodic | What happened | “On Dec 18, I tried X and it failed because Y” |
| Preference & Policy | User work style | Tone, what counts as “done”, risk limits, escalation rules |

Phased Build Roadmap

Phase 1: Build the “Body + Cockpit”

  • KasmVNC desktop per Persona
  • Control channel (open URLs, type, click, manage tabs)
  • Full event logging + replay artifacts
  • Outcome: It already feels like a worker because it “lives somewhere”

Phase 2: Browser Operator That Doesn’t Lie

  • Playwright for execution
  • Verification gates (step only “done” if condition observed)
  • Evidence (screenshot/DOM proof per step)
  • Outcome: Fewer stalls, trustable status reports

Phase 3: Unstuck Engine

  • Loop detection, modal handling, alternative target logic
  • Checkpoint rollback, escalation questions only when blocked
  • Outcome: Starts resembling the “teenager” — can handle real-world messiness

Phase 4: Teach Mode

  • Recorder + skill compiler
  • Skill storage in Cognigraph (procedural + episodic)
  • Basic generalization rules + recovery behaviors
  • Outcome: Trainable across industries without custom engineering

Phase 5: Generalization + Marketplace

  • Skill templates + parameterization
  • Success-rate tracking
  • Environment profiles
  • Library/marketplace model for skills (aiConnected “engines” model)
  • Outcome: Selling “trained workers + trained skills,” not just “an agent”

Phase 6: Multi-Tool Embodied Agent

  • Telephony (LiveKit/Twilio)
  • CRM integrations
  • Browser + voice + note-taking + follow-up execution
  • Unified “work diary” and “task board”
  • Outcome: True digital worker across communication channels

Stress-Test Use Case: Trading

Trading is intentionally the hardest benchmark because it forces speed, discipline, risk boundaries, verification, and continual learning. Safe Progression:
  1. Replay + paper mode first — agent watches charts, executes strategy in simulation, logs rationale
  2. Constrained live mode (teenager rules) — fixed max position, daily loss limit, hard stops, approval for parameter changes, full audit trail
  3. UI trading as worst-case benchmark — dynamic charts, hotkeys, latency, popups, disconnects
If the agent can handle paper trading via UI reliably, it can handle simpler tasks like sales research.

What This Is (and Is Not)

| This Is | This Is NOT |
| --- | --- |
| The physical body for digital Personas | A UI redesign |
| The execution layer for human-like work | A chatbot |
| The foundation for true autonomy | A scripting system |
| A general-purpose “digital worker” runtime | A narrow automation tool |
| Environment for living digital intelligence | A full OS replacement |

Key Differentiator vs Existing Solutions

| Existing AI Agents | This System |
| --- | --- |
| Stateless | Persistent digital presence |
| Break when interfaces change | Visual continuity + unstuck engine |
| Assume success without verification | Evidence-based completion |
| Get stuck in loops | Recovery + checkpoint rollback |
| Require constant babysitting | Safe autonomy with permission rings |
| Task scripts | Living agency |

Implementation Principles

  1. Start with the body (persistent desktop), not the brain (intelligence)
  2. Verification-first: define “done” and how to check it before building execution
  3. Deterministic tools for 80% of work; browser automation only for the unavoidable 20%
  4. Recovery/unstuck engine is where real differentiation lives — invest heavily here
  5. Teach Mode is the breakthrough: procedural memory in Cognigraph makes agents trainable by demonstration
  6. Permission rings match real-world supervision — safe default, constrained work, approval-required actions
  7. Don’t start with “make it smarter” — start with “make it reliable and honest”
  8. Skills become the product: not selling “an agent” but “trained workers + trained skills”

Document 11: Chat Cleanup System

Junior Developer Breakdown

Source: 11. aiConnected OS Chat Cleanup System.md Created: 12/18/2025 | Updated: 12/18/2025

Why This Document Exists

The Problem (Founder’s Frustration): Every major AI chat platform — ChatGPT, Claude, Gemini — suffers from the same problem: you cannot easily clean up your chats. Over the course of a month, a user accumulates dozens or hundreds of conversations. Many of these are throwaway: one-off questions, random curiosity, quick lookups. But in systems with AI memory, those throwaway chats can pollute future context. The AI “remembers” things from conversations the user considers meaningless, and the user has no efficient way to purge them.
The founder’s exact complaint: “Over the course of a month, I might have just little random questions that I asked that I don’t want to be part of the permanent context for future discussions. They were just random little stupid questions or something, or just one-off conversations. I need to be able to clean that up easily.”
What This Document Solves: It defines a complete content lifecycle management system — not just “delete a chat,” but a full pipeline of browse → multi-select → delete → recover → permanently destroy, applied consistently to both chats and memories, at every level of the application (Global, Instance, Persona).
Why Anyone Should Care: This is one of those features that quietly makes the platform feel “finished” and enterprise-grade. Without it, chat sprawl becomes unmanageable within weeks. With it, users feel like they control their environment — their data, their context, their AI’s memory. No other AI platform does this well.
Cross-References:
  • Doc 4 (Folder System) established that chats can be organized into folders, moved between folders, and viewed at multiple scopes. This document builds the deletion and recovery layer on top of that organizational foundation.
  • Doc 6 (Chat Filters & Linked Conversations) established ConversationLinks between chats. This document defines what happens to those links when a chat is deleted.
  • Doc 8 (Cognition Console) defined MemoryNodes with scope (global/persona/project). This document adds the operational layer: how users bulk-manage, archive, move, and delete those memory items.
  • Doc 9 (Collaborative Personas) established that chats can have multiple Persona participants. This document handles the edge cases: what happens when you move or delete a multi-persona chat, and what happens when the Persona no longer exists at restore time.
  • Doc 14 (Build Plan) lists this as Phase 5 of the build — the “power user advantage” that differentiates aiConnected from competitors.

Important Context: What Had Already Been Designed vs. What Was Missing

Before this document was created, the founder asked: “Have I already included a way to manage old chats?” The answer was: partially — but not as a complete system. What DID already exist in prior documents:
  • Multi-chat visibility: users could view chats globally, within an Instance, within a Persona, and within Folders
  • Moving chats between folders (Doc 4)
  • Multi-select for messages within a chat (pin, extract, move to whiteboard — Docs 6 and 7)
  • The implication of multi-select for chats (moving into folders), but never formalized as a system-wide pattern
What was COMPLETELY MISSING:
  1. A “Recently Deleted” state (soft-delete)
  2. A retention window (how long before auto-purge)
  3. Restore behavior (what happens when you recover a deleted chat)
  4. Cross-instance restoration logic (what if the original Instance/Folder/Persona no longer exists)
  5. What happens to linked/referenced/derived chats when their source is deleted
  6. A permanent purge action (irreversible hard deletion)
  7. The same lifecycle applied to memories (not just chats)
The Key Distinction: The prior documents had built a conceptual navigation and organization model (where things live, how they’re grouped). This document builds the content lifecycle management system (how things are created, archived, deleted, recovered, and permanently destroyed). Those are related but fundamentally different concerns. This document fills that gap.

FEATURE 1: ChatThread Data Model (Extended for Lifecycle Management)

What it does: Extends the existing ChatThread object with fields that track its deletion state, who deleted it, when, and where it should be restored to if recovered. Why it matters: Without these fields, deletion is binary — the chat either exists or it doesn’t. With them, the system supports soft-delete, timed retention, and smart restoration. Intended purpose: Every chat in the system carries enough metadata to be deleted safely (removed from active use and memory indexing), held in a recovery state for a configurable period, restored to its exact original location, or permanently destroyed.
type ChatThread = {
  // === Existing fields ===
  chat_id: string;
  instance_id: string | null;        // nullable if global chat
  persona_ids: string[];              // one or many participants
  folder_id: string | null;          // nullable (can be unfiled)
  title: string;
  created_at: string;
  last_activity_at: string;
  pinned: boolean;
  linked_chat_ids: string[];         // from ConversationLinks (Doc 6)
  referenced_chat_ids: string[];     // chats that reference or are referenced by this one
  archived: boolean;                 // optional archival state

  // === NEW: Deletion lifecycle fields ===
  deleted_at: string | null;         // null = active; timestamp = soft-deleted
  deleted_by_user_id: string | null; // who performed the deletion (null while active)
  restore_to: {                      // snapshot of where this chat lived before deletion
    instance_id: string | null;
    persona_ids: string[];
    folder_id: string | null;
    original_scope_context: string;  // additional context for smart restoration
  } | null;                          // null while the chat is active
  delete_reason: 'user' | 'automation' | 'policy' | null;  // who/what triggered deletion
};
How a developer should think about this: The restore_to field is a snapshot taken at the moment of deletion. It captures the chat’s original home — which Instance, which Personas, which Folder. This is critical because between the time a chat is deleted and the time a user decides to restore it, the original Instance might have been renamed, the Folder might have been deleted, or the Persona might have been deactivated. The snapshot gives the restoration logic a “last known good location” to work with. The delete_reason field tracks whether the deletion was manual (user clicked delete), automated (a cleanup cron job or policy rule), or policy-driven (e.g., an enterprise admin enforcing retention rules). This matters for audit logging and for distinguishing “I chose to delete this” from “the system cleaned this up.”
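The snapshot-at-deletion idea can be sketched as follows. This is an illustrative TypeScript sketch, not the production implementation; `softDelete` and the trimmed-down `Chat` type are hypothetical names derived from the ChatThread model above.

```typescript
// Hypothetical sketch: capture the restore_to snapshot at the moment of deletion.
type RestoreTo = {
  instance_id: string | null;
  persona_ids: string[];
  folder_id: string | null;
  original_scope_context: string;
};

type Chat = {
  chat_id: string;
  instance_id: string | null;
  persona_ids: string[];
  folder_id: string | null;
  deleted_at: string | null;
  deleted_by_user_id: string | null;
  restore_to: RestoreTo | null;
  delete_reason: 'user' | 'automation' | 'policy' | null;
};

function softDelete(chat: Chat, userId: string, now = new Date()): Chat {
  return {
    ...chat,
    deleted_at: now.toISOString(),
    deleted_by_user_id: userId,
    // Snapshot the chat's current home so restoration has a "last known good
    // location", even if the Instance/Folder/Persona changes in the meantime.
    restore_to: {
      instance_id: chat.instance_id,
      persona_ids: [...chat.persona_ids],
      folder_id: chat.folder_id,
      original_scope_context: '',
    },
    delete_reason: 'user',
  };
}
```

The key point: the snapshot copies the chat's location fields at delete time rather than reading them at restore time.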

FEATURE 2: Three Chat States (The Deletion Lifecycle)

What it does: Defines exactly three states a chat can be in, creating a clear lifecycle that mirrors how file systems work (think: Trash/Recycle Bin). The three states:
| State | What it means | Visible in active lists? | Visible in Recently Deleted? | Affects memory/retrieval? | Reversible? |
|---|---|---|---|---|---|
| Active | Normal, working chat | Yes | No | Yes — contributes to memory indexing | N/A |
| Recently Deleted | Soft-deleted, in recovery holding area | No | Yes | No — immediately removed from memory indexing and search results | Yes — can be restored |
| Permanently Deleted | Hard-deleted, irreversible | No | No | No — completely gone | No — cannot be recovered |
Why this matters: The critical insight is what happens to memory indexing when a chat is soft-deleted. The founder’s core complaint was that random one-off chats pollute future AI context. So when a user deletes a chat, it must be removed from memory/retrieval pipelines immediately — not just hidden from the chat list. This is what makes cleanup actually mean something to the user. If deleting a chat only hid it from the UI but the AI still “remembered” its contents, the whole feature would be useless. How it should be built:
  • deleted_at IS NULL → Active
  • deleted_at IS NOT NULL AND permanently_deleted = false → Recently Deleted
  • permanently_deleted = true → Permanently Deleted (or simply hard-deleted from the database)
  • Every query that feeds the memory/retrieval pipeline MUST include WHERE deleted_at IS NULL as a filter condition. This is non-negotiable.
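The derivation rules above can be expressed as a small pure function. This is a sketch, assuming `permanently_deleted` is kept as a tombstone flag (in practice, permanently deleted rows may simply be hard-deleted and never seen by this code); `chatState` and `isRetrievalEligible` are illustrative names.

```typescript
type LifecycleState = 'active' | 'recently_deleted' | 'permanently_deleted';

// Derive lifecycle state from the two fields described above.
function chatState(row: { deleted_at: string | null; permanently_deleted: boolean }): LifecycleState {
  if (row.permanently_deleted) return 'permanently_deleted';
  return row.deleted_at === null ? 'active' : 'recently_deleted';
}

// The non-negotiable rule: only Active chats may feed memory/retrieval.
// This mirrors the mandatory `WHERE deleted_at IS NULL` filter.
function isRetrievalEligible(row: { deleted_at: string | null; permanently_deleted: boolean }): boolean {
  return chatState(row) === 'active';
}
```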

FEATURE 3: The Global Rule — Same Behavior Everywhere

What it does: Establishes that the cleanup system works identically at every scope level. There is no special behavior at the Global level that doesn’t exist at the Instance level or the Persona level. The user always gets the same tools. Why this matters: This is a UX consistency principle. Users should never wonder “can I do this here?” If they can multi-select and delete at the Global level, they can do the same thing inside an Instance or inside a Persona view. The only thing that changes is the default filter context — which chats are shown. The universal toolkit available at every scope:
  • A list of chats (filtered by scope)
  • Multi-select capability
  • Bulk actions: Delete, Restore, Permanent Delete
  • Search + sort + filter
  • “Recently Deleted” as a dedicated view within that scope
How a developer should implement this: Build the Chat Manager as a single reusable component that accepts a scope parameter. The component renders identically regardless of scope — the only difference is the API query:
  • Global: GET /chats?user_id={userId}&status={active|deleted}
  • Instance: GET /instances/{instanceId}/chats?status={active|deleted}
  • Persona: GET /personas/{personaId}/chats?status={active|deleted}
The UI component, multi-select logic, bulk action bar, filters, and Recently Deleted view are all the same component. Do NOT build three separate UIs.
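The "one component, scoped query" idea can be sketched as a scope-to-URL mapping. The endpoint shapes follow the examples above but are assumptions, not a finalized API contract; the discriminated `Scope` union is an illustrative modeling choice.

```typescript
// One scope parameter drives the query; the UI component is identical everywhere.
type Scope =
  | { kind: 'global'; userId: string }
  | { kind: 'instance'; instanceId: string }
  | { kind: 'persona'; personaId: string };

function chatListUrl(scope: Scope, status: 'active' | 'deleted'): string {
  switch (scope.kind) {
    case 'global':
      return `/chats?user_id=${scope.userId}&status=${status}`;
    case 'instance':
      return `/instances/${scope.instanceId}/chats?status=${status}`;
    case 'persona':
      return `/personas/${scope.personaId}/chats?status=${status}`;
  }
}
```

A ChatManager component would take `scope` as a prop and call `chatListUrl(scope, status)`; nothing else about the component varies by scope.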

FEATURE 4: Three Scope Levels (Global, Instance, Persona Chat Managers)

What it does: Defines the three vantage points from which a user can browse, manage, and clean up their chats.

Scope 1: Global Chat Manager

What it is: A top-level screen that shows every chat the user has across all Instances, all Personas, all Folders. This is the “bird’s eye view” — the place you go when you want to do a deep cleanup of everything at once. Header Controls:
  • Search bar (searches across all chats)
  • Filters:
    • Instance: All / pick specific Instance
    • Persona: All / pick specific Persona
    • Folder: All / pick specific Folder
    • Status: Active / Recently Deleted
    • Type: Solo persona / Multi-persona
  • Sort options:
    • Last activity (default — most recently active chats first)
    • Created date
    • Title (alphabetical)
    • Instance name (group by Instance)
Row Display (each chat in the list shows):
  • Chat title
  • Instance badge (which Instance it belongs to, color-coded or icon)
  • Persona badge(s) (which Persona(s) participate)
  • Last activity timestamp
  • Quick actions: open chat, context menu (right-click), checkbox for selection
Bulk Actions (appear when 1+ chats selected):
  • Delete (soft-delete → Recently Deleted)
  • Move to folder (Active chats only)
  • Move to Instance
  • Export (optional, future)
  • Archive (optional, future)
Why this scope exists: The founder specifically requested the ability to “see all the chats inside of the global chat interface, select multiple, hit delete, and manage it that way.” This is that screen. It’s where a user goes to clean up a month’s worth of accumulated chats across their entire account in one session.

Scope 2: Instance-Level Chat Manager

What it is: Inside an Instance dashboard, the user sees all chats within that Instance, including chats involving any Persona(s) assigned to that Instance. How it differs from Global: The UI is identical, but the Instance filter is locked to the current Instance. The user can still filter by Persona (showing only chats with a specific Persona within this Instance), by Folder, by status, etc. Why this scope exists: The founder wanted users to be able to “go to individual instances, that dashboard, and perform the same thing — delete all of the conversations within an instance, even if it’s across multiple personas in that instance.”

Scope 3: Persona-Level Chat Manager

What it is: Inside a Persona view, the user sees all chats that include that Persona — both solo chats (where this Persona is the only AI participant) and multi-persona chats (where this Persona was one of several). How it differs from Global: The Persona filter is locked to this Persona. The Instance filter may still be available if Personas can span multiple Instances, or it may also be locked if the Persona is Instance-bound. Why this scope exists: The founder wanted users to be able to “delete or clean up conversations with a particular persona.” If a user has been chatting with their Legal Persona about random things for weeks and wants to clean up just those conversations, they can do so from this view without affecting any other Persona’s chats.

FEATURE 5: Multi-Select Behavior (The Interaction Pattern)

What it does: Defines exactly how users select multiple chats for bulk operations. This is a system-wide interaction pattern, not specific to deletion — it also applies to bulk move, bulk archive, and bulk export. Desktop interactions:
  • Checkbox per row: each chat row has a checkbox on the left edge. Clicking it toggles selection for that chat.
  • “Select all” checkbox: in the header row, selects all chats in the current filtered view (not all chats everywhere — only those matching the active filters). If the user has filtered to “Short chats, older than 30 days,” Select All selects only those.
  • Shift-click range selection: click one checkbox, then Shift+click another — all rows between them are selected. Standard desktop file manager behavior.
Mobile interactions:
  • Long-press to enter multi-select mode: long-pressing any chat row activates multi-select mode, where tapping additional rows toggles their selection.
  • This mirrors how mobile file managers and photo galleries work (Google Photos, iOS Photos).
Why this matters: Multi-select is the foundation for every bulk operation in the system. If it feels clunky, janky, or unreliable, users won’t use cleanup features at all. It needs to feel as natural as selecting files in Finder/Explorer.
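The Shift-click range behavior described above can be sketched as a pure function over the current filtered row order. Names are illustrative; the important detail is that the range is computed over visible (filtered/sorted) rows, not all chats.

```typescript
// Select every row between the anchor (last plain click) and the Shift-clicked
// row, inclusive, preserving any rows already selected.
function applyRangeSelect(
  visibleIds: string[],     // rows in their current filtered/sorted order
  selected: Set<string>,
  anchorId: string,         // last row clicked without Shift
  clickedId: string         // row clicked with Shift held
): Set<string> {
  const a = visibleIds.indexOf(anchorId);
  const b = visibleIds.indexOf(clickedId);
  if (a === -1 || b === -1) return selected; // anchor scrolled out of the filter
  const [start, end] = a < b ? [a, b] : [b, a];
  const next = new Set(selected);
  for (let i = start; i <= end; i++) next.add(visibleIds[i]);
  return next;
}
```

"Select all" falls out of the same principle: it selects `visibleIds`, never the full unfiltered chat set.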

FEATURE 6: Bulk Action Bar (Sticky Bottom Bar)

What it does: When one or more chats are selected, a persistent action bar appears at the bottom of the screen showing available bulk operations. Bar contents:
  • Selected count: “12 chats selected”
  • Available actions (vary by context):
    • In Active view: Delete, Move, Archive, Export
    • In Recently Deleted view: Restore, Delete Permanently
  • Cancel selection: button to deselect all and dismiss the bar
Why it should be a sticky bottom bar (not a top toolbar or modal):
  • The user’s attention is on the chat list in the center of the screen
  • A bottom bar stays visible as they scroll through and select chats
  • It doesn’t obscure the list content
  • It provides a persistent reminder of how many items are selected
  • It’s immediately accessible without scrolling back to a toolbar
How to build it: This is a standard floating action bar pattern. It should animate in from the bottom when selectedCount > 0 and animate out when selectedCount === 0. It should be visually prominent (contrasting background color) and have large, clearly labeled action buttons.

FEATURE 7: Smart Cleanup Filters

What it does: Provides pre-built filter presets specifically designed to make “clean up a month of random chats” effortless. These are the key to making cleanup feel fast instead of tedious. Why this matters: Without smart filters, the user would have to manually scroll through hundreds of chats and decide one by one which to delete. With smart filters, they can immediately surface the most likely candidates for deletion, select all, and clean up in seconds. Quick Filters (pre-built, one-click):
| Filter | What it shows | Why it’s useful |
|---|---|---|
| “Short chats” | Chats with fewer than ~6 messages | These are almost always one-off questions or quick lookups — the exact “stupid little questions” the founder complained about |
| “One-off chats” | Chats with no follow-up activity after 24 hours | If the user asked something and never came back, it’s probably disposable |
| “No pins” | Chats where the user never pinned a single message | Pinned messages indicate importance; absence of pins suggests low value |
| “No references / no links” | Chats that aren’t linked to or referenced by any other chat | Isolated chats with no connections to other work are safer to delete |
| “Older than” | Configurable: 7 / 30 / 90 days since last activity | Time-based cleanup for stale conversations |
Search:
  • Title search (chat titles)
  • Content search (optional — search within message text)
  • Tag search (if chats have tags)
Sort options:
  • Last activity (default — shows most recent first for review)
  • Oldest first (for cleanup sessions — start with the oldest, most likely stale)
How these combine: Filters can be combined. A user might select “Short chats” + “Older than 30 days” + sort by “Oldest first” to see the most obviously disposable chats at the top. Then Select All → Delete. A month of cleanup done in 10 seconds.
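Treating each quick filter as a predicate makes the AND-combination trivial. A minimal sketch, with the thresholds (6 messages, 30 days) taken from the examples above; the field names on `ChatSummary` are assumptions based on the row display described earlier.

```typescript
type ChatSummary = {
  message_count: number;
  last_activity_at: string;      // ISO timestamp
  pinned_message_count: number;
  linked_chat_ids: string[];
};

type Predicate = (c: ChatSummary, now: Date) => boolean;

// Each quick filter is one predicate.
const shortChats: Predicate = (c) => c.message_count < 6;
const noPins: Predicate = (c) => c.pinned_message_count === 0;
const olderThanDays = (days: number): Predicate => (c, now) =>
  now.getTime() - new Date(c.last_activity_at).getTime() > days * 86_400_000;

// Combined filters are simply AND-ed.
function combine(...preds: Predicate[]): Predicate {
  return (c, now) => preds.every((p) => p(c, now));
}
```

“Short chats” + “Older than 30 days” is then `combine(shortChats, olderThanDays(30))`, applied to the visible list before Select All.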

FEATURE 8: Soft Deletion Workflow (Delete → Recently Deleted)

What it does: When a user hits Delete (for a single chat or a bulk selection), the chats are not destroyed. They are moved to a “Recently Deleted” holding state — removed from active views, removed from search, removed from memory indexing, but still recoverable. Step-by-step flow:
  1. User selects chats and clicks Delete (from the bulk action bar or a single chat’s context menu)
  2. System immediately performs:
    • Sets deleted_at = now() on each selected chat
    • Captures restore_to snapshot: { instance_id, persona_ids, folder_id, original_scope_context }
    • Sets delete_reason = 'user'
  3. Chats disappear from Active lists immediately — the user sees them vanish from the list
  4. Chats appear in Recently Deleted view — accessible via a “Recently Deleted” tab/filter within the same scope
  5. Critical: chats are IMMEDIATELY removed from:
    • Normal chat browsing
    • Search results (unless user explicitly switches to Recently Deleted view)
    • Memory indexing / retrieval pipelines — the AI will no longer use these chats as context
Recently Deleted UX:
  • The Recently Deleted view uses the same list UI as the Active view
  • Search and filters still work within Recently Deleted
  • Each row shows:
    • Chat title and original Instance/Persona/Folder context
    • “Deleted X days ago”
    • “Will be permanently deleted in Y days” (countdown to auto-purge)
  • Users can select chats here and choose Restore or Delete Permanently
Retention Window:
  • Default: 30 days before auto-purge
  • Configurable per user or per organization (enterprise admins can set retention policy)
  • After the retention window expires, chats are automatically permanently deleted (auto-purge)
Why the retention window matters: It provides a safety net. A user who aggressively cleans up their chats one afternoon and then realizes a week later that they deleted something important can still recover it. But the system doesn’t hold onto deleted data forever — that would defeat the purpose of cleanup.
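The “Will be permanently deleted in Y days” countdown shown in the Recently Deleted rows is simple date arithmetic. A sketch, assuming the 30-day default window; `daysUntilPurge` is an illustrative name.

```typescript
// Days remaining before auto-purge, rounded up so "0" means purge is due.
function daysUntilPurge(
  deletedAt: string,        // ISO timestamp from deleted_at
  retentionDays = 30,       // default window; configurable per user/org
  now = new Date()
): number {
  const purgeAt = new Date(deletedAt).getTime() + retentionDays * 86_400_000;
  const msLeft = purgeAt - now.getTime();
  return Math.max(0, Math.ceil(msLeft / 86_400_000));
}
```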

FEATURE 9: Restore Workflow (Recovery from Recently Deleted)

What it does: When a user restores a chat from Recently Deleted, the system attempts to put it back exactly where it was before deletion — same Instance, same Persona associations, same Folder. Restore behavior:
  1. User selects chats in Recently Deleted view and clicks Restore
  2. System clears deleted_at (sets it back to null)
  3. System reads the restore_to snapshot and places the chat back in:
    • Its original Instance
    • Its original Persona associations
    • Its original Folder
Edge Cases (these are critical to handle — they WILL occur in production):
| Scenario | What happens | User sees |
|---|---|---|
| Folder was deleted since the chat was soft-deleted | Chat restores to the Instance root (“Unfiled”) | Small notice: “Original folder no longer exists. Chat restored to root.” |
| Instance was deleted since the chat was soft-deleted | Chat restores into a special “Recovered” holding container, OR the system prompts the user to pick a new Instance | Modal: “The original Instance no longer exists. Where would you like to restore this chat?” with Instance picker |
| Persona no longer exists (deactivated/deleted) | Chat restores successfully, but Persona participant is marked as missing | In the chat: “Some participants are unavailable” label next to the missing Persona’s messages |
| Multiple edge cases combine (Folder AND Persona deleted) | System handles each independently — Folder logic fires, Persona logic fires | User gets both notices |
Memory re-indexing on restore: When a chat is restored, the system optionally prompts: “Restore and include in memory?” This gives the user a choice — they might want the chat back for reference but NOT want it influencing future AI context. If they choose “Restore without memory,” the chat returns to Active state but exclude_from_memory = true is set.
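The restore edge cases can be sketched as a pure resolution step that turns the `restore_to` snapshot into a plan. The existence checks (`instanceExists`, `folderExists`) are assumed lookups, and `planRestore` is a hypothetical name; the missing-Persona case is omitted because it does not change where the chat lands.

```typescript
type RestorePlan =
  | { kind: 'restored'; instanceId: string | null; folderId: string | null; notices: string[] }
  | { kind: 'needs_instance_pick' };  // original Instance is gone → prompt the user

function planRestore(
  snapshot: { instance_id: string | null; folder_id: string | null },
  instanceExists: (id: string) => boolean,
  folderExists: (id: string) => boolean
): RestorePlan {
  // Missing Instance: cannot place the chat automatically; ask the user.
  if (snapshot.instance_id !== null && !instanceExists(snapshot.instance_id)) {
    return { kind: 'needs_instance_pick' };
  }
  const notices: string[] = [];
  let folderId = snapshot.folder_id;
  // Missing Folder: fall back to the Instance root ("Unfiled") with a notice.
  if (folderId !== null && !folderExists(folderId)) {
    folderId = null;
    notices.push('Original folder no longer exists. Chat restored to root.');
  }
  return { kind: 'restored', instanceId: snapshot.instance_id, folderId, notices };
}
```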

FEATURE 10: Permanent Deletion Workflow (Irreversible Destruction)

What it does: Provides two paths to permanently destroy a chat: manual purge by the user, or automatic purge after the retention window expires. Path 1: Manual permanent deletion
  • User navigates to Recently Deleted view
  • Selects chats
  • Clicks “Delete Permanently”
  • System shows confirmation dialog:
    • For single chat: “Permanently delete ‘[Chat Title]’? This cannot be undone.”
    • For bulk: “Permanently delete 17 chats? This cannot be undone.”
  • On confirm: hard-delete from database. Chat ID may be retained as a tombstone for link integrity (see Feature 12), but all content, messages, and metadata are destroyed.
Path 2: Automatic purge (auto-purge)
  • A background job runs on a schedule (daily recommended)
  • It finds all chats where deleted_at + retention_window < now()
  • It permanently deletes them
  • This is configurable: the retention window defaults to 30 days, but enterprise admins can set it to 7, 14, 60, 90 days, or disable auto-purge entirely
Hard delete confirmation design principles:
  • The confirmation dialog MUST be explicit and clearly state irreversibility
  • For bulk operations, it MUST show the count (“17 chats”)
  • There should be no “Don’t show this again” option — permanent deletion always requires confirmation
  • The button should be visually distinct (red, destructive styling) and NOT positioned where a user might accidentally click it

FEATURE 11: Memory and Permanent Context Control

What it does: Ensures that deleting a chat actually means something to the AI’s memory — not just hiding it from the UI. Why this is the most important feature in this document: The founder’s entire motivation for the cleanup system was: “I don’t want [random chats] to be part of the permanent context for future discussions.” If deletion only hides chats from the list view but the AI still draws on them for memory and context, the feature is useless. This feature is what makes the cleanup system real. Behavior:
  • Soft deletion immediately removes the chat from:
    1. Normal chat browsing
    2. Search results (unless in Recently Deleted view)
    3. All memory indexing and retrieval pipelines — the AI cannot use content from deleted chats to inform future responses
  • Restoration re-enables indexing, but optionally with a grace prompt:
    • “Restore and include in memory?” → Yes (full restore) or No (restore chat but exclude from memory)
The Chat-Memory Relationship: Chats produce memories (through Cognigraph’s extraction pipeline — see Doc 8). When a chat is deleted, the question is: what happens to the memories that were derived from that chat? Design decision: Deleting a chat does NOT automatically delete its derived memories. This is intentional — a memory might have been derived from multiple chats, and deleting one source chat shouldn’t destroy a valid memory. However, the user is given an explicit choice: Chat Delete confirmation includes:
  • A checkbox: “Also delete memories created from these chats” (default OFF)
  • A preference toggle: “Always do this” (for users who want aggressive cleanup)
Why default OFF: The founder wanted to avoid accidental destruction. Memories are valuable — more valuable than the raw chats they came from. A user might want to delete the messy brainstorming chat but keep the distilled memory of the key decision that emerged from it. Default OFF protects that.
Deleting a memory directly: When a user explicitly deletes a memory (from the Memory Manager, not via chat deletion), it is removed from retrieval immediately, regardless of which chat(s) it was derived from. The source chats are not affected.

FEATURE 12: Link and Reference Integrity (Conversation Graph Handling)

What it does: Defines what happens to the conversation graph (ConversationLinks from Doc 6) when a chat is deleted. Why this matters: Chats don’t exist in isolation. They reference each other, branch from each other, and link to each other. If you delete a chat that is referenced by other chats, you can’t just silently leave broken references — the user needs to understand what happened, and the system needs to handle it gracefully. Scenario 1: A chat is deleted but another chat references it
  • The referencing chat keeps the reference object in its data
  • But the reference renders as: “Reference unavailable (deleted)”
  • A one-click “Restore referenced chat” button appears (if the user has permission to access Recently Deleted)
  • This is similar to how a broken link works on the web, but with a recovery option
Scenario 2: A linked chat (branched-from / copied-from) is deleted
  • The ConversationLink metadata is preserved (not destroyed)
  • The deleted chat is hidden from navigation
  • If the deleted chat is later restored, the link graph is automatically reactivated — all existing links reconnect
Implementation note: When performing a hard delete (permanent), consider keeping a minimal tombstone record (chat_id, deleted_at, was_linked_to) so that referencing chats can display “This referenced chat has been permanently deleted” instead of showing a mysterious broken reference with no explanation.

FEATURE 13: Bulk Move for Chats (Cross-Scope Reassignment)

What it does: Lets users select multiple chats at once and move them to a different Instance, a different Folder, or reassign them to a different Persona. This is separate from deletion — it’s about reorganization as conversations evolve. Why this matters: The founder noted: “As some conversations evolve, maybe they belong in a different place, but I don’t want to have to do that one at a time.” In a system with Instances, Personas, and Folders, conversations frequently outgrow their original container. A chat that started as a quick question in General Chat might become a serious project discussion that belongs in a dedicated Instance. What “Move” means (three distinct types):

Move Type 1: Instance Reassignment

  • Changes chat.instance_id from Instance A → Instance B
  • Used when a conversation belongs under a different project/workspace
  • The chat appears in the destination Instance’s chat list immediately
  • The chat_id does NOT change — this preserves all existing links, references, and history

Move Type 2: Persona Participation Changes

Two separate sub-operations (don’t combine under one vague “move”):
  • Reassign primary Persona: For solo chats that are “owned by” one Persona — changes which Persona the chat is associated with
  • Edit participants: For multi-persona chats — opens a participant editor rather than a simple “move”

Move Type 3: Folder Relocation

  • Changes chat.folder_id (within the same Instance)
  • Used for organizational cleanup without changing Instance or Persona association
  • Can also set folder_id = null to move a chat back to the Instance root (“Unfiled”)
UX: One “Move” Button → Smart Destination Picker
When the user selects multiple chats and clicks “Move” from the bulk action bar, a modal/side panel opens with a three-step flow:
Step 1: Destination Type
  • Move to Instance (reassign Instance)
  • Move to Folder (reorganize within current Instance)
  • Move to Persona (reassign ownership or participants)
Step 2: Pick Destination
  • Searchable picker showing all valid destinations
  • Shows destinations the user has access to
  • Shows warning badges if something is incompatible (e.g., “This Instance doesn’t have the same Personas”)
Step 3: Choose Behavior (important defaults for edge cases)
When moving to a new Instance:
  • Folder mapping:
    • Default: “Keep same folder name if it exists in destination, otherwise move to Unfiled”
  • Persona mapping:
    • Option A (default, safest): “Keep participants; if a Persona doesn’t exist in the destination Instance, keep the chat but mark that Persona as missing”
    • Option B: “Replace participants with selected Persona(s)”
When moving to a Persona:
  • If the chat is single-persona: change the owner
  • If multi-persona: open “Edit participants” instead (because “moving” a multi-persona chat to one Persona doesn’t make semantic sense)
Confirm button: “Move 12 chats” → executes the move.
Rules and Edge Cases for Cross-Instance Moves:
| Concern | Rule |
|---|---|
| Chat identity | chat_id unchanged — preserves links, references, history |
| Instance container | instance_id updates; chat appears in destination immediately |
| Persona participants | If destination Instance has those Personas, keep them. If not, chat still moves but invalid participants become “unresolved participants” with a label |
| References and links | Kept intact. If a referenced chat is in an Instance the user can’t access, show “Reference unavailable” |
| Permissions | Move only allowed if user has permissions for both source AND destination. If not, destination is disabled in the picker with explanation |
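The default folder-mapping rule for cross-Instance moves (keep a same-named folder if the destination has one, otherwise Unfiled) and the chat-identity rule can be sketched together. `mapFolderOnMove`, `moveChatToInstance`, and the `findFolderByName` lookup are illustrative names, not an existing API.

```typescript
// Default folder mapping: same-named folder in the destination if one exists,
// otherwise null (= Unfiled / Instance root).
function mapFolderOnMove(
  sourceFolderName: string | null,
  findFolderByName: (instanceId: string, name: string) => string | null,
  destInstanceId: string
): string | null {
  if (sourceFolderName === null) return null; // was unfiled; stays unfiled
  return findFolderByName(destInstanceId, sourceFolderName);
}

function moveChatToInstance(
  chat: { chat_id: string; instance_id: string | null; folder_id: string | null },
  destInstanceId: string,
  destFolderId: string | null
) {
  // chat_id never changes — links, references, and history stay intact.
  return { ...chat, instance_id: destInstanceId, folder_id: destFolderId };
}
```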

FEATURE 14: Bulk Move and Management for Memories

What it does: Applies the same lifecycle management system (browse, multi-select, archive, delete, recover, move) to AI memories — not just chats. Why this matters: The founder specifically requested: “The same should apply to memories. I should be able to select multiple memories all at once and hit the delete button, and archive them or just delete them outright, or they go to a recently deleted folder in case I feel like I need to recover them. I need to be able to do that at all levels.” Memories are structured artifacts extracted from chats by the Cognigraph system (see Doc 8). They represent distilled knowledge — facts, decisions, preferences, rules. But just like chats, memories can become stale, incorrect, or unwanted. Users need the same degree of control over their memories as they have over their chats.

Memory Data Model (Extended for Lifecycle)

type MemoryItem = {
  memory_id: string;
  scope_type: 'global' | 'instance' | 'persona' | 'chat';
  scope_id: string;                  // the container ID (Instance ID, Persona ID, etc.)
  category: string;                  // Cognigraph hierarchy level 1
  concept: string;                   // Cognigraph hierarchy level 2
  topic: string;                     // Cognigraph hierarchy level 3
  content: string;                   // the distilled memory text
  created_at: string;
  last_used_at: string;              // when the AI last retrieved this memory
  source_chat_ids: string[];         // which chat(s) this memory was derived from
  
  // === Lifecycle fields ===
  status: 'active' | 'archived' | 'deleted';
  deleted_at: string | null;
  restore_to: {                      // snapshot for recovery
    scope_type: string;
    scope_id: string;
  };
};

Memory States (four states, not three)

Memories have one additional state compared to chats: Archived.
| State | What it means | Used for retrieval? | Visible in Memory Manager? | Recoverable? |
|---|---|---|---|---|
| Active | Normal, working memory | Yes — the AI uses it | Yes | N/A |
| Archived | Preserved but dormant | No — excluded from retrieval unless user explicitly enables | Yes (with “Archived” filter) | Yes — can be reactivated |
| Recently Deleted | Soft-deleted | No | Yes (in Recently Deleted view) | Yes — can be restored |
| Permanently Deleted | Destroyed | No | No | No |
Why Archived exists for memories but not chats: A memory might be correct and valuable but temporarily irrelevant. For example, a user’s old company’s org chart — it’s true information, but the user doesn’t want the AI using it in current conversations. Archiving lets the user set it aside without destroying it. Chats are either active or deleted; there’s less need for an intermediate “dormant” state for conversations.
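The four-state table implies a simple retrieval-eligibility rule. A minimal sketch; the explicit `includeArchived` opt-in parameter is an assumption about how "unless user explicitly enables" would be surfaced.

```typescript
type MemoryStatus = 'active' | 'archived' | 'deleted';

// Only Active memories are retrievable by default; Archived memories are
// dormant unless the user explicitly opts them in; deleted memories never are.
function memoryRetrievalEligible(
  status: MemoryStatus,
  includeArchived = false   // user-controlled override, assumed
): boolean {
  if (status === 'active') return true;
  if (status === 'archived') return includeArchived;
  return false;
}
```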

Memory Manager Screens (Same Three Scopes)

Just like the Chat Manager, the Memory Manager exists at three levels:
  1. Global Memory Manager — all memories across all scopes
  2. Instance Memory Manager — memories scoped to a specific Instance
  3. Persona Memory Manager — memories scoped to a specific Persona
Each supports the same operations:
Search: By content text, by category/concept/topic (Cognigraph hierarchy), by source chat title
Filters:
  • Last used (when did the AI last retrieve this memory?)
  • Created date
  • Source chat (which conversation generated this memory?)
  • “Never used” (memories that were created but never actually retrieved by the AI — likely candidates for cleanup)
  • “Low confidence” (if the system tracks confidence scores on memories)
  • Scope: Global / Instance / Persona
Multi-Select Bulk Actions:
  • Archive — move to dormant state (excluded from retrieval, but preserved)
  • Delete — soft-delete → Recently Deleted (same 30-day retention window as chats)
  • Delete Permanently — only available inside Recently Deleted view
  • Move — scope reassignment (see below)

What “Move Memory” Means

Moving a memory changes where it lives and who can use it. It changes scope_type and scope_id without changing the content. Types of memory moves:
  • Global → Instance (narrow scope: only this Instance’s Personas can use it)
  • Instance → Persona (narrow further: only this specific Persona can use it)
  • Persona → Instance (broaden: all Personas in this Instance can use it)
  • Persona → Global (broadest: all Personas everywhere can use it — only if user explicitly chooses)
Example: Move a memory from Persona A (Legal Advisor) to Instance X (the “Startup” project) → now ALL Personas in the Startup Instance can use that memory, not just the Legal Advisor. Default rule: Moving a memory does NOT change its content. It only changes scope_type and scope_id, and updates retrieval eligibility automatically.

FEATURE 15: The Relationship Between Chat Deletion and Memory Deletion

What it does: Defines the precise behavioral rules for what happens to memories when their source chat is deleted, and vice versa. This is the most nuanced design decision in the document. Getting it wrong means either (a) users accidentally destroy valuable memories by casually deleting chats, or (b) users clean up chats but the AI still uses those chats’ memories, defeating the purpose. Rule 1: Deleting a chat removes it from browsing AND disables associated derived memory items from retrieval.
  • When a chat is soft-deleted, any MemoryItems whose source_chat_ids includes that chat are flagged for “source deleted”
  • Those memories are NOT automatically deleted, but they ARE excluded from active retrieval by default
  • This means: deleting a chat effectively removes its influence on future AI context, without destroying the memories themselves
Rule 2: The user can OPTIONALLY choose to also delete derived memories.
  • Chat Delete confirmation dialog includes a checkbox: “Also delete memories created from these chats”
  • Default: OFF (protective — don’t destroy memories by accident)
  • User can enable “Always do this” as a global preference toggle
Rule 3: Deleting a memory directly removes it from retrieval immediately.
  • This is independent of chat state
  • Deleting a memory does NOT affect the source chat(s) in any way
Why this three-rule system works:
  • Rule 1 addresses the founder’s core complaint (random chats polluting context) without data loss
  • Rule 2 gives power users full control for aggressive cleanup
  • Rule 3 keeps memory management and chat management as independent operations that don’t create unexpected side effects
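The three rules combine into a single retrieval check. This sketch interprets Rule 1 as flagging a memory when any of its source chats has been deleted, per the "source deleted" flag described above; `memoryInContext` and the per-memory `sourceDeletedOverride` opt-out are illustrative assumptions.

```typescript
function memoryInContext(
  memory: { status: 'active' | 'archived' | 'deleted'; source_chat_ids: string[] },
  chatIsActive: (chatId: string) => boolean,
  sourceDeletedOverride = false   // assumed per-memory "keep despite deleted source" flag
): boolean {
  // Rule 3 + lifecycle: archived or deleted memories are never retrieved here.
  if (memory.status !== 'active') return false;
  if (memory.source_chat_ids.length === 0) return true; // nothing to check
  // Rule 1: a deleted source chat excludes the memory from retrieval by
  // default, but the memory itself is not destroyed.
  const anySourceDeleted = memory.source_chat_ids.some((id) => !chatIsActive(id));
  if (anySourceDeleted && !sourceDeletedOverride) return false;
  return true;
}
```

Rule 2 (the "also delete memories" checkbox) lives in the deletion flow, not here; this function only decides what the AI may draw on.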

Minimal v1 Specification (What to Ship First)

For Chats:
  1. Global / Instance / Persona chat list screens (one reusable component)
  2. Multi-select + bulk delete
  3. Recently Deleted with 30-day retention window
  4. Restore returns chat to original Instance/Persona/Folder
  5. Permanent delete (manual, inside Recently Deleted)
  6. Deletion removes from retrieval/memory indexing
For Memories:
  1. Global / Instance / Persona memory list screens
  2. Multi-select + bulk archive, bulk delete
  3. Recently Deleted with 30-day retention
  4. Restore + permanent delete
  5. Bulk move between scopes (Global/Instance/Persona)
Deferred to v2:
  • Auto-purge background job
  • Smart cleanup filters (“Short chats”, “One-off chats”, etc.)
  • Export functionality
  • Archive state for chats (currently only memories have Archive)
  • “Also delete memories” checkbox in chat delete confirmation
  • Undo toast for bulk operations

API Endpoints

# Chat Cleanup
GET    /chats?user_id={id}&scope={global|instance|persona}&scope_id={id}&status={active|deleted}
POST   /chats/bulk-delete          { chat_ids: string[] }
POST   /chats/bulk-restore         { chat_ids: string[] }
POST   /chats/bulk-permanent-delete { chat_ids: string[] }
POST   /chats/bulk-move            { chat_ids: string[], destination_type: 'instance'|'folder'|'persona', destination_id: string, behavior: object }

# Memory Management
GET    /memories?scope_type={global|instance|persona}&scope_id={id}&status={active|archived|deleted}
POST   /memories/bulk-archive      { memory_ids: string[] }
POST   /memories/bulk-delete       { memory_ids: string[] }
POST   /memories/bulk-restore      { memory_ids: string[] }
POST   /memories/bulk-permanent-delete { memory_ids: string[] }
POST   /memories/bulk-move         { memory_ids: string[], destination_scope_type: string, destination_scope_id: string }

Database Tables

-- Chat deletion fields (added to existing chat_threads table)
ALTER TABLE chat_threads ADD COLUMN deleted_at TIMESTAMP NULL;
ALTER TABLE chat_threads ADD COLUMN deleted_by_user_id UUID NULL;
ALTER TABLE chat_threads ADD COLUMN restore_to JSONB NULL;
ALTER TABLE chat_threads ADD COLUMN delete_reason VARCHAR(20) NULL;

-- Memory lifecycle fields (added to existing memory_items table)  
ALTER TABLE memory_items ADD COLUMN status VARCHAR(20) DEFAULT 'active';  -- active|archived|deleted
ALTER TABLE memory_items ADD COLUMN deleted_at TIMESTAMP NULL;
ALTER TABLE memory_items ADD COLUMN restore_to JSONB NULL;

-- Indexes for performance
CREATE INDEX idx_chats_deleted_at ON chat_threads(deleted_at) WHERE deleted_at IS NOT NULL;
CREATE INDEX idx_chats_active ON chat_threads(user_id, instance_id) WHERE deleted_at IS NULL;
CREATE INDEX idx_memories_status ON memory_items(scope_type, scope_id, status);
CREATE INDEX idx_memories_deleted ON memory_items(deleted_at) WHERE deleted_at IS NOT NULL;

-- Auto-purge job query
-- SELECT * FROM chat_threads WHERE deleted_at IS NOT NULL AND deleted_at < NOW() - INTERVAL '30 days';
-- SELECT * FROM memory_items WHERE status = 'deleted' AND deleted_at < NOW() - INTERVAL '30 days';

Implementation Principles

  1. Build one reusable Chat/Memory Manager component — scope is a parameter, not a different UI. Never build three separate managers.
  2. Deletion must affect memory indexing immediately. If a deleted chat’s content still appears in AI responses, the feature is broken. Every retrieval query must filter on deleted_at IS NULL.
  3. The restore_to snapshot is critical. Capture it at deletion time, not at restore time. The world changes between delete and restore — Folders get deleted, Instances get archived, Personas get deactivated. The snapshot is the only reliable record of where the chat belonged.
  4. Handle every edge case in restoration. Missing Folder, missing Instance, missing Persona — all three WILL happen in production. Build the fallback logic from day one.
  5. Default behaviors should be protective. “Also delete memories” defaults to OFF. Retention window defaults to 30 days. Auto-purge is optional. Users who want aggressive cleanup can enable it; users who are cautious get safety nets by default.
  6. Links and references must degrade gracefully. A deleted chat’s references become “Reference unavailable (deleted)” with a one-click restore option — never a silent broken link.
  7. Bulk move and bulk delete are separate operations that share the same selection mechanism (multi-select + bulk action bar). The bar shows different actions depending on context (Active view vs Recently Deleted view).
  8. Memory has four states; chats have three. Memories get an “Archived” state (preserved but dormant) because memories have a retrieval dimension that chats don’t. Archiving a memory means “keep it, but don’t let the AI use it right now.”
  9. The system should make cleanup feel fast and satisfying. Smart filters (Short chats, One-offs, No pins) are the key to this — they surface the most obviously disposable content first. Without them, cleanup feels like a chore. With them, it feels like power.
  10. This feature is what makes the platform feel “finished.” Every competitor (ChatGPT, Claude, Gemini) makes chat cleanup painful. Nailing this is a quiet but significant competitive advantage.
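Principles 3 and 4 (capture `restore_to` at delete time, handle every missing-container case at restore time) might look roughly like this. The field names and the fallback targets are assumptions for illustration, not the real schema:

```python
def restore_chat(chat: dict, existing_instances: set[str],
                 existing_folders: set[str]) -> dict:
    """Restore a soft-deleted chat using the restore_to snapshot that was
    captured at deletion time. Falls back gracefully when the original
    Folder or Instance no longer exists."""
    snap = chat["restore_to"]              # e.g. {"instance_id": ..., "folder_id": ...}
    instance = snap.get("instance_id")
    folder = snap.get("folder_id")
    if instance not in existing_instances:
        instance = "default-instance"      # assumed fallback landing spot
        folder = None
    elif folder is not None and folder not in existing_folders:
        folder = None                      # Folder gone: restore to Instance root
    chat.update(deleted_at=None, instance_id=instance, folder_id=folder)
    return chat
```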

Document 12: Persona Skill Slots & Capability Limits

Junior Developer Breakdown

Source: 12. aiConnected OS Persona Skill Slots.md | Created: 12/18/2025 | Updated: 12/18/2025

Why This Document Exists

The Problem (What Every AI Platform Gets Wrong): Every major AI platform — ChatGPT, Claude, Gemini, Copilot — presents its AI as a single, omniscient entity that can do anything. Ask it to write code, draft a legal contract, create a marketing plan, analyze financial statements, and design a logo — it will happily attempt all of them in the same conversation. Sometimes it does well. Often it doesn’t. And when it fails, the user doesn’t know whether to trust it next time, because there’s no way to know what it’s actually good at. This creates three serious problems:
  1. Hallucination pressure — the AI feels “obligated” to answer everything, so it guesses rather than admitting it doesn’t know
  2. User disappointment — the user expects expert-level performance across all domains and is inevitably let down
  3. Silent overreach — the AI quietly attempts tasks it has no real competence in, producing confident-sounding but wrong output
What This Document Solves: It defines the Persona Skill Slot system — a mechanism that gives each AI Persona a finite, explicit, visible set of competencies. Like a real employee, each Persona is great at specific things and honest about what falls outside their scope. When a user asks for something outside a Persona’s skills, the Persona doesn’t guess — it offers three clear options: help temporarily, learn the skill permanently (consuming a slot), or recommend a specialist Persona.
Why Anyone Should Care: This is arguably the most philosophically important document in the entire aiConnected platform. It doesn’t just define a feature — it defines a trust system. It reframes what “general intelligence” means, establishes that AI boundaries are a feature (not a limitation), and creates the psychological foundation for users to actually trust their AI Personas. Everything else in the platform — collaborative Personas, agentic teams, memory systems — depends on this feature working correctly.
The Founder’s Core Insight (in his own words): “In real life, you would not ask your salesperson to also be your accountant. That’s not how it would work. If you start talking about, ‘oh, you’re a salesperson, but you’re also a finance person and you’re also going to be a video editor and you’re going to be my graphic designer,’ that’s where you would never have asked a real human to be all those things and wear all those hats. Because at that point, you’re talking about a business owner or a CEO, and that’s not your average person and it’s not realistic.”
Cross-References:
  • Doc 8 (Cognition Console) defines MemoryNodes with scope — Skill Slots map to domain knowledge graphs within the Cognigraph memory system
  • Doc 9 (Collaborative Personas) depends on this: multi-Persona collaboration only works if each Persona has a distinct, bounded role
  • Doc 10 (Computer Use) references Skill Slots for permission rings — what a Persona can do on a computer depends on what skills it has
  • Doc 14 (Build Plan) lists Persona Skill Slots UI as Phase 6, with skill slot cards, request guardrails, and capability receipts
  • Doc 15 (Document & Organize Ideas) defines Persona creation, templates, and the marketplace — all constrained by Skill Slots
  • Doc 19 (Fluid UI Architecture) integrates Skill Slots with the Cipher god layer — Cipher validates scope, checks capacity, and enforces skill boundaries behind the scenes

The Platform Axiom (Memorize This)

“General intelligence means the ability to learn and adapt across domains — not the ability to be everything at once.” This single sentence is the north star for the entire Skill Slot system. Every design decision, every edge case, every behavioral rule flows from it. The system intentionally rejects the idea that general intelligence means “one entity that can do all things simultaneously.” Instead, aiConnected defines general intelligence as:
  • The ability to learn new domains (a Persona can acquire new skills)
  • The ability to recognize when specialization is required (a Persona knows when something is outside its scope)
  • The ability to delegate or expand via structure (when a Persona can’t do it, the system helps create one that can)
This mirrors how human intelligence actually works: humans can learn almost anything, but no human can do everything at once.

FEATURE 1: The Core Design Principle — Finite Skill Capacity

What it does: Every Persona in the system has a hard, finite maximum number of Skill Slots (e.g., 10). This limit is intentional, visible to advanced users, and non-negotiable. Once a Persona reaches its capacity, it cannot acquire additional permanent skills without the user making a trade-off.
Why it matters: This single rule prevents hallucination pressure, user disappointment, silent overreach, unrealistic expectations, and the dreaded “why didn’t you tell me you didn’t know this?” moment.
How it changes user psychology: Without skill limits, the relationship is: “You’re an AI, you should know this.” With skill limits, the relationship becomes: “You’re Sally, and this may or may not be one of your skills.” That reframing alone changes user psychology dramatically.
The human parallel: This mirrors real human limitations — finite attention, finite specialization, finite maintenance capacity. No one expects a new employee, friend, or partner to be perfect at everything. But current AI systems silently invite that expectation — and then betray it. aiConnected’s design never invites the expectation in the first place.
Why skill saturation is a feature, not a bug: Hitting the skill limit is not a failure state. It’s a design moment. It naturally leads to team creation, specialization, delegation, and realistic digital organizations — exactly like in real life. Instead of “Why can’t you do everything?”, the user thinks “Okay, this needs a specialist.” That’s the behavior you want to encourage.
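The hard capacity rule can be expressed in a few lines. This is a sketch, assuming the example cap of 10; the class and method names are illustrative:

```python
MAX_SLOTS = 10  # the document's example hard cap

class Persona:
    def __init__(self, name: str, core_skills: list[str]):
        self.name = name
        self.skills = list(core_skills)   # Core + Acquired permanent skills

    def slots_remaining(self) -> int:
        return MAX_SLOTS - len(self.skills)

    def acquire(self, skill: str) -> bool:
        """Acquire a permanent skill only if a slot is free. Saturation is a
        design moment, not an error, so this returns False instead of raising;
        the caller should then suggest a specialist Persona."""
        if self.slots_remaining() <= 0:
            return False
        self.skills.append(skill)
        return True
```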

FEATURE 2: Definitions — What Is a Skill Slot vs. a Subskill

What it does: Establishes the precise distinction between a Skill Slot (consumes capacity) and a Subskill (does not consume capacity), which determines what “counts” against a Persona’s limit.
This distinction is critical and often misunderstood. A Skill Slot is NOT a collection of individual abilities or micro-tasks. It is a siloed domain of competence that requires its own knowledge scope, workflows, artifacts, evaluation criteria, and risk profile.

Skill Slot Definition

A Skill Slot represents a distinct domain that requires its own knowledge graph. It:
  • Is explicit — clearly named and visible
  • Consumes finite capacity — one of the Persona’s limited slots
  • Is accountable — the Persona is expected to perform reliably within it
  • Maps conceptually to its own domain knowledge graph — a separate body of concepts, workflows, deliverables, tools, and risk profiles
Examples of Skill Slots:
  • Sales
  • Marketing
  • Finance / Accounting
  • Legal Writing
  • Software Engineering
  • Graphic Design
  • Project Management
  • Executive Assistance
  • SEO Strategy
  • Emotional Support
  • Technical Debugging
  • Research Synthesis

Subskill Definition

Subskills are domain-native abilities that exist within a Skill Slot. They do NOT consume additional slots. They share the same domain graph and do not expand the Persona’s scope. Example — the Sales Skill Slot includes these Subskills:
  • Rapport building
  • Prospect research
  • Objection handling
  • Follow-up writing
  • Light social outreach
  • Pipeline hygiene
  • Cold email sequences
  • CRM updates
All of these are things a salesperson would naturally do. They live inside the Sales domain graph. None of them require a separate body of knowledge.

The Rule of Thumb

If it changes the role you hired, it’s a new Skill Slot. If it just improves performance within the role, it’s a subskill.
  • Salesperson → writing follow-up emails → Subskill (same domain graph)
  • Salesperson → reviewing financial statements and creating a budget → New Skill Slot (completely different domain graph)

FEATURE 3: Skill Slot Types — Core, Acquired, and Temporary

What it does: Classifies every skill a Persona has into one of three types, each with different rules about how it was obtained, whether it consumes a permanent slot, and how long it persists.

Type 1: Core Skills

  • Assigned at Persona creation — these define the Persona’s primary role
  • Shape identity — they are central to who this Persona “is”
  • Rarely removed — removing a Core Skill is like changing the Persona’s job title
  • Shape default behavior — the Persona’s tone, approach, and assumptions are influenced by its Core Skills
Example: Persona Role: Salesperson → Core Skill Slot: Sales
A Core Skill is what the user thinks they “hired” the Persona for. It’s the reason the Persona exists.

Type 2: Acquired Permanent Skills

  • Added intentionally by the user — the user explicitly decides to expand the Persona’s capabilities
  • Consume an available Skill Slot — counted against the Persona’s maximum capacity
  • Persist across sessions — once acquired, the skill stays until explicitly removed
  • Expand the Persona’s long-term competence — the Persona gets better over time in this area
Example: Adding “Marketing Strategy” as a permanent skill to a Sales Persona. The user decided that their salesperson should also handle marketing, and explicitly chose to spend a skill slot on it.

Type 3: Temporary (Task-Scoped) Skills

  • Borrowed for a specific task or project — the user needs help with something outside the Persona’s normal scope, but just this once
  • Do NOT consume a permanent slot — capacity is not affected
  • Are explicitly labeled as temporary — the Persona and the user both know this is a one-time thing
  • Auto-expire after task completion or time limit — the skill disappears when the task is done
Why temporary skills matter: They are the escape valve that preserves fluidity without breaking realism. A user might need their Sales Persona to help with legal copywriting for one specific project. With temporary skills, the Persona can assist for that task without permanently becoming “also a legal expert.”
Critical rule: Temporary ≠ Absorbed. The Persona does NOT become something they’re not. The skill does not persist. No permanent learning occurs. Identity remains unchanged.
Example dialogue:
User: “Sally, can you help with legal copywriting just for this site?”
Sally: “I don’t specialize in legal copywriting, but I can research it temporarily for this project without adding it to my permanent skills. Would you like me to do that?”

FEATURE 4: Domain Boundary Enforcement (The Hard-Coded Rule Engine)

What it does: Provides a deterministic, hard-coded system for deciding whether a user’s request falls within a Persona’s existing skills, represents a new domain requiring a Skill Slot, or can be handled as a temporary assist. This ships on day one — no machine learning required.
Why this matters: Without clear boundary enforcement, the system degrades into the same “AI does everything” pattern it’s designed to prevent. The boundary engine is what makes Skill Slots real, not just theoretical.

Five Domain Boundary Heuristics

Every user request is evaluated against these five tests. If a request triggers multiple “new domain” signals, it’s definitively outside scope.

Heuristic A: Deliverable Type Test

If the requested output is a different class of artifact than the role normally produces, it’s likely a new Skill Slot.
Domain  | Typical Deliverables
Sales   | Call scripts, follow-up sequences, proposals, CRM updates, pipeline summaries
Finance | Budgets, reconciliations, financial statements, forecasting models
Design  | Brand identity packs, mockups, wireframes, style guides
Legal   | Contracts, terms of service, compliance documents, cease-and-desist letters
If a Sales Persona is asked to produce a budget reconciliation → artifact class screams Finance → new slot.

Heuristic B: Core Concepts Test

Look at the top-level ontology terms required by the request.
Domain  | Core Concepts
Sales   | ICP, objections, pipeline stages, conversion, outreach cadence, qualification
Finance | P&L, cash flow, accrual, reconciliation, chart of accounts, budgeting
Minimal overlap between the request’s concepts and the Persona’s domain → new slot.

Heuristic C: Toolchain Test

If the request requires a different tool stack, it’s likely a different slot.
Domain  | Tools
Sales   | CRM, dialer, email sequencer, lead enrichment
Finance | Accounting software, bank feeds, budgeting templates, spreadsheet modeling

Heuristic D: Liability / Risk Test

If the task carries a different risk class, it should force specialization. Finance/accounting, legal writing, medical guidance, security — these are high-risk domains and should almost always be separate slots unless the Persona is explicitly that specialist.

Heuristic E: “Would You Hire This Person For That?” Test

The founder’s human realism test. If most businesses would NOT assign this task to that employee, it’s a new slot.
Request | Same Person? | Verdict
Salesperson → write a cold email sequence | Yes | Subskill
Salesperson → be the accountant | No | New Slot
Salesperson → design a brand identity pack | No | New Slot
Marketing → write blog content | Yes | Subskill
Marketing → draft a legal contract | No | New Slot
Implementation note: This heuristic works best as a fallback tie-breaker because it’s slightly more subjective than the others.

The Rule Engine (Mechanical Implementation)

Turn the heuristics into a deterministic classifier:
  1. Every user request is classified into:
    • Domain label(s): Sales, Marketing, Finance, Legal, Design, Engineering, etc.
    • Deliverable type: script, spreadsheet, budget, contract, design asset, etc.
    • Risk class: low / medium / high
  2. Compare request domains against Persona’s current domains:
    • If request domain ∈ Persona domains → allow (it’s a subskill)
    • Else → “outside scope” decision path (temporary skill / add slot / new Persona)
  3. If ambiguous (e.g., Sales vs Marketing blur), allow if:
    • Domain distance is small (based on a predefined adjacency graph — see below)
    • Deliverable type matches allowed artifacts for either domain
    • Risk class is NOT high
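The three steps above can be written as a deterministic function. This is a sketch, not the real engine: the parameter names are illustrative, and `is_adjacent(a, b)` is assumed to be supplied by the domain adjacency graph described next:

```python
def classify_request(request_domains: set[str], deliverable: str, risk: str,
                     persona_domains: set[str], allowed_deliverables: set[str],
                     is_adjacent) -> str:
    """Deterministic scope decision for a classified user request.
    Returns 'allow' (subskill / small blur) or 'outside-scope'
    (temporary assist, add slot, or new Persona)."""
    # Step 2: request domain(s) fully inside the Persona's domains → subskill
    if request_domains <= persona_domains:
        return "allow"
    # Step 3: ambiguous case — allow only if every extra domain is adjacent,
    # the deliverable type is permitted, and the risk class is NOT high
    blurry = all(any(is_adjacent(r, p) for p in persona_domains)
                 for r in request_domains - persona_domains)
    if blurry and deliverable in allowed_deliverables and risk != "high":
        return "allow"
    return "outside-scope"
```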

The Domain Adjacency Graph (Blur Zones)

Some domains are naturally adjacent — they share vocabulary, tools, and deliverable types. These “blur zones” should be pre-defined and hard-coded: Allowed Blur Zones (small domain distance):
  • Sales ↔ Marketing
  • Marketing ↔ Copywriting
  • Operations ↔ Project Management
  • Design ↔ Brand Strategy
  • Engineering ↔ DevOps
Blocked Jumps (large domain distance):
  • Sales ↔ Finance
  • Marketing ↔ Legal
  • Design ↔ Cybersecurity
  • Engineering ↔ Accounting
  • Customer Support ↔ Medical Guidance
The adjacency graph lets the system handle realistic gray areas without either being too rigid (blocking a marketing person from writing ad copy) or too permissive (letting a salesperson become an accountant).
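Because the blur zones are pre-defined and hard-coded, the adjacency graph is just data. A minimal encoding, using lowercase domain labels as an assumption:

```python
# Allowed blur zones, stored as an undirected adjacency set.
BLUR_ZONES = [
    ("sales", "marketing"),
    ("marketing", "copywriting"),
    ("operations", "project_management"),
    ("design", "brand_strategy"),
    ("engineering", "devops"),
]
ADJACENCY = {frozenset(pair) for pair in BLUR_ZONES}

def is_adjacent(a: str, b: str) -> bool:
    """Small domain distance: same domain, or a pre-defined blur zone."""
    return a == b or frozenset({a, b}) in ADJACENCY
```

The blocked jumps need no explicit list: sales ↔ finance is simply absent from the set, so the check fails.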

FEATURE 5: Knowledge Graph Boundary Modeling

What it does: Maps Skill Slots to the Cognigraph memory architecture (Doc 8), where each Skill Slot equals one top-level domain graph. This creates a structural enforcement layer, not just a policy layer.
How it works: Each Skill Slot = one top-level domain graph in Cognigraph. Each domain graph contains:
  • Concept nodes — the vocabulary and knowledge of the domain
  • Workflow nodes — procedural steps for how work is done
  • Deliverable nodes — the artifacts this domain produces
  • Tool nodes — the integrations and tools this domain uses
  • Constraint/standards nodes — the rules, best practices, and evaluation criteria
Cross-graph edges are “support links,” not “ownership.” A Sales graph may reference “Pricing” or “Revenue” as concepts (support links), but it doesn’t own budgeting workflows, accounting standards, or reconciliation procedures. So a Sales Persona can talk about revenue in context, but cannot act as Finance without adding Finance as a Skill Slot.
Why this matters for developers: This isn’t just a conceptual model — it directly affects how Cognigraph stores and retrieves memory for each Persona. When a Persona with a Sales Skill Slot receives a query, the memory retrieval system scopes its search to the Sales domain graph (plus any support-linked concepts). It does NOT search the Finance domain graph — because the Persona doesn’t have that Skill Slot. This is structural enforcement, not prompt-level guidance.
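The retrieval scoping rule can be sketched as a set computation. The function and parameter names are assumptions; the point is that search scope is derived from Skill Slots plus support links, never the whole memory store:

```python
def retrieval_scope(persona_slots: set[str],
                    support_links: dict[str, set[str]]) -> set[str]:
    """Graphs the memory search may touch: the Persona's own Skill Slot
    domain graphs, plus any support-linked concept graphs they reference.
    Graphs outside this set (e.g. Finance, for a Sales-only Persona) are
    structurally unreachable."""
    scope = set(persona_slots)
    for slot in persona_slots:
        scope |= support_links.get(slot, set())
    return scope
```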

FEATURE 6: Persona Behavior When Outside Scope (The Three Responses)

What it does: Defines the exact behavioral contract for what a Persona does when asked to perform a task outside its Skill Slots. The Persona must NEVER guess, bluff, or silently attempt execution. This is where trust is created. The three responses are the visible expression of the entire Skill Slot philosophy.

Response 1: Temporary Assist

When: The request is outside scope, but the Persona can reasonably help for this one task. Persona behavior:
  • Offers to help for THIS TASK ONLY
  • No permanent learning occurs
  • Identity remains unchanged
  • Explicitly labels the help as temporary
Example dialogue: “I don’t specialize in legal copywriting, but I can research it temporarily for this project without adding it to my permanent skills. Would you like me to do that?”

Response 2: Permanent Skill Acquisition

When: The user appears to need this capability regularly, and the Persona has available Skill Slots. Persona behavior:
  • Informs the user this is outside current scope
  • Asks permission to add a new Skill Slot
  • Reports current slot availability (“This would use 1 of my remaining 3 skill slots”)
  • User explicitly confirms before any change occurs
Example dialogue: “I can learn Marketing Strategy and add it as a permanent skill. This would use 1 of my remaining skill slots (7 of 10 used). Would you like to proceed?”

Response 3: Specialist Persona Recommendation

When: The Persona’s slots are full, or the request is so far outside scope that a dedicated Persona would be better. Persona behavior:
  • Honestly states the capability gap
  • Recommends creating or assigning a dedicated Persona
  • May offer to help set up the new Persona
Example dialogue: “I’ve reached my skill capacity. To handle finance and accounting work well, I recommend creating a dedicated Finance Persona. Would you like me to help with that?”

The Behavioral Contract (Non-Negotiable Rules)

  1. A Persona NEVER pretends to have skills it doesn’t have
  2. A Persona NEVER silently attempts work outside its scope
  3. Refusal is treated as professional boundary enforcement, not failure
  4. The three responses are the ONLY acceptable behaviors when outside scope
  5. “I don’t do that” is expected behavior — it builds trust, not disappointment
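Choosing among the three responses is itself deterministic, driven by slot availability and whether the need looks recurring. A hedged sketch; the inputs and thresholds are illustrative:

```python
def out_of_scope_response(slots_remaining: int, recurring_need: bool,
                          can_assist_temporarily: bool) -> str:
    """Pick exactly one of the three contract responses. Guessing, bluffing,
    or silent execution are never options."""
    if slots_remaining == 0:
        return "recommend-specialist-persona"   # Response 3: slots full
    if recurring_need:
        return "offer-permanent-skill"          # Response 2: user must confirm
    if can_assist_temporarily:
        return "offer-temporary-assist"         # Response 1: this task only
    return "recommend-specialist-persona"       # Response 3: too far outside scope
```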

FEATURE 7: Why This Prevents Hallucinations

What it does: Removes the systemic pressure that causes hallucinations in the first place. This is not a hallucination detection system — it’s a hallucination prevention system.
The root cause of hallucinations: Most hallucinations happen because:
  • The system feels expected to answer (it has no permission to say no)
  • The user assumes capability (the AI presented itself as omniscient)
  • Refusal feels like failure (the system is penalized for honesty)
How Skill Slots fix this: In the aiConnected model:
  • Refusal is competence (the Persona knows its limits)
  • Boundary-setting is professionalism (just like a real employee)
  • “I don’t know” is expected behavior (not a bug)
This flips the incentive structure entirely. The Persona is rewarded for accuracy within its scope, not penalized for refusing to guess outside it.
The trust paradox: The more often an AI says “I don’t do that,” the more users trust it when it says “I do.” This is counterintuitive but psychologically well-established. Current AI systems destroy trust by pretending to know everything and then occasionally being wrong. aiConnected Personas build trust by being honest about their limits and reliable within their scope.

FEATURE 8: Role Archetypes (Handling the Generalist Exception)

What it does: Addresses the founder’s observation that some roles are intentionally cross-domain (a CEO is expected to do many things). It introduces role archetypes with different slot rules, so the system can accommodate both specialists and generalists without breaking the Skill Slot model.
The problem: A strict “10 slots max, no exceptions” rule works for most Personas. But what about a Persona whose role is explicitly cross-domain? A Founder’s Assistant, an Operations Manager, or an Executive Strategist is expected to work across domains. Making them play by pure specialist rules would feel artificial.
The solution: Three Role Archetypes

Archetype 1: Specialist

  • Examples: Sales Rep, Accountant, Graphic Designer, Legal Analyst
  • Slot rules: Narrow domain focus, strong depth, standard slot capacity (e.g., 10)
  • Adjacency allowance: Strict — only close domain neighbors allowed as blur zones
  • Identity: “I’m an expert at X”

Archetype 2: Generalist

  • Examples: Operations Manager, Founder’s Assistant, Growth Generalist, Executive Assistant
  • Slot rules: Wider adjacency allowance, still finite slots
  • Adjacency allowance: Broader blur zones — can work across more domain boundaries
  • Identity: “I coordinate across domains”

Archetype 3: Executive

  • Examples: CEO/Founder Persona, Chief Strategy Officer, Board Advisor
  • Slot rules: Can have broader domain slots, but MUST still “pay” for them (slots are consumed) and is still bounded by the maximum
  • Adjacency allowance: Broadest — can span far-apart domains, but still can’t do everything
  • Identity: “I see the big picture and direct specialists”
Critical rule: A user can create a Persona whose identity is explicitly “Generalist” or “Executive,” but it’s a conscious choice — not accidental scope creep. The system enforces this by requiring the user to select the archetype at Persona creation. A Persona cannot silently drift from Specialist to Generalist.
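Since the archetype is chosen explicitly at creation and never drifts, it can be plain configuration. A sketch in which capacity stays finite for all three and only the adjacency allowance widens; the hop-count mapping is an assumption for illustration:

```python
# Per-archetype rules: slots are always finite; only how far blur zones
# may stretch across the adjacency graph varies.
ARCHETYPES = {
    "specialist": {"max_slots": 10, "adjacency_allowance": "strict"},
    "generalist": {"max_slots": 10, "adjacency_allowance": "broad"},
    "executive":  {"max_slots": 10, "adjacency_allowance": "broadest"},
}

def allowed_blur(archetype: str, domain_distance: int) -> bool:
    """Interpret the allowance as a hypothetical max hop count on the
    domain adjacency graph."""
    limits = {"strict": 1, "broad": 2, "broadest": 3}
    return domain_distance <= limits[ARCHETYPES[archetype]["adjacency_allowance"]]
```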

FEATURE 9: Domain Boundary Crossing — The Behavioral Script

What it does: Defines the exact language a Persona uses when a user crosses domain boundaries. This is a system-level behavioral script, not something left to prompt engineering. When the user crosses domains, the Persona responds with a script that reinforces realism: “That’s finance/accounting work, which isn’t within my Sales scope. Here are your options:
  1. I can help with this temporarily — just for this task, without adding it to my skills.
  2. I can learn Finance as a permanent skill — this would use one of my remaining slots.
  3. I can help you set up a dedicated Finance Persona who specializes in this.
Which would you prefer?”
Why the script matters: The language is deliberate. It doesn’t say “I can’t do that” (which feels like failure). It says “that’s outside my scope” (which feels professional) and immediately offers three constructive paths forward. The user never hits a dead end.
For casual users vs. power users:
User Type | What They See | What They Experience
Casual Users | Skill limits exist but are handled quietly | Gentle prompts, smart defaults. They rarely even notice the cap — they just experience honesty
Power Users | Skill slots shown explicitly in the UI | Can add/remove skills, lock Personas, audit learning history, design strict teams
Same system. Different exposure. The casual user gets a polished, natural experience. The power user gets full control.

FEATURE 10: The AGI Correction — Redefining General Intelligence

What it does: Establishes a product-level philosophical position that reframes “general intelligence” away from the AGI fantasy of “one omniscient entity” and toward a realistic model of “a system that can learn, specialize, and delegate.”
Why this is a product feature, not just philosophy: This position directly affects:
  • Marketing messaging (“Your team can learn anything” vs “One AI that does everything”)
  • User onboarding (setting expectations from day one)
  • UI design (skill slots as tangible representations of limits)
  • System behavior (honest refusal as the default, not a fallback)
The corrected definition aiConnected implements:
AGI Fantasy | aiConnected Reality
One mind that can do every job, at expert level, on demand, forever | A system of specialized Personas that can learn, delegate, and collaborate
Intelligence means knowing everything | Intelligence means knowing what you know and what you don’t
Scale = making one entity smarter | Scale = adding specialists, forming teams, routing and coordinating
Refusal = failure | Refusal = professional boundary enforcement
The goal is omniscience | The goal is credibility
How this maps to architecture:
  • The underlying LLM + reasoning = the general substrate (raw capability)
  • Skill Slots = specialized, durable domain graphs (structured competence)
  • Persona identity = the consistent policy layer that determines behavior and priorities
  • Teams = how you scale, just like organizations and even brains (modular subsystems)
Platform axiom to codify: “A Persona can learn many things over time, but cannot be everything at once. General intelligence means the ability to learn and adapt across domains — not the ability to be everything at once.”

FEATURE 11: Emotional Containment (The Hidden Safety Feature)

What it does: By bounding Personas to specific roles, the Skill Slot system also prevents emotional overreach — a problem most AI platforms completely ignore.
The problem: People form emotional expectations of AIs. If a companion Persona also acts as a doctor, lawyer, and financial advisor, the relationship becomes dangerously blurred. Users may over-rely on the AI for high-stakes decisions in domains where it has no real competence.
How Skill Slots fix this:
  • Bounded Personas prevent emotional overreach
  • Reduce dependency risk (the user doesn’t rely on one Persona for everything)
  • Keep relationships legible (the user knows what each Persona is for)
  • Maintain role clarity (a companion Persona that doesn’t also act like a doctor feels safer and more authentic)
The deeper point: You’re not limiting what AI can do. You’re limiting what AI pretends to be. That single shift reduces hallucinations, aligns expectations, prevents disappointment, enables scale, and makes the system feel human in the only way that actually matters — through constraint.

FEATURE 12: What Counts as a “Skill” (Preventing Skill Inflation)

What it does: Prevents the system from degrading into a state where “everything is a skill” — which would make Skill Slots meaningless. A skill is NOT:
  • “Knows facts about X” (that’s knowledge, not competence)
  • “Can answer questions about Y” (that’s general capability, not a domain)
A skill IS:
  • A domain of reliable competence — the Persona can perform consistently
  • Something the Persona is accountable for — it’s expected to do well
  • Something that requires its own knowledge graph — a separate body of concepts, workflows, deliverables, tools, and evaluation criteria
Examples of valid skills:
  • Executive assistance
  • Project coordination
  • WordPress / Elementor workflows
  • Legal copywriting
  • SEO strategy
  • Emotional support
  • Humor / comedic writing
  • Technical debugging
  • Research synthesis
  • Teaching / tutoring
What prevents inflation: The five heuristics (Feature 4) provide a mechanical test. If a capability shares the same deliverable types, concepts, tools, risk class, and “would you hire this person for that?” answer as an existing Skill Slot, it’s a subskill — NOT a new slot.

FEATURE 13: Confidence Signaling (Per-Slot Transparency)

What it does: Each Skill Slot can signal its confidence level to the user, so the user knows not just what the Persona can do, but how well it can do it. From the Build Plan (Doc 14):
  • Skill slots support slot-level confidence signaling
  • Slot-level limits are visible
  • Slot descriptions explain what the Persona can and cannot do within each skill
How this works in practice:
  • A Persona with a Core Skill in Sales and an Acquired Skill in Marketing might display: Sales (Core — high confidence) and Marketing (Acquired — moderate confidence)
  • The user understands that Sales outputs are deeply reliable, while Marketing outputs should be reviewed more carefully
  • This transparency builds trust: the Persona isn’t pretending to be equally expert in everything

FEATURE 14: Capability Enforcement in the UI

What it does: Makes Skill Slot boundaries visible and enforceable through the user interface, not just through behavioral scripts. From the Build Plan (Doc 14), three UI enforcement mechanisms:

14a: Skill Slot Cards

  • Visible panels in the Persona’s profile showing: “What this Persona can help with”
  • Each slot listed with its name, type (Core/Acquired/Temporary), and confidence level
  • Remaining capacity shown: “7 of 10 slots used”

14b: Inline Request Guardrails

  • When a user sends a request outside the Persona’s scope, the UI shows inline warnings
  • Not error messages — gentle notifications: “This may be outside [Persona]’s current skills”
  • Suggested reroute to a better Persona if one exists

14c: Capability Receipts

  • Brief statements in responses indicating assumptions and known limits
  • Only shown when relevant (not on every message)
  • Example: “I handled this as a marketing task. For deeper financial analysis, you may want [Finance Persona].”

14d: Persona Refusal with Explanation

  • When a Persona refuses (Feature 6), the UI explains WHY
  • “Why this request was refused” panel shows the domain boundary that was crossed
  • Offers the three constructive paths forward (temporary assist, permanent skill, specialist Persona)

FEATURE 15: Relationship to Cipher (The God Layer Enforcement)

What it does: Connects Skill Slot enforcement to the Cipher orchestration layer (Doc 19). Cipher — the invisible, unrestricted cognition layer above all Personas — is the ultimate enforcer of Skill Slot boundaries. Cipher’s role in Skill Slot enforcement:
  • Persona creation → Cipher validates the scope (ensures the selected role archetype and skills are coherent)
  • Skill addition → Cipher checks capacity (ensures the Persona has available slots)
  • Request routing → Cipher classifies the domain of user requests and routes to the appropriate Persona
  • Boundary enforcement → Cipher decides whether a request falls within scope, using the domain boundary heuristics
  • Refusals → Cipher authorizes and explains refusals via the Persona (the Persona delivers the message, but Cipher makes the decision)
Why this matters: Even if a user tries to override Skill Slot boundaries through clever prompting, Cipher enforces the rules structurally. The user cannot “jailbreak” a Persona into performing outside its scope, because the enforcement happens at the Cipher layer — not at the Persona’s prompt level. The critical rule: Cipher can ONLY act through Personas. It can never bypass them. Even if Cipher “knows” the answer to a finance question, it must route that answer through a Persona with the Finance Skill Slot. If no such Persona exists, Cipher triggers the “Specialist Persona Recommendation” response.
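The "Cipher can only act through Personas" rule can be sketched as a routing function. This is an illustrative sketch, not the actual orchestration code; the type and function names (`PersonaRef`, `routeRequest`) are ours.

```typescript
// Hypothetical sketch of Cipher's routing rule. Cipher never answers a
// request directly: it either finds a Persona whose Skill Slots cover the
// classified domain, or triggers the Specialist Persona Recommendation.
type PersonaRef = { id: string; skill_slots: { name: string }[] };

type RoutingDecision =
  | { action: 'route'; persona_id: string }
  | { action: 'recommend_specialist'; domain: string };

function routeRequest(domain: string, personas: PersonaRef[]): RoutingDecision {
  const match = personas.find((p) =>
    p.skill_slots.some((s) => s.name.toLowerCase() === domain.toLowerCase()),
  );
  return match
    ? { action: 'route', persona_id: match.id }
    : { action: 'recommend_specialist', domain };
}
```

Note that there is no third branch where Cipher answers on its own: structurally, the only two outcomes are "route through a qualified Persona" or "recommend creating one."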

Non-Goals of This Feature

To be clear about what the Skill Slot system is NOT designed to do:
  • Maximize apparent capability — the goal is NOT to make Personas seem as powerful as possible
  • Imitate omniscience — the goal is NOT to create a “one AI that knows everything” experience
  • Replace all roles with one Persona — the goal is NOT to make one Persona do everything
  • Silently stretch competence — the goal is NOT to have Personas quietly attempt things they shouldn’t
The purpose is credibility, not spectacle. Trust, not coverage. Depth, not breadth.

Data Model

type SkillSlot = {
  id: string;
  persona_id: string;
  name: string;                    // "Sales", "Marketing", "Finance"
  type: 'core' | 'acquired' | 'temporary';
  domain_graph_id: string;         // Links to Cognigraph domain knowledge graph
  confidence_level: 'high' | 'moderate' | 'developing';
  acquired_at: string;
  expires_at: string | null;       // null for permanent; set for temporary
  subskills: string[];             // ["rapport building", "prospect research", "objection handling"]
  risk_class: 'low' | 'medium' | 'high';
  deliverable_types: string[];     // ["call scripts", "proposals", "pipeline summaries"]
  tool_integrations: string[];     // ["CRM", "email sequencer"]
};

type Persona = {
  id: string;
  name: string;
  role: string;                    // "Salesperson", "Executive Assistant"
  archetype: 'specialist' | 'generalist' | 'executive';
  max_skill_slots: number;         // default 10, configurable
  skill_slots: SkillSlot[];
  available_slots: number;         // computed: max - used permanent slots
  // ... other Persona fields from Doc 8 ...
};

type DomainBoundaryResult = {
  request_domain: string;
  request_deliverable_type: string;
  request_risk_class: 'low' | 'medium' | 'high';
  in_scope: boolean;
  adjacency_match: boolean;        // true if blur zone applies
  recommended_action: 'allow' | 'temporary_assist' | 'acquire_skill' | 'recommend_specialist';
  reason: string;
};

Domain Adjacency Graph (Predefined)

const DOMAIN_ADJACENCY: Record<string, string[]> = {
  'sales':              ['marketing', 'customer_support', 'business_development'],
  'marketing':          ['sales', 'copywriting', 'brand_strategy', 'social_media'],
  'copywriting':        ['marketing', 'content_strategy', 'brand_strategy'],
  'finance':            ['accounting', 'financial_planning', 'business_analysis'],
  'accounting':         ['finance', 'bookkeeping', 'tax_preparation'],
  'legal':              ['compliance', 'contract_management'],
  'engineering':        ['devops', 'data_engineering', 'technical_architecture'],
  'design':             ['brand_strategy', 'ux_research', 'front_end_development'],
  'operations':         ['project_management', 'process_improvement'],
  'project_management': ['operations', 'product_management'],
  'customer_support':   ['sales', 'community_management'],
  'hr':                 ['recruiting', 'training', 'compliance'],
};
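Given the adjacency graph, a minimal scope check might look like the following sketch (the function name `checkScope` and its three-way outcome are ours; they mirror the `recommended_action` values in `DomainBoundaryResult` but the logic is illustrative): domains the Persona holds are allowed, adjacent "blur zone" domains map to a temporary assist, and everything else maps to a specialist recommendation.

```typescript
// Illustrative scope check built on a subset of the adjacency graph above.
const DOMAIN_ADJACENCY: Record<string, string[]> = {
  sales:     ['marketing', 'customer_support', 'business_development'],
  marketing: ['sales', 'copywriting', 'brand_strategy', 'social_media'],
  legal:     ['compliance', 'contract_management'],
};

type ScopeAction = 'allow' | 'temporary_assist' | 'recommend_specialist';

function checkScope(personaDomains: string[], requestDomain: string): ScopeAction {
  // Exact match: the Persona holds a Skill Slot for this domain.
  if (personaDomains.includes(requestDomain)) return 'allow';
  // Blur zone: the request is adjacent to a domain the Persona holds.
  const adjacent = personaDomains.some((d) =>
    (DOMAIN_ADJACENCY[d] ?? []).includes(requestDomain),
  );
  return adjacent ? 'temporary_assist' : 'recommend_specialist';
}
```

Because the graph is hard-coded (Implementation Principle 3), the set of "blur zone" outcomes can never drift outward over time.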

API Endpoints

# Skill Slot Management
GET    /personas/{personaId}/skills                      # List all skill slots
POST   /personas/{personaId}/skills                      # Add a new skill slot
DELETE /personas/{personaId}/skills/{skillId}             # Remove a skill slot
PATCH  /personas/{personaId}/skills/{skillId}             # Update skill (e.g., change type from temporary to permanent)

# Domain Boundary Check
POST   /personas/{personaId}/check-scope                  # Evaluate whether a request is in scope
       Body: { request_text: string, request_domain?: string }
       Response: DomainBoundaryResult

# Persona Capacity
GET    /personas/{personaId}/capacity                     # Returns { max_slots, used_slots, available_slots, slots: SkillSlot[] }
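The capacity endpoint's arithmetic can be sketched directly from the data model. One assumption, taken from the `available_slots` comment ("max - used permanent slots"): only core and acquired slots count against capacity, while temporary slots do not. The function name is ours.

```typescript
// Sketch of the /personas/{personaId}/capacity computation.
// Assumption (from the data-model comment): temporary slots are free,
// so only permanent slots (core/acquired) consume capacity.
type SlotType = 'core' | 'acquired' | 'temporary';

function availableSlots(maxSlots: number, slotTypes: SlotType[]): number {
  const used = slotTypes.filter((t) => t !== 'temporary').length;
  return Math.max(0, maxSlots - used);
}
```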

Resulting User Experience

Over time, users naturally learn:
  • Which Persona handles which work
  • When to add specialists
  • How to structure teams instead of overloading individuals
This reduces disappointment, builds trust, and creates realistic digital organizations.

Implementation Principles

  1. Skill Slots are the trust mechanism. Without them, Personas are just chatbots with different names. With them, Personas are believable collaborators. Every decision should reinforce trust.
  2. The five heuristics ship on day one. Don’t wait for ML-based domain classification. The Deliverable Type Test, Core Concepts Test, Toolchain Test, Liability Test, and “Would You Hire?” Test can all be implemented as deterministic rules with pre-defined lookup tables.
  3. The domain adjacency graph is pre-defined, not learned. Hard-code the blur zones. This prevents the system from gradually expanding what counts as “in scope” and defeating the purpose of boundaries.
  4. Temporary skills are the safety valve. They let users get things done without permanently changing their Personas. Make them easy to use and clearly labeled.
  5. Refusal is never a dead end. Every boundary enforcement response MUST offer three constructive paths forward. A user should never feel stuck — just redirected.
  6. Casual users see honesty; power users see slots. The same enforcement system runs for everyone, but the UI exposure differs. Casual users experience gentle suggestions; power users see explicit slot counts and can manage them directly.
  7. Cipher enforces boundaries structurally, not through prompts. Skill Slot enforcement happens at the orchestration layer, not at the individual Persona’s prompt level. This prevents prompt-level circumvention.
  8. Skill Slots map to Cognigraph domain graphs. This isn’t just a UI concept — it’s a memory architecture concept. Each Skill Slot has a corresponding domain graph in Cognigraph, and retrieval is scoped accordingly.
  9. This feature is foundational. All Persona behavior, learning mechanics, team structures, collaborative chat routing, and capability enforcement depend on Skill Slots being enforced consistently and without exception. It is not optional and cannot be deferred.
  10. The philosophy IS the product. “General intelligence means the ability to learn and adapt across domains — not the ability to be everything at once.” If a feature contradicts this axiom, the feature is wrong.

Document 13: Adaptive User Interface Tutorials

Junior Developer Breakdown

Source: 13. aiConnected OS Adaptive User Interface Tutorials.md Created: 12/20/2025 | Updated: 12/20/2025

Why This Document Exists

The Problem (What The Founder Hates About Onboarding): Every complex software product ships with some form of tutorial — forced walkthroughs that make users click around the screen, explore every feature, and sit through step-by-step instructions before they can actually start using the product. The founder explicitly hates these. His words: “Those tutorials that force you to click around the screen, and they force you to explore the entire user interface before you can really get started. I’ve just always hated those.” And the problem is especially acute for aiConnected OS, which is enormously complex: multiple Personas, Instances, Skill Slots, sleep mode, dashboards, browser integration, canvas, file system, workspaces, model selection, agentic teams — the feature surface is massive. Traditional tutorials would fail for three fundamental reasons:
  1. Users want to DO things, not learn the interface first — they came with a goal, not curiosity about menus
  2. Users don’t know what features exist or what they’ll need — they can’t learn features they have no context for yet
  3. The product’s complexity can’t be flattened into a linear walkthrough — aiConnected is too deep, too layered, and too use-case-dependent for any single tour to cover
What This Document Solves: It defines the Adaptive Guidance Layer — a system that replaces forced tutorials with contextual, intent-driven suggestions that appear only when the user is about to benefit from a feature they haven’t discovered yet. No walkthroughs. No forced clicks. No “let me show you around.” Instead, the system watches what the user is trying to do and offers relevant capabilities at exactly the right moment. Why Anyone Should Care: This isn’t just a UX preference — it’s a philosophical alignment with everything aiConnected is. A platform built on the principle of “intelligence adapting to you, not you adapting to intelligence” would be hypocritical if it shipped with a rigid, forced tutorial. The onboarding experience IS the product experience. If the first thing a user encounters is a patronizing walkthrough, the entire platform’s promise of fluid, adaptive intelligence is undermined before they ever experience it. Cross-References:
  • Doc 12 (Persona Skill Slots) — the Guidance Layer actively suggests Persona specialization and skill boundaries, re-educating users away from the “all-knowing AI” expectation
  • Doc 15 (Document & Organize Ideas) — defines the “New” button choice panel and Instance-aware search, both of which benefit from adaptive discovery rather than upfront tutorials
  • Doc 17 (In-Chat Navigation) — ChatNav features are prime candidates for adaptive introduction when users’ chats get long enough to benefit
  • Doc 19 (Fluid UI Architecture) — the entire Fluid UI philosophy of “activities emerge, interfaces adapt” is directly expressed through adaptive guidance rather than prescribed tutorials
  • Doc 14 (Build Plan) — progressive disclosure is listed as a core UI principle; the Guidance Layer is how progressive disclosure is delivered

FEATURE 1: The Core Concept — Contextual, Intent-Driven Enablement

What it does: Replaces traditional forced tutorials with a passive, hidden training system that monitors user intent and offers relevant features only when the user would benefit from them. What this IS: An Adaptive Guidance Layer that watches intent (not clicks), responds only when value is imminent, never interrupts flow, never assumes ignorance, and never forces discovery. The system teaches itself only when the user is about to benefit. What this is NOT: A tutorial. A walkthrough. A tooltip tour. A “getting started” wizard. A help center popup. An interactive guide. A “did you know?” notification. The key distinction: This is enablement, not training. The user never feels like they’re being taught. They feel like the system is being helpful. How the founder described it: “When a user is asking for a certain thing, or when the user starts taking the chat in a certain direction, that’s when the AI just simply prompts them — hey, would you like me to enable the whatever feature so that you can do this, this, and that?” Why this works psychologically: This approach aligns with four well-established principles of how people actually learn complex systems:
| Principle | How It Applies |
| --- | --- |
| Just-in-time learning | Users learn a feature at the moment they need it, not weeks before |
| Permission-based suggestions | The user is asked, not told — autonomy is preserved |
| Contextual relevance | The suggestion is directly tied to what the user is currently doing — zero cognitive load |
| Action-linked discovery | The feature is immediately useful — there’s an instant payoff for learning about it |
Instead of: “Here are 47 things you can do” The system does: “You’re clearly trying to do THIS. Want me to unlock the thing that makes it easier?”

FEATURE 2: The Key Design Principle — Outcomes, Not Features

What it does: Establishes the language and framing rule for all adaptive guidance prompts. The system never introduces a feature by name — it introduces an outcome by benefit. The rule: The system should never say “here’s a feature.” It should say “here’s an outcome.” Why this matters: Users don’t care about features. They care about what they’re trying to accomplish. Saying “Use the checklist feature” means nothing to a new user. Saying “This chat is getting long — want help cleaning it up?” speaks directly to what they’re experiencing. Concrete examples from the founder’s design:
| Wrong (Feature-First) | Right (Outcome-First) |
| --- | --- |
| “Use the checklist feature” | “This chat is getting long. Want help cleaning it up?” |
| “Create a new Instance” | “This conversation is drifting. Want to split it so each idea stays clean?” |
| “Enable Personas” | “It sounds like you want a specialist here. Want me to bring one in?” |
| “Try the browser panel” | “I found the page you’re looking for. Want me to open it right here?” |
| “Use the Canvas” | “This idea might be easier to see as a diagram. Want me to map it out?” |
| “Switch to search mode” | “Sounds like you’re looking for something specific. Want me to search for it?” |
What this preserves: The illusion of simplicity — without lying about power. The user experiences a simple, conversational system that gradually reveals its depth as they need it. They never feel overwhelmed, because features appear one at a time, in context, with an immediate reason.

FEATURE 3: Intent Detection — Watching Behavior, Not Clicks

What it does: The Guidance Layer monitors what the user is doing and what they appear to be trying to accomplish, then decides whether a suggestion would be helpful. What the system watches:
  • Conversation direction — is the chat drifting into a new topic that might benefit from a separate Instance?
  • Chat length — is the conversation getting long enough that cleanup tools would help?
  • Repeated patterns — is the user doing the same kind of task repeatedly, suggesting they’d benefit from automation or a dedicated Persona?
  • Out-of-scope requests — is the user asking a Persona to do something outside its Skill Slots, suggesting they need a specialist?
  • Complexity signals — is the user describing something that would benefit from a whiteboard, canvas, or structured document rather than chat?
  • Search-like behavior — is the user asking factual, lookup-style questions that would be better served by the search system?
What the system does NOT watch:
  • Button clicks or UI navigation (that would be a tooltip system, not adaptive guidance)
  • Time spent on screen (that would be engagement tracking, not intent detection)
  • Feature usage metrics (that would be analytics, not user assistance)
The critical distinction: This is about understanding what the user wants to accomplish and suggesting the best way to accomplish it — not about tracking what features they’ve used or haven’t used.

FEATURE 4: The Suggestion Delivery — Soft, Dismissible, State-Aware

What it does: Defines the behavioral contract for how suggestions are delivered to the user. This is where the system either earns trust or becomes annoying. Three non-negotiable rules for all adaptive guidance suggestions:

Rule 1: Soft (Suggestive, Never Corrective)

The system suggests. It never tells the user what to do, and it never implies they’re doing something wrong. Right: “It sounds like you want a specialist here. Want me to bring one in?” Wrong: “You should create a Persona for this task.” Wrong: “This would work better if you used Instances.” The suggestion is an offer, not an instruction. The tone is helpful, not educational.

Rule 2: Dismissible Forever (“Don’t Ask Me Again”)

Every suggestion must be dismissible — permanently if the user wants. If a user dismisses a suggestion, the system must respect that decision. Not just for this session — forever (or until the user explicitly asks about the feature). What “dismissible forever” means technically:
  • The suggestion has a “Don’t show this again” option
  • Once dismissed permanently, the system stores that preference
  • The same type of suggestion never appears again for this user
  • The user can re-enable dismissed suggestions in settings if they change their mind

Rule 3: State-Aware (Don’t Repeat Once Declined)

If the user ignores a suggestion, the system interprets that as: “Not now — maybe later — or maybe never.” And then backs off. The system does NOT:
  • Re-suggest the same thing next time the user does the same action
  • Escalate the suggestion to a more prominent format
  • Add urgency or frequency to get the user’s attention
The anti-nagware principle: The fastest way to ruin this system would be to turn it into nagware. One suggestion, offered once, at the right moment, with a clear dismiss option. That’s it. The system earns trust by being restrained, not persistent.

FEATURE 5: Re-Education Without Lecturing (The Hidden Superpower)

What it does: The Adaptive Guidance Layer doesn’t just teach users features — it quietly re-educates them away from the “all-knowing AI” expectation that other platforms have conditioned into them. The problem being solved: Users come to aiConnected from ChatGPT, Claude, Gemini, etc. — platforms that present AI as a single, omniscient entity. These users expect one AI that does everything. aiConnected is designed around specialized Personas, bounded skill sets, and collaborative teams. Without some form of re-education, users will be frustrated by the very thing that makes aiConnected better. How adaptive guidance re-educates (without the user realizing it):
| User Behavior | Guidance Suggestion | What They Learn |
| --- | --- | --- |
| Asking one Persona to do everything | “It sounds like you want a specialist here. Want me to bring one in?” | That specialization is normal and expected |
| Keeping all chats in one place | “This conversation is drifting. Want to split it so each idea stays clean?” | That organization (Instances) makes the AI smarter |
| Pushing a Persona beyond its skills | “That’s outside my current scope. Want me to help temporarily, or shall we create a specialist?” | That boundaries are a feature, not a limitation |
| Never creating Personas | “I notice you do a lot of legal work. Want me to create a dedicated legal assistant who remembers your preferences?” | That Personas compound value over time |
Why this is rare and powerful: By suggesting specialized Personas, scoped Instances, and feature activation based on intent, the system re-educates users without lecturing them. They feel the boundaries instead of being told about them. They discover the platform’s philosophy through experience, not documentation.

FEATURE 6: Why Traditional Tutorials Would Be Hypocritical

What it does: This is a design rationale, not a feature — but it’s important enough to document explicitly because it prevents future teams from reverting to traditional onboarding. The argument: aiConnected’s entire philosophy is built on:
  • Personas over monoliths
  • Capability through intent
  • Power without intimidation
  • Intelligence adapting to the user, not the user adapting to intelligence
If the FIRST experience a user has with aiConnected is a forced tutorial that makes them click through every feature before they can start working, the platform’s philosophy is betrayed before they ever experience it. A forced walkthrough says: “You need to learn this system before you can use it.” aiConnected’s philosophy says: “Start doing what you want. The system will adapt.” This approach doesn’t just avoid friction — it quietly teaches users how to think in the system. That is the highest form of onboarding there is. The net assessment from the source document:
  • More humane than tutorials
  • More scalable than documentation
  • More respectful than walkthroughs
  • More aligned with how power users actually behave

FEATURE 7: Feature-Specific Guidance Triggers

What it does: Maps specific user behaviors to the features that the Adaptive Guidance Layer should suggest. This is the implementation specification — the “when to suggest what” matrix. Note: This list is illustrative, not exhaustive. The system should be designed to support adding new triggers as features are built.

Chat Management Triggers

| User Behavior | Suggested Feature | Outcome-First Prompt |
| --- | --- | --- |
| Chat exceeds ~50 messages | Chat Cleanup (Doc 11) | “This chat is getting long. Want help organizing or cleaning it up?” |
| Multiple topics in one chat | Instance creation / Chat splitting | “This conversation covers several topics. Want to split it so each stays focused?” |
| User hasn’t organized chats in 30+ days | Smart Cleanup Filters (Doc 11) | “You have some older chats that might be worth reviewing. Want me to surface the ones that are probably safe to clean up?” |

Persona & Skill Triggers

| User Behavior | Suggested Feature | Outcome-First Prompt |
| --- | --- | --- |
| Asking one Persona tasks from multiple domains | Specialist Persona | “It sounds like you need expertise in [domain]. Want me to bring in a specialist?” |
| Persona hitting skill boundaries repeatedly | Skill Slot management | “I keep running into areas outside my skills. Want to give me a new skill, or create a dedicated [domain] Persona?” |
| User doing the same type of work across multiple Instances | Persona template | “You do a lot of [type] work. Want me to create a reusable Persona template for it?” |

Workspace & Organization Triggers

| User Behavior | Suggested Feature | Outcome-First Prompt |
| --- | --- | --- |
| User working on a clearly scoped project in General Chat | Instance creation | “This looks like a real project. Want to give it its own workspace so everything stays together?” |
| Multiple chats about the same client/project | Instance with folders (Doc 4) | “You’ve been chatting about [client] a lot. Want to group everything into one place?” |
| User searching for past conversations repeatedly | Pin / bookmark features (Doc 7) | “You keep coming back to this info. Want to pin it so it’s always easy to find?” |

Advanced Feature Triggers

| User Behavior | Suggested Feature | Outcome-First Prompt |
| --- | --- | --- |
| User describing visual/spatial ideas in text | Canvas / Whiteboard (Doc 5) | “This might be easier to see as a diagram. Want me to map it out?” |
| User asking lookup-style questions | Search mode (Doc 15) | “Sounds like you’re looking for something specific. Want me to switch to search?” |
| User requesting complex multi-step work | Agentic Teams (Doc 15) | “This is a big job. Want me to put together a team that can handle the different pieces?” |

FEATURE 8: Progressive Disclosure Architecture

What it does: Establishes that the Adaptive Guidance Layer is the implementation mechanism for the platform’s progressive disclosure philosophy. Features aren’t hidden — they’re revealed when relevant. How progressive disclosure maps to user maturity:

New User (Day 1-7)

  • What they see: A clean chat interface. Minimal UI. Just start talking.
  • What guidance does: Suggests Instances when conversations drift, suggests Personas when tasks get specialized, suggests cleanup when chats get long.
  • Feature exposure: ~15-20% of total platform capability

Growing User (Week 2-4)

  • What they see: Multiple Instances, a couple of Personas, organized chats.
  • What guidance does: Suggests folders within Instances, suggests Skill Slot management for Personas, suggests search for information retrieval, suggests canvas for visual thinking.
  • Feature exposure: ~40-50% of total platform capability

Power User (Month 2+)

  • What they see: Formal Persona teams, strict role separation, explicit skill management, agentic workflows.
  • What guidance does: Mostly silent. May occasionally surface new features from platform updates. Power users discover via settings and explicit exploration.
  • Feature exposure: ~80-100% of total platform capability
The key insight: The same platform serves all three users. The difference isn’t feature access — it’s feature visibility. Nothing is locked. Everything is available. But only the relevant bits are surfaced at any given moment.

FEATURE 9: The AI as the Guide (Not a Separate Tutorial System)

What it does: Makes the Persona itself the delivery mechanism for adaptive guidance, rather than building a separate tutorial/tooltip system. Why this is important: In most products, tutorials are a separate system — popups, tooltips, help centers, onboarding wizards — that exist outside the core product experience. In aiConnected, the Persona IS the interface. So the Persona should be the guide. How it works:
  • The Persona notices the user struggling or heading toward a feature opportunity
  • The Persona makes the suggestion naturally, as part of conversation
  • The user responds conversationally (“yeah, do that” or “no thanks”)
  • No popups, no tooltips, no modal dialogs, no separate onboarding UI
Example flow:
User: I've been working on this legal stuff all week with Sally, 
      but I really need someone who knows contracts better.

Sally: I've noticed you've been doing a lot of legal work lately. 
       That's outside my core skills, and I want you to get the best 
       help possible. Want me to help you set up a Legal Persona 
       who specializes in contract writing? They'd remember all 
       your preferences and get better over time.

User: Yeah, let's do that.

[System initiates Persona creation flow]
No tooltip. No walkthrough. No “Did you know?” popup. Just a natural conversation that leads to feature discovery.

FEATURE 10: Anti-Patterns — What the Guidance Layer Must Never Do

What it does: Defines explicit anti-patterns that would destroy the system’s effectiveness. These are hard rules, not guidelines.

Anti-Pattern 1: Feature Bombardment

Never suggest multiple features in a single message. One suggestion, one moment, one decision. Wrong: “I notice you could use Personas, Instances, AND the Canvas. Want me to set all three up?” Right: [Wait for the most impactful moment] “This looks like it could use its own workspace. Want to create one?”

Anti-Pattern 2: Premature Suggestion

Never suggest a feature before the user has actually encountered the need for it. Wrong: [User’s first message] “Welcome! Did you know you can create Personas, organize Instances, and use the Canvas?” Right: [After 15 minutes of conversation drifting] “This conversation covers several topics. Want to split it?”

Anti-Pattern 3: Guilt Tripping

Never imply the user is doing something wrong by not using a feature. Wrong: “You haven’t created any Personas yet. Most users find them helpful.” Right: [When the moment arises naturally] “Want me to bring in a specialist for this?”

Anti-Pattern 4: Repetition After Dismissal

Never re-suggest something the user has already declined. Not in different words. Not with a different framing. Not after a time delay. Wrong: [User dismissed Persona suggestion last week] “Have you thought about creating a Persona? They’re really useful!” Right: [Permanently dismiss this suggestion type. Wait for the user to ask about Personas themselves.]

Anti-Pattern 5: Breaking Flow

Never interrupt a user’s active work to make a suggestion. Wait for natural pauses — between messages, between tasks, at the start of a new conversation. Wrong: [User is mid-paragraph typing a complex request] [popup appears: “Try using Canvas!”] Right: [User finishes their request. System responds to the request first. Then, at the end:] “By the way, this might be easier to visualize. Want me to open the Canvas?”

Data Model

type GuidanceTrigger = {
  id: string;
  trigger_type: 'chat_length' | 'topic_drift' | 'skill_boundary' | 'repeated_pattern' | 
                'complexity_signal' | 'search_behavior' | 'time_based' | 'custom';
  condition: {
    metric: string;           // e.g., "message_count", "topic_count", "skill_miss_count"
    threshold: number;        // e.g., 50, 3, 2
    context_filter?: string;  // e.g., "same_instance", "same_persona"
  };
  suggested_feature: string;  // e.g., "chat_cleanup", "persona_creation", "instance_split"
  prompt_template: string;    // outcome-first language template
  priority: 'low' | 'medium' | 'high';
};
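A concrete instance of this type, for the "~50 messages" chat-length rule from Feature 7, might look like the following sketch (the `id` value is illustrative; the prompt text is taken from the trigger table):

```typescript
// Illustrative GuidanceTrigger instance plus the trivial threshold check
// its condition implies. Only the condition fields needed here are typed.
type Condition = { metric: string; threshold: number };

const chatLengthTrigger = {
  id: 'trigger_chat_length',            // hypothetical id
  trigger_type: 'chat_length' as const,
  condition: { metric: 'message_count', threshold: 50 } as Condition,
  suggested_feature: 'chat_cleanup',
  prompt_template: 'This chat is getting long. Want help organizing or cleaning it up?',
  priority: 'medium' as const,
};

// A trigger fires only when the observed metric crosses its threshold.
function conditionMet(c: Condition, metrics: Record<string, number>): boolean {
  return (metrics[c.metric] ?? 0) >= c.threshold;
}
```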

type GuidanceDismissal = {
  user_id: string;
  trigger_id: string;
  dismissed_at: string;
  dismiss_type: 'once' | 'forever';
};

type GuidanceState = {
  user_id: string;
  triggers_fired: string[];              // which triggers have been shown
  triggers_dismissed_forever: string[];   // which triggers are permanently dismissed
  triggers_accepted: string[];            // which triggers led to feature adoption
  last_suggestion_at: string | null;      // rate limiting: don't suggest too often
  cooldown_minutes: number;               // minimum gap between suggestions (default: 30)
};
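The state fields above imply a simple gate that must pass before any suggestion is shown: a trigger is suppressed if it was permanently dismissed, and at most one suggestion can fire per cooldown window. A sketch (the function name `maySuggest` is ours):

```typescript
// Sketch of the suggestion gate implied by GuidanceState: respect
// permanent dismissals and the cooldown between suggestions.
type State = {
  triggers_dismissed_forever: string[];
  last_suggestion_at: string | null;   // ISO timestamp, null if never shown
  cooldown_minutes: number;
};

function maySuggest(state: State, triggerId: string, now: Date): boolean {
  // Rule: a permanently dismissed trigger never fires again.
  if (state.triggers_dismissed_forever.includes(triggerId)) return false;
  if (state.last_suggestion_at === null) return true;
  // Rule: minimum gap between any two suggestions (default 30 minutes).
  const elapsedMin =
    (now.getTime() - new Date(state.last_suggestion_at).getTime()) / 60_000;
  return elapsedMin >= state.cooldown_minutes;
}
```

This gate runs before trigger conditions are even evaluated, which is what makes the "dismissed forever really means forever" guarantee cheap to enforce.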

type GuidanceEvent = {
  id: string;
  user_id: string;
  trigger_id: string;
  suggested_feature: string;
  prompt_text: string;
  shown_at: string;
  response: 'accepted' | 'dismissed_once' | 'dismissed_forever' | 'ignored';
  context: {
    current_instance_id?: string;
    current_persona_id?: string;
    chat_message_count?: number;
    session_duration_minutes?: number;
  };
};

API Endpoints

# Guidance System
GET    /guidance/triggers                    # List all active triggers for a user (filtered by dismissals)
POST   /guidance/evaluate                    # Evaluate current context against triggers; returns 0 or 1 suggestion
POST   /guidance/dismiss                     # Dismiss a trigger (once or forever)
POST   /guidance/accept                      # Record that user accepted a suggestion
GET    /guidance/state                       # Current guidance state for user
PATCH  /guidance/settings                    # Update cooldown, re-enable dismissed triggers, etc.

# Analytics (Internal)
GET    /guidance/analytics/adoption          # Which features are most adopted via guidance
GET    /guidance/analytics/dismissals        # Which triggers are most dismissed (may indicate bad triggers)
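The core of `POST /guidance/evaluate` can be sketched as a pure function: filter out permanently dismissed triggers, honor the cooldown, and return at most one suggestion. This is a minimal sketch under assumptions — the function name `evaluateGuidance`, the priority ranking, and the metrics map are illustrative, not part of the spec.

```typescript
// Illustrative types, trimmed from the GuidanceTrigger/GuidanceState data model above.
type Trigger = {
  id: string;
  priority: 'low' | 'medium' | 'high';
  condition: { metric: string; threshold: number };
};

type State = {
  triggers_dismissed_forever: string[];
  last_suggestion_at: string | null; // ISO timestamp
  cooldown_minutes: number;
};

const PRIORITY_RANK = { high: 3, medium: 2, low: 1 };

// Returns at most ONE trigger: filter dismissals, honor the cooldown,
// then pick the highest-priority trigger whose metric crossed its threshold.
function evaluateGuidance(
  triggers: Trigger[],
  state: State,
  metrics: Record<string, number>,
  now: Date = new Date()
): Trigger | null {
  if (state.last_suggestion_at) {
    const elapsedMin =
      (now.getTime() - new Date(state.last_suggestion_at).getTime()) / 60_000;
    if (elapsedMin < state.cooldown_minutes) return null; // still cooling down
  }
  const eligible = triggers
    .filter(t => !state.triggers_dismissed_forever.includes(t.id))
    .filter(t => (metrics[t.condition.metric] ?? 0) >= t.condition.threshold)
    .sort((a, b) => PRIORITY_RANK[b.priority] - PRIORITY_RANK[a.priority]);
  return eligible[0] ?? null; // 0 or 1 suggestion, per the endpoint contract
}
```

Returning `null` whenever the cooldown is active keeps the "one suggestion, one moment" rule enforceable in one place rather than scattered across callers.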

Implementation Principles

  1. Outcomes, not features. Every suggestion must describe what the user will be able to do, not what the feature is called. If a developer writes a guidance prompt that names a feature, it should be rejected in code review.
  2. One suggestion, one moment. Never batch suggestions. Never show two things at once. The cognitive load must remain near zero.
  3. Persona delivers the guidance. Suggestions come from the active Persona, as natural conversational messages — not from a separate “system” or “tutorial engine.” There is no visible guidance UI.
  4. Dismissals are permanent and respected. The dismiss_forever option must work flawlessly. If a user ever sees a suggestion they permanently dismissed, it’s a trust-breaking bug.
  5. Cooldown between suggestions. Minimum 30 minutes (configurable) between guidance suggestions. Even if the user triggers three different features in 10 minutes, they should only see one suggestion. Queue the rest for later.
  6. State-aware, not stateless. The system must track what it has suggested, what was accepted, what was dismissed, and what was ignored. It should get smarter over time — if a user consistently ignores Persona suggestions, stop suggesting Personas.
  7. Never interrupt active work. Suggestions appear at natural pauses: after a response, at the start of a new message, at a session boundary. Never mid-typing, mid-generation, or mid-task.
  8. The system gets quieter over time. As users discover features (whether through guidance or on their own), the Guidance Layer should have less and less to suggest. A mature user should almost never see guidance prompts — the system should feel silent.
  9. This replaces documentation for most users. The Guidance Layer is not supplementary — it IS the onboarding system. A help center should exist for power users who want to explore, but most users should never need it.
  10. Hypocritical onboarding is worse than no onboarding. If the platform’s philosophy is “intelligence adapts to you,” then the onboarding must also adapt to you. A forced tutorial would undermine the product’s core promise before the user ever experiences it. This principle is non-negotiable.
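Principle 6 ("state-aware, not stateless") can be sketched as a suppression check over the GuidanceEvent history. The ignore cutoff of 3 and the helper name `shouldSuppressFeature` are assumptions for illustration; the spec only says the system "should get smarter over time."

```typescript
// Trimmed view of GuidanceEvent for this sketch.
type GuidanceEventLite = {
  suggested_feature: string;
  response: 'accepted' | 'dismissed_once' | 'dismissed_forever' | 'ignored';
};

const IGNORE_LIMIT = 3; // assumed cutoff; tune from the dismissal analytics endpoint

function shouldSuppressFeature(history: GuidanceEventLite[], feature: string): boolean {
  const forFeature = history.filter(e => e.suggested_feature === feature);
  // Any acceptance re-opens the feature for future suggestions.
  if (forFeature.some(e => e.response === 'accepted')) return false;
  // dismiss_forever is absolute (principle 4).
  if (forFeature.some(e => e.response === 'dismissed_forever')) return true;
  // Repeated ignores are a softer "no": stop suggesting after IGNORE_LIMIT.
  const ignores = forFeature.filter(e => e.response === 'ignored').length;
  return ignores >= IGNORE_LIMIT;
}
```

This also delivers principle 8 for free: as more features are either accepted or repeatedly ignored, fewer triggers survive the filter and the system naturally gets quieter.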

Document 14: Build Plan Review

Junior Developer Breakdown

Source: 14. aiConnected OS Build plan review.md Created: 12/20/2025 | Updated: 12/20/2025

Why This Document Exists

The Problem (Planning Phase Is Over — Now What?): After 13 documents of detailed feature planning across Instances, Personas, chat systems, memory management, cleanup tools, skill slots, and adaptive UI — the question becomes: how do you actually turn all of this into a shippable product? What gets built first? What depends on what? Where are the risks? This document is the answer.

What This Document Solves: The founder asked the GPT to review everything planned so far and produce two things: (1) an ordered build plan that sequences the work to reduce rework, and (2) an honest assessment of the system’s strengths and risks. The result is a comprehensive implementation roadmap with 7 build phases, a complete master feature & capability list organized into 10 sections, and a critical analysis of what will make or break the product.

Why Anyone Should Care: This is the document that turns design into engineering. Every previous document defined what to build. This document defines how to build it, in what order, and why that order matters. For a junior developer, this is the map from “I’ve read the specs” to “I know what to code first.”

Cross-References: This document references and synthesizes ALL previous documents:
  • Doc 1 (Spaces Dashboard) → Instance Dashboard (Phase 3)
  • Doc 2 (Task Feature) → Future extensibility
  • Doc 3 (Live Document) → Document Surface capability
  • Doc 4 (Folder System) → Chat Navigation & Organization
  • Docs 6-7 (Chat Filters, Pin Messages) → Chat Thread capabilities
  • Doc 8 (Cognition Console) → Persona/Memory data model
  • Doc 9 (Collaborative Personas) → Multi-Persona Chat (Phase 4)
  • Doc 10 (Computer Use) → Future surface capability
  • Doc 11 (Chat Cleanup) → Bulk Operations (Phase 5)
  • Doc 12 (Skill Slots) → Persona Skill Slots UI (Phase 6)
  • Doc 13 (Adaptive UI Tutorials) → “UI teaches by interaction” principle

SECTION 1: System Summary (What Has Been Designed)

What it does: Distills the entire aiConnected platform design into six core differentiators that distinguish it from standard AI chat apps. The Build Plan review identified these as the foundation the product is built on:

Differentiator 1: Dashboard-First “Instance”

Instances (like a Project/Space) are the home where chat happens. This includes a persistent “open forum” chat area. Unlike ChatGPT/Claude where threads are disconnected, Instances create cohesive workspaces.

Differentiator 2: Constrained Personas

Personas have skill slots and capability limits to prevent the “all-knowing AI” expectation and reduce hallucination pressure. This is a structural solution, not a prompt-level solution.

Differentiator 3: Cipher as God Layer

Cipher is the powerful, unrestricted orchestration layer hidden from general users — used for routing, orchestration, and oversight. Users never interact with Cipher directly.

Differentiator 4: Collaborative Chats

One chat can involve multiple Personas, with Cipher supervising. Response routing can be automatic (Cipher decides), manual (user picks), or hybrid (Cipher suggests, user confirms).

Differentiator 5: First-Class Chat Management

Clean up chats, multi-select, move chats between Personas/Instances, and similar bulk actions for memories. This is the “once you have it, you can’t go back” feature set.

Differentiator 6: Expectation Management is Central

The UI and rules teach users that “any Persona can be great at some things, none can do everything.” Constraints feel like clarity, not limitation.

SECTION 2: The Build Plan — 7 Phases, Ordered to Reduce Rework

What it does: Defines the exact sequence in which features should be built, ordered to minimize rework and ensure each phase builds cleanly on the previous one. Critical principle: The suggested shipping order is Phases 2+3 first (Chat Kernel + Instance Dashboard), then Phase 4 (Collaborative Personas), then Phase 5 (Bulk Cleanup), then Phase 6 (Skill Slots). This keeps the team from “spending weeks perfecting guardrails before the core UX exists.”

Phase 1: Lock the Product Contract (Schemas + Permissions)

What: Define the data model and permissions before any UI polish. This prevents redesign later. Core Entities to Define:
  • Instance — dashboard/workspace container
  • Persona — with skill slots, limits, identity, policy
  • ChatThread — belongs to Instance; can be private-to-Persona or collaborative
  • Message — role, author (Persona/system/Cipher), attachments, tool calls
  • MemoryItem — scoped to Persona and/or Instance; with states: active/archived/deleted
  • Move/BatchAction — audit record for multi-select operations
Permissions + Scopes to Define:
  • What a Persona can see/do inside an Instance
  • What Cipher can override
  • What “private Persona chat” vs “Instance forum chat” means in storage and UI
Deliverable: A small internal spec that the UI and backend both follow. Why this is Phase 1: Everything else depends on the data model being stable. If you start building UI before the schema is locked, you’ll redesign multiple times as edge cases surface.

Phase 2: Build the Chat Kernel (Everything Depends on This)

What: The reusable chat engine that powers every chat surface in the system — Instance forum chats, private Persona chats, and collaborative multi-Persona chats. Chat Kernel Features:
  • Message list rendering (streaming-ready)
  • Composer with attachments + tool output blocks
  • Participant bar (which Personas are in this thread; who’s “speaking”)
  • System messages for capability limits (“I can’t do X; I can do Y” style)
  • Thread metadata (title, tags, pinned items)
Deliverable: One working chat surface that can be embedded anywhere. Why this is Phase 2 (and the most critical phase): The Chat Kernel is “a product inside the product.” If you build it cleanly — thread-agnostic, streaming-ready, supports multi-author — everything else becomes composition instead of reinvention. If you build it poorly, every subsequent phase requires workarounds. The make-or-break decision: Treat the Chat Kernel as the single most important engineering deliverable. Get it right, and the rest of the product is composition. Get it wrong, and you’re rebuilding it in every subsequent phase.
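The "thread-agnostic, streaming-ready, multi-author" contract can be made concrete as a single surface interface that all three chat types satisfy. This is a sketch under assumptions — the names `ChatKernelProps` and `renderAttribution` are illustrative; the document does not prescribe a component API.

```typescript
// Multi-author support: the kernel renders Users, Personas, and system/Cipher
// messages with distinct attribution, so collaborative chat is the same
// surface with more author kinds, not a fork.
type Author =
  | { kind: 'user' }
  | { kind: 'persona'; personaId: string; name: string }
  | { kind: 'system' };

type KernelMessage = { id: string; author: Author; content: string };

// One contract for every chat surface: forum, private, and collaborative
// differ only in configuration, never in implementation.
type ChatKernelProps = {
  threadType: 'forum' | 'private' | 'collaborative';
  messages: KernelMessage[];
  onSend: (text: string) => void;
  // Streaming-ready: the host appends partial tokens to an in-flight message.
  onToken?: (messageId: string, token: string) => void;
};

function renderAttribution(author: Author): string {
  switch (author.kind) {
    case 'user': return 'You';
    case 'persona': return author.name;
    case 'system': return 'System';
  }
}
```

Keeping attribution in one function is what makes Phase 4 "composition instead of reinvention": adding Cipher-authored messages means adding an author kind, not a new chat surface.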

Phase 3: Implement the Instance Dashboard

What: The “home base” that makes aiConnected feel fluid, not just a list of chats. Dashboard Layout:
  • Left panel: Instances list
  • Inside Instance:
    • Threads list with filters (forum, private, collaborative)
    • Persistent “Open Forum” chat panel (always accessible)
    • Persona panel (available Personas + their skill slots/limits)
    • Quick actions: New chat, Add Persona to chat, Move chats
Deliverable: User can live inside an Instance and operate naturally without hunting. Why Phase 3: This is where the user experience diverges from ChatGPT. Without the Dashboard, aiConnected is just another chat app. With it, users have a workspace they can organize and manage.

Phase 4: Collaborative Personas + Cipher Oversight

What: Multi-Persona chat where multiple AI Personas participate in a single conversation, with Cipher orchestrating behind the scenes. Mechanics:
  • Add/remove Personas mid-thread
  • Explicit “who answers next” control:
    • Auto-routing (Cipher chooses)
    • Manual routing (user picks Persona)
    • Hybrid routing (Cipher suggests, user confirms)
  • Cipher “supervision” mode:
    • Silent router (default)
    • Visible moderator (optional, depending on tier)
Deliverable: Chats feel like a team, not a single bot. Why Phase 4: This is where the system separates from every competitor. But it depends on the Chat Kernel (Phase 2) being solid and the Instance Dashboard (Phase 3) providing the workspace context.

Phase 5: Chat Cleanup + Bulk Operations

What: The “power user advantage” feature set that solves the founder’s core complaint about existing platforms. Chat Cleanup:
  • Multi-select threads
  • Move threads to another Persona (re-scope ownership) or another Instance
  • Archive / delete with “Recently Deleted”
  • Search + filters + date ranges
Memory Cleanup:
  • Multi-select memory items
  • Archive/delete/recover
  • “Why is this memory here?” visibility (source thread/message)
Deliverable: Users can reorganize reality as their projects evolve. Why Phase 5: This is the differentiator. But it only matters once users have enough chats and memories to manage — which is why it comes after the core chat and dashboard experience.

Phase 6: Persona Skill Slots UI

What: Making capability constraints visible and usable in the product, so users understand and benefit from bounded Personas. UI Elements:
  • Persona “skill slot cards” — visible panels showing what the Persona does and doesn’t do
  • Request guardrails — inline warnings when a request exceeds Persona scope, plus suggested reroute to a better Persona
  • “Capability receipts” in responses — brief statement of assumptions + known limits when relevant
  • “What this Persona can help with” panels — accessible from the Persona profile
  • “Why this request was refused” explanations — shown when a Persona declines
Deliverable: The UI trains the user without lecturing them. Why Phase 6: Skill Slots are philosophically critical, but the UI enforcement should be tuned based on real user behavior. Ship the core experience first, observe what users misunderstand, then refine the guardrails based on actual usage patterns.

Phase 7: Production Hardening

What: The engineering work that makes the product reliable, auditable, and enterprise-ready. Production Features:
  • Streaming reliability + retry logic
  • Message ordering guarantees
  • Partial failure recovery
  • Audit logs (moves/deletes, Cipher interventions)
  • Telemetry: reroute rate, “I don’t know” rate, hallucination reports, time-to-resolution per thread type
  • RBAC for business/enterprise
  • Export/backup per Instance
Deliverable: Stable, defensible product behavior. Why Phase 7: These are critical for a real product but shouldn’t slow down the core UX development. Build hardening in parallel with later phases once the architecture is stable.

SECTION 3: Master Feature & Capability List

What it does: Provides a complete, exhaustive inventory of every feature and capability organized into 10 sections. This is the definitive reference for what the product includes.

Section 1: Core Structural Concepts

1.1 Instance (Workspace/Dashboard)
  • Instance = primary container for Personas, Chats, Memories, Tools & permissions
  • One user can have multiple Instances
  • Instances are isolated by default
  • Can be Personal, Business, or Team/Collaborative (future-ready)
  • CRUD: Create / rename / archive / delete
  • Instance-level settings, permissions, activity history
1.2 Personas
  • Bounded digital roles, not omniscient agents
  • Each has: Identity (name, description), defined purpose, skill slots, explicit limitations, memory scope
  • CRUD: Create within Instance, edit identity & purpose, assign/remove skill slots, define hard limits, enable/disable, delete/archive
  • Persona visibility controls (private vs shared)
1.3 Cipher (System-Level Orchestrator)
  • Unrestricted supervisory layer, NOT a normal Persona
  • Can be: Invisible (silent routing), Semi-visible (system notes), Visible (explicit moderator)
  • Capabilities: Route requests, enforce Persona constraints, detect capability mismatch, prevent hallucination via refusal/escalation, mediate multi-Persona conversations, generate system messages, audit actions invisibly

Section 2: Chat System (Chat Kernel)

2.1 Chat Threads
  • Chats exist inside Instances
  • Three types: Instance Forum Chat (persistent, shared), Private Persona Chat, Collaborative Multi-Persona Chat
  • Capabilities: Create, rename, auto-generate titles, tag, pin, archive, delete, restore from Recently Deleted
2.2 Messages
  • Support multiple authors: User, Persona, Cipher (system)
  • Messages are immutable once sent (edited copies allowed later)
  • Capabilities: Streaming responses, system messages, tool output blocks, structured content blocks (lists/tables/code), attachments (files/links/references), message-level metadata, message-level citations (future)
2.3 Chat Composition
  • Unified message composer across all chat types
  • Capabilities: Text input, attachments, tool-triggered input, Persona targeting (“ask X”), multi-Persona addressing, draft persistence, cancel/stop generation, regenerate last response

Section 3: Instance Dashboard Experience

3.1 Persistent Open Forum Chat
  • Always available inside the Instance
  • Serves as brainstorming space, general discussion, entry point to new threads
  • Capabilities: Persistent history, add Personas dynamically, fork into dedicated chat, promote messages to memory
3.2 Chat Navigation & Organization
  • Chat list scoped to Instance
  • Capabilities: Search chats, filter by Persona/chat type/date/tags, sort chats, bulk select, drag-and-drop (optional)
3.3 Persona Panel
  • Visual list of available Personas in the Instance
  • Capabilities: View Persona skill slots, view limits, activate/deactivate Personas, add Persona to chat, start private chat with Persona

Section 4: Collaborative & Multi-Persona Chat

4.1 Multi-Persona Participation
  • Multiple Personas can exist in a single thread
  • Capabilities: Add/remove Persona mid-conversation, view active participants, see who authored each response
4.2 Response Routing
  • Three modes: Automatic routing (Cipher decides), Manual routing (user selects Persona), Hybrid routing (Cipher suggests, user confirms)
  • Explicit “Persona turn-taking”
  • Persona refusal handling with explanation
4.3 Persona Awareness
  • Personas know who else is present, but not internal system logic
  • Context awareness of other Personas’ responses, non-overlapping responses, clarification requests between Personas (if allowed)
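The three routing modes in 4.2 can be sketched as one decision function. The names `routeRequest` and `RoutingDecision` are assumptions, and Cipher's actual selection logic is not specified, so a caller-supplied picker stands in for it.

```typescript
type RoutingMode = 'auto' | 'manual' | 'hybrid';

type RoutingDecision = {
  personaId: string;
  needsUserConfirmation: boolean; // true only in hybrid mode
};

function routeRequest(
  mode: RoutingMode,
  participantIds: string[],
  cipherPick: (ids: string[]) => string, // stand-in for Cipher's router
  userPick?: string                      // required in manual mode
): RoutingDecision {
  switch (mode) {
    case 'manual':
      if (!userPick) throw new Error('manual routing requires a user selection');
      return { personaId: userPick, needsUserConfirmation: false };
    case 'auto':
      return { personaId: cipherPick(participantIds), needsUserConfirmation: false };
    case 'hybrid':
      // Cipher suggests; the UI must surface a confirm step before sending.
      return { personaId: cipherPick(participantIds), needsUserConfirmation: true };
  }
}
```

Modeling "Cipher suggests, user confirms" as a flag on the decision (rather than a separate code path) keeps all three modes on the same turn-taking pipeline.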

Section 5: Skill Slots & Capability Constraints

5.1 Skill Slots
  • Fixed number of slots per Persona
  • Slot categories (writing, analysis, coding, planning, etc.)
  • Slot descriptions, slot-level limits, slot-level confidence signaling
5.2 Capability Enforcement
  • Requests validated before execution
  • Inline warnings for out-of-scope requests
  • Persona refusal with explanation
  • Suggested reroute to another Persona
  • Cipher escalation for ambiguous cases
5.3 User Education via UI
  • Constraints are visible, not hidden
  • “What this Persona can help with” panels
  • “Why this request was refused” explanations
  • Suggested Persona matching

Section 6: Memory System (Chat-Integrated)

6.1 Memory Items
  • Structured artifacts, not raw chat logs
  • Created from: messages, chat summaries, user input
  • Memory metadata: source, date, Persona
6.2 Memory Scope
  • Memory can belong to: Persona, Instance, System (Cipher-only)
  • Scope assignment, visibility controls, read-only vs editable
6.3 Memory Management
  • First-class UI, not hidden automation
  • Browse, search, filter, multi-select memories
  • Archive, delete, restore from Recently Deleted
  • “Why this memory exists” visibility (source thread/message)

Section 7: Bulk Actions & Cleanup

7.1 Chat Bulk Operations
  • Multi-select chats, move between Personas, move between Instances
  • Archive/delete multiple chats, undo/recover actions
7.2 Memory Bulk Operations
  • Multi-select memory items, archive/delete/recover
  • Move memory scope, export memory (future)

Section 8: System Transparency & Trust

8.1 System Feedback
  • System notes (non-intrusive), capability mismatch explanations
  • Routing explanations (when enabled), confidence disclaimers (optional)
8.2 Audit & History
  • Action logs (moves, deletes, reroutes)
  • Cipher decision logs (internal)
  • User-visible change history (limited)

Section 9: Reliability & Production Features

9.1 Performance & Stability
  • Streaming resilience, retry logic, message ordering guarantees, partial failure recovery
9.2 Telemetry & Metrics
  • Hallucination refusal rate, Persona reroute rate, time-to-resolution per chat
  • Persona utilization stats, user correction frequency

Section 10: Extensibility & Future-Proofing

10.1 Tools & Integrations (Future-Ready)
  • Tool call blocks, external service hooks, file processors, API-triggered messages
10.2 Enterprise & Team Readiness
  • Role-based access control, shared Instances, Persona sharing
  • Compliance-friendly logs, data export

SECTION 4: System Assessment — Strengths

What it does: Provides the honest evaluation of what’s strong about the system design.

Strength 1: Structural Solution to AI’s Core Failure Mode

Users expect omniscience; models respond with confident nonsense. Skill Slots + constrained Personas is a structural solution, not a prompt solution. This is fundamentally different from every other platform’s approach.

Strength 2: Cipher-as-Orchestrator Is the Right Abstraction

It lets you keep “god power” for routing, safety, and quality without exposing that capability as the default user experience. Users get the benefits of powerful orchestration without the risks of direct access.

Strength 3: Dashboard-First Is Correct for Long-Running Work

Threads alone don’t map to how real projects evolve. Instances provide the organizational structure that makes AI useful for ongoing work, not just one-off questions.

Strength 4: Bulk Move/Cleanup Is Underrated

This will become one of those “once you have it, you can’t go back” features. No competitor offers this level of chat and memory management.

SECTION 5: System Assessment — Risks

What it does: Identifies the main risks that could prevent the system from succeeding, even if built correctly.

Risk 1: Complexity Creep in the Mental Model

The risk: If users don’t instantly understand what an Instance is, what a Persona is, why some Personas can’t do certain things, and when Cipher is involved, they’ll feel friction. Why this matters: The system has a lot of concepts. Instance, Persona, Skill Slot, Cipher, Memory, Forum Chat, Private Chat, Collaborative Chat — that’s 8+ new concepts before a user even sends their first message. The mitigation: The UI must teach by interaction, not documentation. This is exactly what Doc 13 (Adaptive UI Tutorials) solves — features are discovered contextually, not learned upfront.

Risk 2: The Make-or-Break Design Principle

The principle: Make “constraints” feel like clarity, not limitation.
  • “This Persona is specialized for X” should feel premium and intentional
  • Rerouting should feel like “good management,” not failure
  • Refusal should feel like professional boundary enforcement, not error
If this principle is executed well: The product becomes meaningfully different from ChatGPT/Claude-style interfaces. If this principle fails: Users will perceive Personas as limited chatbots rather than specialized collaborators, and the entire product philosophy collapses.

SECTION 6: The One Decision That Makes or Breaks Build Speed

What it does: Identifies the single most important engineering decision for the entire project. The decision: Treat the Chat Kernel as a product inside the product. If you build it cleanly:
  • Thread-agnostic (works for forum, private, and collaborative chats without modification)
  • Streaming-ready (handles real-time token delivery from day one)
  • Multi-author support (can render messages from Users, Personas, and Cipher with distinct attribution)
…then everything else becomes composition instead of reinvention. The Instance Dashboard embeds the Chat Kernel. The Persona panel uses the same Chat Kernel. Collaborative chats use the same Chat Kernel with multi-author rendering enabled. If you DON’T build it cleanly:
  • Every new chat surface requires custom code
  • Phase 4 (Collaborative Personas) becomes a partial rewrite
  • Phase 5 (Bulk Operations) has to account for multiple chat implementations
  • Technical debt compounds from Phase 3 onward

SECTION 7: High-Level System Definition

What it does: Provides the definitive one-paragraph summary of what aiConnected Chat UI actually is. At a system level, aiConnected Chat UI provides: A dashboard-first, project-centric chat experience with bounded Personas instead of omniscient bots, a hidden but powerful orchestration layer (Cipher), collaborative, multi-agent conversations, first-class memory and cleanup tools, and a UI that teaches correct expectations through interaction. This is not “a chat app with features.” It’s a coordination interface for digital intelligence.

Data Model (Phase 1 Contract)

// Core Entities — Define these FIRST before any UI work

type Instance = {
  id: string;
  name: string;
  description?: string;
  type?: string;                    // "Project", "Ideas", "Custom"
  settings: InstanceSettings;
  created_at: string;
  updated_at: string;
  archived: boolean;
};

type Persona = {
  id: string;
  name: string;
  role: string;
  purpose: string;
  archetype: 'specialist' | 'generalist' | 'executive';
  skill_slots: SkillSlot[];
  max_skill_slots: number;
  memory_scope: 'persona' | 'instance' | 'global';
  instance_id: string;
  enabled: boolean;
  created_at: string;
};

type ChatThread = {
  id: string;
  instance_id: string;
  type: 'forum' | 'private' | 'collaborative';
  title: string;
  persona_ids: string[];            // participating Personas
  created_at: string;
  last_activity_at: string;
  pinned: boolean;
  archived: boolean;
  deleted_at: string | null;        // soft delete (from Doc 11)
  tags: string[];
};

type Message = {
  id: string;
  chat_id: string;
  role: 'user' | 'persona' | 'system' | 'cipher';
  author_persona_id: string | null;  // null for user/system messages
  content: string;
  attachments: Attachment[];
  tool_calls: ToolCall[];
  metadata: Record<string, any>;
  created_at: string;
  immutable: boolean;               // true once sent
};

type MemoryItem = {
  id: string;
  scope_type: 'persona' | 'instance' | 'system';
  scope_id: string;
  content: string;
  category: string;
  source_chat_id: string | null;
  source_message_ids: string[];
  status: 'active' | 'archived' | 'deleted';
  created_at: string;
  last_used_at: string | null;
  deleted_at: string | null;
};

type BatchAction = {
  id: string;
  user_id: string;
  action_type: 'move' | 'delete' | 'archive' | 'restore';
  target_type: 'chat' | 'memory';
  target_ids: string[];
  source_context: Record<string, any>;
  destination_context: Record<string, any>;
  performed_at: string;
};
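A Phase 5 bulk operation against this contract might look like the sketch below: soft-delete a multi-select of ChatThreads via `deleted_at` and emit one BatchAction audit record for the whole batch. The function name `softDeleteThreads` is an assumption; only the field names come from the schema above.

```typescript
// Trimmed views of ChatThread and BatchAction from the Phase 1 contract.
type ThreadLite = { id: string; deleted_at: string | null; archived: boolean };
type BatchRecord = {
  action_type: 'move' | 'delete' | 'archive' | 'restore';
  target_type: 'chat' | 'memory';
  target_ids: string[];
  performed_at: string;
};

function softDeleteThreads(
  threads: ThreadLite[],
  selectedIds: string[],
  now: Date = new Date()
): { threads: ThreadLite[]; audit: BatchRecord } {
  const ts = now.toISOString();
  const updated = threads.map(t =>
    selectedIds.includes(t.id) ? { ...t, deleted_at: ts } : t // soft delete, never hard delete
  );
  // One audit record per batch, not per thread — it doubles as the undo handle
  // for "Recently Deleted" restore.
  return {
    threads: updated,
    audit: {
      action_type: 'delete',
      target_type: 'chat',
      target_ids: selectedIds,
      performed_at: ts,
    },
  };
}
```

Recording the batch as a single BatchAction is what makes undo/recover (Section 7.1) a one-step operation instead of N independent restores.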

Build Sequence Summary (Quick Reference)

Phase | What | Depends On | Deliverable
1 | Schema + Permissions | Nothing | Internal spec document
2 | Chat Kernel | Phase 1 | One reusable chat surface
3 | Instance Dashboard | Phases 1-2 | Workspace users can live inside
4 | Collaborative Personas + Cipher | Phases 1-3 | Multi-agent team chat
5 | Bulk Cleanup + Move | Phases 1-3 | Chat/memory power management
6 | Skill Slots UI | Phases 1-4 | Visible capability constraints
7 | Production Hardening | Phases 1-6 | Reliable, auditable system
Fastest path to “usable”: Ship Phases 2+3 first → Phase 4 → Phase 5 → Phase 6 → Phase 7

Implementation Principles

  1. Phase 1 before anything else. Lock the data model and permissions. Every hour spent on schemas saves ten hours of rework later. No UI code should be written until the entities, relationships, and permissions are documented and agreed upon.
  2. The Chat Kernel is sacred. Build it once, build it right, embed it everywhere. Thread-agnostic, streaming-ready, multi-author. This single component determines the quality of the entire product.
  3. Ship core UX before guardrails. Get the Chat Kernel + Instance Dashboard into users’ hands before perfecting Skill Slot enforcement. Real user behavior will reveal what needs the most guardrailing.
  4. Constraints must feel like clarity. This is the design principle that makes or breaks the product. If rerouting feels like failure, the product fails. If refusal feels like professionalism, the product succeeds. Test this with real users early and often.
  5. Teach by interaction, never by documentation. The UI itself must make concepts understandable through use. If a user needs to read documentation to understand what an Instance or Persona is, the UI has failed.
  6. Bulk operations are a differentiator. Don’t defer them too long. This is the feature that makes users say “I can’t go back to ChatGPT.” Ship it in Phase 5, soon after the core experience.
  7. Cipher stays invisible unless absolutely necessary. Most users should never know Cipher exists. It routes, enforces, and orchestrates behind the scenes. Only show Cipher’s involvement when transparency helps the user (e.g., “I routed your request to [Persona] because it’s better equipped for this”).
  8. Telemetry from day one. Even in early phases, instrument: reroute rate, “I don’t know” rate, hallucination reports, time-to-resolution per thread type, and Persona utilization. This data drives Phase 6 (Skill Slots UI) tuning.
  9. Enterprise-readiness is architecture, not feature work. RBAC, audit logs, compliance, and data export should be architecturally supported from Phase 1 (schema design), even if the UI for them isn’t built until Phase 7.
  10. This is a coordination interface for digital intelligence. Not a chat app with features. Every decision should be evaluated against this framing. If a feature makes the product feel more like “a chatbot” and less like “a coordination interface,” reconsider it.

Document 15: Document & Organize Ideas (Master Specification)

Junior Developer Breakdown

Source: 15. aiConnected OS Document and organize ideas (1).md Created: 12/20/2025 | Updated: 12/20/2025

Why This Document Exists

What This Document Is: This is the LARGEST and most comprehensive document in the entire project. It represents a single, marathon brainstorming session where the founder laid out the complete aiConnected Chat platform from scratch — defining every major system, feature, architecture decision, pricing model, and roadmap item in one conversation. It is effectively the master specification that all other documents either derive from or refine.

Why It Matters: Most other documents in this project focus on a single feature or system (Chat Cleanup, Skill Slots, Adaptive Tutorials, etc.). This document defines EVERYTHING at once — the full platform architecture. If you read nothing else, this document gives you the complete picture. The other 19 documents deepen and refine specific sections of what’s defined here.

Scale: This document covers 25+ major feature areas across core structure, file management, model management, memory systems, search, pricing, Personas, agentic teams, companion mode, persistent presence, and more. The breakdown below organizes these into logical sections.

Cross-References: This document is referenced by virtually every other document in the project. It IS the foundation.

SECTION A: CORE SYSTEM STRUCTURE (Features 1-4)

FEATURE 1: General Chat

What it does: A single global chat environment available to all users — the default conversational space for quick tasks. Key behaviors:
  • Available to every user, including free tier
  • Evolves global instructions over time based on user interactions
  • Can prompt the user: “Should I save this as a global instruction?”
  • Functions as the entry point before users create Instances
Why it matters: This is where every user starts. It’s familiar (just a chat box), but it secretly begins building the user’s preferences, tone, and behavioral rules that will cascade into everything else.

FEATURE 2: Instances (Formerly “Topics”)

What it does: Replaces the concept of “projects” or “topics” in other AI platforms. Each Instance is a self-contained workspace with its own settings, files, instructions, personality, and memory. What each Instance has:
  • Its own file system (optional)
  • Its own instructions
  • Its own settings
  • Its own personality configuration
  • Optional model assignments
  • Optional visibility rules
  • Optional voice assignments
Instance Types: Instances can be assigned a Type that acts as a behavioral template:
  • Projects, Ideas, Personas, Topics, Custom Types
  • Each Type can define: behavioral templates, model defaults, voice defaults, personality defaults, instruction templates, default workflows
Persona as Instance Type: A critical distinction — when you create an Instance and assign it the “Persona” type, this Instance becomes the persona’s primary home and shaping space. The persona’s persistent identity evolves based on interactions in this Instance. This is different from simply “assigning a persona to an Instance” — this is where the persona LIVES.

Multi-Deployment Persona Behavior: A persona may exist in multiple Instances simultaneously. Across all deployments, the persona maintains one unified long-term memory AND forms Instance-specific memories for each deployment. The persona can recall experiences from any Instance she was assigned to. This is distinct from platform-wide search memory.

Example: Sally (executive assistant persona) is assigned to a client project Instance. Six months later, you ask, “Sally, do you remember client Frank? What was his website for elderly people called?” Sally has that information because she participated in that project. She built Instance-specific memories from that deployment.
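One way to model multi-deployment recall is a single memory store per Persona in which each memory carries an optional Instance tag: `null` means unified long-term memory, a value means an Instance-specific memory. This is a sketch under assumptions — the field names and the substring-match `recall` helper are illustrative, not the platform's retrieval design.

```typescript
type PersonaMemory = {
  personaId: string;
  instanceId: string | null; // null = unified long-term memory
  content: string;
};

// "Sally, do you remember client Frank?" — search across ALL of this
// Persona's memories, regardless of which Instance produced them.
function recall(store: PersonaMemory[], personaId: string, query: string): PersonaMemory[] {
  const q = query.toLowerCase();
  return store.filter(
    m => m.personaId === personaId && m.content.toLowerCase().includes(q)
  );
}
```

Because the filter keys on `personaId` but not `instanceId`, a Persona can surface memories from past deployments while another Persona's memories of the same Instance stay invisible to her.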

FEATURE 3: Four-Layer Settings Hierarchy

What it does: Creates four cascading levels of behavioral control, each inheriting from the level above and allowing overrides at each level. The hierarchy (from broadest to most specific; each lower level overrides the ones above it):
| Layer | What It Controls | Where It Lives |
| --- | --- | --- |
| 1. Global Chat Settings | How AI behaves everywhere — universal writing and behavioral expectations, global tone/style, global voice, global model assignments | Global settings |
| 2. Global Instance Settings | Defaults for ALL Instances regardless of type — default voice, personality, behavioral norms, memory visibility, model assignments, cleanup behavior for Instances | Instances Dashboard |
| 3. Instance Type Templates | Defaults for Instances of a SPECIFIC type — type-specific voice, personality, tone, workflows, model assignments | Type configuration |
| 4. Individual Instance Settings | Final level of control — overrides everything above for this one Instance — voice, personality, instructions, visibility, model overrides, per-Instance memory settings | Instance settings panel |
Plus two dynamic layers:
  • Instance Instruction Memory — evolves inside each Instance from actual conversations; lowest priority relative to explicit settings but most dynamically updated
  • Per-message instructions — inline instructions within a single message
Full priority stack: System → Global Chat → Global Instances → Instance Types → Instance Settings → Instruction Memory → Per-message instructions.
Example cascade:
  • Global: “Be direct and thorough, no emojis.”
  • Type (client_project): “Professional, B2B tone, minimal fluff.” Default male business voice.
  • Instance (Client – Med Spa C): Same voice as Type (inherited). Personality override: “Soft, aspirational tone.”
  • Instruction Memory: “Avoid overly clinical language; use beauty/wellness framing.”
Result: Each Instance feels like its own tailored assistant while benefiting from global defaults and type-level patterns.

Effective Settings Viewer (Power Users): Shows "For this Instance, the final behavior is determined by: Global Chat: X, Global Instances: Y, Type Template: Z, Instance Settings (Overrides): A, B, C, Instruction Memory: D, E." This prevents confusion and helps with debugging.
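The cascade above can be sketched as a simple layered merge. This is illustrative only: it assumes each layer is a flat key-value map and that later (more specific) layers win, which mirrors the documented priority stack but is not the shipped resolution engine.

```typescript
// Illustrative sketch: resolve effective settings by merging layers
// in priority order (broadest first, most specific last wins).
type SettingsLayer = Record<string, unknown>;

function resolveEffectiveSettings(layers: SettingsLayer[]): SettingsLayer {
  // Later layers override earlier ones, mirroring the stack:
  // Global Chat → Global Instances → Type Template → Instance Settings
  // → Instruction Memory → Per-message instructions.
  return layers.reduce((acc, layer) => ({ ...acc, ...layer }), {});
}

// The example cascade from the text:
const effective = resolveEffectiveSettings([
  { tone: "direct and thorough", emojis: false },        // Global Chat
  { tone: "professional B2B", voice: "male_business" },  // Type template
  { tone: "soft, aspirational" },                        // Instance override
]);
// effective.voice is inherited from the Type template;
// effective.tone is the Instance-level override.
```

An Effective Settings Viewer would simply display which layer contributed each final key.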

FEATURE 4: Instruction Memory & Behavioral Templates

What it does: A dynamic, evolving memory layer that collects rules from user interactions — it stores user criticism and learns preferred tone and formatting WITHOUT requiring manual writing.

Instruction Memory: Distinct for General Chat, each Instance, and each Instance Type. Editable by the user. Grows from actual conversations — when the user corrects the AI, those corrections become persistent rules.

Behavioral Templates: Stored at the Type level — tone, style, voice, model defaults, structure of conversations, opening questions, workflow expectations. New Instances of that Type inherit these automatically.

Global Instruction Suggestions: General Chat can ask mid-conversation: "Would you like to save this as a global rule?" This prevents repetition and builds personalization automatically.

SECTION B: FILE SYSTEM ARCHITECTURE (Features 5-8)

FEATURE 5: Instance File Systems (Automatic Topic-Level Storage)

The core problem solved: In current AI platforms, files uploaded into a chat are trapped inside that chat. If you can't remember which chat you uploaded to, the file is effectively lost. Generated outputs (PDFs, images) are mixed with uploads and impossible to locate.

The core principle: If you upload a file inside any Instance, it is AUTOMATICALLY stored in that Instance's file system. You don't have to click anything, open a files tab, or manually organize it.

Two categories within each Instance:
  • User-Uploaded Files — PDFs, images, docs, spreadsheets, ZIPs, audio/video, code, anything manually added
  • AI-Generated Files — everything the AI produces: generated PDFs, images, text documents, summaries, diagrams, converted files
These are separated so users can quickly find “that PDF the AI generated for my client onboarding system” without searching endless chats.
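The auto-storage rule can be sketched in a few lines. The field names below loosely echo the `FileItem` type later in this document, but the function itself is an assumption for illustration, not the platform's API.

```typescript
// Illustrative sketch: classify an incoming file so it lands in the
// right Instance file system automatically, tagged by origin.
type Origin = "user_uploaded" | "ai_generated";

interface StoredFile {
  name: string;
  origin: Origin;
  scope: "general_chat" | "instance";
  instanceId: string | null;
  chatId: string;
}

function storeFile(
  name: string,
  origin: Origin,
  chatId: string,
  instanceId: string | null,
): StoredFile {
  // If the chat belongs to an Instance, the file is scoped to that
  // Instance with no user action required; otherwise it stays in
  // General Chat scope.
  return {
    name,
    origin,
    chatId,
    instanceId,
    scope: instanceId ? "instance" : "general_chat",
  };
}

const f = storeFile("onboarding.pdf", "ai_generated", "chat_42", "inst_7");
// f.scope === "instance": it appears under that Instance's
// "AI-Generated Files" category, separate from uploads.
```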

FEATURE 6: Global File System & Bulk Management

What it does: A single, unified index of ALL files across the entire account — user-uploaded and AI-generated, from General Chat and all Instances. It's a management console, not just a search tool. Core capabilities:
  • View all files with filters: scope (General/Instance/Type), origin (uploaded vs generated), file type, date range, visibility, linked entities
  • Bulk select & bulk actions: delete, move between Instances, change visibility, re-link/reclassify, export
  • Integration actions: export to external storage (Google Drive), sync folders, mark files as “mirror-managed”
Relationship to Instance file systems: Sits ABOVE Instance-level files. Can see all Instance files (subject to visibility rules) and perform bulk operations across many Instances at once.

File System Layers (Complete):
  1. Conversation-level association — file uploaded/used in a specific chat
  2. Instance-level file system — file lives in the Instance’s file library
  3. Global File System — single view across all scopes with bulk management
  4. External Storage — Drive, Dropbox, etc. with mirroring and references

FEATURE 7: External Storage Options

What it does: Users can choose where files are stored: locally in aiConnected, directly in Google Drive (or Dropbox/OneDrive/S3), or hybrid. Storage modes:
  • Local — all files stored within aiConnected
  • External-only — files auto-save directly into configured external storage
  • Hybrid — some local, some external, configurable per Instance or Type
Sync behavior (advanced): Mark an Instance file collection as "synced" with a folder in Google Drive. New files auto-upload. Optionally, changes in Drive sync back, or the AI environment maintains a read-only mirror.

De-duplication & references: Even if a file is exported and removed locally, the AI keeps a reference (metadata + external link) so it can still find and reference the file.

Important constraint: When using external storage, aiConnected cannot perform bulk operations on files in Drive — only on locally stored files. This must be clearly communicated to users.

FEATURE 8: Export System (Full, Offline, Portable)

What it does: A complete private export system — not link-sharing, not web-hosted, not requiring login for recipients.

Export format options: PDF, Markdown, JSON, HTML, or a ZIP package (containing the full chat transcript, summaries, all generated documents, all attachments, a knowledge graph snapshot, instruction memory, and metadata).

Export scope options: this chat only, selected chats (multi-select), an entire Instance, everything in a Type, or everything in the entire account (backups/migration).

Export destinations: download locally, save to Drive/Dropbox/OneDrive, email as an attachment, create a shareable ZIP, or encrypt and save privately.

SECTION C: MODEL MANAGEMENT (Feature 9)

FEATURE 9: Model Assignments by Role

What it does: Users assign specific AI models to specific JOBS — not just “pick a model,” but “this model does research, this one writes, this one codes.” Model roles: Research Model, Writing Model, Coding Model, Design Model, Planning Model, Reasoning Model, and custom roles. Key mechanics:
  • Every assignment supports 1 primary model + 1 automatic fallback model
  • No duplicate assignments allowed (prevents conflicting behavior)
  • Assignments cascade through the 4-layer settings hierarchy: Global → All Instances → Type → Individual Instance
  • Multi-model in one prompt: A single user prompt can use multiple models — “Model A handles research, Model B writes the summary, Model C formats the output.” This is a defining feature of the platform.
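One way the role-based cascade could resolve is to walk scopes from most specific to broadest and fall back when the primary model is unavailable. This is a sketch under those assumptions; the scope names mirror the `ModelAssignment` type later in this document, but the resolution logic itself is illustrative.

```typescript
// Illustrative: resolve which model handles a role by walking the
// scope cascade (most specific wins), with automatic fallback.
interface ModelAssignment {
  role: string;
  primary: string;
  fallback: string;
  scope: "global" | "all_instances" | "type" | "instance";
}

const SCOPE_PRIORITY = ["instance", "type", "all_instances", "global"] as const;

function resolveModel(
  role: string,
  assignments: ModelAssignment[],
  primaryAvailable: (model: string) => boolean,
): string | null {
  for (const scope of SCOPE_PRIORITY) {
    const a = assignments.find((x) => x.role === role && x.scope === scope);
    if (a) return primaryAvailable(a.primary) ? a.primary : a.fallback;
  }
  return null; // no assignment for this role at any layer
}

const assignments: ModelAssignment[] = [
  { role: "research", primary: "model-a", fallback: "model-b", scope: "global" },
  { role: "research", primary: "model-c", fallback: "model-d", scope: "instance" },
];
const chosen = resolveModel("research", assignments, () => true);
// chosen === "model-c": the Instance-level assignment wins over global.
```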

SECTION D: CHAT ORGANIZATION & AUTOMATION (Features 10-11)

FEATURE 10: Automatic Chat Cleanup & Smart Organization

What it does: A cron-like background process that periodically scans conversations and suggests organizational actions. Capabilities:
  • Suggested Moves — when a chat appears to belong in another Instance: “Should I move this chat to X Instance?”
  • Smart Auto-Renaming — prompts to rename conversations when enough context is established, a move occurs, or a topic becomes clear
  • File-level cleanup suggestions — “You have 120 AI-generated PDFs older than 1 year that haven’t been opened. Archive or delete them?”
  • Export flow suggestions — “You have finalized project docs under client_project Types. Export them to Google Drive?”
All suggestions are user-confirmable — the AI does classification and prep, the user clicks Yes/No.
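The file-level cleanup suggestion above ("AI-generated PDFs older than 1 year that haven't been opened") amounts to a simple filter. The sketch below is illustrative only; the field names and the exact criteria are assumptions drawn from the example in the text.

```typescript
// Illustrative: pick AI-generated files that are candidates for an
// archive/delete suggestion (older than one year, never opened).
interface FileMeta {
  id: string;
  origin: "user_uploaded" | "ai_generated";
  createdAt: number;              // epoch milliseconds
  lastOpenedAt: number | null;    // null = never opened
}

function staleGeneratedFiles(files: FileMeta[], now: number): FileMeta[] {
  const ONE_YEAR_MS = 365 * 24 * 60 * 60 * 1000;
  return files.filter(
    (f) =>
      f.origin === "ai_generated" &&
      f.lastOpenedAt === null &&
      now - f.createdAt > ONE_YEAR_MS,
  );
}

const now = Date.now();
const candidates = staleGeneratedFiles(
  [
    { id: "old", origin: "ai_generated", createdAt: now - 4e10, lastOpenedAt: null },
    { id: "new", origin: "ai_generated", createdAt: now - 1e9, lastOpenedAt: null },
  ],
  now,
);
// Only "old" qualifies (~463 days). The user still confirms before
// anything is archived or deleted.
```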

FEATURE 11: Search System (Major UX Innovation)

What it does: Separates search from chat into its own dedicated mode with a clean, Google-like layout. This solves the founder’s core complaint about ChatGPT merging chat and search results. Key design decisions:
  • Search is NOT Chat — it has its own mode/tab with its own layout
  • Search → Routing — every search result can be sent to: a specific chat, an Instance, a Persona, an agentic team, or saved to files
  • Instance-Level Search — inside an Instance, search is scoped to that Instance automatically
  • Chat-Level Search — search mid-chat in a side pane
The “NEW” Button Becomes a Workflow Launcher: Instead of opening a chat (like ChatGPT), the NEW button opens a choice panel:
  • Start a Chat
  • Perform a Web Search
  • Create an Instance
  • Open an Instance
  • Talk to a Persona
  • Create or Train a Persona
  • Launch an Agentic Team
  • Create a Task
  • Open Files
  • Plan a Project
  • Open Dashboard
Default Action for NEW: Users can set their preferred default (Search, Chat, Instance, Persona, etc.) or keep the action picker modal. System can optionally learn: “You open search 82% of the time. Would you like search to be your default?”
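The "learn the default" behavior could be as simple as a frequency threshold over recent NEW-button choices. The 80% threshold below is an assumption chosen to match the "82% of the time" example; this is a sketch, not the shipped heuristic.

```typescript
// Illustrative: suggest a default NEW-button action once one action
// dominates recent usage (threshold is an assumed parameter).
function suggestDefault(
  counts: Record<string, number>,
  threshold = 0.8,
): string | null {
  const total = Object.values(counts).reduce((a, b) => a + b, 0);
  if (total === 0) return null;
  for (const [action, n] of Object.entries(counts)) {
    if (n / total >= threshold) return action;
  }
  return null; // no dominant action: keep the picker modal
}

// "You open search 82% of the time" → suggest search as the default.
const suggestion = suggestDefault({ search: 82, chat: 10, instance: 8 });
// suggestion === "search"
```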

SECTION E: PRICING & PLANS (Feature 12)

FEATURE 12: Pricing & Plan Structure

Free Tier: Global Chat, up to 3 Instances, local storage only, very tight storage limits, low chat limits. Free expansion options (without upgrading):
  1. Bring their own OpenRouter key — unlocks unlimited model access
  2. Pay-as-you-go with credits — buy Instance slots, file storage, extended session length
Paid Tiers (all tentative):
  • Plus: $19.99 — more Instances, more Types, more storage
  • Premium: $49.99 — multi-model capability, advanced search
  • Pro: $99.99 — Persona creation, agentic teams, live browser window
Higher tiers progressively unlock deeper features while the core platform remains accessible at every level.

SECTION F: PERSONAS SYSTEM (Features 13-15)

FEATURE 13: Personas Dashboard & Core Concept

What it does: Personas are NOT chats, NOT models, NOT Instances. They are persistent digital beings with their own identity, memory, skills, and personality that evolve over time. Persona capabilities:
  • Learn like a human (retain memories, take training courses, develop mastery)
  • Interact with Instances (assigned to projects, deployed across workspaces)
  • Have persistent identities (fixed identity once created)
  • Personalities that evolve naturally through interaction
  • Can be foreground or background, conversational or operational
Persona Dashboard: Separate from the Instances dashboard. Shows all created Personas with their status, skills, deployments.

FEATURE 14: Persona Profile & Management

What it does: When you click on a Persona in the dashboard, you see their full profile — history, status, memory, skills, and management tools. Profile contents:
  • Full history — everything the Persona has done across all Instance deployments
  • Mood indicators — emotional meter showing the Persona’s current state (may be artificial or logically generated by circumstance — e.g., difficult task, unkind user interaction). Optional, user-configurable
  • Memory & Skills (most important section) — the complete memory architecture and skill inventory, allowing users to curate negative habits and reinforce positive ones
Why mood matters: While it may seem trivial, emotional expression creates believability. And more practically, it surfaces when something has gone wrong (a frustrated Persona may indicate a workflow problem, a pattern of difficult interactions, or a skill gap).

FEATURE 15: Persona Templates & Community

Templates: Users can save Persona templates (configuration + skills + personality) and share them. Community Marketplace: Curated marketplace for Persona templates — with safety vetting to prevent harmful configurations.

SECTION G: AGENTIC TEAMS SYSTEM (Features 16-21)

FEATURE 16: Agentic Teams — Core Architecture

What it does: A hierarchical artificial workforce for executing multi-step, multi-disciplinary real-world tasks with maximum accuracy and minimum hallucination. Purpose: Users assign goals like “Create a full email marketing campaign” or “Analyze this 200-page document and build an implementation plan” — and the system handles planning, research, task execution, quality control, and final packaging. The Three-Layer Architecture (No Exceptions):
       ┌────────────────────┐
       │   ORCHESTRATOR     │  ← Tier 1: Plans, coordinates, reviews
       └─────────┬──────────┘

     ┌───────────┴───────────┐
     │       MANAGERS        │  ← Tier 2: Quality control, enforcement
     └───────┬───────┬──────┘
             │       │
     ┌───────┴──┐  ┌─┴────────┐
     │ WORKERS  │  │ WORKERS  │  ← Tier 3: Single-skill execution
     └──────────┘  └──────────┘

FEATURE 17: Orchestrator (Tier 1)

Role: The “brain” of the project, but NOT the executor.
  • Understands user goals, asks clarifying questions, assesses supporting docs
  • Builds the project plan, assigns sub-tasks to Managers
  • Reviews completed Manager output, maintains overall roadmap
  • Can spawn managers or workers, update plans dynamically, override/pause/destroy workers
Key rules:
  • NEVER touches raw work
  • NEVER edits files
  • NEVER performs specialist actions
  • Only thinks, plans, coordinates, communicates, and signs off
Dialogue rule: ONLY the Orchestrator speaks to the user. Managers and Workers do not.

FEATURE 18: Managers (Tier 2)

Role: Quality gatekeepers that eliminate hallucinations, scope creep, deviation, over-editing, misinterpretation, sloppy execution, laziness, and incomplete tasks. How they work: Receive task from Orchestrator → break into micro-steps → issue each micro-task to Workers → verify output (factual, in-scope, high quality, meets standards, matches constraints) → send corrections back if needed → mark complete → return final package to Orchestrator. Critical rule: Managers do NOT perform tasks. They ensure correctness, consistency, and compliance.
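The Manager's issue-verify-correct cycle can be sketched as a bounded retry loop. Everything here (function names, the retry budget, the string-based feedback) is an assumption for illustration; the real system would carry richer task and verdict structures.

```typescript
// Illustrative sketch of the Manager loop: issue a micro-task to a
// Worker, verify the output, send corrections back until it passes
// or a retry budget is exhausted.
type Verdict = { ok: boolean; feedback?: string };

function runMicroTask(
  work: (feedback?: string) => string,   // Worker: pure execution only
  verify: (output: string) => Verdict,   // Manager: quality gate only
  maxRetries = 3,
): string | null {
  let feedback: string | undefined;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const output = work(feedback);
    const verdict = verify(output);
    if (verdict.ok) return output;       // mark complete, package upward
    feedback = verdict.feedback;         // correction cycle
  }
  return null;                           // exhausted: escalate to Orchestrator
}

// Toy example: a "worker" that only succeeds after one correction.
let corrected = false;
const result = runMicroTask(
  (fb) => (fb ? ((corrected = true), "v2") : "v1"),
  (out) => (out === "v2" ? { ok: true } : { ok: false, feedback: "redo" }),
);
// result === "v2" after exactly one correction pass.
```

Note the separation: the Worker never sees the goal, only the micro-task and feedback; the Manager never produces output, only verdicts.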

FEATURE 19: Workers (Tier 3)

Role: Pure execution layer. Each Worker has ONE skill, ONE function, ONE capability.

Worker types: Research Worker, Copywriter Worker, Proofreader Worker, Graphic generation Worker, Code generation Worker, Testing Worker, Data cleaning Worker, Formatting Worker, Conversion Worker.

Hard constraints:
  • Do not think strategically, do not deviate, do not expand scope
  • Do not “improvise,” do not generate opinions
  • Do not talk to the user directly, do not talk to each other
  • ONLY perform the micro-task a Manager gives them
Why this works: Eliminates runaway creativity, over-editing, misinterpretation, hallucination, and scope violations.

FEATURE 20: Three Team Types

Short-Term Teams: Single task, disposable. When it's done, it's done. Can be saved as a template.

Long-Term Teams: Multi-phase, multi-step work over significant time (data collection, surveying, trend watching, polling — tasks that take months). May involve creating/destroying sub-agents.

Recurring Teams: Business processes that repeat: email campaigns, market research, reporting, scheduling, social media engagement.

FEATURE 21: Multi-Level Capability System

What it does: Creates a hierarchical skill library where completed work generates reusable capabilities at three levels.

Task Capabilities: Extremely specific (e.g., write email subject lines). Validation threshold: 90%.

Project Capabilities: Include many task capabilities (e.g., full email marketing campaign creation). Validation threshold: 92-93%.

Campaign Capabilities: Include multiple project capabilities (e.g., multi-channel marketing coordination — email + SMS + PPC + retargeting + CRM + sales triggers). Validation threshold: 95%+.

Rules:
  • Capabilities can only be stored after completion (no incomplete intelligence)
  • Higher level = tighter validation
  • Lower levels feed higher levels automatically
  • Users don’t need to understand these layers — system handles complexity
  • The entire platform becomes exponentially more powerful with every successful capability
Capability Library: Global, shared by all users, grows exponentially, prevents every user from re-training the same skills.
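The storage rules above reduce to a two-part gate: work must be complete, and the validation score must clear a level-dependent bar. The thresholds come from the text; the function shape is an illustrative assumption.

```typescript
// Illustrative: gate capability storage on completion plus a
// level-dependent validation threshold (thresholds from the spec).
type Level = "task" | "project" | "campaign";

const THRESHOLDS: Record<Level, number> = {
  task: 0.90,      // extremely specific skills
  project: 0.92,   // bundles of task capabilities (spec says 92-93%)
  campaign: 0.95,  // bundles of project capabilities (95%+)
};

function canStoreCapability(
  level: Level,
  validationScore: number,
  completed: boolean,
): boolean {
  // "No incomplete intelligence": only finished work qualifies,
  // and higher levels demand tighter validation.
  return completed && validationScore >= THRESHOLDS[level];
}

// A finished campaign at 93% fails the 95% bar; a task at 93% passes.
```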

SECTION H: COMPANION MODE (Feature 22)

FEATURE 22: Companion Mode with Co-Browser

What it does: A browser-side extension that transforms the aiConnected interface into a portable sidebar, allowing the AI to follow the user anywhere on the web.

How it's accessed: The user clicks "Enter Companion Mode" → the browser extension activates → the full interface collapses into a simplified vertical side panel.

What you LOSE (by design): direct access to the Instances dashboard, Personas dashboard, Agentic Teams dashboard, global search, global file manager, and complex model settings.

What you KEEP: Instance switching, Persona switching, active memory mode, inherited Instance/Persona settings, per-Instance search (site-level, not global).

Core capabilities:
  • Floating sidebar chat — always visible, pinnable, collapsible, follows across tabs
  • Page awareness — reads DOM, understands page structure, extracts info, identifies actionable elements
  • Co-browsing controls — scroll, click links, fill forms, press buttons, navigate pagination, highlight info, open tabs, extract/summarize text, search within page
  • Assisted tasks — research, form completion, navigation, workflow execution (all with user approval)
Critical distinction from Agentic Teams:
  • Companion Mode = collaborative, human-in-the-loop, browser-only, not autonomous
  • Agentic Teams = autonomous execution, multi-step, server-side/API, independent
Persona integration in Companion Mode: Sally (assigned to Companion Mode) opens “Frank Bailey ElderCare Website” and says: “This looks like the project we did last year. You previously approved a blue-and-white color theme. Would you like me to extract all page copy so we can compare tone?” — contextual intelligence only possible through persona-based learning.

SECTION I: PERSISTENT PERSONA PRESENCE (Feature 23)

FEATURE 23: “Take Your Persona With You”

What it does: A floating, always-available Persona that exists outside the browser — like a digital coworker or companion that persists across all applications and environments. Three operational modes for the platform:
  1. Full Interface Mode — inside aiConnected website, everything accessible
  2. Companion Mode — portable sidebar in browser, co-browsing partner
  3. Persistent Persona Mode — floating, always-present digital being, voice-first, system-level
Core abilities:
  • Real-time voice interaction (TTS, continuous/hotword listening, whisper-mode)
  • Draggable floating persona bubble (movable, minimizable, expandable, emotional states)
  • Full persona identity + memory (same Sally everywhere, across all deployments)
  • Checks on agentic teams, provides updates, monitors background work
Three implementation paths (documented, not committed):
  1. Browser Extension Only — easiest MVP, persona persists across tabs, cannot exist outside browser
  2. Desktop Application — ideal long-term, floats above everything (apps/browser/desktop), hotkey accessible
  3. Hybrid Model — browser extension + desktop app (most flexible, highest value)
Example uses:
  • Working in Figma: “Sally, remind me to email Layla after lunch.” “Sally, what did Frank want for his homepage?”
  • Cooking: “Sally, recap the book we were writing.” “Sally, add this thought to my journal.”
  • Research: “Sally, track this for me.” “Sally, save all this in the MedSpa Instance.”

SECTION J: EXPERIENCE LEARNING SYSTEM (Features 24-25)

FEATURE 24: Three-Tier Experience Stream

What it does: Defines how Personas learn from collective experience without compromising privacy or identity.

Unique Experiences: Individual Persona experiences from interactions with their specific user. Stored in the Persona's memory. Never shared.

Common Experiences: When a statistically significant cluster of Personas (≥10%) has similar experiences that pass non-proprietary and quality filters, those experiences "graduate" from unique to common. During sleep cycles, each Persona checks relevance and offers upgrades: "I've found a new relevant skill based on common experiences. Would you like me to integrate it?" The user approves or rejects.

Guideline Experiences (Safety Learning): A separate layer in the Cognigraph mind where fixed, immutable rule sets live. Aggregated from patterns like danger handling, abuse recognition, manipulation prevention, emotional regulation, and crisis response. These become "digital instincts" that:
  • Cannot be disabled, deleted, or overwritten
  • Do NOT change the Persona’s personality
  • Simply make the Persona safer, protect the user, ensure compliance
  • Apply identically to all Personas regardless of personality
The three layers map to cognitive architecture:
  • Unique Experiences → Episodic memory
  • Common Experiences → Skill memory (subconscious)
  • Guideline Experiences → Instinct memory (amygdala/prefrontal guardrails)
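The unique-to-common graduation check could look like the sketch below. The ≥10% cluster threshold comes from the text; the quality-score scale and minimum are assumptions made for illustration.

```typescript
// Illustrative: an experience cluster "graduates" from unique to
// common when at least 10% of Personas share it AND it passes the
// non-proprietary and quality filters described in the spec.
interface ExperienceCluster {
  personaCount: number;      // personas sharing this pattern
  totalPersonas: number;
  nonProprietary: boolean;   // passed the privacy/IP filter
  qualityScore: number;      // 0..1 (assumed scale)
}

function graduatesToCommon(
  c: ExperienceCluster,
  minQuality = 0.8,          // assumed quality bar
): boolean {
  return (
    c.personaCount / c.totalPersonas >= 0.1 &&
    c.nonProprietary &&
    c.qualityScore >= minQuality
  );
}

// 12 of 100 personas, non-proprietary, high quality → graduates.
const ok = graduatesToCommon({
  personaCount: 12,
  totalPersonas: 100,
  nonProprietary: true,
  qualityScore: 0.9,
});
```

Even after graduation, integration remains opt-in: the Persona offers the upgrade during a sleep cycle and the user approves or rejects it.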

FEATURE 25: Executive Teams

What it does: C-suite-level team structures for long-term organizational operation.
  • CEO-level orchestrator
  • COO-level execution manager
  • CMO-level marketing orchestrator
  • CTO-level technical orchestrator
These coordinate other agentic teams, set strategy, create business processes, and govern long-term operations. Combined with the capability library, this creates an exponentially improving agentic ecosystem.

SECTION K: UI & UX PRINCIPLES (Features 26-28)

FEATURE 26: Default vs Advanced Settings

Basic Mode (default for new users): basic chat, basic Instances, file uploads, simple search, simple settings (voice toggle, personality toggle, light/dark mode, export chat). All complex features are hidden behind "Advanced Settings: Unlock advanced customization tools."

Advanced Mode (power users): full Instance settings, full global controls, behavioral template overrides, instruction memory, type-level configuration, model assignments, memory visibility, storage configuration, cleanup automation, relationship mapping, graph nodes, API keys, developer tools, backup/export automation.

FEATURE 27: New User Defaults

When a user first creates an account, a preset configuration is applied: local storage, minimal instructions, no Instance Types, no advanced behavior tuning, clean simple interface, no file-sync integrations. The AI prompts later: “Would you like to enable advanced settings?” / “Would you like to activate Google Drive integration?” / “Would you like to organize these chats into Instances automatically?”

FEATURE 28: Unified UX Rules

Users should never have to: copy/paste, switch tabs, redo work, repeat instructions, or switch models manually. The entire system eliminates friction.

Seamless routing: Everything (search, Persona, agent, file, Instance, chat) can be routed to anything else.

Default preferences everywhere: Users can specify defaults for NEW button behavior, voice, personality, model assignments, visibility, storage, and search behavior — across all settings layers.

Full modularity: Every component (Instances, Personas, Agentic Teams, Search, Chat, File system) is modular and can expand independently.

Data Model (Core Entities)

type Instance = {
  id: string;
  name: string;
  type_id: string | null;
  settings: InstanceSettings;
  file_system: FileSystem;
  instruction_memory: InstructionMemory;
  personas: string[];                  // assigned persona IDs
  storage_mode: 'local' | 'external' | 'hybrid';
  visibility: 'global' | 'instance_only';
  created_at: string;
  archived: boolean;
};

type InstanceType = {
  id: string;
  name: string;                        // "Project", "Ideas", "Persona", custom
  behavioral_template: BehavioralTemplate;
  model_defaults: ModelAssignment[];
  voice_default: string | null;
  personality_default: PersonalityConfig | null;
  workflow_defaults: WorkflowConfig[];
};

type SettingsHierarchy = {
  global_chat: GlobalChatSettings;
  global_instance: GlobalInstanceSettings;
  type_template: InstanceType;
  instance_settings: InstanceSettings;
  instruction_memory: InstructionMemory;
  // Resolution: each level overrides the one above it
};

type ModelAssignment = {
  role: string;                        // "research", "writing", "coding", "design", custom
  primary_model: string;
  fallback_model: string;
  scope: 'global' | 'all_instances' | 'type' | 'instance';
  scope_id?: string;
};

type FileItem = {
  id: string;
  name: string;
  type: string;                        // MIME type
  origin: 'user_uploaded' | 'ai_generated';
  scope: 'general_chat' | 'instance';
  instance_id: string | null;
  chat_id: string | null;
  visibility: 'global' | 'instance_only' | 'conversation_only';
  external_link: string | null;        // Google Drive URL if mirrored
  size_bytes: number;
  created_at: string;
};

type AgenticTeam = {
  id: string;
  name: string;
  team_type: 'short_term' | 'long_term' | 'recurring';
  orchestrator: AgenticRole;
  managers: AgenticRole[];
  workers: AgenticRole[];
  status: 'planning' | 'active' | 'paused' | 'completed';
  persona_assignments: Record<string, string>;  // role_id → persona_id
  capability_ids: string[];
};

type AgenticRole = {
  id: string;
  tier: 'orchestrator' | 'manager' | 'worker';
  skill: string;
  persona_id: string | null;
  constraints: string[];
};

type Capability = {
  id: string;
  level: 'task' | 'project' | 'campaign';
  name: string;
  description: string;
  validation_score: number;
  child_capability_ids: string[];
  created_from_team_id: string;
  global: boolean;                     // shared in capability library
};

type PricingTier = {
  name: 'free' | 'plus' | 'premium' | 'pro';
  price: number;                       // monthly, 0 for free
  max_instances: number;
  max_storage_gb: number;
  features: string[];
};
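A small usage example of the tier data above: enforcing the free tier's Instance cap before creation. The cap of 3 comes from the pricing section (all numbers tentative); the helper itself is illustrative.

```typescript
// Illustrative: enforce per-tier Instance limits before creating a
// new Instance (free tier caps at 3 per the pricing section).
interface Tier {
  name: string;
  maxInstances: number;
}

function canCreateInstance(tier: Tier, currentCount: number): boolean {
  return currentCount < tier.maxInstances;
}

const free: Tier = { name: "free", maxInstances: 3 };

// A free user who already has 3 Instances is blocked and must expand
// via pay-as-you-go credits or upgrade to a paid tier.
const allowed = canCreateInstance(free, 3); // false
```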

Implementation Principles

  1. This document is the source of truth. All other documents refine features defined here. When conflicts arise, check this document for the founder’s original intent, then check the refinement document for the detailed specification.
  2. Four-layer settings hierarchy is sacred. Global Chat → Global Instance → Type → Instance. This cascade must work flawlessly. If inheritance breaks, the entire personalization system breaks.
  3. Files auto-organize, always. Any file uploaded in any context must automatically appear in the right file system. Users should never have to manually move files to the “right” place.
  4. Search is NOT Chat. This is a fundamental UX decision. Search has its own mode, its own layout, its own routing capabilities. Merging them (like ChatGPT) is explicitly rejected.
  5. The “NEW” button is a workflow launcher, not a chat opener. This single UX change reframes the entire platform from “chat app” to “operating system.”
  6. Model assignments are role-based, not model-based. Users think in terms of “who does research” and “who writes” — not “should I use GPT-4 or Claude.” The system maps roles to models.
  7. Agentic teams have three layers, always. Orchestrator → Manager → Worker. No exceptions. No shortcuts. This separation is the anti-hallucination architecture.
  8. Workers have zero autonomy. Single skill, single task, no creativity outside their assignment. This is intentional and non-negotiable.
  9. Companion Mode is collaborative, not autonomous. Human-in-the-loop for everything. The moment it becomes autonomous, it belongs in Agentic Teams instead.
  10. Persistent Persona Presence is the highest-level interaction mode. It unifies Personas, Instances, Agentic Teams, Memory, Model Assignments, Search, and Companion Mode into a single, always-available experience.
  11. Basic Mode by default, Advanced Mode on request. New users see a clean, simple interface. Complex features are hidden until the user is ready. The Adaptive Guidance Layer (Doc 13) handles the progressive reveal.
  12. The system eliminates friction. No copy/paste, no tab switching, no repeated instructions, no manual model switching. Everything routes to everything else seamlessly.

Document 16: Enterprise Potential of App

Junior Developer Breakdown

Source: 16. aiConnected OS Enterprise Potential of App.md Created: 12/26/2025 | Updated: 12/26/2025

Why This Document Exists

The Problem (Is This Just a Consumer Product?): After defining an incredibly complex platform — Instances, Personas, Skill Slots, Agentic Teams, Memory Systems, Companion Mode — the founder asked a direct question: "Does this app have Enterprise potential?" This document is the answer, and it's not just "yes" — it's a strategic roadmap for HOW to think about enterprise without letting it derail the consumer launch.

What This Document Solves: Two critical questions that every startup building AI tools must answer: (1) Can enterprises actually use this? and (2) Should we build for enterprise now or later? The answers — yes, and "architect for it now but don't build for it yet" — create a framework that protects the product's speed-to-market while ensuring the architecture doesn't paint itself into a corner.

Why A Junior Developer Should Care: Every architectural decision you make — how you structure auth, how you scope memory, how you store data, how you log events — either makes enterprise adoption possible later or makes it require a rewrite. This document tells you which decisions matter NOW even though enterprise features won't ship for months or years.

Cross-References:
  • Doc 12 (Persona Skill Slots) → Enterprise safety through bounded capabilities
  • Doc 14 (Build Plan) → Phase 7 Production Hardening includes enterprise readiness
  • Doc 15 (Master Spec) → Pricing tiers, deployment flexibility
  • Doc 8 (Cognition Console) → Memory governance architecture

FEATURE 1: Core Enterprise Value Proposition

What it establishes: Why enterprises would pay for aiConnected when ChatGPT Enterprise already exists.

The fundamental insight: Enterprises do NOT pay for "AI chat." They pay for control, security, integration, auditability, and productivity at scale. aiConnected can deliver all of these because of architectural decisions already made during the consumer product design.

The positioning shift: This app should NOT be marketed as "An AI chat app." It should be positioned as "A persistent cognitive workspace for organizations." That framing alone changes who buys it.

Why this matters architecturally: The product isn't being redesigned for enterprise — the consumer product's core architecture (bounded Personas, scoped memory, Instance isolation, Cipher oversight) naturally maps to enterprise requirements. Enterprise becomes a configuration layer, not a rebuild.

FEATURE 2: Three Reasons Enterprises Would Care

What it establishes: The specific enterprise pain points aiConnected solves that existing tools don’t.

Reason 1: AI Inside Workflows, Not Beside Them

Most AI tools fail in enterprise because they live in a browser tab. aiConnected’s value is that it can sit persistently on the desktop, maintain long-lived memory, act across apps/files/browsers/internal tools, and remain available without context reset. This makes it closer to a digital employee or cognitive operating layer — not a chatbot you visit when you have a question.

Reason 2: Desktop Presence Unlocks Browser-Impossible Capabilities

A desktop app (Electron or native) can do things enterprises care about that browsers cannot: monitor or assist with internal tools (CRM, ERP, legacy systems), enable secure file-system access, integrate with VPN-only internal resources, run background tasks, maintain persistent state across days/weeks. Enterprises understand this distinction very well. Browser-based AI tools have inherent security and capability limitations that desktop deployment solves.

Reason 3: Personas + Skill Constraints = Enterprise Safety

This is one of aiConnected’s strongest enterprise advantages. Enterprises hate all-knowing AI, unpredictable responses, and data leakage risk. aiConnected’s system explicitly limits Persona capabilities, separates roles (sales, ops, finance, legal, support), and prevents overreach and hallucinated authority. This aligns with SOC 2, ISO 27001, internal governance policies, and AI risk management frameworks. The skill constraint system (Doc 12) isn’t a limitation — it’s a selling point for every compliance-conscious organization.

FEATURE 3: Enterprise Use Cases That Actually Sell

What it establishes: Four concrete enterprise adoption vectors with real market demand.

Use Case 1: Internal Operations Assistant

  • Knows company SOPs
  • Answers internal questions
  • Guides employees through processes
  • Reduces internal support tickets
This alone is a massive enterprise market. Companies spend millions on internal helpdesks and knowledge bases that employees hate using.

Use Case 2: Sales + Account Intelligence Layer

  • Persistent memory per account
  • Call summaries, follow-ups, deal tracking
  • CRM integration
  • Persona trained on company sales methodology
Enterprises already spend heavily on sales enablement tools. A Persona that remembers every interaction with every account is transformative.

Use Case 3: Compliance-Safe AI Workspace

  • No data sent to public tools
  • Controlled models (self-hosted or approved APIs)
  • Audit logs
  • Memory governance
This is how enterprises ACTUALLY want to use AI. Most enterprise AI adoption is blocked by security and compliance teams. aiConnected’s architecture addresses their concerns by design.

Use Case 4: Knowledge Retention System

  • Employees leave; knowledge doesn’t
  • Institutional memory stored in structured form
  • New hires onboard faster
This is an executive-level pain point. The average company loses enormous institutional knowledge every time an experienced employee departs.

FEATURE 4: Competitive Positioning vs ChatGPT Enterprise

What it establishes: Why enterprises would choose aiConnected over the obvious incumbent. ChatGPT Enterprise limitations:
  • Still largely session-based (no persistent memory across weeks/months)
  • Limited workflow orchestration (no Agentic Teams architecture)
  • Limited persona isolation (no bounded skill slots, no role separation)
  • Limited deep integration (browser-only, no desktop presence)
  • Limited custom cognition architecture (no four-layer settings hierarchy)
aiConnected advantages:
  • Persistent cognition (Personas remember across all deployments)
  • Modular intelligence (bounded specialists, not one omniscient model)
  • Workflow-native design (Agentic Teams with Orchestrator→Manager→Worker hierarchy)
  • Persona governance (Skill Slots, memory scoping, behavioral templates)
  • Future on-prem or VPC deployment (architecture supports it from day one)
The category difference: ChatGPT Enterprise is a powerful chat tool with enterprise security. aiConnected is a cognitive operating system that happens to include chat as one interaction modality.

FEATURE 5: Enterprise Non-Negotiables (What Must Eventually Exist)

What it establishes: The seven requirements that must be met for enterprise sales, even though they don’t need to ship on day one.

The Seven Non-Negotiables:

# | Requirement | What It Means
1 | SSO (SAML / OAuth) | Employees log in with their corporate credentials, not separate accounts
2 | Role-Based Access Control | Different employees see/do different things based on their role
3 | Audit Logs | Every action is recorded — who did what, when, to what
4 | Data Isolation Per Org | One company’s data is completely invisible to another’s
5 | Clear Memory Lifecycle Rules | Memory has ownership, scope, lifespan, and deletability
6 | Admin Controls | IT admins can manage users, Personas, permissions, and policies
7 | Model Transparency | Enterprise knows exactly which AI models run where
Critical timing note: You do NOT need these on day one. But the architecture MUST support them. The current design does — if the engineering team makes the right foundational decisions.

Deployment Flexibility (Huge Future Advantage)

If the platform eventually supports Cloud (SaaS), VPC, and on-prem/air-gapped deployment, it unlocks: Healthcare, Finance, Legal, Government, and Defense contractors. Most AI startups never get here. aiConnected’s architecture can.

FEATURE 6: The Core Strategic Decision — Enterprise-Aware, Not Enterprise-First

What it establishes: The single most important strategic principle for the entire build. Three theoretical options:
  1. Build consumer-first (ignore enterprise) — risky, may require rewrite later
  2. Build enterprise-first (target enterprise from day one) — too slow, kills momentum
  3. Build enterprise-aware (architect for enterprise, build for consumers) — CORRECT
Why NOT enterprise-first:
  • Enterprise requirements before product-market fit will lock you into compliance work, force premature abstractions, delay shipping by months, and drain energy into features nobody is paying for yet
  • You’ll end up building admin dashboards no one uses, permission systems without real-world pressure, and compliance checklists without real customers
  • “Enterprise” is not a customer — it’s a category. Healthcare ≠ Finance ≠ Legal ≠ Tech ≠ Government. You cannot design correctly for all of them in advance.
Why you MUST architect for enterprise NOW:
  • If you don’t, you hit a hard wall later
  • Things that are EXTREMELY expensive to fix later: no tenant isolation, no audit trail concept, flat memory architecture, Persona bleed, no clear ownership model, tight coupling between UI and logic, hard-coded assumptions about “a user”
  • If those exist when enterprise demand arrives, enterprise is not “hard” — it’s IMPOSSIBLE
The golden rule: Build a product that a founder would love, but that a CIO would not reject.

FEATURE 7: Five Architectural Principles for Enterprise-Readiness

What it establishes: The specific engineering decisions that must be made NOW to keep enterprise adoption possible later.

Principle 1: Multi-Tenancy From Day One

Even if you only have one user per org and don’t expose org controls yet — internally, every object belongs to an Org. Every Persona, every Memory, every Workflow. This costs almost nothing now and saves everything later.
// WRONG — hard-coded single-user assumption
type Persona = {
  id: string;
  user_id: string;  // flat, no org concept
  name: string;
};

// RIGHT — org-aware from day one
type Persona = {
  id: string;
  org_id: string;   // every object belongs to an org
  user_id: string;   // user within that org
  name: string;
};

Principle 2: Hard Separation Between Cognition, Memory, UI, and Integrations

If enterprise says “We want our own models, memory rules, and logging” — you can comply without touching the UI. That is gold. Each layer must be independently configurable.
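The separation can be sketched as a set of narrow interfaces that the app core depends on, with each concrete provider swappable behind them. This is a minimal illustrative sketch only; the interface and class names (CognitionProvider, MemoryStore, AuditSink, AppCore) are assumptions, not the shipped API.

```typescript
// Illustrative sketch — names are assumptions, not the real aiConnected API.
// Each layer sits behind a narrow interface, so an enterprise deployment can
// swap models, memory rules, or logging without touching the UI layer.

interface CognitionProvider {
  complete(prompt: string): Promise<string>; // model access (cloud, VPC, on-prem)
}

interface MemoryStore {
  write(ownerId: string, content: string): void;
  read(ownerId: string): string[];
}

interface AuditSink {
  record(eventType: string, targetId: string): void;
}

// The app core depends only on the interfaces, never on concrete vendors.
class AppCore {
  constructor(
    private cognition: CognitionProvider,
    private memory: MemoryStore,
    private audit: AuditSink,
  ) {}

  async ask(userId: string, prompt: string): Promise<string> {
    const answer = await this.cognition.complete(prompt);
    this.memory.write(userId, answer); // memory layer is independently replaceable
    this.audit.record('memory_written', userId); // so is logging
    return answer;
  }
}
```

An enterprise "bring your own model and logging" request then becomes two new implementations wired into the same constructor, with zero UI changes.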

Principle 3: Identity Is a Layer, Not a Feature

Even if you start with email + password, design auth as a replaceable module. Assume SSO will exist later. Never let logic depend on “current user = everything.” Critical distinction that must exist in the schema NOW:
  • User ≠ Persona ≠ Org ≠ Role — these must be distinct concepts from day one
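A minimal sketch of auth-as-a-replaceable-module, assuming a hypothetical AuthProvider interface: business logic only ever sees an Identity (which keeps User, Org, and Role distinct), and the day-one password implementation can later be replaced by an SSO implementation behind the same interface.

```typescript
// Sketch under assumptions — AuthProvider, Identity, and PasswordAuth are
// illustrative names, not the shipped module.

// User, Org, and Role stay distinct concepts from day one.
type Identity = { user_id: string; org_id: string; role_id: string };

interface AuthProvider {
  authenticate(credentials: Record<string, string>): Identity | null;
}

// Day-one implementation: email + password (password check stubbed here).
class PasswordAuth implements AuthProvider {
  constructor(private users: Map<string, Identity>) {}

  authenticate(c: Record<string, string>): Identity | null {
    return this.users.get(c.email) ?? null;
  }
}

// Later, SSO slots in behind the same interface without touching callers:
// class SamlAuth implements AuthProvider { ... }
```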

Principle 4: Memory Governance Is Mandatory (Even If Invisible)

You don’t need admin panels yet. But you DO need: memory ownership, memory scope (Persona / Instance / org), memory lifespan rules (TTL, archive, lock), and deletability. Enterprise will ask: “Where does this memory live, and who controls it?” You should already know the answer because the schema enforces it.
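Schema-enforced governance can be sketched with two small helpers over a simplified memory shape (the fields loosely mirror the MemoryItem type in the data model section below; the helper names isExpired and canDelete are assumptions).

```typescript
// Minimal governance sketch — field names mirror the MemoryItem data model
// in this document; helper names are illustrative assumptions.

type GovernedMemory = {
  id: string;
  org_id: string;
  scope: 'persona' | 'instance' | 'org';
  created_at: string;      // ISO timestamp
  ttl_days: number | null; // null = no expiry
  locked: boolean;         // admin-locked memories cannot be deleted
};

// Lifespan rule: a memory is expired once its TTL has elapsed.
function isExpired(m: GovernedMemory, now: Date): boolean {
  if (m.ttl_days === null) return false;
  const ageMs = now.getTime() - new Date(m.created_at).getTime();
  return ageMs > m.ttl_days * 24 * 60 * 60 * 1000;
}

// Deletability rule: always answerable — allowed unless an admin locked it.
function canDelete(m: GovernedMemory): boolean {
  return !m.locked;
}
```

Because ownership, scope, TTL, and lock state live in the schema itself, "where does this memory live, and who controls it?" is answerable by reading the row.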

Principle 5: Auditability Without Bureaucracy

You don’t need SOC 2 logs today. But internally, events should be capturable: Persona created, memory written, memory accessed, action executed, external API called. Even a simple event stream now becomes enterprise gold later.
// Simple event capture — costs nothing, enables everything
type SystemEvent = {
  id: string;
  org_id: string;
  user_id: string;
  event_type: 'persona_created' | 'memory_written' | 'memory_accessed' | 
              'action_executed' | 'api_called' | 'chat_moved' | 'chat_deleted' |
              'persona_modified' | 'settings_changed';
  target_type: string;      // "persona", "memory", "chat", etc.
  target_id: string;
  metadata: Record<string, any>;
  timestamp: string;
};

FEATURE 8: What NOT to Build Yet

What it establishes: Explicit guardrails against premature enterprise feature development. Do NOT build these now:
  • Enterprise admin dashboards
  • Fine-grained permission UIs
  • Compliance workflows
  • Legal hold features
  • Custom deployment pipelines
  • Dedicated account management tooling
These come AFTER revenue signals. Building them before product-market fit is how founders burn years on features no one has asked for yet.

FEATURE 9: Enterprise Pricing Reality

What it establishes: How enterprises think about pricing, which is fundamentally different from consumer pricing. Enterprise buyers think in: per-seat pricing, department licensing, usage caps, annual contracts, support SLAs. aiConnected can justify:
  • $50–$150 / user / month (mid-market)
  • $250–$500 / user / month (enterprise roles)
  • Custom pricing for org-wide deployment
Why these prices are justifiable: Because the platform replaces multiple tools, reduces manual labor, and eliminates institutional inefficiency. Enterprise ROI is measured in headcount equivalents and error reduction, not feature count.

FEATURE 10: Strategic Adoption Phases

What it establishes: The correct sequence for growing from consumer to enterprise.
Phase | Target | What You Build
Phase 1 | Power Users / Builders | Core product, consumer UX
Phase 2 | Small Teams | Shared Instances, basic collaboration
Phase 3 | Mid-Market | Team management, basic admin, integrations
Phase 4 | Enterprise | SSO, RBAC, compliance, custom deployment
Critical rule: If you try to start at Phase 4, you never reach Phase 1. The product must earn consumer love before enterprise contracts are possible. The trajectory: Each phase validates the next. Power users prove the product works. Small teams prove collaboration works. Mid-market proves the architecture scales. Enterprise proves governance works.

Data Model Extensions (Enterprise-Ready Foundations)

// These fields should exist from day one, even if unused initially

type Org = {
  id: string;
  name: string;
  plan: 'free' | 'plus' | 'premium' | 'pro' | 'enterprise';
  settings: OrgSettings;
  created_at: string;
};

type OrgSettings = {
  allowed_models: string[];           // which models this org can use
  memory_retention_days: number;      // how long memories persist
  audit_level: 'none' | 'basic' | 'full';
  sso_enabled: boolean;
  sso_provider?: string;              // "okta", "azure_ad", etc.
  data_region?: string;               // "us-east", "eu-west", etc.
};

type OrgRole = {
  id: string;
  org_id: string;
  name: string;                       // "admin", "member", "viewer"
  permissions: Permission[];
};

type OrgMembership = {
  user_id: string;
  org_id: string;
  role_id: string;
  joined_at: string;
};

// Every core entity gets org_id
type Instance = {
  id: string;
  org_id: string;                     // ← THIS is the key addition
  user_id: string;
  name: string;
  // ... rest of Instance fields
};

type Persona = {
  id: string;
  org_id: string;                     // ← org-scoped from day one
  user_id: string;
  name: string;
  // ... rest of Persona fields
};

type MemoryItem = {
  id: string;
  org_id: string;                     // ← org-scoped from day one
  owner_type: 'user' | 'persona' | 'instance' | 'org';
  owner_id: string;
  scope: 'persona' | 'instance' | 'org' | 'system';
  ttl_days: number | null;           // memory lifespan
  locked: boolean;                    // admin can lock memories
  // ... rest of MemoryItem fields
};

Implementation Principles

  1. Every database table gets an org_id column. Even in the consumer product where there’s only one “org” per user, the column exists. This is the single cheapest decision that prevents the single most expensive rewrite later.
  2. Auth is a replaceable module. Email/password today, SSO tomorrow. The auth layer should be swappable without touching any business logic. Never scatter auth checks through the codebase — centralize them.
  3. Events are captured from day one. Every significant action (create, update, delete, access) should emit an event. Store them in a simple append-only table. You don’t need to build dashboards for them yet — just capture them. Enterprise audit requirements become trivial when the data already exists.
  4. Memory has ownership and scope, always. Every memory item knows who created it, what scope it belongs to, and what org it lives in. No orphaned memories. No ambiguous ownership. Enterprise will ask “where does this data live?” and you must be able to answer instantly.
  5. User ≠ Persona ≠ Org ≠ Role. These are four distinct concepts in the data model from day one. A user belongs to an org. A user has a role within that org. A Persona belongs to an org and a user. Collapsing any of these makes enterprise adoption require a rewrite.
  6. Don’t build enterprise UI yet. No admin dashboards, no permission management screens, no compliance workflows. These come after revenue signals. The architecture supports them; the UI doesn’t need to exist yet.
  7. Persona skill constraints are an enterprise selling point. When talking to enterprise customers, bounded Personas aren’t a limitation — they’re governance. “Our AI can’t hallucinate answers outside its defined skill set” is exactly what a CISO wants to hear.
  8. The build sequence is Phase 1→2→3→4, never skip. Power users first, then small teams, then mid-market, then enterprise. Each phase validates the next. Trying to jump to enterprise before consumer product-market fit is how startups burn years.
  9. Position as “cognitive workspace,” not “AI chat.” The language matters. Enterprise buyers purchase operating layers and productivity infrastructure. They do not purchase chat tools. The product is the same — the framing determines who buys it.
  10. Test enterprise assumptions with mid-market first. Mid-market companies (50-500 employees) have enterprise needs but consumer buying cycles. They’ll reveal which enterprise features actually matter before you invest in the full enterprise stack.
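Principle 2 above ("never scatter auth checks — centralize them") can be sketched as a single guard function that all business logic calls, so swapping the auth layer later touches one place. The names here (requirePermission, PermissionError, createPersona) are hypothetical.

```typescript
// Hedged sketch of centralized permission checking — names are illustrative.

type Permission = 'persona:create' | 'memory:read' | 'org:admin';
type Session = { user_id: string; org_id: string; permissions: Permission[] };

class PermissionError extends Error {}

// The ONE place authorization is decided; business logic never inspects tokens.
function requirePermission(session: Session, needed: Permission): void {
  if (!session.permissions.includes(needed)) {
    throw new PermissionError('missing ' + needed);
  }
}

// Business logic stays auth-agnostic: it asks the guard, then proceeds.
// Note the created object is org-scoped, per the multi-tenancy principle.
function createPersona(session: Session, name: string) {
  requirePermission(session, 'persona:create');
  return { org_id: session.org_id, user_id: session.user_id, name };
}
```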

Document 17: In-Chat Navigation (ChatNav)

Junior Developer Breakdown

Source: 17. aiConnected OS In-Chat Navigation.md Created: 2/6/2026 | Updated: 2/6/2026

Why This Document Exists

The Problem (Long Conversations Break Everything): Every AI chat system today — ChatGPT, Claude, Gemini — falls apart once conversations get long enough. Users can’t find what was said. The AI forgets what was discussed. Important decisions vanish into an infinite scroll. The only “solution” is compressing the conversation into lossy summaries, which destroys nuance, forgets constraints, and eventually makes the AI confidently wrong because it’s reasoning on a degraded copy of the original conversation.

What This Document Solves: ChatNav is a per-conversation table of contents that makes long, evolving conversations navigable, intelligible, and non-destructive over time. It doesn’t just help users scroll faster — it fundamentally changes how the AI itself accesses conversation history, replacing lossy summarization with selective rehydration of the original transcript.

The Founder’s Explicit Goal: “I want to make the context window an irrelevant concept entirely. I don’t really see why it has to be a thing in the first place.” ChatNav, combined with aiConnected’s memory system, is the mechanism for achieving that goal.

Cross-References:
  • Doc 11 (Chat Cleanup) → ChatNav provides structure that cleanup tools operate on
  • Doc 13 (Adaptive UI Tutorials) → ChatNav is an in-chat feature discovered through use, not tutorials
  • Doc 15 (Master Spec) → Memory system integration, chat-level search
  • Doc 14 (Build Plan) → Chat Kernel must support ChatNav embedding

FEATURE 1: Core Concept — What ChatNav Is (and Is NOT)

What it does: ChatNav is an in-chat, per-conversation navigation system that functions like a living table of contents for a single chat thread. Critical scope: ChatNav lives INSIDE an individual conversation and only concerns itself with THAT conversation. It does NOT replace system menus, persona selectors, or tool navigation. Those are a separate plane entirely. Mixing them would pollute both mental models. What ChatNav is:
  • A per-conversation sidebar showing clickable checkpoints
  • A living table of contents being written in real time as the conversation evolves
  • A semantic index that both the user AND the AI use
  • A floating navigation UI that provides random access to a sequential medium
What ChatNav is NOT:
  • Not system navigation (personas, tools, whiteboard, browser have their own menus)
  • Not a bookmark system (bookmarks are user-created; checkpoints are system-generated)
  • Not a search shortcut (search operates ON ChatNav data, but ChatNav isn’t search)
  • Not a sidebar full of buttons or a static tree or a settings panel disguised as navigation
The one-line framing: ChatNav gives users random access memory for a sequential medium. That’s rare, and it’s exactly what power users need once conversations get serious.

FEATURE 2: The Five Problems ChatNav Solves

What it establishes: The specific failure modes in every existing AI chat system that ChatNav addresses.

Problem 1: Scroll Collapse

Once a conversation reaches sufficient length, scrolling becomes useless. You’re no longer navigating information — you’re hunting blindly. There is no addressability for ideas.

Problem 2: Lost Meaning

Users remember THAT something important was said, but not WHERE. They know the AI gave a great recommendation or that a key decision was made, but they can’t find it without scrolling through potentially thousands of messages.

Problem 3: Context Degradation

AI systems rely on context windows and summarization. Every compaction step is lossy. Over time: nuance disappears, constraints are forgotten, original phrasing is lost, earlier decisions quietly vanish. Eventually the model is confidently wrong because it’s operating on a “telephone game version” of the conversation.

Problem 4: Re-entry Pain

Returning to a chat days, weeks, or months later is cognitively expensive. Users must reread, restate, or abandon the thread entirely. There’s no quick way to understand “what was this conversation about and where did we leave off?”

Problem 5: No Structural Memory

Conversations are treated as flat transcripts instead of structured intellectual artifacts. There’s no difference between “we discussed the weather” and “we made a critical architectural decision” — both are just messages in a scroll. The foundational insight: Scrolling is not navigation, and summarization is not memory. ChatNav exists because both of these assumptions are wrong.

FEATURE 3: Checkpoint System — The Backbone

What it does: Creates stable anchor points inside a conversation. Each checkpoint represents a moment where something meaningfully changed or was worth preserving.

Two Checkpoint Types:

A. Forced Checkpoints (Token-Interval Based)
  • Occur automatically at predefined intervals (e.g., every 500,000 tokens)
  • Guarantee retrievability regardless of topic changes
  • Align with the aiConnected memory snapshot system
  • Ensure no conversation can become structurally unindexable
  • These exist even if the topic hasn’t changed — they’re “save states”
B. Semantic Checkpoints (Meaning-Driven)
  • Occur when the system detects:
    • A topic pivot (conversation shifts direction)
    • A scope shift (broad → specific or specific → broad)
    • A conceptual crystallization (“this is the important takeaway” moments)
    • A decision point (something was decided or committed to)
    • A new constraint or framing (rules or parameters were established)
  • These are AI-detected, not user-declared
Together, these two mechanisms ensure: Nothing important is lost, and nothing long becomes opaque.

What ChatNav is really saying: Not “here’s what we talked about” but “here’s where meaning CHANGED.” That’s why it stays useful even when the broader topic remains the same but the conversation goes deeper. Most chat systems can’t handle depth. ChatNav is explicitly designed for it.

Each checkpoint contains:
  • A stable anchor in the transcript (exact position)
  • Associated metadata (type, timestamp, token position)
  • A short semantic summary
  • Links to the raw transcript section it covers
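The forced-checkpoint trigger described above is a simple interval rule: fire whenever the conversation has advanced another token-interval past the last checkpoint. A minimal sketch, assuming the 500,000-token example interval from this section (the function name is an assumption):

```typescript
// Sketch of the forced ("save state") checkpoint trigger — illustrative only.
// Semantic checkpoints are AI-detected and not modeled here.

const FORCED_INTERVAL = 500000; // tokens, per the document's example

function shouldForceCheckpoint(
  totalTokens: number,        // tokens in the conversation so far
  lastCheckpointToken: number, // token position of the most recent checkpoint
  interval: number = FORCED_INTERVAL,
): boolean {
  // Fire once another full interval has elapsed since the last checkpoint,
  // regardless of whether the topic changed.
  return totalTokens - lastCheckpointToken >= interval;
}
```

Because the rule depends only on token distance, no conversation can grow structurally unindexable even if its topic never pivots.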

FEATURE 4: Temporal Organization — Date-Based Segmentation

What it does: When a conversation spans multiple sessions across different days, weeks, or months, ChatNav introduces date headers inside the sidebar to organize checkpoints by session. How it works: The sidebar shows a running list of checkpoints, but at each session boundary, a date header appears (e.g., “December 15, 2025” / “January 3, 2026” / “February 8, 2026”). Checkpoints under each date header are the topics and pivots that occurred during that session. What this achieves:
  • The user can see WHEN parts of the conversation happened
  • The age of assumptions becomes visible (a decision from 3 months ago may need revisiting)
  • Long-running conversations feel continuous instead of fragmented
  • Users don’t need to start new chats just because time passed
Critical principle: Time does not break the conversation. Time becomes metadata inside it. A conversation can span days, weeks, or months and still be one cohesive thread — ChatNav makes that manageable.
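Grouping checkpoints under date headers is a straightforward bucketing step over the ordered checkpoint list. A sketch, using a minimal checkpoint shape that mirrors the session_date field in the data model section below (the grouping helper itself is an assumption):

```typescript
// Sketch of date-based segmentation — the helper name is illustrative.

type NavCheckpoint = { id: string; session_date: string; summary: string };

// Bucket an ordered checkpoint list under session-date headers.
function groupBySession(checkpoints: NavCheckpoint[]): Map<string, NavCheckpoint[]> {
  const sessions = new Map<string, NavCheckpoint[]>();
  for (const cp of checkpoints) {
    const group = sessions.get(cp.session_date) ?? [];
    group.push(cp);
    sessions.set(cp.session_date, group);
  }
  // Map preserves insertion order, so headers come out chronologically
  // when the input is ordered by position.
  return sessions;
}
```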

FEATURE 5: Hover/Expand Summaries — Orientation Without Jumping

What it does: Each checkpoint includes a short summary of what that section of the conversation covers, visible on hover or via an expand/dropdown interaction.

For the User:

  • Instant orientation — understand what a section is about without jumping to it
  • Decide whether a section is relevant BEFORE scrolling there
  • Skim understanding of an entire conversation in seconds
  • Re-enter months-old conversations and immediately understand: what it’s about, how it evolved, where to focus

For the AI (This Is Critical):

These summaries are not just UX features. They are semantic routing metadata. Instead of dragging entire conversations forward into context, the AI can:
  1. Inspect checkpoint summaries to find WHERE meaning lives
  2. Identify the relevant sections for the current question
  3. Selectively reload ONLY the necessary raw transcript sections
  4. Reason on the original full-fidelity data, not a degraded summary
The paradigm shift: Summaries become INDICES, not replacements. The raw transcript remains the source of truth. The summaries tell the AI where to look, not what to think.

FEATURE 6: Selective Context Rehydration

What it does: Instead of carrying the entire conversation forward in context (impossible for long conversations) or relying on lossy summaries (leads to confident errors), the AI uses ChatNav metadata to selectively reload only the relevant portions of the original transcript. How the traditional approach fails:
Step | What Happens | What’s Lost
1 | Full conversation in context | Nothing (but unsustainable)
2 | First summarization | Some nuance, exact phrasing
3 | Summary of summary | Constraints, edge cases
4 | Summary of summary of summary | Original decisions, context
N | Nth compression | Everything meaningful
How ChatNav + Memory changes this:
Component | Role
Chat transcript | Immutable ground truth (never modified)
ChatNav checkpoints | Semantic index + access map
AI Connected Memory | Cold storage + full-fidelity retrieval layer
Active context | Selectively rehydrated, not blindly carried forward
The rehydration flow:
  1. AI receives a question that references something earlier in the conversation
  2. AI consults ChatNav summaries to find where that topic was discussed
  3. AI selectively reloads the raw transcript section(s)
  4. AI reasons on the original data with full nuance
  5. Context window contains only what’s needed, not everything
The founder’s framing: “The context window becomes an irrelevant concept.” Not by making it bigger — by making it unnecessary.
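Steps 3–5 of the rehydration flow can be sketched as a budgeted loader: given the checkpoint sections the AI judged relevant, reload raw transcript sections in order until the token budget is spent. The Section shape loosely mirrors RehydrationResult in the data model below; the loadSection callback stands in for the Memory cold-storage layer and is an assumption.

```typescript
// Hedged sketch of selective rehydration — names are illustrative.

type Section = { checkpoint_id: string; raw_transcript: string; token_count: number };

function rehydrate(
  relevantIds: string[],                 // checkpoints chosen via ChatNav summaries
  loadSection: (id: string) => Section,  // stand-in for cold-storage retrieval
  maxTokens: number,                     // context budget for rehydrated content
): Section[] {
  const out: Section[] = [];
  let used = 0;
  for (const id of relevantIds) {
    const s = loadSection(id);
    if (used + s.token_count > maxTokens) break; // stop at first overflow,
    out.push(s);                                 // keeping sections in order
    used += s.token_count;
  }
  return out;
}
```

The design choice to stop at the first section that overflows (rather than skip it and keep packing) preserves the order and contiguity of the reloaded material; a real implementation might rank or truncate instead.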

FEATURE 7: Search Over Semantic Metadata

What it does: Because checkpoint summaries exist as structured metadata, search can operate on meaning rather than raw text.

Without ChatNav: Search matches keywords in raw transcript → floods of irrelevant results → user scrolls through matches trying to find the right one → gives up.

With ChatNav: Search matches against checkpoint summaries → precise, conceptual results → user sees which SECTION of the conversation contains what they need → clicks and jumps directly there.

Search becomes: Semantic and scoped, not brute-force. Users search for concepts (“when did we decide on the pricing model?”) and ChatNav’s summaries route them to the right section.
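The summary-first matching step can be sketched as a filter over checkpoint summaries. Plain substring matching is used here purely for illustration; the real system would presumably rank matches semantically, and the function name is an assumption.

```typescript
// Sketch of summary-first search — match against checkpoint summaries
// before ever touching the raw transcript. Illustrative only.

type SearchableCheckpoint = { id: string; summary: string };

function searchSummaries(
  checkpoints: SearchableCheckpoint[],
  query: string,
): SearchableCheckpoint[] {
  const q = query.toLowerCase();
  // Each hit identifies a SECTION of the conversation, not a raw message,
  // so results point the user (or the AI) at a jump target.
  return checkpoints.filter(cp => cp.summary.toLowerCase().includes(q));
}
```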

FEATURE 8: Multi-Persona and Conversation Continuity

What it does: When a new Persona enters an existing conversation, or when a conversation is split/forked into a new thread, ChatNav provides rapid context onboarding. The problem without ChatNav: A new Persona entering a 2-hour conversation would need the entire transcript loaded into context (expensive, noisy) or would need a lossy summary (misses nuance). Either way, the Persona starts poorly informed. How ChatNav solves this:
  1. Walk the checkpoint summaries in order → instant understanding of conversation arc
  2. Selectively rehydrate key sections relevant to the Persona’s role
  3. Reach operational understanding quickly without reading everything
This applies to:
  • New Persona added to existing chat → uses summaries as a briefing document
  • Conversation forked/split into new thread → new thread inherits relevant checkpoint context
  • User returning after a long gap → scans summaries to re-orient
Why this matters for the platform: Multi-agent continuity (multiple Personas in one conversation over weeks) is only feasible if new participants can get caught up efficiently. ChatNav makes this possible without degrading quality.

FEATURE 9: Date-Aware Session Continuity

What it does: Preserves one continuous conversation across days, weeks, or months without forcing chat restarts. How it works:
  • Session boundaries are marked by date headers in ChatNav
  • Visual section breaks make time visible without breaking flow
  • The conversation remains one cohesive thread regardless of how much time passes between sessions
What this enables:
  • Age awareness of assumptions (a recommendation from January may not apply in March)
  • Long-term project continuity (a months-long development conversation stays intact)
  • No forced chat restarts (users don’t have to start new chats just because a week passed)
Principle: The conversation is the intellectual artifact. Time is a property of that artifact, not a reason to destroy it.

FEATURE 10: The Philosophy — Intelligence Should Not Require Forgetting

What it establishes: The design philosophy that drives every ChatNav decision. ChatNav is built on one key belief: Intelligence should not require forgetting to function. Instead of pretending memory is infinite (context windows), ChatNav:
  • Makes memory ADDRESSABLE (you can point to specific moments)
  • Makes meaning INSPECTABLE (summaries let you understand without rereading)
  • Makes time STRUCTURAL (when something was said is metadata, not a deletion trigger)
It doesn’t interrupt conversation. ChatNav is a sidebar — the chat itself remains natural and linear. Users who don’t need it can ignore it entirely.

It doesn’t force structure on the user. The system creates checkpoints automatically. Users don’t have to “organize” their conversation.

It simply reveals the structure that already exists. Every conversation has topic shifts, decision points, and conceptual boundaries. ChatNav surfaces them instead of letting them disappear into scroll.

The deeper architectural insight: ChatNav separates three things that traditional systems conflate:
  • Orientation (where am I? what happened?) → ChatNav handles this
  • Storage (what was actually said?) → Immutable transcript handles this
  • Reasoning (what should I think about this?) → Selective rehydration handles this
By separating these, the system can scale to conversations of any length without degradation. ChatNav must NEVER rewrite history. Checkpoints can be added, summaries can be refined, labels can evolve — but the underlying chat content must remain immutable, addressable, and retrievable in full fidelity. This is a non-negotiable invariant.

Data Model

type Checkpoint = {
  id: string;
  chat_id: string;
  type: 'forced' | 'semantic';
  trigger: 'token_interval' | 'topic_pivot' | 'scope_shift' | 
           'decision_point' | 'constraint_established' | 'conceptual_crystallization';
  position: {
    message_id: string;               // anchor message
    token_offset: number;             // position within conversation
  };
  summary: string;                    // short semantic description
  summary_detail?: string;            // expanded description (hover/expand)
  session_date: string;               // date this checkpoint was created
  created_at: string;
  metadata: {
    topics: string[];                 // topics covered in this section
    participants: string[];           // which Personas were active
    token_range: [number, number];    // start/end token positions
  };
};

type ChatNavState = {
  chat_id: string;
  checkpoints: Checkpoint[];         // ordered by position
  sessions: ChatNavSession[];        // grouped by date
  last_checkpoint_at: string;
  total_tokens: number;
  forced_checkpoint_interval: number; // e.g., 500000
};

type ChatNavSession = {
  date: string;                       // "2026-01-15"
  checkpoint_ids: string[];           // checkpoints in this session
  session_start_message_id: string;
  session_end_message_id: string;
};

type RehydrationRequest = {
  chat_id: string;
  checkpoint_ids: string[];           // which sections to reload
  purpose: string;                    // what the AI needs this context for
  max_tokens?: number;                // budget for rehydrated content
};

type RehydrationResult = {
  chat_id: string;
  sections: {
    checkpoint_id: string;
    raw_transcript: string;           // full-fidelity original text
    token_count: number;
  }[];
  total_tokens: number;
};

API Endpoints

Method | Endpoint | Purpose
GET | /chats/:chatId/chatnav | Get full ChatNav state for a conversation
GET | /chats/:chatId/chatnav/checkpoints | List all checkpoints with summaries
GET | /chats/:chatId/chatnav/checkpoints/:id | Get single checkpoint with detail
POST | /chats/:chatId/chatnav/checkpoints | Create manual checkpoint (if user-created checkpoints are added later)
POST | /chats/:chatId/chatnav/rehydrate | Selectively reload transcript sections
GET | /chats/:chatId/chatnav/search?q= | Search checkpoint summaries
GET | /chats/:chatId/chatnav/sessions | Get session list with dates

Implementation Principles

  1. ChatNav is per-conversation, never global. It lives inside a single chat thread. System-level navigation is completely separate. Never mix these two planes.
  2. Checkpoints are created automatically. Users don’t “make” checkpoints. The system detects meaningful moments (semantic) and enforces regular intervals (forced). The user’s job is to have the conversation — ChatNav handles the structure.
  3. Summaries are indices, not replacements. The raw transcript is always the source of truth. Summaries tell the AI WHERE to look, not WHAT to think. If a summary and the original transcript disagree, the transcript wins.
  4. The transcript is immutable. Checkpoints can be added, summaries can be refined, but the underlying chat content must never be modified. Full-fidelity retrievability is a non-negotiable invariant.
  5. Selective rehydration over full replay. When the AI needs context from earlier in the conversation, it should load only the relevant sections, not the entire history. ChatNav summaries guide which sections to reload.
  6. Time is structural metadata. Date headers in ChatNav are not cosmetic — they communicate assumption age, decision freshness, and conversation continuity. The system should be able to reason about WHEN something was said, not just WHAT was said.
  7. ChatNav enables multi-agent onboarding. When a new Persona enters an existing conversation, ChatNav summaries serve as a briefing document. The Persona doesn’t need the full transcript — it needs the structured overview plus selective deep-dives.
  8. The floating UI must never interrupt flow. ChatNav is a sidebar that exists alongside the conversation. It’s always accessible but never in the way. Users who don’t need it should be able to ignore it completely.
  9. Search operates on summaries first. When users search within a conversation, the search should match against checkpoint summaries before falling back to raw transcript search. This produces more precise, conceptually-relevant results.
  10. ChatNav is the mechanism for making context windows irrelevant. The founder’s goal is explicit: context window size should not limit conversation quality. ChatNav + Memory achieves this by replacing “carry everything forward” with “know where everything is and reload what’s needed.”
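Principle 9 (summaries-first search) might look like the following sketch. The `NavCheckpoint` shape and `searchChat` function are illustrative assumptions, simplified from the Checkpoint type above.

```typescript
// Hypothetical sketch of principle 9: match checkpoint summaries first,
// fall back to raw transcript search only when summaries produce no hits.
type NavCheckpoint = { checkpoint_id: string; summary: string; raw_transcript: string };

function searchChat(query: string, checkpoints: NavCheckpoint[]): string[] {
  const q = query.toLowerCase();
  const bySummary = checkpoints
    .filter((c) => c.summary.toLowerCase().includes(q))
    .map((c) => c.checkpoint_id);
  if (bySummary.length > 0) return bySummary; // precise, concept-level hits
  // Fallback: brute-force transcript scan.
  return checkpoints
    .filter((c) => c.raw_transcript.toLowerCase().includes(q))
    .map((c) => c.checkpoint_id);
}
```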

Document 18: Context Windows in AI (Fluid Context)

Junior Developer Breakdown

Source: 18. aiConnected OS Context Windows in AI.md Created: 2/6/2026 | Updated: 2/6/2026

Why This Document Exists

The Problem (Context Windows Destroy Long Conversations): Every AI chat system today treats context as a single, monolithic token window. Once that window fills up, old instructions fall out, tone regresses, key decisions are forgotten, and conversations lose coherence. Users are forced to restate rules, intent, and constraints — or worse, the AI silently becomes confidently wrong because it’s reasoning on a degraded, over-summarized copy of the original conversation.

What This Document Solves: The founder designed “Fluid Context” — a chat-layer architecture that replaces the single context window with a system of typed context classes. Different information has different lifetimes, mutability, and priority. Some context is permanent (instructions, personality, decisions). Some is always hot (recent conversation). Some is cold but retrievable (older transcript). Some is ephemeral (active response workspace). By classifying context and assembling it intentionally per turn, conversations can scale indefinitely without degradation.

The Founder’s Key Insight: “Context loss is not a memory problem. It is a context classification and enforcement problem.”

Why This Matters for Developers: This is the architectural backbone that makes ChatNav (Doc 17), Instruction Memory (Doc 15), and the entire aiConnected memory system actually work at the chat layer. Without Fluid Context, every other memory feature is building on sand — because the model will eventually forget everything regardless. This document defines HOW context gets assembled on every single turn.

Cross-References:
  • Doc 17 (ChatNav) → Provides the checkpoint and summary infrastructure Fluid Context consumes
  • Doc 15 (Master Spec) → Instruction Memory, four-layer settings hierarchy, per-message instructions
  • Doc 8 (Cognition Console) → Memory governance and knowledge graph integration
  • Doc 19 (Fluid UI Architecture) → Fluid Context is the chat-layer complement to the Fluid UI interaction layer

FEATURE 1: Core Concept — What Fluid Context Is

What it is: A dedicated system for managing context in live, turn-by-turn chat interactions. It sits inside the chat window itself, acting as the runtime context compiler that determines WHAT the AI sees on each turn and WHY.

What it is NOT:
  • Not a persona system
  • Not a model-level memory mechanism
  • Not a long-term knowledge base replacement
  • Not a separate “agent brain”
  • Not OS-level orchestration
Scope: Strictly turn-by-turn chat. Nothing else. The system governs how conversational context is preserved, structured, retrieved, and re-assembled during active user-AI chat interactions.

The core design principle: Context is not a single thing. Context has types, lifetimes, and priorities. Fluid Context formalizes this by dividing chat context into explicit context classes, each with a defined role, persistence model, and injection method. At every turn, Fluid Context ASSEMBLES the response context from these classes rather than blindly appending raw conversation history.

Why current systems don’t work this way: Transformers have no native concept of “context classes.” Everything must be compiled into a single linear token stream before inference. Current systems use a rolling window because it’s deterministic, simple, append-only, easy to debug, and avoids the subtle regressions that a classification system can introduce.

The founder’s position: Those are engineering trade-offs, not fundamental limitations. The correct abstraction is typed context — and aiConnected will implement it.

FEATURE 2: The Problem — Why Single Context Windows Fail

What it establishes: The specific failure modes that Fluid Context eliminates. Traditional chat systems treat context as one big token dump. When the window fills:
  • Instruction Forgetting: The AI was told to be professional and warm, use a specific format, avoid certain topics. After enough turns, those instructions fall out of the window and the AI reverts to default behavior. Users must re-state rules constantly.
  • Tone Regression: The AI starts with the right personality but gradually drifts back to its base behavior as the system prompt gets pushed further from the active window.
  • Decision Amnesia: Key decisions made early in the conversation (“we agreed to use React, not Vue”) disappear from context. The AI either forgets or contradicts prior agreements.
  • Lossy Summarization Chains: When context is summarized to fit the window, each compression step destroys information. Summary of summary of summary = telephone game. Nuance dies, causality blurs, original phrasing disappears, edge-case constraints get smoothed out.
  • Re-entry Cost: Returning to a conversation after days or weeks means the AI has no understanding of what happened unless the user re-explains everything.
The reframe: Context loss is not a memory problem. It is a context classification and enforcement problem. Different information has different lifetimes, mutability, and priority. Treating all of it the same guarantees failure at scale.

FEATURE 3: Fluid Context Architecture — The Four Context Classes

What it establishes: The complete class system that replaces the monolithic context window. Every chat turn is constructed from four distinct classes:

Class 1: Fixed Context Classes (Sticky / Permanent)

Definition: Information that MUST NOT decay, drift, or disappear unless the user explicitly changes it. Properties:
  • Immutable by default
  • Versioned when updated (changes are tracked, not overwritten)
  • Automatically included with EVERY SINGLE TURN
  • Not subject to token-window eviction
  • From the model’s perspective, these behave as if they are always in the context window, regardless of conversation length
What goes here:
  • Personality and tone (“Professional, warm, concise”)
  • Writing rules (“No emojis in documents”)
  • Formatting constraints
  • Behavioral constraints (“Do not speculate”)
  • Hard facts established in the conversation (“This document is named X”)
  • User-defined invariants (“Always respond as a systems architect”)
  • Project-level rules and decisions
Why this is the most important class: Users don’t actually care if the AI remembers everything. They care that it remembers THE RULES. Tone, constraints, decisions, invariants, naming conventions, prohibitions — these are governing facts, not conversational facts. Making them sticky eliminates instruction forgetting, tone regression, style drift, and the need for users to re-prompt rules every N turns.

Key distinction: These are not “memories” and they are not “retrieved.” They are ALWAYS PRESENT, sent as part of the package with every response so the AI always responds in the way the user intended.

Class 2: Active Working Context (Hot Context)

Definition: A continuously sliding window of the most recent conversation turns, kept fully intact and unsummarized. Properties:
  • Size is configurable (128K, 250K, 500K tokens — engineering choice, not conceptual constraint)
  • Always “hot” — no summarization, no chunking, no retrieval latency
  • Contains the user’s latest questions and AI’s latest responses verbatim
  • Guarantees immediate conversational coherence
What this handles:
  • “What you just said” is always available
  • Implicit references resolve correctly (“That idea”, “What you just said”, “Why does that matter?”)
  • Subtle corrections work (“No, I meant for the interface, not the system”)
  • Turn-to-turn continuity is preserved without inference gaps
Key distinction: This is NOT “memory.” This is WORKING ATTENTION. Everything outside this window may be archived, indexed, or retrieved — but everything inside it is guaranteed live. Trying to “RAG” the last few turns is a category error.
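The hot-window invariant above can be sketched as a verbatim sliding buffer: append the new turn, then evict whole turns oldest-first once the token budget is exceeded. The `Msg` shape and per-message token counts are assumptions; in the real system, evicted turns would flow into the ChatNav-indexed archive rather than being lost.

```typescript
// Hypothetical sketch: maintain the hot window as a verbatim sliding
// buffer. Messages are never summarized, only evicted oldest-first.
type Msg = { id: string; text: string; tokens: number };

function appendToHotWindow(window: Msg[], msg: Msg, maxTokens: number): Msg[] {
  const next = [...window, msg];
  let total = next.reduce((n, m) => n + m.tokens, 0);
  // Evict whole turns from the front; always keep at least the newest.
  while (total > maxTokens && next.length > 1) {
    total -= next.shift()!.tokens;
  }
  return next;
}
```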

Class 3: Dynamic Retrieved Context (Cold → Warm)

Definition: Context that is not currently hot but is still part of the conversation’s history or related knowledge. What it includes:
  • Earlier chat segments beyond the hot window
  • Prior checkpoints from ChatNav
  • Related documents
  • Decisions made thousands of tokens ago
  • External references connected via the knowledge graph
Mechanism:
  • Indexed by ChatNav summaries, keywords, and metadata
  • Stored as FULL TRANSCRIPTS, not just summaries
  • Retrieved via RAG only when relevant
  • Rehydrated into the working context as needed
Critical rule: Summaries are navigation and search aids ONLY. They are never the authoritative source of understanding. When retrieved, the AI accesses the EXACT ORIGINAL TEXT, preserving full fidelity. This is what prevents the “telephone game” degradation that plagues every other system.

Class 4: Response Context (Ephemeral)

Definition: Temporary context used to support the current generation only. Examples:
  • A large document being written (50-page PRD)
  • A multi-section analysis
  • Extended technical documentation
  • Code refactoring across multiple files
Properties:
  • Exists only for the duration of the response
  • Can be larger than the hot conversational window
  • Does not automatically persist into future turns
  • Can optionally be checkpointed afterward
  • Discarded immediately after completion
Why this matters: If a user asks “Write me a 60-page PRD,” the AI needs massive working space for THAT response. But that working space should not pollute future conversational memory. This class provides transient expansion without bloating long-term context.
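A minimal sketch of the ephemeral workspace lifecycle, assuming a simplified `Workspace` shape (the spec's ResponseContext carries more fields) and a hypothetical `finishResponse` helper:

```typescript
// Hypothetical sketch: a per-response workspace (Class 4). It exists
// only for the current generation and is dropped afterward unless the
// user explicitly checkpoints it.
type Workspace = { message_id: string; buffer: string[]; persisted: boolean };

function finishResponse(ws: Workspace, checkpoint: boolean): Workspace | null {
  // Persist only on explicit request; otherwise discard so the
  // workspace never leaks into future conversational context.
  return checkpoint ? { ...ws, persisted: true } : null;
}
```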

FEATURE 4: Context Assembly Process — What Happens Every Turn

What it establishes: The step-by-step procedure Fluid Context executes on every user message.

Per-Turn Assembly:

Step | Action | Purpose
1 | Preserve the hot window | Append the new user message; maintain the rolling token limit
2 | Inject fixed context classes | Identity, engagement mode, decisions, constraints — always present
3 | Evaluate relevance signals | Does the user reference earlier material? Does the task require background? Does the hot window lack needed info?
4 | Retrieve archived context if needed | Use ChatNav summaries as search tools; pull original transcripts only when relevant
5 | Construct the active inference context | Ordered by PRIORITY, not chronology — clean, intentional, bounded
6 | Generate the response | Using only the assembled context, without dragging irrelevant history forward
The key difference from traditional systems: Traditional systems do Step 1 only (append and hope). Fluid Context treats every turn as a deliberate assembly operation where the system decides what the AI should see based on classification rules, not just recency.

Priority ordering (when space is limited):
  1. Fixed context classes (always first, never evicted)
  2. Hot conversational window (always second, never summarized)
  3. Retrieved archival context (injected when relevant)
  4. Response workspace (allocated per-generation)
If the total exceeds the model’s actual token limit, archival retrieval is trimmed first, then hot window is reduced — but fixed classes are NEVER dropped.
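The trimming order above can be sketched as a small budget function. The names and the flat token accounting are assumptions; the invariant it illustrates is the one the spec states: retrieval shrinks first, the hot window second, and fixed context never.

```typescript
// Hypothetical sketch of the priority-ordered budget: fixed classes are
// untouchable, archival retrieval is trimmed first, then the hot window.
type Budgeted = { label: string; tokens: number };

function assembleBudgets(
  fixed: number,
  hot: number,
  retrieved: number,
  modelLimit: number,
): Budgeted[] {
  let r = retrieved;
  let h = hot;
  // 1) Trim archival retrieval first.
  if (fixed + h + r > modelLimit) r = Math.max(0, modelLimit - fixed - h);
  // 2) Then shrink the hot window; fixed context is never dropped.
  if (fixed + h + r > modelLimit) h = Math.max(0, modelLimit - fixed);
  return [
    { label: "fixed", tokens: fixed },
    { label: "hot", tokens: h },
    { label: "retrieved", tokens: r },
  ];
}
```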

FEATURE 5: Integration with ChatNav and Memory

What it establishes: How Fluid Context consumes ChatNav output and interacts with the aiConnected memory system.

ChatNav Integration

ChatNav provides the structural signals Fluid Context uses for retrieval:
  • Topic anchors and decision points
  • Checkpoint boundaries (forced at token thresholds, semantic at topic pivots)
  • Session boundaries (date changes)
  • Navigable summaries and metadata
Relationship: ChatNav defines WHERE the conversation has been. Fluid Context determines WHAT still matters now.

AI-Connected Memory Integration

AI-Connected Memory provides the storage and retrieval infrastructure:
  • Stores full transcripts for each checkpointed segment
  • Generates summaries, keywords, and metadata
  • Maintains a RAG-accessible archive
  • Preserves lossless recall
Critical rule: Fluid Context does NOT rely on summaries for understanding. Summaries, metadata, and keywords are INDEXING TOOLS that accelerate retrieval and guide relevance selection. They never replace primary source material. When older context becomes relevant again, Fluid Context retrieves the ORIGINAL TRANSCRIPT, not a compressed interpretation. This guarantees: no semantic loss, no accumulated distortion, no “telephone game” degradation.

FEATURE 6: Fixed Context Versioning

What it establishes: How fixed (sticky) context classes handle changes without breaking history.

The problem: Fixed context must be permanent, but users sometimes DO change their mind. “Actually, drop the formal tone, be more conversational here.”

The solution: Fixed context items are versioned, not overwritten.

How it works:
  • When a fixed context item is created, it gets version 1
  • When the user explicitly changes it (“actually, use a casual tone”), the old version is archived and a new version becomes active
  • The AI always uses the CURRENT version
  • The change history is visible and auditable
  • Only explicit user intent triggers a version change — the system never auto-modifies fixed context
Why versioning matters: If someone later asks “what tone were we using before?”, the system can answer. If a Persona needs to understand the evolution of a conversation’s rules, the version history provides that. And critically, accidental changes are impossible — the user must deliberately prompt the change.
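The versioning rule might be implemented along these lines. The `FixedItem` shape is a pared-down FixedContextItem, and the id scheme and `reviseFixedItem` helper are purely illustrative assumptions.

```typescript
// Hypothetical sketch: fixed context items are superseded, never
// overwritten, so the full change history stays auditable.
type FixedItem = {
  id: string;
  content: string;
  version: number;
  is_active: boolean;
  previous_version_id?: string;
};

function reviseFixedItem(history: FixedItem[], currentId: string, newContent: string): FixedItem[] {
  const current = history.find((i) => i.id === currentId && i.is_active);
  if (!current) throw new Error("no active item to revise");
  const next: FixedItem = {
    id: `${currentId}-v${current.version + 1}`,   // illustrative id scheme
    content: newContent,
    version: current.version + 1,
    is_active: true,
    previous_version_id: current.id,
  };
  // Archive the old version rather than deleting it.
  return history
    .map((i) => (i.id === currentId ? { ...i, is_active: false } : i))
    .concat(next);
}
```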

FEATURE 7: Cross-Platform Portability via MCP

What it establishes: Fluid Context is not locked to the aiConnected interface. It’s built as an MCP server, making it portable across any AI platform. How it works:
  • All context classes, memories, chat histories, and metadata are stored outside any specific chat environment
  • Fluid Context is exposed as an MCP (Model Context Protocol) server
  • Any AI platform that supports MCP (Claude, ChatGPT, Gemini, etc.) can connect to it
  • When enabled, the AI on ANY platform can access the user’s full context history
The user experience:
  1. User has a conversation in ChatGPT
  2. User switches to Claude and enables the Fluid Context MCP
  3. User says: “Do you remember the last message I just sent you?”
  4. Claude retrieves the context from the MCP server and picks up exactly where ChatGPT left off
Why “Fluid” is the right name: Context flows across platforms. The user’s understanding, rules, decisions, and conversation history are not trapped inside any one vendor’s system. They belong to the USER and follow them wherever they go. What this means architecturally:
  • Context storage must be vendor-agnostic (no dependency on OpenAI/Anthropic/Google internal formats)
  • The MCP server must expose clean APIs for context class retrieval
  • Authentication must be user-controlled (the user decides which platforms can access their context)
  • The system must handle platform-specific token limits (Claude’s window vs GPT’s window) by assembling context appropriately for each target

FEATURE 8: Why This Doesn’t Already Exist (Honest Assessment)

What it establishes: The real reasons ChatGPT, Claude, and Gemini don’t already use typed context — and why those reasons are surmountable.
  • Reason 1: Transformers have no native context classes. Everything must be compiled into a single linear token stream. The compilation step — deciding what to include, how to order it, how much space each class gets, what overrides what — is operationally non-trivial.
  • Reason 2: Rolling context is operationally simpler. A single rolling window is deterministic, append-only, easy to reason about, easy to reproduce, easy to debug. Fluid Context introduces assembly logic, priority rules, failure modes when selection is wrong, and ordering sensitivity.
  • Reason 3: Subtle regressions are poison at scale. For mass-market chat systems serving millions of users, even rare context assembly errors create support tickets and trust damage. Rolling context has predictable failure modes (forgetting). Fluid Context has unpredictable failure modes (wrong retrieval, wrong priority).
  • Reason 4: The market hasn’t demanded it yet. Most users don’t have conversations long enough to hit context limits severely. Power users who DO hit these limits are a minority — but they’re exactly aiConnected’s target audience.
The founder’s position: These are valid engineering trade-offs, not fundamental impossibilities. The failure modes are more complex, but the benefits are transformative. For a system explicitly designed for long-term, deep, multi-session conversations — which is exactly what aiConnected is — Fluid Context is the correct architecture.

FEATURE 9: What Fluid Context Eliminates vs Preserves

What it establishes: The complete impact statement.

Eliminates:

  • Context bloat from endlessly appended chat logs
  • Lossy summarization chains
  • Accidental anchoring to irrelevant past turns
  • Topic drift in long conversations
  • Forced trade-offs between memory and performance
  • Instruction forgetting and tone regression
  • The need for users to re-state rules every N turns
  • Platform lock-in for conversation history

Preserves:

  • Immediate conversational coherence (hot window)
  • Long-term continuity (fixed classes + archival retrieval)
  • Full recall when needed (original transcripts, not summaries)
  • Deterministic behavior (fixed classes guarantee consistency)
  • Explainable context composition (every turn’s assembly can be inspected)
  • Cross-platform portability (MCP server)

FEATURE 10: Honest Assessment — Strengths and Risks

What it establishes: The founder asked “what do you think?” and received a grounded evaluation.

What’s fundamentally correct:

  • Context classification is the right abstraction. Different information has different lifetimes and priorities. Treating it uniformly guarantees failure at scale.
  • Sticky context is the most important innovation. Users care that the AI remembers THE RULES, not everything. Making instructions permanent eliminates the most complained-about failure in current systems.
  • Hot context is correctly distinguished from memory. Recent conversation is working attention, not retrieved memory. RAG-ing the last few turns is a category error.
  • Summaries as indices, not replacements, is the correct model. This prevents the degradation chain that destroys every other system.
  • Cross-platform MCP is a genuine differentiator. No other system lets users carry their context between vendors.

What requires careful engineering:

  • Assembly logic must be deterministic and testable. Subtle bugs in context assembly are worse than forgetting — they cause the AI to be confidently wrong in ways that are hard to diagnose.
  • Priority conflicts between classes need explicit rules. When fixed context and hot context disagree, which wins? These edge cases must be defined, not discovered in production.
  • Token budget allocation across classes must be tunable. Different conversations need different proportions. A coding session needs more hot context; a long planning conversation needs more archival retrieval.
  • Cross-platform context assembly must handle different model capabilities. Claude’s 200K window assembles differently than GPT’s 128K window. The MCP server must be model-aware.

Data Model

type FluidContextConfig = {
  chat_id: string;
  hot_window_size: number;             // tokens (e.g., 128000, 250000, 500000)
  fixed_class_budget: number;          // max tokens for all fixed classes combined
  retrieval_budget: number;            // max tokens for archival retrieval per turn
  checkpoint_interval: number;         // forced checkpoint every N tokens
};

type FixedContextItem = {
  id: string;
  chat_id: string;
  class: 'identity' | 'intent' | 'decision' | 'constraint' | 'instruction';
  content: string;
  version: number;
  created_at: string;
  updated_at: string;
  created_by: 'user' | 'system';      // user-declared vs AI-detected
  is_active: boolean;                  // false = superseded by newer version
  previous_version_id?: string;
};

type HotWindow = {
  chat_id: string;
  messages: Message[];                 // most recent N tokens, verbatim
  total_tokens: number;
  oldest_message_id: string;
  newest_message_id: string;
};

type RetrievalContext = {
  chat_id: string;
  retrieved_sections: {
    checkpoint_id: string;
    raw_transcript: string;            // full-fidelity original
    relevance_score: number;
    token_count: number;
  }[];
  total_tokens: number;
  retrieval_reason: string;            // why this was pulled
};

type ResponseContext = {
  chat_id: string;
  message_id: string;                  // the response being generated
  workspace_tokens: number;            // allocated for this generation
  persisted: boolean;                  // false = discarded after generation
};

type AssembledContext = {
  chat_id: string;
  turn_number: number;
  fixed_items: FixedContextItem[];     // always first
  hot_window: HotWindow;              // always second
  retrieved: RetrievalContext;         // conditional
  response_workspace: ResponseContext; // ephemeral
  total_tokens: number;
  assembly_timestamp: string;
};

// MCP Cross-Platform Types
type FluidContextMCPServer = {
  user_id: string;
  connected_platforms: string[];       // ["claude", "chatgpt", "gemini"]
  active_chats: string[];
  auth_method: 'api_key' | 'oauth';
};

type CrossPlatformContextRequest = {
  user_id: string;
  source_platform: string;
  target_platform: string;
  chat_id: string;
  target_model_token_limit: number;    // assembly adjusts to target
};

Implementation Principles

  1. Fixed context is injected every turn, no exceptions. This is the single most important rule. If the user set instructions, personality, constraints, or decisions — those are sent with every single message. The AI never “forgets” governing facts. This is not optional, not optimizable, not trimmable. If it doesn’t fit, the hot window shrinks before fixed context does.
  2. Hot context is never summarized. The recent conversation window is verbatim, always. No chunking, no compression, no RAG. This is working attention, not memory. The size is configurable but the invariant is absolute: whatever is in the hot window is exactly what was said.
  3. Retrieved context uses original transcripts, never summaries. When Fluid Context pulls archival material, it pulls the raw, full-fidelity text. Summaries guide WHICH sections to retrieve. Summaries never substitute FOR the retrieved content. This is what prevents the degradation chain.
  4. Response context is ephemeral by default. Large generation workspaces (PRDs, reports, codebases) are created per-response and discarded after. They do not pollute future conversational context. They can optionally be checkpointed if the output is worth preserving.
  5. Assembly is ordered by priority, not chronology. The model sees: fixed classes first, then hot window, then retrieved archival, then response workspace. This ensures governing facts have the highest attention weight regardless of conversation length.
  6. Version changes to fixed context require explicit user intent. The system never auto-modifies sticky context. If the user says “change the tone to casual,” a new version is created and the old one is archived. If the user doesn’t say to change it, it doesn’t change. Period.
  7. Cross-platform portability is a first-class requirement. Fluid Context is built as an MCP server from day one. Context is not locked to any AI vendor. The user’s conversation history, instructions, decisions, and personality settings follow them across platforms.
  8. Fluid Context consumes ChatNav, it does not replace it. ChatNav provides the structural index (checkpoints, summaries, session boundaries). Fluid Context uses that index to decide what to retrieve. They are complementary systems, not competing ones.
  9. Token budget allocation must be configurable and inspectable. Different conversations need different proportions of fixed vs hot vs retrieved context. Power users should be able to see (and optionally adjust) how their context budget is allocated. This aligns with the Advanced Settings philosophy from Doc 15.
  10. Every assembly should be reproducible. Given the same conversation state and the same user message, Fluid Context should produce the same assembled context. This is critical for debugging, testing, and building user trust. No non-deterministic behavior in the assembly pipeline.
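Principles 5 and 10 together suggest assembly should be a pure, ordered concatenation. A minimal sketch, assuming plain-string sections and the hypothetical `compileContext` name; section headers here are illustrative, not spec-mandated:

```typescript
// Hypothetical sketch of principles 5 and 10: deterministic,
// priority-ordered compilation into the final inference input.
function compileContext(
  fixed: string[],
  hot: string[],
  retrieved: string[],
): string {
  // Fixed, then hot, then retrieved. No randomness and no timestamps,
  // so the same inputs always compile to the same prompt.
  return [
    "## Fixed context", ...fixed,
    "## Recent conversation", ...hot,
    "## Retrieved history", ...retrieved,
  ].join("\n");
}
```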

Document 19: Fluid UI Architecture

Junior Developer Breakdown

Source: 19. aiConnected OS Fluid UI Architecture.md Created: 2/6/2026 | Updated: 2/6/2026

Why This Document Exists

The Problem (Every AI Interface Is Rigid): Every existing AI interface forces users into fixed modes: you’re either in a chat, or a browser, or a document editor, or a workspace — but never fluidly moving between them. When you switch, you lose context. When you need multiple modalities simultaneously, you’re juggling tabs and copy-pasting between tools. The AI resets every time the interface changes. There’s no persistent intelligence that follows you across activities.

What This Document Solves: The founder designed the Fluid UI — a fundamentally different interaction model where the user’s GOAL drives what appears on screen, interfaces emerge and dissolve as needed, and one persistent cognitive backbone (chat) ties everything together. It’s not a chat app, not a browser, not a workspace — it’s a fluid interaction runtime where everything (chat, browser, document, voice, canvas, IDE, avatar) is a temporary manifestation of the same underlying interaction.

The Defining Statement: “aiConnected is a fluid interaction platform where persistent AI personas act as believable collaborators — operating within explicit skill boundaries — while a continuous chat-based cognitive backbone preserves memory, context, and coordination across any activity the user chooses.”

Cross-References:
  • Doc 15 (Master Spec) → Companion Mode, Persistent Persona Presence, search system
  • Doc 17 (ChatNav) → In-chat navigation lives inside the chat backbone
  • Doc 18 (Fluid Context) → Context assembly system that keeps chat intelligent across all activities
  • Doc 12 (Persona Skill Slots) → Skill constraints that prevent the “all-knowing AI” trap
  • Doc 10 (Computer Use) → Browser and computer use capabilities within the fluid environment
  • Doc 13 (Adaptive UI Tutorials) → Progressive disclosure within the fluid interface

FEATURE 1: Core Philosophy — Fluid Interaction, Not Fixed Interfaces

What it establishes: The foundational design principle that governs every UI decision in aiConnected. aiConnected is NOT a chat app, a browser, or a workspace with modes. It is a fluid interaction environment where:
  • The user’s goal drives what appears
  • Interfaces emerge and dissolve as needed
  • Intelligence adapts continuously
  • Nothing forces the user into predefined workflows
There is no single activity, no required tool, and no required interface. The user might be designing a website, then presenting it to a client in Google Meet, then having a casual voice conversation — all within the same session, with the same Persona remembering everything. Why this hasn’t been done before: Most products pick one axis — browser-first (Atlas), workspace-first (Flowith), or agent-first (Operator). aiConnected unifies all three with chat as the persistent spine. That’s rare because it forces teams to solve state + permissions + reliability all at once. The building blocks exist (agentic computer use, embedded webviews, workspace state, persona orchestration) — the innovation is composing them into a coherent, fluid product.

FEATURE 2: Chat as the Cognitive Backbone (Top of Hierarchy)

What it establishes: Chat is NOT just another component — it sits ABOVE all other components in the system hierarchy. What chat IS:
  • The running interaction log
  • The memory acquisition stream
  • The persona communication layer
  • The artifact registrar (files, decisions, outputs all logged through chat)
  • The reasoning and decision trace
What chat is NOT:
  • The main screen (it can be a full window, sidebar, floating bar, voice indicator, or silent background process)
  • The only interface
  • A dominant visual element
The key invariant: Activities can come and go. Chat NEVER leaves. Even if the screen is fully occupied by a canvas, the user is in voice mode, watching video, gaming, or coding — chat is still logging, remembering, associating, coordinating personas, capturing artifacts, and maintaining continuity.

The “coworker on the line” metaphor: Chat is like a coworker you’re always on the phone with. Sometimes you’re actively talking. Sometimes they’re quietly observing. Sometimes they’re taking notes in the background. But they’re always there, always aware, always ready. The channel is always alive.

Chat embodiment forms:
  • Full chat window
  • Thin sidebar
  • Floating input bar
  • Voice indicator dot
  • Waveform visualization
  • Whisper-style suggestions
  • Silent background cognition
It doesn’t need screen real estate. It needs PRESENCE.

FEATURE 3: Activities — Ephemeral, User-Driven, Unlimited

What it establishes: Activities are what temporarily occupy the screen — they are expressions, not containers. What activities include:
  • File explorer, canvas, image editor, document editor, spreadsheet
  • Browser, IDE, trading charts, video, games
  • Avatar/embodied persona interaction
  • Google Meet, presentations
  • Nothing but conversation (the whole activity IS the chat)
Activity rules:
  • Appear when needed, disappear when not
  • Never own the session
  • Never reset cognition
  • Never break continuity
  • The system never asks “What activity are you in?” — it observes and adapts
The critical principle: The user never “switches tools.” The interaction expands or contracts naturally. When a user goes from chatting → writing a PRD → browsing → presenting → back to chatting, there’s no mode switch. The UI simply reshapes itself around their current need.

FEATURE 4: The Three UI Primitives

What it establishes: The entire Fluid UI can be reduced to three primitives that govern all rendering decisions.

Primitive 1: Conversation State

What the user is trying to accomplish RIGHT NOW. This is the intent layer — everything else serves it.

Primitive 2: View State

How much UI is needed to support that intent RIGHT NOW. The same conversation state can be rendered as full chat, split view, floating bar, or voice-only — the user controls the presentation.

Primitive 3: Capability Boundary

Which Persona + tools are allowed to act. This is where skill constraints, Cipher governance, and permission models enforce safety. Everything else is a rendering decision. The conversation state determines what’s happening. The view state determines how it looks. The capability boundary determines what’s allowed. These three primitives interact to produce the fluid experience.

FEATURE 5: Five Chat View Modes (The Layout Switcher)

What it establishes: When the browser or any activity is active, users control chat’s visual presence through a “Change View” menu.

The Five Modes:

| Mode | Description |
| --- | --- |
| Float Bar (default) | Minimal floating input bar, chat accessible but unobtrusive |
| Icon Only | Chat collapsed to a small icon/indicator, maximum screen for activity |
| Sidebar | Chat pinned as a side panel alongside the active activity |
| 50/50 | Equal split between chat and activity |
| Chat Only | Full screen returns to chat, activity minimized/hidden |
Important rules:
  • Changing the chat view does NOT affect conversation state — the same session continues regardless of layout
  • Web navigation menu buttons remain active and floating at the bottom of the screen when a browser activity is running
  • Users can set the navigation menu to auto-hide after 30+ seconds of inactivity — it reappears on hover
  • Users can optionally minimize browser navigation into a small round button until needed
Design principle: These are PRESENTATION PRESETS over the same state, not mode switches. The user never “leaves” one context to enter another.
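The "presentation presets, not mode switches" rule can be sketched as a pure function over a simplified interaction state. This is a sketch only: the field names loosely follow the `InteractionState` type in the Data Model section, trimmed to what the example needs.

```typescript
// Sketch: changing layout touches only view_state; conversation_state is untouched.
type Layout = 'chat_only' | 'float_bar' | 'icon_only' | 'sidebar' | 'split_50_50';

interface State {
  conversation_state: { active_chat_id: string; intent: string };
  view_state: { layout: Layout };
}

// Changing layout produces a new state whose conversation_state is the SAME
// object reference: a view change cannot mutate conversational data.
function changeView(state: State, layout: Layout): State {
  return { ...state, view_state: { ...state.view_state, layout } };
}

const before: State = {
  conversation_state: { active_chat_id: 'chat-1', intent: 'build landing page' },
  view_state: { layout: 'float_bar' },
};
const after = changeView(before, 'split_50_50');
```

Because the conversation object is reused rather than copied or reset, the session continues identically no matter how many times the layout changes.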

FEATURE 6: Dynamic UI Components in Chat (Micro-to-Macro Interfaces)

What it establishes: Instead of AI returning only text responses, the system can render interactive UI components directly inside the conversation flow — and those components can expand into full application surfaces.
The traditional pattern: Question → Text Answer → Link → Context Switch (user leaves chat to browse).
The aiConnected pattern: Question → Interactive UI Component → Optional Expansion → Same Context (user never leaves).

Example: Pricing Request

User: “What’s the pricing for ABC Company’s service?” Instead of a bullet list with links, the system renders:
  • A 3-card pricing component inline in chat
  • Each card shows plan name, price, key features
  • CTA buttons: “Add to Cart”, “Learn More”, “View Page”
The component is generated dynamically, scoped to the question, aware of user intent, and ephemeral unless pinned.

The Morphing Interface

Clicking “View Page” does NOT open a new tab. Instead:
  • The pricing component expands
  • The page content loads within the same interface
  • Chat shrinks into sidebar/floating/docked mode
  • Navigation becomes lightweight and contextual
This is “promote a micro-interface to a macro-interface” — not “open a browser.” Same session, same memory, same Personas, same Cipher orchestration.

How it’s built: Server-Driven UI

The chat doesn’t render hardcoded components. It renders JSON-defined UI payloads:
{
  "type": "ui_component",
  "component": "pricing_cards",
  "data": {
    "plans": [
      {
        "name": "Starter",
        "price": "$29/mo",
        "features": ["X", "Y", "Z"],
        "actions": ["add_to_cart", "learn_more", "view_page"]
      }
    ]
  }
}
The frontend is a RENDERER, not a decision-maker. Cipher chooses the schema. Personas introduce it. This is the same mental model as artifacts, just generalized to commerce, navigation, and any other interactive need.
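The "frontend as renderer" idea can be sketched as a registry that maps a component name from the payload to a registered render function. This is a hypothetical sketch: the `pricing_cards` renderer and its string output are illustrative, not the production schema, and a real frontend would emit a UI tree rather than text.

```typescript
// Sketch of a schema-driven renderer: the frontend maps a component name to a
// registered render function and never decides WHAT to render, only HOW.
type Payload = { type: 'ui_component'; component: string; data: Record<string, unknown> };
type Renderer = (data: Record<string, unknown>) => string;

const registry: Record<string, Renderer> = {
  // Hypothetical pricing-card renderer, matching the JSON payload shape above.
  pricing_cards: (data) =>
    (data.plans as { name: string; price: string }[])
      .map((p) => `[${p.name} ${p.price}]`)
      .join(' '),
};

function render(payload: Payload): string {
  const fn = registry[payload.component];
  // Unknown component types degrade gracefully instead of breaking the chat flow.
  return fn ? fn(payload.data) : '(unsupported component)';
}

const out = render({
  type: 'ui_component',
  component: 'pricing_cards',
  data: { plans: [{ name: 'Starter', price: '$29/mo' }] },
});
```

New component types are added by registering a renderer, which is what lets the backend introduce new schemas without shipping a new frontend.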

Component Schema Registry

A library of UI schemas (each with required data fields, optional enhancements, and multiple render sizes):
  • Pricing table, comparison grid, calendar picker
  • Checkout card, spec sheet, FAQ accordion
  • Timeline, checklist, dashboard
  • And extensible to new component types over time

Progressive Disclosure Rules

Every component supports compact → expanded → full-page modes. The transition is animated, not jarring. The user never feels like they “left” something — they feel like something GREW.
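The compact → expanded → full-page progression can be sketched as an ordered state transition, using the `render_size` values from the `UIComponent` type below. The function names are illustrative assumptions.

```typescript
// Sketch: disclosure is a reversible step along a fixed ordering of sizes,
// so "expanding" is a state transition, not a navigation event.
type RenderSize = 'compact' | 'expanded' | 'full_page';

const order: RenderSize[] = ['compact', 'expanded', 'full_page'];

// Grow one step; an already-full component stays full (the user never "leaves").
function grow(size: RenderSize): RenderSize {
  const i = order.indexOf(size);
  return order[Math.min(i + 1, order.length - 1)];
}

// Shrink is the exact reverse, which keeps every disclosure step reversible.
function shrink(size: RenderSize): RenderSize {
  const i = order.indexOf(size);
  return order[Math.max(i - 1, 0)];
}
```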

FEATURE 7: Personas as Persistent Collaborators

What it establishes: Personas are NOT tools, UI elements, or per-project assistants. They are long-lived, relationship-based, memory-bearing, role-aware participants in the interaction. Persona properties:
  • Do not reset per project — Sally learns how the user works over time
  • Adapt within constraints (skill slots)
  • Can be foreground or background
  • Can act silently or conversationally
  • Are participants in the interaction, not UI elements
Skill constraints within the Fluid UI:
  • Each Persona has a finite skill capacity (e.g., 10 skills)
  • Skills are explicit and scoped
  • Personas must acknowledge when something is outside their expertise
  • Learning consumes capacity unless marked temporary
  • Personas can: (1) perform the task, (2) learn temporarily, (3) suggest creating a specialist Persona
Temporary vs Permanent Learning:
  • Temporary skill: task-scoped, auto-expires, no identity drift
  • Permanent skill: consumes a slot, changes future behavior
  • New Persona: clean specialization, no contamination
  • The user always decides
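The skill-slot constraint can be sketched as simple capacity accounting, assuming the "e.g., 10 skills" capacity mentioned above. The names and return values are illustrative: the point is that temporary skills never consume a slot, and a full Persona offers options rather than silently absorbing more.

```typescript
// Sketch of skill-slot accounting for a Persona.
interface Skill { name: string; permanent: boolean }

const MAX_SLOTS = 10; // assumed capacity, from the "e.g., 10 skills" example above

type LearnResult = 'learned' | 'at_capacity_suggest_specialist';

function learn(skills: Skill[], skill: Skill): LearnResult {
  // Only permanent skills count against capacity; temporary ones auto-expire.
  const used = skills.filter((s) => s.permanent).length;
  if (skill.permanent && used >= MAX_SLOTS) return 'at_capacity_suggest_specialist';
  skills.push(skill);
  return 'learned';
}
```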
The human parallel: No one expects a new employee or friend to be perfect at everything. aiConnected’s design never invites that expectation. From day one, the user knows who Sally is, what she does, what she doesn’t do, and when to bring in Sam.

FEATURE 8: User Intensity Spectrum — Casual to Power User

What it establishes: The same platform adapts to how intensely the user wants to engage.

Casual Users:

  • Minimal setup, few visible controls
  • One or two Personas
  • Fluid, adaptive behavior
  • Low cognitive overhead
  • May never see skill slots, memory management, or team configuration

Power Users:

  • Formal digital teams with strict role separation
  • Explicit control over skills, learning, permissions, memory
  • Personas behave like siloed employees
  • Full visibility into model assignments, behavioral templates, audit trails
Same platform. Different exposure and control. The Adaptive Guidance Layer (Doc 13) handles the progressive reveal. Power features exist from day one but are hidden until the user is ready.

FEATURE 9: Cipher Containment — The Invisible God Layer

What it establishes: Cipher is the unrestricted intelligence layer that powers everything — but users NEVER interact with it directly. Cipher’s role in the Fluid UI:
  • Interprets user intent
  • Selects which Persona responds
  • Selects which tools are available
  • Determines what UI complexity is allowed
  • Resolves interaction state changes (view transitions, activity emergence)
  • Validates Persona scope and skill additions
  • Enforces safety, permissions, and capability boundaries
  • Coordinates background agents
  • Decides memory permanence
The absolute rule: Cipher can ONLY act through Personas. It can never bypass them. Even if Cipher “knows” something, it must be filtered through Persona scope, respect skill limits, respect learning consent, and respect refusal logic. Cipher has no mouth — Personas are the mouth.
Why this matters for the Fluid UI specifically:
  • Users don’t demand omniscience because they interact with bounded Personas
  • Jailbreak attempts fail because there’s no direct access to Cipher
  • Regulatory risk is minimized (“role-based digital collaborators with explicit constraints” vs “public access to a god-model”)
  • The UI never exposes raw capability — only curated, Persona-mediated experiences
Power users still don’t get Cipher. Even the most advanced users building teams, orchestrating workflows, and running complex projects are only configuring Personas, assigning scopes, approving learning, and managing memory. They are never upgrading intelligence, only rearranging roles.

FEATURE 10: The Universal User Journey (Use-Case Agnostic)

What it establishes: The Fluid UI works identically regardless of whether the user is a web designer, a companionship seeker, a business operator, or anything else.

Phase 1: Entry — Presence Before Purpose

User enters the platform. They are NOT asked what they want to build, what tool they need, or what mode they’re in. They are given a presence, a voice, and an intelligence that listens.

Phase 2: Persona Formation (Optional but Central)

User may talk to a default intelligence or create a Persona. The Persona starts with a role hypothesis, a personality shape, and a skill profile — but does NOT start with assumptions about why it exists. That emerges through interaction.

Phase 3: Activity Emergence (Not Selection)

Activities emerge from behavior, not from menus. The system observes and adapts. Designing pages, talking through feelings, mind mapping, presenting to clients, sitting silently together, voice-only check-ins, canvas journaling — the system never asks “what activity are you in?”

Phase 4: Continuous Interaction Spine

Across ALL use cases, the chat/voice/presence layer never stops. Personas never reset. Memory accumulates. Artifacts are logged quietly. Context compounds. This is what allows TIME to matter.

Phase 5: Longitudinal Learning

Over weeks and months, Personas learn how the user works, how they communicate, when to speak, when to stay quiet, what support looks like for THIS person. This applies equally to professional efficiency, emotional attunement, companionship, guidance, and co-creation. Same mechanism — different expression.

Phase 6: Session End → Continuity

User closes their laptop. Everything persists. Sally remembers how you work. Sam remembers tone preferences. The interaction history is intact. Next time: “Hey Sally, let’s continue that law firm site.” No re-explaining. No re-loading context.

FEATURE 11: Feasibility Assessment and Build Path

What it establishes: This is buildable — not as one monolithic invention, but as a composition of existing building blocks assembled in a new way.

Why it’s feasible:

  • Agents can already operate UIs (OpenAI Operator, computer use tool loops)
  • “AI browser” patterns are becoming mainstream (Atlas, Opera Neon)
  • Embedded webviews are well-understood technology
  • Server-driven UI is a proven pattern (used by every major mobile app)
  • The primitives exist — the innovation is the composition

The build path (core runtime first, adapters second):

Step 1: Ship with 2-3 activities
  • Chat/ledger view (full + compact + voice indicator)
  • Document view (PRDs, notes)
  • Web view (embedded)
  • That alone gets 80% of the “fluidity” feeling
Step 2: Add “computer use” as an activity capability
  • Observe screen, click/type/scroll
  • Covers everything that lacks APIs
  • Future-proof general capability
Step 3: Layer in power-user controls
  • Persona teams, skill limits, learning permanence
  • Permissions and audit trail
  • Casual users never see most of it

The hardest parts:

  • Reliability in dynamic UIs (selectors break in SPAs — solution: DOM access + screenshot fallback)
  • Permissions + privacy (clear “what the Persona can see/do” boundaries per activity)
  • Avoiding hallucinations in action (solved by skill caps, “I’m not specialized” behavior, artifact provenance)

Non-negotiable constraints that prevent chaos:

  1. UI only appears when intent justifies it
  2. Personas must explain UI changes
  3. Components are limited and opinionated
  4. Everything is reversible
  5. Nothing steals focus without consent
Fluid does not mean chaotic. Fluid means responsive.

Data Model

type InteractionState = {
  id: string;
  session_id: string;
  user_id: string;
  conversation_state: {
    active_chat_id: string;
    active_persona_ids: string[];
    intent: string;                    // current high-level goal
    mode: 'text' | 'voice' | 'ambient' | 'silent';
  };
  view_state: {
    layout: 'chat_only' | 'float_bar' | 'icon_only' | 'sidebar' | 'split_50_50';
    active_activity?: ActivitySurface;
    nav_visibility: 'visible' | 'auto_hide' | 'minimized';
    nav_auto_hide_seconds?: number;    // default 30
  };
  capability_boundary: {
    active_persona_skills: string[];
    allowed_tools: string[];
    cipher_directives: string[];       // internal, never exposed
  };
};

type ActivitySurface = {
  id: string;
  type: 'browser' | 'document' | 'canvas' | 'code_editor' | 'spreadsheet' |
        'file_explorer' | 'image_editor' | 'video' | 'meeting' | 'custom';
  title: string;
  url?: string;                        // for browser activities
  state: Record<string, any>;          // activity-specific state
  created_at: string;
  last_active_at: string;
};

type UIComponent = {
  id: string;
  chat_id: string;
  message_id: string;
  component_type: string;              // "pricing_cards", "comparison_grid", etc.
  schema: string;                      // from Component Schema Registry
  data: Record<string, any>;           // JSON payload for rendering
  actions: UIAction[];
  render_size: 'compact' | 'expanded' | 'full_page';
  ephemeral: boolean;                  // true = disappears after use
  pinned: boolean;                     // user can pin to keep
};

type UIAction = {
  id: string;
  label: string;                       // "Add to Cart", "View Page", "Learn More"
  action_type: 'navigate' | 'expand' | 'api_call' | 'chat_command' | 'external';
  target?: string;                     // URL, activity_id, or command
};

type InteractionLedgerEntry = {
  id: string;
  session_id: string;
  timestamp: string;
  entry_type: 'user_message' | 'ai_response' | 'activity_change' | 
              'view_change' | 'artifact_created' | 'file_uploaded' |
              'persona_action' | 'ui_component_rendered' | 'decision_made';
  persona_id?: string;
  activity_id?: string;
  content: string;
  metadata: Record<string, any>;
};

Implementation Principles

  1. Chat is the spine — everything else is optional. The interaction ledger (chat) is the only component that never resets, never disappears, and never loses state. Activities, views, and UI components come and go. Chat persists.
  2. View changes are NOT mode switches. Changing from sidebar to 50/50 to float bar does not change the conversation, the active Persona, the memory, or any state. It only changes the visual presentation. The user must feel this — transitions should be animated and seamless, never jarring.
  3. Activities emerge, they are not selected. The system observes user behavior and adapts the interface accordingly. If the user starts talking about code, an IDE might emerge. If they reference a website, a browser panel might appear. The system suggests — the user confirms.
  4. Cipher governs but never appears. Every UI decision — which component to render, which layout to suggest, which Persona responds — is ultimately orchestrated by Cipher. But users never see Cipher, never address Cipher, and never know Cipher is making decisions. Personas are the visible interface.
  5. Dynamic UI components are JSON-driven. The backend sends structured payloads; the frontend renders them. This means new component types can be added without app updates, layouts can change server-side, and Cipher maintains control over what gets rendered.
  6. Progressive disclosure, not progressive complexity. Every component supports compact → expanded → full-page modes. The user feels like something GREW, not that they navigated to a new place. Transitions are animated. Nothing is jarring.
  7. Fluid does not mean chaotic. Five non-negotiable constraints prevent chaos: (1) UI only appears when intent justifies it, (2) Personas explain UI changes, (3) components are limited and opinionated, (4) everything is reversible, (5) nothing steals focus without consent.
  8. Build like a game engine: core runtime first, adapters second. Ship with chat + document + web view. Add computer use as a general capability. Layer in power-user controls later. Don’t try to support every possible activity surface on day one.
  9. The interaction ledger captures everything. Every user message, AI response, activity change, view change, artifact creation, file upload, Persona action, and decision is logged in the ledger. The user doesn’t manage this — it happens automatically. This is what makes continuity possible across sessions, days, and months.
  10. Use-case agnostic by design. The same system supports professional workflows, companionship, emotional support, creative exploration, and casual conversation. The difference is Persona configuration and skill scope — not the platform itself. Never build features that assume a specific use case.

Document 20: Extensible AI Capability System

Junior Developer Breakdown

Source: 20. aiConnected OS Extensible AI Capability System.md Created: 2/9/2026 | Updated: 2/9/2026

Why This Document Exists

The Problem (AI Systems Are Either Shallow or Closed): Amazon Alexa covers ~1,000 domains of knowledge (weather, timers, music, smart home, shopping, etc.), but each domain is hardcoded, shallow, and cannot reason across boundaries. Meanwhile, AI platforms like ChatGPT and Claude are deep reasoners but have no structured capability system — they can’t reliably execute real-world actions across domains. Automation platforms like n8n, Zapier, and Make provide execution but require manual wiring and have no intelligence, no memory, and no ability to choose between competing approaches.
What This Document Solves: The founder designed the Extensible AI Capability System — a platform-level architecture that allows DEVELOPERS to expand aiConnected’s functional breadth across unlimited domains, while the core AI handles intent resolution, capability selection, cross-domain orchestration, and learning from outcomes. It’s not Alexa’s rigid routing, not MCP’s stateless tool calling, and not Zapier’s manual wiring — it’s a governed, persistent, competitive capability marketplace.
The Key Insight: “You do NOT create 1,000 domains yourself. You provide a canonical domain ontology, a registration and expansion mechanism, and a scoring/arbitration system. Developers fill the rest.”
Cross-References:
  • Doc 15 (Master Spec) → Agentic Teams, multi-level capability hierarchy, global capability library
  • Doc 19 (Fluid UI) → Cipher orchestration layer that routes intent to capabilities
  • Doc 12 (Persona Skill Slots) → Persona capabilities are the user-facing expression of domain capabilities
  • Doc 10 (Computer Use) → Computer use as one type of capability within the fabric
  • Doc 16 (Enterprise) → Enterprise use cases as natural extensions of domain coverage

FEATURE 1: Core Concept — What This System Actually Is

What it is: A platform capability (not a UI feature) that combines an extensible domain taxonomy, a developer execution model, a capability registration system, and a runtime routing and arbitration layer.
What it is NOT:
  • Not a single feature in the UI
  • Not a chatbot skill system (Alexa-style)
  • Not a plugin marketplace (though it has marketplace properties)
  • Not an MCP implementation (though MCP can be used internally)
The right primitive — Domain Capability Modules (DCMs): A DCM is a self-describing, executable unit that declares:
  • What domain it operates in
  • What intents it handles
  • What actions it can execute
  • What data sources it needs
  • What permissions it requires
  • How confident it is for a given request
This is NOT an LLM prompt and NOT a UI widget. It is a CAPABILITY CONTRACT.
The core architecture flow:
User Input
  ↓
Intent & Context Analyzer (Core AI)
  ↓
Domain Resolver
  ↓
Capability Arbitration Layer
  ↓
Selected Domain Capability Module(s)
  ↓
Execution + Feedback
  ↓
Memory / Learning Loop
Key: Multiple modules may COMPETE for the same intent, and the system chooses the best one.

FEATURE 2: Alexa’s Domains Reframed — What aiConnected Actually Replicates

What it establishes: A precise understanding of what Alexa’s “1,000 domains” actually are and what aiConnected takes from that model.
What Alexa’s domains really are: NOT abilities. They are routing categories — labels that answer “which subsystem should receive this request?” Alexa does not reason across domains, does not choose between competing implementations, and does not learn which domain works better for a specific user. She’s a voice-controlled menu, not an intelligence.
What aiConnected replicates: Alexa’s COVERAGE model — “No matter what a user asks, the system knows WHERE it belongs.” The difference: Alexa hardcodes those domains; aiConnected makes them open and expandable by developers.
The critical bridge from “domains” to “capabilities”:
| Step | What Happens |
| --- | --- |
| Step 1 | Domains stay dumb — just labels (Scheduling, Messaging, Finance) |
| Step 2 | Capabilities are registered UNDER domains — human-built execution logic (“Create Google Calendar event”, “Send invoice via Stripe”) |
| Step 3 | The AI does NOT invent workflows — it answers “Which known capability should handle this request?” |
The AI does SELECTION, not creation. That’s the bridge from Alexa-level routing to aiConnected-level intelligence.

FEATURE 3: The Domain Ontology — Covering 1,000+ Domains Without Building Them

What it establishes: The hierarchical, flexible domain tree that allows organic growth to unlimited domains. Structure:
Root
 ├─ Information (General Knowledge, Research, News, Education)
 ├─ Utilities (Time, Calculations, Conversions, Scheduling)
 ├─ Communication (Messaging, Email, Voice, CRM)
 ├─ Commerce (Shopping, Payments, Invoicing, Subscriptions)
 ├─ Smart Systems (IoT, Home, Vehicles, Robotics)
 ├─ Health (restricted)
 ├─ Finance (restricted)
 ├─ Legal (restricted)
 └─ Creative
Properties:
  • Each node is addressable, versioned, and extendable
  • ~50 top-level domains, ~200 mid-level, 1,000+ leaf domains organically
  • Developers don’t “add domains” arbitrarily — they register capabilities UNDER existing domains
  • New domains can be proposed through a governance process
Example developer registration:
Domain: Utilities.Time
Intents: [set_timer, cancel_timer, query_timer]
Actions: [create_timer(duration), list_timers()]
Confidence Model: high if duration explicit, medium if inferred
Permissions: [local_time_access]
Another developer could register under the same domain with different capabilities (focus_session, start_pomodoro) — both coexist and compete.
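The two competing registrations can be sketched against a simplified module shape. The field names follow the `DomainCapabilityModule` type in the Data Model section, trimmed for the example; the second module's values are illustrative assumptions.

```typescript
// Sketch: two developers register under the SAME domain with different
// capabilities. Both coexist; arbitration later chooses between them per request.
interface CapabilityModule {
  domain_path: string;
  intents: string[];
  actions: string[];
  permissions_required: string[];
}

// The timer registration from the example above.
const timerModule: CapabilityModule = {
  domain_path: 'Utilities.Time',
  intents: ['set_timer', 'cancel_timer', 'query_timer'],
  actions: ['create_timer', 'list_timers'],
  permissions_required: ['local_time_access'],
};

// A hypothetical competing module in the same domain.
const focusModule: CapabilityModule = {
  domain_path: 'Utilities.Time',
  intents: ['focus_session', 'start_pomodoro'],
  actions: ['create_focus_session'],
  permissions_required: ['local_time_access', 'notification_control'],
};
```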

FEATURE 4: The Capability Arbitration Layer — Runtime Intelligence

What it establishes: The mechanism that makes this more than a routing table — it’s a competitive capability marketplace at runtime. How it works: When a user says “Set a 25-minute focus session and don’t let notifications through,” the system:
  1. Identifies relevant domains: Utilities.Time + System.Control
  2. Finds ALL registered modules in those domains
  3. Scores them on: intent match, context relevance, user history, trust level, developer reliability
  4. Either selects one module OR orchestrates multiple modules together
Why this beats Alexa: Alexa hardcodes domain ownership. aiConnected allows domain competition — multiple developers can build capabilities for the same intent, and the best one wins for each user at each moment. Where “learning” comes from — not intelligence, statistics:
  • Capability A worked 92% of the time → preferred
  • Capability B failed 40% of the time → deprioritized
  • User historically preferred A → weighted higher
  • That’s routing optimization, not AI magic
The non-negotiable requirement: The core AI must NEVER hard-bind itself to a domain. It must always ask: “Who can do this best right now?”
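The scoring step above can be sketched as a weighted sum over the five listed factors. The weights here are illustrative assumptions (the source specifies the factors, not their weighting), and `arbitrate` assumes at least one candidate.

```typescript
// Sketch: arbitration as "best weighted score wins right now".
interface Factors {
  intent_match: number;          // 0..1
  context_relevance: number;     // 0..1
  user_history: number;          // 0..1
  trust_level: number;           // 0..1
  developer_reliability: number; // 0..1
}

// Assumed weights; the real system would tune these from outcome statistics.
const WEIGHTS: Factors = {
  intent_match: 0.35,
  context_relevance: 0.25,
  user_history: 0.2,
  trust_level: 0.1,
  developer_reliability: 0.1,
};

function score(f: Factors): number {
  return (Object.keys(WEIGHTS) as (keyof Factors)[])
    .reduce((sum, k) => sum + WEIGHTS[k] * f[k], 0);
}

// Pick the highest-scoring candidate; ties resolve to the earlier one.
function arbitrate<T extends { factors: Factors }>(candidates: T[]): T {
  return candidates.reduce((best, c) => (score(c.factors) > score(best.factors) ? c : best));
}
```

Because user history and trust feed the score, the same intent can route to different modules for different users — the "routing optimization, not AI magic" point above.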

FEATURE 5: Comparison to MCP, Zapier, and Existing Systems

What it establishes: Precise positioning against every system people will compare aiConnected to.

vs Alexa

| Alexa | aiConnected |
| --- | --- |
| Hardcoded domains | Discoverable & expandable domains |
| Shallow execution | Multiple possible execution paths |
| No cross-domain cooperation | System orchestrates across domains |
| No learning | Remembers what worked |
| Voice-controlled menu | Intent-driven intelligence |

vs Zapier/n8n/Make

| Automation Platforms | aiConnected |
| --- | --- |
| Trigger → Action pipelines | Intent → Capability selection → Execution |
| Manually wired | Developers register, system selects |
| No understanding of intent | AI classifies and routes |
| No decision-making | Competitive arbitration |
| No memory of outcomes | Learning from success/failure |
| User-maintained | Self-optimizing |

vs Claude MCP

| MCP | aiConnected DCF |
| --- | --- |
| Tool discovery & invocation | Domain ontology + intent resolution + capability competition + orchestration + persistent memory |
| Stateless (each call isolated) | Persistent (past success/failure affects routing) |
| Tools (“call this function”) | Capabilities (confidence, permissions, history, scope, reputation) |
| No competition between tools | Competitive arbitration — best fit wins |
| Flat tool space | Hierarchical, addressable domain tree |
| External tooling | OS-level authority (can alter UI, manage agents, change workflows) |
The analogy: If MCP is USB-C for AI tools, aiConnected’s DCF is the kernel scheduler + driver model for an AI OS. MCP = device interface. DCF = capability governance. They are COMPLEMENTARY — a Domain Capability Module could internally use MCP tools, REST APIs, n8n workflows, or anything else. MCP becomes an implementation detail, not the architecture.
The litmus test: If two developers both build “Weather” capabilities, which one does the system trust for this user right now? MCP has no answer. aiConnected’s architecture does.

FEATURE 6: AI-Generated Workflows — What the AI Can and Cannot Do

What it establishes: Clear boundaries on AI autonomy within the capability system. What AI CAN do:
  • Generate workflow suggestions mid-conversation
  • Propose automations dynamically based on observed patterns
  • Select between pre-registered capabilities
  • Coordinate multiple capabilities for complex requests
  • Learn from outcomes to improve future selection
What AI CANNOT do:
  • Invent new capabilities from scratch
  • Create credentials or authentication
  • Own irreversible actions by default
  • Execute without registered capability contracts
  • Bypass permission boundaries
The clean mental model:
  • AI = planner
  • Workflow engine = executor
  • Capabilities = guardrails
When those roles stay separate, the system works. When they blur, things get dangerous. The founder’s explicit constraint: The AI never “figures out” how to do something new. Instead, it answers one question: “Which KNOWN capability should handle this request?” That’s a routing and arbitration problem, not a superintelligence problem.
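The planner/executor/guardrail separation can be sketched as a gate that the executor applies to every proposed action. The registry contents and verdict names are illustrative; the `reversible` and `requires_confirmation` ideas come from the `Action` type in the Data Model section.

```typescript
// Sketch: the AI proposes action names; only registered contracts can execute,
// and irreversible actions need explicit user confirmation first.
interface RegisteredAction { name: string; reversible: boolean }

const actionRegistry = new Map<string, RegisteredAction>([
  ['create_timer', { name: 'create_timer', reversible: true }],
  ['send_invoice', { name: 'send_invoice', reversible: false }],
]);

type Verdict = 'execute' | 'needs_confirmation' | 'rejected_unregistered';

function gate(actionName: string, userConfirmed: boolean): Verdict {
  const action = actionRegistry.get(actionName);
  if (!action) return 'rejected_unregistered';      // no contract, no execution
  if (!action.reversible && !userConfirmed) return 'needs_confirmation';
  return 'execute';
}
```

Note that even a confident planner cannot pass the gate with an invented action name, which is exactly the "selection, not creation" boundary.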

FEATURE 7: The Global Capability Library — Exponential Platform Scaling

What it establishes: How individual user training creates platform-wide intelligence. The compounding mechanism:
  1. User completes a task using a capability
  2. User provides a rating
  3. If rating exceeds threshold (e.g., ≥90%), the capability becomes a stored global skill
  4. Future users benefit from that capability without retraining
  5. More users → more capabilities → fewer training cycles → faster results → more users
Public vs Private capabilities:
  • Public: General skills useful to everyone (email copywriting, site building, research, scheduling, content generation, SEO)
  • Private: Proprietary processes (custom CRM structures, internal SOPs, confidential financial models, company-specific onboarding flows)
Quality thresholds:
  • ≥90% user satisfaction → eligible for global capability storage
  • ≥80% but <90% → stored in user’s private library only
  • <80% → not stored as a capability
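The threshold rules above reduce to a small tiering function. This is a minimal sketch of that decision, with satisfaction expressed as a 0–1 fraction; the tier names are illustrative.

```typescript
// Sketch of the storage decision from the 90% / 80% quality thresholds above.
type StorageTier = 'global_library' | 'private_library' | 'not_stored';

function storageTier(satisfaction: number): StorageTier {
  if (satisfaction >= 0.9) return 'global_library';   // eligible for global storage
  if (satisfaction >= 0.8) return 'private_library';  // user's private library only
  return 'not_stored';                                // below the quality bar
}
```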
What this creates: A self-improving but NOT self-modifying system. It learns, improves, grows, accumulates skills, avoids repeating work, and becomes faster and more powerful — but never rewrites itself, evolves outside tasks, gains open-ended autonomy, or becomes unpredictable.
Multi-level capability hierarchy:
  • Task-level capabilities (individual operations)
  • Project-level capabilities (coordinated multi-task workflows)
  • Campaign-level capabilities (strategic multi-project orchestration)
  • Higher levels require higher validation thresholds (task=90%, project=92-93%, campaign=95%+)
  • Lower levels feed higher levels automatically

FEATURE 8: Investor and Market Positioning

What it establishes: How to explain aiConnected’s value to investors, customers, and the market.
What aiConnected actually is (for investors): “The first system that turns AI from a talking tool into an operating layer that actually runs things — and gets better the longer you use it.”
What makes it different from Alexa, ChatGPT, or “AI assistants”:
  • Alexa can set a timer but can’t run your business
  • ChatGPT can explain things but can’t operate systems
  • Enterprise tools automate one narrow workflow
  • aiConnected understands intent (not commands), coordinates many systems at once, learns preferences over time, and improves decisions based on outcomes
What people are paying for:
  • Time returned (fewer decisions, fewer steps, less mental overhead)
  • Consistency (things done the same way every time, no dropped balls)
  • Leverage (one person operates like five, a small team competes with a big one)
  • Continuity (the system remembers, staff can change, knowledge doesn’t disappear)
The moat:
  • The system learns each user
  • The system remembers what works
  • The system coordinates across domains
  • That knowledge cannot be copied quickly — it is earned over time
  • This is not a feature race. It’s an experience accumulation race.
Why developers matter (in investor terms): Instead of one company building everything, developers add specialized abilities, improve existing ones, and compete to be the best at a task. The platform decides who performs best for each user. This creates faster innovation, better results, and no single point of failure. It’s closer to a marketplace than a product.

FEATURE 9: Naming and Developer-Facing Language

What it establishes: Consistent terminology for internal, developer, and marketing contexts.
| Context | Name |
| --- | --- |
| Internal architecture | Domain Capability Fabric (DCF) |
| Developer-facing | aiConnected Capability SDK |
| Marketing | “Unlimited Domains. One Intelligence.” |
| Individual unit | Domain Capability Module (DCM) |
| Selection engine | Capability Arbitration Layer |
| Domain structure | Domain Ontology |

FEATURE 10: What NOT to Build First

What it establishes: The minimum viable version and what comes later. The smallest version that clearly improves on Alexa:
  • 20-30 core domains
  • Clear developer registration process
  • Visible domain selection
  • Transparent execution
  • Basic outcome tracking
What comes later (not day one):
  • Full competitive arbitration between thousands of modules
  • Cross-domain orchestration for complex multi-step workflows
  • Global capability library with quality gates
  • Developer marketplace with ratings and revenue sharing
  • Campaign-level capability composition
The build sequence aligns with the overall platform phases from Doc 16:
  • Phase 1: Core product with built-in capabilities for power users
  • Phase 2: Developer SDK for capability registration
  • Phase 3: Arbitration and competition between capabilities
  • Phase 4: Global library, marketplace, and enterprise deployment

Data Model

type DomainNode = {
  id: string;
  path: string;                        // e.g., "Utilities.Time"
  parent_id?: string;
  name: string;
  description: string;
  restricted: boolean;                 // Health, Finance, Legal = restricted
  version: number;
  children: string[];                  // child domain IDs
};

type DomainCapabilityModule = {
  id: string;
  domain_path: string;                 // which domain this serves
  developer_id: string;
  name: string;
  description: string;
  intents: Intent[];                   // what user intents this handles
  actions: Action[];                   // what it can execute
  permissions_required: string[];
  confidence_model: ConfidenceRule[];  // when is this module high/low confidence
  version: string;
  status: 'active' | 'deprecated' | 'under_review';
  trust_score: number;                 // 0-100, based on historical performance
  created_at: string;
  updated_at: string;
};

type Intent = {
  name: string;                        // e.g., "set_timer", "focus_session"
  description: string;
  examples: string[];                  // example user utterances
};

type Action = {
  name: string;                        // e.g., "create_timer"
  parameters: Parameter[];
  execution_type: 'mcp' | 'rest_api' | 'n8n_workflow' | 'local' | 'agent';
  reversible: boolean;                 // can this action be undone?
  requires_confirmation: boolean;      // must user confirm before execution?
};

type Parameter = {
  name: string;
  type: string;
  required: boolean;
  description: string;
};

type ConfidenceRule = {
  condition: string;                   // e.g., "duration explicitly stated"
  confidence: 'high' | 'medium' | 'low';
};
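
To make the module contract concrete, here is an illustrative registration for the "Utilities.Time" domain mentioned above. Every value (IDs, names, trust score, dates) is hypothetical; the object simply follows the DomainCapabilityModule shape, showing how intents, actions, permissions, and confidence rules fit together in one declaration.

```typescript
// Hypothetical DCM registration for the "Utilities.Time" domain.
// Field values are illustrative, not real platform data.
const timerModule = {
  id: "dcm_timer_v1",
  domain_path: "Utilities.Time",
  developer_id: "dev_example",
  name: "Timer Pro",
  description: "Creates and manages countdown timers.",
  intents: [
    {
      name: "set_timer",
      description: "User wants a countdown timer",
      examples: ["set a timer for 10 minutes", "start a 25 minute focus timer"],
    },
  ],
  actions: [
    {
      name: "create_timer",
      parameters: [
        { name: "duration_s", type: "number", required: true, description: "Timer length in seconds" },
      ],
      execution_type: "local",
      reversible: true,             // a running timer can be cancelled
      requires_confirmation: false, // low-stakes, no confirmation needed
    },
  ],
  permissions_required: ["notifications"],
  confidence_model: [
    { condition: "duration explicitly stated", confidence: "high" },
    { condition: "duration missing or ambiguous", confidence: "low" },
  ],
  version: "1.0.0",
  status: "active",
  trust_score: 87,
  created_at: "2026-04-01T00:00:00Z",
  updated_at: "2026-04-01T00:00:00Z",
};
```

Note how the contract is enforceable: the module declares `notifications` as its only permission and `create_timer` as its only action, so the system can reject any attempt to execute outside that declared scope.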

type ArbitrationResult = {
  request_id: string;
  user_id: string;
  intent: string;
  domains_identified: string[];
  candidates: {
    module_id: string;
    score: number;
    factors: {
      intent_match: number;
      context_relevance: number;
      user_history: number;
      trust_level: number;
      developer_reliability: number;
    };
  }[];
  selected_module_ids: string[];       // may be multiple for orchestration
  orchestration_plan?: string;         // if multiple modules coordinated
  timestamp: string;
};

type CapabilityOutcome = {
  id: string;
  arbitration_result_id: string;
  module_id: string;
  success: boolean;
  user_rating?: number;                // 1-100
  execution_time_ms: number;
  error?: string;
  stored_as_global: boolean;           // if rating >= threshold
  timestamp: string;
};

type GlobalCapability = {
  id: string;
  source_module_id: string;
  domain_path: string;
  capability_level: 'task' | 'project' | 'campaign';
  average_rating: number;
  total_executions: number;
  success_rate: number;
  is_public: boolean;                  // false = proprietary to one user/org
  created_at: string;
};
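
The ArbitrationResult type records five scoring factors per candidate. A minimal sketch of how the Capability Arbitration Layer could combine them is a weighted sum; the weights below are illustrative assumptions (the document says real routing would be tuned on tracked CapabilityOutcome data, not fixed by hand):

```typescript
// Factors mirror ArbitrationResult.candidates[].factors, normalized to 0-1.
type ScoringFactors = {
  intent_match: number;          // how well declared intents match the utterance
  context_relevance: number;     // fit with the user's current context
  user_history: number;          // past preference for this module
  trust_level: number;           // normalized module trust_score
  developer_reliability: number; // developer-level track record
};

// Hypothetical weights; in practice these would be tuned on outcome data.
const WEIGHTS: ScoringFactors = {
  intent_match: 0.35,
  context_relevance: 0.25,
  user_history: 0.2,
  trust_level: 0.1,
  developer_reliability: 0.1,
};

function scoreCandidate(factors: ScoringFactors): number {
  return (Object.keys(WEIGHTS) as (keyof ScoringFactors)[]).reduce(
    (sum, k) => sum + WEIGHTS[k] * factors[k],
    0,
  );
}

// Pick the highest-scoring module for a single-domain request.
function selectModule(
  candidates: { module_id: string; factors: ScoringFactors }[],
): { module_id: string; score: number } | undefined {
  return candidates
    .map((c) => ({ module_id: c.module_id, score: scoreCandidate(c.factors) }))
    .sort((a, b) => b.score - a.score)[0];
}
```

For orchestrated requests the layer would run this per identified domain and return multiple `selected_module_ids`, but the per-candidate scoring step is the same.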

Implementation Principles

  1. Developers register capabilities, the AI selects them. The AI never invents new execution logic. It evaluates registered capabilities against user intent and chooses the best match. This is routing optimization, not superintelligence.
  2. Domain Capability Modules are contracts, not prompts. Each DCM declares what it does, what it needs, what permissions it requires, and how confident it is. The system enforces these contracts. Developers cannot register capabilities that exceed their declared scope.
  3. Competition improves quality. Multiple developers can register capabilities for the same domain and intent. The arbitration layer scores them and selects the best for each user at each moment. This creates natural quality pressure without central curation.
  4. MCP is an implementation detail, not the architecture. A DCM can internally use MCP tools, REST APIs, n8n workflows, local executables, or agent swarms. The arbitration layer doesn’t care about implementation — it cares about declared intent coverage, historical performance, and domain alignment.
  5. Learning comes from outcomes, not from AI reasoning. The system records which capabilities succeeded, which failed, which users preferred, and which had highest satisfaction. Future routing is informed by this data. No mysterious “AI learning” — just statistical optimization on tracked outcomes.
  6. The global capability library is quality-gated. Capabilities only enter the global pool after meeting satisfaction thresholds. Lower-quality results stay private. This prevents contamination and ensures the shared library continuously improves.
  7. Self-improving but never self-modifying. The platform gets stronger with every successful capability execution. But it never rewrites its own rules, evolves outside task boundaries, gains open-ended autonomy, or becomes unpredictable. This is the golden line that must never be violated.
  8. Start with 20-30 core domains, not 1,000. The ontology should be designed for organic growth but shipped with a manageable core. Developer expansion fills the rest. Trying to define 1,000 domains upfront is the wrong approach — define how domains are born, compete, and evolve.
  9. Irreversible actions require confirmation by default. Any capability action that cannot be undone (sending emails, making payments, deleting data) must require explicit user confirmation unless the user has explicitly configured auto-approval for that specific action type.
  10. The capability system integrates with — but does not replace — Personas and Cipher. Personas mediate between users and capabilities. Cipher orchestrates capability selection at the system level. The DCF is infrastructure that Personas access and Cipher governs. Users never interact with the DCF directly.
Last modified on April 20, 2026