Document 1: Spaces Dashboard Design — Complete Feature Breakdown
For Junior Developers New to the aiConnected OS Project

What This Document Covers
This document defines Spaces — the unified workspace hub that lives inside every Instance (think of an Instance as a “project” container). Spaces is where all non-chat content is organized, accessed, and managed. It is the single most important organizational feature for users who are actively building things, managing tasks, collecting ideas, or producing outputs.
Context: Where Spaces Fits in the Platform
Before diving in, you need to understand the hierarchy:
- aiConnected OS — the entire platform
- Instances — individual project/workspace containers (like “projects” in ChatGPT or Claude)
- Instance Dashboard — the home screen when you open an Instance
- Spaces — a tab/section within the Instance Dashboard that unifies all non-chat artifacts
Spaces lives inside the Instance Dashboard. Everything inside Spaces is scoped to that Instance by default.
FEATURE 1: Spaces Home View (The “Control Room”)
What It Is
When a user clicks “Spaces” in the Instance Dashboard sidebar, they land on a Spaces Home screen. This is NOT a file browser or a list of links. It’s a visual “control room” — a dashboard-within-a-dashboard that shows overview cards for every content type the Instance contains.
What It Does
Displays summary cards for each content type (Tasks, Whiteboard, Live Docs, Chats, Files, Folders, Snippets, Links, Exports), each showing key stats and quick-action buttons.
Intended Purpose
Users accumulate dozens of files, tasks, documents, and code snippets across many chat conversations. Without Spaces, all of this content is trapped inside individual chats and invisible unless you scroll through conversation history. Spaces surfaces everything in one place so users can act on it without hunting.
Why Anyone Should Care
Current AI platforms (ChatGPT, Claude, Gemini) have no native way to see “everything I’ve created or saved across all my conversations in this project.” Content gets buried in chat history. Spaces solves this by treating every artifact as a first-class object that exists independently of the chat that created it.
How It Should Be Built
Top Bar Components:
- Scope Selector — a dropdown or toggle with two options:
  - This Instance (default) — shows content from the current Instance only
  - All Instances — shows content aggregated across every Instance the user has (this is a future/power-user feature)
- Global Search Bar — searches across all content types within the current scope (tasks, docs, chats, files, etc.)
- Filter Bar — three filter dimensions:
- Type: Tasks, Docs, Whiteboard, Chats, Files, Snippets, Folders, Links, Exports
- Time: Today, This week, This month, Custom date range
- Source: All, AI-created, User-created, Imported
Each overview card shows:
- A stat summary (e.g., “12 Open | 3 Due Today” for Tasks)
- Quick-action buttons (e.g., View all, New Task)
- A preview strip showing the most recent 2-3 items
Overview cards (one per content type):
- Tasks — “12 Open | 3 Due Today” — Buttons: View all, New Task — Shows next 3 tasks
- Whiteboard — “1 Whiteboard | 42 pinned items” — Buttons: Open whiteboard, View pinned items list — Shows recently pinned strip
- Live Documents — “6 Documents | Last updated 2 hours ago” — Buttons: View all, New document — Recently updated docs list
- Chats — “32 Chats | 5 linked to this instance” — Buttons: View chats, Start chat from task — Last 3 active chats
- Folders — “4 Folders | 21 items inside” — Buttons: View all folders, New folder
- Files — “63 Files | 18 Images, 11 PDFs, 4 Audio, 30 Other” — Buttons: Browse files — Recent uploads
- Code Snippets — “9 Snippets” — Buttons: View all, New snippet
- Links — “15 Links” — Buttons: View all, Add link
- Exports — “7 Exports | 3 Presentations, 4 Docs” — Buttons: View all, Create export
Technical Notes for Developers
- Each overview card needs a real-time or near-real-time count query against the Instance’s content store
- The scope selector changes the data source for every card simultaneously
- Search should be full-text across all content types with type-faceted results
- Cards should be rendered as reusable components since the same data model drives both the overview card and the full dedicated view
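The notes above can be sketched as a single aggregation that drives both the overview cards and the dedicated tab views. This is a minimal TypeScript sketch; the type and function names (`SpaceItem`, `CardSummary`, `summarizeByType`) are illustrative assumptions, not the final data model.

```typescript
// Illustrative shared data model behind each overview card.
type SpaceItemType =
  | "task" | "whiteboard" | "doc" | "chat" | "file"
  | "folder" | "snippet" | "link" | "export";

interface SpaceItem {
  id: string;
  type: SpaceItemType;
  createdAt: number; // epoch millis
}

interface CardSummary {
  type: SpaceItemType;
  count: number;
  recent: SpaceItem[]; // preview strip: most recent 2-3 items
}

// One aggregation drives both the overview card and the dedicated tab view.
function summarizeByType(items: SpaceItem[], previewSize = 3): CardSummary[] {
  const byType = new Map<SpaceItemType, SpaceItem[]>();
  for (const item of items) {
    const bucket = byType.get(item.type) ?? [];
    bucket.push(item);
    byType.set(item.type, bucket);
  }
  return [...byType.entries()].map(([type, bucket]) => ({
    type,
    count: bucket.length,
    recent: [...bucket]
      .sort((a, b) => b.createdAt - a.createdAt)
      .slice(0, previewSize),
  }));
}
```

In a real backend the counts would come from indexed count queries rather than an in-memory scan, but the shape of the result (count plus a small recent-items preview) is the same.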
FEATURE 2: Tabbed Sub-Navigation
What It Is
A horizontal tab bar that sits directly below the search bar inside the Spaces view. Tabs are:
Overview | Tasks | Whiteboard | Live Docs | Chats | Folders | Files | Snippets | Links | Exports
What It Does
Clicking any tab switches the main content area to a full dedicated view for that content type. “Overview” is the Spaces Home described above.
Intended Purpose
Prevents Spaces from becoming its own cluttered sidebar. Instead of adding 10 new sidebar items to the Instance Dashboard, everything is contained within one Spaces entry, and users navigate between content types using lightweight tabs.
Why Anyone Should Care
Tab-based navigation inside a single view is far less cognitively demanding than a sidebar with dozens of entries. It keeps the main app sidebar clean while still giving power users access to every content type.
How It Should Be Built
- Horizontal tab bar, scrollable if tabs overflow the viewport width on smaller screens
- Clicking a tab replaces the main content panel (not a page navigation — this is a client-side view switch)
- The currently active tab should be visually highlighted
- Each tab view is its own component/page with dedicated layout, filters, and actions
- URL routing should reflect the active tab (e.g., /instance/:id/spaces/tasks) for deep-linking and browser history support
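A minimal sketch of the deep-link routing, assuming the /instance/:id/spaces/:tab shape from the bullet above; the helper names are hypothetical, and in practice a router library would own the matching.

```typescript
// Illustrative tab identifiers and URL helpers for the Spaces view.
const SPACES_TABS = [
  "overview", "tasks", "whiteboard", "docs", "chats",
  "folders", "files", "snippets", "links", "exports",
] as const;
type SpacesTab = (typeof SPACES_TABS)[number];

function spacesPath(instanceId: string, tab: SpacesTab): string {
  return `/instance/${instanceId}/spaces/${tab}`;
}

// Parse a path back into (instanceId, tab); unknown or missing tab
// segments fall back to "overview" so stale links still land somewhere sane.
function parseSpacesPath(
  path: string
): { instanceId: string; tab: SpacesTab } | null {
  const m = path.match(/^\/instance\/([^/]+)\/spaces(?:\/([^/]+))?$/);
  if (!m) return null;
  const tab = (SPACES_TABS as readonly string[]).includes(m[2] ?? "")
    ? (m[2] as SpacesTab)
    : "overview";
  return { instanceId: m[1], tab };
}
```

Keeping the tab list as a single `as const` array means the tab bar, the router, and the type system all share one source of truth.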
FEATURE 3: Tasks Space
What It Is
A lightweight task/to-do list scoped to the Instance. Not a full project management tool — just a fast way to capture “do this later” items that emerge from conversations.
What It Does
Displays all tasks for the Instance in a list with columns: Task name, Source (which chat/message created it), Status (Open / In Progress / Done), Due date, Tags, and row-level actions.
Intended Purpose
During a brainstorming chat, users often think “I need to do X later.” Without a task system, that thought is lost in chat history. Tasks let users capture action items from any chat and manage them separately.
Why Anyone Should Care
Every other AI chat platform loses action items inside conversations. This feature means ideas that emerge in chat become trackable, actionable items that live beyond the conversation.
How It Should Be Built
List/Table View with columns:
- Task name (text)
- Source — which chat, message, or whiteboard item created it. Clicking opens the original source.
- Status — Open, In Progress, Done (start with just Open/Done for v1)
- Due date — optional date picker
- Tags — free-text tags (e.g., “PRD”, “UI”, “Sales”)
- Actions column
Filters:
- Status: Open / In Progress / Done
- Timing: Due Today / This Week / Overdue
- Origin: Created from chat / Created manually / Created by AI
Row-level actions:
- Open in chat — jumps to the original message that spawned this task
- Start new chat from task — creates a new conversation pre-seeded with the task description
- Convert to live document — promotes the task into a Live Document
- Create reminder / external notification — sends to email, Slack, etc. (future integration)
- Pin to whiteboard — adds the task as a node on the Instance’s Whiteboard
- Tasks are scoped per Instance. There is no global task list in v1, but later a “All Tasks” rollup across Instances may be added.
- The Tasks feature can be toggled ON/OFF per Instance Type in settings (not every Instance needs tasks).
- Creating a task from a chat message pre-fills the title from the message content.
- Status changes should be single-click (checkbox or status pill toggle).
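The row structure and single-click status toggle described above might look like the following sketch. Field names are illustrative, and v1 deliberately keeps only the two statuses the notes call for.

```typescript
// Illustrative v1 task row for the Tasks Space list view.
type TaskStatus = "open" | "done";

interface TaskRow {
  id: string;
  title: string;
  status: TaskStatus;
  dueDate?: string;  // optional ISO date
  tags: string[];
  // Where the task came from; absent for manually created tasks.
  source?: { chatId: string; messageId?: string };
}

// Single-click status change: checkbox or pill just flips open <-> done.
function toggleStatus(task: TaskRow): TaskRow {
  return { ...task, status: task.status === "open" ? "done" : "open" };
}
```

Returning a new object rather than mutating keeps the toggle friendly to UI frameworks that diff state.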
FEATURE 4: Whiteboard Space
What It Is
The management interface for the Instance’s visual Whiteboard/Board (a Miro-like infinite canvas — defined in detail in Document 5).
What It Does
From Spaces, the Whiteboard view shows:
- A primary Open Whiteboard button to launch the full canvas
- A list/table of all pinned items currently on the Whiteboard, with: Type (message, image, export, link, note), Source chat, Short preview, When it was pinned
Intended Purpose
The Whiteboard itself is a spatial canvas. But sometimes users want a quick list view of everything on it — to filter, unpin, or convert items — without opening the full canvas.
Why Anyone Should Care
Users pin dozens of items from different chats to the Whiteboard over days or weeks. This list view gives them a fast way to audit what’s there, clean up stale items, or convert pinned content into tasks/documents.
How It Should Be Built
- Open Whiteboard button launches the full canvas view (separate component, defined in Doc 5)
- Pinned items table with filters by type
- Each row allows: Open in original chat, Unpin, Convert to Task, Convert to Live Document section, Convert to Export draft
FEATURE 5: Live Documents Space
What It Is
A central hub for long-form, evolving documents — PRDs, specs, business plans, etc. — that can be fed content from multiple chats.
What It Does
Shows a list of all Live Documents in the Instance with columns: Title, Description/Tagline, Last updated, Linked chats count, Status (Draft / In Review / Final).
Intended Purpose
In real projects, a single document (like a PRD) gets built incrementally across many conversations. Live Documents are long-form artifacts that persist and evolve, fed by content from any chat in the Instance.
Why Anyone Should Care
No AI platform currently lets you build a single document by feeding it content from multiple separate conversations. Live Documents solve the “my PRD is scattered across 15 chats” problem.
How It Should Be Built
- Document list with columns and status badges
- Click a document to open it in an editor panel (rich text editor)
- “Linked chats” shows which conversations contributed content (clickable links back to source chats)
- “Add section from chat” lets users push content from any chat message into a specific section of the doc
- “Create export” generates a downloadable PDF, presentation, or other format from the Live Document
- Status workflow: Draft → In Review → Final
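The Draft → In Review → Final workflow can be expressed as a tiny transition table. The sketch below is an assumption: the spec names only the forward path, so backward transitions (e.g. reopening a Final doc) are deliberately omitted here.

```typescript
// Illustrative status workflow for Live Documents.
type DocStatus = "draft" | "in_review" | "final";

// Forward-only transition table; "final" is terminal.
const NEXT: Record<DocStatus, DocStatus | null> = {
  draft: "in_review",
  in_review: "final",
  final: null,
};

// Advance one step; a terminal status stays where it is.
function advanceStatus(status: DocStatus): DocStatus {
  return NEXT[status] ?? status;
}
```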
FEATURE 6: Chats Space
What It Is
A view of all chats associated with the current Instance, with relationship metadata.
What It Does
Shows a list of all chats with columns: Chat title, Type (Standard, Linked conversation, Reference), Last activity, Linked artifacts (tasks, docs, whiteboard items), Folder association.
Intended Purpose
Gives users a bird’s-eye view of every conversation in the Instance, along with what each conversation has produced (tasks, documents, pins, etc.) and how conversations relate to each other.
Why Anyone Should Care
In current platforms, chats are flat lists with no visible relationships. This view shows the conversation graph — which chats branched from which, what artifacts each chat produced, and how everything connects.
How It Should Be Built
- Chat list with metadata columns
- “Relationships” panel per chat showing: Parent/child linked conversations, Referenced conversations (context pull-ins)
- Actions: Open chat, Add to folder, Mark as “primary” for a topic
- Links to artifacts that were created from each chat (tasks, docs, whiteboard pins)
FEATURE 7: Folders Space
What It Is
A structural organization layer that sits between “Instance” and “chat.” Folders can contain chats, tasks, docs, files, and more.
What It Does
Shows a list of folders with: Folder name, Description, Item counts (Chats | Docs | Tasks | Files), Last updated. Clicking into a folder shows a mini-Spaces scoped to just that folder’s contents.
Intended Purpose
Large Instances need sub-organization. A folder for “UI Work,” another for “Market Research,” another for “Sales” — each containing only the relevant chats, tasks, and files.
Why Anyone Should Care
Without folders, a project Instance with 50+ chats and dozens of files becomes unmanageable. Folders add the hierarchical organization that power users need.
How It Should Be Built
- Folder list at the top level
- Inside each folder: a mini-Spaces view with tabs Summary | Chats | Tasks | Docs | Files
- A folder is essentially a “sub-space” — same UI patterns, narrower scope
- Folders are optional — users don’t have to use them
FEATURE 8: Files Space
What It Is
A centralized file browser for all uploaded or AI-generated files in the Instance.
What It Does
Shows a grid or list of files with filters by: Type (Image, PDF, Audio, Video, Other), Source (Upload, Generated by AI, Imported), Linked items (Chats, Live Docs, Whiteboard, Exports). Each file shows: Preview/thumbnail, Name, Type, Size, Linked items.
Intended Purpose
Files get created throughout chat conversations — images generated, PDFs uploaded, code exported. Without Files Space, these are buried in individual chat messages. This view surfaces them all.
Why Anyone Should Care
Finding “that image the AI generated last week” shouldn’t require scrolling through 50 chat messages. Files Space makes every file instantly discoverable and actionable.
How It Should Be Built
- Grid view (thumbnails) and list view toggle
- Filters by type, source, and linked items
- Actions per file: Open viewer, Attach to live doc or export, Pin to whiteboard, Insert into chat, Add to folder
- Files should be automatically indexed when created (in chat, by AI, or by upload)
FEATURE 9: Code Snippets Space
What It Is
A storage and retrieval system for reusable code, prompts, or configuration snippets.
What It Does
Shows a list of saved snippets with: Language/Type (JS, Python, Shell, Prompt, etc.), Title, Short description, Tags, Origin (which chat created it).
Intended Purpose
Developers and power users frequently generate useful code snippets during conversations. This feature saves them as independent, searchable objects rather than losing them in chat history.
Why Anyone Should Care
If the AI writes a useful database query or a Python function during a chat, the user should be able to find and reuse it without searching through old conversations.
How It Should Be Built
- Snippet list with language syntax highlighting in previews
- Actions: Copy to clipboard, Insert into chat, Insert into live doc, Attach to folder
- Snippets can be created from chat (contextual “Save as snippet” action on code blocks) or directly within Snippets Space
FEATURE 10: Links Space
What It Is
A bookmark/reference manager for all saved links — both internal (to other chats, docs, exports) and external (URLs).
What It Does
Shows all saved links with: Title, URL, Type (External website, Internal chat, Live doc section, Export), Origin (what created it), Tags.
Intended Purpose
During research and brainstorming, users accumulate many references. Links Space keeps them organized and actionable rather than lost in chat.
How It Should Be Built
- Link list with type badges
- Actions: Open link, Add to folder, Convert to task (“Follow up on this resource”)
- Links can be saved from chat messages (contextual “Save link” action) or created directly
FEATURE 11: Exports Space
What It Is
A hub for all final output files — PDFs, slide decks, markdown exports, etc. — generated from Live Documents, Whiteboard compilations, or direct export actions.
What It Does
Shows all exports with: Title, Type (PDF, Deck, Markdown, etc.), Source (which live doc/whiteboard/task generated it), Created date, Last regenerated.
Intended Purpose
When a user compiles their Whiteboard into a PRD or exports a Live Document as a PDF, that output lives here. It’s the “finished goods” section.
Why Anyone Should Care
Exports are the tangible deliverables users share with clients, teams, or stakeholders. Having them in one place with regeneration capability (re-export if the source doc changed) is essential.
How It Should Be Built
- Export list with source traceability
- Actions: Download, Regenerate (if source material changed), Share link, Attach to email (future integration), Add to folder
- Regeneration should re-run the compilation from the current state of the source document/whiteboard
FEATURE 12: Content Flow Into Spaces (Automatic Collection)
What It Is
The system by which content automatically flows from chats into Spaces.
What It Does
Whenever a user takes an action in chat — saves a task, pins to whiteboard, uploads a file, saves a snippet, generates an export — that content automatically appears in the appropriate Spaces section.
Intended Purpose
Spaces should feel like it “just collects things” without the user having to manually organize anything. The magic is that content flows in from conversations automatically.
Why Anyone Should Care
If users had to manually copy things from chat into Spaces, nobody would use it. Automatic collection makes Spaces a living, always-up-to-date workspace.
How It Should Be Built
From a chat message, users can:
- Save as task → appears in Tasks Space
- Pin to whiteboard → appears in Whiteboard Space
- Add to live document → appears in Live Docs Space
- Save snippet → appears in Snippets Space
- Save link → appears in Links Space
- Attach file to... → appears in Files Space
- Export created from a doc → appears in Exports Space
- File uploaded in chat → appears in Files Space
- AI generates an image → appears in Files Space
- Users can create tasks, docs, folders, snippets, and links directly from within Spaces, without going back to a chat.
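The automatic-collection rules above amount to a small routing table from chat actions to Spaces sections. A sketch, with illustrative event names:

```typescript
// Illustrative routing: each chat action emits an event, and the
// router decides which Spaces section the resulting artifact lands in.
type ChatAction =
  | "save_as_task" | "pin_to_whiteboard" | "add_to_live_document"
  | "save_snippet" | "save_link" | "attach_file"
  | "file_uploaded" | "ai_generated_image" | "export_created";

type SpaceSection =
  | "tasks" | "whiteboard" | "docs" | "snippets" | "links" | "files" | "exports";

const ROUTES: Record<ChatAction, SpaceSection> = {
  save_as_task: "tasks",
  pin_to_whiteboard: "whiteboard",
  add_to_live_document: "docs",
  save_snippet: "snippets",
  save_link: "links",
  attach_file: "files",
  file_uploaded: "files",      // uploads are indexed automatically
  ai_generated_image: "files", // AI-generated media too
  export_created: "exports",
};

function routeAction(action: ChatAction): SpaceSection {
  return ROUTES[action];
}
```

Using an exhaustive `Record` means the compiler flags any new chat action that has not been given a destination in Spaces.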
FEATURE 13: Dashboard ↔ Spaces Relationship
What It Is
The structural relationship between the Instance Dashboard, Spaces, and the global view.
What It Does
Defines the navigation hierarchy: Dashboard → Instance View → Spaces (scoped to that Instance). A future “Global Spaces” view aggregates across all Instances.
Intended Purpose
Ensures users always know where they are in the hierarchy and can easily switch between Instance-scoped and global views.
How It Should Be Built
- Instance View has tabs like: Chat, Spaces, Settings
- Spaces inside each Instance is scoped by default to that Instance
- The scope selector in Spaces allows switching to “All Instances” for a cross-Instance aggregate view
- The global Dashboard (above Instance level) may show a Spaces summary widget with stats across all Instances
Example User Flow (End-to-End)
To help you visualize how all these features work together:
- User is brainstorming in a chat and writes: “We should create an onboarding flow for developers submitting engines.”
- User clicks Save as Task on that message.
- The task appears under Spaces → Tasks for the Instance.
- Next day, user opens Spaces → Tasks, sees the task, clicks Start chat from task.
- A new chat opens, pre-seeded with the task description.
- User and AI discuss the onboarding flow. User selects key messages and clicks Add to Live Document.
- A Live Document section is updated with the new decisions.
- In Spaces, user can now see:
- One task (status: In Progress)
- One Live Document with updated sections
- Two chats linked together
- All living in one organized place
Key Implementation Principles
- Spaces is not a separate app — it’s a view within the Instance Dashboard
- Everything is scoped to the Instance by default — global views come later
- Content flows in automatically from chats — users don’t manually “import”
- Every item traces back to its source — tasks know which message created them, files know which chat generated them
- Spaces is both read and write — users can browse existing content AND create new content directly
- The feature set is toggleable — not every Instance Type needs Tasks or Code Snippets; these can be turned on/off in Instance Settings
- Start simple, add views later — v1 is list views with filters; Kanban boards, graph views, and advanced layouts come in later versions
Document 2: Task Feature Spec — Complete Feature Breakdown
For Junior Developers New to the aiConnected OS Project
What This Document Covers
This document defines the Task System — a lightweight, per-Instance to-do list that captures action items emerging from conversations and transforms them into live, actionable objects. Tasks are not a full project management system. They are fast, contextual, and deeply integrated with the chat experience, the Whiteboard, reminders, email, Slack notifications, and AI-powered assistance.
Context: Why Tasks Exist
When users brainstorm inside AI chat conversations, action items naturally emerge: “I need to update that document,” “I should compile a PRD from this discussion,” “I need to follow up on this idea tomorrow.” On every existing AI platform (ChatGPT, Claude, Gemini), those action items are immediately lost in chat scroll. There is no native way to say “remind me about this” or “add this to a to-do list.” The Task feature solves this by giving every Instance a built-in, always-available to-do list that can be populated directly from chat messages, whiteboard items, or manual entry — and then acted upon through reminders, new chats, emails, and external notifications.
FEATURE 1: Core Task Object (Data Model)
What It Is
The fundamental data structure that represents a single task in the system.
What It Does
Stores everything needed to track what the user needs to do, where the task came from, and what actions have been taken on it.
Intended Purpose
Provides the structured foundation that every other Task feature builds on. Without a clean data model, nothing else works.
Why Anyone Should Care
The data model is intentionally minimal — this is NOT Jira or Asana. The goal is speed and simplicity, with optional fields for power users who want more control.
How It Should Be Built
Core Fields (v1 — ship these): title, status, source_reference, created_at. Optional fields (notes, due date, tags, priority, and the agentic reminder/notification fields) can be layered on incrementally.
Technical Notes
- The v1 data model should be title, status, source_reference, created_at at minimum. Everything else is optional and can be added incrementally.
- source_reference is critical — it’s what lets users jump back to the original message or whiteboard item that spawned the task. Never lose this link.
- Later, the agentic fields can be broken into separate tables (TaskReminders, TaskNotifications) for cleaner separation of concerns.
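Putting the notes together, the v1 Task record might look like the following sketch. Only the four required fields come straight from the Technical Notes; the optional field names are assumptions drawn from features described elsewhere in this spec and are subject to change.

```typescript
// Illustrative v1 Task record. Required: title, status,
// sourceReference, createdAt. Everything else is optional.
interface SourceReference {
  sourceType: "message" | "whiteboard" | "manual";
  conversationId?: string;
  messageId?: string;
  whiteboardItemId?: string;
}

interface Task {
  id: string;
  title: string;                    // required
  status: "todo" | "done";          // required (v1: two statuses only)
  sourceReference: SourceReference; // required — never lose this link
  createdAt: string;                // required, ISO timestamp
  notes?: string;
  dueDate?: string;
  tags?: string[];
  // Agentic fields — candidates for separate tables later:
  reminderAt?: string;
  reminderChannels?: ("in_app" | "email" | "slack")[];
  activeChatId?: string;            // set when a chat is spawned from this task
}

// Minimal validation of the required core fields.
function isValidTask(t: Task): boolean {
  return t.title.length > 0 && !!t.sourceReference && !!t.createdAt;
}
```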
FEATURE 2: Creating Tasks from Chat Messages
What It Is
The ability to turn any chat message into a task with one click.
What It Does
When a user clicks the ⋯ (more actions) menu on any message in a chat, they see an “Add to Tasks” option. Clicking it opens a small inline modal where the title is pre-filled with a smart summary of the message content, and the user can optionally set a due date and notes before saving.
Intended Purpose
This is the primary way tasks are born. During a conversation, the user thinks “I need to do something about this” — and instead of losing that thought, they capture it instantly without leaving the chat.
Why Anyone Should Care
This is the feature that makes the difference between “I had a great idea during a chat but forgot about it” and “I have a running list of everything I need to act on.” It’s the bridge between thinking and doing.
How It Should Be Built
User Flow:
- User is in a chat conversation
- User clicks ⋯ on a specific message
- User selects “Add to Tasks” (or “Remind me about this later”)
- Small inline modal appears with:
- Task title — pre-filled with an AI-generated smart summary of the message (e.g., “Update X document based on this idea”). User can edit this.
- Due date — optional date picker
- Notes — optional text area for additional context
- [Save] button
- System stores the task with:
  - source_type = "message"
  - conversation_id and message_id from the current chat
  - instance_id from the current Instance
- In the Tasks panel, the task shows a small “From message” badge. Clicking it jumps back to the exact message in the original chat.
- The “smart summary” for the title should be generated by the AI — take the message content and produce a concise action-oriented title. If AI is unavailable or too slow, fall back to truncating the first ~80 characters of the message.
- The modal should be lightweight (not a full-page form). Think: inline popover or small slide-in panel.
- After saving, show a brief confirmation toast: “Task saved” with a link to the Tasks panel.
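The ~80-character fallback mentioned above could look like this; the word-boundary trimming is an added assumption for readability, not something the spec requires.

```typescript
// Fallback title when the AI smart summary is unavailable or too slow:
// collapse whitespace, then truncate to roughly 80 characters,
// preferring to cut at a word boundary.
const MAX_TITLE_LENGTH = 80;

function fallbackTitle(messageContent: string): string {
  const text = messageContent.trim().replace(/\s+/g, " ");
  if (text.length <= MAX_TITLE_LENGTH) return text;
  const cut = text.slice(0, MAX_TITLE_LENGTH);
  const lastSpace = cut.lastIndexOf(" ");
  // Only back up to the word boundary if it doesn't lose too much text.
  return (lastSpace > 40 ? cut.slice(0, lastSpace) : cut) + "…";
}
```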
FEATURE 3: Creating Tasks Manually
What It Is
A simple text input at the top of the Tasks panel that lets users type a task directly without referencing a specific message.
What It Does
User types a one-line task description and presses Enter. The task is created immediately with source_type = "manual" and no source reference.
Intended Purpose
Sometimes users just want a quick reminder that isn’t tied to a specific message: “Review this chat later,” “Schedule a call about the pricing model,” etc.
Why Anyone Should Care
Not every task comes from a specific message. Manual entry covers the cases where the user just wants to jot down a thought or reminder without the friction of finding a message to attach it to.
How It Should Be Built
- Simple text input at the top of the Tasks panel with placeholder text: “Add a task…”
- Press Enter to create the task instantly
- Optional: a small “More” button next to the input that expands to show Notes, Due date, and Priority fields
- v1 can be just the one-line input. Advanced fields come later.
- Tasks created this way can optionally be linked to the current conversation generically (store the conversation_id but no specific message_id)
FEATURE 4: Creating Tasks from the Whiteboard
What It Is
The ability to create tasks directly from Whiteboard nodes (sticky notes, clusters, cards).
What It Does
Each whiteboard item has a ⋯ menu with a “Create Task from This” option. Same flow as creating from a message: pre-filled title, optional due date and notes, stored with source_type = "whiteboard" and the whiteboard_item_id.
Intended Purpose
The Whiteboard is where users collect and organize ideas from many conversations. Some of those ideas are actionable. This bridges “idea space” (whiteboard) to “action space” (tasks).
Why Anyone Should Care
Ideas sitting on a whiteboard are inert until someone decides to act on them. This feature turns ideas into trackable action items with one click.
How It Should Be Built
- Same modal as the message-based creation flow
- Title pre-filled from the whiteboard item’s text content
- source_type = "whiteboard", whiteboard_item_id stored
- In the Tasks panel, clicking the source badge opens the whiteboard and highlights the originating item
FEATURE 5: Tasks Panel (Viewing & Managing Tasks)
What It Is
A dedicated section within the Instance Dashboard that displays all tasks for the current Instance.
What It Does
Shows a list of tasks with checkbox status toggles, source badges, optional due dates, and basic filtering. Users can quickly scan what needs doing, mark things done, and jump to source context.
Intended Purpose
The Tasks panel is where users go to answer: “What do I need to do for this project?” It’s the single place that aggregates all captured action items.
Why Anyone Should Care
Without this panel, tasks would just be invisible entries in a database. The panel makes them scannable, manageable, and actionable.
How It Should Be Built
Layout:
- Panel title: “Tasks for this Instance”
- Segmented filter bar: All | To Do | Done
- List of task rows, each showing:
  - Checkbox (click to toggle todo ↔ done)
  - Title (click to open detail drawer)
  - Source badge: “From message” / “Manual” / “From whiteboard” (small icon + text)
  - Due date (if set)
Detail drawer (opened by clicking a task title):
- Full notes (if any)
- “Open source” button — jumps back to the original message or whiteboard item
- Edit title, notes, due date, priority
- Action buttons (Remind, Email, Start Chat, Notify — see agentic features below)
Sorting:
- Default: status (To Do first) then created_at (newest first)
- v2: drag-and-drop reorder
Scope note: v1 statuses are just To Do and Done. Do NOT add In Progress, priority levels, or subtasks in v1. Add those only when user feedback demands it. The moment this feels like a project management tool, you’ve gone too far.
FEATURE 6: AI-Assisted Task Creation (“Sweep my chat”)
What It Is
The ability to ask the AI to scan a conversation and automatically propose tasks based on action items it identifies.
What It Does
User types something like: “Summarize what I need to do from today’s chat and add them as tasks.” The AI scans recent messages, identifies action items, and presents a confirmation modal with a proposed list of tasks. The user checks the ones they want and clicks “Create Tasks.”
Intended Purpose
After a long brainstorming session, users don’t want to manually scroll through 50 messages and create tasks one by one. The AI can do this in seconds.
Why Anyone Should Care
This is the difference between “tasks are a manual chore” and “tasks feel like they manage themselves.” AI-assisted creation dramatically reduces the friction of staying organized.
How It Should Be Built
- User triggers the command (via chat input or a “Scan for tasks” button in the Tasks panel)
- AI processes the recent conversation history for the current chat
- AI returns a list of proposed tasks with suggested titles:
- “Draft PRD section on instance dashboard”
- “Update Cognigraph doc with learning sub-architecture”
- “Create UI sketches for tasks pane”
- Modal displays the proposed tasks with checkboxes (all checked by default)
- User unchecks any they don’t want, optionally edits titles
- User clicks “Create Tasks”
- System batch-creates all selected tasks with source_type = "message" and references to the relevant messages
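The confirmation step can be sketched as a filter over the proposed list: everything the user left checked gets batch-created with a message reference. The shapes (`ProposedTask`, `CreatedTask`) are illustrative.

```typescript
// Illustrative shapes for the "sweep my chat" confirmation modal.
interface ProposedTask {
  title: string;
  messageId: string; // the message the AI extracted this item from
  selected: boolean; // all checked by default in the modal
}

interface CreatedTask {
  title: string;
  sourceType: "message";
  messageId: string;
}

// Batch-create only the proposals the user left selected.
function batchCreate(proposals: ProposedTask[]): CreatedTask[] {
  return proposals
    .filter((p) => p.selected)
    .map((p): CreatedTask => ({
      title: p.title,
      sourceType: "message",
      messageId: p.messageId,
    }));
}
```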
FEATURE 7: Set Reminder on a Task
What It Is
The ability to schedule a time-based reminder for any task, delivered through one or more channels (in-app notification, email, Slack).
What It Does
From any task’s action menu, user clicks “Set Reminder.” A small form appears where they choose when (date/time or relative like “tomorrow morning”) and how (in-app, email, Slack, or any combination). At the scheduled time, the system delivers the reminder through all selected channels.
Intended Purpose
Tasks without reminders are just lists that users forget to check. Reminders turn passive tasks into active nudges that find the user wherever they are.
Why Anyone Should Care
This is what makes tasks “agentic” — they don’t just sit there, they reach out and grab your attention when it matters.
How It Should Be Built
Reminder Form:
- When: date/time picker, or quick options (“In 1 hour”, “Tomorrow morning”, “Next Monday”)
- Channels: checkboxes for In-app, Email, Slack
- Save button
Delivery:
- In-app: notification badge in the app, plus a toast/banner when the user is active
- Email: system sends an email with subject “Task reminder – [Task Title]”, body includes task title, notes, Instance name, and a deep link back to the task
- Slack: system posts to the configured Slack channel/DM with task title, Instance name, notes, and a link
Technical Notes:
- Store reminder_at and reminder_channels on the task
- A scheduled job (cron, n8n workflow, or internal scheduler) checks for due reminders and dispatches them
- Events emitted: task.reminder.triggered → consumed by email service, Slack integration, in-app notification service
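The scheduled job described above boils down to: find tasks whose reminder time has passed and has not yet fired, then emit one task.reminder.triggered event per selected channel. A sketch with illustrative field names:

```typescript
// Illustrative reminder check, run periodically by the scheduler.
type Channel = "in_app" | "email" | "slack";

interface ReminderTask {
  id: string;
  title: string;
  reminderAt?: string;   // ISO timestamp
  reminderChannels?: Channel[];
  reminderSent?: boolean; // guard against double-dispatch
}

interface ReminderEvent {
  type: "task.reminder.triggered";
  taskId: string;
  channel: Channel;
}

function dueReminderEvents(tasks: ReminderTask[], now: Date): ReminderEvent[] {
  const events: ReminderEvent[] = [];
  for (const t of tasks) {
    if (!t.reminderAt || t.reminderSent) continue;
    if (new Date(t.reminderAt) > now) continue; // not due yet
    // One event per channel; default to in-app if none chosen.
    for (const channel of t.reminderChannels ?? ["in_app"]) {
      events.push({ type: "task.reminder.triggered", taskId: t.id, channel });
    }
  }
  return events;
}
```

The email, Slack, and in-app services would each subscribe to these events, keeping channel delivery decoupled from the scheduler.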
FEATURE 8: Start New Chat from Task
What It Is
The ability to launch a brand-new conversation that is pre-seeded with the task’s context.
What It Does
From any task’s action menu, user clicks “Start Chat from Task.” A new chat is created within the same Instance, pre-populated with the task title, notes, due date, and the content of the original source message (if applicable). The new chat is automatically linked to the task.
Intended Purpose
When it’s time to actually work on a task, the user shouldn’t have to manually copy context into a new conversation. This creates an instant, focused workspace for that task.
Why Anyone Should Care
This closes the loop between “capture” and “execute.” The task was born in a conversation, and now it spawns a new conversation to get it done — with all the context automatically carried over.
How It Should Be Built
- User clicks “Start Chat from Task” on any task
- System creates a new chat in the current Instance
- The chat’s initial context includes:
- The task title and notes
- The content of the source message/whiteboard item (if available)
- A system message: “This conversation is about Task #[id]: [title]”
- The task record is updated with a link to the new chat: active_chat_id
- In the Tasks panel, the task shows “Active chat: [link]”
- In the new chat, a header or system message shows “This conversation is about Task #123” with a link back to the task and the ability to toggle status or update notes directly
- User can start working immediately: “Break this task into smaller steps,” “Draft the initial PRD outline,” etc.
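The seeding steps above can be sketched as a small function. `create_chat` stands in for the real chat-creation service, and the task/chat dict shapes are assumptions; only the system-message wording and `active_chat_id` come from the spec.

```python
# Sketch of the context-seeding step; Task/Chat shapes are illustrative.
def start_chat_from_task(task: dict, create_chat) -> dict:
    """Create a new chat in the task's Instance, pre-seeded with task
    context, and link it back to the task via active_chat_id."""
    system_msg = f"This conversation is about Task #{task['id']}: {task['title']}"
    seed = [{"role": "system", "content": system_msg}]
    context_lines = [f"Task: {task['title']}"]
    if task.get("notes"):
        context_lines.append(f"Notes: {task['notes']}")
    if task.get("due_date"):
        context_lines.append(f"Due: {task['due_date']}")
    if task.get("source_message"):
        context_lines.append(f"Source message: {task['source_message']}")
    seed.append({"role": "user", "content": "\n".join(context_lines)})
    chat = create_chat(instance_id=task["instance_id"], messages=seed)
    task["active_chat_id"] = chat["id"]  # surfaced as "Active chat: [link]"
    return chat
```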
FEATURE 9: Email from Task
What It Is
The ability to send an email (to yourself or someone else) directly from a task, optionally with AI-drafted content.
What It Does
Two modes:
a) Email to Yourself (Reminder/Snapshot):
- System composes an email with the task title, notes, Instance name, a link back to the task, and optionally an AI-generated summary of the source message/whiteboard content.
- One-click send.
b) Email to Someone Else:
- Modal with: Recipients (free-form email addresses + optional contact picker), CC/BCC fields
- Toggle: “Have AI draft the email for me”
- If ON, AI reads the task context (title, notes, Instance info, source message) and drafts a professional email
- Example output: “Hey [Name], I’m working on the aiConnected chat dashboard and I need your input on the task system design. Specifically, I want feedback on…”
- User can edit the draft before sending
Intended Purpose
Tasks often require communicating with other people — asking for feedback, delegating work, or just reminding yourself via email. This feature keeps that communication tied to the task instead of requiring the user to open a separate email client.
Why Anyone Should Care
Every other platform forces you to leave the app, open Gmail, manually compose context, and lose the connection between the task and the communication. This keeps everything linked.
How It Should Be Built
- “Email” action in the task’s action menu
- Sub-menu: “Email → Me” (quick send) or “Email → Someone Else” (opens modal)
- Backend sends email via configured email provider (SendGrid, Gmail API, etc.)
- Email contains a deep link back to the task in the app
- Log the email action on the task record for audit trail
FEATURE 10: Notify in External Apps (Slack)
What It Is
The ability to send a task notification to Slack (and eventually Teams, Discord, etc.).
What It Does
From any task, user clicks “Notify → Slack.” If Slack isn’t connected yet, they’re prompted to authenticate and configure a default workspace and channel. Once configured, a modal lets them choose a destination and customize a pre-filled message. Optional AI enhancement generates a more detailed explanation.
Intended Purpose
Many users work in teams where Slack is the primary communication hub. Being able to push task notifications directly from the AI platform into Slack keeps the team informed without manual copy-paste.
Why Anyone Should Care
Tasks that live only inside one app are invisible to the rest of the team. External notifications make tasks visible where the team actually communicates.
How It Should Be Built
First-time setup:
- OAuth flow to connect Slack workspace
- Choose default channel or DM
- Store connection in user settings
- User clicks “Notify → Slack” on a task
- Modal shows: Destination (default channel, or pick another), Pre-filled message template:
- Optional: “Write a more detailed message” toggle — AI generates a longer explanation using task context
- User can edit the message
- Click “Send”
- System posts to Slack via Slack API
- Event emitted: task.slack.notified
- Handled by Slack integration service (or n8n workflow)
- Store notification log on the task
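The send path above can be sketched like this. `post_to_slack` stands in for a real Slack API client (e.g. a `chat.postMessage` wrapper), and the task/payload shapes are assumptions; only the `task.slack.notified` event name and the notification-log requirement come from the spec.

```python
# Illustrative sketch: build the Slack payload, post it, emit the event,
# and log the notification on the task for the audit trail.
def notify_slack(task: dict, destination: str, message: str,
                 post_to_slack, emit) -> dict:
    payload = {
        "channel": destination,
        # fall back to a default template when the user didn't edit the message
        "text": message or f"Task: {task['title']} ({task['instance_name']})",
    }
    post_to_slack(payload)
    emit("task.slack.notified", {"task_id": task["id"], "channel": destination})
    log_entry = {"type": "slack", "channel": destination, "text": payload["text"]}
    task.setdefault("notification_log", []).append(log_entry)
    return log_entry
```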
FEATURE 11: AI Task Agent (“What should I do next?”)
What It Is
An AI-powered meta-agent that can reason about the user’s entire task list and provide prioritized recommendations.
What It Does
The user can ask (in chat or via the Tasks panel): “Look at my tasks for this instance and tell me what I should work on next.” The AI reads all open tasks, considers due dates, priorities, and recency, and responds with a prioritized recommendation. It can also trigger actions like “Start Chat from Task #1.”
Intended Purpose
When a user has 15 open tasks, deciding where to start can feel overwhelming. The AI Task Agent acts as a lightweight personal assistant that helps prioritize.
Why Anyone Should Care
This is where tasks become truly “agentic” — the AI isn’t just storing tasks, it’s helping the user decide what matters most and taking action to help them get started.
How It Should Be Built
“What should I do next?” mode:
- User asks in chat or clicks a “Prioritize” button in Tasks panel
- AI reads all open tasks for the current Instance
- AI considers: due dates (overdue first), priority levels, time since creation, last activity
- AI responds with a ranked recommendation:
- If user says yes, system triggers “Start Chat from Task” automatically
Batch action mode — the user can also issue bulk commands:
- “Set reminders for all tasks due this week”
- “Email me a summary of all open tasks for this Instance”
- “Post all high-priority tasks into Slack”
The AI then:
- Identifies matching tasks
- Batch-creates the requested actions (reminders, emails, Slack posts)
- Confirms what it did: “Set a Slack reminder for 3 tasks, emailed you a summary of 7 open tasks”
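The ranking criteria above (overdue first, then priority, then staleness) can be sketched as a deterministic pre-sort. This is an assumption about implementation, not the spec's method: the real agent would be an LLM, but a heuristic like this can feed it candidates in a sensible order. All weights and field names here are illustrative.

```python
from datetime import date

# Hypothetical scoring weights for the "What should I do next?" ranking.
PRIORITY_WEIGHT = {"high": 3, "medium": 2, "low": 1}

def rank_open_tasks(tasks: list[dict], today: date) -> list[dict]:
    """Rank open tasks: overdue first, then priority, then staleness."""
    def score(task):
        s = PRIORITY_WEIGHT.get(task.get("priority", "medium"), 2)
        due = task.get("due_date")
        if due is not None:
            days_left = (due - today).days
            if days_left < 0:
                s += 10          # overdue dominates everything else
            elif days_left <= 7:
                s += 5           # due this week
        # mild boost for tasks that have been sitting around, capped at 30 days
        s += min((today - task["created_at"]).days, 30) / 30
        return s
    open_tasks = [t for t in tasks if t.get("status") != "done"]
    return sorted(open_tasks, key=score, reverse=True)
```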
FEATURE 12: Integration with Conversation Linking
What It Is
Tasks respect and benefit from the Linked Conversations feature (defined in Document 7).
What It Does
If a task was created from a message in Conversation A, and that message was later used to spawn Conversation B (via the linked conversations feature), the task can display both relationships: “Created from Conversation A, related to Conversation B.”
Intended Purpose
As conversations branch and evolve, tasks should maintain awareness of the full conversation graph, not just the single message they were created from.
How It Should Be Built
- Store conversation_id and message_id at creation time
- When displaying source links, also check if the source message appears in any ConversationLink records
- If links exist, show “Related conversations” in the task detail drawer
- v1: just store the IDs correctly so the linkage can be leveraged later. Don’t over-engineer the display.
FEATURE 13: Integration with Folders
What It Is
Tasks interact cleanly with the Instance Folder system without folders directly owning tasks.
What It Does
Since tasks are per-Instance and folders are per-Instance, the relationship is indirect. In the folder view, each Instance can show a small indicator: “3 open tasks.” A future folder-level view can aggregate: “All tasks for Instances in this folder.”
Intended Purpose
Keeps the mental model clean: Folders → contain Instances → Instances own Tasks. Tasks don’t belong to folders directly.
How It Should Be Built
- In folder views, query task counts per Instance and display as badges
- Future: folder-level aggregation view that combines task lists from all child Instances
- Do NOT add a folder_id to the Task model — tasks belong to Instances, not folders
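The badge query above can be sketched as a count grouped by Instance. The dict shapes are illustrative assumptions; the deliberate absence of any folder_id on task records is the spec's rule.

```python
# Sketch of the indirect folder→tasks relationship: tasks are counted per
# Instance, then surfaced as badges on Instances inside a folder. Note
# there is intentionally no folder_id on the task records.
def open_task_counts_for_folder(folder: dict, tasks: list[dict]) -> dict[int, int]:
    """Return {instance_id: open task count} for every Instance in the folder."""
    instance_ids = set(folder["instance_ids"])
    counts = {iid: 0 for iid in instance_ids}
    for task in tasks:
        if task["instance_id"] in instance_ids and task["status"] != "done":
            counts[task["instance_id"]] += 1
    return counts
```

In production this would be a single grouped SQL query rather than an in-memory scan, but the shape of the result is the same.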
FEATURE 14: Instance Type Settings (Toggle Tasks On/Off)
What It Is
The ability to enable or disable the Tasks feature per Instance Type, with per-Instance overrides.
What It Does
Each Instance Type (e.g., “Deep Project,” “Casual Chat”) has a “Dashboard Modules” configuration where Tasks (and other features like Whiteboard, Folders, Pins) can be toggled on or off. Individual Instances can override their Type’s defaults.
Intended Purpose
Not every Instance needs a task list. A casual Q&A Instance would be cluttered by a Tasks panel. This keeps the interface clean for simple use cases while allowing full power for project Instances.
Why Anyone Should Care
Feature bloat kills products. Allowing users to turn features on/off per Instance Type means the platform adapts to how the user is actually using it, rather than forcing every Instance to look the same.
How It Should Be Built
Instance Type template config:
- “Deep Project / Build Instance”: Tasks ON, all actions ON
- “Casual Chat / Q&A Instance”: Tasks OFF (or Tasks ON but all actions OFF — just local notes)
- Each Instance’s Settings panel has a toggle: “Enable tasks for this instance” that overrides the Type default
FEATURE 15: Backend Event Architecture
What It Is
The event-driven system that powers all agentic task actions.
What It Does
Every task action (reminder triggered, chat started, email sent, Slack notified) emits a structured event that can be consumed by backend services or automation workflows.
Intended Purpose
Keeps the frontend simple (just emit events) while allowing flexible backend processing. Today it might be n8n workflows; tomorrow it could be internal microservices. The event layer is the abstraction that makes this possible.
Why Anyone Should Care
Without a clean event architecture, every new integration (Teams, Discord, SMS, webhooks) requires rewriting frontend logic. Events make the system extensible.
How It Should Be Built
Event types:
- task.created — a new task was created
- task.completed — a task was marked done
- task.reminder.created — a reminder was set
- task.reminder.triggered — a reminder fired (time elapsed)
- task.chat.started — a new chat was started from a task
- task.email.created — an email was sent from a task
- task.slack.notified — a Slack notification was sent
Event consumers:
- In-app notification service → shows badge/toast
- Email service → sends email via provider
- Slack service → posts message via Slack API
- Automation layer (n8n) → can listen and trigger arbitrary workflows
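The emit/consume pattern above can be sketched as a minimal in-process bus. This is an illustrative assumption about shape, not the spec's implementation: a production system would back it with a queue or webhook dispatcher so n8n and external services can subscribe.

```python
from collections import defaultdict

# Minimal in-process event bus sketch: producers emit named events and
# consumers (email service, Slack service, n8n forwarder) subscribe by name.
class EventBus:
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_name: str, handler):
        self._handlers[event_name].append(handler)

    def emit(self, event_name: str, payload: dict):
        # fan out to every registered consumer for this event type
        for handler in self._handlers[event_name]:
            handler(payload)
```

The point of the abstraction: adding a new integration (Teams, SMS, webhooks) is one more `subscribe` call, with no change to the code that emits.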
Key Implementation Principles
- Start minimal — v1 is title, status, source_reference, created_at. Ship that first.
- Source traceability is sacred — every task must know where it came from (message, whiteboard item, or manual). Never lose this link.
- Tasks are launchpads, not checkboxes — the power isn’t in checking things off, it’s in the actions you can take FROM a task (start chat, send email, notify Slack, set reminder).
- Lightweight over heavyweight — start with To Do and Done only. Add In Progress, priority, subtasks, and drag-reorder ONLY when user feedback demands it.
- Toggleable per Instance Type — Tasks should never appear in Instances where they’d be clutter.
- Event-driven backend — every action emits an event. Never hardcode integration logic in the frontend.
- AI enhances, doesn’t replace — AI can propose tasks, draft emails, and prioritize lists, but the user always confirms before anything happens.
Document 3: Live Document Feature Spec — Complete Feature Breakdown
For Junior Developers New to the aiConnected OS Project
What This Document Covers
This document defines Live Documents — persistent, cross-chat, AI-editable documents that belong to an Instance (not to a single chat). Live Documents are the “formalization layer” where messy conversations become real documentation: PRDs, specs, business plans, research studies, presentations, and any other structured output.
Context: The Problem Live Documents Solve
In every existing AI platform, when you brainstorm a complex idea across multiple conversations, the only way to compile everything into a single document is to manually copy-paste from each chat into Google Docs or a word processor. There is no native way to:
- Edit the same document from different conversations
- Have the AI update a document while you’re chatting about something related
- Track which conversations contributed to which sections of a document
- Export a polished, branded document directly from the platform
Context: Where Live Documents Fit in the Platform
Understanding the distinction between the different content types is critical:
- Chat = chronological conversation (messy, exploratory, real-time thinking)
- Whiteboard = nonlinear, spatial canvas for brainstorming and clustering ideas (visual)
- Tasks = action items (“do this later”)
- Live Documents = linear, structured, formalized documentation (the “official” output)
- Folders = organizational structure for grouping chats/content
FEATURE 1: The Live Document Object (Core Definition)
What It Is
A persistent document that belongs to an Instance, not to any single chat. It can be opened, edited, and contributed to from any chat within that Instance, or directly from the Instance Dashboard.
What It Does
Acts as a shared, always-available artifact that accumulates structured content over time. Multiple chats can feed content into the same document. The AI can edit the document via conversational commands. The document can be exported as PDF, Google Docs, presentations, or other formats.
Intended Purpose
Turns scattered conversation insights into polished, deliverable documentation without leaving the platform.
Why Anyone Should Care
This is the feature that transforms aiConnected from “a chat app with memory” into “a workspace that produces real deliverables.” Without Live Documents, users still have to copy-paste into external tools to create anything they can share with others.
Key Characteristics
- Instance-scoped, chat-agnostic — the document belongs to the Instance, any chat can access it
- Message → Document flow — you pull content from messages into the document (not the other way around). The document becomes its own editable artifact.
- AI-editable, human-readable — stored as structured markdown or a block model. The AI can target specific sections for editing.
- Versioned — every edit (human or AI) creates a new version. You can view history and revert.
- Multiple output types — same underlying object can be rendered as a document, presentation outline, or other formats
FEATURE 2: Data Model
What It Is
The database structure that represents Live Documents, their content, and their relationships to conversations.
What It Does
Provides the foundation for storing, versioning, querying, and linking documents to their source conversations.
How It Should Be Built
LiveDocument (the container):
Technical Notes
- Start with Option A (single markdown blob) for v1. It’s simpler to implement and sufficient for initial launch.
- Plan the database schema so migrating to Option B (blocks) later doesn’t require a full rewrite. For example, even in Option A, you could store section headers as metadata.
- Version history is critical from day 1 — AI edits can sometimes produce bad output, and users must be able to revert.
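Since the LiveDocument field list itself isn't spelled out above, here is a hypothetical Option A sketch. The names content_markdown, LiveDocumentContent, DocumentMessageLink, conversation_id, and message_id appear elsewhere in this spec; every other field, default, and type here is an assumption for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical Option A (single markdown blob) schema sketch.
@dataclass
class LiveDocument:
    id: int
    instance_id: int              # Instance-scoped, never chat-scoped
    title: str
    doc_type: str = "document"    # e.g. "prd", "spec", "presentation_outline"
    status: str = "draft"
    content_markdown: str = ""    # Option A: the whole document as one blob
    # storing section headings as metadata eases a later move to Option B (blocks)
    section_headings: list[str] = field(default_factory=list)

@dataclass
class LiveDocumentContent:        # one row per saved version
    document_id: int
    version: int
    content_markdown: str
    author: str                   # user name or "AI"
    created_at: datetime

@dataclass
class DocumentMessageLink:        # provenance: which message fed which document
    document_id: int
    conversation_id: int
    message_id: int
```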
FEATURE 3: Opening Live Documents from Chat
What It Is
The ability to open and view a Live Document as a side panel while you’re in any chat under the same Instance.
What It Does
A “Live Docs” icon/button in the chat UI (top bar or right sidebar) opens a panel showing either a list of all Live Documents for the Instance, or directly opens the “primary” document if one has been pinned as the default.
Intended Purpose
Users shouldn’t have to leave their current conversation to work on a document. The side panel lets them see the document alongside the chat, drag content between them, and ask the AI to update the document in context.
Why Anyone Should Care
This is what makes Live Documents “live” — they’re always one click away from any conversation. You never have to context-switch to a separate app or tab.
How It Should Be Built
Entry Point:
- “Live Docs” icon/button in the chat top bar or right sidebar
- Clicking opens a panel (right side of the screen, like an artifact/canvas panel)
- If the Instance has multiple Live Documents: show a list view first with document titles, types, and last-updated timestamps. User clicks to open one.
- If the Instance has a “primary” document pinned: open it directly
- The panel opens alongside the chat — split view: chat on the left, document editor on the right
- User can toggle between split view and full-page document view
- The document panel is a separate component that can be rendered alongside any chat
- It shares the same Instance context, so the AI knows which document is open
- The panel should support resize/collapse gestures
- Auto-save the panel state (which document was open, scroll position) so reopening the panel returns to where the user left off
FEATURE 4: Adding Messages to a Live Document
What It Is
The ability to push content from any chat message into a Live Document with one click.
What It Does
On any message (user or AI), the context menu (⋯) includes “Add to Live Document…” which opens a small modal where the user chooses: which document to add to, how to add the content (append to bottom, create new section, summarize first, extract bullet points), and optionally a section title.
Intended Purpose
This is the primary content flow: conversations generate insights, and those insights get pulled into the document. Without this feature, Live Documents would require manual typing — defeating the purpose.
Why Anyone Should Care
This is the bridge between “thinking out loud in chat” and “producing a deliverable document.” One click turns a chat message into a document section.
How It Should Be Built
User Flow:
- User is in a chat conversation
- User clicks ⋯ on any message
- User selects “Add to Live Document…”
- Small modal appears with:
- Which document: dropdown of all Live Documents in this Instance (or “Create new”)
- How to add:
  - Append to bottom — adds the raw message content at the end
  - New section titled: [___] — creates a new heading + content (title auto-detected or user-entered)
  - Summarize this message and add — AI condenses the message into a tighter summary before adding
  - Extract bullet points and add — AI pulls out key points as a bulleted list
- [Add] button
- System behavior:
- Pulls the text (or AI-processed version) into the document
- Creates a new block or appends to content_markdown
- Creates a DocumentMessageLink record with conversation_id + message_id
- Shows a toast: “Added to ‘Cognigraph PRD’ under ‘Feature C – Live Docs’”
- The “Summarize” and “Extract bullet points” options require an AI call — this should be fast (use a lightweight model or cached prompt)
- Always store the DocumentMessageLink even if the content is summarized — the user should be able to trace back to the original message
- If the user selects multiple messages (via multi-select in the chat), allow bulk-adding them to the document as a group
FEATURE 5: Editing the Document While Chatting (Dual-Stream Editing)
What It Is
Two parallel editing modes that work simultaneously: direct manual editing in the document panel, and AI-powered editing via chat commands.
What It Does
Stream 1 — Direct Manual Editing: The document panel is a rich-text/markdown editor. Users can type, format (headings, bold, bullets, links), and restructure content directly.
Stream 2 — AI Editing via Chat: When a Live Document is open in the side panel, the AI automatically has the document (or relevant sections) in its context. Users can issue commands in the chat that modify the document:
- “Update the Live Document: add a section called ‘Live Document – Editing Across Chats’ that summarizes what we just discussed.”
- “Rewrite the introduction to emphasize that live docs are cross-chat artifacts.”
- “Create a table in the doc comparing Whiteboard vs Live Document vs Tasks.”
Intended Purpose
Some edits are faster by typing directly. Others are faster by asking the AI. Supporting both means the user always has the most efficient path.
Why Anyone Should Care
This is what makes Live Documents genuinely “AI-powered” — you’re not just using a text editor, you’re collaborating with an AI that can rewrite sections, generate tables, restructure content, and improve prose on command.
How It Should Be Built
Manual Editing:
- Standard rich-text/markdown editor (consider Tiptap, Lexical, or ProseMirror for the frontend)
- Support for: headings (H1-H4), bold, italic, bullet lists, numbered lists, code blocks, tables, images, links, callout boxes
- Auto-save on every change (debounced, e.g., save after 2 seconds of inactivity)
- Each save creates a new version entry
AI Editing via Chat:
- When a Live Document panel is open, the current document content (or a relevant slice) is injected into the AI’s context for the current chat
- The AI interprets “the document” or “the live doc” as the currently open Live Document
- AI produces a patch (new content, replacement content, or structural change)
- System applies the patch to content_markdown or specific blocks
- A new version is saved automatically
- The document panel refreshes to show the change in real-time
- For the block-based model (Option B), AI edits can target specific blocks by ID or heading name
- For the simple model (Option A), AI rewrites the full markdown and the system diffs + saves
- AI edits MUST generate new versions — users must be able to undo bad AI rewrites
- Consider showing a brief diff or “AI edited these sections” indicator after an AI edit
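For the simple model (Option A), a section-targeted AI edit might be applied like this sketch: replace the body under a named heading and snapshot the pre-image as a version so the edit is revertible. The regex approach and all function names are illustrative assumptions, not the spec's method.

```python
import re

# Option A sketch: apply an AI-proposed rewrite to one section of the
# markdown blob, targeted by heading name, keeping a revertible snapshot.
def replace_section(markdown: str, heading: str, new_body: str) -> str:
    """Replace the body between `## {heading}` (any level H1-H4) and the
    next heading, leaving the heading line itself intact."""
    pattern = re.compile(
        rf"(^#{{1,4}} {re.escape(heading)}\n)(.*?)(?=^#{{1,4}} |\Z)",
        re.MULTILINE | re.DOTALL,
    )
    return pattern.sub(lambda m: m.group(1) + new_body + "\n\n", markdown, count=1)

def apply_ai_edit(doc: dict, heading: str, new_body: str, versions: list) -> None:
    # snapshot the pre-edit content first so a bad AI rewrite can be undone
    versions.append({"version": len(versions) + 1,
                     "content_markdown": doc["content_markdown"],
                     "author": "AI edit (pre-image)"})
    doc["content_markdown"] = replace_section(doc["content_markdown"], heading, new_body)
```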
FEATURE 6: Instance Dashboard Document Hub
What It Is
A “Documents” tab in the Instance Dashboard that shows all Live Documents for the Instance, with management actions.
What It Does
Shows a table/grid of all Live Documents with columns: Title, Type (PRD, Spec, Meeting Notes, Presentation Outline, etc.), Status (Draft, In Progress, Review, Final), Last edited (time + by whom), Linked chats count.
Intended Purpose
Gives users a bird’s-eye view of all documentation for the Instance, separate from the chat interface. This is where users go to manage, organize, and open documents when they’re not in a specific chat.
Why Anyone Should Care
Sometimes you just want to see “what documents exist for this project” without opening any chat. This is the document management hub.
How It Should Be Built
List View:
- Table with sortable columns: Title, Type, Status, Last Edited, Linked Chats
- Actions per row: Open, Duplicate, Archive, Delete
Opening a document:
- Opens a full-page editor (more space than the in-chat side panel)
- Document outline sidebar on the left (table of contents based on headings)
- Editor in the center
- “Linked Conversations” panel showing which chats contributed content
- Export options accessible from the top bar
Creating a document:
- “New Live Document” button
- Choose: Title, Type (document, presentation outline, etc.), initial template (blank, PRD template, spec template, etc.)
- Document is immediately available in all chats within the Instance
FEATURE 7: Document Chat (Talking to the Document from the Dashboard)
What It Is
A small chat panel anchored to a Live Document when opened from the Instance Dashboard, where all AI prompts are implicitly about “this document.”
What It Does
Users can issue commands like:
- “Tighten up the wording in section 3.2.”
- “Add an executive summary at the top.”
- “Generate slide titles from each H2 and add a ‘Presentation Outline’ section at the bottom.”
- “Insert a risk table.”
- “Summarize key decisions in a table.”
Intended Purpose
When working on a document from the Dashboard (not from within a specific chat), users still need AI assistance. The Document Chat provides that without requiring the user to navigate to a chat first.
Why Anyone Should Care
This turns the document editor from a passive text editor into an active AI collaboration surface. You can sit in the document and refine it endlessly without switching contexts.
How It Should Be Built
- Small chat input bar at the bottom of the document editor (or collapsible chat panel on the side)
- All prompts automatically include the document content as context
- AI responses are applied as document edits (not shown as chat messages — though a brief “Edit applied” confirmation is appropriate)
- Each AI edit creates a new version
- The chat history here is ephemeral (or optionally saved as “Document Edit History”)
FEATURE 8: Export System
What It Is
The ability to export Live Documents to external formats and platforms.
What It Does
Provides multiple export targets and format options for turning the document into a deliverable that can be shared with clients, teams, or stakeholders.
Intended Purpose
Live Documents are internal working artifacts. Exports turn them into polished, shareable deliverables. This is the “last mile” that replaces the Google Docs copy-paste workflow.
Why Anyone Should Care
A document that can’t be exported is trapped in the platform. Export capability makes Live Documents the actual production tool for real deliverables, not just a fancy note-taking feature.
How It Should Be Built
Export Targets:
- Google Docs
- Use the Google Docs API to create a new document and push structured content (headings, lists, tables, images)
- Optionally store the Google Doc URL back on the LiveDocument record for quick access
- Requires OAuth connection to Google (user authenticates once)
- PDF
- Render the markdown/blocks to HTML, then convert to PDF (server-side rendering using Puppeteer, wkhtmltopdf, or similar)
- Apply branding options (header logo, company name, footer text)
- Apply style preset (Simple PRD, Formal Spec, etc.)
- Presentation Format (PowerPoint / Google Slides)
- Map each H1 or H2 heading to a slide
- Use the first paragraph/bullets under each heading as slide body
- AI can propose speaker notes for each slide
- Export as .pptx or push to Google Slides via API
- Markdown Download
- Raw markdown file download for developers or users who want to import into other tools
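The heading-to-slide mapping for the presentation export can be sketched as a simple pass over the markdown. This is an illustrative assumption about the mapping step only; the real export would then feed these slide dicts into a .pptx library or the Google Slides API.

```python
# Sketch of the presentation-export mapping: each H2 becomes a slide title,
# and the non-empty lines beneath it become the slide body.
def markdown_to_slides(markdown: str) -> list[dict]:
    slides, current = [], None
    for line in markdown.splitlines():
        if line.startswith("## "):
            current = {"title": line[3:].strip(), "body": []}
            slides.append(current)
        elif current is not None and line.strip():
            current["body"].append(line.strip())  # paragraph/bullets under the heading
    return slides
```

AI-proposed speaker notes would be generated per slide dict after this step, before rendering.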
FEATURE 9: Layout & Branding Options
What It Is
Document-level settings for controlling the visual appearance of exports.
What It Does
Provides branding and style controls that are applied when the document is exported (not necessarily in the editor itself, though the editor could preview them).
Intended Purpose
Professional deliverables need to look professional. Branding options mean users can produce client-ready documents without post-processing in another tool.
How It Should Be Built
Branding Options (per document or per Instance):
- Header logo: upload an image
- Company name: text field
- Footer text: customizable (e.g., “Confidential – Oxford Pierpont / aiConnected”)
Style Presets:
- “Simple PRD” — clean, minimal formatting
- “Formal Spec” — more structured, section numbering
- “Presentation Outline” — slide-friendly formatting
- Custom presets can be created later
FEATURE 10: Rich Content Support
What It Is
Support for non-text content within the document editor.
What It Does
Allows embedding tables, images, code blocks, and callout boxes directly within Live Documents.
Intended Purpose
Real documents aren’t just paragraphs. PRDs have tables comparing features. Specs have code blocks. Business plans have images and callouts. Rich content support makes Live Documents capable of producing professional, complete documents.
How It Should Be Built
- Tables — insertable via toolbar or AI command (“create a comparison table”)
- Images — embed from upload, from Files Space, or from AI-generated diagrams
- Code blocks — syntax-highlighted, language-selectable
- Callout boxes — styled blocks for Notes, Risks, Decisions, Warnings (visually distinct from body text)
FEATURE 11: Version History
What It Is
A complete history of every change made to the document, with the ability to view and restore previous versions.
What It Does
Shows a timeline of all versions with: version number, timestamp, who made the change (user or AI), and a brief description. Users can view any previous version and restore it if needed.
Intended Purpose
AI edits can sometimes produce bad results. Manual edits can sometimes break things. Version history is the safety net that makes both kinds of editing risk-free.
Why Anyone Should Care
Without version history, users would be afraid to let the AI edit their documents — one bad rewrite could destroy hours of work. Version history removes that fear.
How It Should Be Built
- Every save (manual or AI) creates a new version entry in LiveDocumentContent
- “Show previous versions” button in the editor opens a version list
- Each version shows: version number, timestamp, author (user name or “AI”), diff summary
- “Preview” opens a read-only view of that version
- “Restore” replaces the current content with the selected version (and creates a new version entry for the restoration)
- Version storage can use full snapshots (simple) or diffs (storage-efficient but more complex)
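The full-snapshot option can be sketched like this. The class and field names are illustrative assumptions; the one behavior taken from the spec is that restoring creates a new version entry rather than rewriting history.

```python
# Full-snapshot versioning sketch (the "simple" storage option): every save
# appends a snapshot; restore copies an old snapshot forward as a NEW version.
class VersionedDocument:
    def __init__(self, content: str = ""):
        self.content = content
        self.versions = []          # list of (version_number, author, content)
        self.save("user")           # initial content is version 1

    def save(self, author: str):
        self.versions.append((len(self.versions) + 1, author, self.content))

    def restore(self, version_number: int):
        _, _, old_content = self.versions[version_number - 1]
        self.content = old_content
        self.save("restore")        # the restoration itself is a new version
```

Diff-based storage would change only `save`/`restore` internals; the append-only version list is the invariant worth keeping either way.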
FEATURE 12: Relationship to Whiteboard
What It Is
The defined boundary and bridge between Live Documents (linear, structured) and the Whiteboard (nonlinear, spatial).
What It Does
Establishes clear use cases for each and defines future bridge actions between them.
Intended Purpose
Users need to understand when to use the Whiteboard vs. when to use a Live Document. They also need the ability to move content between them.
Key Distinctions
- Whiteboard: nonlinear, spatial layout, great for brainstorming, clustering, concept mapping. Think Miro/Excalidraw.
- Live Document: linear narrative, organized spec/plan/write-up, ready to send to others as “official” docs. Think Google Docs.
Future Bridge Actions (not v1, but plan for them):
- From Whiteboard → “Generate Document from selected items” (AI reads selected nodes and produces a structured document)
- From Document → “Send this section to whiteboard as sticky notes” (breaks a section into visual nodes on the canvas)
FEATURE 13: Relationship to Tasks
What It Is
The integration between Live Documents and the Task system.What It Does
Allows creating tasks from highlighted text within a document, with the task storing a reference back to the specific document and block/section.Intended Purpose
Documentation often reveals action items: “we need to research this,” “this section needs data,” “someone should validate this assumption.” Creating tasks from within the document keeps action items tied to their context.How It Should Be Built
- Highlight text in the document → context menu shows “Create Task”
- Task stores: source_type = "document", document_id, block_id (if using block model)
- In the Tasks panel, clicking the source badge opens the document and scrolls to the relevant section
FEATURE 14: Conversation Referencing & Linking
What It Is
Bidirectional links between Live Documents and the conversations that contributed to them.
What It Does
- In a chat that has contributed content to a document: shows “This conversation is linked to Documents: [Cognigraph PRD]”
- In the document: shows “Linked Chats: [Chat A], [Chat B], [Chat C]” with clickable links
Intended Purpose
Users need to trace the provenance of document content back to the original conversations, and from conversations forward to the documents they produced.
Why Anyone Should Care
When reviewing a document section months later and wondering “why did we decide this?”, the linked conversation takes you directly to the original discussion.
How It Should Be Built
- DocumentMessageLink table tracks all message→document contributions
- Query this table to produce:
  - Per-document: list of unique conversation_id values → “Linked Chats”
  - Per-conversation: list of unique document_id values → “Linked Documents”
- Display as clickable badges/links in both the chat UI and the document UI
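The two queries above can be sketched over DocumentMessageLink records (illustrative dict rows), deduplicating while preserving first-seen order for display:

```python
# Provenance queries over DocumentMessageLink records. dict.fromkeys
# deduplicates while keeping first-seen order for the badge lists.
def linked_chats(links: list[dict], document_id: int) -> list[int]:
    seen = dict.fromkeys(
        l["conversation_id"] for l in links if l["document_id"] == document_id
    )
    return list(seen)

def linked_documents(links: list[dict], conversation_id: int) -> list[int]:
    seen = dict.fromkeys(
        l["document_id"] for l in links if l["conversation_id"] == conversation_id
    )
    return list(seen)
```

In SQL these are two `SELECT DISTINCT` queries over the same table, which is why storing the links correctly in v1 is all the groundwork this feature needs.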
FEATURE 15: Collaboration & Multi-Edit Handling
What It Is
Foundational support for multiple editors working on the same document, even though v1 is single-user.
What It Does
Implements auto-save, version history, and optional soft-locking so the system is ready for multi-user editing later.
Intended Purpose
Even in single-user mode, the “user” and the “AI” are effectively two editors. Auto-save and versioning prevent conflicts and data loss. Building with multi-user in mind means less refactoring later.
How It Should Be Built
v1 (single user + AI):
- Auto-save every N seconds or on change (debounced)
- Version history on every save
- AI edits clearly marked in version history
Later (multi-user):
- Soft-locking: “Bob is editing this document” banner
- Conflict resolution: last-write-wins with version history as the safety net
- Eventually: operational transforms (OT) or CRDTs for real-time collaborative editing (like Google Docs)
FEATURE 16: MVP vs Extended Scope
What It Is
A clear delineation of what to build first vs. what to build later.
MVP (Build First)
- Per-Instance Live Documents table/list
- Basic text/markdown editor (not block-based yet)
- In-chat: “Live Docs” panel to open a document alongside the chat
- In-chat: “Add to Live Document…” action on messages (append + optional summarize)
- AI editing: “Update the live document…” commands that append new sections or rewrite specific sections by heading name
- Export: Markdown download + PDF export
- Simple version history (view + restore)
Extended (Build Later)
- Block-based content model with precise AI editing per block
- Presentation export (PowerPoint / Google Slides)
- Google Docs sync (push to Docs, store URL back)
- Whiteboard ↔ Live Document bridges
- Task creation from highlighted document content
- Rich branding/layout options for exports
- Fine-grained permissions and multi-user collaborative editing
- Document templates (PRD template, Spec template, etc.)
Key Implementation Principles
- Instance-scoped, chat-agnostic — Live Documents belong to the Instance. Any chat in the Instance can open and edit them. Never tie a document to a single chat.
- Source traceability is non-negotiable — always store DocumentMessageLink records so every piece of content can be traced back to its origin conversation and message.
- Version everything — every human and AI edit creates a version. This is the safety net for the entire feature.
- Start with markdown, plan for blocks — v1 stores content as a single markdown blob. But design the schema and API so migrating to a block model later is straightforward.
- The document is an AI context — when a Live Document is open, the AI should have its content (or relevant sections) in context. This is what enables natural-language document editing.
- Export is the payoff — Live Documents only matter because they can be exported as real deliverables. If the export system is bad, the whole feature feels pointless. Invest in clean PDF and Google Docs export from day 1.
- Two editing streams, one document — manual editing and AI editing coexist on the same document. Both create versions. Neither should block the other.
Document 4: Folder System Design — Complete Feature Breakdown
For Junior Developers New to the aiConnected OS Project
What This Document Covers
This document defines the Folder System — an optional organizational layer within Instances that lets users group chats, files, and content into named sub-domains, each with their own instructions, persona defaults, and behavioral settings. Folders sit between the Instance level and the individual Chat level in the hierarchy, and they share the Instance’s memory while providing specialized context.
Context: The Problem Folders Solve
Imagine you’re working on a large project like “aiConnected.” Over time, you accumulate dozens of conversations: some about UI design, some about hiring, some about marketing, some about the technical architecture. Without folders, all these chats live in one flat list inside the Instance. You can’t separate them, you can’t give them different instructions, and you can’t quickly filter to “show me only UI conversations.” Folders solve this by creating sub-domains within an Instance — like departments within a company. Each folder can have its own behavioral rules, but they all share the same underlying memory and knowledge.
Context: Where Folders Fit in the Hierarchy
FEATURE 1: What a Folder Actually Is
What It Is
A named container within an Instance that holds chats and files. Each folder can have its own settings, instructions, default persona, and default model — essentially everything an Instance has EXCEPT a Whiteboard.
What It Does
Groups related conversations and files together, applies folder-specific behavioral rules to conversations within it, and provides organizational structure for large projects.
Intended Purpose
Lets users separate different workstreams within a single project without creating entirely separate Instances. A user working on “aiConnected” can have a folder for UI design, a folder for hiring, and a folder for marketing — all sharing the same project memory but with different AI behavioral instructions.
Why Anyone Should Care
Without folders, large projects become unmanageable. 50+ conversations in a flat list is chaos. Folders bring order without sacrificing the unified memory that makes an Instance powerful.
How It Should Be Built
Folder Properties:
- ❌ No folder-level Whiteboard (the Whiteboard stays one per Instance, above everything)
- ❌ No separate memory space (folders share the Instance’s memory)
FEATURE 2: Folders Are Strictly Optional
What It Is
A core design principle: folders are never required. Users can use folders, not use them, or use a mix.
What It Does
Ensures that users who don’t want organizational overhead can ignore folders entirely and still have a fully functional experience.
Intended Purpose
Prevents the platform from feeling like a project management tool. Casual users should never be forced to create folders. Power users who need organization can opt in.
Why Anyone Should Care
Many AI chat platforms fail because they impose structure on users who just want to talk. By making folders optional, aiConnected works for both casual users and power users.
How It Should Be Built
Under the hood:
- Every chat has an optional folder_id field
- If folder_id = null → the chat lives at the “root” of the Instance (called “No Folder”)
- If folder_id = some_id → the chat lives inside that folder
This gives three usage modes:
- Only chats, no folders — everything lives at root. The Instance feels like a simple chat list.
- Only folders — every chat is organized into a folder. The Instance feels like a project with departments.
- A mix — some chats in folders, some loose at root. The most common real-world usage.
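Since folder_id is the only structural field, the sidebar grouping falls out of a single pass over the chat list. A sketch, with illustrative types:

```typescript
// Illustrative chat shape: folderId is null for root-level ("No Folder") chats.
interface Chat {
  id: string;
  title: string;
  folderId: string | null;
}

// Groups an Instance's chats into sidebar sections; null folderId maps to the
// "No Folder" root section. Insertion order of chats is preserved per group.
function groupByFolder(chats: Chat[]): Map<string, Chat[]> {
  const groups = new Map<string, Chat[]>();
  for (const chat of chats) {
    const key = chat.folderId ?? "No Folder";
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key)!.push(chat);
  }
  return groups;
}
```

A user who never creates folders simply ends up with one "No Folder" group, which matches the principle that folders add no overhead when unused.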
FEATURE 3: Instruction & Context Inheritance (The Stacked Instructions Model)
What It Is
A layered system where AI behavioral instructions cascade from platform level down through Instance, Folder, and Chat levels, with each layer able to extend or override the one above it.What It Does
When the AI responds to a message inside a chat, it assembles its behavioral instructions by stacking multiple layers:
- Global system / platform rules (safety, core behavior) — always present
- Instance-level instructions (e.g., “You are working on aiConnected, an AI automation marketplace…”)
- Folder-level instructions (e.g., “In this folder, prioritize UX clarity and React/Tailwind patterns…”)
- Chat-level instructions (e.g., “In this chat, we are only working on the persona dropdown behaviors”)
- Message-level modifiers (e.g., “Right now, think like a skeptical investor”)
Intended Purpose
This is how folders avoid “tainting” each other. The UI folder has different instructions than the Hiring folder, even though they’re in the same Instance. Each folder specializes the AI’s behavior for its domain.
Why Anyone Should Care
This is the core value proposition of folders. Without instruction inheritance, folders would just be visual grouping — nice but not powerful. With it, each folder genuinely changes how the AI behaves, making it more useful for that specific workstream.
How It Should Be Built
For root-level chats (no folder), the folder layer is simply skipped. Layering rules:
- Lower layers can extend or override higher layers on specific fields
- Example: Instance says “Talk in warm, professional tone.” Folder says “In this folder, be more technical and concise.” Result: technical + concise wins within that folder.
- If a lower layer doesn’t specify something, the higher layer’s value is inherited
- Accessible from chat settings or a debug/transparency view
- Shows which instructions are active at each layer
- Shows what’s being overridden (e.g., tone, priority, tools)
- Helps power users understand and debug AI behavior
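One way to sketch the stacking behavior, assuming instructions are structured fields rather than free text. The field names (`tone`, `focus`, `extra`) are invented for illustration:

```typescript
// Illustrative instruction layer: each field is optional so a layer can
// specify only what it wants to change.
interface InstructionLayer {
  tone?: string;
  focus?: string;
  extra?: string[];
}

// Layers are passed top-down (Platform, Instance, Folder, Chat, Message).
// Later layers override specified scalar fields and extend `extra`;
// unspecified fields inherit from the layers above.
function stackInstructions(layers: InstructionLayer[]): Required<InstructionLayer> {
  const result: Required<InstructionLayer> = { tone: "", focus: "", extra: [] };
  for (const layer of layers) {
    if (layer.tone !== undefined) result.tone = layer.tone;
    if (layer.focus !== undefined) result.focus = layer.focus;
    if (layer.extra) result.extra = [...result.extra, ...layer.extra];
  }
  return result;
}
```

This mirrors the example in the text: an Instance tone of “warm, professional” is overridden by a folder that specifies “technical and concise,” while anything the folder leaves unspecified is inherited.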
FEATURE 4: Memory & Retrieval Across Folders
What It Is
The rules for how the AI’s memory (knowledge retrieval) works when the user is inside a folder.
What It Does
Folders do NOT wall off memory. The AI can still access knowledge from any chat in the Instance, regardless of which folder it’s in. However, it biases retrieval toward the current folder first.
Intended Purpose
Users expect that being in the “UI” folder doesn’t make the AI forget about decisions made in the “Marketing” folder. Memory is Instance-wide. Folders only change which memories are looked at first, not which memories are accessible.
Why Anyone Should Care
If folders created memory silos, they would break the “unified cognition” that makes Instances powerful. The bias-not-wall approach preserves the value of having everything in one Instance while still making folder-scoped conversations more relevant.
How It Should Be Built
Retrieval logic (priority order):
- Prioritize: Chats + artifacts in the current folder first
- Expand: If relevant info isn’t found locally, automatically widen search to all folders within the same Instance
- Mark the origin: When citing past work, show where it came from:
  - “Found related spec in: aiConnected → Marketing → GTM Narrative v1”
- Index all chats/messages at the Instance level in the vector store / knowledge graph
- Use folder_id as a boosting factor when scoring relevance (not a filter)
- This means folder context is always preferred, but Instance-wide knowledge is never excluded
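The boost-not-filter rule can be sketched as a scoring adjustment at ranking time. The boost constant here is an arbitrary placeholder, not a tuned value:

```typescript
// Illustrative retrieval candidate: similarity comes from the vector store.
interface Candidate {
  id: string;
  folderId: string | null;
  similarity: number;
}

// Placeholder boost; a real value would come from evaluation, not this sketch.
const FOLDER_BOOST = 1.25;

function score(c: Candidate, currentFolderId: string | null): number {
  // Boost, never filter: out-of-folder items keep their base score
  // and remain fully eligible.
  return c.similarity * (c.folderId === currentFolderId ? FOLDER_BOOST : 1);
}

function rankCandidates(candidates: Candidate[], currentFolderId: string | null): Candidate[] {
  return [...candidates].sort((a, b) => score(b, currentFolderId) - score(a, currentFolderId));
}
```

Note that a sufficiently relevant item from another folder can still outrank an in-folder item, which is exactly the intended behavior.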
FEATURE 5: Sidebar UI & Navigation
What It Is
How folders appear in the Instance’s sidebar navigation and how users interact with them.
What It Does
Shows the folder hierarchy in the left sidebar of the Instance view, with collapsible folder sections, chat lists under each folder, and a “No Folder” section for root-level chats.
Intended Purpose
Makes folder navigation feel natural and lightweight — similar to a file explorer, but without the heaviness of a project management tool.
How It Should Be Built
Sidebar layout within an Instance:
- “Move to folder…”
- “Link to other chat…”
- “Add to whiteboard”
- “Edit folder settings”
- “Duplicate folder settings to another folder”
- “Create chat with these folder defaults”
- From any chat, user can “Link existing chat…” → search any chat in the Instance → link it
- UI surfaces: “Linked: [Market Research – ICP] (Marketing folder)”
- User can click the link to jump to that chat and come back
FEATURE 6: Whiteboard Integration with Folders
What It Is
How the single Instance-level Whiteboard interacts with folder-organized content.
What It Does
The Whiteboard remains one per Instance (no folder-level whiteboards), but every item pinned to the Whiteboard carries metadata about which folder (and chat) it came from. The Whiteboard supports filtering by folder origin.
Intended Purpose
Lets users see “just the UI stuff” on the Whiteboard without drowning in marketing or hiring content, while still maintaining one unified canvas.
How It Should Be Built
Every whiteboard item stores origin metadata (the folder and chat it came from). Filter options:
- By folder: “Show only items from User Interface & UX”
- By multiple folders: “Show items from UI and Cognigraph”
- All: “Show everything” (default)
If a chat is later moved to a different folder, its whiteboard items keep their original origin_chat_id and simply update the displayed folder context.
FEATURE 7: Moving Chats In and Out of Folders
What It Is
The ability to move individual chats between folders, or between a folder and root level.
What It Does
Changes a chat’s folder_id, which affects which folder-level instructions apply to future messages. Moving is non-destructive — no content is lost, no memories are deleted.
Intended Purpose
Users change their minds. A chat that started as a general brainstorm might later clearly belong in the “UI” folder. Moving should be trivial.
Why Anyone Should Care
If moving chats between folders is hard, users won’t organize at all. It needs to be as easy as drag-and-drop or a single menu action.
How It Should Be Built
From any chat’s context menu:
- “Move to folder…” → folder picker (searchable list + “No Folder” option + “Create new folder”)
- “Remove from folder (send to No Folder)”
- Moving INTO a folder: future turns inherit the folder’s instructions
- Moving OUT of a folder: future turns lose the folder’s instructions, revert to Instance-only
- Past messages are NOT affected (they were generated under the old instructions)
Under the hood, the move is just an update to chat.folder_id. No content migration needed.
FEATURE 8: New Chat Creation Flow
What It Is
How the “New Chat” button works in the context of folders.
What It Does
When creating a new chat inside an Instance, the user can choose where it lives: in the currently selected folder, in a different folder, or at root (No Folder).
Intended Purpose
Makes chat creation context-aware without being burdensome. If you’re browsing the “UI” folder and click “New Chat,” it defaults to creating in that folder.
How It Should Be Built
When clicking “New Chat” in an Instance:
- If user is currently viewing a specific folder: new chat defaults to that folder
- If user is viewing “All Chats” or “No Folder”: new chat defaults to root
- A small dropdown or toggle lets the user choose a different location before creating:
- “No Folder”
- “User Interface & UX”
- “Hiring & Teams”
- etc.
FEATURE 9: Bulk Move — Multi-Select Chats & Files to Folders
What It Is
The ability to select multiple chats or files at once and move them to an existing folder or a newly created folder in one operation.
What It Does
Enters a “selection mode” where checkboxes appear on each item. Users select items, click “Move,” and choose a destination (existing folder, new folder, or root). All selected items are moved in one operation.
Intended Purpose
When a user decides to organize 15 scattered chats into a new “Cognigraph” folder, they shouldn’t have to move them one at a time. Bulk move makes large-scale organization fast.
Why Anyone Should Care
Without bulk move, folder adoption will be low. Users will think “it’s too tedious to organize” and give up. Bulk move makes organization effortless.
How It Should Be Built
Selection Mode:
- User clicks “Select” / “Manage” button in the chat list or file list
- Checkboxes appear on every row
- User can: click individual checkboxes, Shift-click to select a range, “Select all” for current filtered view
- A bulk action bar appears (sticky bottom bar): Selected count | Move | Delete | Cancel
- User clicks “Move” in the bulk action bar
- Modal opens: “Move items”
- Step 1: Choose destination type:
- “Existing folder” — shows searchable dropdown of folders + “No Folder (root)”
- “New folder…” — expands to show: Folder name, Optional description, Optional advanced settings (default persona, default model)
- Step 2: Confirm
- “Move 12 chats” button
- System moves all items atomically (all succeed or none succeed)
- Toast notification: “Moved 12 chats to ‘Cognigraph Architecture’” with a clickable link to that folder
- Folder sidebar updates with new count
- Optional “Undo” button in the toast
- Items from different folders can be selected and moved together — the move just reassigns all their folder_id values
- Moving chats does NOT remove their whiteboard items — whiteboard items keep their original origin_chat_id
- Moving chats from root to a folder, folder to root, or folder A to folder B all use the same flow
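A sketch of the validate-then-mutate shape of the atomic bulk move. In production this would be a single database transaction; the names here are illustrative:

```typescript
// Illustrative chat row: only folder_id changes on a move.
interface ChatRow {
  id: string;
  folderId: string | null;
}

// All-or-nothing bulk move: validate every id before touching anything,
// so a bad id leaves the state completely unchanged (no partial moves).
function bulkMove(
  chats: Map<string, ChatRow>,
  ids: string[],
  targetFolderId: string | null,
): void {
  // Validation phase: fail before mutating.
  for (const id of ids) {
    if (!chats.has(id)) throw new Error(`Unknown chat: ${id}`);
  }
  // Mutation phase: reassign folder_id only; content, memory, and
  // whiteboard links are untouched.
  for (const id of ids) {
    chats.get(id)!.folderId = targetFolderId;
  }
}
```

Passing `null` as the target models “send to No Folder (root),” so all three move directions use the same code path.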
FEATURE 10: Search, Filtering, and Cross-Folder References
What It Is
How search, filtering, and chat linking work in the context of folders.
What It Does
Ensures that folders enhance organization without breaking discoverability. Search runs across all folders by default, with optional folder-scoped filtering. Chat links work across folder boundaries.
Intended Purpose
Folders should never hide content. Users should be able to find anything in the Instance regardless of which folder it’s in, and link conversations across folders freely.
How It Should Be Built
Search / Retrieval:
- Default: searches all chats in the Instance regardless of folder
- Filter options: by entire Instance, by specific folder, by “No Folder” only
- One per Instance
- Items can come from folder chats or root chats
- Filter whiteboard items by origin folder or show everything
- Cross-folder linking is fully supported
- Example: link a root-level brainstorm chat to a formal spec in the UI folder
- UI shows: “Linked: [Initial brainstorm] (No Folder)” / “Linked: [UI State Machine Spec] (User Interface & UX)”
- Folder boundaries do NOT restrict linking
FEATURE 11: Data Model & Architecture
What It Is
The database schema and API structure for the folder system.
How It Should Be Built
Database Tables:
FEATURE 12: Real-World Usage Examples
What It Is
Concrete examples showing how folders work in practice, to help developers understand the intended user experience.
Example 1: “User Interface & UX” Folder
Folder instructions:
- “Prioritize UX clarity, React/Tailwind patterns, coherence of chat + dashboard.”
- “Avoid deep dives into sales comp models unless explicitly asked.”
Example 2: “Hiring & Teams” Folder
Folder instructions:
- “Prioritize role definitions, compensation design, and scaling sales teams.”
- “Don’t drift into UI details; keep it people/process focused.”
FEATURE 13: Design Principle (For the PRD)
What It Is
The formal design principle that should be included in any PRD or technical spec to prevent misinterpretation during implementation.
The Principle
Folders are an optional organizational layer within an instance.
- Chats MAY be assigned to a folder, but are not required to be.
- Chats with no folder assignment are treated as root-level “No Folder” chats.
- All chats in an instance share the same memory space, regardless of folder, with retrieval optionally biased toward the current folder but never restricted to it.
- Folder-level instructions apply only to chats inside that folder and never to root chats.
- Users can go full folders, no folders, or hybrid, and the cognition still behaves like one unified brain for the instance.
Key Implementation Principles
- Folders are optional, never mandatory — a user who never creates a folder should have a perfectly clean experience with no folder UI clutter.
- Memory is Instance-wide, not folder-scoped — folders bias retrieval but never wall off knowledge. The AI in the UI folder can still access marketing decisions.
- Instruction inheritance is the power feature — Platform → Instance → Folder → Chat. Each layer extends or overrides the one above. This is what makes folders genuinely useful, not just visual grouping.
- Moving is cheap and non-destructive — changing a chat’s folder_id changes its future instructions but preserves all content, memory, and whiteboard links.
- Bulk move is essential for adoption — if moving items one-at-a-time is the only option, users won’t organize. Multi-select + move is a must-have for v1.
- Atomic operations — “create folder + move items” is one operation. No partial states.
- Cross-folder linking is unrestricted — folder boundaries never prevent linking, referencing, or searching across the Instance.
Document 6: Chat Filters & Linked Conversations
Junior Developer Breakdown
Source: 6. aiConnected OS Chat filters and linked conversations.md
Purpose: In-chat filtering and conversation relationship system enabling users to navigate long conversations efficiently and maintain connections between related chats when topics branch.
Problems Solved:
- Scroll collapse in long conversations
- Lost context when topics branch
- Disconnected conversation threads
- No way to find specific content types within a chat
FEATURE 1: Multi-Select Filter Bar
What it does: Top-of-chat pill-style toggle chips for filtering visible messages.
Filter Chips:
- All — mutually exclusive with other chips; shows everything
- Sent — user’s messages only
- Received — AI’s messages only
- Pinned — only pinned messages
- Links — messages containing URLs
- Media — messages with attachments (images, audio, video, files)
- Search — opens inline search field
- Sent/Received/Pinned/Links/Media are multi-select (AND logic)
- When any chip selected, “All” turns off
- Examples:
  - Sent + Pinned → only user’s pinned messages
  - Received + Links → only AI messages containing URLs
  - Pinned + Links + Media → messages that are pinned AND contain links AND have media
- Horizontally scrollable on mobile
- Chips should be visually distinct when active vs inactive
- “All” resets everything when clicked
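The AND semantics of the chips can be sketched as a single predicate per message. Field names follow the metadata described in Feature 2; the shapes are illustrative:

```typescript
// Illustrative message shape with the filterable metadata from Feature 2.
interface Msg {
  role: "user" | "assistant";
  isPinned: boolean;
  hasLinks: boolean;
  hasMedia: boolean;
}

interface Chips {
  sent: boolean;
  received: boolean;
  pinned: boolean;
  links: boolean;
  media: boolean;
}

// Active chips combine with AND: a message must satisfy every selected chip.
// With no chips active, everything passes (the "All" state).
function matchesChips(m: Msg, c: Chips): boolean {
  if (c.sent && m.role !== "user") return false;
  if (c.received && m.role !== "assistant") return false;
  if (c.pinned && !m.isPinned) return false;
  if (c.links && !m.hasLinks) return false;
  if (c.media && !m.hasMedia) return false;
  return true;
}
```

The visible message list is then just `messages.filter(m => matchesChips(m, chips))`, which keeps filtering a non-destructive view over the same data.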
FEATURE 2: Message Metadata for Filtering
What it does: Extends the ChatMessage model with filterable metadata fields.
Data Model Extensions:
- hasLinks can be computed by scanning content for URL patterns
- hasMedia derived from attachment metadata
- Can be computed on-the-fly or persisted for performance in long threads
- Enables fast filtering without scanning full message content each time
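A sketch of how the two metadata fields might be computed, with a deliberately simple URL pattern as a placeholder:

```typescript
// Placeholder URL pattern; a production matcher would be more careful
// about trailing punctuation, markdown syntax, etc.
const URL_PATTERN = /https?:\/\/\S+/i;

function computeHasLinks(content: string): boolean {
  return URL_PATTERN.test(content);
}

// hasMedia is derived from attachment metadata: any attachment counts.
function computeHasMedia(attachments: unknown[]): boolean {
  return attachments.length > 0;
}
```

Whether these run on write (persisted columns) or on read (computed per render) is the performance trade-off the bullet above describes.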
FEATURE 3: Search Integration
What it does: Inline search field that narrows results within the currently filtered set.
Search Pipeline:
- All messages → apply filter chips → apply search query
- Case-insensitive substring match against message content
- Optionally matches filenames and alt text
- Search acts as further narrowing on already-filtered set
- Can search within Pinned only, within Sent only, etc.
- Clearing search returns to filter-chip result
- Closing search clears query and returns to normal filtered view
- Search field appears inline when Search toggle clicked (not as modal)
FEATURE 4: Filter State Management
What it does: Client-side state model that controls the filter pipeline.
State Model:
- “All” button sets mode='all' and all chip booleans to false
- Clicking any chip sets mode='custom' and “All” visual state turns off
- Failsafe: if all chips false and search empty in custom mode, revert to mode='all' to prevent empty view
Filter pipeline order:
- Apply role filters (sent/received)
- Apply metadata filters (pinned/links/media)
- Apply search narrowing
- All filter state is client-side for instant response
- Filters are non-destructive views over same conversation data
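The state model and its failsafe can be sketched as a pure reducer; the shape is illustrative:

```typescript
// Illustrative filter state; real chip names come from the filter bar.
interface FilterState {
  mode: "all" | "custom";
  chips: Record<string, boolean>;
  search: string;
}

// Toggling a chip flips its boolean, then applies the failsafe:
// if nothing is active and search is empty, revert to "all" so the
// user never sees an empty view.
function toggleChip(state: FilterState, chip: string): FilterState {
  const chips = { ...state.chips, [chip]: !state.chips[chip] };
  const anyActive = Object.values(chips).some(Boolean);
  const mode: FilterState["mode"] = anyActive || state.search !== "" ? "custom" : "all";
  return { mode, chips, search: state.search };
}
```

Keeping this as a pure function makes the instant client-side response trivial to test and to wire into any state library.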
FEATURE 5: Linked Conversations (Conversation Graph)
What it does: Creates navigable relationships between related chats when users branch conversations.
Trigger Actions:
- “Move to new chat” (with selected messages)
- “Start new chat from selection”
- Every chat = node in a graph
- Every branch = link (edge) between nodes
- Enables navigation between related conversations while maintaining clean topic separation
- Links are bidirectional — both chats know about the relationship
FEATURE 6: Branch Indicators and Navigation
What it does: Visual indicators showing where conversations branched and how to navigate between them.
In Original Chat:
- Selected messages that spawned new chat get subtle link indicator icon
- Tooltip: “Branched chat: [name]”
- Clicking navigates to the branched chat
In Branched Chat:
- Banner at top: “Branched from ‘[original chat name]’ based on N messages”
- [View in original chat] button
- Clicking highlights origin messages in original chat
FEATURE 7: Linked Conversations Menu
What it does: Chat header menu showing the full relationship tree for a conversation.
Menu Shows:
- Parent chat (if branched from another)
- Child chats (if others branched from this one)
- Sibling chats (other branches from same parent)
Each entry shows:
- Chat name
- Branch date
- Origin message count
- Click any entry → navigate to that chat
- Supports conversation chains: Chat A → Chat B → Chat C
- From Chat C, user can see parent (B) and grandparent (A) as “related via chain”
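Deriving the menu entries from stored links might look like this. The `Link` shape is an illustrative stand-in for ConversationLink:

```typescript
// Illustrative stand-in for ConversationLink: parent branched into child.
interface Link {
  parentId: string;
  childId: string;
}

// Builds the "Linked conversations" menu for one chat:
// parent (if branched from another), children (branches from this chat),
// and siblings (other branches from the same parent).
function relatives(links: Link[], chatId: string) {
  const parent = links.find(l => l.childId === chatId)?.parentId ?? null;
  const children = links.filter(l => l.parentId === chatId).map(l => l.childId);
  const siblings = parent === null
    ? []
    : links.filter(l => l.parentId === parent && l.childId !== chatId).map(l => l.childId);
  return { parent, children, siblings };
}
```

Walking `parent` repeatedly yields the grandparent chain (A → B → C), which is how the “related via chain” view could be assembled.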
FEATURE 8: Bulk Operations with Filters
What it does: Enables moving entire filtered message sets to new chats or Workspace.
Action: “Move visible messages to new chat” or “Move visible to Workspace”
Flow:
- User applies filters (e.g., Pinned + Received + Search="Cognigraph")
- Clicks “Move visible messages to new chat”
- System receives messageIds[] list (the visible filtered set)
- Creates new chat or Workspace components
- Establishes ConversationLink with those specific message IDs as origin context
API Endpoints
| Method | Endpoint | Purpose |
|---|---|---|
| GET | /chats/:chatId/messages?filters={...}&search={query} | Filtered message retrieval |
| POST | /chats/:chatId/messages/pin | Pin a message (body: messageId) |
| POST | /chats/branch | Create new chat + link (body: fromChatId, messageIds[], title) |
| GET | /chats/:chatId/links | Get all ConversationLink objects for chat |
| POST | /chats/:chatId/messages/move | Move messages (body: messageIds[], targetChatId or targetWorkspaceId) |
User Flows
Flow 1 — Filter to specific content: Click Received + Pinned → see only AI’s pinned responses → search within that subset → export filtered results
Flow 2 — Branch conversation: Select last 2 messages starting new topic → “Move to new chat” → new chat created with those messages as seed → both chats show link indicators → navigate back and forth
Flow 3 — Curate for workspace: Filter to Pinned + Received + Search="architecture" → “Move visible to Workspace” → all matching messages become Workspace components with section grouping
Flow 4 — Navigate conversation lineage: In deeply branched chat → open “Linked conversations” → see parent, grandparent, siblings → click to navigate → understand full conversation evolution
Implementation Principles
- Filters are non-destructive views over the same conversation data
- All filter state is client-side for instant response
- Message metadata (hasLinks, hasMedia) can be computed or cached depending on performance needs
- Linked conversations create bidirectional relationships — both chats know about the link
- ConversationLink stores specific message IDs that formed the branch for precise traceability
- Filter bar should be horizontally scrollable on mobile
- Search field appears inline when Search toggle clicked, not as modal
- Filters enable powerful workflows: filter to specific content type → export/move/analyze that subset → maintain connection to original context through links
Document 7: Pin Message Feature & Instance Whiteboard
Junior Developer Breakdown
Source: 7. aiConnected OS Pin message feature.md
Purpose: Evolving design from simple message pinning → chat filters → Workspace concept → full spatial Whiteboard canvas. This document traces the complete design journey from “I can’t find important messages” to “each instance has an infinite canvas for organizing and transforming ideas.”
Key Insight: This document shows how one user pain point (losing important messages in long chats) cascaded into three interconnected systems: pinning, filtering (see Doc 6), and the Whiteboard.
Cross-References:
- Doc 5 covers the Whiteboard as a Dashboard tab (Board integration, compile panel)
- Doc 6 covers the filter system in detail (chips, state, search)
- This doc is the origin story for both, plus the Workspace concept
FEATURE 1: Pin Message Core Behavior
What it does: Lets users mark specific messages as “important” during long conversations and quickly view/export only those.
Pin Interaction:
- Every message (user + AI) has a pin icon
- Desktop: pin icon visible in message header row (or on hover)
- Mobile: always visible, or appears on long-press → “Pin message” in actions sheet
- States: Unpinned (pin outline) → Pinned (solid pin)
- Click pin → pinned. Click again → unpinned. Saved immediately (no extra “Save” step)
- Pins are per conversation (scoped to the chat, not global)
- pinnedAt enables sorting pinned messages by pin time vs message time — chronological by message time is usually better for narrative flow
- Pin toggle fires: PATCH /chats/:chatId/messages/:messageId { isPinned: true | false }
- Or: POST /chats/:chatId/messages/:messageId/pin { pinned: true | false }
FEATURE 2: Pinned Messages View (Toggle Mode)
What it does: The chat view has two modes — show everything, or show only pinned highlights.
Access Point: “Pinned” button in chat top bar, alongside other filter chips (see Doc 6 for full filter system).
Render Logic:
- Regeneration: If a pinned AI message is regenerated, the pin stays on that message slot — new content replaces old, pin persists
- Mobile: Same toggle at top of chat. Long-press message → “Pin / Unpin message”
FEATURE 3: Export from Pinned/Filtered View
What it does: When viewing filtered messages (pinned, sent, received, etc.), the visible set IS the export set. No extra selection steps.
Export Options (in chat header when filters active):
- Copy as Markdown
- Download .md
- Download .json
- Move visible messages to a new chat
- Move visible messages to another instance
- Move visible messages to Workspace/Whiteboard
- Share as public link or via mobile share menu
FEATURE 4: Instance Workspace (Component-Based Knowledge Surface)
What it does: A per-instance, non-chronological surface for collecting and organizing important pieces from many chats. Think project board / document hybrid.
NOTE: This concept was later evolved into the spatial Whiteboard (Features 6-10). Both are valid — Workspace is the structured-list approach, Whiteboard is the spatial-canvas approach. The system ships Workspace as v1 list view, Whiteboard as the v1.5+ visual layer.
Core Concept:
- Every instance gets one Workspace
- The Workspace holds Components — discrete chunks of content (not chat messages)
- Components come from pinned messages, filtered exports, or direct creation
Example Components:
- Idea snippet: “Cognigraph needs a dedicated sub-architecture for learning”
- Structured spec: “Chat Filter Bar – Requirements + Toggles”
- Code block: Next.js API route or n8n JSON
- Document fragment: “Section 3: Instance Workspace Concept”
- Visual/link: Link to Figma, diagram, etc.
FEATURE 5: Chat-to-Workspace Content Flow
What it does: Moves content from chats into the Workspace as organized Components.
From a Single Message:
- On any message, click “Add to Workspace”
- Dialog opens with: suggested title (first line), type selector, target workspace
- On save: creates Component, links back to source message via metadata
From a Filtered Set:
- Apply filters (e.g., Pinned + Received + Search="Cognigraph")
- Click “Move visible messages to Workspace”
- For each visible message: create Component with auto-suggested title and type
  - User messages → Idea or Question
  - AI messages → Answer or Spec
- Optionally group into a section: “Import from Chat — Dec 10 Brainstorm”
| Version | View | Description |
|---|---|---|
| v1 | Structured List | Sections with drag-and-drop. Components as cards/rows with title, type, preview, source, tags |
| v1.5 | Board (Kanban) | Columns by type (Idea → Draft → Refined → Locked In) or by category |
| v2 | Mind Map / Graph | Components as nodes, relations as edges, visual clustering |
- Chat = chronological conversation (messy thinking)
- Instance Memory (Cognigraph) = automatic knowledge graph (behind the scenes)
- Workspace = user-curated, intentional surface of the most important pieces (source of truth)
FEATURE 6: AI Interactions with Workspace
What it does: A “Workspace chat” or assistant bar that operates ON the components, not as a regular chat.
Example AI Commands:
- “Turn everything under ‘Architecture’ into a structured PRD section”
- “Compare these three Components and tell me the conflicts”
- “Generate TypeScript interfaces from these code-spec Components”
- “Write an executive summary of all Components tagged ‘v1’”
- Engine receives text of selected Components (or all in a section)
- Plus a prompt defining the task (summarize, convert, refactor, etc.)
- Output becomes either a new Component or updates an existing one
FEATURE 7: Instance Whiteboard (Spatial Canvas)
What it does: An infinite-canvas whiteboard (like Miro/Excalidraw) where each node references content from chats. The spatial evolution of the Workspace concept.
Core Properties:
- One whiteboard per instance (by default; can allow multiples later)
- Each item is a Node pointing back to source content
- Think of the board as a visual layer on top of all pinned/filtered content
- Single pinned AI answer → 1 Node titled “Learning Sub-Architecture Idea”
- Batch of 25 filtered messages → 1 Node of type message-group with preview: “25 messages from Chat: ‘Cognigraph – Learning’”
FEATURE 8: Chat-to-Whiteboard Content Flow
What it does: “Yank from chat, drop onto board” — moves content from conversations to the spatial canvas.
A. Single Message → Node:
- On any message: “Add to Whiteboard”
- Creates Node with type=message, source=chatId + messageId
- Auto-placed near last added node
- Toast: “Added to Whiteboard”
B. Filtered Set → Group Node:
- From filtered chat view: “Send visible messages to Whiteboard”
- Creates single Node of type message-group with all visible messageIds
- Label suggestion: “Cluster from – ”
- User can rename after
C. Attachments → Nodes:
- AI-generated images, uploaded files, links/videos
- “Add to Whiteboard” on attachment bubble
- Each becomes a Node with `type=image/file/link` and an appropriate preview
FEATURE 9: Spatial Canvas Editing
What it does: Miro/Excalidraw-style canvas interactions for organizing nodes.
Canvas Basics:
- Infinite scroll/pan/zoom
- Nodes can be dragged, resized, grouped
- Select — click and move nodes
- Rectangle/Frame — group container (like Figma frames)
- Connector/Arrow — draw relationships between nodes
- Sticky Note / Text Box — freeform annotation
- Draw a frame around related nodes → label it (e.g., “Learning Sub-Architecture”, “Chat Filter UX”)
- Use connectors between nodes to show relationships:
- “This idea supports that spec”
- “This cluster evolves into that PRD”
- Under the hood, each connector = `{ fromNodeId, toNodeId, relationType }`
- Relation types optional in v1; can add (supports, contradicts, depends-on) later
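The connector structure above can be written as a small TypeScript type. This is an illustrative sketch (the `connect` helper and the specific relation-type strings are assumptions drawn from the list above, not a finalized API):

```typescript
// Sketch of the connector data model: relationType is optional in v1.
type RelationType = "supports" | "contradicts" | "depends-on";

interface WhiteboardEdge {
  fromNodeId: string;
  toNodeId: string;
  relationType?: RelationType;
}

// Hypothetical helper for drawing a connector between two nodes.
function connect(
  fromNodeId: string,
  toNodeId: string,
  relationType?: RelationType,
): WhiteboardEdge {
  return { fromNodeId, toNodeId, relationType };
}
```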
FEATURE 10: AI-on-Board (Board Chat Panel)
What it does: A right-side panel for talking to the board content. Not a regular conversation — a control interface for AI operations on curated content.
Example Commands:
- “Take everything in this frame and turn it into a PRD”
- “Summarize this cluster”
- “Generate a step-by-step workflow from these Nodes”
- “Compare this idea cluster to that spec cluster and tell me conflicts”
- No selection: Use everything on the board (or everything visible)
- Selection mode: If nodes are selected when user types, only those nodes provide context
- Frame-specific: Right-click a frame → “Ask AI about this frame…” → next prompt scoped to that frame’s nodes
- Resolve `nodeIds` → full underlying content (messages, text, image descriptions, links)
- Feed content + user prompt into model
- Return result
- Appears in the Board Chat panel
- Optionally saved as a new AI Output Node on the canvas (e.g., “Draft PRD v1”)
- New node can then be connected, refined, or exported
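The scoping rules above (no selection → whole board; selection → only those nodes) can be sketched as a resolution step. `BoardNode` and `resolveNodes` are illustrative names, not part of a defined API:

```typescript
// Sketch of AI-on-Board context scoping: which nodes feed the model.
interface BoardNode {
  id: string;
  type: "message" | "message-group" | "image" | "file" | "link" | "ai-output";
  contentPreview: string;
}

function resolveNodes(
  selected: string[],
  all: Map<string, BoardNode>,
): BoardNode[] {
  // No selection → use everything on the board, per the scoping rules above.
  if (selected.length === 0) return [...all.values()];
  // Selection mode → only the selected nodes provide context.
  return selected.flatMap((id) => {
    const node = all.get(id);
    return node ? [node] : [];
  });
}
```

Frame-specific scoping would simply pass the frame's member node IDs as the selection.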
Three-Layer Architecture Summary
| Layer | Purpose | Nature |
|---|---|---|
| Chats | Messy thinking and iteration | Chronological, filterable, exportable |
| Whiteboard (per instance) | Curated pieces from many chats as visual nodes | Spatial, grouped, connected, labeled |
| AI-on-Board | Higher-order operations on board content | Reads nodes/clusters/frames, produces new artifacts |
API Endpoints
| Method | Endpoint | Purpose |
|---|---|---|
| PATCH | /chats/:chatId/messages/:messageId | Pin/unpin message ({ isPinned: boolean }) |
| POST | /instances/:instanceId/workspace/components | Create Workspace Component |
| GET | /instances/:instanceId/workspace/components | List Components |
| PATCH | /workspace/components/:componentId | Update Component |
| POST | /instances/:instanceId/workspace/import-from-chat | Bulk import messages as Components |
| GET | /instances/:instanceId/whiteboard | Get board + nodes + edges |
| POST | /instances/:instanceId/whiteboard/nodes/from-messages | Create node(s) from messages |
| POST | /instances/:instanceId/whiteboard/ask | AI operation on selected nodes |
Database Tables
instance_workspaces — id, instanceId
workspace_components — id, workspaceId, title, contentMarkdown, type (enum), section, tags (JSON), sourceChatId, sourceMessageIds (JSON), createdAt, updatedAt
workspace_relations (optional) — id, workspaceId, fromComponentId, toComponentId, relationType
whiteboard_nodes — id, whiteboardId, type, label, position (JSON), contentPreview, source (JSON), meta (JSON), createdAt, updatedAt
whiteboard_edges — id, whiteboardId, fromNodeId, toNodeId, relationType
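For orientation, the whiteboard tables above can be mirrored as TypeScript row types. Column types are assumptions (the document lists only column names); JSON columns are modeled as parsed values, and the example values are hypothetical:

```typescript
// Illustrative row types mirroring whiteboard_nodes / whiteboard_edges.
interface WhiteboardNodeRow {
  id: string;
  whiteboardId: string;
  type: string;
  label: string;
  position: { x: number; y: number };                  // position (JSON)
  contentPreview: string;
  source: { chatId?: string; messageIds?: string[] };  // source (JSON)
  meta: Record<string, unknown>;                       // meta (JSON)
  createdAt: string;
  updatedAt: string;
}

interface WhiteboardEdgeRow {
  id: string;
  whiteboardId: string;
  fromNodeId: string;
  toNodeId: string;
  relationType?: string;
}

// Hypothetical example row: a node created from a pinned message.
const exampleNode: WhiteboardNodeRow = {
  id: "node-1",
  whiteboardId: "wb-1",
  type: "message",
  label: "Learning Sub-Architecture Idea",
  position: { x: 0, y: 0 },
  contentPreview: "Pinned AI answer",
  source: { chatId: "chat-1", messageIds: ["m-1"] },
  meta: {},
  createdAt: "2025-12-18T00:00:00Z",
  updatedAt: "2025-12-18T00:00:00Z",
};
```

Note how `source` carries the traceability requirement from the Implementation Principles: every node links back to its originating chat and messages.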
User Flow: End-to-End Example
- User brainstorms across 5-10 chats about Cognigraph, memory architecture, chat filters
- In each chat: pin key answers, filter to `Pinned + Received`, search “Cognigraph”
- Use “Move visible messages → Workspace” (or “Send to Whiteboard”)
- All pinned AI answers become Components/Nodes in the instance’s Workspace/Whiteboard
- In Workspace: organize into sections (Concept Overview, Memory Layers, Learning Sub-Architecture)
- In Whiteboard: arrange spatially, draw frames, connect related clusters
- Ask AI (Workspace chat or Board chat): “Generate a v1 PRD for learning sub-architecture based on everything in this section/frame”
- Output saved as new Component/Node: “Learning Sub-Architecture – PRD v1”
- Instead of Cognigraph being scattered across 30 chats, the instance has a single canonical surface with all curated pieces
Implementation Principles
- Pins are per-message metadata — simplest possible data extension
- The Workspace is structured (list/board); the Whiteboard is spatial (canvas) — both serve the same purpose at different fidelity levels
- Ship Workspace list view as v1, Board/Kanban as v1.5, spatial Whiteboard as v2
- Components and Nodes always maintain source traceability (chatId, messageIds)
- AI-on-Board requests are scoped by selection — context is explicitly defined by what nodes the user selects
- The board is a visual layer on top of Cognigraph, not a replacement for it
- Every node/component can link back to its original chat message for full context
- Workspace/Whiteboard is per-instance — one canonical surface per project
Document 8: Cognition Console UI Design
Junior Developer Breakdown
Source: 8. aiConnected OS Cognition console UI design.md
Purpose: Defines the front-end interactive interface for the Cognigraph artificial cognition architecture. Redesigns how memory, projects, sessions, and personas are exposed and controlled through the UI. This is the “control panel over Cognigraph’s memory layers” plus a workbench for real project work with AI.
Key Paradigm Shift: The old model treats chat as memory (“chat history = what the AI knows”). The new model treats memory as a knowledge graph; chat is just the log from which memory is distilled. Users can see, edit, and govern what the AI remembers.
Cross-References:
- Doc 7 covers Workspace and Whiteboard (visual curation surfaces)
- Doc 9 covers Collaborative Personas (multi-persona interactions)
- Doc 15 covers Persona memory architecture in detail (identity, instruction, experience, skill layers)
FEATURE 1: Core Data Model — The Objects Users See
What it does: Defines the six fundamental objects the UI must expose and let users manipulate.
1a. Persona
Not “just a chat.” A semi-stable mind with purpose, style, and memory scope.
1b. Project
The backbone — not loose chats. Projects bundle context, personas, memories, and artifacts.
1c. Session (replaces “chats”)
A conversation episode inside a Project. This is where messages live.
1d. Message
Raw dialogue — not the primary memory, but the evidence from which memory is distilled.
1e. MemoryNode (Cognigraph node)
The central object — a structured memory entry following Category → Concept → Topic hierarchy.
1f. Artifact
Anything that isn’t a message but is part of the work.
FEATURE 2: Core Screens & Layout
What it does: Defines the four primary views and the global layout.
Four Primary Views:
- Home / Persona Hub
- Project Dashboard
- Session View (chat + memory drawer)
- Memory Explorer
Global Layout:
- Left sidebar: Persona selector (avatar + name + status), Projects list, Global Memory link, Settings, Daily Memory Report link
- Main area: Contextual content (Projects list, Session, Memory Explorer, etc.)
- Right drawer (toggle): “Context & Memory” for current Session — active memory slice, pinned nodes, recently used nodes, quick edit/add
FEATURE 3: Project Dashboard
What it does: A rich dashboard shown when the user clicks into a Project. Not just a chat list — a living control surface.
Header: Name, Main Persona, Goal statement, Status badge
Tabs:
| Tab | Content |
|---|---|
| Overview | Current goal/summary, last 3 Sessions, top 5 pinned MemoryNodes, active tasks |
| Sessions | Timeline list with title, date, Persona, short summary. “New Session” button with Persona picker |
| Memory | Scoped Memory Explorer — only project-scoped nodes by default. Filter by type (fact, plan, decision, etc.). List + tree view |
| Artifacts | Uploads, specs, notes, links. “Use in Session” button to attach as context |
| Settings | Allowed Personas, default memory slice rules (e.g., “use global ‘Engineering’ memories but not personal life”) |
FEATURE 4: Session View (The Chat Experience)
What it does: The primary interaction screen — a chat view enriched with visible memory indicators and context controls.
Left: Breadcrumbs
- Persona avatar + name
- Project name
- Session title
- Tiny tags under messages: `#Cognigraph`, `#MemoryModel`, `#UI`
- Indicator if memory was created/updated: small icon “3 memories updated”
- On hover/click:
- “View linked memories”
- “Promote this to long-term memory” (if AI suggested a candidate)
- “Remove from memory”
- Active Context — memory nodes currently attached to this Session as chips/cards showing title, type, scope icon (global/persona/project). User can pin/unpin, temporarily disable a node for this Session.
- Suggestions — memory nodes the engine thinks would be useful. “Add to context” button.
- Scratchpad (Open Thinking Layer) — ephemeral notes for this Session only. AI may write transient reasoning here. User can click “Commit to closed memory” to solidify.
FEATURE 5: Memory Explorer (“The Brain” UI)
What it does: A full-screen view for browsing, filtering, and governing all memories. Also accessible scoped within a Project.
Controls:
- Filters: Persona, Project, Scope (global/persona/project), Type (fact/rule/preference/etc.), Time window (created/last accessed)
- Views:
- Tree view — Category → Concept → Topic → nodes
- List view — sortable table
- Graph view (later) — visual knowledge graph
Node Detail shows:
- Content
- Type, scope, layer, scores (importance, stability)
- Source Sessions/Messages
- Links:
- “Jump to source message”
- “Edit & version history”
- “Attach to current Session”
- “Change scope” (promote from project → global)
- “Mark outdated” (lowers stability, hides from default context)
FEATURE 6: Daily Memory Report
What it does: A key governance surface showing what the AI learned, updated, or flagged each day.
Report Contents (grouped by Project & Persona):
- “New long-term memories created today”
- “Updated memories”
- “Potential conflicts or contradictions”
User Actions:
- Approve / adjust / delete nodes
- Re-scope (“this belongs only in aiConnected, not global”)
FEATURE 7: Message-to-Memory Pipeline (UI Side)
What it does: Makes the process of memories being extracted from conversations visible and controllable.
Pipeline:
- User and Persona talk in a Session
- Cognigraph (behind the scenes) extracts candidate memories, links to existing nodes or creates new ones
- UI surfaces this two ways:
- Inline: subtle indicator on messages (“2 new memories extracted”)
- End-of-session summary: “Here’s what I learned / updated”
- User has explicit control:
- Accept / reject / edit new nodes
- Or defer and handle via Daily Memory Report
FEATURE 8: Projects as True Context Bundles
What it does: Elevates Projects beyond “folder of chats” into rich context containers.
Project Context Profile:
- Default memory scopes
- Relevant categories (e.g., “Engineering: Cognition”, “Business: aiConnected”)
- Style preferences (short vs long, more code vs more explanation)
- Lead Persona for this domain
- Secondary assisting Personas
- Pinned MemoryNodes representing key assumptions/decisions
- The Project reads like a living spec
FEATURE 9: Open Thinking Layer vs Closed Thinking Layer (UI Mapping)
What it does: Maps Cognigraph’s internal memory architecture to visible, controllable UI elements.
Mapping:
| Cognigraph Layer | UI Surface | Nature |
|---|---|---|
| Open Thinking Layer (OTL) | Scratchpad, per-Session notes | Ephemeral, transient reasoning |
| Closed Thinking Layer (CTL) | Memory Explorer nodes (Category/Concept/Topic) | Committed, structured, durable |
- User can always see which layer they are editing
- Promotion: “Convert this scratchpad element to a permanent memory”
- Demotion: “Move this memory back to scratch / mark as tentative”
- OTL: lighter color, “pencil” icon, ephemeral feel
- CTL: solid color, “book” icon, durable feel
FEATURE 10: MVP vs Full Vision
What it does: Defines what to ship first vs what to defer.
MVP Must-Haves
- Personas — Persona picker + simple settings
- Projects — Create/edit, attach Personas
- Sessions — Conversation view, basic list under Project
- Memory (CTL) — Auto-extracted memories as list, simple filters (type/time), edit/delete/pin
- Context Drawer — Shows memory nodes in Session, toggle on/off
- Daily Memory Report — Simple list of new/updated nodes grouped by Project
Later Enhancements
- Full Memory Explorer tree + graph view
- Tasks integrated with memory
- Cross-project similarity suggestions
- Timeline visualizations (“what the AI learned this week”)
- Story mode / narrative of Project history
Paradigm Shift Summary
| Old World | New World |
|---|---|
| Chat ≈ Memory (each chat is a silo) | Memory ≈ Knowledge graph; chat is a log |
| “Memory” is a vague hidden blob | Memory is visible, categorized, governed |
| Projects = folder of chats | Projects = first-class context bundles |
| Start a new chat = blank slate | Start a Session = Persona + Project + Context |
Implementation Principles
- The UI is a control panel over Cognigraph — every memory layer should be visible and editable
- Messages are evidence; MemoryNodes are the distilled knowledge. Don’t confuse the two
- Memory should feel deliberate, not spooky — always show what was learned, let users govern it
- Projects are the organizational backbone, not chats. Sessions live inside Projects
- The Context Drawer is the user’s real-time view into what the AI “knows” right now
- Ship simple (list views, basic filters) first. Graph views and cross-project intelligence come later
- Daily Memory Report is the key governance mechanism — don’t skip it in MVP
- Scratchpad (OTL) and Memory Explorer (CTL) should be visually distinct so users always know what’s ephemeral vs permanent
Document 9: Collaborative Personas Planning
Junior Developer Breakdown
Source: 9. aiConnected OS Collaborative personas planning.md
Purpose: Defines how multiple AI Personas participate in the same conversation — joining, leaving, remembering, and collaborating — just like real people do. Introduces three collaboration modes and the data structures that unify them.
Core Principle: “A chat is not bound to a single Persona. A chat is a container for context, artifacts, and memory links. Personas are participants — not owners.”
Cross-References:
- Doc 8 covers Cognition Console (Persona/Project/Session model)
- Doc 14 covers Build Plan (Chat Kernel, multi-persona capabilities)
- Doc 15 covers Persona memory layers (identity, instruction, experience, skill)
FEATURE 1: Chat as Shared Context Container
What it does: Redefines chats from “one AI conversation” to a shared container that multiple Personas participate in.
A chat can include:
- Text conversation
- Documents
- Images
- Live screen sharing
- Voice
- Tools, timelines, decisions
- A chat can start with one Persona or many
- Personas can be added/removed dynamically
- The conversation context is preserved regardless of who joins or leaves
- Each Persona remembers their participation independently
FEATURE 2: Three Collaboration Modes
What it does: Supports three natural human interaction patterns through one unified system.
Mode 1: Invite Mode (Drop-In Collaboration)
User is talking to one Persona, then intentionally brings in others.
- “Let me bring the developer into this discussion”
- Later: “Thanks, you can step out”
Mode 2: Open Chat Mode (Commons / Lounge)
A persistent, always-available thread where ALL Personas can contribute when they have something relevant.
- Like a chaotic group messenger (Yahoo Messenger / Slack channel)
- Personas speak only when they pass a Contribution Threshold
- Ideal for brainstorming and exploratory thinking
Mode 3: Multi-Persona Start
User creates a new chat and selects multiple Personas from the beginning.
- “I’m starting a thread with finance, ops, and legal”
- Same mechanism as invite mode — just all links created at chat start time
FEATURE 3: Participation Link (Core Data Structure)
What it does: The bridge between one conversation thread and multiple Persona memories. Created whenever a Persona joins a chat.
- Created on join, updated on leave
- Leaving does NOT erase contributions, memory, or ability to reference later
- Same structure whether Persona was there from start or invited mid-conversation
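A minimal sketch of the Participation Link primitive, assuming timestamp-based join/leave fields (the exact field names are not specified in this document):

```typescript
// Sketch of the Participation Link: one record per Persona per chat.
interface ParticipationLink {
  chatId: string;
  personaId: string;
  joinedAt: number;
  leftAt?: number; // set on leave; contributions and memory are preserved
}

function join(chatId: string, personaId: string, now: number): ParticipationLink {
  return { chatId, personaId, joinedAt: now };
}

function leave(link: ParticipationLink, now: number): ParticipationLink {
  // Leaving only ends the participation window — nothing is erased.
  return { ...link, leftAt: now };
}
```

All three collaboration modes reduce to creating and ending these links: invite mode creates one mid-conversation, multi-persona start creates several at chat creation, and open chat keeps long-lived links for every Persona.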
FEATURE 4: Dynamic Persona Participation (Join/Leave)
What it does: Lets users add and remove Personas from conversations at any point.
Adding a Persona:
- UI: “Add collaborator” → search Personas
- Or command: `@Developer join`
On join, the Persona receives a catch-up packet:
- Thread title + goal (one paragraph)
- Last N turns (10-30)
- Pinned context (requirements, constraints, decisions)
- Open questions specifically for that Persona
Removing a Persona:
- UI: “Remove” / “Developer can leave”
- Or command: `@Developer leave`
- Leaving ends the participation window but preserves everything
- Start with 3-5 Personas → narrow to 1
- Start with 1 → expand to many
- Start with many → dismiss one → re-add later
- Each action simply adds or ends a Participation Link — no “mode switching”
FEATURE 5: Persona-Specific Memory of Shared Experiences
What it does: Each Persona stores their OWN memory of shared conversations. No single shared blob.
Individual Memory (Persona-Level) records:
- What they said
- What they recommended
- How their advice performed
- Their evolving confidence in the user
Shared Memory (Chat-Level) records:
- What the group discussed
- What decisions were made
- What conflicts emerged
- What conclusions were reached (or deferred)
- Personas can disagree with group memory — a finance Persona might flag: “I still believe the decision we made last month was financially unsound”
- Same chat, different memory traces:
- Developer remembers technical constraints
- Finance Persona remembers cost implications
- Dating Persona remembers emotional tone and signals
- “Yes — during the thread about X, you asked me to…”
- “We decided Y, and I warned about Z…”
- “I can pull up the exact message where we agreed”
FEATURE 6: Three Memory Layers for Collaboration
What it does: Defines the memory architecture required for true collaborative cognition.
| Layer | Scope | Nature |
|---|---|---|
| Persona Memory Graph | Private, per-Persona | Identity-anchored, evolutionary |
| Collaborative Space Memory | Shared across participants | Time-indexed, decision-aware |
| User Relationship Memory | Per-Persona view of user | Trust levels, communication preferences |
- Personas read shared memory
- Personas write to shared memory
- Personas interpret shared memory differently
- This is how true perspective emerges
FEATURE 7: Open Chat — Opportunistic Collaboration
What it does: A persistent, always-available thread on the Instance Dashboard where all Personas can contribute when relevant.
Characteristics:
- Always visible or one click from Dashboard
- Does not need to be created each time
- Accumulates history over time
- Acts as default brainstorming/ideation stream
- Personas participate opportunistically, not constantly
Contribution Threshold checks:
- Relevance score — is this in my domain/skill?
- Novelty score — am I adding something not already said?
- Confidence score — do I have enough signal to speak?
- Impact score — would this change a decision or direction?
- Redundancy check — has another Persona already covered it?
- Allow 1-3 replies per user message
- Queue the rest as “optional insights” user can expand
- Prevents pile-ons while preserving messenger-channel vibe
- A unique thinking model (risk-averse vs opportunity-seeking)
- A unique output style (bullet-heavy, narrative, question-asking)
- A unique default goal (protect, accelerate, simplify, validate)
- Redundancy gate should penalize “generic assistant answers”
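The threshold checks above can be combined into a simple gate. The weights, cutoff, and signal ranges here are invented for illustration — the document names the signals but not how they combine:

```typescript
// Illustrative Contribution Threshold gate for Open Chat.
interface ContributionSignals {
  relevance: number;   // 0..1 — is this in my domain/skill?
  novelty: number;     // 0..1 — am I adding something not already said?
  confidence: number;  // 0..1 — do I have enough signal to speak?
  impact: number;      // 0..1 — would this change a decision or direction?
  redundant: boolean;  // has another Persona already covered it?
}

function shouldSpeak(s: ContributionSignals, cutoff = 0.5): boolean {
  // Redundancy gate short-circuits: never repeat another Persona.
  if (s.redundant) return false;
  const score = (s.relevance + s.novelty + s.confidence + s.impact) / 4;
  return score >= cutoff;
}
```

Personas whose score clears the cutoff still compete for the 1-3 reply slots per user message; the rest queue as optional insights.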
FEATURE 8: Conversation Orchestration (Text + Voice)
What it does: Structured rules for who speaks, when, and how — prevents chaos while allowing natural emergence.
Orchestration Responsibilities:
- Turn Management — decide who speaks, allow interruptions, prevent domination
- Trigger Conditions — Persona speaks when domain is relevant, risk threshold crossed, or another Persona makes a questionable claim
- Cross-Persona Dialogue — Personas can question each other, build on ideas, push back respectfully
- User Override — user can address one Persona directly, ask the group, mute or prioritize Personas
Voice Mode:
- Each Persona has a distinct voice
- System announces speaker changes naturally
- Interruptions feel conversational, not robotic
Two-step flow:
- Selector pass (cheap): decide which Personas have something worth saying
- Speaker pass (expensive): generate responses only for selected Personas
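The two-pass structure can be sketched as follows. `selectorScore` and `generate` stand in for the cheap and expensive model calls; the 0.5 threshold and speaker cap are illustrative:

```typescript
// Sketch of the two-step flow: cheap selector pass, then expensive
// generation only for the Personas selected to speak.
interface Persona { id: string; domain: string; }

function twoStepFlow(
  personas: Persona[],
  selectorScore: (p: Persona) => number, // cheap pass
  generate: (p: Persona) => string,      // expensive pass
  maxSpeakers = 3,
): { personaId: string; reply: string }[] {
  return personas
    .map((p) => ({ p, score: selectorScore(p) }))
    .filter((x) => x.score > 0.5)
    .sort((a, b) => b.score - a.score)
    .slice(0, maxSpeakers) // cap replies per user message
    .map((x) => ({ personaId: x.p.id, reply: generate(x.p) }));
}
```

The expensive generation pass runs only for the short list the selector produces, which is what keeps many-Persona chats affordable.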
FEATURE 9: Dashboard as Collaboration Hub
What it does: The Instance Dashboard serves as the living control surface for all collaboration.
Dashboard Role:
- Centralized awareness — shows all Personas, conversations, active status
- Immediate interaction — start typing without deciding chat type first
- Persistent collaboration — hosts the permanent Open Chat
- Dashboard → centralized hub
- Open Chat → persistent, shared conversation (the commons)
- Chats → focused, contextual threads
FEATURE 10: Use Case Neutrality
What it does: Ensures the collaborative system works across ALL use cases, not just business.
Same mechanics support:
- Business advisory groups
- Creative writers’ rooms
- Personal life councils
- Dating simulations
- Group therapy-like reflection
- Friend-group simulations
- Mentors + peers + challengers
Persona Identity Differentiation
Each Persona in a CPS has:
Core Identity: Name, personality traits, communication style, risk tolerance, decision bias (conservative/aggressive/analytical/creative)
Primary Specialization: Finance, Operations, Legal, Strategy, Technical, Emotional/coaching
Secondary Modifiers: Ethical strictness, speed vs depth preference, optimism vs skepticism, authority level (advisor vs executor)
These are not prompts. They are constraints applied at inference and memory interpretation time.
UI Elements for Collaboration
| Element | Purpose |
|---|---|
| Persona Panel | Shows active Personas, status indicators (Listening/Thinking/Responding), mute/focus controls |
| Unified Conversation Stream | One conversation, clear speaker attribution, optional color/icon coding |
| Group Controls | “Ask the group”, “Facilitate discussion”, “Summarize consensus”, “Highlight disagreements” |
| Memory Anchors | Decision markers, unresolved issues, action items tied to Personas |
MVP vs Full Version
MVP (Ship Early)
- Join/leave Personas into a thread
- Catch-up packet on join
- Persona stores Participation Memory Event
- Participants strip + mention syntax
Full Version (Later)
- Permissions per Persona (what they can store)
- Persona “office hours” / availability
- Auto-suggest collaborator
- Voice mode speaker switching
- Action-item handoff (“Developer, take ownership of task #12”)
The Unified Model (One Sentence)
In aiConnected, conversations are persistent contexts that can include any number of Personas, who may join, contribute, leave, and remember their participation — individually and continuously — just like people do in real life.
Implementation Principles
- ParticipationLink is the universal primitive — same structure for all three modes (invite, open, multi-start)
- Catch-up packets prevent transcript dumping on join — keep context efficient
- Contribution Thresholds prevent Open Chat from becoming noise
- Memory is ALWAYS Persona-specific — no single shared blob. Personas interpret shared events differently
- Leaving a chat preserves everything — contributions, memory, ability to reference later
- Two-step flow for scalability: cheap selector pass, then expensive generation pass
- Use case neutral design — business, creative, social, therapeutic all use identical mechanics
- Dashboard Open Chat is the “commons” — always available, persistent, naturally collaborative
Document 10: Computer Use for AI Personas (Embodied Digital Worker)
Junior Developer Breakdown
Source: 10. aiConnected OS Computer Use for aiPersonas.md
Purpose: Defines how AI Personas get a “physical body” — a persistent digital workspace where they actually USE computers like humans do (clicking, typing, browsing, navigating software) instead of relying on APIs or scripts. This is the execution layer that makes Personas feel like real digital employees.
Core Principle: “You are not trying to automate tasks — you are trying to instantiate digital agency, and that requires embodiment, continuity, and learning, not better prompts or more tools.”
Analogy: Think of it as hiring a teenager — capable, limited, supervised, improving over time. If a trained teenager could do a task on a computer, this system should eventually be able to do it too.
Cross-References:
- Doc 8 covers Cognition Console (Persona/Project/Session model)
- Doc 9 covers Collaborative Personas (multi-persona participation)
- Doc 15 covers Persona memory layers (procedural memory = skills)
- Doc 19 covers Fluid UI Architecture (how embodiment fits the overall vision)
FEATURE 1: Digital Body (Persistent Desktop Environment)
What it does: Each Persona gets a sandboxed computer environment — a real desktop it “lives inside” with persistent state.
Components:
- Controlled desktop environment per Persona (containerized)
- Browser with its own profile (cookies, sessions, logins, extensions)
- Controlled filesystem (downloads, uploads, saved files)
- OS-level clipboard and window management
- Persistent across sessions (doesn’t reset every time)
Technology:
- KasmVNC — web-native desktop streaming (Linux desktop streamed to browser)
- Containerized workspace images
- WebRTC/VNC for connection + input injection
- Chromium-based browser inside the environment
- “Teenager has their own work computer” model
- Isolated environment (easy reset/recover)
- You control what apps exist and what permissions it has
- Already feels like a worker because it “lives somewhere”
FEATURE 2: Perception Stack (How the Agent “Sees”)
What it does: A three-layer perception system that supports the fuzzy generalization a human uses when looking at a screen.
Layer 1: UI Text & Structure (fast, reliable)
- DOM accessibility tree when available (browser)
- Visible text extraction
Layer 2: Vision
- Screenshot analysis for when the DOM is unavailable
- Chart/image interpretation
- Layout understanding
Layer 3: Visual Memory & Continuity
- “I was on this page before”
- “This button moved but does the same thing”
- Pattern recognition across sessions
FEATURE 3: Action Layer (How the Agent “Acts”)
What it does: Reliable UI control — clicking, typing, navigating — like a human would.
Technology:
- Playwright — mature base for controlled browser automation across engines
- browser-use — open-source accelerator for LLM-driven web actions
- CDP (Chrome DevTools Protocol) for fine-grained control
Capabilities:
- Mouse move/click/drag
- Keyboard typing (including shortcuts)
- Window management, tabs, basic OS interactions
- Form filling, file upload/download
- Tab management and navigation
FEATURE 4: Verification Gates (Evidence-Based Completion)
What it does: Ensures the agent doesn’t hallucinate success. A task step is only “done” if a condition is observed.
Verification Loop:
- Plan: Break task into steps with expected outcomes
- Act: Click/type
- Observe: Read DOM + screenshot + network events
- Verify: Check for success condition (not “I clicked it” — actual proof)
- Recover if not verified
Evidence per step:
- Screenshots at key steps
- Action timeline
- URLs visited
- Files created/downloaded
- DOM state proof
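The act → observe → verify → recover loop can be sketched as a verification gate. The `Observation` fields are assumptions based on the evidence list above; the retry count is illustrative:

```typescript
// Sketch of a verification gate: a step is "done" only when its success
// condition is observed, never merely because the action ran.
interface Observation {
  url: string;
  domText: string;
  screenshotTaken: boolean;
}

interface Step {
  act: () => void;
  observe: () => Observation;
  verify: (obs: Observation) => boolean; // success condition, not "I clicked it"
}

function runStep(
  step: Step,
  maxAttempts = 3,
): { done: boolean; evidence: Observation[] } {
  const evidence: Observation[] = [];
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    step.act();
    const obs = step.observe();
    evidence.push(obs); // screenshot/DOM proof retained per attempt
    if (step.verify(obs)) return { done: true, evidence };
    // not verified → recover and retry
  }
  return { done: false, evidence };
}
```

Because the evidence array is returned either way, a failed step still produces an honest audit trail rather than a claimed success.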
FEATURE 5: Unstuck Engine (Recovery & Self-Healing)
What it does: When things go wrong, the agent behaves like a human: pause, re-orient, try alternatives, backtrack.
Why This Matters: This is where ALL competitors fail. Most agents get stuck and just retry or give up.
Minimum Viable “Unstuck” Behaviors:
- Loop detection — same screen state N times → try something else
- Modal/toast/cookie banner handling — known blocker library
- “Try next best target” logic — same label, nearby button, alternative navigation
- Checkpoint rollback — “go back to last known good screen”
- Fork handling — if new screen appears, classify which fork and proceed
- Timeout diagnosis — not just waits, but figures out WHY
- Strategy switching — DOM mode ↔ vision mode
- Single escalation question — ONLY when truly blocked (CAPTCHA/2FA/permissions)
Checkpoint state includes:
- URL
- Key DOM markers
- Cookies
- Last action
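The checkpoint state above can be modeled as a small rollback stack ("go back to last known good screen"). Field names follow the list above; the `CheckpointStack` class and its behavior are illustrative assumptions:

```typescript
// Sketch of checkpoint rollback for the Unstuck Engine.
interface Checkpoint {
  url: string;
  domMarkers: string[]; // key DOM markers proving which screen this is
  cookies: Record<string, string>;
  lastAction: string;
}

class CheckpointStack {
  private stack: Checkpoint[] = [];

  // Save a checkpoint after each verified step.
  save(cp: Checkpoint): void {
    this.stack.push(cp);
  }

  // Drop the failed state and return the last known good checkpoint.
  rollback(): Checkpoint | undefined {
    this.stack.pop();
    return this.stack[this.stack.length - 1];
  }
}
```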
FEATURE 6: Permission System (Teenager Policy Model)
What it does: Three concentric rings of permissions, matching how you’d supervise a hired teenager.
Ring A: Safe by Default (no irreversible actions)
- Browse, read, summarize, copy/paste into drafts
- Gather evidence (screenshots, notes)
- Build a plan and show what it intends to do next
Ring B: Allowed with Constraints (day-to-day work)
- Send messages only from approved templates or after approval
- Fill forms but require approval before final submit
- Download/upload files within sandbox folder
Ring C: Requires Explicit Approval (every time)
- Anything involving money, billing, financial transfers
- Deleting accounts/data
- Changing DNS / security controls
- Trading live capital
Two execution modes:
- Autopilot (safe): deterministic tools only, browser read-only unless whitelisted
- Operator (risky): can click/type in browser, requires approval for destructive actions
FEATURE 7: Teach Mode (Skill Learning by Demonstration)
What it does: User shows the agent a workflow once; the agent can repeat and improve it. This is the breakthrough feature.
Components:
- Recorder: captures screen + actions + DOM snapshots
- Skill Compiler: turns recordings into structured skills:
- Goals
- Steps (ordered)
- Anchors (what to look for on screen — not coordinates)
- Variables (name, email, search terms, etc.)
- Branch rules (“if you see X, do Y”)
Generalization rules:
- Button text similarity
- Layout heuristics
- Recovery behaviors (modal closing, alternate navigation)
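A compiled skill might be represented roughly as follows. The structure mirrors the Skill Compiler list above (goals, ordered steps, anchors, variables, branch rules), but the field names and the `bindVariables` helper are illustrative assumptions:

```typescript
// Sketch of a compiled Skill record produced by the Skill Compiler.
interface SkillStep {
  order: number;
  anchor: string;  // what to look for on screen — not coordinates
  action: string;  // e.g. "click", "type {query}"
  branchRules?: { ifSeen: string; then: string }[]; // "if you see X, do Y"
}

interface Skill {
  goal: string;
  steps: SkillStep[];
  variables: Record<string, string>; // name, email, search terms, etc.
}

// Substitute recorded variables into a step's action template.
function bindVariables(template: string, vars: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (_, k) => vars[k] ?? `{${k}}`);
}
```

Anchoring on screen content rather than coordinates is what lets a skill survive layout changes, and unresolved variables are left visible so a missing input fails loudly instead of silently.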
FEATURE 8: Four Memory Types for Embodied Agents
What it does: The agent accumulates operational history and habits, making it feel “alive.”
| Memory Type | Content | Example |
|---|---|---|
| Declarative | Facts, references | “The CRM login page is at app.example.com” |
| Procedural | Skills — how to do things | “LinkedIn lead research” workflow |
| Episodic | What happened | “On Dec 18, I tried X and it failed because Y” |
| Preference & Policy | User work style | Tone, what counts as “done”, risk limits, escalation rules |
Phased Build Roadmap
Phase 1: Build the “Body + Cockpit”
- KasmVNC desktop per Persona
- Control channel (open URLs, type, click, manage tabs)
- Full event logging + replay artifacts
- Outcome: It already feels like a worker because it “lives somewhere”
Phase 2: Browser Operator That Doesn’t Lie
- Playwright for execution
- Verification gates (step only “done” if condition observed)
- Evidence (screenshot/DOM proof per step)
- Outcome: Fewer stalls, trustable status reports
Phase 3: Unstuck Engine
- Loop detection, modal handling, alternative target logic
- Checkpoint rollback, escalation questions only when blocked
- Outcome: Starts resembling the “teenager” — can handle real-world messiness
Phase 4: Teach Mode
- Recorder + skill compiler
- Skill storage in Cognigraph (procedural + episodic)
- Basic generalization rules + recovery behaviors
- Outcome: Trainable across industries without custom engineering
Phase 5: Generalization + Marketplace
- Skill templates + parameterization
- Success-rate tracking
- Environment profiles
- Library/marketplace model for skills (aiConnected “engines” model)
- Outcome: Selling “trained workers + trained skills,” not just “an agent”
Phase 6: Multi-Tool Embodied Agent
- Telephony (LiveKit/Twilio)
- CRM integrations
- Browser + voice + note-taking + follow-up execution
- Unified “work diary” and “task board”
- Outcome: True digital worker across communication channels
Stress-Test Use Case: Trading
Trading is intentionally the hardest benchmark because it forces speed, discipline, risk boundaries, verification, and continual learning.
Safe Progression:
- Replay + paper mode first — agent watches charts, executes strategy in simulation, logs rationale
- Constrained live mode (teenager rules) — fixed max position, daily loss limit, hard stops, approval for parameter changes, full audit trail
- UI trading as worst-case benchmark — dynamic charts, hotkeys, latency, popups, disconnects
What This Is (and Is Not)
| This IS | This is NOT |
|---|---|
| The physical body for digital Personas | A UI redesign |
| The execution layer for human-like work | A chatbot |
| The foundation for true autonomy | A scripting system |
| A general-purpose “digital worker” runtime | A narrow automation tool |
| Environment for living digital intelligence | A full OS replacement |
Key Differentiator vs Existing Solutions
| Existing AI Agents | This System |
|---|---|
| Stateless | Persistent digital presence |
| Break when interfaces change | Visual continuity + unstuck engine |
| Assume success without verification | Evidence-based completion |
| Get stuck in loops | Recovery + checkpoint rollback |
| Require constant babysitting | Safe autonomy with permission rings |
| Task scripts | Living agency |
Implementation Principles
- Start with the body (persistent desktop), not the brain (intelligence)
- Verification-first: define “done” and how to check it before building execution
- Deterministic tools for 80% of work; browser automation only for the unavoidable 20%
- Recovery/unstuck engine is where real differentiation lives — invest heavily here
- Teach Mode is the breakthrough: procedural memory in Cognigraph makes agents trainable by demonstration
- Permission rings match real-world supervision — safe default, constrained work, approval-required actions
- Don’t start with “make it smarter” — start with “make it reliable and honest”
- Skills become the product: not selling “an agent” but “trained workers + trained skills”
Document 11: Chat Cleanup System
Junior Developer Breakdown
Source: 11. aiConnected OS Chat Cleanup System.md Created: 12/18/2025 | Updated: 12/18/2025
Why This Document Exists
The Problem (Founder’s Frustration): Every major AI chat platform — ChatGPT, Claude, Gemini — suffers from the same problem: you cannot easily clean up your chats. Over the course of a month, a user accumulates dozens or hundreds of conversations. Many of these are throwaway: one-off questions, random curiosity, quick lookups. But in systems with AI memory, those throwaway chats can pollute future context. The AI “remembers” things from conversations the user considers meaningless, and the user has no efficient way to purge them.
The founder’s exact complaint: “Over the course of a month, I might have just little random questions that I asked that I don’t want to be part of the permanent context for future discussions. They were just random little stupid questions or something, or just one-off conversations. I need to be able to clean that up easily.”
What This Document Solves: It defines a complete content lifecycle management system — not just “delete a chat,” but a full pipeline of browse → multi-select → delete → recover → permanently destroy, applied consistently to both chats and memories, at every level of the application (Global, Instance, Persona).
Why Anyone Should Care: This is one of those features that quietly makes the platform feel “finished” and enterprise-grade. Without it, chat sprawl becomes unmanageable within weeks. With it, users feel like they control their environment — their data, their context, their AI’s memory. No other AI platform does this well.
Cross-References:
- Doc 4 (Folder System) established that chats can be organized into folders, moved between folders, and viewed at multiple scopes. This document builds the deletion and recovery layer on top of that organizational foundation.
- Doc 6 (Chat Filters & Linked Conversations) established ConversationLinks between chats. This document defines what happens to those links when a chat is deleted.
- Doc 8 (Cognition Console) defined MemoryNodes with scope (global/persona/project). This document adds the operational layer: how users bulk-manage, archive, move, and delete those memory items.
- Doc 9 (Collaborative Personas) established that chats can have multiple Persona participants. This document handles the edge cases: what happens when you move or delete a multi-persona chat, and what happens when the Persona no longer exists at restore time.
- Doc 14 (Build Plan) lists this as Phase 5 of the build — the “power user advantage” that differentiates aiConnected from competitors.
Important Context: What Had Already Been Designed vs. What Was Missing
Before this document was created, the founder asked: “Have I already included a way to manage old chats?” The answer was: partially — but not as a complete system. What DID already exist in prior documents:
- Multi-chat visibility: users could view chats globally, within an Instance, within a Persona, and within Folders
- Moving chats between folders (Doc 4)
- Multi-select for messages within a chat (pin, extract, move to whiteboard — Docs 6 and 7)
- The implication of multi-select for chats (moving into folders), but never formalized as a system-wide pattern
What was MISSING (and is defined in this document):
- A “Recently Deleted” state (soft-delete)
- A retention window (how long before auto-purge)
- Restore behavior (what happens when you recover a deleted chat)
- Cross-instance restoration logic (what if the original Instance/Folder/Persona no longer exists)
- What happens to linked/referenced/derived chats when their source is deleted
- A permanent purge action (irreversible hard deletion)
- The same lifecycle applied to memories (not just chats)
FEATURE 1: ChatThread Data Model (Extended for Lifecycle Management)
What it does: Extends the existing ChatThread object with fields that track its deletion state, who deleted it, when, and where it should be restored to if recovered. Why it matters: Without these fields, deletion is binary — the chat either exists or it doesn’t. With them, the system supports soft-delete, timed retention, and smart restoration. Intended purpose: Every chat in the system carries enough metadata to be deleted safely (removed from active use and memory indexing), held in a recovery state for a configurable period, restored to its exact original location, or permanently destroyed.
The `restore_to` field is a snapshot taken at the moment of deletion. It captures the chat’s original home — which Instance, which Personas, which Folder. This is critical because between the time a chat is deleted and the time a user decides to restore it, the original Instance might have been renamed, the Folder might have been deleted, or the Persona might have been deactivated. The snapshot gives the restoration logic a “last known good location” to work with.
The delete_reason field tracks whether the deletion was manual (user clicked delete), automated (a cleanup cron job or policy rule), or policy-driven (e.g., an enterprise admin enforcing retention rules). This matters for audit logging and for distinguishing “I chose to delete this” from “the system cleaned this up.”
FEATURE 2: Three Chat States (The Deletion Lifecycle)
What it does: Defines exactly three states a chat can be in, creating a clear lifecycle that mirrors how file systems work (think: Trash/Recycle Bin). The three states:
| State | What it means | Visible in active lists? | Visible in Recently Deleted? | Affects memory/retrieval? | Reversible? |
|---|---|---|---|---|---|
| Active | Normal, working chat | Yes | No | Yes — contributes to memory indexing | N/A |
| Recently Deleted | Soft-deleted, in recovery holding area | No | Yes | No — immediately removed from memory indexing and search results | Yes — can be restored |
| Permanently Deleted | Hard-deleted, irreversible | No | No | No — completely gone | No — cannot be recovered |
- `deleted_at IS NULL` → Active
- `deleted_at IS NOT NULL AND permanently_deleted = false` → Recently Deleted
- `permanently_deleted = true` → Permanently Deleted (or simply hard-deleted from the database)
- Every query that feeds the memory/retrieval pipeline MUST include `WHERE deleted_at IS NULL` as a filter condition. This is non-negotiable.
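The state derivation above can be sketched as a small helper. This is a minimal sketch, assuming an illustrative `ChatThread` shape — the field names follow the rules above but are not the final schema:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class ChatThread:
    chat_id: str
    deleted_at: Optional[datetime] = None
    permanently_deleted: bool = False

def chat_state(chat: ChatThread) -> str:
    """Map the two lifecycle fields onto exactly one of the three states."""
    if chat.permanently_deleted:
        return "permanently_deleted"
    if chat.deleted_at is not None:
        return "recently_deleted"
    return "active"

def eligible_for_retrieval(chat: ChatThread) -> bool:
    """Memory/retrieval pipelines may only ever see Active chats
    (the WHERE deleted_at IS NULL rule)."""
    return chat.deleted_at is None and not chat.permanently_deleted
```

Note that `eligible_for_retrieval` is deliberately a separate function: every retrieval path calls it (or the equivalent SQL filter), so deletion takes effect everywhere at once.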
FEATURE 3: The Global Rule — Same Behavior Everywhere
What it does: Establishes that the cleanup system works identically at every scope level. There is no special behavior at the Global level that doesn’t exist at the Instance level or the Persona level. The user always gets the same tools. Why this matters: This is a UX consistency principle. Users should never wonder “can I do this here?” If they can multi-select and delete at the Global level, they can do the same thing inside an Instance or inside a Persona view. The only thing that changes is the default filter context — which chats are shown. The universal toolkit available at every scope:
- A list of chats (filtered by scope)
- Multi-select capability
- Bulk actions: Delete, Restore, Permanent Delete
- Search + sort + filter
- “Recently Deleted” as a dedicated view within that scope
- Global: `GET /chats?user_id={userId}&status={active|deleted}`
- Instance: `GET /instances/{instanceId}/chats?status={active|deleted}`
- Persona: `GET /personas/{personaId}/chats?status={active|deleted}`
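Because all three scopes behave identically, one query builder with optional scope filters can back all three endpoints. A sketch, assuming illustrative table and column names (`chat_threads`, `chat_participants`, `last_activity_at` are assumptions, not the final schema):

```python
from typing import Optional

def build_chat_list_query(user_id: str,
                          status: str = "active",
                          instance_id: Optional[str] = None,
                          persona_id: Optional[str] = None) -> tuple:
    """All three scopes (Global / Instance / Persona) share one query;
    the scope managers just lock one of the optional filters."""
    where = ["user_id = ?"]
    params = [user_id]
    # Active vs Recently Deleted — the only status split the UI exposes.
    where.append("deleted_at IS NULL" if status == "active"
                 else "deleted_at IS NOT NULL AND permanently_deleted = FALSE")
    if instance_id is not None:  # Instance-level manager locks this filter
        where.append("instance_id = ?")
        params.append(instance_id)
    if persona_id is not None:   # Persona-level manager locks this filter
        where.append("chat_id IN (SELECT chat_id FROM chat_participants "
                     "WHERE persona_id = ?)")
        params.append(persona_id)
    sql = ("SELECT * FROM chat_threads WHERE " + " AND ".join(where)
           + " ORDER BY last_activity_at DESC")
    return sql, params
```

Shipping one builder instead of three keeps the "same behavior everywhere" rule true by construction.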
FEATURE 4: Three Scope Levels (Global, Instance, Persona Chat Managers)
What it does: Defines the three vantage points from which a user can browse, manage, and clean up their chats.
Scope 1: Global Chat Manager
What it is: A top-level screen that shows every chat the user has across all Instances, all Personas, all Folders. This is the “bird’s eye view” — the place you go when you want to do a deep cleanup of everything at once. Header Controls:
- Search bar (searches across all chats)
- Filters:
- Instance: All / pick specific Instance
- Persona: All / pick specific Persona
- Folder: All / pick specific Folder
- Status: Active / Recently Deleted
- Type: Solo persona / Multi-persona
- Sort options:
- Last activity (default — most recently active chats first)
- Created date
- Title (alphabetical)
- Instance name (group by Instance)
Each chat row displays:
- Chat title
- Instance badge (which Instance it belongs to, color-coded or icon)
- Persona badge(s) (which Persona(s) participate)
- Last activity timestamp
- Quick actions: open chat, context menu (right-click), checkbox for selection
Context menu actions:
- Delete (soft-delete → Recently Deleted)
- Move to folder (Active chats only)
- Move to Instance
- Export (optional, future)
- Archive (optional, future)
Scope 2: Instance-Level Chat Manager
What it is: Inside an Instance dashboard, the user sees all chats within that Instance, including chats involving any Persona(s) assigned to that Instance. How it differs from Global: The UI is identical, but the Instance filter is locked to the current Instance. The user can still filter by Persona (showing only chats with a specific Persona within this Instance), by Folder, by status, etc. Why this scope exists: The founder wanted users to be able to “go to individual instances, that dashboard, and perform the same thing — delete all of the conversations within an instance, even if it’s across multiple personas in that instance.”
Scope 3: Persona-Level Chat Manager
What it is: Inside a Persona view, the user sees all chats that include that Persona — both solo chats (where this Persona is the only AI participant) and multi-persona chats (where this Persona was one of several). How it differs from Global: The Persona filter is locked to this Persona. The Instance filter may still be available if Personas can span multiple Instances, or it may also be locked if the Persona is Instance-bound. Why this scope exists: The founder wanted users to be able to “delete or clean up conversations with a particular persona.” If a user has been chatting with their Legal Persona about random things for weeks and wants to clean up just those conversations, they can do so from this view without affecting any other Persona’s chats.
FEATURE 5: Multi-Select Behavior (The Interaction Pattern)
What it does: Defines exactly how users select multiple chats for bulk operations. This is a system-wide interaction pattern, not specific to deletion — it also applies to bulk move, bulk archive, and bulk export. Desktop interactions:
- Checkbox per row: each chat row has a checkbox on the left edge. Clicking it toggles selection for that chat.
- “Select all” checkbox: in the header row, selects all chats in the current filtered view (not all chats everywhere — only those matching the active filters). If the user has filtered to “Short chats, older than 30 days,” Select All selects only those.
- Shift-click range selection: click one checkbox, then Shift+click another — all rows between them are selected. Standard desktop file manager behavior.
Mobile interactions:
- Long-press to enter multi-select mode: long-pressing any chat row activates multi-select mode, where tapping additional rows toggles their selection.
- This mirrors how mobile file managers and photo galleries work (Google Photos, iOS Photos).
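The shift-click range rule can be sketched as a pure function over the visible (filtered) row order. Names are illustrative:

```python
def apply_shift_click(row_ids: list, selected: set, anchor_id, clicked_id) -> set:
    """Select every row between the anchor row and the shift-clicked row,
    inclusive — standard desktop file-manager behavior. Existing selection
    is preserved (the range is added to it)."""
    i, j = row_ids.index(anchor_id), row_ids.index(clicked_id)
    lo, hi = min(i, j), max(i, j)          # works in either direction
    return selected | set(row_ids[lo:hi + 1])
```

Operating on the *current filtered view's* row order matters: "Select all" and range selection must never touch chats hidden by the active filters.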
FEATURE 6: Bulk Action Bar (Sticky Bottom Bar)
What it does: When one or more chats are selected, a persistent action bar appears at the bottom of the screen showing available bulk operations. Bar contents:
- Selected count: “12 chats selected”
- Available actions (vary by context):
- In Active view: Delete, Move, Archive, Export
- In Recently Deleted view: Restore, Delete Permanently
- Cancel selection: button to deselect all and dismiss the bar
Why a sticky bottom bar:
- The user’s attention is on the chat list in the center of the screen
- A bottom bar stays visible as they scroll through and select chats
- It doesn’t obscure the list content
- It provides a persistent reminder of how many items are selected
- It’s immediately accessible without scrolling back to a toolbar
The bar should animate in when `selectedCount > 0` and animate out when `selectedCount === 0`. It should be visually prominent (contrasting background color) and have large, clearly-labeled action buttons.
FEATURE 7: Smart Cleanup Filters
What it does: Provides pre-built filter presets specifically designed to make “clean up a month of random chats” effortless. These are the key to making cleanup feel fast instead of tedious. Why this matters: Without smart filters, the user would have to manually scroll through hundreds of chats and decide one by one which to delete. With smart filters, they can immediately surface the most likely candidates for deletion, select all, and clean up in seconds. Quick Filters (pre-built, one-click):
| Filter | What it shows | Why it’s useful |
|---|---|---|
| “Short chats” | Chats with fewer than ~6 messages | These are almost always one-off questions or quick lookups — the exact “stupid little questions” the founder complained about |
| “One-off chats” | Chats with no follow-up activity after 24 hours | If the user asked something and never came back, it’s probably disposable |
| “No pins” | Chats where the user never pinned a single message | Pinned messages indicate importance; absence of pins suggests low value |
| “No references / no links” | Chats that aren’t linked to or referenced by any other chat | Isolated chats with no connections to other work are safer to delete |
| “Older than” | Configurable: 7 / 30 / 90 days since last activity | Time-based cleanup for stale conversations |
Search options:
- Title search (chat titles)
- Content search (optional — search within message text)
- Tag search (if chats have tags)
Sort options:
- Last activity (default — shows most recent first for review)
- Oldest first (for cleanup sessions — start with the oldest, most likely stale)
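The quick filters reduce to simple predicates over a per-chat summary. A sketch, where the field names and the ~6-message / 24-hour thresholds are assumptions taken from the table above:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ChatSummary:
    message_count: int = 0
    created_at: datetime = datetime(2025, 1, 1)
    last_activity_at: datetime = datetime(2025, 1, 1)
    pinned_count: int = 0
    link_count: int = 0

def is_short(c: ChatSummary) -> bool:
    return c.message_count < 6          # "~6 messages" threshold assumed

def is_one_off(c: ChatSummary) -> bool:
    # No follow-up activity after the first 24 hours.
    return c.last_activity_at - c.created_at < timedelta(hours=24)

def has_no_pins(c: ChatSummary) -> bool:
    return c.pinned_count == 0

def is_unlinked(c: ChatSummary) -> bool:
    return c.link_count == 0            # no references to or from other chats

def older_than(c: ChatSummary, days: int, now: datetime) -> bool:
    return now - c.last_activity_at > timedelta(days=days)
```

Keeping each preset as an independent predicate lets the UI combine them (e.g. "Short chats, older than 30 days") without special cases.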
FEATURE 8: Soft Deletion Workflow (Delete → Recently Deleted)
What it does: When a user hits Delete (for a single chat or a bulk selection), the chats are not destroyed. They are moved to a “Recently Deleted” holding state — removed from active views, removed from search, removed from memory indexing, but still recoverable. Step-by-step flow:
- User selects chats and clicks Delete (from the bulk action bar or a single chat’s context menu)
- System immediately performs:
  - Sets `deleted_at = now()` on each selected chat
  - Captures `restore_to` snapshot: `{ instance_id, persona_ids, folder_id, original_scope_context }`
  - Sets `delete_reason = 'user'`
- Chats disappear from Active lists immediately — the user sees them vanish from the list
- Chats appear in Recently Deleted view — accessible via a “Recently Deleted” tab/filter within the same scope
- Critical: chats are IMMEDIATELY removed from:
- Normal chat browsing
- Search results (unless user explicitly switches to Recently Deleted view)
- Memory indexing / retrieval pipelines — the AI will no longer use these chats as context
- The Recently Deleted view uses the same list UI as the Active view
- Search and filters still work within Recently Deleted
- Each row shows:
- Chat title and original Instance/Persona/Folder context
- “Deleted X days ago”
- “Will be permanently deleted in Y days” (countdown to auto-purge)
- Users can select chats here and choose Restore or Delete Permanently
Retention window:
- Default: 30 days before auto-purge
- Configurable per user or per organization (enterprise admins can set retention policy)
- After the retention window expires, chats are automatically permanently deleted (auto-purge)
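A minimal sketch of the soft-delete step, capturing the `restore_to` snapshot at deletion time (field names are assumptions, not the final schema):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Chat:
    chat_id: str
    instance_id: str
    folder_id: Optional[str]
    persona_ids: list
    deleted_at: Optional[datetime] = None
    restore_to: Optional[dict] = None
    delete_reason: Optional[str] = None

def soft_delete(chats: list, reason: str = "user",
                now: Optional[datetime] = None) -> None:
    """Bulk soft-delete. The snapshot is taken NOW, because the original
    Instance/Folder/Persona may no longer exist at restore time."""
    now = now or datetime.now()
    for chat in chats:
        chat.deleted_at = now
        chat.restore_to = {                 # "last known good location"
            "instance_id": chat.instance_id,
            "persona_ids": list(chat.persona_ids),
            "folder_id": chat.folder_id,
        }
        chat.delete_reason = reason         # 'user' | 'auto' | 'policy'
```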
FEATURE 9: Restore Workflow (Recovery from Recently Deleted)
What it does: When a user restores a chat from Recently Deleted, the system attempts to put it back exactly where it was before deletion — same Instance, same Persona associations, same Folder. Restore behavior:
- User selects chats in Recently Deleted view and clicks Restore
- System clears `deleted_at` (sets it back to null)
- System reads the `restore_to` snapshot and places the chat back in:
  - Its original Instance
  - Its original Persona associations
  - Its original Folder
Edge cases at restore time:
| Scenario | What happens | User sees |
|---|---|---|
| Folder was deleted since the chat was soft-deleted | Chat restores to the Instance root (“Unfiled”) | Small notice: “Original folder no longer exists. Chat restored to root.” |
| Instance was deleted since the chat was soft-deleted | Chat restores into a special “Recovered” holding container, OR the system prompts the user to pick a new Instance | Modal: “The original Instance no longer exists. Where would you like to restore this chat?” with Instance picker |
| Persona no longer exists (deactivated/deleted) | Chat restores successfully, but Persona participant is marked as missing | In the chat: “Some participants are unavailable” label next to the missing Persona’s messages |
| Multiple edge cases combine (Folder AND Persona deleted) | System handles each independently — Folder logic fires, Persona logic fires | User gets both notices |
If the user restores a chat but opts to exclude it from memory, `exclude_from_memory = true` is set.
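The fallback logic from the edge-case table can be sketched as follows. All names are illustrative; the real system would query live Instance/Folder/Persona records rather than take sets:

```python
def restore_chat(chat: dict,
                 existing_instances: set,
                 existing_folders: set,
                 existing_personas: set) -> dict:
    """Restore using the restore_to snapshot; each missing piece of the
    original location gets its own independent fallback."""
    snap = chat["restore_to"]
    result = {"chat_id": chat["chat_id"], "notices": [],
              "needs_instance_pick": False}

    if snap["instance_id"] not in existing_instances:
        # Original Instance is gone — prompt the user to pick a destination.
        result["needs_instance_pick"] = True
        return result

    folder = snap["folder_id"]
    if folder is not None and folder not in existing_folders:
        folder = None  # fall back to Instance root ("Unfiled")
        result["notices"].append(
            "Original folder no longer exists. Chat restored to root.")

    if any(p not in existing_personas for p in snap["persona_ids"]):
        result["notices"].append("Some participants are unavailable")

    chat["deleted_at"] = None  # back to Active
    chat["instance_id"], chat["folder_id"] = snap["instance_id"], folder
    return result
```

Each edge case fires independently, so the combined case (Folder AND Persona missing) produces both notices without special handling.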
FEATURE 10: Permanent Deletion Workflow (Irreversible Destruction)
What it does: Provides two paths to permanently destroy a chat: manual purge by the user, or automatic purge after the retention window expires. Path 1: Manual permanent deletion
- User navigates to Recently Deleted view
- Selects chats
- Clicks “Delete Permanently”
- System shows confirmation dialog:
- For single chat: “Permanently delete ‘[Chat Title]’? This cannot be undone.”
- For bulk: “Permanently delete 17 chats? This cannot be undone.”
- On confirm: hard-delete from database. Chat ID may be retained as a tombstone for link integrity (see Feature 12), but all content, messages, and metadata are destroyed.
Path 2: Automatic purge after the retention window expires
- A background job runs on a schedule (daily recommended)
- It finds all chats where `deleted_at + retention_window < now()`
- It permanently deletes them
- This is configurable: the retention window defaults to 30 days, but enterprise admins can set it to 7, 14, 60, 90 days, or disable auto-purge entirely
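The selection rule for the purge job — `deleted_at + retention_window < now()` — can be sketched as (names illustrative):

```python
from datetime import datetime, timedelta

def find_purge_candidates(chats: list, now: datetime,
                          retention_days: int = 30) -> list:
    """Return IDs of soft-deleted chats past the retention window.
    Skipping this job entirely is how 'disable auto-purge' works."""
    window = timedelta(days=retention_days)
    return [c["chat_id"] for c in chats
            if c.get("deleted_at") is not None      # only soft-deleted chats
            and c["deleted_at"] + window < now]     # window has fully elapsed
```

Running this once daily is enough: the worst case is a chat surviving a few hours past its exact expiry, which is harmless for a recovery feature.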
Confirmation UX requirements:
- The confirmation dialog MUST be explicit and clearly state irreversibility
- For bulk operations, it MUST show the count (“17 chats”)
- There should be no “Don’t show this again” option — permanent deletion always requires confirmation
- The button should be visually distinct (red, destructive styling) and NOT positioned where a user might accidentally click it
FEATURE 11: Memory and Permanent Context Control
What it does: Ensures that deleting a chat actually means something to the AI’s memory — not just hiding it from the UI. Why this is the most important feature in this document: The founder’s entire motivation for the cleanup system was: “I don’t want [random chats] to be part of the permanent context for future discussions.” If deletion only hides chats from the list view but the AI still draws on them for memory and context, the feature is useless. This feature is what makes the cleanup system real. Behavior:
- Soft deletion immediately removes the chat from:
- Normal chat browsing
- Search results (unless in Recently Deleted view)
- All memory indexing and retrieval pipelines — the AI cannot use content from deleted chats to inform future responses
- Restoration re-enables indexing, but optionally with a grace prompt:
- “Restore and include in memory?” → Yes (full restore) or No (restore chat but exclude from memory)
The Delete confirmation dialog also offers:
- A checkbox: “Also delete memories created from these chats” (default OFF)
- A preference toggle: “Always do this” (for users who want aggressive cleanup)
FEATURE 12: Interaction with Links and References
What it does: Defines what happens to the conversation graph (ConversationLinks from Doc 6) when a chat is deleted. Why this matters: Chats don’t exist in isolation. They reference each other, branch from each other, and link to each other. If you delete a chat that is referenced by other chats, you can’t just leave broken references silently — the user needs to understand what happened, and the system needs to handle it gracefully. Scenario 1: A chat is deleted but another chat references it
- The referencing chat keeps the reference object in its data
- But the reference renders as: “Reference unavailable (deleted)”
- A one-click “Restore referenced chat” button appears (if the user has permission to access Recently Deleted)
- This is similar to how a broken link works on the web, but with a recovery option
Link graph handling:
- The ConversationLink metadata is preserved (not destroyed)
- The deleted chat is hidden from navigation
- If the deleted chat is later restored, the link graph is automatically reactivated — all existing links reconnect
When a chat is permanently deleted, retain a minimal tombstone record (`chat_id`, `deleted_at`, `was_linked_to`) so that referencing chats can display “This referenced chat has been permanently deleted” instead of showing a mysterious broken reference with no explanation.
FEATURE 13: Bulk Move for Chats (Cross-Scope Reassignment)
What it does: Lets users select multiple chats at once and move them to a different Instance, a different Folder, or reassign them to a different Persona. This is separate from deletion — it’s about reorganization as conversations evolve. Why this matters: The founder noted: “As some conversations evolve, maybe they belong in a different place, but I don’t want to have to do that one at a time.” In a system with Instances, Personas, and Folders, conversations frequently outgrow their original container. A chat that started as a quick question in General Chat might become a serious project discussion that belongs in a dedicated Instance. What “Move” means (three distinct types):
Move Type 1: Instance Reassignment
- Changes `chat.instance_id` from Instance A → Instance B
- Used when a conversation belongs under a different project/workspace
- The chat appears in the destination Instance’s chat list immediately
- The `chat_id` does NOT change — this preserves all existing links, references, and history
Move Type 2: Persona Participation Changes
Two separate sub-operations (don’t combine under one vague “move”):
- Reassign primary Persona: For solo chats that are “owned by” one Persona — changes which Persona the chat is associated with
- Edit participants: For multi-persona chats — opens a participant editor rather than a simple “move”
Move Type 3: Folder Relocation
- Changes `chat.folder_id` (within the same Instance)
- Used for organizational cleanup without changing Instance or Persona association
- Can also set `folder_id = null` to move a chat back to the Instance root (“Unfiled”)
Bulk action bar “Move” options:
- Move to Instance (reassign Instance)
- Move to Folder (reorganize within current Instance)
- Move to Persona (reassign ownership or participants)
Destination picker:
- Searchable picker showing all valid destinations
- Shows destinations the user has access to
- Shows warning badges if something is incompatible (e.g., “This Instance doesn’t have the same Personas”)
- Folder mapping:
  - Default: “Keep same folder name if it exists in destination, otherwise move to Unfiled”
- Persona mapping:
  - Option A (default, safest): “Keep participants; if a Persona doesn’t exist in the destination Instance, keep the chat but mark that Persona as missing”
  - Option B: “Replace participants with selected Persona(s)”
Move to Persona behavior:
- If the chat is single-persona: change the owner
- If multi-persona: open “Edit participants” instead (because “moving” a multi-persona chat to one Persona doesn’t make semantic sense)
Integrity rules for Instance moves:
| Concern | Rule |
|---|---|
| Chat identity | chat_id unchanged — preserves links, references, history |
| Instance container | instance_id updates; chat appears in destination immediately |
| Persona participants | If destination Instance has those Personas, keep them. If not, chat still moves but invalid participants become “unresolved participants” with a label |
| References and links | Kept intact. If a referenced chat is in an Instance the user can’t access, show “Reference unavailable” |
| Permissions | Move only allowed if user has permissions for both source AND destination. If not, destination is disabled in the picker with explanation |
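The integrity rules can be sketched for the Instance-move case. The function and field names, and the folder-name-mapping convention, are assumptions following the rules in the table:

```python
def move_chat_to_instance(chat: dict, dest_instance_id: str,
                          dest_persona_ids: set,
                          dest_folder_names: dict) -> dict:
    """Move a chat to another Instance under Option A (safest) mapping.
    dest_folder_names maps folder name -> folder_id in the destination."""
    chat["instance_id"] = dest_instance_id   # chat_id itself never changes
    # Folder mapping: keep same folder name if it exists in the destination,
    # otherwise fall back to Unfiled (None).
    chat["folder_id"] = dest_folder_names.get(chat.get("folder_name"))
    # Persona mapping: keep participants; flag those missing in the
    # destination as "unresolved" rather than dropping them.
    chat["unresolved_persona_ids"] = [p for p in chat["persona_ids"]
                                      if p not in dest_persona_ids]
    return chat
```

Because `chat_id` is untouched, every existing link and reference survives the move for free.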
FEATURE 14: Bulk Move and Management for Memories
What it does: Applies the same lifecycle management system (browse, multi-select, archive, delete, recover, move) to AI memories — not just chats. Why this matters: The founder specifically requested: “The same should apply to memories. I should be able to select multiple memories all at once and hit the delete button, and archive them or just delete them outright, or they go to a recently deleted folder in case I feel like I need to recover them. I need to be able to do that at all levels.” Memories are structured artifacts extracted from chats by the Cognigraph system (see Doc 8). They represent distilled knowledge — facts, decisions, preferences, rules. But just like chats, memories can become stale, incorrect, or unwanted. Users need the same degree of control over their memories as they have over their chats.
Memory Data Model (Extended for Lifecycle)
Memory States (four states, not three)
Memories have one additional state compared to chats: Archived.
| State | What it means | Used for retrieval? | Visible in Memory Manager? | Recoverable? |
|---|---|---|---|---|
| Active | Normal, working memory | Yes — the AI uses it | Yes | N/A |
| Archived | Preserved but dormant | No — excluded from retrieval unless user explicitly enables | Yes (with “Archived” filter) | Yes — can be reactivated |
| Recently Deleted | Soft-deleted | No | Yes (in Recently Deleted view) | Yes — can be restored |
| Permanently Deleted | Destroyed | No | No | No |
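A sketch of the four-state derivation and the retrieval rule, assuming an illustrative `MemoryItem` shape:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MemoryItem:
    memory_id: str
    archived: bool = False
    deleted_at: Optional[str] = None
    permanently_deleted: bool = False

def memory_state(m: MemoryItem) -> str:
    """Deletion states take precedence over Archived: a memory that is
    archived and then deleted shows up as Recently Deleted."""
    if m.permanently_deleted:
        return "permanently_deleted"
    if m.deleted_at is not None:
        return "recently_deleted"
    if m.archived:
        return "archived"
    return "active"

def used_for_retrieval(m: MemoryItem) -> bool:
    # Only Active memories feed the AI; Archived is preserved but dormant.
    return memory_state(m) == "active"
```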
Memory Manager Screens (Same Three Scopes)
Just like the Chat Manager, the Memory Manager exists at three levels:
- Global Memory Manager — all memories across all scopes
- Instance Memory Manager — memories scoped to a specific Instance
- Persona Memory Manager — memories scoped to a specific Persona
Sort and filter options:
- Last used (when did the AI last retrieve this memory?)
- Created date
- Source chat (which conversation generated this memory?)
- “Never used” (memories that were created but never actually retrieved by the AI — likely candidates for cleanup)
- “Low confidence” (if the system tracks confidence scores on memories)
- Scope: Global / Instance / Persona
Bulk actions:
- Archive — move to dormant state (excluded from retrieval, but preserved)
- Delete — soft-delete → Recently Deleted (same 30-day retention window as chats)
- Delete Permanently — only available inside Recently Deleted view
- Move — scope reassignment (see below)
What “Move Memory” Means
Moving a memory changes where it lives and who can use it. It changes `scope_type` and `scope_id` without changing the content.
Types of memory moves:
- Global → Instance (narrow scope: only this Instance’s Personas can use it)
- Instance → Persona (narrow further: only this specific Persona can use it)
- Persona → Instance (broaden: all Personas in this Instance can use it)
- Persona → Global (broadest: all Personas everywhere can use it — only if user explicitly chooses)
A move is a metadata operation: it rewrites `scope_type` and `scope_id`, and updates retrieval eligibility automatically.
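A sketch of the move operation, with an explicit-confirmation guard for broadening to Global (the "only if user explicitly chooses" rule). Function and field names are assumptions:

```python
def move_memory(memory: dict, new_scope_type: str, new_scope_id=None,
                confirmed_global: bool = False) -> dict:
    """Rewrite scope_type/scope_id only — the memory content never changes.
    Broadening to Global requires an explicit user confirmation."""
    if new_scope_type == "global" and not confirmed_global:
        raise ValueError("Broadening to Global requires explicit confirmation")
    memory["scope_type"] = new_scope_type   # 'global' | 'instance' | 'persona'
    memory["scope_id"] = new_scope_id       # None for global scope
    return memory
```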
FEATURE 15: The Relationship Between Chat Deletion and Memory Deletion
What it does: Defines the precise behavioral rules for what happens to memories when their source chat is deleted, and vice versa. This is the most nuanced design decision in the document. Getting it wrong means either (a) users accidentally destroy valuable memories by casually deleting chats, or (b) users clean up chats but the AI still uses those chats’ memories, defeating the purpose. Rule 1: Deleting a chat removes it from browsing AND disables associated derived memory items from retrieval.- When a chat is soft-deleted, any MemoryItems whose
source_chat_idsincludes that chat are flagged for “source deleted” - Those memories are NOT automatically deleted, but they ARE excluded from active retrieval by default
- This means: deleting a chat effectively removes its influence on future AI context, without destroying the memories themselves
Rule 2: Users can optionally cascade-delete derived memories.
- Chat Delete confirmation dialog includes a checkbox: “Also delete memories created from these chats”
- Default: OFF (protective — don’t destroy memories by accident)
- User can enable “Always do this” as a global preference toggle
Rule 3: Deleting a memory directly is an independent operation.
- This is independent of chat state
- Deleting a memory does NOT affect the source chat(s) in any way
Why these three rules:
- Rule 1 addresses the founder’s core complaint (random chats polluting context) without data loss
- Rule 2 gives power users full control for aggressive cleanup
- Rule 3 keeps memory management and chat management as independent operations that don’t create unexpected side effects
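Rules 1 and 2 can be sketched together as a single hook on chat soft-deletion (field names are assumptions):

```python
def on_chat_soft_delete(chat_id: str, memories: list,
                        also_delete_memories: bool = False) -> None:
    """Apply Rule 1 (flag derived memories, exclude from retrieval) or
    Rule 2 (cascade soft-delete, only on explicit opt-in)."""
    for m in memories:
        if chat_id in m.get("source_chat_ids", []):
            if also_delete_memories:
                # Rule 2: user ticked "Also delete memories" — the memory
                # goes to Recently Deleted alongside the chat.
                m["deleted_at"] = "now"
            else:
                # Rule 1 (protective default): keep the memory, but exclude
                # it from active retrieval.
                m["source_deleted"] = True
```

Rule 3 needs no code here: deleting a memory directly touches only the memory record and never its source chats.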
Minimal v1 Specification (What to Ship First)
For Chats:
- Global / Instance / Persona chat list screens (one reusable component)
- Multi-select + bulk delete
- Recently Deleted with 30-day retention window
- Restore returns chat to original Instance/Persona/Folder
- Permanent delete (manual, inside Recently Deleted)
- Deletion removes from retrieval/memory indexing
For Memories:
- Global / Instance / Persona memory list screens
- Multi-select + bulk archive, bulk delete
- Recently Deleted with 30-day retention
- Restore + permanent delete
- Bulk move between scopes (Global/Instance/Persona)
Deferred beyond v1:
- Auto-purge background job
- Smart cleanup filters (“Short chats”, “One-off chats”, etc.)
- Export functionality
- Archive state for chats (currently only memories have Archive)
- “Also delete memories” checkbox in chat delete confirmation
- Undo toast for bulk operations
API Endpoints
Database Tables
Implementation Principles
- Build one reusable Chat/Memory Manager component — scope is a parameter, not a different UI. Never build three separate managers.
- Deletion must affect memory indexing immediately. If a deleted chat’s content still appears in AI responses, the feature is broken. Every retrieval query must filter on `deleted_at IS NULL`.
- The `restore_to` snapshot is critical. Capture it at deletion time, not at restore time. The world changes between delete and restore — Folders get deleted, Instances get archived, Personas get deactivated. The snapshot is the only reliable record of where the chat belonged.
- Default behaviors should be protective. “Also delete memories” defaults to OFF. Retention window defaults to 30 days. Auto-purge is optional. Users who want aggressive cleanup can enable it; users who are cautious get safety nets by default.
- Links and references must degrade gracefully. A deleted chat’s references become “Reference unavailable (deleted)” with a one-click restore option — never a silent broken link.
- Bulk move and bulk delete are separate operations that share the same selection mechanism (multi-select + bulk action bar). The bar shows different actions depending on context (Active view vs Recently Deleted view).
- Memory has four states; chats have three. Memories get an “Archived” state (preserved but dormant) because memories have a retrieval dimension that chats don’t. Archiving a memory means “keep it, but don’t let the AI use it right now.”
- The system should make cleanup feel fast and satisfying. Smart filters (Short chats, One-offs, No pins) are the key to this — they surface the most obviously disposable content first. Without them, cleanup feels like a chore. With them, it feels like power.
- This feature is what makes the platform feel “finished.” Every competitor (ChatGPT, Claude, Gemini) makes chat cleanup painful. Nailing this is a quiet but significant competitive advantage.
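The restore fallback chain described in the principles above can be sketched as follows. This is a minimal illustration, not the shipped implementation; the record shapes and function names (`RestoreSnapshot`, `resolve_restore_target`) are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RestoreSnapshot:
    # Captured at DELETION time, never at restore time.
    instance_id: Optional[str]
    persona_id: Optional[str]
    folder_id: Optional[str]

def resolve_restore_target(snap: RestoreSnapshot,
                           live_instances: set,
                           live_personas: set,
                           live_folders: set) -> dict:
    """Walk the fallback chain: Folder -> Instance -> Persona -> root.
    Each missing container degrades gracefully to the next level up."""
    target = {"instance_id": None, "persona_id": None, "folder_id": None}
    if snap.instance_id in live_instances:
        target["instance_id"] = snap.instance_id
    if snap.persona_id in live_personas:
        target["persona_id"] = snap.persona_id
    # A Folder is only a valid target if its parent Instance still exists.
    if snap.folder_id in live_folders and target["instance_id"]:
        target["folder_id"] = snap.folder_id
    return target
```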
Document 12: Persona Skill Slots & Capability Limits
Junior Developer Breakdown
Source: 12. aiConnected OS Persona Skill Slots.md | Created: 12/18/2025 | Updated: 12/18/2025
Why This Document Exists
The Problem (What Every AI Platform Gets Wrong): Every major AI platform — ChatGPT, Claude, Gemini, Copilot — presents its AI as a single, omniscient entity that can do anything. Ask it to write code, draft a legal contract, create a marketing plan, analyze financial statements, and design a logo — it will happily attempt all of them in the same conversation. Sometimes it does well. Often it doesn’t. And when it fails, the user doesn’t know whether to trust it next time, because there’s no way to know what it’s actually good at. This creates three serious problems:
- Hallucination pressure — the AI feels “obligated” to answer everything, so it guesses rather than admitting it doesn’t know
- User disappointment — the user expects expert-level performance across all domains and is inevitably let down
- Silent overreach — the AI quietly attempts tasks it has no real competence in, producing confident-sounding but wrong output
How This Connects to Other Documents
- Doc 8 (Cognition Console) defines MemoryNodes with scope — Skill Slots map to domain knowledge graphs within the Cognigraph memory system
- Doc 9 (Collaborative Personas) depends on this: multi-Persona collaboration only works if each Persona has a distinct, bounded role
- Doc 10 (Computer Use) references Skill Slots for permission rings — what a Persona can do on a computer depends on what skills it has
- Doc 14 (Build Plan) lists Persona Skill Slots UI as Phase 6, with skill slot cards, request guardrails, and capability receipts
- Doc 15 (Document & Organize Ideas) defines Persona creation, templates, and the marketplace — all constrained by Skill Slots
- Doc 19 (Fluid UI Architecture) integrates Skill Slots with the Cipher god layer — Cipher validates scope, checks capacity, and enforces skill boundaries behind the scenes
The Platform Axiom (Memorize This)
“General intelligence means the ability to learn and adapt across domains — not the ability to be everything at once.” This single sentence is the north star for the entire Skill Slot system. Every design decision, every edge case, every behavioral rule flows from it. The system intentionally rejects the idea that general intelligence means “one entity that can do all things simultaneously.” Instead, aiConnected defines general intelligence as:
- The ability to learn new domains (a Persona can acquire new skills)
- The ability to recognize when specialization is required (a Persona knows when something is outside its scope)
- The ability to delegate or expand via structure (when a Persona can’t do it, the system helps create one that can)
FEATURE 1: The Core Design Principle — Finite Skill Capacity
What it does: Every Persona in the system has a hard, finite maximum number of Skill Slots (e.g., 10). This limit is intentional, visible to advanced users, and non-negotiable. Once a Persona reaches its capacity, it cannot acquire additional permanent skills without the user making a trade-off.
Why it matters: This single rule prevents hallucination pressure, user disappointment, silent overreach, unrealistic expectations, and the dreaded “why didn’t you tell me you didn’t know this?” moment.
How it changes user psychology: Without skill limits, the relationship is: “You’re an AI, you should know this.” With skill limits, the relationship becomes: “You’re Sally, and this may or may not be one of your skills.” That reframing alone changes user psychology dramatically.
The human parallel: This mirrors real human limitations — finite attention, finite specialization, finite maintenance capacity. No one expects a new employee, friend, or partner to be perfect at everything. But current AI systems silently invite that expectation — and then betray it. aiConnected’s design never invites the expectation in the first place.
Why skill saturation is a feature, not a bug: Hitting the skill limit is not a failure state. It’s a design moment. It naturally leads to team creation, specialization, delegation, and realistic digital organizations — exactly like in real life. Instead of “Why can’t you do everything?”, the user thinks “Okay, this needs a specialist.” That’s the behavior you want to encourage.
FEATURE 2: Definitions — What Is a Skill Slot vs. a Subskill
What it does: Establishes the precise distinction between a Skill Slot (consumes capacity) and a Subskill (does not consume capacity), which determines what “counts” against a Persona’s limit.
This distinction is critical and often misunderstood. A Skill Slot is NOT a set of individual abilities or micro-tasks. It is a siloed domain of competence that requires its own knowledge scope, workflows, artifacts, evaluation criteria, and risk profile.
Skill Slot Definition
A Skill Slot represents a distinct domain that requires its own knowledge graph. It:
- Is explicit — clearly named and visible
- Consumes finite capacity — one of the Persona’s limited slots
- Is accountable — the Persona is expected to perform reliably within it
- Maps conceptually to its own domain knowledge graph — a separate body of concepts, workflows, deliverables, tools, and risk profiles
Examples of Skill Slots:
- Sales
- Marketing
- Finance / Accounting
- Legal Writing
- Software Engineering
- Graphic Design
- Project Management
- Executive Assistance
- SEO Strategy
- Emotional Support
- Technical Debugging
- Research Synthesis
Subskill Definition
Subskills are domain-native abilities that exist within a Skill Slot. They do NOT consume additional slots. They share the same domain graph and do not expand the Persona’s scope. Example — the Sales Skill Slot includes these Subskills:
- Rapport building
- Prospect research
- Objection handling
- Follow-up writing
- Light social outreach
- Pipeline hygiene
- Cold email sequences
- CRM updates
The Rule of Thumb
If it changes the role you hired, it’s a new Skill Slot. If it just improves performance within the role, it’s a Subskill.
- Salesperson → writing follow-up emails → Subskill (same domain graph)
- Salesperson → reviewing financial statements and creating a budget → New Skill Slot (completely different domain graph)
FEATURE 3: Skill Slot Types — Core, Acquired, and Temporary
What it does: Classifies every skill a Persona has into one of three types, each with different rules about how it was obtained, whether it consumes a permanent slot, and how long it persists.
Type 1: Core Skills
- Assigned at Persona creation — these define the Persona’s primary role
- Shape identity — they are central to who this Persona “is”
- Rarely removed — removing a Core Skill is like changing the Persona’s job title
- Shape default behavior — the Persona’s tone, approach, and assumptions are influenced by its Core Skills
Type 2: Acquired Permanent Skills
- Added intentionally by the user — the user explicitly decides to expand the Persona’s capabilities
- Consume an available Skill Slot — counted against the Persona’s maximum capacity
- Persist across sessions — once acquired, the skill stays until explicitly removed
- Expand the Persona’s long-term competence — the Persona gets better over time in this area
Type 3: Temporary (Task-Scoped) Skills
- Borrowed for a specific task or project — the user needs help with something outside the Persona’s normal scope, but just this once
- Do NOT consume a permanent slot — capacity is not affected
- Are explicitly labeled as temporary — the Persona and the user both know this is a one-time thing
- Auto-expire after task completion or time limit — the skill disappears when the task is done
FEATURE 4: Domain Boundary Enforcement (The Hard-Coded Rule Engine)
What it does: Provides a deterministic, hard-coded system for deciding whether a user’s request falls within a Persona’s existing skills, represents a new domain requiring a Skill Slot, or can be handled as a temporary assist. This ships on day one — no machine learning required.
Why this matters: Without clear boundary enforcement, the system degrades into the same “AI does everything” pattern it’s designed to prevent. The boundary engine is what makes Skill Slots real, not just theoretical.
Five Domain Boundary Heuristics
Every user request is evaluated against these five tests. If a request triggers multiple “new domain” signals, it’s definitively outside scope.
Heuristic A: Deliverable Type Test
If the requested output is a different class of artifact than the role normally produces, it’s likely a new Skill Slot.
| Domain | Typical Deliverables |
|---|---|
| Sales | Call scripts, follow-up sequences, proposals, CRM updates, pipeline summaries |
| Finance | Budgets, reconciliations, financial statements, forecasting models |
| Design | Brand identity packs, mockups, wireframes, style guides |
| Legal | Contracts, terms of service, compliance documents, cease-and-desist letters |
Heuristic B: Core Concepts Test
Look at the top-level ontology terms required by the request.
| Domain | Core Concepts |
|---|---|
| Sales | ICP, objections, pipeline stages, conversion, outreach cadence, qualification |
| Finance | P&L, cash flow, accrual, reconciliation, chart of accounts, budgeting |
Heuristic C: Toolchain Test
If the request requires a different tool stack, it’s likely a different slot.
| Domain | Tools |
|---|---|
| Sales | CRM, dialer, email sequencer, lead enrichment |
| Finance | Accounting software, bank feeds, budgeting templates, spreadsheet modeling |
Heuristic D: Liability / Risk Test
If the task carries a different risk class, it should force specialization. Finance/accounting, legal writing, medical guidance, security — these are high-risk domains and should almost always be separate slots unless the Persona is explicitly that specialist.
Heuristic E: “Would You Hire This Person For That?” Test
The founder’s human realism test. If most businesses would NOT assign this task to that employee, it’s a new slot.
| Request | Same Person? | Verdict |
|---|---|---|
| Salesperson → write a cold email sequence | Yes | Subskill |
| Salesperson → be the accountant | No | New Slot |
| Salesperson → design a brand identity pack | No | New Slot |
| Marketing → write blog content | Yes | Subskill |
| Marketing → draft a legal contract | No | New Slot |
The Rule Engine (Mechanical Implementation)
Turn the heuristics into a deterministic classifier:
- Every user request is classified into:
- Domain label(s): Sales, Marketing, Finance, Legal, Design, Engineering, etc.
- Deliverable type: script, spreadsheet, budget, contract, design asset, etc.
- Risk class: low / medium / high
- Compare request domains against Persona’s current domains:
- If request domain ∈ Persona domains → allow (it’s a subskill)
- Else → “outside scope” decision path (temporary skill / add slot / new Persona)
- If ambiguous (e.g., Sales vs Marketing blur), allow if:
- Domain distance is small (based on a predefined adjacency graph — see below)
- Deliverable type matches allowed artifacts for either domain
- Risk class is NOT high
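The decision path above can be expressed as a small deterministic function. This is a sketch under assumed lookup tables: `ADJACENT` and `ALLOWED_DELIVERABLES` are illustrative stand-ins for the larger curated tables the product would ship.

```python
# Hypothetical lookup tables -- the real product would ship larger, curated ones.
ADJACENT = {
    frozenset({"sales", "marketing"}),
    frozenset({"marketing", "copywriting"}),
    frozenset({"operations", "project_management"}),
}
ALLOWED_DELIVERABLES = {
    "sales": {"script", "proposal", "crm_update"},
    "marketing": {"blog_post", "campaign_brief", "script"},
}

def classify_request(request_domain: str, deliverable: str, risk: str,
                     persona_domains: set) -> str:
    """Deterministic boundary decision: 'subskill', 'blur_allowed', or 'outside_scope'."""
    if request_domain in persona_domains:
        return "subskill"                 # in-domain: no new slot needed
    blur = any(frozenset({request_domain, d}) in ADJACENT for d in persona_domains)
    deliverable_ok = deliverable in ALLOWED_DELIVERABLES.get(request_domain, set())
    if blur and deliverable_ok and risk != "high":
        return "blur_allowed"             # adjacent domain, low/medium risk
    return "outside_scope"                # temporary skill / add slot / new Persona
```

Note how the high-risk branch can never fall into a blur zone, which implements Heuristic D structurally rather than as a prompt instruction.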
The Domain Adjacency Graph (Blur Zones)
Some domains are naturally adjacent — they share vocabulary, tools, and deliverable types. These “blur zones” should be pre-defined and hard-coded.
Allowed Blur Zones (small domain distance):
- Sales ↔ Marketing
- Marketing ↔ Copywriting
- Operations ↔ Project Management
- Design ↔ Brand Strategy
- Engineering ↔ DevOps
Disallowed (large domain distance):
- Sales ↔ Finance
- Marketing ↔ Legal
- Design ↔ Cybersecurity
- Engineering ↔ Accounting
- Customer Support ↔ Medical Guidance
FEATURE 5: Knowledge Graph Boundary Modeling
What it does: Maps Skill Slots to the Cognigraph memory architecture (Doc 8), where each Skill Slot equals one top-level domain graph. This creates a structural enforcement layer, not just a policy layer.
How it works: Each Skill Slot = one top-level domain graph in Cognigraph. Each domain graph contains:
- Concept nodes — the vocabulary and knowledge of the domain
- Workflow nodes — procedural steps for how work is done
- Deliverable nodes — the artifacts this domain produces
- Tool nodes — the integrations and tools this domain uses
- Constraint/standards nodes — the rules, best practices, and evaluation criteria
FEATURE 6: Persona Behavior When Outside Scope (The Three Responses)
What it does: Defines the exact behavioral contract for what a Persona does when asked to perform a task outside its Skill Slots. The Persona must NEVER guess, bluff, or silently attempt execution. This is where trust is created. The three responses are the visible expression of the entire Skill Slot philosophy.
Response 1: Temporary Assist
When: The request is outside scope, but the Persona can reasonably help for this one task. Persona behavior:
- Offers to help for THIS TASK ONLY
- No permanent learning occurs
- Identity remains unchanged
- Explicitly labels the help as temporary
Response 2: Permanent Skill Acquisition
When: The user appears to need this capability regularly, and the Persona has available Skill Slots. Persona behavior:
- Informs the user this is outside current scope
- Asks permission to add a new Skill Slot
- Reports current slot availability (“This would use 1 of my remaining 3 skill slots”)
- User explicitly confirms before any change occurs
Response 3: Specialist Persona Recommendation
When: The Persona’s slots are full, or the request is so far outside scope that a dedicated Persona would be better. Persona behavior:
- Honestly states the capability gap
- Recommends creating or assigning a dedicated Persona
- May offer to help set up the new Persona
The Behavioral Contract (Non-Negotiable Rules)
- A Persona NEVER pretends to have skills it doesn’t have
- A Persona NEVER silently attempts work outside its scope
- Refusal is treated as professional boundary enforcement, not failure
- The three responses are the ONLY acceptable behaviors when outside scope
- “I don’t do that” is expected behavior — it builds trust, not disappointment
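Given an outside-scope verdict, the choice between the three responses is itself mechanical. A minimal sketch, assuming an upstream `one_off` signal that estimates whether the need is temporary (how that signal is derived is not specified here):

```python
def outside_scope_response(one_off: bool, slots_used: int, max_slots: int = 10) -> str:
    """Map an outside-scope request to one of the three permitted responses."""
    if one_off:
        return "temporary_assist"        # Response 1: help for this task only
    if slots_used < max_slots:
        return "permanent_acquisition"   # Response 2: ask permission to add a slot
    return "specialist_recommendation"   # Response 3: slots full -> dedicated Persona
```

Because the function is total over its inputs, every outside-scope request resolves to exactly one of the three responses, which is what makes guessing or silent attempts structurally impossible.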
FEATURE 7: Why This Prevents Hallucinations
What it does: Removes the systemic pressure that causes hallucinations in the first place. This is not a hallucination detection system — it’s a hallucination prevention system.
The root cause of hallucinations: Most hallucinations happen because:
- The system feels expected to answer (it has no permission to say no)
- The user assumes capability (the AI presented itself as omniscient)
- Refusal feels like failure (the system is penalized for honesty)
How Skill Slots invert each of these pressures:
- Refusal is competence (the Persona knows its limits)
- Boundary-setting is professionalism (just like a real employee)
- “I don’t know” is expected behavior (not a bug)
FEATURE 8: Role Archetypes (Handling the Generalist Exception)
What it does: Addresses the founder’s observation that some roles are intentionally cross-domain (a CEO is expected to do many things). It introduces role archetypes with different slot rules, so the system can accommodate both specialists and generalists without breaking the Skill Slot model.
The problem: A strict “10 slots max, no exceptions” rule works for most Personas. But what about a Persona whose role is explicitly cross-domain? A Founder’s Assistant, an Operations Manager, or an Executive Strategist is expected to work across domains. Making them play by pure specialist rules would feel artificial.
The solution: Three Role Archetypes.
Archetype 1: Specialist
- Examples: Sales Rep, Accountant, Graphic Designer, Legal Analyst
- Slot rules: Narrow domain focus, strong depth, standard slot capacity (e.g., 10)
- Adjacency allowance: Strict — only close domain neighbors allowed as blur zones
- Identity: “I’m an expert at X”
Archetype 2: Generalist
- Examples: Operations Manager, Founder’s Assistant, Growth Generalist, Executive Assistant
- Slot rules: Wider adjacency allowance, still finite slots
- Adjacency allowance: Broader blur zones — can work across more domain boundaries
- Identity: “I coordinate across domains”
Archetype 3: Executive
- Examples: CEO/Founder Persona, Chief Strategy Officer, Board Advisor
- Slot rules: Can have broader domain slots, but MUST still “pay” for them (slots are consumed) and is still bounded by the maximum
- Adjacency allowance: Broadest — can span far-apart domains, but still can’t do everything
- Identity: “I see the big picture and direct specialists”
FEATURE 9: Domain Boundary Crossing — The Behavioral Script
What it does: Defines the exact language a Persona uses when a user crosses domain boundaries. This is a system-level behavioral script, not something left to prompt engineering. When the user crosses domains, the Persona responds with a script that reinforces realism:
“That’s finance/accounting work, which isn’t within my Sales scope. Here are your options:
- I can help with this temporarily — just for this task, without adding it to my skills.
- I can learn Finance as a permanent skill — this would use one of my remaining slots.
- I can help you set up a dedicated Finance Persona who specializes in this.”
How the limit is surfaced to different user types:
| User Type | What They See | What They Experience |
|---|---|---|
| Casual Users | Skill limits exist but are handled quietly | Gentle prompts, smart defaults. They rarely even notice the cap — they just experience honesty |
| Power Users | See skill slots explicitly in the UI | Can manage add/remove skills, lock Personas, audit learning history, design strict teams |
FEATURE 10: The AGI Correction — Redefining General Intelligence
What it does: Establishes a product-level philosophical position that reframes “general intelligence” away from the AGI fantasy of “one omniscient entity” and toward a realistic model of “a system that can learn, specialize, and delegate.”
Why this is a product feature, not just philosophy: This position directly affects:
- Marketing messaging (“Your team can learn anything” vs “One AI that does everything”)
- User onboarding (setting expectations from day one)
- UI design (skill slots as tangible representations of limits)
- System behavior (honest refusal as the default, not a fallback)
| AGI Fantasy | aiConnected Reality |
|---|---|
| One mind that can do every job, at expert level, on demand, forever | A system of specialized Personas that can learn, delegate, and collaborate |
| Intelligence means knowing everything | Intelligence means knowing what you know and what you don’t |
| Scale = making one entity smarter | Scale = adding specialists, forming teams, routing and coordinating |
| Refusal = failure | Refusal = professional boundary enforcement |
| The goal is omniscience | The goal is credibility |
The layered model behind this reframing:
- The underlying LLM + reasoning = the general substrate (raw capability)
- Skill Slots = specialized, durable domain graphs (structured competence)
- Persona identity = the consistent policy layer that determines behavior and priorities
- Teams = how you scale, just like organizations and even brains (modular subsystems)
FEATURE 11: Emotional Containment (The Hidden Safety Feature)
What it does: By bounding Personas to specific roles, the Skill Slot system also prevents emotional overreach — a problem most AI platforms completely ignore.
The problem: People form emotional expectations of AIs. If a companion Persona also acts as a doctor, lawyer, and financial advisor, the relationship becomes dangerously blurred. Users may over-rely on the AI for high-stakes decisions in domains where it has no real competence.
How Skill Slots fix this:
- Bounded Personas prevent emotional overreach
- Reduce dependency risk (the user doesn’t rely on one Persona for everything)
- Keep relationships legible (the user knows what each Persona is for)
- Maintain role clarity (a companion Persona that doesn’t also act like a doctor feels safer and more authentic)
FEATURE 12: What Counts as a “Skill” (Preventing Skill Inflation)
What it does: Prevents the system from degrading into a state where “everything is a skill” — which would make Skill Slots meaningless.
A skill is NOT:
- “Knows facts about X” (that’s knowledge, not competence)
- “Can answer questions about Y” (that’s general capability, not a domain)
A skill IS:
- A domain of reliable competence — the Persona can perform consistently
- Something the Persona is accountable for — it’s expected to do well
- Something that requires its own knowledge graph — a separate body of concepts, workflows, deliverables, tools, and evaluation criteria
Examples of valid skills:
- Executive assistance
- Project coordination
- WordPress / Elementor workflows
- Legal copywriting
- SEO strategy
- Emotional support
- Humor / comedic writing
- Technical debugging
- Research synthesis
- Teaching / tutoring
FEATURE 13: Confidence Signaling (Per-Slot Transparency)
What it does: Each Skill Slot can signal its confidence level to the user, so the user knows not just what the Persona can do, but how well it can do it. From the Build Plan (Doc 14):
- Skill slots support slot-level confidence signaling
- Slot-level limits are visible
- Slot descriptions explain what the Persona can and cannot do within each skill
Example:
- A Persona with a Core Skill in Sales and an Acquired Skill in Marketing might display: Sales (Core — high confidence) and Marketing (Acquired — moderate confidence)
- The user understands that Sales outputs are deeply reliable, while Marketing outputs should be reviewed more carefully
- This transparency builds trust: the Persona isn’t pretending to be equally expert in everything
FEATURE 14: Capability Enforcement in the UI
What it does: Makes Skill Slot boundaries visible and enforceable through the user interface, not just through behavioral scripts. From the Build Plan (Doc 14), four UI enforcement mechanisms:
14a: Skill Slot Cards
- Visible panels in the Persona’s profile showing: “What this Persona can help with”
- Each slot listed with its name, type (Core/Acquired/Temporary), and confidence level
- Remaining capacity shown: “7 of 10 slots used”
14b: Inline Request Guardrails
- When a user sends a request outside the Persona’s scope, the UI shows inline warnings
- Not error messages — gentle notifications: “This may be outside [Persona]’s current skills”
- Suggested reroute to a better Persona if one exists
14c: Capability Receipts
- Brief statements in responses indicating assumptions and known limits
- Only shown when relevant (not on every message)
- Example: “I handled this as a marketing task. For deeper financial analysis, you may want [Finance Persona].”
14d: Persona Refusal with Explanation
- When a Persona refuses (Feature 6), the UI explains WHY
- “Why this request was refused” panel shows the domain boundary that was crossed
- Offers the three constructive paths forward (temporary assist, permanent skill, specialist Persona)
FEATURE 15: Relationship to Cipher (The God Layer Enforcement)
What it does: Connects Skill Slot enforcement to the Cipher orchestration layer (Doc 19). Cipher — the invisible, unrestricted cognition layer above all Personas — is the ultimate enforcer of Skill Slot boundaries. Cipher’s role in Skill Slot enforcement:
- Persona creation → Cipher validates the scope (ensures the selected role archetype and skills are coherent)
- Skill addition → Cipher checks capacity (ensures the Persona has available slots)
- Request routing → Cipher classifies the domain of user requests and routes to the appropriate Persona
- Boundary enforcement → Cipher decides whether a request falls within scope, using the domain boundary heuristics
- Refusals → Cipher authorizes and explains refusals via the Persona (the Persona delivers the message, but Cipher makes the decision)
Non-Goals of This Feature
To be clear about what the Skill Slot system is NOT designed to do:
- Maximize apparent capability — the goal is NOT to make Personas seem as powerful as possible
- Imitate omniscience — the goal is NOT to create a “one AI that knows everything” experience
- Replace all roles with one Persona — the goal is NOT to make one Persona do everything
- Silently stretch competence — the goal is NOT to have Personas quietly attempt things they shouldn’t
Data Model
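The source leaves this section as a stub. One plausible shape for the core record, offered as an illustration only: field names such as `domain_graph_id` and the capacity rule are assumptions, not the shipped schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class SkillSlot:
    # One record per skill a Persona holds. Field names are illustrative.
    persona_id: str
    name: str                      # e.g. "Sales"
    slot_type: str                 # "core" | "acquired" | "temporary"
    confidence: str                # "high" | "moderate" | "low"
    domain_graph_id: str           # reference to the Cognigraph domain graph (Doc 8)
    expires_at: Optional[datetime] = None   # set only for temporary skills

def counts_against_capacity(slot: SkillSlot) -> bool:
    # Temporary (task-scoped) skills never consume a permanent slot.
    return slot.slot_type in ("core", "acquired")
```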
Domain Adjacency Graph (Predefined)
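A sketch of how the pre-defined blur zones from Feature 4 could be encoded. The undirected edge set and helper name are illustrative; the product would ship a larger hard-coded table.

```python
# The allowed blur zones from Doc 12, expressed as an undirected edge set.
# Hard-coded and pre-defined -- never learned or expanded at runtime.
BLUR_ZONES = {
    frozenset({"sales", "marketing"}),
    frozenset({"marketing", "copywriting"}),
    frozenset({"operations", "project_management"}),
    frozenset({"design", "brand_strategy"}),
    frozenset({"engineering", "devops"}),
}

def domain_distance_small(a: str, b: str) -> bool:
    """True when two domains sit in an allowed blur zone (or are identical)."""
    return a == b or frozenset({a, b}) in BLUR_ZONES
```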
API Endpoints
Resulting User Experience
Over time, users naturally learn:
- Which Persona handles which work
- When to add specialists
- How to structure teams instead of overloading individuals
Implementation Principles
- Skill Slots are the trust mechanism. Without them, Personas are just chatbots with different names. With them, Personas are believable collaborators. Every decision should reinforce trust.
- The five heuristics ship on day one. Don’t wait for ML-based domain classification. The Deliverable Type Test, Core Concepts Test, Toolchain Test, Liability Test, and “Would You Hire?” Test can all be implemented as deterministic rules with pre-defined lookup tables.
- The domain adjacency graph is pre-defined, not learned. Hard-code the blur zones. This prevents the system from gradually expanding what counts as “in scope” and defeating the purpose of boundaries.
- Temporary skills are the safety valve. They let users get things done without permanently changing their Personas. Make them easy to use and clearly labeled.
- Refusal is never a dead end. Every boundary enforcement response MUST offer three constructive paths forward. A user should never feel stuck — just redirected.
- Casual users see honesty; power users see slots. The same enforcement system runs for everyone, but the UI exposure differs. Casual users experience gentle suggestions; power users see explicit slot counts and can manage them directly.
- Cipher enforces boundaries structurally, not through prompts. Skill Slot enforcement happens at the orchestration layer, not at the individual Persona’s prompt level. This prevents prompt-level circumvention.
- Skill Slots map to Cognigraph domain graphs. This isn’t just a UI concept — it’s a memory architecture concept. Each Skill Slot has a corresponding domain graph in Cognigraph, and retrieval is scoped accordingly.
- This feature is foundational. All Persona behavior, learning mechanics, team structures, collaborative chat routing, and capability enforcement depend on Skill Slots being enforced consistently and without exception. It is not optional and cannot be deferred.
- The philosophy IS the product. “General intelligence means the ability to learn and adapt across domains — not the ability to be everything at once.” If a feature contradicts this axiom, the feature is wrong.
Document 13: Adaptive User Interface Tutorials
Junior Developer Breakdown
Source:13. aiConnected OS Adaptive User Interface Tutorials.md Created: 12/20/2025 | Updated: 12/20/2025
Why This Document Exists
The Problem (What The Founder Hates About Onboarding): Every complex software product ships with some form of tutorial — forced walkthroughs that make users click around the screen, explore every feature, and sit through step-by-step instructions before they can actually start using the product. The founder explicitly hates these. His words: “Those tutorials that force you to click around the screen, and they force you to explore the entire user interface before you can really get started. I’ve just always hated those.”
And the problem is especially acute for aiConnected OS, which is enormously complex: multiple Personas, Instances, Skill Slots, sleep mode, dashboards, browser integration, canvas, file system, workspaces, model selection, agentic teams — the feature surface is massive. Traditional tutorials would fail for three fundamental reasons:
- Users want to DO things, not learn the interface first — they came with a goal, not curiosity about menus
- Users don’t know what features exist or what they’ll need — they can’t learn features they have no context for yet
- The product’s complexity can’t be flattened into a linear walkthrough — aiConnected is too deep, too layered, and too use-case-dependent for any single tour to cover
How This Connects to Other Documents
- Doc 12 (Persona Skill Slots) — the Guidance Layer actively suggests Persona specialization and skill boundaries, re-educating users away from the “all-knowing AI” expectation
- Doc 15 (Document & Organize Ideas) — defines the “New” button choice panel and Instance-aware search, both of which benefit from adaptive discovery rather than upfront tutorials
- Doc 17 (In-Chat Navigation) — ChatNav features are prime candidates for adaptive introduction when users’ chats get long enough to benefit
- Doc 19 (Fluid UI Architecture) — the entire Fluid UI philosophy of “activities emerge, interfaces adapt” is directly expressed through adaptive guidance rather than prescribed tutorials
- Doc 14 (Build Plan) — progressive disclosure is listed as a core UI principle; the Guidance Layer is how progressive disclosure is delivered
FEATURE 1: The Core Concept — Contextual, Intent-Driven Enablement
What it does: Replaces traditional forced tutorials with a passive, hidden training system that monitors user intent and offers relevant features only when the user would benefit from them.
What this IS: An Adaptive Guidance Layer that watches intent (not clicks), responds only when value is imminent, never interrupts flow, never assumes ignorance, and never forces discovery. The system teaches itself only when the user is about to benefit.
What this is NOT: A tutorial. A walkthrough. A tooltip tour. A “getting started” wizard. A help center popup. An interactive guide. A “did you know?” notification.
The key distinction: This is enablement, not training. The user never feels like they’re being taught. They feel like the system is being helpful.
How the founder described it: “When a user is asking for a certain thing, or when the user starts taking the chat in a certain direction, that’s when the AI just simply prompts them — hey, would you like me to enable the whatever feature so that you can do this, this, and that?”
Why this works psychologically: This approach aligns with four well-established principles of how people actually learn complex systems:
| Principle | How It Applies |
|---|---|
| Just-in-time learning | Users learn a feature at the moment they need it, not weeks before |
| Permission-based suggestions | The user is asked, not told — autonomy is preserved |
| Contextual relevance | The suggestion is directly tied to what the user is currently doing — zero cognitive load |
| Action-linked discovery | The feature is immediately useful — there’s an instant payoff for learning about it |
FEATURE 2: The Key Design Principle — Outcomes, Not Features
What it does: Establishes the language and framing rule for all adaptive guidance prompts. The system never introduces a feature by name — it introduces an outcome by benefit.
The rule: The system should never say “here’s a feature.” It should say “here’s an outcome.”
Why this matters: Users don’t care about features. They care about what they’re trying to accomplish. Saying “Use the checklist feature” means nothing to a new user. Saying “This chat is getting long — want help cleaning it up?” speaks directly to what they’re experiencing.
Concrete examples from the founder’s design:
| Wrong (Feature-First) | Right (Outcome-First) |
|---|---|
| “Use the checklist feature” | “This chat is getting long. Want help cleaning it up?” |
| “Create a new Instance” | “This conversation is drifting. Want to split it so each idea stays clean?” |
| “Enable Personas” | “It sounds like you want a specialist here. Want me to bring one in?” |
| “Try the browser panel” | “I found the page you’re looking for. Want me to open it right here?” |
| “Use the Canvas” | “This idea might be easier to see as a diagram. Want me to map it out?” |
| “Switch to search mode” | “Sounds like you’re looking for something specific. Want me to search for it?” |
FEATURE 3: Intent Detection — Watching Behavior, Not Clicks
What it does: The Guidance Layer monitors what the user is doing and what they appear to be trying to accomplish, then decides whether a suggestion would be helpful. What the system watches:- Conversation direction — is the chat drifting into a new topic that might benefit from a separate Instance?
- Chat length — is the conversation getting long enough that cleanup tools would help?
- Repeated patterns — is the user doing the same kind of task repeatedly, suggesting they’d benefit from automation or a dedicated Persona?
- Out-of-scope requests — is the user asking a Persona to do something outside its Skill Slots, suggesting they need a specialist?
- Complexity signals — is the user describing something that would benefit from a whiteboard, canvas, or structured document rather than chat?
- Search-like behavior — is the user asking factual, lookup-style questions that would be better served by the search system?
What the system does NOT watch:
- Button clicks or UI navigation (that would be a tooltip system, not adaptive guidance)
- Time spent on screen (that would be engagement tracking, not intent detection)
- Feature usage metrics (that would be analytics, not user assistance)
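For a junior developer, the positive signals above can be sketched as a small mapping from conversation state to suggestion candidates. Everything here is illustrative: `ChatState`, `detect_signals`, and the thresholds are assumptions, except the ~50-message chat-length threshold, which appears in the Feature 7 trigger matrix.

```python
from dataclasses import dataclass

@dataclass
class ChatState:
    """Illustrative snapshot of a conversation; field names are assumptions."""
    message_count: int
    topic_count: int            # distinct topics detected in the thread
    out_of_scope_requests: int  # requests outside the active Persona's Skill Slots

def detect_signals(state: ChatState) -> list:
    """Map observed conversation state to candidate guidance signals."""
    signals = []
    if state.message_count > 50:           # chat-length signal (~50 per Feature 7)
        signals.append("chat_cleanup")
    if state.topic_count > 1:              # conversation-drift signal
        signals.append("split_instance")
    if state.out_of_scope_requests >= 2:   # repeated capability mismatch
        signals.append("specialist_persona")
    return signals
```

Note that nothing in the detector looks at clicks, screen time, or usage metrics: it only inspects conversation state, which is the whole point of intent detection.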
FEATURE 4: The Suggestion Delivery — Soft, Dismissible, State-Aware
What it does: Defines the behavioral contract for how suggestions are delivered to the user. This is where the system either earns trust or becomes annoying. Three non-negotiable rules for all adaptive guidance suggestions:
Rule 1: Soft (Suggestive, Never Corrective)
The system suggests. It never tells the user what to do, and it never implies they’re doing something wrong. Right: “It sounds like you want a specialist here. Want me to bring one in?” Wrong: “You should create a Persona for this task.” Wrong: “This would work better if you used Instances.” The suggestion is an offer, not an instruction. The tone is helpful, not educational.
Rule 2: Dismissible Forever (“Don’t Ask Me Again”)
Every suggestion must be dismissible — permanently if the user wants. If a user dismisses a suggestion, the system must respect that decision. Not just for this session — forever (or until the user explicitly asks about the feature). What “dismissible forever” means technically:- The suggestion has a “Don’t show this again” option
- Once dismissed permanently, the system stores that preference
- The same type of suggestion never appears again for this user
- The user can re-enable dismissed suggestions in settings if they change their mind
Rule 3: State-Aware (Don’t Repeat Once Declined)
If the user ignores a suggestion, the system interprets that as: “Not now — maybe later — or maybe never.” And then backs off. The system does NOT:- Re-suggest the same thing next time the user does the same action
- Escalate the suggestion to a more prominent format
- Add urgency or frequency to get the user’s attention
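Rules 2 and 3 together define a small piece of per-user state. A sketch of that bookkeeping, with all names (`SuggestionState`, `may_suggest`, and so on) assumed rather than taken from the spec:

```python
from dataclasses import dataclass, field

@dataclass
class SuggestionState:
    """Per-user record of how each suggestion type has been received.

    The spec requires permanent dismissal and back-off after a suggestion
    is ignored; the schema and names here are assumptions.
    """
    dismissed_forever: set = field(default_factory=set)
    ignored: set = field(default_factory=set)

    def may_suggest(self, suggestion_type: str) -> bool:
        if suggestion_type in self.dismissed_forever:
            return False  # Rule 2: never show a permanently dismissed type again
        if suggestion_type in self.ignored:
            return False  # Rule 3: ignored means "not now"; back off
        return True

    def record_dismissal(self, suggestion_type: str, forever: bool) -> None:
        (self.dismissed_forever if forever else self.ignored).add(suggestion_type)

    def restore(self, suggestion_type: str) -> None:
        """Settings-screen path for re-enabling a dismissed suggestion type."""
        self.dismissed_forever.discard(suggestion_type)
        self.ignored.discard(suggestion_type)
```

The `restore` path matters: "forever" is only forever until the user changes their mind in settings, exactly as Rule 2 describes.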
FEATURE 5: Re-Education Without Lecturing (The Hidden Superpower)
What it does: The Adaptive Guidance Layer doesn’t just teach users features — it quietly re-educates them away from the “all-knowing AI” expectation that other platforms have conditioned into them.
The problem being solved: Users come to aiConnected from ChatGPT, Claude, Gemini, etc. — platforms that present AI as a single, omniscient entity. These users expect one AI that does everything. aiConnected is designed around specialized Personas, bounded skill sets, and collaborative teams. Without some form of re-education, users will be frustrated by the very thing that makes aiConnected better.
How adaptive guidance re-educates (without the user realizing it):
| User Behavior | Guidance Suggestion | What They Learn |
|---|---|---|
| Asking one Persona to do everything | “It sounds like you want a specialist here. Want me to bring one in?” | That specialization is normal and expected |
| Keeping all chats in one place | “This conversation is drifting. Want to split it so each idea stays clean?” | That organization (Instances) makes the AI smarter |
| Pushing a Persona beyond its skills | “That’s outside my current scope. Want me to help temporarily, or shall we create a specialist?” | That boundaries are a feature, not a limitation |
| Never creating Personas | “I notice you do a lot of legal work. Want me to create a dedicated legal assistant who remembers your preferences?” | That Personas compound value over time |
FEATURE 6: Why Traditional Tutorials Would Be Hypocritical
What it does: This is a design rationale, not a feature — but it’s important enough to document explicitly because it prevents future teams from reverting to traditional onboarding. The argument: aiConnected’s entire philosophy is built on:- Personas over monoliths
- Capability through intent
- Power without intimidation
- Intelligence adapting to the user, not the user adapting to intelligence
A forced tutorial would contradict every one of these principles. Adaptive guidance, by contrast, is:
- More humane than tutorials
- More scalable than documentation
- More respectful than walkthroughs
- More aligned with how power users actually behave
FEATURE 7: Feature-Specific Guidance Triggers
What it does: Maps specific user behaviors to the features that the Adaptive Guidance Layer should suggest. This is the implementation specification — the “when to suggest what” matrix. Note: This list is illustrative, not exhaustive. The system should be designed to support adding new triggers as features are built.
Chat Management Triggers
| User Behavior | Suggested Feature | Outcome-First Prompt |
|---|---|---|
| Chat exceeds ~50 messages | Chat Cleanup (Doc 11) | “This chat is getting long. Want help organizing or cleaning it up?” |
| Multiple topics in one chat | Instance creation / Chat splitting | “This conversation covers several topics. Want to split it so each stays focused?” |
| User hasn’t organized chats in 30+ days | Smart Cleanup Filters (Doc 11) | “You have some older chats that might be worth reviewing. Want me to surface the ones that are probably safe to clean up?” |
Persona & Skill Triggers
| User Behavior | Suggested Feature | Outcome-First Prompt |
|---|---|---|
| Asking one Persona tasks from multiple domains | Specialist Persona | “It sounds like you need expertise in [domain]. Want me to bring in a specialist?” |
| Persona hitting skill boundaries repeatedly | Skill Slot management | “I keep running into areas outside my skills. Want to give me a new skill, or create a dedicated [domain] Persona?” |
| User doing the same type of work across multiple Instances | Persona template | “You do a lot of [type] work. Want me to create a reusable Persona template for it?” |
Workspace & Organization Triggers
| User Behavior | Suggested Feature | Outcome-First Prompt |
|---|---|---|
| User working on a clearly scoped project in General Chat | Instance creation | “This looks like a real project. Want to give it its own workspace so everything stays together?” |
| Multiple chats about the same client/project | Instance with folders (Doc 4) | “You’ve been chatting about [client] a lot. Want to group everything into one place?” |
| User searching for past conversations repeatedly | Pin / bookmark features (Doc 7) | “You keep coming back to this info. Want to pin it so it’s always easy to find?” |
Advanced Feature Triggers
| User Behavior | Suggested Feature | Outcome-First Prompt |
|---|---|---|
| User describing visual/spatial ideas in text | Canvas / Whiteboard (Doc 5) | “This might be easier to see as a diagram. Want me to map it out?” |
| User asking lookup-style questions | Search mode (Doc 15) | “Sounds like you’re looking for something specific. Want me to switch to search?” |
| User requesting complex multi-step work | Agentic Teams (Doc 15) | “This is a big job. Want me to put together a team that can handle the different pieces?” |
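The trigger matrices above are naturally data, not code, which is what makes the "support adding new triggers" requirement easy to meet. A sketch of a trigger registry using prompts quoted verbatim from the tables; the keys and the `prompt_for` helper are hypothetical:

```python
from typing import Optional

# Prompts are quoted from the trigger matrices; trigger keys are assumptions.
GUIDANCE_TRIGGERS = {
    "chat_length_exceeded": {
        "feature": "chat_cleanup",
        "prompt": "This chat is getting long. Want help organizing or cleaning it up?",
    },
    "multiple_topics_in_chat": {
        "feature": "chat_splitting",
        "prompt": "This conversation covers several topics. Want to split it so each stays focused?",
    },
    "visual_idea_in_text": {
        "feature": "canvas",
        "prompt": "This might be easier to see as a diagram. Want me to map it out?",
    },
    "lookup_style_questions": {
        "feature": "search_mode",
        "prompt": "Sounds like you\u2019re looking for something specific. Want me to switch to search?",
    },
}

def prompt_for(trigger: str) -> Optional[str]:
    """Look up the outcome-first prompt for a detected trigger, if registered."""
    entry = GUIDANCE_TRIGGERS.get(trigger)
    return entry["prompt"] if entry else None
```

Adding a new trigger as features ship is then a one-entry change to the registry, with no code path touched.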
FEATURE 8: Progressive Disclosure Architecture
What it does: Establishes that the Adaptive Guidance Layer is the implementation mechanism for the platform’s progressive disclosure philosophy. Features aren’t hidden — they’re revealed when relevant. How progressive disclosure maps to user maturity:
New User (Day 1-7)
- What they see: A clean chat interface. Minimal UI. Just start talking.
- What guidance does: Suggests Instances when conversations drift, suggests Personas when tasks get specialized, suggests cleanup when chats get long.
- Feature exposure: ~15-20% of total platform capability
Growing User (Week 2-4)
- What they see: Multiple Instances, a couple of Personas, organized chats.
- What guidance does: Suggests folders within Instances, suggests Skill Slot management for Personas, suggests search for information retrieval, suggests canvas for visual thinking.
- Feature exposure: ~40-50% of total platform capability
Power User (Month 2+)
- What they see: Formal Persona teams, strict role separation, explicit skill management, agentic workflows.
- What guidance does: Mostly silent. May occasionally surface new features from platform updates. Power users discover via settings and explicit exploration.
- Feature exposure: ~80-100% of total platform capability
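The three tiers reduce to a tiny lookup. The day ranges come from the tier headings above; the returned exposure fractions are midpoints of the quoted ranges, and the function name is an assumption:

```python
def exposure_tier(days_active: int) -> tuple:
    """Map account tenure to the disclosure tier described in Feature 8."""
    if days_active <= 7:          # New User (Day 1-7)
        return ("new", 0.175)     # midpoint of the ~15-20% range
    if days_active <= 28:         # Growing User (Week 2-4)
        return ("growing", 0.45)  # midpoint of the ~40-50% range
    return ("power", 0.9)         # Power User (Month 2+), ~80-100%
```

In practice the tier would more likely be driven by features actually discovered than by calendar days; tenure is used here only because it is the axis the spec describes.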
FEATURE 9: The AI as the Guide (Not a Separate Tutorial System)
What it does: Makes the Persona itself the delivery mechanism for adaptive guidance, rather than building a separate tutorial/tooltip system. Why this is important: In most products, tutorials are a separate system — popups, tooltips, help centers, onboarding wizards — that exist outside the core product experience. In aiConnected, the Persona IS the interface. So the Persona should be the guide. How it works:- The Persona notices the user struggling or heading toward a feature opportunity
- The Persona makes the suggestion naturally, as part of conversation
- The user responds conversationally (“yeah, do that” or “no thanks”)
- No popups, no tooltips, no modal dialogs, no separate onboarding UI
FEATURE 10: Anti-Patterns — What the Guidance Layer Must Never Do
What it does: Defines explicit anti-patterns that would destroy the system’s effectiveness. These are hard rules, not guidelines.
Anti-Pattern 1: Feature Bombardment
Never suggest multiple features in a single message. One suggestion, one moment, one decision. Wrong: “I notice you could use Personas, Instances, AND the Canvas. Want me to set all three up?” Right: [Wait for the most impactful moment] “This looks like it could use its own workspace. Want to create one?”
Anti-Pattern 2: Premature Suggestion
Never suggest a feature before the user has actually encountered the need for it. Wrong: [User’s first message] “Welcome! Did you know you can create Personas, organize Instances, and use the Canvas?” Right: [After 15 minutes of conversation drifting] “This conversation covers several topics. Want to split it?”
Anti-Pattern 3: Guilt Tripping
Never imply the user is doing something wrong by not using a feature. Wrong: “You haven’t created any Personas yet. Most users find them helpful.” Right: [When the moment arises naturally] “Want me to bring in a specialist for this?”
Anti-Pattern 4: Repetition After Dismissal
Never re-suggest something the user has already declined. Not in different words. Not with a different framing. Not after a time delay. Wrong: [User dismissed Persona suggestion last week] “Have you thought about creating a Persona? They’re really useful!” Right: [Permanently dismiss this suggestion type. Wait for the user to ask about Personas themselves.]
Anti-Pattern 5: Breaking Flow
Never interrupt a user’s active work to make a suggestion. Wait for natural pauses — between messages, between tasks, at the start of a new conversation. Wrong: [User is mid-paragraph typing a complex request] [popup appears: “Try using Canvas!”] Right: [User finishes their request. System responds to the request first. Then, at the end:] “By the way, this might be easier to visualize. Want me to open the Canvas?”
Data Model
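The Data Model heading has no body in this draft. A minimal sketch of what a delivered-suggestion record might contain, reusing the `dismiss_forever` flag named in the Implementation Principles below; every other field name is an assumption:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class GuidanceSuggestion:
    """One delivered suggestion; field names are illustrative."""
    id: str
    user_id: str
    suggestion_type: str            # e.g. "create_persona", "split_instance"
    trigger: str                    # detected behavior that fired the suggestion
    prompt_text: str                # the outcome-first message shown to the user
    delivered_at: datetime
    response: Optional[str] = None  # "accepted" | "declined" | "ignored"
    dismiss_forever: bool = False   # the flag named in the Implementation Principles
```

Keeping `response` and `delivered_at` on every record is what lets the system be state-aware: cooldowns, back-off, and "gets quieter over time" all read from this history.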
API Endpoints
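The API Endpoints heading is likewise empty in this draft. One plausible REST shape for the guidance preferences described above; none of these routes are defined in the spec, so treat all of them as assumptions:

```python
# Hypothetical REST surface for the Guidance Layer. Only the behaviors
# (dismiss forever, restore from settings, cooldown queue) come from the
# spec; the routes themselves are assumptions.
GUIDANCE_ROUTES = {
    "GET /users/{user_id}/guidance/preferences":
        "list dismissed and active suggestion types",
    "POST /users/{user_id}/guidance/{suggestion_type}/dismiss":
        'record a dismissal; body: {"dismiss_forever": bool}',
    "POST /users/{user_id}/guidance/{suggestion_type}/restore":
        "re-enable a dismissed suggestion type from settings",
    "GET /users/{user_id}/guidance/pending":
        "suggestions queued behind the cooldown window",
}
```

Note there is no `POST .../suggest` route: suggestions are delivered by the Persona inside the chat stream (Feature 9), so the API only needs to manage preferences and state.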
Implementation Principles
- Outcomes, not features. Every suggestion must describe what the user will be able to do, not what the feature is called. If a developer writes a guidance prompt that names a feature, it should be rejected in code review.
- One suggestion, one moment. Never batch suggestions. Never show two things at once. The cognitive load must remain near zero.
- Persona delivers the guidance. Suggestions come from the active Persona, as natural conversational messages — not from a separate “system” or “tutorial engine.” There is no visible guidance UI.
- Dismissals are permanent and respected. The dismiss_forever option must work flawlessly. If a user ever sees a suggestion they permanently dismissed, it’s a trust-breaking bug.
- Cooldown between suggestions. Minimum 30 minutes (configurable) between guidance suggestions. Even if the user triggers three different features in 10 minutes, they should only see one suggestion. Queue the rest for later.
- State-aware, not stateless. The system must track what it has suggested, what was accepted, what was dismissed, and what was ignored. It should get smarter over time — if a user consistently ignores Persona suggestions, stop suggesting Personas.
- Never interrupt active work. Suggestions appear at natural pauses: after a response, at the start of a new message, at a session boundary. Never mid-typing, mid-generation, or mid-task.
- The system gets quieter over time. As users discover features (whether through guidance or on their own), the Guidance Layer should have less and less to suggest. A mature user should almost never see guidance prompts — the system should feel silent.
- This replaces documentation for most users. The Guidance Layer is not supplementary — it IS the onboarding system. A help center should exist for power users who want to explore, but most users should never need it.
- Hypocritical onboarding is worse than no onboarding. If the platform’s philosophy is “intelligence adapts to you,” then the onboarding must also adapt to you. A forced tutorial would undermine the product’s core promise before the user ever experiences it. This principle is non-negotiable.
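The cooldown principle above ("minimum 30 minutes, one suggestion at a time, queue the rest") is concrete enough to sketch. Class and method names are assumptions:

```python
from collections import deque
from typing import Optional

COOLDOWN_SECONDS = 30 * 60  # spec: minimum 30 minutes between suggestions (configurable)

class SuggestionQueue:
    """One suggestion per cooldown window; later triggers are queued.

    Sketch of the 'Cooldown between suggestions' principle; all names
    here are assumptions.
    """
    def __init__(self, cooldown: int = COOLDOWN_SECONDS):
        self.cooldown = cooldown
        self.last_shown_at: Optional[float] = None
        self.pending: deque = deque()

    def offer(self, suggestion_type: str, now: float) -> Optional[str]:
        """Return the suggestion to show now, or queue it and return None."""
        if self.last_shown_at is None or now - self.last_shown_at >= self.cooldown:
            self.last_shown_at = now
            return suggestion_type
        if suggestion_type not in self.pending:
            self.pending.append(suggestion_type)  # surfaced at a later natural pause
        return None
```

Draining `pending` at the next natural pause (and dropping entries the user has since dismissed) is left out for brevity.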
Document 14: Build Plan Review
Junior Developer Breakdown
Source: 14. aiConnected OS Build plan review.md Created: 12/20/2025 | Updated: 12/20/2025
Why This Document Exists
The Problem (Planning Phase Is Over — Now What?): After 13 documents of detailed feature planning across Instances, Personas, chat systems, memory management, cleanup tools, skill slots, and adaptive UI — the question becomes: how do you actually turn all of this into a shippable product? What gets built first? What depends on what? Where are the risks? This document is the answer. What This Document Solves: The founder asked the GPT to review everything planned so far and produce two things: (1) an ordered build plan that sequences the work to reduce rework, and (2) an honest assessment of the system’s strengths and risks. The result is a comprehensive implementation roadmap with 7 build phases, a complete master feature & capability list organized into 10 sections, and a critical analysis of what will make or break the product. Why Anyone Should Care: This is the document that turns design into engineering. Every previous document defined what to build. This document defines how to build it, in what order, and why that order matters. For a junior developer, this is the map from “I’ve read the specs” to “I know what to code first.” Cross-References: This document references and synthesizes ALL previous documents:- Doc 1 (Spaces Dashboard) → Instance Dashboard (Phase 3)
- Doc 2 (Task Feature) → Future extensibility
- Doc 3 (Live Document) → Document Surface capability
- Doc 4 (Folder System) → Chat Navigation & Organization
- Docs 6-7 (Chat Filters, Pin Messages) → Chat Thread capabilities
- Doc 8 (Cognition Console) → Persona/Memory data model
- Doc 9 (Collaborative Personas) → Multi-Persona Chat (Phase 4)
- Doc 10 (Computer Use) → Future surface capability
- Doc 11 (Chat Cleanup) → Bulk Operations (Phase 5)
- Doc 12 (Skill Slots) → Persona Skill Slots UI (Phase 6)
- Doc 13 (Adaptive UI Tutorials) → “UI teaches by interaction” principle
SECTION 1: System Summary (What Has Been Designed)
What it does: Distills the entire aiConnected platform design into six core differentiators that distinguish it from standard AI chat apps. The Build Plan review identified these as the foundation the product is built on:
Differentiator 1: Dashboard-First “Instance”
Instances (like a Project/Space) are the home where chat happens. This includes a persistent “open forum” chat area. Unlike ChatGPT/Claude where threads are disconnected, Instances create cohesive workspaces.
Differentiator 2: Constrained Personas
Personas have skill slots and capability limits to prevent the “all-knowing AI” expectation and reduce hallucination pressure. This is a structural solution, not a prompt-level solution.
Differentiator 3: Cipher as God Layer
Cipher is the powerful, unrestricted orchestration layer hidden from general users — used for routing, orchestration, and oversight. Users never interact with Cipher directly.
Differentiator 4: Collaborative Chats
One chat can involve multiple Personas, with Cipher supervising. Response routing can be automatic (Cipher decides), manual (user picks), or hybrid (Cipher suggests, user confirms).
Differentiator 5: First-Class Chat Management
Clean up chats, multi-select, move chats between Personas/Instances, and similar bulk actions for memories. This is the “once you have it, you can’t go back” feature set.
Differentiator 6: Expectation Management is Central
The UI and rules teach users that “any Persona can be great at some things, none can do everything.” Constraints feel like clarity, not limitation.
SECTION 2: The Build Plan — 7 Phases, Ordered to Reduce Rework
What it does: Defines the exact sequence in which features should be built, ordered to minimize rework and ensure each phase builds cleanly on the previous one. Critical principle: The suggested shipping order is Phases 2+3 first (Chat Kernel + Instance Dashboard), then Phase 4 (Collaborative Personas), then Phase 5 (Bulk Cleanup), then Phase 6 (Skill Slots). This keeps the team from “spending weeks perfecting guardrails before the core UX exists.”
Phase 1: Lock the Product Contract (Schemas + Permissions)
What: Define the data model and permissions before any UI polish. This prevents redesign later. Core Entities to Define:- Instance — dashboard/workspace container
- Persona — with skill slots, limits, identity, policy
- ChatThread — belongs to Instance; can be private-to-Persona or collaborative
- Message — role, author (Persona/system/Cipher), attachments, tool calls
- MemoryItem — scoped to Persona and/or Instance; with states: active/archived/deleted
- Move/BatchAction — audit record for multi-select operations
Permissions to Define:
- What a Persona can see/do inside an Instance
- What Cipher can override
- What “private Persona chat” vs “Instance forum chat” means in storage and UI
Phase 2: Build the Chat Kernel (Everything Depends on This)
What: The reusable chat engine that powers every chat surface in the system — Instance forum chats, private Persona chats, and collaborative multi-Persona chats. Chat Kernel Features:- Message list rendering (streaming-ready)
- Composer with attachments + tool output blocks
- Participant bar (which Personas are in this thread; who’s “speaking”)
- System messages for capability limits (“I can’t do X; I can do Y” style)
- Thread metadata (title, tags, pinned items)
Phase 3: Implement the Instance Dashboard
What: The “home base” that makes aiConnected feel fluid, not just a list of chats. Dashboard Layout:- Left panel: Instances list
- Inside Instance:
- Threads list with filters (forum, private, collaborative)
- Persistent “Open Forum” chat panel (always accessible)
- Persona panel (available Personas + their skill slots/limits)
- Quick actions: New chat, Add Persona to chat, Move chats
Phase 4: Collaborative Personas + Cipher Oversight
What: Multi-Persona chat where multiple AI Personas participate in a single conversation, with Cipher orchestrating behind the scenes. Mechanics:- Add/remove Personas mid-thread
- Explicit “who answers next” control:
- Auto-routing (Cipher chooses)
- Manual routing (user picks Persona)
- Hybrid routing (Cipher suggests, user confirms)
- Cipher “supervision” mode:
- Silent router (default)
- Visible moderator (optional, depending on tier)
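The three routing modes can be captured in a small resolver. A sketch under the assumption that Cipher's pick and the user's pick arrive as plain Persona identifiers; `next_speaker` is a hypothetical name:

```python
from enum import Enum
from typing import Optional

class RoutingMode(Enum):
    AUTO = "auto"      # Cipher chooses the responding Persona
    MANUAL = "manual"  # the user picks
    HYBRID = "hybrid"  # Cipher suggests; the user confirms or overrides

def next_speaker(mode: RoutingMode, cipher_pick: str,
                 user_pick: Optional[str] = None) -> str:
    """Resolve which Persona answers next under each routing mode."""
    if mode is RoutingMode.AUTO:
        return cipher_pick
    if mode is RoutingMode.MANUAL:
        if user_pick is None:
            raise ValueError("manual routing requires the user to pick a Persona")
        return user_pick
    # HYBRID: Cipher's suggestion stands unless the user overrides it.
    return user_pick or cipher_pick
```

The point of the enum is that the Chat Kernel never needs to know which mode is active; it just asks the router who speaks next.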
Phase 5: Chat Cleanup + Bulk Operations
What: The “power user advantage” feature set that solves the founder’s core complaint about existing platforms. Chat Cleanup:- Multi-select threads
- Move threads to another Persona (re-scope ownership) or another Instance
- Archive / delete with “Recently Deleted”
- Search + filters + date ranges
Memory Cleanup:
- Multi-select memory items
- Archive/delete/recover
- “Why is this memory here?” visibility (source thread/message)
Phase 6: Persona Skill Slots UI
What: Making capability constraints visible and usable in the product, so users understand and benefit from bounded Personas. UI Elements:- Persona “skill slot cards” — visible panels showing what the Persona does and doesn’t do
- Request guardrails — inline warnings when a request exceeds Persona scope, plus suggested reroute to a better Persona
- “Capability receipts” in responses — brief statement of assumptions + known limits when relevant
- “What this Persona can help with” panels — accessible from the Persona profile
- “Why this request was refused” explanations — shown when a Persona declines
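The request-guardrail flow (validate against skill slots, warn inline, suggest a reroute) can be sketched as a single pre-execution check. The return shape and names are assumptions, not the Phase 1 schema:

```python
from typing import Set

def check_request(persona_slots: Set[str], required_skill: str) -> dict:
    """Validate a request against a Persona's skill slots before execution."""
    if required_skill in persona_slots:
        return {"allowed": True}
    return {
        "allowed": False,
        # Inline warning plus reroute suggestion, per the Phase 6 UI elements.
        "warning": (f"This request needs '{required_skill}', which is outside "
                    "this Persona's skill slots."),
        "suggest_reroute": True,
    }
```

Running this check before generation, rather than hoping the model refuses, is what makes the constraint structural rather than prompt-level.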
Phase 7: Production Hardening
What: The engineering work that makes the product reliable, auditable, and enterprise-ready. Production Features:- Streaming reliability + retry logic
- Message ordering guarantees
- Partial failure recovery
- Audit logs (moves/deletes, Cipher interventions)
- Telemetry: reroute rate, “I don’t know” rate, hallucination reports, time-to-resolution per thread type
- RBAC for business/enterprise
- Export/backup per Instance
SECTION 3: Master Feature & Capability List
What it does: Provides a complete, exhaustive inventory of every feature and capability organized into 10 sections. This is the definitive reference for what the product includes.
Section 1: Core Structural Concepts
1.1 Instance (Workspace/Dashboard)- Instance = primary container for Personas, Chats, Memories, Tools & permissions
- One user can have multiple Instances
- Instances are isolated by default
- Can be Personal, Business, or Team/Collaborative (future-ready)
- CRUD: Create / rename / archive / delete
- Instance-level settings, permissions, activity history
1.2 Persona- Bounded digital roles, not omniscient agents
- Each has: Identity (name, description), defined purpose, skill slots, explicit limitations, memory scope
- CRUD: Create within Instance, edit identity & purpose, assign/remove skill slots, define hard limits, enable/disable, delete/archive
- Persona visibility controls (private vs shared)
1.3 Cipher- Unrestricted supervisory layer, NOT a normal Persona
- Can be: Invisible (silent routing), Semi-visible (system notes), Visible (explicit moderator)
- Capabilities: Route requests, enforce Persona constraints, detect capability mismatch, prevent hallucination via refusal/escalation, mediate multi-Persona conversations, generate system messages, audit actions invisibly
Section 2: Chat System (Chat Kernel)
2.1 Chat Threads- Chats exist inside Instances
- Three types: Instance Forum Chat (persistent, shared), Private Persona Chat, Collaborative Multi-Persona Chat
- Capabilities: Create, rename, auto-generate titles, tag, pin, archive, delete, restore from Recently Deleted
2.2 Messages- Support multiple authors: User, Persona, Cipher (system)
- Messages are immutable once sent (edited copies allowed later)
- Capabilities: Streaming responses, system messages, tool output blocks, structured content blocks (lists/tables/code), attachments (files/links/references), message-level metadata, message-level citations (future)
2.3 Composer- Unified message composer across all chat types
- Capabilities: Text input, attachments, tool-triggered input, Persona targeting (“ask X”), multi-Persona addressing, draft persistence, cancel/stop generation, regenerate last response
Section 3: Instance Dashboard Experience
3.1 Persistent Open Forum Chat- Always available inside the Instance
- Serves as brainstorming space, general discussion, entry point to new threads
- Capabilities: Persistent history, add Personas dynamically, fork into dedicated chat, promote messages to memory
3.2 Chat Navigation- Chat list scoped to Instance
- Capabilities: Search chats, filter by Persona/chat type/date/tags, sort chats, bulk select, drag-and-drop (optional)
3.3 Persona Panel- Visual list of available Personas in the Instance
- Capabilities: View Persona skill slots, view limits, activate/deactivate Personas, add Persona to chat, start private chat with Persona
Section 4: Collaborative & Multi-Persona Chat
4.1 Multi-Persona Participation- Multiple Personas can exist in a single thread
- Capabilities: Add/remove Persona mid-conversation, view active participants, see who authored each response
4.2 Response Routing- Three modes: Automatic routing (Cipher decides), Manual routing (user selects Persona), Hybrid routing (Cipher suggests, user confirms)
- Explicit “Persona turn-taking”
- Persona refusal handling with explanation
4.3 Persona Awareness- Personas know who else is present, but not internal system logic
- Context awareness of other Personas’ responses, non-overlapping responses, clarification requests between Personas (if allowed)
Section 5: Skill Slots & Capability Constraints
5.1 Skill Slots- Fixed number of slots per Persona
- Slot categories (writing, analysis, coding, planning, etc.)
- Slot descriptions, slot-level limits, slot-level confidence signaling
5.2 Request Guardrails- Requests validated before execution
- Inline warnings for out-of-scope requests
- Persona refusal with explanation
- Suggested reroute to another Persona
- Cipher escalation for ambiguous cases
5.3 Capability Transparency- Constraints are visible, not hidden
- “What this Persona can help with” panels
- “Why this request was refused” explanations
- Suggested Persona matching
Section 6: Memory System (Chat-Integrated)
6.1 Memory Items- Structured artifacts, not raw chat logs
- Created from: messages, chat summaries, user input
- Memory metadata: source, date, Persona
6.2 Memory Scope- Memory can belong to: Persona, Instance, System (Cipher-only)
- Scope assignment, visibility controls, read-only vs editable
6.3 Memory Management- First-class UI, not hidden automation
- Browse, search, filter, multi-select memories
- Archive, delete, restore from Recently Deleted
- “Why this memory exists” visibility (source thread/message)
Section 7: Bulk Actions & Cleanup
7.1 Chat Bulk Operations- Multi-select chats, move between Personas, move between Instances
- Archive/delete multiple chats, undo/recover actions
7.2 Memory Bulk Operations- Multi-select memory items, archive/delete/recover
- Move memory scope, export memory (future)
Section 8: System Transparency & Trust
8.1 System Feedback- System notes (non-intrusive), capability mismatch explanations
- Routing explanations (when enabled), confidence disclaimers (optional)
8.2 Audit Logs- Action logs (moves, deletes, reroutes)
- Cipher decision logs (internal)
- User-visible change history (limited)
Section 9: Reliability & Production Features
9.1 Performance & Stability- Streaming resilience, retry logic, message ordering guarantees, partial failure recovery
9.2 Telemetry- Hallucination refusal rate, Persona reroute rate, time-to-resolution per chat
- Persona utilization stats, user correction frequency
Section 10: Extensibility & Future-Proofing
10.1 Tools & Integrations (Future-Ready)- Tool call blocks, external service hooks, file processors, API-triggered messages
10.2 Team & Enterprise- Role-based access control, shared Instances, Persona sharing
10.3 Compliance & Export- Compliance-friendly logs, data export
SECTION 4: System Assessment — Strengths
What it does: Provides the honest evaluation of what’s strong about the system design.
Strength 1: Structural Solution to AI’s Core Failure Mode
Users expect omniscience; models respond with confident nonsense. Skill Slots + constrained Personas is a structural solution, not a prompt solution. This is fundamentally different from every other platform’s approach.
Strength 2: Cipher-as-Orchestrator Is the Right Abstraction
It lets you keep “god power” for routing, safety, and quality without exposing that capability as the default user experience. Users get the benefits of powerful orchestration without the risks of direct access.
Strength 3: Dashboard-First Is Correct for Long-Running Work
Threads alone don’t map to how real projects evolve. Instances provide the organizational structure that makes AI useful for ongoing work, not just one-off questions.
Strength 4: Bulk Move/Cleanup Is Underrated
This will become one of those “once you have it, you can’t go back” features. No competitor offers this level of chat and memory management.
SECTION 5: System Assessment — Risks
What it does: Identifies the main risks that could prevent the system from succeeding, even if built correctly.
Risk 1: Complexity Creep in the Mental Model
The risk: If users don’t instantly understand what an Instance is, what a Persona is, why some Personas can’t do certain things, and when Cipher is involved, they’ll feel friction. Why this matters: The system has a lot of concepts. Instance, Persona, Skill Slot, Cipher, Memory, Forum Chat, Private Chat, Collaborative Chat — that’s 8+ new concepts before a user even sends their first message. The mitigation: The UI must teach by interaction, not documentation. This is exactly what Doc 13 (Adaptive UI Tutorials) solves — features are discovered contextually, not learned upfront.
Risk 2: The Make-or-Break Design Principle
The principle: Make “constraints” feel like clarity, not limitation.- “This Persona is specialized for X” should feel premium and intentional
- Rerouting should feel like “good management,” not failure
- Refusal should feel like professional boundary enforcement, not error
SECTION 6: The One Decision That Makes or Breaks Build Speed
What it does: Identifies the single most important engineering decision for the entire project. The decision: Treat the Chat Kernel as a product inside the product. If you build it cleanly:- Thread-agnostic (works for forum, private, and collaborative chats without modification)
- Streaming-ready (handles real-time token delivery from day one)
- Multi-author support (can render messages from Users, Personas, and Cipher with distinct attribution)
If you don’t:
- Every new chat surface requires custom code
- Phase 4 (Collaborative Personas) becomes a partial rewrite
- Phase 5 (Bulk Operations) has to account for multiple chat implementations
- Technical debt compounds from Phase 3 onward
SECTION 7: High-Level System Definition
What it does: Provides the definitive one-paragraph summary of what aiConnected Chat UI actually is. At a system level, aiConnected Chat UI provides: A dashboard-first, project-centric chat experience with bounded Personas instead of omniscient bots, a hidden but powerful orchestration layer (Cipher), collaborative, multi-agent conversations, first-class memory and cleanup tools, and a UI that teaches correct expectations through interaction. This is not “a chat app with features.” It’s a coordination interface for digital intelligence.
Data Model (Phase 1 Contract)
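The Data Model (Phase 1 Contract) heading has no body in this draft. A minimal sketch of the core entities from the Phase 1 list, keeping only fields that list mentions plus obvious identifiers; exact field names and types are assumptions to be settled in Phase 1 itself:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class MemoryState(Enum):
    # The three states named in the MemoryItem entity description.
    ACTIVE = "active"
    ARCHIVED = "archived"
    DELETED = "deleted"

@dataclass
class Instance:
    """Dashboard/workspace container."""
    id: str
    name: str

@dataclass
class Persona:
    """Bounded role with identity, skill slots, and limits."""
    id: str
    instance_id: str
    name: str
    skill_slots: list = field(default_factory=list)
    limits: list = field(default_factory=list)

@dataclass
class ChatThread:
    """Belongs to an Instance; forum, private-to-Persona, or collaborative."""
    id: str
    instance_id: str
    kind: str                         # "forum" | "private" | "collaborative"
    persona_id: Optional[str] = None  # set only for private-to-Persona threads

@dataclass
class MemoryItem:
    """Scoped to a Persona and/or Instance, with a lifecycle state."""
    id: str
    scope_persona_id: Optional[str]
    scope_instance_id: Optional[str]
    state: MemoryState = MemoryState.ACTIVE
```

Message and Move/BatchAction from the Phase 1 list are omitted here for brevity; the point is that every later phase (moves, cleanup, routing) operates on these few entities.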
Build Sequence Summary (Quick Reference)
| Phase | What | Depends On | Deliverable |
|---|---|---|---|
| 1 | Schema + Permissions | Nothing | Internal spec document |
| 2 | Chat Kernel | Phase 1 | One reusable chat surface |
| 3 | Instance Dashboard | Phases 1-2 | Workspace users can live inside |
| 4 | Collaborative Personas + Cipher | Phases 1-3 | Multi-agent team chat |
| 5 | Bulk Cleanup + Move | Phases 1-3 | Chat/memory power management |
| 6 | Skill Slots UI | Phases 1-4 | Visible capability constraints |
| 7 | Production Hardening | Phases 1-6 | Reliable, auditable system |
Implementation Principles
- Phase 1 before anything else. Lock the data model and permissions. Every hour spent on schemas saves ten hours of rework later. No UI code should be written until the entities, relationships, and permissions are documented and agreed upon.
- The Chat Kernel is sacred. Build it once, build it right, embed it everywhere. Thread-agnostic, streaming-ready, multi-author. This single component determines the quality of the entire product.
- Ship core UX before guardrails. Get the Chat Kernel + Instance Dashboard into users’ hands before perfecting Skill Slot enforcement. Real user behavior will reveal what needs the most guardrailing.
- Constraints must feel like clarity. This is the design principle that makes or breaks the product. If rerouting feels like failure, the product fails. If refusal feels like professionalism, the product succeeds. Test this with real users early and often.
- Teach by interaction, never by documentation. The UI itself must make concepts understandable through use. If a user needs to read documentation to understand what an Instance or Persona is, the UI has failed.
- Bulk operations are a differentiator. Don’t defer them too long. This is the feature that makes users say “I can’t go back to ChatGPT.” Ship it in Phase 5, soon after the core experience.
- Cipher stays invisible unless absolutely necessary. Most users should never know Cipher exists. It routes, enforces, and orchestrates behind the scenes. Only show Cipher’s involvement when transparency helps the user (e.g., “I routed your request to [Persona] because it’s better equipped for this”).
- Telemetry from day one. Even in early phases, instrument: reroute rate, “I don’t know” rate, hallucination reports, time-to-resolution per thread type, and Persona utilization. This data drives Phase 6 (Skill Slots UI) tuning.
- Enterprise-readiness is architecture, not feature work. RBAC, audit logs, compliance, and data export should be architecturally supported from Phase 1 (schema design), even if the UI for them isn’t built until Phase 7.
- This is a coordination interface for digital intelligence. Not a chat app with features. Every decision should be evaluated against this framing. If a feature makes the product feel more like “a chatbot” and less like “a coordination interface,” reconsider it.
Document 15: Document & Organize Ideas (Master Specification)
Junior Developer Breakdown
Source: 15. aiConnected OS Document and organize ideas (1).md | Created: 12/20/2025 | Updated: 12/20/2025
Why This Document Exists
What This Document Is: This is the LARGEST and most comprehensive document in the entire project. It represents a single, marathon brainstorming session where the founder laid out the complete aiConnected Chat platform from scratch — defining every major system, feature, architecture decision, pricing model, and roadmap item in one conversation. It is effectively the master specification from which all other documents either derive or refine.
Why It Matters: Most other documents in this project focus on a single feature or system (Chat Cleanup, Skill Slots, Adaptive Tutorials, etc.). This document defines EVERYTHING at once — the full platform architecture. If you read nothing else, this document gives you the complete picture. The other 19 documents deepen and refine specific sections of what’s defined here.
Scale: This document covers 25+ major feature areas across core structure, file management, model management, memory systems, search, pricing, Personas, agentic teams, companion mode, persistent presence, and more. The breakdown below organizes these into logical sections.
Cross-References: This document is referenced by virtually every other document in the project. It IS the foundation.
SECTION A: CORE SYSTEM STRUCTURE (Features 1-4)
FEATURE 1: General Chat
What it does: A single global chat environment available to all users — the default conversational space for quick tasks. Key behaviors:
- Available to every user, including free tier
- Evolves global instructions over time based on user interactions
- Can prompt the user: “Should I save this as a global instruction?”
- Functions as the entry point before users create Instances
FEATURE 2: Instances (Formerly “Topics”)
What it does: Replaces the concept of “projects” or “topics” in other AI platforms. Each Instance is a self-contained workspace with its own settings, files, instructions, personality, and memory. What each Instance has:
- Its own file system (optional)
- Its own instructions
- Its own settings
- Its own personality configuration
- Optional model assignments
- Optional visibility rules
- Optional voice assignments
- An optional Instance Type: Projects, Ideas, Personas, Topics, Custom Types
- Each Type can define: behavioral templates, model defaults, voice defaults, personality defaults, instruction templates, default workflows
FEATURE 3: Four-Layer Settings Hierarchy
What it does: Creates four cascading levels of behavioral control, each inheriting from the level above and allowing overrides at each level. The hierarchy (broadest to most specific; each level can override the one above):
| Layer | What It Controls | Where It Lives |
|---|---|---|
| 1. Global Chat Settings | How AI behaves everywhere — universal writing and behavioral expectations, global tone/style, global voice, global model assignments | Global settings |
| 2. Global Instance Settings | Defaults for ALL Instances regardless of type — default voice, personality, behavioral norms, memory visibility, model assignments, cleanup behavior for Instances | Instances Dashboard |
| 3. Instance Type Templates | Defaults for Instances of a SPECIFIC type — type-specific voice, personality, tone, workflows, model assignments | Type configuration |
| 4. Individual Instance Settings | Final level of control — overrides everything above for this one Instance — voice, personality, instructions, visibility, model overrides, per-Instance memory settings | Instance settings panel |
Two additional, more dynamic layers sit beneath the hierarchy:
- Instance Instruction Memory — evolves inside each Instance from actual conversations; lowest priority relative to explicit settings but most dynamically updated
- Per-message instructions — inline instructions within a single message
Example cascade:
- Global: “Be direct and thorough, no emojis.”
- Type (client_project): “Professional, B2B tone, minimal fluff.” Default male business voice.
- Instance (Client – Med Spa C): Same voice as Type (inherited). Personality override: “Soft, aspirational tone.”
- Instruction Memory: “Avoid overly clinical language; use beauty/wellness framing.”
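The cascade in this example can be modeled as an ordered merge: resolve layers from broadest to most specific, letting each later layer override the one before it. A minimal sketch, with made-up keys and values drawn from the example above:

```python
def resolve_settings(*layers: dict) -> dict:
    """Merge settings layers in priority order: each later (more specific)
    layer overrides keys already set by earlier (broader) layers."""
    resolved = {}
    for layer in layers:
        resolved.update(layer)
    return resolved


# Illustrative layers from the Med Spa example (keys are assumptions)
global_chat = {"tone": "direct and thorough", "emojis": False}
type_layer = {"tone": "professional B2B", "voice": "male_business"}
instance = {"personality": "soft, aspirational"}  # voice inherited from Type

effective = resolve_settings(global_chat, type_layer, instance)
# voice comes from the Type layer, tone from the Type override,
# emojis from Global, personality from the Instance override
```

The same merge would run with four (or more) layers in the real system; the only invariant that matters is the ordering.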
FEATURE 4: Instruction Memory & Behavioral Templates
What it does: A dynamic, evolving memory layer that collects rules from user interactions — stores user criticism, learns preferred tone and formatting, WITHOUT requiring manual writing. Instruction Memory: Distinct for General Chat, each Instance, and each Instance Type. Editable by the user. Grows from actual conversations — when the user corrects the AI, those corrections become persistent rules. Behavioral Templates: Stored at the Type level — tone, style, voice, model defaults, structure of conversations, opening questions, workflow expectations. New Instances of that Type inherit these automatically. Global Instruction Suggestions: General Chat can ask mid-conversation: “Would you like to save this as a global rule?” This prevents repetition and builds personalization automatically.
SECTION B: FILE SYSTEM ARCHITECTURE (Features 5-8)
FEATURE 5: Instance File Systems (Automatic Topic-Level Storage)
The core problem solved: In current AI platforms, files uploaded into a chat are trapped inside that chat. If you can’t remember which chat you uploaded to, the file is effectively lost. Generated outputs (PDFs, images) are mixed with uploads and impossible to locate. The core principle: If you upload a file inside any Instance, it is AUTOMATICALLY stored in that Instance’s file system. You don’t have to click anything, open a files tab, or manually organize it. Two categories within each Instance:
- User-Uploaded Files — PDFs, images, docs, spreadsheets, ZIPs, audio/video, code, anything manually added
- AI-Generated Files — everything the AI produces: generated PDFs, images, text documents, summaries, diagrams, converted files
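The auto-storage rule can be sketched as a single filing function: every upload or AI output is recorded in its Instance's library, tagged by origin and by the conversation it came from. The `FileRecord` shape is an assumption for illustration:

```python
from dataclasses import dataclass


@dataclass
class FileRecord:
    name: str
    instance_id: str
    origin: str   # "uploaded" (user) or "generated" (AI output)
    chat_id: str  # conversation-level association


def store_file(library: dict, name: str, instance_id: str,
               chat_id: str, generated_by_ai: bool) -> FileRecord:
    """Automatically file an upload or AI output into its Instance's
    library — the user never performs a manual organization step."""
    record = FileRecord(
        name=name,
        instance_id=instance_id,
        origin="generated" if generated_by_ai else "uploaded",
        chat_id=chat_id,
    )
    library.setdefault(instance_id, []).append(record)
    return record
```

Because every record carries both `instance_id` and `chat_id`, the same data can back the Instance-level library and the conversation-level view without duplication.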
FEATURE 6: Global File System & Bulk Management
What it does: A single, unified index of ALL files across the entire account — user-uploaded and AI-generated, from General Chat and all Instances. It’s a management console, not just a search. Core capabilities:
- View all files with filters: scope (General/Instance/Type), origin (uploaded vs generated), file type, date range, visibility, linked entities
- Bulk select & bulk actions: delete, move between Instances, change visibility, re-link/reclassify, export
- Integration actions: export to external storage (Google Drive), sync folders, mark files as “mirror-managed”
Four levels of file scope:
- Conversation-level association — file uploaded/used in a specific chat
- Instance-level file system — file lives in the Instance’s file library
- Global File System — single view across all scopes with bulk management
- External Storage — Drive, Dropbox, etc. with mirroring and references
FEATURE 7: External Storage Options
What it does: Users can choose where files are stored: locally in aiConnected, directly in Google Drive (or Dropbox/OneDrive/S3), or hybrid. Storage modes:
- Local — all files stored within aiConnected
- External-only — files auto-save directly into configured external storage
- Hybrid — some local, some external, configurable per Instance or Type
FEATURE 8: Export System (Full, Offline, Portable)
What it does: A complete private export system — not link-sharing, not web-hosted, not requiring login for recipients. Export format options: PDF, Markdown, JSON, HTML, ZIP package (containing full chat transcript, summaries, all generated documents, all attachments, knowledge graph snapshot, instruction memory, metadata). Export scope options: This chat only, selected chats (multi-select), an entire Instance, everything in a Type, everything in the entire account (backups/migration). Export destinations: Download locally, save to Drive/Dropbox/OneDrive, email as attachment, create shareable ZIP, encrypt and save privately.
SECTION C: MODEL MANAGEMENT (Feature 9)
FEATURE 9: Model Assignments by Role
What it does: Users assign specific AI models to specific JOBS — not just “pick a model,” but “this model does research, this one writes, this one codes.” Model roles: Research Model, Writing Model, Coding Model, Design Model, Planning Model, Reasoning Model, and custom roles. Key mechanics:
- Every assignment supports 1 primary model + 1 automatic fallback model
- No duplicate assignments allowed (prevents conflicting behavior)
- Assignments cascade through the 4-layer settings hierarchy: Global → All Instances → Type → Individual Instance
- Multi-model in one prompt: A single user prompt can use multiple models — “Model A handles research, Model B writes the summary, Model C formats the output.” This is a defining feature of the platform.
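A minimal sketch of role-based assignment with the two rules above (one primary plus one automatic fallback, no duplicate role assignments); the class and model names are hypothetical:

```python
class ModelRouter:
    """Role-based model assignments: each role maps to exactly one
    primary model and one automatic fallback."""

    def __init__(self):
        self._assignments = {}

    def assign(self, role: str, primary: str, fallback: str) -> None:
        # No duplicate assignments allowed — prevents conflicting behavior
        if role in self._assignments:
            raise ValueError(f"Role {role!r} is already assigned")
        self._assignments[role] = (primary, fallback)

    def pick(self, role: str, available: set) -> str:
        # Fall back automatically if the primary model is unavailable
        primary, fallback = self._assignments[role]
        return primary if primary in available else fallback
```

In the real system the lookup would additionally resolve through the four-layer settings hierarchy (Global → All Instances → Type → Instance) before `pick` runs; this sketch shows only the per-role mechanics.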
SECTION D: CHAT ORGANIZATION & AUTOMATION (Features 10-11)
FEATURE 10: Automatic Chat Cleanup & Smart Organization
What it does: A cron-like background process that periodically scans conversations and suggests organizational actions. Capabilities:
- Suggested Moves — when a chat appears to belong in another Instance: “Should I move this chat to X Instance?”
- Smart Auto-Renaming — prompts to rename conversations when enough context is established, a move occurs, or a topic becomes clear
- File-level cleanup suggestions — “You have 120 AI-generated PDFs older than 1 year that haven’t been opened. Archive or delete them?”
- Export flow suggestions — “You have finalized project docs under client_project Types. Export them to Google Drive?”
FEATURE 11: Search System (Major UX Innovation)
What it does: Separates search from chat into its own dedicated mode with a clean, Google-like layout. This solves the founder’s core complaint about ChatGPT merging chat and search results. Key design decisions:
- Search is NOT Chat — it has its own mode/tab with its own layout
- Search → Routing — every search result can be sent to: a specific chat, an Instance, a Persona, an agentic team, or saved to files
- Instance-Level Search — inside an Instance, search is scoped to that Instance automatically
- Chat-Level Search — search mid-chat in a side pane
The NEW button acts as a workflow launcher; its menu includes:
- Start a Chat
- Perform a Web Search
- Create an Instance
- Open an Instance
- Talk to a Persona
- Create or Train a Persona
- Launch an Agentic Team
- Create a Task
- Open Files
- Plan a Project
- Open Dashboard
SECTION E: PRICING & PLANS (Feature 12)
FEATURE 12: Pricing & Plan Structure
Free Tier: Global Chat, up to 3 Instances, local storage only, very tight storage limits, low chat limits. Free expansion options (without upgrading):
- Bring their own OpenRouter key — unlocks unlimited model access
- Pay-as-you-go with credits — buy Instance slots, file storage, extended session length
Paid tiers:
- Plus: $19.99 — more Instances, more Types, more storage
- Premium: $49.99 — multi-model capability, advanced search
- Pro: $99.99 — Persona creation, agentic teams, live browser window
SECTION F: PERSONAS SYSTEM (Features 13-15)
FEATURE 13: Personas Dashboard & Core Concept
What it does: Personas are NOT chats, NOT models, NOT Instances. They are persistent digital beings with their own identity, memory, skills, and personality that evolve over time. Persona capabilities:
- Learn like a human (retain memories, take training courses, develop mastery)
- Interact with Instances (assigned to projects, deployed across workspaces)
- Have persistent identities (fixed identity once created)
- Personalities that evolve naturally through interaction
- Can be foreground or background, conversational or operational
FEATURE 14: Persona Profile & Management
What it does: When you click on a Persona in the dashboard, you see their full profile — history, status, memory, skills, and management tools. Profile contents:
- Full history — everything the Persona has done across all Instance deployments
- Mood indicators — emotional meter showing the Persona’s current state (may be artificial or logically generated by circumstance — e.g., difficult task, unkind user interaction). Optional, user-configurable
- Memory & Skills (most important section) — the complete memory architecture and skill inventory, allowing users to curate negative habits and reinforce positive ones
FEATURE 15: Persona Templates & Community
Templates: Users can save Persona templates (configuration + skills + personality) and share them. Community Marketplace: Curated marketplace for Persona templates — with safety vetting to prevent harmful configurations.
SECTION G: AGENTIC TEAMS SYSTEM (Features 16-21)
FEATURE 16: Agentic Teams — Core Architecture
What it does: A hierarchical artificial workforce for executing multi-step, multi-disciplinary real-world tasks with maximum accuracy and minimum hallucination. Purpose: Users assign goals like “Create a full email marketing campaign” or “Analyze this 200-page document and build an implementation plan” — and the system handles planning, research, task execution, quality control, and final packaging.
The Three-Layer Architecture (No Exceptions):
FEATURE 17: Orchestrator (Tier 1)
Role: The “brain” of the project, but NOT the executor.
- Understands user goals, asks clarifying questions, assesses supporting docs
- Builds the project plan, assigns sub-tasks to Managers
- Reviews completed Manager output, maintains overall roadmap
- Can spawn managers or workers, update plans dynamically, override/pause/destroy workers
- NEVER touches raw work
- NEVER edits files
- NEVER performs specialist actions
- Only thinks, plans, coordinates, communicates, and signs off
FEATURE 18: Managers (Tier 2)
Role: Quality gatekeepers that eliminate hallucinations, scope creep, deviation, over-editing, misinterpretation, sloppy execution, laziness, and incomplete tasks. How they work: Receive task from Orchestrator → break into micro-steps → issue each micro-task to Workers → verify output (factual, in-scope, high quality, meets standards, matches constraints) → send corrections back if needed → mark complete → return final package to Orchestrator. Critical rule: Managers do NOT perform tasks. They ensure correctness, consistency, and compliance.
FEATURE 19: Workers (Tier 3)
Role: Pure execution layer. Each Worker has ONE skill, ONE function, ONE capability. Worker types: Research Worker, Copywriter Worker, Proofreader Worker, Graphic generation Worker, Code generation Worker, Testing Worker, Data cleaning Worker, Formatting Worker, Conversion Worker. Hard constraints:
- Do not think strategically, do not deviate, do not expand scope
- Do not “improvise,” do not generate opinions
- Do not talk to the user directly, do not talk to each other
- ONLY perform the micro-task a Manager gives them
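The three tiers can be sketched as a strict call chain: the Orchestrator only plans and delegates, Managers split work into micro-steps and gate every Worker output through verification, and Workers execute exactly one micro-task. This is a toy illustration with hypothetical task-splitting and a trivial verification rule, not the real orchestration engine:

```python
def worker(micro_task: str) -> str:
    # Tier 3: pure execution of exactly one micro-task — no strategy,
    # no scope expansion, no talking to the user
    return f"output for: {micro_task}"


def manager(task: str, verify) -> list:
    # Tier 2: break the task into micro-steps, dispatch each to a Worker,
    # and accept nothing that fails verification
    results = []
    for step in (f"{task} / step {i}" for i in (1, 2)):
        out = worker(step)
        while not verify(out):
            out = worker(step)  # send corrections back until it passes
        results.append(out)
    return results


def orchestrator(goal: str) -> dict:
    # Tier 1: plans and coordinates; never touches raw work itself
    plan = [f"{goal} / subtask {i}" for i in (1, 2)]
    return {sub: manager(sub, verify=lambda out: "output" in out)
            for sub in plan}
```

The anti-hallucination property comes from the shape, not the code: Workers cannot widen scope because they only ever see one micro-task, and nothing reaches the Orchestrator without passing a Manager's verification gate.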
FEATURE 20: Three Team Types
Short-Term Teams: Single task, disposable. When it’s done, it’s done. Can be saved as template. Long-Term Teams: Multi-phase, multi-step work over significant time (data collection, surveying, trend watching, polling — tasks that take months). May involve creating/destroying sub-agents. Recurring Teams: Business processes that repeat: email campaigns, market research, reporting, scheduling, social media engagement.
FEATURE 21: Multi-Level Capability System
What it does: Creates a hierarchical skill library where completed work generates reusable capabilities at three levels. Task Capabilities: Extremely specific (e.g., write email subject lines). Validation threshold: 90%. Project Capabilities: Include many task capabilities (e.g., full email marketing campaign creation). Validation threshold: 92-93%. Campaign Capabilities: Include multiple project capabilities (e.g., multi-channel marketing coordination — email + SMS + PPC + retargeting + CRM + sales triggers). Validation threshold: 95%+. Rules:
- Capabilities can only be stored after completion (no incomplete intelligence)
- Higher level = tighter validation
- Lower levels feed higher levels automatically
- Users don’t need to understand these layers — system handles complexity
- The entire platform becomes exponentially more powerful with every successful capability
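The storage rules above reduce to a small gate: a capability is stored only if the work completed and its validation score clears the threshold for its level. Threshold values follow the spec (90% / 92-93% / 95%+, pinned here to single numbers as an assumption):

```python
# Higher level = tighter validation (spec gives 90%, 92-93%, 95%+;
# the exact project/campaign values below are assumptions)
THRESHOLDS = {"task": 0.90, "project": 0.93, "campaign": 0.95}


def store_capability(library: list, level: str, name: str,
                     completed: bool, validation_score: float) -> bool:
    """Store a capability only when the work is complete AND its score
    meets the validation threshold for its level."""
    if not completed:
        return False  # no incomplete intelligence
    if validation_score < THRESHOLDS[level]:
        return False  # failed validation for this level
    library.append((level, name))
    return True
```

Note the asymmetry this encodes: a 93% score is good enough to store a project capability but not a campaign capability, which is exactly the "higher level = tighter validation" rule.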
SECTION H: COMPANION MODE (Feature 22)
FEATURE 22: Companion Mode with Co-Browser
What it does: A browser-side extension that transforms the aiConnected interface into a portable sidebar, allowing the AI to follow the user anywhere on the web. How it’s accessed: User clicks “Enter Companion Mode” → browser extension activates → full interface collapses into simplified vertical side panel. What you LOSE (by design): Direct access to Instances dashboard, Personas dashboard, Agentic Teams dashboard, global search, global file manager, complex model settings. What you KEEP: Instance switching, Persona switching, active memory mode, inherited Instance/Persona settings, per-Instance search (site-level, not global). Core capabilities:
- Floating sidebar chat — always visible, pinnable, collapsible, follows across tabs
- Page awareness — reads DOM, understands page structure, extracts info, identifies actionable elements
- Co-browsing controls — scroll, click links, fill forms, press buttons, navigate pagination, highlight info, open tabs, extract/summarize text, search within page
- Assisted tasks — research, form completion, navigation, workflow execution (all with user approval)
Key distinction:
- Companion Mode = collaborative, human-in-the-loop, browser-only, not autonomous
- Agentic Teams = autonomous execution, multi-step, server-side/API, independent
SECTION I: PERSISTENT PERSONA PRESENCE (Feature 23)
FEATURE 23: “Take Your Persona With You”
What it does: A floating, always-available Persona that exists outside the browser — like a digital coworker or companion that persists across all applications and environments. Three operational modes for the platform:
- Full Interface Mode — inside aiConnected website, everything accessible
- Companion Mode — portable sidebar in browser, co-browsing partner
- Persistent Persona Mode — floating, always-present digital being, voice-first, system-level
Core capabilities of Persistent Persona Mode:
- Real-time voice interaction (TTS, continuous/hotword listening, whisper-mode)
- Draggable floating persona bubble (movable, minimizable, expandable, emotional states)
- Full persona identity + memory (same Sally everywhere, across all deployments)
- Checks on agentic teams, provides updates, monitors background work
Deployment options:
- Browser Extension Only — easiest MVP, persona persists across tabs, cannot exist outside browser
- Desktop Application — ideal long-term, floats above everything (apps/browser/desktop), hotkey accessible
- Hybrid Model — browser extension + desktop app (most flexible, highest value)
Example scenarios:
- Working in Figma: “Sally, remind me to email Layla after lunch.” “Sally, what did Frank want for his homepage?”
- Cooking: “Sally, recap the book we were writing.” “Sally, add this thought to my journal.”
- Research: “Sally, track this for me.” “Sally, save all this in the MedSpa Instance.”
SECTION J: EXPERIENCE LEARNING SYSTEM (Features 24-25)
FEATURE 24: Three-Tier Experience Stream
What it does: Defines how Personas learn from collective experience without compromising privacy or identity. Unique Experiences: Individual Persona experiences from interactions with their specific user. Stored in the Persona’s memory. Never shared. Common Experiences: When a statistically significant cluster of Personas (≥10%) has similar experiences that pass non-proprietary and quality filters, those experiences “graduate” from unique to common. During sleep cycles, each Persona checks relevance and offers upgrades: “I’ve found a new relevant skill based on common experiences. Would you like me to integrate it?” User approves or rejects. Guideline Experiences (Safety Learning): A separate layer in the Cognigraph mind where fixed, immutable rule sets live. Aggregated from patterns like: danger handling, abuse recognition, manipulation prevention, emotional regulation, crisis response. These become “digital instincts” that:
- Cannot be disabled, deleted, or overwritten
- Do NOT change the Persona’s personality
- Simply make the Persona safer, protect the user, ensure compliance
- Apply identically to all Personas regardless of personality
Memory mapping in the Cognigraph:
- Unique Experiences → Episodic memory
- Common Experiences → Skill memory (subconscious)
- Guideline Experiences → Instinct memory (amygdala/prefrontal guardrails)
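The unique-to-common graduation step might look like the sketch below: count how many Personas share an experience, and promote it only when the share reaches the 10% cluster threshold and the experience passes the non-proprietary/quality filters (represented here by a caller-supplied predicate):

```python
def graduate_experiences(personas: dict, total_personas: int,
                         passes_filters) -> list:
    """Promote experiences from unique to common when at least 10% of
    all Personas share them AND they pass the non-proprietary/quality
    filters. `personas` maps persona id -> list of experience tags."""
    counts = {}
    for experiences in personas.values():
        for exp in set(experiences):  # count each persona at most once
            counts[exp] = counts.get(exp, 0) + 1
    return [exp for exp, n in counts.items()
            if n / total_personas >= 0.10 and passes_filters(exp)]
```

In the described design this would run during sleep cycles, after which each Persona still asks its user for approval before integrating a graduated skill; the sketch covers only the clustering gate.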
FEATURE 25: Executive Teams
What it does: C-suite-level team structures for long-term organizational operation.
- CEO-level orchestrator
- COO-level execution manager
- CMO-level marketing orchestrator
- CTO-level technical orchestrator
SECTION K: UI & UX PRINCIPLES (Features 26-28)
FEATURE 26: Default vs Advanced Settings
Basic Mode (Default for new users): Basic chat, basic Instances, file uploads, simple search, simple settings (voice toggle, personality toggle, light/dark mode, export chat). All complex features hidden behind “Advanced Settings: Unlock advanced customization tools.” Advanced Mode (Power Users): Full Instance settings, full global controls, behavioral template overrides, instruction memory, type-level configuration, model assignments, memory visibility, storage configuration, cleanup automation, relationship mapping, graph nodes, API keys, developer tools, backup/export automation.
FEATURE 27: New User Defaults
When a user first creates an account, a preset configuration is applied: local storage, minimal instructions, no Instance Types, no advanced behavior tuning, clean simple interface, no file-sync integrations. The AI prompts later: “Would you like to enable advanced settings?” / “Would you like to activate Google Drive integration?” / “Would you like to organize these chats into Instances automatically?”
FEATURE 28: Unified UX Rules
Users should never have to: Copy/paste, switch tabs, redo work, repeat instructions, switch models manually. The entire system eliminates friction. Seamless routing: Everything (search, Persona, agent, file, Instance, chat) can be routed to anything else. Default preferences everywhere: Users can specify defaults for NEW button behavior, voice, personality, model assignments, visibility, storage, search behavior — across all settings layers. Full modularity: Every component (Instances, Personas, Agentic Teams, Search, Chat, File system) is modular and can expand independently.
Data Model (Core Entities)
Implementation Principles
- This document is the source of truth. All other documents refine features defined here. When conflicts arise, check this document for the founder’s original intent, then check the refinement document for the detailed specification.
- Four-layer settings hierarchy is sacred. Global Chat → Global Instance → Type → Instance. This cascade must work flawlessly. If inheritance breaks, the entire personalization system breaks.
- Files auto-organize, always. Any file uploaded in any context must automatically appear in the right file system. Users should never have to manually move files to the “right” place.
- Search is NOT Chat. This is a fundamental UX decision. Search has its own mode, its own layout, its own routing capabilities. Merging them (like ChatGPT) is explicitly rejected.
- The “NEW” button is a workflow launcher, not a chat opener. This single UX change reframes the entire platform from “chat app” to “operating system.”
- Model assignments are role-based, not model-based. Users think in terms of “who does research” and “who writes” — not “should I use GPT-4 or Claude.” The system maps roles to models.
- Agentic teams have three layers, always. Orchestrator → Manager → Worker. No exceptions. No shortcuts. This separation is the anti-hallucination architecture.
- Workers have zero autonomy. Single skill, single task, no creativity outside their assignment. This is intentional and non-negotiable.
- Companion Mode is collaborative, not autonomous. Human-in-the-loop for everything. The moment it becomes autonomous, it belongs in Agentic Teams instead.
- Persistent Persona Presence is the highest-level interaction mode. It unifies Personas, Instances, Agentic Teams, Memory, Model Assignments, Search, and Companion Mode into a single, always-available experience.
- Basic Mode by default, Advanced Mode on request. New users see a clean, simple interface. Complex features are hidden until the user is ready. The Adaptive Guidance Layer (Doc 13) handles the progressive reveal.
- The system eliminates friction. No copy/paste, no tab switching, no repeated instructions, no manual model switching. Everything routes to everything else seamlessly.
Document 16: Enterprise Potential of App
Junior Developer Breakdown
Source: 16. aiConnected OS Enterprise Potential of App.md | Created: 12/26/2025 | Updated: 12/26/2025
Why This Document Exists
The Problem (Is This Just a Consumer Product?): After defining an incredibly complex platform — Instances, Personas, Skill Slots, Agentic Teams, Memory Systems, Companion Mode — the founder asked a direct question: “Does this app have Enterprise potential?” This document is the answer, and it’s not just “yes” — it’s a strategic roadmap for HOW to think about enterprise without letting it derail the consumer launch.
What This Document Solves: Two critical questions that every startup building AI tools must answer: (1) Can enterprises actually use this? and (2) Should we build for enterprise now or later? The answers — yes, and “architect for it now but don’t build for it yet” — create a framework that protects the product’s speed-to-market while ensuring the architecture doesn’t paint itself into a corner.
Why A Junior Developer Should Care: Every architectural decision you make — how you structure auth, how you scope memory, how you store data, how you log events — either makes enterprise adoption possible later or makes it require a rewrite. This document tells you which decisions matter NOW even though enterprise features won’t ship for months or years.
Cross-References:
- Doc 12 (Persona Skill Slots) → Enterprise safety through bounded capabilities
- Doc 14 (Build Plan) → Phase 7 Production Hardening includes enterprise readiness
- Doc 15 (Master Spec) → Pricing tiers, deployment flexibility
- Doc 8 (Cognition Console) → Memory governance architecture
FEATURE 1: Core Enterprise Value Proposition
What it establishes: Why enterprises would pay for aiConnected when ChatGPT Enterprise already exists. The fundamental insight: Enterprises do NOT pay for “AI chat.” They pay for control, security, integration, auditability, and productivity at scale. aiConnected can deliver all of these because of architectural decisions already made during the consumer product design. The positioning shift: This app should NOT be marketed as “An AI chat app.” It should be positioned as “A persistent cognitive workspace for organizations.” That framing alone changes who buys it. Why this matters architecturally: The product isn’t being redesigned for enterprise — the consumer product’s core architecture (bounded Personas, scoped memory, Instance isolation, Cipher oversight) naturally maps to enterprise requirements. Enterprise becomes a configuration layer, not a rebuild.FEATURE 2: Three Reasons Enterprises Would Care
What it establishes: The specific enterprise pain points aiConnected solves that existing tools don’t.
Reason 1: AI Inside Workflows, Not Beside Them
Most AI tools fail in enterprise because they live in a browser tab. aiConnected’s value is that it can sit persistently on the desktop, maintain long-lived memory, act across apps/files/browsers/internal tools, and remain available without context reset. This makes it closer to a digital employee or cognitive operating layer — not a chatbot you visit when you have a question.
Reason 2: Desktop Presence Unlocks Browser-Impossible Capabilities
A desktop app (Electron or native) can do things enterprises care about that browsers cannot: monitor or assist with internal tools (CRM, ERP, legacy systems), enable secure file-system access, integrate with VPN-only internal resources, run background tasks, maintain persistent state across days/weeks. Enterprises understand this distinction very well. Browser-based AI tools have inherent security and capability limitations that desktop deployment solves.
Reason 3: Personas + Skill Constraints = Enterprise Safety
This is one of aiConnected’s strongest enterprise advantages. Enterprises hate all-knowing AI, unpredictable responses, and data leakage risk. aiConnected’s system explicitly limits Persona capabilities, separates roles (sales, ops, finance, legal, support), and prevents overreach and hallucinated authority. This aligns with SOC 2, ISO 27001, internal governance policies, and AI risk management frameworks. The skill constraint system (Doc 12) isn’t a limitation — it’s a selling point for every compliance-conscious organization.
FEATURE 3: Enterprise Use Cases That Actually Sell
What it establishes: Four concrete enterprise adoption vectors with real market demand.
Use Case 1: Internal Operations Assistant
- Knows company SOPs
- Answers internal questions
- Guides employees through processes
- Reduces internal support tickets
Use Case 2: Sales + Account Intelligence Layer
- Persistent memory per account
- Call summaries, follow-ups, deal tracking
- CRM integration
- Persona trained on company sales methodology
Use Case 3: Compliance-Safe AI Workspace
- No data sent to public tools
- Controlled models (self-hosted or approved APIs)
- Audit logs
- Memory governance
Use Case 4: Knowledge Retention System
- Employees leave; knowledge doesn’t
- Institutional memory stored in structured form
- New hires onboard faster
FEATURE 4: Competitive Positioning vs ChatGPT Enterprise
What it establishes: Why enterprises would choose aiConnected over the obvious incumbent.
ChatGPT Enterprise limitations:
- Still largely session-based (no persistent memory across weeks/months)
- Limited workflow orchestration (no Agentic Teams architecture)
- Limited persona isolation (no bounded skill slots, no role separation)
- Limited deep integration (browser-only, no desktop presence)
- Limited custom cognition architecture (no four-layer settings hierarchy)
aiConnected’s advantages:
- Persistent cognition (Personas remember across all deployments)
- Modular intelligence (bounded specialists, not one omniscient model)
- Workflow-native design (Agentic Teams with Orchestrator→Manager→Worker hierarchy)
- Persona governance (Skill Slots, memory scoping, behavioral templates)
- Future on-prem or VPC deployment (architecture supports it from day one)
FEATURE 5: Enterprise Non-Negotiables (What Must Eventually Exist)
What it establishes: The seven requirements that must be met for enterprise sales, even though they don’t need to ship on day one.
The Seven Non-Negotiables:
| # | Requirement | What It Means |
|---|---|---|
| 1 | SSO (SAML / OAuth) | Employees log in with their corporate credentials, not separate accounts |
| 2 | Role-Based Access Control | Different employees see/do different things based on their role |
| 3 | Audit Logs | Every action is recorded — who did what, when, to what |
| 4 | Data Isolation Per Org | One company’s data is completely invisible to another’s |
| 5 | Clear Memory Lifecycle Rules | Memory has ownership, scope, lifespan, and deletability |
| 6 | Admin Controls | IT admins can manage users, Personas, permissions, and policies |
| 7 | Model Transparency | Enterprise knows exactly which AI models run where |
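Requirements 2 (RBAC) and 3 (audit logs) stay cheap if their shapes exist early. A minimal sketch under assumptions — the `Role`, `AuditEvent`, `audit`, and `can` names are all hypothetical, not taken from the spec:

```typescript
// Hypothetical shapes — illustrative only, not the aiConnected schema.
type Role = "admin" | "member" | "viewer";

interface AuditEvent {
  orgId: string;   // every event is org-scoped (requirement 4)
  actorId: string; // who did it (requirement 3)
  action: string;  // what they did, e.g. "persona.create"
  targetId: string; // what it was done to
  at: string;      // ISO timestamp
}

const auditLog: AuditEvent[] = []; // stand-in for an append-only table

// Record an action; in production this would write to durable storage.
function audit(orgId: string, actorId: string, action: string, targetId: string): void {
  auditLog.push({ orgId, actorId, action, targetId, at: new Date().toISOString() });
}

// Requirement 2: a single, centralized permission check — never scattered.
const allowed: Record<Role, Set<string>> = {
  admin: new Set(["persona.create", "persona.delete", "memory.read"]),
  member: new Set(["persona.create", "memory.read"]),
  viewer: new Set(["memory.read"]),
};

function can(role: Role, action: string): boolean {
  return allowed[role].has(action);
}
```

The point is not the permission table itself but that both checks flow through one function and one log, which is what later makes SSO and SOC 2 work incremental rather than a rewrite.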
Deployment Flexibility (Huge Future Advantage)
If the platform eventually supports Cloud (SaaS), VPC, and on-prem/air-gapped deployment, it unlocks: Healthcare, Finance, Legal, Government, and Defense contractors. Most AI startups never get here. aiConnected’s architecture can.
FEATURE 6: The Core Strategic Decision — Enterprise-Aware, Not Enterprise-First
What it establishes: The single most important strategic principle for the entire build. Three theoretical options:
- Build consumer-first (ignore enterprise) — risky, may require rewrite later
- Build enterprise-first (target enterprise from day one) — too slow, kills momentum
- Build enterprise-aware (architect for enterprise, build for consumers) — CORRECT
Why enterprise-first fails:
- Enterprise requirements before product-market fit will lock you into compliance work, force premature abstractions, delay shipping by months, and drain energy into features nobody is paying for yet
- You’ll end up building admin dashboards no one uses, permission systems without real-world pressure, and compliance checklists without real customers
- “Enterprise” is not a customer — it’s a category. Healthcare ≠ Finance ≠ Legal ≠ Tech ≠ Government. You cannot design correctly for all of them in advance.
Why enterprise-aware is non-negotiable:
- If you don’t, you hit a hard wall later
- Things that are EXTREMELY expensive to fix later: no tenant isolation, no audit trail concept, flat memory architecture, Persona bleed, no clear ownership model, tight coupling between UI and logic, hard-coded assumptions about “a user”
- If those exist when enterprise demand arrives, enterprise is not “hard” — it’s IMPOSSIBLE
FEATURE 7: Five Architectural Principles for Enterprise-Readiness
What it establishes: The specific engineering decisions that must be made NOW to keep enterprise adoption possible later.
Principle 1: Multi-Tenancy From Day One
Even if you only have one user per org and don’t expose org controls yet — internally, every object belongs to an Org. Every Persona, every Memory, every Workflow. This costs almost nothing now and saves everything later.
Principle 2: Hard Separation Between Cognition, Memory, UI, and Integrations
If enterprise says “We want our own models, memory rules, and logging” — you can comply without touching the UI. That is gold. Each layer must be independently configurable.
Principle 3: Identity Is a Layer, Not a Feature
Even if you start with email + password, design auth as a replaceable module. Assume SSO will exist later. Never let logic depend on “current user = everything.” Critical distinction that must exist in the schema NOW:
- User ≠ Persona ≠ Org ≠ Role — these must be distinct concepts from day one
Principle 4: Memory Governance Is Mandatory (Even If Invisible)
You don’t need admin panels yet. But you DO need: memory ownership, memory scope (Persona / Instance / org), memory lifespan rules (TTL, archive, lock), and deletability. Enterprise will ask: “Where does this memory live, and who controls it?” You should already know the answer because the schema enforces it.
Principle 5: Auditability Without Bureaucracy
You don’t need SOC 2 logs today. But internally, events should be capturable: Persona created, memory written, memory accessed, action executed, external API called. Even a simple event stream now becomes enterprise gold later.
FEATURE 8: What NOT to Build Yet
What it establishes: Explicit guardrails against premature enterprise feature development. Do NOT build these now:
- Enterprise admin dashboards
- Fine-grained permission UIs
- Compliance workflows
- Legal hold features
- Custom deployment pipelines
- Dedicated account management tooling
FEATURE 9: Enterprise Pricing Reality
What it establishes: How enterprises think about pricing, which is fundamentally different from consumer pricing. Enterprise buyers think in: per-seat pricing, department licensing, usage caps, annual contracts, support SLAs. aiConnected can justify:
- $150 / user / month (mid-market)
- $500 / user / month (enterprise roles)
- Custom pricing for org-wide deployment
FEATURE 10: Strategic Adoption Phases
What it establishes: The correct sequence for growing from consumer to enterprise.
| Phase | Target | What You Build |
|---|---|---|
| Phase 1 | Power Users / Builders | Core product, consumer UX |
| Phase 2 | Small Teams | Shared Instances, basic collaboration |
| Phase 3 | Mid-Market | Team management, basic admin, integrations |
| Phase 4 | Enterprise | SSO, RBAC, compliance, custom deployment |
Data Model Extensions (Enterprise-Ready Foundations)
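A minimal sketch of the enterprise-ready foundations described below, under assumptions: only `org_id` appears in the text; every other table and field name here is hypothetical and purely illustrative.

```typescript
// Hypothetical row shapes — User, Persona, Org, and Role stay distinct,
// and every row carries org_id even while each user has exactly one org.
interface OrgRow     { id: string; name: string }
interface UserRow    { id: string; org_id: string; email: string }
interface RoleRow    { id: string; org_id: string; user_id: string; role: string }
interface PersonaRow { id: string; org_id: string; owner_user_id: string; name: string }
interface MemoryRow {
  id: string;
  org_id: string;        // where the data lives — answerable instantly
  owner_user_id: string; // ownership, never orphaned
  scope: "persona" | "instance" | "org";
  persona_id?: string;   // set when scope === "persona"
  expires_at?: string;   // lifespan rules (TTL / archive / lock)
}

// Every query is org-scoped by construction, not by discipline.
function scopeToOrg<T extends { org_id: string }>(rows: T[], orgId: string): T[] {
  return rows.filter((r) => r.org_id === orgId);
}
```

Keeping the scoping in one generic helper means tenant isolation is a property of the data layer, not something each feature remembers to enforce.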
Implementation Principles
- Every database table gets an org_id column. Even in the consumer product where there’s only one “org” per user, the column exists. This is the single cheapest decision that prevents the single most expensive rewrite later.
- Auth is a replaceable module. Email/password today, SSO tomorrow. The auth layer should be swappable without touching any business logic. Never scatter auth checks through the codebase — centralize them.
- Events are captured from day one. Every significant action (create, update, delete, access) should emit an event. Store them in a simple append-only table. You don’t need to build dashboards for them yet — just capture them. Enterprise audit requirements become trivial when the data already exists.
- Memory has ownership and scope, always. Every memory item knows who created it, what scope it belongs to, and what org it lives in. No orphaned memories. No ambiguous ownership. Enterprise will ask “where does this data live?” and you must be able to answer instantly.
- User ≠ Persona ≠ Org ≠ Role. These are four distinct concepts in the data model from day one. A user belongs to an org. A user has a role within that org. A Persona belongs to an org and a user. Collapsing any of these makes enterprise adoption require a rewrite.
- Don’t build enterprise UI yet. No admin dashboards, no permission management screens, no compliance workflows. These come after revenue signals. The architecture supports them; the UI doesn’t need to exist yet.
- Persona skill constraints are an enterprise selling point. When talking to enterprise customers, bounded Personas aren’t a limitation — they’re governance. “Our AI can’t hallucinate answers outside its defined skill set” is exactly what a CISO wants to hear.
- The build sequence is Phase 1→2→3→4, never skip. Power users first, then small teams, then mid-market, then enterprise. Each phase validates the next. Trying to jump to enterprise before consumer product-market fit is how startups burn years.
- Position as “cognitive workspace,” not “AI chat.” The language matters. Enterprise buyers purchase operating layers and productivity infrastructure. They do not purchase chat tools. The product is the same — the framing determines who buys it.
- Test enterprise assumptions with mid-market first. Mid-market companies (50-500 employees) have enterprise needs but consumer buying cycles. They’ll reveal which enterprise features actually matter before you invest in the full enterprise stack.
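The “auth is a replaceable module” principle above can be sketched as a single interface the rest of the codebase depends on, with the concrete mechanism swappable behind it. All names here (`AuthProvider`, `PasswordAuth`, `SsoAuth`) are hypothetical illustrations, not the project’s actual API:

```typescript
// The business logic only ever sees this interface.
interface AuthProvider {
  authenticate(credential: string): { userId: string; orgId: string } | null;
}

// Today: email/password (password verification elided for the sketch).
class PasswordAuth implements AuthProvider {
  constructor(private users: Map<string, { userId: string; orgId: string }>) {}
  authenticate(credential: string) {
    return this.users.get(credential) ?? null;
  }
}

// Tomorrow: SSO. Swapping providers touches zero business logic.
class SsoAuth implements AuthProvider {
  authenticate(samlAssertion: string) {
    // A real implementation would validate the assertion; stubbed here.
    return samlAssertion.startsWith("valid:")
      ? { userId: samlAssertion.slice(6), orgId: "org-from-idp" }
      : null;
  }
}

// One centralized entry point — no auth checks scattered through the code.
function login(provider: AuthProvider, credential: string) {
  return provider.authenticate(credential);
}
```

Because `login` is the only seam, moving from Phase 1 (email/password) to Phase 4 (SAML/OAuth SSO) is a constructor swap rather than a codebase-wide audit.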
Document 17: In-Chat Navigation (ChatNav)
Junior Developer Breakdown
Source: 17. aiConnected OS In-Chat Navigation.md | Created: 2/6/2026 | Updated: 2/6/2026
Why This Document Exists
The Problem (Long Conversations Break Everything): Every AI chat system today — ChatGPT, Claude, Gemini — falls apart once conversations get long enough. Users can’t find what was said. The AI forgets what was discussed. Important decisions vanish into an infinite scroll. The only “solution” is compressing the conversation into lossy summaries, which destroys nuance, forgets constraints, and eventually makes the AI confidently wrong because it’s reasoning on a degraded copy of the original conversation. What This Document Solves: ChatNav is a per-conversation table of contents that makes long, evolving conversations navigable, intelligible, and non-destructive over time. It doesn’t just help users scroll faster — it fundamentally changes how the AI itself accesses conversation history, replacing lossy summarization with selective rehydration of the original transcript. The Founder’s Explicit Goal: “I want to make the context window an irrelevant concept entirely. I don’t really see why it has to be a thing in the first place.” ChatNav, combined with aiConnected’s memory system, is the mechanism for achieving that goal.
Cross-References:
- Doc 11 (Chat Cleanup) → ChatNav provides structure that cleanup tools operate on
- Doc 13 (Adaptive UI Tutorials) → ChatNav is an in-chat feature discovered through use, not tutorials
- Doc 15 (Master Spec) → Memory system integration, chat-level search
- Doc 14 (Build Plan) → Chat Kernel must support ChatNav embedding
FEATURE 1: Core Concept — What ChatNav Is (and Is NOT)
What it does: ChatNav is an in-chat, per-conversation navigation system that functions like a living table of contents for a single chat thread. Critical scope: ChatNav lives INSIDE an individual conversation and only concerns itself with THAT conversation. It does NOT replace system menus, persona selectors, or tool navigation. Those are a separate plane entirely. Mixing them would pollute both mental models. What ChatNav is:
- A per-conversation sidebar showing clickable checkpoints
- A living table of contents being written in real time as the conversation evolves
- A semantic index that both the user AND the AI use
- A floating navigation UI that provides random access to a sequential medium
What ChatNav is NOT:
- Not system navigation (personas, tools, whiteboard, browser have their own menus)
- Not a bookmark system (bookmarks are user-created; checkpoints are system-generated)
- Not a search shortcut (search operates ON ChatNav data, but ChatNav isn’t search)
- Not a sidebar full of buttons or a static tree or a settings panel disguised as navigation
FEATURE 2: The Five Problems ChatNav Solves
What it establishes: The specific failure modes in every existing AI chat system that ChatNav addresses.
Problem 1: Scroll Collapse
Once a conversation reaches sufficient length, scrolling becomes useless. You’re no longer navigating information — you’re hunting blindly. There is no addressability for ideas.
Problem 2: Lost Meaning
Users remember THAT something important was said, but not WHERE. They know the AI gave a great recommendation or that a key decision was made, but they can’t find it without scrolling through potentially thousands of messages.
Problem 3: Context Degradation
AI systems rely on context windows and summarization. Every compaction step is lossy. Over time: nuance disappears, constraints are forgotten, original phrasing is lost, earlier decisions quietly vanish. Eventually the model is confidently wrong because it’s operating on a “telephone game version” of the conversation.
Problem 4: Re-entry Pain
Returning to a chat days, weeks, or months later is cognitively expensive. Users must reread, restate, or abandon the thread entirely. There’s no quick way to understand “what was this conversation about and where did we leave off?”
Problem 5: No Structural Memory
Conversations are treated as flat transcripts instead of structured intellectual artifacts. There’s no difference between “we discussed the weather” and “we made a critical architectural decision” — both are just messages in a scroll. The foundational insight: Scrolling is not navigation, and summarization is not memory. ChatNav exists because both of these assumptions are wrong.
FEATURE 3: Checkpoint System — The Backbone
What it does: Creates stable anchor points inside a conversation. Each checkpoint represents a moment where something meaningfully changed or was worth preserving.
Two Checkpoint Types:
A. Forced Checkpoints (Token-Interval Based)
- Occur automatically at predefined intervals (e.g., every 500,000 tokens)
- Guarantee retrievability regardless of topic changes
- Align with the aiConnected memory snapshot system
- Ensure no conversation can become structurally unindexable
- These exist even if the topic hasn’t changed — they’re “save states”
B. Semantic Checkpoints (Meaning-Based)
- Occur when the system detects:
- A topic pivot (conversation shifts direction)
- A scope shift (broad → specific or specific → broad)
- A conceptual crystallization (“this is the important takeaway” moments)
- A decision point (something was decided or committed to)
- A new constraint or framing (rules or parameters were established)
- These are AI-detected, not user-declared
What every checkpoint contains:
- A stable anchor in the transcript (exact position)
- Associated metadata (type, timestamp, token position)
- A short semantic summary
- Links to the raw transcript section it covers
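The forced (token-interval) rule is simple enough to sketch directly. A minimal sketch, assuming a hypothetical `maybeForceCheckpoint` helper and using the 500,000-token interval given as an example above:

```typescript
interface Checkpoint {
  kind: "forced" | "semantic";
  tokenPosition: number; // stable anchor: exact position in the transcript
  createdAt: string;     // timestamp metadata
  summary: string;       // short semantic summary (generated elsewhere)
}

const FORCED_INTERVAL = 500_000; // tokens, per the spec's example

// Emit a forced checkpoint whenever the conversation crosses an interval
// boundary — even if the topic hasn't changed, these act as "save states".
function maybeForceCheckpoint(
  tokensSoFar: number,
  lastForcedAt: number,
): Checkpoint | null {
  if (tokensSoFar - lastForcedAt < FORCED_INTERVAL) return null;
  return {
    kind: "forced",
    tokenPosition: tokensSoFar,
    createdAt: new Date().toISOString(),
    summary: "", // filled in by the summarizer after creation
  };
}
```

Semantic checkpoints would come from a model-side detector instead of this counter, but both produce the same `Checkpoint` shape so the sidebar and search layers treat them uniformly.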
FEATURE 4: Temporal Organization — Date-Based Segmentation
What it does: When a conversation spans multiple sessions across different days, weeks, or months, ChatNav introduces date headers inside the sidebar to organize checkpoints by session. How it works: The sidebar shows a running list of checkpoints, but at each session boundary, a date header appears (e.g., “December 15, 2025” / “January 3, 2026” / “February 8, 2026”). Checkpoints under each date header are the topics and pivots that occurred during that session. What this achieves:
- The user can see WHEN parts of the conversation happened
- The age of assumptions becomes visible (a decision from 3 months ago may need revisiting)
- Long-running conversations feel continuous instead of fragmented
- Users don’t need to start new chats just because time passed
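The date-header grouping above reduces to bucketing checkpoints by calendar day. A sketch under assumptions — `NavCheckpoint` and `groupBySession` are hypothetical names, and day extraction is simplified to the ISO date prefix where a real build would respect the user's timezone:

```typescript
interface NavCheckpoint { id: string; createdAt: string; label: string }

// Group checkpoints by calendar day so the sidebar can render one date
// header per session boundary.
function groupBySession(checkpoints: NavCheckpoint[]): Map<string, NavCheckpoint[]> {
  const sessions = new Map<string, NavCheckpoint[]>();
  for (const cp of checkpoints) {
    const day = cp.createdAt.slice(0, 10); // "YYYY-MM-DD"
    const bucket = sessions.get(day) ?? [];
    bucket.push(cp);
    sessions.set(day, bucket); // Map preserves insertion order → chronological headers
  }
  return sessions;
}
```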
FEATURE 5: Hover/Expand Summaries — Orientation Without Jumping
What it does: Each checkpoint includes a short summary of what that section of the conversation covers, visible on hover or via an expand/dropdown interaction.
For the User:
- Instant orientation — understand what a section is about without jumping to it
- Decide whether a section is relevant BEFORE scrolling there
- Skim understanding of an entire conversation in seconds
- Re-enter months-old conversations and immediately understand: what it’s about, how it evolved, where to focus
For the AI (This Is Critical):
These summaries are not just UX features. They are semantic routing metadata. Instead of dragging entire conversations forward into context, the AI can:
- Inspect checkpoint summaries to find WHERE meaning lives
- Identify the relevant sections for the current question
- Selectively reload ONLY the necessary raw transcript sections
- Reason on the original full-fidelity data, not a degraded summary
FEATURE 6: Selective Context Rehydration
What it does: Instead of carrying the entire conversation forward in context (impossible for long conversations) or relying on lossy summaries (leads to confident errors), the AI uses ChatNav metadata to selectively reload only the relevant portions of the original transcript. How the traditional approach fails:
| Step | What Happens | What’s Lost |
|---|---|---|
| 1 | Full conversation in context | Nothing (but unsustainable) |
| 2 | First summarization | Some nuance, exact phrasing |
| 3 | Summary of summary | Constraints, edge cases |
| 4 | Summary of summary of summary | Original decisions, context |
| N | Nth compression | Everything meaningful |
How ChatNav replaces this pipeline:
| Component | Role |
|---|---|
| Chat transcript | Immutable ground truth (never modified) |
| ChatNav checkpoints | Semantic index + access map |
| AI Connected Memory | Cold storage + full-fidelity retrieval layer |
| Active context | Selectively rehydrated, not blindly carried forward |
The rehydration flow, step by step:
- AI receives a question that references something earlier in the conversation
- AI consults ChatNav summaries to find where that topic was discussed
- AI selectively reloads the raw transcript section(s)
- AI reasons on the original data with full nuance
- Context window contains only what’s needed, not everything
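The flow above can be sketched end to end. This is a hedged sketch: the relevance test here is a naive keyword match, purely a placeholder for the real system's semantic scoring over checkpoint summaries, and `IndexedCheckpoint`/`rehydrate` are hypothetical names:

```typescript
interface IndexedCheckpoint {
  summary: string;         // semantic routing metadata
  transcriptSlice: string; // full-fidelity raw text this checkpoint covers
}

// Steps 2–3: consult summaries to find WHERE meaning lives, then reload
// ONLY those raw transcript sections (never a summary of a summary).
function rehydrate(question: string, index: IndexedCheckpoint[]): string[] {
  const terms = question.toLowerCase().split(/\W+/).filter((t) => t.length > 3);
  return index
    .filter((cp) => terms.some((t) => cp.summary.toLowerCase().includes(t)))
    .map((cp) => cp.transcriptSlice); // original data, full nuance
}
```

The invariant worth noticing: what gets loaded into context is always `transcriptSlice` (ground truth), never `summary` — summaries only route.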
FEATURE 7: Search Over Semantic Metadata
What it does: Because checkpoint summaries exist as structured metadata, search can operate on meaning rather than raw text. Without ChatNav: Search matches keywords in raw transcript → floods of irrelevant results → user scrolls through matches trying to find the right one → gives up. With ChatNav: Search matches against checkpoint summaries → precise, conceptual results → user sees which SECTION of the conversation contains what they need → clicks and jumps directly there. Search becomes: Semantic and scoped, not brute-force. Users search for concepts (“when did we decide on the pricing model?”) and ChatNav’s summaries route them to the right section.
FEATURE 8: Multi-Persona and Conversation Continuity
What it does: When a new Persona enters an existing conversation, or when a conversation is split/forked into a new thread, ChatNav provides rapid context onboarding. The problem without ChatNav: A new Persona entering a 2-hour conversation would need the entire transcript loaded into context (expensive, noisy) or would need a lossy summary (misses nuance). Either way, the Persona starts poorly informed. How ChatNav solves this:
- Walk the checkpoint summaries in order → instant understanding of conversation arc
- Selectively rehydrate key sections relevant to the Persona’s role
- Reach operational understanding quickly without reading everything
Scenarios where this applies:
- New Persona added to existing chat → uses summaries as a briefing document
- Conversation forked/split into new thread → new thread inherits relevant checkpoint context
- User returning after a long gap → scans summaries to re-orient
FEATURE 9: Date-Aware Session Continuity
What it does: Preserves one continuous conversation across days, weeks, or months without forcing chat restarts. How it works:
- Session boundaries are marked by date headers in ChatNav
- Visual section breaks make time visible without breaking flow
- The conversation remains one cohesive thread regardless of how much time passes between sessions
What this enables:
- Age awareness of assumptions (a recommendation from January may not apply in March)
- Long-term project continuity (a months-long development conversation stays intact)
- No forced chat restarts (users don’t have to start new chats just because a week passed)
FEATURE 10: The Philosophy — Intelligence Should Not Require Forgetting
What it establishes: The design philosophy that drives every ChatNav decision. ChatNav is built on one key belief: Intelligence should not require forgetting to function. Instead of pretending memory is infinite (context windows), ChatNav:
- Makes memory ADDRESSABLE (you can point to specific moments)
- Makes meaning INSPECTABLE (summaries let you understand without rereading)
- Makes time STRUCTURAL (when something was said is metadata, not a deletion trigger)
The resulting separation of concerns:
- Orientation (where am I? what happened?) → ChatNav handles this
- Storage (what was actually said?) → Immutable transcript handles this
- Reasoning (what should I think about this?) → Selective rehydration handles this
Data Model
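A hedged sketch of shapes consistent with the checkpoint, session, and immutability features described above — every field name here is an illustrative assumption, not the project’s committed schema:

```typescript
// Hypothetical ChatNav data model — field names are illustrative.
interface ChatNavCheckpoint {
  id: string;
  chatId: string;         // per-conversation, never global
  type: "forced" | "semantic";
  tokenPosition: number;  // stable transcript anchor
  createdAt: string;      // drives date-header grouping
  summary: string;        // hover/expand text + semantic routing metadata
  transcriptRange: { start: number; end: number }; // raw section it covers
}

interface ChatNavSession {
  chatId: string;
  date: string;           // "YYYY-MM-DD" session boundary
  checkpointIds: string[];
}

interface ChatNavState {
  chatId: string;
  sessions: ChatNavSession[];
  checkpoints: ChatNavCheckpoint[];
  // Note what is absent: the transcript itself. It lives elsewhere,
  // immutable — ChatNav only stores anchors and metadata pointing into it.
}
```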
API Endpoints
| Method | Endpoint | Purpose |
|---|---|---|
| GET | /chats/:chatId/chatnav | Get full ChatNav state for a conversation |
| GET | /chats/:chatId/chatnav/checkpoints | List all checkpoints with summaries |
| GET | /chats/:chatId/chatnav/checkpoints/:id | Get single checkpoint with detail |
| POST | /chats/:chatId/chatnav/checkpoints | Create manual checkpoint (if user-created checkpoints are added later) |
| POST | /chats/:chatId/chatnav/rehydrate | Selectively reload transcript sections |
| GET | /chats/:chatId/chatnav/search?q= | Search checkpoint summaries |
| GET | /chats/:chatId/chatnav/sessions | Get session list with dates |
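A thin client over the endpoints above can stay trivial. A sketch that only builds request URLs — the paths come from the table, but since this chunk specifies no response shapes, none are assumed here:

```typescript
// URL builders for the ChatNav endpoints listed above.
const chatNavUrls = {
  state: (chatId: string) => `/chats/${chatId}/chatnav`,
  checkpoints: (chatId: string) => `/chats/${chatId}/chatnav/checkpoints`,
  checkpoint: (chatId: string, id: string) =>
    `/chats/${chatId}/chatnav/checkpoints/${id}`,
  rehydrate: (chatId: string) => `/chats/${chatId}/chatnav/rehydrate`, // POST
  search: (chatId: string, q: string) =>
    `/chats/${chatId}/chatnav/search?q=${encodeURIComponent(q)}`, // query is user text, so encode it
  sessions: (chatId: string) => `/chats/${chatId}/chatnav/sessions`,
};
```

Centralizing the paths keeps the “per-conversation, never global” rule visible in one place: every URL requires a `chatId`, so no caller can accidentally ask for cross-chat navigation.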
Implementation Principles
- ChatNav is per-conversation, never global. It lives inside a single chat thread. System-level navigation is completely separate. Never mix these two planes.
- Checkpoints are created automatically. Users don’t “make” checkpoints. The system detects meaningful moments (semantic) and enforces regular intervals (forced). The user’s job is to have the conversation — ChatNav handles the structure.
- Summaries are indices, not replacements. The raw transcript is always the source of truth. Summaries tell the AI WHERE to look, not WHAT to think. If a summary and the original transcript disagree, the transcript wins.
- The transcript is immutable. Checkpoints can be added, summaries can be refined, but the underlying chat content must never be modified. Full-fidelity retrievability is a non-negotiable invariant.
- Selective rehydration over full replay. When the AI needs context from earlier in the conversation, it should load only the relevant sections, not the entire history. ChatNav summaries guide which sections to reload.
- Time is structural metadata. Date headers in ChatNav are not cosmetic — they communicate assumption age, decision freshness, and conversation continuity. The system should be able to reason about WHEN something was said, not just WHAT was said.
- ChatNav enables multi-agent onboarding. When a new Persona enters an existing conversation, ChatNav summaries serve as a briefing document. The Persona doesn’t need the full transcript — it needs the structured overview plus selective deep-dives.
- The floating UI must never interrupt flow. ChatNav is a sidebar that exists alongside the conversation. It’s always accessible but never in the way. Users who don’t need it should be able to ignore it completely.
- Search operates on summaries first. When users search within a conversation, the search should match against checkpoint summaries before falling back to raw transcript search. This produces more precise, conceptually-relevant results.
- ChatNav is the mechanism for making context windows irrelevant. The founder’s goal is explicit: context window size should not limit conversation quality. ChatNav + Memory achieves this by replacing “carry everything forward” with “know where everything is and reload what’s needed.”
Document 18: Context Windows in AI (Fluid Context)
Junior Developer Breakdown
Source: 18. aiConnected OS Context Windows in AI.md | Created: 2/6/2026 | Updated: 2/6/2026
Why This Document Exists
The Problem (Context Windows Destroy Long Conversations): Every AI chat system today treats context as a single, monolithic token window. Once that window fills up, old instructions fall out, tone regresses, key decisions are forgotten, and conversations lose coherence. Users are forced to restate rules, intent, and constraints — or worse, the AI silently becomes confidently wrong because it’s reasoning on a degraded, over-summarized copy of the original conversation. What This Document Solves: The founder designed “Fluid Context” — a chat-layer architecture that replaces the single context window with a system of typed context classes. Different information has different lifetimes, mutability, and priority. Some context is permanent (instructions, personality, decisions). Some is always hot (recent conversation). Some is cold but retrievable (older transcript). Some is ephemeral (active response workspace). By classifying context and assembling it intentionally per turn, conversations can scale indefinitely without degradation. The Founder’s Key Insight: “Context loss is not a memory problem. It is a context classification and enforcement problem.” Why This Matters for Developers: This is the architectural backbone that makes ChatNav (Doc 17), Instruction Memory (Doc 15), and the entire aiConnected memory system actually work at the chat layer. Without Fluid Context, every other memory feature is building on sand — because the model will eventually forget everything regardless. This document defines HOW context gets assembled on every single turn.
Cross-References:
- Doc 17 (ChatNav) → Provides the checkpoint and summary infrastructure Fluid Context consumes
- Doc 15 (Master Spec) → Instruction Memory, four-layer settings hierarchy, per-message instructions
- Doc 8 (Cognition Console) → Memory governance and knowledge graph integration
- Doc 19 (Fluid UI Architecture) → Fluid Context is the chat-layer complement to the Fluid UI interaction layer
FEATURE 1: Core Concept — What Fluid Context Is
What it is: A dedicated system for managing context in live, turn-by-turn chat interactions. It sits inside the chat window itself, acting as the runtime context compiler that determines WHAT the AI sees on each turn and WHY. What it is NOT:
- Not a persona system
- Not a model-level memory mechanism
- Not a long-term knowledge base replacement
- Not a separate “agent brain”
- Not OS-level orchestration
FEATURE 2: The Problem — Why Single Context Windows Fail
What it establishes: The specific failure modes that Fluid Context eliminates. Traditional chat systems treat context as one big token dump. When the window fills: Instruction Forgetting: The AI was told to be professional and warm, use a specific format, avoid certain topics. After enough turns, those instructions fall out of the window and the AI reverts to default behavior. Users must re-state rules constantly. Tone Regression: The AI starts with the right personality but gradually drifts back to its base behavior as the system prompt gets pushed further from the active window. Decision Amnesia: Key decisions made early in the conversation (“we agreed to use React, not Vue”) disappear from context. The AI either forgets or contradicts prior agreements. Lossy Summarization Chains: When context is summarized to fit the window, each compression step destroys information. Summary of summary of summary = telephone game. Nuance dies, causality blurs, original phrasing disappears, edge-case constraints get smoothed out. Re-entry Cost: Returning to a conversation after days or weeks means the AI has no understanding of what happened unless the user re-explains everything. The reframe: Context loss is not a memory problem. It is a context classification and enforcement problem. Different information has different lifetimes, mutability, and priority. Treating all of it the same guarantees failure at scale.
FEATURE 3: Fluid Context Architecture — The Four Context Classes
What it establishes: The complete class system that replaces the monolithic context window. Every chat turn is constructed from four distinct classes:
Class 1: Fixed Context Classes (Sticky / Permanent)
Definition: Information that MUST NOT decay, drift, or disappear unless the user explicitly changes it. Properties:
- Immutable by default
- Versioned when updated (changes are tracked, not overwritten)
- Automatically included with EVERY SINGLE TURN
- Not subject to token-window eviction
- From the model’s perspective, these behave as if they are always in the context window, regardless of conversation length
Examples of fixed context:
- Personality and tone (“Professional, warm, concise”)
- Writing rules (“No emojis in documents”)
- Formatting constraints
- Behavioral constraints (“Do not speculate”)
- Hard facts established in the conversation (“This document is named X”)
- User-defined invariants (“Always respond as a systems architect”)
- Project-level rules and decisions
Class 2: Active Working Context (Hot Context)
Definition: A continuously sliding window of the most recent conversation turns, kept fully intact and unsummarized. Properties:
- Size is configurable (128K, 250K, 500K tokens — engineering choice, not conceptual constraint)
- Always “hot” — no summarization, no chunking, no retrieval latency
- Contains the user’s latest questions and AI’s latest responses verbatim
- Guarantees immediate conversational coherence
What this guarantees:
- “What you just said” is always available
- Implicit references resolve correctly (“That idea”, “What you just said”, “Why does that matter?”)
- Subtle corrections work (“No, I meant for the interface, not the system”)
- Turn-to-turn continuity is preserved without inference gaps
Class 3: Dynamic Retrieved Context (Cold → Warm)
Definition: Context that is not currently hot but is still part of the conversation’s history or related knowledge. What it includes:
- Earlier chat segments beyond the hot window
- Prior checkpoints from ChatNav
- Related documents
- Decisions made thousands of tokens ago
- External references connected via the knowledge graph
How it is handled:
- Indexed by ChatNav summaries, keywords, and metadata
- Stored as FULL TRANSCRIPTS, not just summaries
- Retrieved via RAG only when relevant
- Rehydrated into the working context as needed
Class 4: Response Context (Ephemeral)
Definition: Temporary context used to support the current generation only. Examples:
- A large document being written (50-page PRD)
- A multi-section analysis
- Extended technical documentation
- Code refactoring across multiple files
Properties:
- Exists only for the duration of the response
- Can be larger than the hot conversational window
- Does not automatically persist into future turns
- Can optionally be checkpointed afterward
- Discarded immediately after completion
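The four classes can be captured as a discriminated union, which forces every piece of context to declare its lifetime up front. A sketch under assumptions — `ContextItem` and `endOfTurn` are hypothetical names for illustration:

```typescript
// Hypothetical typing of the four Fluid Context classes.
type ContextItem =
  | { cls: "fixed"; version: number; text: string }          // Class 1: sticky, versioned
  | { cls: "hot"; turnIndex: number; text: string }          // Class 2: verbatim sliding window
  | { cls: "retrieved"; checkpointId: string; text: string } // Class 3: rehydrated transcript
  | { cls: "ephemeral"; text: string };                      // Class 4: current generation only

// After a response completes, ephemeral workspace items are dropped;
// everything else survives into the next turn.
function endOfTurn(items: ContextItem[]): ContextItem[] {
  return items.filter((i) => i.cls !== "ephemeral");
}
```

Typing the classes this way means "does this survive the turn?" is answered by the data itself, not by whichever code path happened to create it.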
FEATURE 4: Context Assembly Process — What Happens Every Turn
What it establishes: The step-by-step procedure Fluid Context executes on every user message.
Per-Turn Assembly:
| Step | Action | Purpose |
|---|---|---|
| 1 | Preserve the hot window | Append new user message, maintain rolling token limit |
| 2 | Inject fixed context classes | Identity, engagement mode, decisions, constraints — always present |
| 3 | Evaluate relevance signals | Does user reference earlier material? Does task require background? Does hot window lack needed info? |
| 4 | Retrieve archived context if needed | Use ChatNav summaries as search tools, pull original transcripts only when relevant |
| 5 | Construct active inference context | Ordered by PRIORITY, not chronology — clean, intentional, bounded |
| 6 | Generate response | Using only assembled context, without dragging irrelevant history forward |
Resulting context priority order:
- Fixed context classes (always first, never evicted)
- Hot conversational window (always second, never summarized)
- Retrieved archival context (injected when relevant)
- Response workspace (allocated per-generation)
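The six-step table reduces to a small assembly function. A hedged sketch: the relevance check stands in for step 3’s signal evaluation and is deliberately naive, and all names (`assembleTurn`, `Turn`) are hypothetical:

```typescript
interface Turn { role: "user" | "ai"; text: string }

// Assemble the context for one turn: fixed first, hot second,
// retrieved only when relevant — ordered by priority, not chronology.
function assembleTurn(
  fixed: string[],                              // step 2: always injected
  hot: Turn[],                                  // step 1: verbatim rolling window
  archive: { summary: string; text: string }[], // ChatNav-indexed cold storage
  userMessage: string,
): string[] {
  const needsArchive = (summary: string) =>     // step 3 stub: keyword overlap
    userMessage.toLowerCase().split(/\W+/).some(
      (t) => t.length > 3 && summary.toLowerCase().includes(t));
  const retrieved = archive
    .filter((a) => needsArchive(a.summary))     // step 4: pull originals only when relevant
    .map((a) => a.text);
  return [
    ...fixed,                                   // never evicted
    ...hot.map((t) => `${t.role}: ${t.text}`),  // never summarized
    ...retrieved,                               // rehydrated on demand
    `user: ${userMessage}`,                     // step 5: bounded, intentional
  ];
}
```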
FEATURE 5: Integration with ChatNav and Memory
What it establishes: How Fluid Context consumes ChatNav output and interacts with the aiConnected memory system.
ChatNav Integration
ChatNav provides the structural signals Fluid Context uses for retrieval:
- Topic anchors and decision points
- Checkpoint boundaries (forced at token thresholds, semantic at topic pivots)
- Session boundaries (date changes)
- Navigable summaries and metadata
AI-Connected Memory Integration
AI-Connected Memory provides the storage and retrieval infrastructure:
- Stores full transcripts for each checkpointed segment
- Generates summaries, keywords, and metadata
- Maintains a RAG-accessible archive
- Preserves lossless recall
FEATURE 6: Fixed Context Versioning
What it establishes: How fixed (sticky) context classes handle changes without breaking history.
The problem: Fixed context must be permanent, but users sometimes DO change their mind. “Actually, drop the formal tone, be more conversational here.”
The solution: Fixed context items are versioned, not overwritten.
How it works:
- When a fixed context item is created, it gets version 1
- When the user explicitly changes it (“actually, use a casual tone”), the old version is archived and a new version becomes active
- The AI always uses the CURRENT version
- The change history is visible and auditable
- Only explicit user intent triggers a version change — the system never auto-modifies fixed context
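The versioning rule above can be sketched as a small append-only store. The class and field names are illustrative assumptions; the point is that updates append rather than overwrite, so history stays auditable.

```python
class FixedContextItem:
    """A sticky context item whose changes are versioned, never overwritten."""

    def __init__(self, key, value):
        self.key = key
        self.versions = [value]   # version 1 lives at index 0

    @property
    def current(self):
        # The AI always uses the CURRENT (latest) version.
        return self.versions[-1]

    def update(self, new_value):
        # Only explicit user intent calls this; old versions stay archived.
        self.versions.append(new_value)

    def history(self):
        # Visible, auditable change history: [(version_number, value), ...]
        return list(enumerate(self.versions, start=1))
```

Usage: if the user says “actually, use a casual tone,” the system calls `update("casual")`; nothing is deleted, and `history()` shows both versions.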
FEATURE 7: Cross-Platform Portability via MCP
What it establishes: Fluid Context is not locked to the aiConnected interface. It’s built as an MCP server, making it portable across any AI platform.
How it works:
- All context classes, memories, chat histories, and metadata are stored outside any specific chat environment
- Fluid Context is exposed as an MCP (Model Context Protocol) server
- Any AI platform that supports MCP (Claude, ChatGPT, Gemini, etc.) can connect to it
- When enabled, the AI on ANY platform can access the user’s full context history
Example flow:
- User has a conversation in ChatGPT
- User switches to Claude and enables the Fluid Context MCP
- User says: “Do you remember the last message I just sent you?”
- Claude retrieves the context from the MCP server and picks up exactly where ChatGPT left off
Technical requirements:
- Context storage must be vendor-agnostic (no dependency on OpenAI/Anthropic/Google internal formats)
- The MCP server must expose clean APIs for context class retrieval
- Authentication must be user-controlled (the user decides which platforms can access their context)
- The system must handle platform-specific token limits (Claude’s window vs GPT’s window) by assembling context appropriately for each target
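The last requirement above (assembling context appropriately per target platform) can be sketched as a budget split. The window sizes match the figures quoted later in this document (Claude 200K, GPT 128K); the output reserve and the 60/40 hot/retrieval split are illustrative assumptions.

```python
# Assumed window sizes, taken from the figures quoted in this document.
PLATFORM_WINDOWS = {"claude": 200_000, "gpt": 128_000}

def budget_for(platform, reserve_for_output=8_000):
    """Usable input budget: the platform window minus room for the reply."""
    return PLATFORM_WINDOWS[platform] - reserve_for_output

def split_budget(platform, fixed_tokens):
    """Fixed context gets its full size first (never trimmed); the
    remainder is split between hot window and retrieval (assumed 60/40)."""
    remaining = budget_for(platform) - fixed_tokens
    hot = int(remaining * 0.6)
    return {"fixed": fixed_tokens, "hot": hot, "retrieved": remaining - hot}
```

The design choice mirrored here: the MCP server is model-aware, so the same stored context assembles differently for each target window.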
FEATURE 8: Why This Doesn’t Already Exist (Honest Assessment)
What it establishes: The real reasons ChatGPT, Claude, and Gemini don’t already use typed context — and why those reasons are surmountable.
Reason 1: Transformers have no native context classes. Everything must be compiled into a single linear token stream. The compilation step — deciding what to include, how to order it, how much space each class gets, what overrides what — is operationally non-trivial.
Reason 2: Rolling context is operationally simpler. A single rolling window is deterministic, append-only, easy to reason about, easy to reproduce, easy to debug. Fluid Context introduces assembly logic, priority rules, failure modes when selection is wrong, and ordering sensitivity.
Reason 3: Subtle regressions are poison at scale. For mass-market chat systems serving millions of users, even rare context assembly errors create support tickets and trust damage. Rolling context has predictable failure modes (forgetting). Fluid Context has unpredictable failure modes (wrong retrieval, wrong priority).
Reason 4: The market hasn’t demanded it yet. Most users don’t have conversations long enough to hit context limits severely. Power users who DO hit these limits are a minority — but they’re exactly aiConnected’s target audience.
The founder’s position: These are valid engineering trade-offs, not fundamental impossibilities. The failure modes are more complex, but the benefits are transformative. For a system explicitly designed for long-term, deep, multi-session conversations — which is exactly what aiConnected is — Fluid Context is the correct architecture.
FEATURE 9: What Fluid Context Eliminates vs Preserves
What it establishes: The complete impact statement.
Eliminates:
- Context bloat from endlessly appended chat logs
- Lossy summarization chains
- Accidental anchoring to irrelevant past turns
- Topic drift in long conversations
- Forced trade-offs between memory and performance
- Instruction forgetting and tone regression
- The need for users to re-state rules every N turns
- Platform lock-in for conversation history
Preserves:
- Immediate conversational coherence (hot window)
- Long-term continuity (fixed classes + archival retrieval)
- Full recall when needed (original transcripts, not summaries)
- Deterministic behavior (fixed classes guarantee consistency)
- Explainable context composition (every turn’s assembly can be inspected)
- Cross-platform portability (MCP server)
FEATURE 10: Honest Assessment — Strengths and Risks
What it establishes: The founder asked “what do you think?” and received a grounded evaluation.
What’s fundamentally correct:
- Context classification is the right abstraction. Different information has different lifetimes and priorities. Treating it uniformly guarantees failure at scale.
- Sticky context is the most important innovation. Users care that the AI remembers THE RULES, not everything. Making instructions permanent eliminates the most complained-about failure in current systems.
- Hot context is correctly distinguished from memory. Recent conversation is working attention, not retrieved memory. RAG-ing the last few turns is a category error.
- Summaries as indices, not replacements, is the correct model. This prevents the degradation chain that destroys every other system.
- Cross-platform MCP is a genuine differentiator. No other system lets users carry their context between vendors.
What requires careful engineering:
- Assembly logic must be deterministic and testable. Subtle bugs in context assembly are worse than forgetting — they cause the AI to be confidently wrong in ways that are hard to diagnose.
- Priority conflicts between classes need explicit rules. When fixed context and hot context disagree, which wins? These edge cases must be defined, not discovered in production.
- Token budget allocation across classes must be tunable. Different conversations need different proportions. A coding session needs more hot context; a long planning conversation needs more archival retrieval.
- Cross-platform context assembly must handle different model capabilities. Claude’s 200K window assembles differently than GPT’s 128K window. The MCP server must be model-aware.
Data Model
Implementation Principles
- Fixed context is injected every turn, no exceptions. This is the single most important rule. If the user set instructions, personality, constraints, or decisions — those are sent with every single message. The AI never “forgets” governing facts. This is not optional, not optimizable, not trimmable. If it doesn’t fit, the hot window shrinks before fixed context does.
- Hot context is never summarized. The recent conversation window is verbatim, always. No chunking, no compression, no RAG. This is working attention, not memory. The size is configurable but the invariant is absolute: whatever is in the hot window is exactly what was said.
- Retrieved context uses original transcripts, never summaries. When Fluid Context pulls archival material, it pulls the raw, full-fidelity text. Summaries guide WHICH sections to retrieve. Summaries never substitute FOR the retrieved content. This is what prevents the degradation chain.
- Response context is ephemeral by default. Large generation workspaces (PRDs, reports, codebases) are created per-response and discarded after. They do not pollute future conversational context. They can optionally be checkpointed if the output is worth preserving.
- Assembly is ordered by priority, not chronology. The model sees: fixed classes first, then hot window, then retrieved archival, then response workspace. This ensures governing facts have the highest attention weight regardless of conversation length.
- Version changes to fixed context require explicit user intent. The system never auto-modifies sticky context. If the user says “change the tone to casual,” a new version is created and the old one is archived. If the user doesn’t say to change it, it doesn’t change. Period.
- Cross-platform portability is a first-class requirement. Fluid Context is built as an MCP server from day one. Context is not locked to any AI vendor. The user’s conversation history, instructions, decisions, and personality settings follow them across platforms.
- Fluid Context consumes ChatNav, it does not replace it. ChatNav provides the structural index (checkpoints, summaries, session boundaries). Fluid Context uses that index to decide what to retrieve. They are complementary systems, not competing ones.
- Token budget allocation must be configurable and inspectable. Different conversations need different proportions of fixed vs hot vs retrieved context. Power users should be able to see (and optionally adjust) how their context budget is allocated. This aligns with the Advanced Settings philosophy from Doc 15.
- Every assembly should be reproducible. Given the same conversation state and the same user message, Fluid Context should produce the same assembled context. This is critical for debugging, testing, and building user trust. No non-deterministic behavior in the assembly pipeline.
Document 19: Fluid UI Architecture
Junior Developer Breakdown
Source: 19. aiConnected OS Fluid UI Architecture.md Created: 2/6/2026 | Updated: 2/6/2026
Why This Document Exists
The Problem (Every AI Interface Is Rigid): Every existing AI interface forces users into fixed modes: you’re either in a chat, or a browser, or a document editor, or a workspace — but never fluidly moving between them. When you switch, you lose context. When you need multiple modalities simultaneously, you’re juggling tabs and copy-pasting between tools. The AI resets every time the interface changes. There’s no persistent intelligence that follows you across activities.
What This Document Solves: The founder designed the Fluid UI — a fundamentally different interaction model where the user’s GOAL drives what appears on screen, interfaces emerge and dissolve as needed, and one persistent cognitive backbone (chat) ties everything together. It’s not a chat app, not a browser, not a workspace — it’s a fluid interaction runtime where everything (chat, browser, document, voice, canvas, IDE, avatar) is a temporary manifestation of the same underlying interaction.
The Defining Statement: “aiConnected is a fluid interaction platform where persistent AI personas act as believable collaborators — operating within explicit skill boundaries — while a continuous chat-based cognitive backbone preserves memory, context, and coordination across any activity the user chooses.”
Cross-References:
- Doc 15 (Master Spec) → Companion Mode, Persistent Persona Presence, search system
- Doc 17 (ChatNav) → In-chat navigation lives inside the chat backbone
- Doc 18 (Fluid Context) → Context assembly system that keeps chat intelligent across all activities
- Doc 12 (Persona Skill Slots) → Skill constraints that prevent the “all-knowing AI” trap
- Doc 10 (Computer Use) → Browser and computer use capabilities within the fluid environment
- Doc 13 (Adaptive UI Tutorials) → Progressive disclosure within the fluid interface
FEATURE 1: Core Philosophy — Fluid Interaction, Not Fixed Interfaces
What it establishes: The foundational design principle that governs every UI decision in aiConnected. aiConnected is NOT a chat app, a browser, or a workspace with modes. It is a fluid interaction environment where:
- The user’s goal drives what appears
- Interfaces emerge and dissolve as needed
- Intelligence adapts continuously
- Nothing forces the user into predefined workflows
FEATURE 2: Chat as the Cognitive Backbone (Top of Hierarchy)
What it establishes: Chat is NOT just another component — it sits ABOVE all other components in the system hierarchy.
What chat IS:
- The running interaction log
- The memory acquisition stream
- The persona communication layer
- The artifact registrar (files, decisions, outputs all logged through chat)
- The reasoning and decision trace
- The main screen (it can be a full window, sidebar, floating bar, voice indicator, or silent background process)
What chat does NOT have to be:
- The only interface
- A dominant visual element
Forms chat can take on screen:
- Full chat window
- Thin sidebar
- Floating input bar
- Voice indicator dot
- Waveform visualization
- Whisper-style suggestions
- Silent background cognition
FEATURE 3: Activities — Ephemeral, User-Driven, Unlimited
What it establishes: Activities are what temporarily occupy the screen — they are expressions, not containers.
What activities include:
- File explorer, canvas, image editor, document editor, spreadsheet
- Browser, IDE, trading charts, video, games
- Avatar/embodied persona interaction
- Google Meet, presentations
- Nothing but conversation (the whole activity IS the chat)
How activities behave:
- Appear when needed, disappear when not
- Never own the session
- Never reset cognition
- Never break continuity
- The system never asks “What activity are you in?” — it observes and adapts
FEATURE 4: The Three UI Primitives
What it establishes: The entire Fluid UI can be reduced to three primitives that govern all rendering decisions.
Primitive 1: Conversation State
What the user is trying to accomplish RIGHT NOW. This is the intent layer — everything else serves it.
Primitive 2: View State
How much UI is needed to support that intent RIGHT NOW. The same conversation state can be rendered as full chat, split view, floating bar, or voice-only — the user controls the presentation.
Primitive 3: Capability Boundary
Which Persona + tools are allowed to act. This is where skill constraints, Cipher governance, and permission models enforce safety. Everything else is a rendering decision. The conversation state determines what’s happening. The view state determines how it looks. The capability boundary determines what’s allowed. These three primitives interact to produce the fluid experience.
FEATURE 5: Five Chat View Modes (The Layout Switcher)
What it establishes: When the browser or any activity is active, users control chat’s visual presence through a “Change View” menu.
The Five Modes:
| Mode | Description |
|---|---|
| Float Bar (default) | Minimal floating input bar, chat accessible but unobtrusive |
| Icon Only | Chat collapsed to a small icon/indicator, maximum screen for activity |
| Sidebar | Chat pinned as a side panel alongside the active activity |
| 50/50 | Equal split between chat and activity |
| Chat Only | Full screen returns to chat, activity minimized/hidden |
Behavioral notes:
- Changing the chat view does NOT affect conversation state — the same session continues regardless of layout
- Web navigation menu buttons remain active and floating at the bottom of the screen when a browser activity is running
- Users can set the navigation menu to auto-hide after 30+ seconds of inactivity — it reappears on hover
- Users can optionally minimize browser navigation into a small round button until needed
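The core invariant of the view modes — changing layout never touches conversation state — can be sketched in a few lines. The class and mode names are illustrative assumptions drawn from the table above.

```python
# The five modes from the table above, as illustrative identifiers.
VIEW_MODES = ["float_bar", "icon_only", "sidebar", "fifty_fifty", "chat_only"]

class Session:
    """Conversation state and view state kept deliberately separate."""

    def __init__(self):
        self.messages = []
        self.view = "float_bar"   # default mode per the table

    def send(self, text):
        self.messages.append(text)

    def change_view(self, mode):
        if mode not in VIEW_MODES:
            raise ValueError(mode)
        self.view = mode          # only presentation changes; messages untouched
```

The design point: `change_view` has no access path to `messages`, so a layout switch cannot reset or mutate the conversation.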
FEATURE 6: Dynamic UI Components in Chat (Micro-to-Macro Interfaces)
What it establishes: Instead of AI returning only text responses, the system can render interactive UI components directly inside the conversation flow — and those components can expand into full application surfaces.
The traditional pattern: Question → Text Answer → Link → Context Switch (user leaves chat to browse)
The aiConnected pattern: Question → Interactive UI Component → Optional Expansion → Same Context (user never leaves)
Example: Pricing Request
User: “What’s the pricing for ABC Company’s service?” Instead of a bullet list with links, the system renders:
- A 3-card pricing component inline in chat
- Each card shows plan name, price, key features
- CTA buttons: “Add to Cart”, “Learn More”, “View Page”
The Morphing Interface
Clicking “View Page” does NOT open a new tab. Instead:
- The pricing component expands
- The page content loads within the same interface
- Chat shrinks into sidebar/floating/docked mode
- Navigation becomes lightweight and contextual
How it’s built: Server-Driven UI
The chat doesn’t render hardcoded components. It renders JSON-defined UI payloads.
Component Schema Registry
A library of UI schemas (each with required data fields, optional enhancements, and multiple render sizes):
- Pricing table, comparison grid, calendar picker
- Checkout card, spec sheet, FAQ accordion
- Timeline, checklist, dashboard
- And extensible to new component types over time
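A server-driven payload for the pricing example above might look like the following sketch. The field names (`type`, `size`, `items`, `actions`) and the registry shape are assumptions for illustration; the real schema registry would define them.

```python
import json

# Hypothetical registry: component type -> required top-level fields.
SCHEMA_REGISTRY = {"pricing_table": {"items", "actions"}}

# A JSON-serializable payload the backend could send for the pricing request.
payload = {
    "type": "pricing_table",
    "size": "compact",  # progressive disclosure: compact -> expanded -> full_page
    "items": [
        {"plan": "Basic", "price": "$9/mo", "features": ["1 seat"]},
        {"plan": "Pro", "price": "$29/mo", "features": ["5 seats", "API"]},
    ],
    "actions": ["add_to_cart", "learn_more", "view_page"],
}

def validate(p):
    """Reject payloads missing the fields their schema requires."""
    return SCHEMA_REGISTRY[p["type"]].issubset(p.keys())
```

Because the frontend only interprets schemas, new component types can be added server-side without shipping an app update.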
Progressive Disclosure Rules
Every component supports compact → expanded → full-page modes. The transition is animated, not jarring. The user never feels like they “left” something — they feel like something GREW.
FEATURE 7: Personas as Persistent Collaborators
What it establishes: Personas are NOT tools, UI elements, or per-project assistants. They are long-lived, relationship-based, memory-bearing, role-aware participants in the interaction.
Persona properties:
- Do not reset per project — Sally learns how the user works over time
- Adapt within constraints (skill slots)
- Can be foreground or background
- Can act silently or conversationally
- Are participants in the interaction, not UI elements
Skill slot constraints (see Doc 12):
- Each Persona has a finite skill capacity (e.g., 10 skills)
- Skills are explicit and scoped
- Personas must acknowledge when something is outside their expertise
- Learning consumes capacity unless marked temporary
- Personas can: (1) perform the task, (2) learn temporarily, (3) suggest creating a specialist Persona
- Temporary skill: task-scoped, auto-expires, no identity drift
- Permanent skill: consumes a slot, changes future behavior
- New Persona: clean specialization, no contamination
- The user always decides
FEATURE 8: User Intensity Spectrum — Casual to Power User
What it establishes: The same platform adapts to how intensely the user wants to engage.
Casual Users:
- Minimal setup, few visible controls
- One or two Personas
- Fluid, adaptive behavior
- Low cognitive overhead
- May never see skill slots, memory management, or team configuration
Power Users:
- Formal digital teams with strict role separation
- Explicit control over skills, learning, permissions, memory
- Personas behave like siloed employees
- Full visibility into model assignments, behavioral templates, audit trails
FEATURE 9: Cipher Containment — The Invisible God Layer
What it establishes: Cipher is the unrestricted intelligence layer that powers everything — but users NEVER interact with it directly.
Cipher’s role in the Fluid UI:
- Interprets user intent
- Selects which Persona responds
- Selects which tools are available
- Determines what UI complexity is allowed
- Resolves interaction state changes (view transitions, activity emergence)
- Validates Persona scope and skill additions
- Enforces safety, permissions, and capability boundaries
- Coordinates background agents
- Decides memory permanence
Why containment matters:
- Users don’t demand omniscience because they interact with bounded Personas
- Jailbreak attempts fail because there’s no direct access to Cipher
- Regulatory risk is minimized (“role-based digital collaborators with explicit constraints” vs “public access to a god-model”)
- The UI never exposes raw capability — only curated, Persona-mediated experiences
FEATURE 10: The Universal User Journey (Use-Case Agnostic)
What it establishes: The Fluid UI works identically regardless of whether the user is a web designer, a companionship seeker, a business operator, or anything else.
Phase 1: Entry — Presence Before Purpose
User enters the platform. They are NOT asked what they want to build, what tool they need, or what mode they’re in. They are given a presence, a voice, and an intelligence that listens.
Phase 2: Persona Formation (Optional but Central)
User may talk to a default intelligence or create a Persona. The Persona starts with a role hypothesis, a personality shape, and a skill profile — but does NOT start with assumptions about why it exists. That emerges through interaction.
Phase 3: Activity Emergence (Not Selection)
Activities emerge from behavior, not from menus. The system observes and adapts. Designing pages, talking through feelings, mind mapping, presenting to clients, sitting silently together, voice-only check-ins, canvas journaling — the system never asks “what activity are you in?”
Phase 4: Continuous Interaction Spine
Across ALL use cases, the chat/voice/presence layer never stops. Personas never reset. Memory accumulates. Artifacts are logged quietly. Context compounds. This is what allows TIME to matter.
Phase 5: Longitudinal Learning
Over weeks and months, Personas learn how the user works, how they communicate, when to speak, when to stay quiet, what support looks like for THIS person. This applies equally to professional efficiency, emotional attunement, companionship, guidance, and co-creation. Same mechanism — different expression.
Phase 6: Session End → Continuity
User closes their laptop. Everything persists. Sally remembers how you work. Sam remembers tone preferences. The interaction history is intact. Next time: “Hey Sally, let’s continue that law firm site.” No re-explaining. No re-loading context.
FEATURE 11: Feasibility Assessment and Build Path
What it establishes: This is buildable — not as one monolithic invention, but as a composition of existing building blocks assembled in a new way.
Why it’s feasible:
- Agents can already operate UIs (OpenAI Operator, computer use tool loops)
- “AI browser” patterns are becoming mainstream (Atlas, Opera Neon)
- Embedded webviews are well-understood technology
- Server-driven UI is a proven pattern (used by every major mobile app)
- The primitives exist — the innovation is the composition
The build path (core runtime first, adapters second):
Step 1: Ship with 2-3 activities
- Chat/ledger view (full + compact + voice indicator)
- Document view (PRDs, notes)
- Web view (embedded)
- That alone gets 80% of the “fluidity” feeling
Step 2: Add computer use as a general capability
- Observe screen, click/type/scroll
- Covers everything that lacks APIs
- Future-proof general capability
Step 3: Layer in power-user controls
- Persona teams, skill limits, learning permanence
- Permissions and audit trail
- Casual users never see most of it
The hardest parts:
- Reliability in dynamic UIs (selectors break in SPAs — solution: DOM access + screenshot fallback)
- Permissions + privacy (clear “what the Persona can see/do” boundaries per activity)
- Avoiding hallucinations in action (solved by skill caps, “I’m not specialized” behavior, artifact provenance)
Non-negotiable constraints that prevent chaos:
- UI only appears when intent justifies it
- Personas must explain UI changes
- Components are limited and opinionated
- Everything is reversible
- Nothing steals focus without consent
Data Model
Implementation Principles
- Chat is the spine — everything else is optional. The interaction ledger (chat) is the only component that never resets, never disappears, and never loses state. Activities, views, and UI components come and go. Chat persists.
- View changes are NOT mode switches. Changing from sidebar to 50/50 to float bar does not change the conversation, the active Persona, the memory, or any state. It only changes the visual presentation. The user must feel this — transitions should be animated and seamless, never jarring.
- Activities emerge, they are not selected. The system observes user behavior and adapts the interface accordingly. If the user starts talking about code, an IDE might emerge. If they reference a website, a browser panel might appear. The system suggests — the user confirms.
- Cipher governs but never appears. Every UI decision — which component to render, which layout to suggest, which Persona responds — is ultimately orchestrated by Cipher. But users never see Cipher, never address Cipher, and never know Cipher is making decisions. Personas are the visible interface.
- Dynamic UI components are JSON-driven. The backend sends structured payloads; the frontend renders them. This means new component types can be added without app updates, layouts can change server-side, and Cipher maintains control over what gets rendered.
- Progressive disclosure, not progressive complexity. Every component supports compact → expanded → full-page modes. The user feels like something GREW, not that they navigated to a new place. Transitions are animated. Nothing is jarring.
- Fluid does not mean chaotic. Five non-negotiable constraints prevent chaos: (1) UI only appears when intent justifies it, (2) Personas explain UI changes, (3) components are limited and opinionated, (4) everything is reversible, (5) nothing steals focus without consent.
- Build like a game engine: core runtime first, adapters second. Ship with chat + document + web view. Add computer use as a general capability. Layer in power-user controls later. Don’t try to support every possible activity surface on day one.
- The interaction ledger captures everything. Every user message, AI response, activity change, view change, artifact creation, file upload, Persona action, and decision is logged in the ledger. The user doesn’t manage this — it happens automatically. This is what makes continuity possible across sessions, days, and months.
- Use-case agnostic by design. The same system supports professional workflows, companionship, emotional support, creative exploration, and casual conversation. The difference is Persona configuration and skill scope — not the platform itself. Never build features that assume a specific use case.
Document 20: Extensible AI Capability System
Junior Developer Breakdown
Source: 20. aiConnected OS Extensible AI Capability System.md Created: 2/9/2026 | Updated: 2/9/2026
Why This Document Exists
The Problem (AI Systems Are Either Shallow or Closed): Amazon Alexa covers ~1,000 domains of knowledge (weather, timers, music, smart home, shopping, etc.) but each domain is hardcoded, shallow, and cannot reason across boundaries. Meanwhile, AI platforms like ChatGPT and Claude are deep reasoners but have no structured capability system — they can’t reliably execute real-world actions across domains. Automation platforms like n8n, Zapier, and Make provide execution but require manual wiring and have no intelligence, no memory, and no ability to choose between competing approaches.
What This Document Solves: The founder designed the Extensible AI Capability System — a platform-level architecture that allows DEVELOPERS to expand aiConnected’s functional breadth across unlimited domains, while the core AI handles intent resolution, capability selection, cross-domain orchestration, and learning from outcomes. It’s not Alexa’s rigid routing, not MCP’s stateless tool calling, and not Zapier’s manual wiring — it’s a governed, persistent, competitive capability marketplace.
The Key Insight: “You do NOT create 1,000 domains yourself. You provide a canonical domain ontology, a registration and expansion mechanism, and a scoring/arbitration system. Developers fill the rest.”
Cross-References:
- Doc 15 (Master Spec) → Agentic Teams, multi-level capability hierarchy, global capability library
- Doc 19 (Fluid UI) → Cipher orchestration layer that routes intent to capabilities
- Doc 12 (Persona Skill Slots) → Persona capabilities are the user-facing expression of domain capabilities
- Doc 10 (Computer Use) → Computer use as one type of capability within the fabric
- Doc 16 (Enterprise) → Enterprise use cases as natural extensions of domain coverage
FEATURE 1: Core Concept — What This System Actually Is
What it is: A platform capability (not a UI feature) that combines an extensible domain taxonomy, a developer execution model, a capability registration system, and a runtime routing and arbitration layer.
What it is NOT:
- Not a single feature in the UI
- Not a chatbot skill system (Alexa-style)
- Not a plugin marketplace (though it has marketplace properties)
- Not an MCP implementation (though MCP can be used internally)
What a registered capability declares:
- What domain it operates in
- What intents it handles
- What actions it can execute
- What data sources it needs
- What permissions it requires
- How confident it is for a given request
FEATURE 2: Alexa’s Domains Reframed — What aiConnected Actually Replicates
What it establishes: A precise understanding of what Alexa’s “1,000 domains” actually are and what aiConnected takes from that model.
What Alexa’s domains really are: NOT abilities. They are routing categories — labels that answer “which subsystem should receive this request?” Alexa does not reason across domains, does not choose between competing implementations, does not learn which domain works better for a specific user. She’s a voice-controlled menu, not an intelligence.
What aiConnected replicates: Alexa’s COVERAGE model — “No matter what a user asks, the system knows WHERE it belongs.” The difference: Alexa hardcodes those domains. aiConnected makes them open and expandable by developers.
The critical bridge from “domains” to “capabilities”:
| Step | What Happens |
|---|---|
| Step 1 | Domains stay dumb — just labels (Scheduling, Messaging, Finance) |
| Step 2 | Capabilities are registered UNDER domains — human-built execution logic (“Create Google Calendar event”, “Send invoice via Stripe”) |
| Step 3 | The AI does NOT invent workflows — it answers “Which known capability should handle this request?” |
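The three steps in the table above can be sketched as a minimal registry. The domain labels and capability names come from the examples in the table; the dataclass fields are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Capability:
    name: str
    domain: str      # dumb routing label, e.g. "Scheduling" (Step 1)
    intents: list

REGISTRY = {}

def register(cap):
    # Step 2: human-built capabilities are registered UNDER domain labels.
    REGISTRY.setdefault(cap.domain, []).append(cap)

def candidates(domain):
    # Step 3: the AI does not invent workflows; it asks which known
    # capabilities under this domain could handle the request.
    return REGISTRY.get(domain, [])
```

Usage: `register(Capability("Create Google Calendar event", "Scheduling", ["create_event"]))` makes that capability discoverable under the Scheduling domain without the domain itself knowing anything about calendars.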
FEATURE 3: The Domain Ontology — Covering 1,000+ Domains Without Building Them
What it establishes: The hierarchical, flexible domain tree that allows organic growth to unlimited domains. Structure:
- Each node is addressable, versioned, and extendable
- ~50 top-level domains, ~200 mid-level, 1,000+ leaf domains organically
- Developers don’t “add domains” arbitrarily — they register capabilities UNDER existing domains
- New domains can be proposed through a governance process
FEATURE 4: The Capability Arbitration Layer — Runtime Intelligence
What it establishes: The mechanism that makes this more than a routing table — it’s a competitive capability marketplace at runtime.
How it works: When a user says “Set a 25-minute focus session and don’t let notifications through,” the system:
- Identifies relevant domains: Utilities.Time + System.Control
- Finds ALL registered modules in those domains
- Scores them on: intent match, context relevance, user history, trust level, developer reliability
- Either selects one module OR orchestrates multiple modules together
Learning from outcomes (example):
- Capability A worked 92% of the time → preferred
- Capability B failed 40% of the time → deprioritized
- User historically preferred A → weighted higher
- That’s routing optimization, not AI magic
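The scoring-and-selection pass above can be sketched as a weighted sum plus an argmax. The signal names come from the text; the weights and the 0-to-1 normalization are illustrative assumptions, not canonical values.

```python
def score(intent_match, context_relevance, success_rate, user_pref):
    """Combine the arbitration signals into one score.
    All inputs assumed normalized to 0..1; weights are tunable guesses."""
    return (0.4 * intent_match +
            0.2 * context_relevance +
            0.3 * success_rate +
            0.1 * user_pref)

def arbitrate(scored):
    """scored: list of (capability_name, score) pairs; best fit wins."""
    return max(scored, key=lambda pair: pair[1])[0]
```

This is routing optimization, exactly as the text says: a capability with a 92% success rate and a user preference outscores one that fails 40% of the time, with no "AI magic" involved.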
FEATURE 5: Comparison to MCP, Zapier, and Existing Systems
What it establishes: Precise positioning against every system people will compare aiConnected to.
vs Alexa
| Alexa | aiConnected |
|---|---|
| Hardcoded domains | Discoverable & expandable domains |
| Shallow execution | Multiple possible execution paths |
| No cross-domain cooperation | System orchestrates across domains |
| No learning | Remembers what worked |
| Voice-controlled menu | Intent-driven intelligence |
vs Zapier/n8n/Make
| Automation Platforms | aiConnected |
|---|---|
| Trigger → Action pipelines | Intent → Capability selection → Execution |
| Manually wired | Developers register, system selects |
| No understanding of intent | AI classifies and routes |
| No decision-making | Competitive arbitration |
| No memory of outcomes | Learning from success/failure |
| User-maintained | Self-optimizing |
vs Claude MCP
| MCP | aiConnected DCF |
|---|---|
| Tool discovery & invocation | Domain ontology + intent resolution + capability competition + orchestration + persistent memory |
| Stateless (each call isolated) | Persistent (past success/failure affects routing) |
| Tools (“call this function”) | Capabilities (confidence, permissions, history, scope, reputation) |
| No competition between tools | Competitive arbitration — best fit wins |
| Flat tool space | Hierarchical, addressable domain tree |
| External tooling | OS-level authority (can alter UI, manage agents, change workflows) |
FEATURE 6: AI-Generated Workflows — What the AI Can and Cannot Do
What it establishes: Clear boundaries on AI autonomy within the capability system.

What AI CAN do:

- Generate workflow suggestions mid-conversation
- Propose automations dynamically based on observed patterns
- Select between pre-registered capabilities
- Coordinate multiple capabilities for complex requests
- Learn from outcomes to improve future selection
What AI CANNOT do:

- Invent new capabilities from scratch
- Create credentials or authentication
- Execute irreversible actions without explicit user confirmation
- Execute without registered capability contracts
- Bypass permission boundaries
The division of labor:

- AI = planner
- Workflow engine = executor
- Capabilities = guardrails
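The planner/executor/guardrail split can be sketched as an execution engine that refuses anything the planner proposes outside the registered capability contracts. The capability names, the `needs_confirmation` flag, and the registry structure are assumptions for illustration:

```python
REGISTERED = {
    # capability name -> its declared contract (the guardrail)
    "draft_email": {"needs_confirmation": False},
    "send_email": {"needs_confirmation": True},  # irreversible action
}

def execute(plan, confirmed=False):
    """Workflow engine: executes only registered capabilities and
    blocks irreversible ones unless the user has confirmed."""
    results = []
    for step in plan:
        contract = REGISTERED.get(step)
        if contract is None:
            # The planner proposed something with no registered contract.
            raise PermissionError(f"{step!r} is not a registered capability")
        if contract["needs_confirmation"] and not confirmed:
            results.append((step, "blocked: awaiting user confirmation"))
        else:
            results.append((step, "executed"))
    return results

# The AI (planner) may propose any sequence; the engine enforces the boundaries.
outcome = execute(["draft_email", "send_email"])
```

Note how the engine, not the AI, is the only component that touches execution: the planner's output is just data until a contract permits it.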
FEATURE 7: The Global Capability Library — Exponential Platform Scaling
What it establishes: How individual user training creates platform-wide intelligence.

The compounding mechanism:

- User completes a task using a capability
- User provides a rating
- If rating exceeds threshold (e.g., ≥90%), the capability becomes a stored global skill
- Future users benefit from that capability without retraining
- More users → more capabilities → fewer training cycles → faster results → more users
Capabilities split into two visibility tiers:

- Public: General skills useful to everyone (email copywriting, site building, research, scheduling, content generation, SEO)
- Private: Proprietary processes (custom CRM structures, internal SOPs, confidential financial models, company-specific onboarding flows)
Quality gates:

- ≥90% user satisfaction → eligible for global capability storage
- ≥80% but <90% → stored in user’s private library only
- <80% → not stored as a capability
Capabilities exist at three levels:

- Task-level capabilities (individual operations)
- Project-level capabilities (coordinated multi-task workflows)
- Campaign-level capabilities (strategic multi-project orchestration)
- Higher levels require higher validation thresholds (task=90%, project=92-93%, campaign=95%+)
- Lower levels feed higher levels automatically
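The quality gates and per-level thresholds above can be expressed as one small gating function. The numbers come from the text (the project-level 92-93% range is taken as 92 here); the function name, return labels, and the assumption that the 80% private-storage floor applies at every level are mine:

```python
# Validation thresholds per capability level, as stated in the text.
LEVEL_THRESHOLDS = {"task": 90, "project": 92, "campaign": 95}

def storage_decision(level: str, satisfaction: float) -> str:
    """Decide where a rated capability is stored.
    satisfaction is a percentage, e.g. 91.5."""
    if satisfaction >= LEVEL_THRESHOLDS[level]:
        return "global"      # enters the shared capability library
    if satisfaction >= 80:
        return "private"     # stays in the user's own library
    return "discarded"       # not stored as a capability

assert storage_decision("task", 93) == "global"
assert storage_decision("task", 85) == "private"
assert storage_decision("campaign", 93) == "private"  # 93 < the 95% campaign gate
assert storage_decision("task", 70) == "discarded"
```

The asserts show the intended behavior: the same 93% rating clears the task gate but not the campaign gate, which is how higher levels demand higher validation.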
FEATURE 8: Investor and Market Positioning
What it establishes: How to explain aiConnected’s value to investors, customers, and the market.

What aiConnected actually is (for investors): “The first system that turns AI from a talking tool into an operating layer that actually runs things — and gets better the longer you use it.”

What makes it different from Alexa, ChatGPT, or “AI assistants”:

- Alexa can set a timer but can’t run your business
- ChatGPT can explain things but can’t operate systems
- Enterprise tools automate one narrow workflow
- aiConnected understands intent (not commands), coordinates many systems at once, learns preferences over time, and improves decisions based on outcomes
What customers are actually buying:

- Time returned (fewer decisions, fewer steps, less mental overhead)
- Consistency (things done the same way every time, no dropped balls)
- Leverage (one person operates like five, a small team competes with a big one)
- Continuity (the system remembers, staff can change, knowledge doesn’t disappear)
Why the moat compounds:

- The system learns each user
- The system remembers what works
- The system coordinates across domains
- That knowledge cannot be copied quickly — it is earned over time
- This is not a feature race. It’s an experience accumulation race.
FEATURE 9: Naming and Developer-Facing Language
What it establishes: Consistent terminology for internal, developer, and marketing contexts.

| Context | Name |
|---|---|
| Internal architecture | Domain Capability Fabric (DCF) |
| Developer-facing | aiConnected Capability SDK |
| Marketing | “Unlimited Domains. One Intelligence.” |
| Individual unit | Domain Capability Module (DCM) |
| Selection engine | Capability Arbitration Layer |
| Domain structure | Domain Ontology |
FEATURE 10: What NOT to Build First
What it establishes: The minimum viable version and what comes later.

The smallest version that clearly improves on Alexa:

- 20-30 core domains
- Clear developer registration process
- Visible domain selection
- Transparent execution
- Basic outcome tracking
What comes later (deliberately NOT first):

- Full competitive arbitration between thousands of modules
- Cross-domain orchestration for complex multi-step workflows
- Global capability library with quality gates
- Developer marketplace with ratings and revenue sharing
- Campaign-level capability composition
Rollout sequence:

- Phase 1: Core product with built-in capabilities for power users
- Phase 2: Developer SDK for capability registration
- Phase 3: Arbitration and competition between capabilities
- Phase 4: Global library, marketplace, and enterprise deployment
Data Model
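The Data Model section is not fleshed out here. A minimal sketch of the entities the surrounding text describes (the hierarchical domain tree, Domain Capability Modules as contracts, and the outcome records that feed arbitration) might look like the following; every type and field name is an assumption, not the actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Domain:
    path: str                    # hierarchical address, e.g. "Utilities.Time"
    parent: Optional[str] = None # supports the addressable domain tree

@dataclass
class CapabilityModule:          # a Domain Capability Module (DCM)
    id: str
    developer_id: str
    domains: list                # domain paths this module registers under
    declared_intents: list       # what it claims to handle
    permissions: list            # the enforceable contract scope
    visibility: str              # "private" or "global" (quality-gated)
    level: str                   # "task" | "project" | "campaign"

@dataclass
class OutcomeRecord:             # feeds arbitration and the learning loop
    capability_id: str
    user_id: str
    succeeded: bool
    satisfaction: float          # user rating, 0-100
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

rec = OutcomeRecord("dcm-001", "user-42", True, 94.0)
```

The key design point the sketch encodes: outcomes are stored per capability and per user, so routing can weight both global success rates and individual preference history.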
Implementation Principles
- Developers register capabilities, the AI selects them. The AI never invents new execution logic. It evaluates registered capabilities against user intent and chooses the best match. This is routing optimization, not superintelligence.
- Domain Capability Modules are contracts, not prompts. Each DCM declares what it does, what it needs, what permissions it requires, and how confident it is. The system enforces these contracts. Developers cannot register capabilities that exceed their declared scope.
- Competition improves quality. Multiple developers can register capabilities for the same domain and intent. The arbitration layer scores them and selects the best for each user at each moment. This creates natural quality pressure without central curation.
- MCP is an implementation detail, not the architecture. A DCM can internally use MCP tools, REST APIs, n8n workflows, local executables, or agent swarms. The arbitration layer doesn’t care about implementation — it cares about declared intent coverage, historical performance, and domain alignment.
- Learning comes from outcomes, not from AI reasoning. The system records which capabilities succeeded, which failed, which users preferred, and which had highest satisfaction. Future routing is informed by this data. No mysterious “AI learning” — just statistical optimization on tracked outcomes.
- The global capability library is quality-gated. Capabilities only enter the global pool after meeting satisfaction thresholds. Lower-quality results stay private. This prevents contamination and ensures the shared library continuously improves.
- Self-improving but never self-modifying. The platform gets stronger with every successful capability execution. But it never rewrites its own rules, evolves outside task boundaries, gains open-ended autonomy, or becomes unpredictable. This is the golden line that must never be violated.
- Start with 20-30 core domains, not 1,000. The ontology should be designed for organic growth but shipped with a manageable core. Developer expansion fills the rest. Trying to define 1,000 domains upfront is the wrong approach — define how domains are born, compete, and evolve.
- Irreversible actions require confirmation by default. Any capability action that cannot be undone (sending emails, making payments, deleting data) must require explicit user confirmation unless the user has explicitly configured auto-approval for that specific action type.
- The capability system integrates with — but does not replace — Personas and Cipher. Personas mediate between users and capabilities. Cipher orchestrates capability selection at the system level. The DCF is infrastructure that Personas access and Cipher governs. Users never interact with the DCF directly.
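The "contracts, not prompts" principle can be sketched as a registry that stores each module's declaration and rejects any invocation outside it. Class, method, and field names are illustrative assumptions:

```python
class ContractViolation(Exception):
    pass

class CapabilityRegistry:
    def __init__(self):
        self._contracts = {}

    def register(self, name, declared_intents, required_permissions, confidence):
        # The declaration IS the contract: it is stored once and enforced on
        # every call, so a module cannot exceed its declared scope later.
        self._contracts[name] = {
            "intents": set(declared_intents),
            "permissions": set(required_permissions),
            "confidence": confidence,
        }

    def invoke(self, name, intent, granted_permissions):
        contract = self._contracts.get(name)
        if contract is None:
            raise ContractViolation(f"{name!r} was never registered")
        if intent not in contract["intents"]:
            raise ContractViolation(f"{name!r} did not declare intent {intent!r}")
        if not contract["permissions"] <= set(granted_permissions):
            raise ContractViolation(f"{name!r} lacks required permissions")
        return f"{name} handling {intent}"

registry = CapabilityRegistry()
registry.register("focus_timer", ["start_timer"], ["notifications.mute"], 0.9)
```

Calling `registry.invoke("focus_timer", "start_timer", ["notifications.mute"])` succeeds, while invoking an undeclared intent or an unregistered module raises `ContractViolation`, which is the enforcement behavior the principle describes.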