Overview

What is Companion OS?

Companion OS is an embodied AI platform. Your companion lives in three places simultaneously — your Ray-Ban Meta glasses, your phone screen, and a desktop agent that works for you 24/7. It's not a chatbot. It's a presence.

👓

Glasses

Hears and sees everything through your Ray-Ban Meta. Answers hands-free.

📱

Phone

3D avatar on screen. Speaks, listens, and remembers every session.

🖥️

Desktop

Runs autonomous tasks — email, research, goals — while you sleep.

The core differentiator is embodied continuity. One companion, three surfaces, one memory. No other AI product runs the same model across your physical senses, your screen, and your autonomous agent simultaneously.
Getting Started

Setup guide

1

Download the app

Get the Android APK from the landing page, or install the Windows desktop package. iOS builds ship in v2 after App Store review.

# Android
Download companion-os-v1.apk from companion-os.xyz
Settings → Install unknown apps → enable for your browser
Open the APK → Install

# Windows
Download CompanionOS-Setup.exe
Run installer → Launch Companion OS
2

Onboarding flow

5-step setup when you first launch:

  1. Choose archetype — Nova, Aria, Orion, Vex, Kira, or Create your own (custom keywords + tone)
  2. Name your companion
  3. Integrations — connect Telegram, email, calendar (optional)
  4. Age verification
  5. Character Studio — Face Scan / Face & Body / Hair / Style / Outfit / Import VRM (skippable, uses archetype preset if skipped)
3

Connect Ray-Ban glasses (optional)

Companion OS connects to Ray-Ban Meta glasses via WebSocket. The companion sees what you see and hears through the glasses mic.

1. Pair glasses in Meta View app
2. In Companion OS onboarding:
   - Enter glasses IP (found in Meta View → Developer settings)
   - Enter stream port (default: 9870)
3. Status indicator shows "👓 Glasses" when connected
   Falls back to "📱 Phone camera" automatically
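
The IP/port/fallback logic above could be sketched as follows. This is an illustrative sketch only; the type and function names (`VideoSource`, `resolveVideoSource`) are assumptions, not the actual implementation.

```typescript
// Build the glasses stream URL from the configured IP and port,
// falling back to the phone camera when no glasses IP is set.
type VideoSource =
  | { kind: "glasses"; url: string }
  | { kind: "phone" };

function resolveVideoSource(glassesIp?: string, port: number = 9870): VideoSource {
  if (!glassesIp) return { kind: "phone" }; // automatic "📱 Phone camera" fallback
  return { kind: "glasses", url: `ws://${glassesIp}:${port}` };
}
```

The default port matches the documented stream default (9870); the fallback mirrors the automatic phone-camera behavior described above.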
4

Import memory (optional)

Paste a conversation export from ChatGPT or Claude. Companion OS extracts key facts about you and builds the initial life graph without starting from zero.

ChatGPT: Settings → Data controls → Export data → conversations.json

Claude: claude.ai → Profile → Export data

Companion Twin

The semantic memory layer

The Companion Twin is a persistent world model of the user — a graph of people, projects, habits, goals, and events that grows with every session.

V1

Life Graph

live

Entities (people, projects, places, goals, habits, tokens) connected by typed relationships. Extracted from every session via Gemini. Injected into every system prompt so the companion always knows your world.

// Entity types
person | project | place | concept | habit
goal | event | tool | token | org

// Example triples
user → works_on → YourProject
user → was_seen_at → coffee shop
user → engages_in → coding
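
One way to model these typed entities and triples in TypeScript (a minimal sketch; the `LifeGraph` class and field names are illustrative assumptions, not the real storage layer):

```typescript
// The ten entity types listed above, as a union.
type EntityType =
  | "person" | "project" | "place" | "concept" | "habit"
  | "goal" | "event" | "tool" | "token" | "org";

interface Triple {
  subject: string;
  relation: string;   // e.g. "works_on", "was_seen_at"
  object: string;
  objectType: EntityType;
}

class LifeGraph {
  private triples: Triple[] = [];

  add(t: Triple): void {
    this.triples.push(t);
  }

  // Everything a subject is connected to, e.g. for system-prompt injection.
  neighborsOf(subject: string): string[] {
    return this.triples.filter(t => t.subject === subject).map(t => t.object);
  }
}
```
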
V2

Predictive Activation

live

Pre-emptive intelligence. Generates morning brief cards, goal nudge cards, and pre-meeting context automatically. Cards surface in the chat interface when relevant.

☀️

Morning brief

3 priorities, market snapshot, calendar

⚠️

Goal nudge

Triggered when a goal has no activity for 3+ days

👥

Meeting context

Who you last spoke to, what was discussed

V3

Contextual Vision

live

Camera frame → Gemini Vision description → entity extraction → store → feed to Life Graph. Works via the glasses camera or the phone camera.

Frame pipeline:
1. Capture frame (glasses or phone)
2. Describe: "Person is working at a desk, whiteboard visible"
3. Extract: { location: "home desk", activity: "coding", significance: 0.7 }
4. Store in vision_memory.json
5. If significance ≥ 0.5 → feed entities to Life Graph
6. Inject recent observations into every session
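
The significance gate in steps 4-5 can be sketched like this (the extraction shape matches the example above; the function name is an illustrative assumption):

```typescript
// Step 3's extraction output, as shown in the pipeline example.
interface VisionExtraction {
  location: string;
  activity: string;
  significance: number; // 0..1
}

// Step 5: only sufficiently significant observations feed the Life Graph.
const SIGNIFICANCE_THRESHOLD = 0.5;

function shouldFeedLifeGraph(e: VisionExtraction): boolean {
  return e.significance >= SIGNIFICANCE_THRESHOLD;
}
```
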
V5

Goal Drift Detector

live

Audits GOALS.md against recent session logs every 6 hours. Fires a nudge card when a goal hasn't been mentioned in 3+ days. 2-day cooldown per goal.
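
The nudge rule (3+ days silent, 2-day cooldown per goal) reduces to a small predicate. This is a sketch under assumed field names, not the actual GoalDriftDetector code:

```typescript
const DAY_MS = 24 * 60 * 60 * 1000;

interface GoalActivity {
  lastMentionedAt: number; // epoch ms of last session mention
  lastNudgedAt?: number;   // epoch ms of last nudge card, if any
}

function shouldNudge(g: GoalActivity, now: number): boolean {
  const stalled = now - g.lastMentionedAt >= 3 * DAY_MS;          // 3+ days silent
  const cooledDown =
    g.lastNudgedAt === undefined || now - g.lastNudgedAt >= 2 * DAY_MS; // 2-day cooldown
  return stalled && cooledDown;
}
```
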

Skill Forge

Skills from natural language

The companion can create, save, and run reusable skills from plain English. Say what you want once — it becomes a named, callable workflow.

The frontier: SkillWatcher monitors session patterns. When it detects the same workflow 3+ times, it asks: “You've done this 4 times. Want me to save it as a skill?” The system expands itself.

Skill types

prompt-skill

Single-shot: system prompt + tools + output. Best for analysis, briefings, lookups.

workflow-skill

Multi-step tool chain. Steps execute sequentially, pause for confirmation where needed.

advisor-skill

Analyze and recommend only. Never acts without explicit approval. Default for money/trades.

Creating skills

// Voice or chat
"Make a skill that scans new Solana launches for rug risk"
"Whenever I say meeting prep, pull Gmail + calendar + contacts"
"Create a skill to summarize my unread emails every morning"
"Save this workflow as a reusable skill"

// Updates
"Update that skill to also check holder concentration"
"Rename it to Launch Filter"
"Disable that skill"
"Make it approval-only"
"Add a voice trigger: token check"

Seeded skills (available out of the box)

Solana Launch Filter (advisor-skill)

Analyzes a token launch for rug risk via DexScreener + on-chain signals. Scores 0–10.

Triggers: launch filter · rug check · vet this token

Meeting Prep (workflow-skill)

Pulls Gmail + calendar + life graph context → pre-meeting brief in seconds.

Triggers: meeting prep · prepare for meeting

Morning Deep Brief (workflow-skill)

Email + calendar + market snapshot → prioritized daily brief.

Triggers: morning brief · start my day

Goal Drift Review (advisor-skill)

Audits goals vs recent activity. Flags what's stalled, what's on track.

Triggers: goal drift · review my goals · what am I slipping on

Research Brief (prompt-skill)

Web search + memory → 300-word research brief on any topic.

Triggers: research brief · brief me on · research this
Intelligence Layer

Advanced twin modules

📊

Market Brain

The Solana operator console. Live token prices via Jupiter API, DexScreener signals, Gemini-powered memecoin risk scoring, Solana narrative detection via Google Search grounding.

// Example: memecoin risk score
{
  address: "ABC...XYZ",
  symbol: "LAUNCH",
  liquidityUsd: 45000,
  holderConcentration: 0.72,  // top wallet holds 72%
  rugRisk: "HIGH",
  flags: ["low-liquidity", "whale-concentration", "new-contract"]
}
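
A heuristic version of the flag logic could look like the sketch below. The real scorer is Gemini-powered; the thresholds and the `contractAgeDays` field here are illustrative assumptions chosen to reproduce the example flags above:

```typescript
interface TokenSignals {
  liquidityUsd: number;
  holderConcentration: number; // top wallet's share, 0..1
  contractAgeDays: number;
}

function rugFlags(s: TokenSignals): string[] {
  const flags: string[] = [];
  if (s.liquidityUsd < 50_000) flags.push("low-liquidity");
  if (s.holderConcentration > 0.5) flags.push("whale-concentration");
  if (s.contractAgeDays < 7) flags.push("new-contract");
  return flags;
}

function rugRisk(s: TokenSignals): "LOW" | "MEDIUM" | "HIGH" {
  const n = rugFlags(s).length;
  return n >= 2 ? "HIGH" : n === 1 ? "MEDIUM" : "LOW";
}
```

With the example's signals ($45k liquidity, 72% top-wallet concentration, a fresh contract), all three flags fire and the risk comes out HIGH, matching the output shown above.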
🔀

Outcome Twin

Personal world simulator. Reasons over your actual goals and life graph to model the downstream effects of a decision before you make it.

"If I work on the side project tonight instead of the client deadline, what slips?"
→ immediateEffects: ["Client deadline at risk", ...]
→ goalImpact: [{ goal: "client delivery", impact: "negative", ... }]
→ recommendation: "Side project can wait 48h. Client deadline can't."

"If I enter this trade, what does that do to my risk budget?"
→ tradeoffs: ["$X of liquidity locked", "correlates with SOL position", ...]
🛡️

Counterparty Twin

Trust graph for wallets, founders, protocols, and people. Not just “what is this token” — “should I trust this actor?”

// Trust score: 0 (confirmed bad) → 1 (verified trustworthy)
{
  identifier: "7abc...def1",
  type: "wallet",
  trustScore: 0.23,
  tags: ["rug-history", "low-liquidity", "new-wallet"],
  signals: [
    { type: "on-chain", sentiment: "negative", weight: 0.8,
      description: "Associated with 3 previous rug exits" }
  ]
}
📡

Narrative Twin

Memetic attention radar. Finds what's heating up in Web3 before the price move is obvious. Powered by Gemini Search grounding — real-time signal, not cached data.

// Attention radar output
Heating:  "AI + DeFi convergence" [heat: 8.1/10 ↑]
Emerging: "RWA Season 2"           [heat: 5.4/10 ↑↑]
Cooling:  "L2 fee wars"            [heat: 3.2/10 ↓]

// Early signals (pre-price-move candidates)
→ high velocity + emerging status = watch closely
Ray-Ban Glasses

Embodied vision + voice

Ray-Ban Meta glasses are what make Companion OS genuinely embodied. Your companion hears, sees, and speaks through the glasses — hands-free, eyes-free, real-world aware.

👁️

Vision pipeline

Camera frames → Gemini Vision → entity extraction → Life Graph. The companion builds visual memory of your world automatically.

🎙️

Voice pipeline

Glasses mic → transcription → Gemini → response → glasses speaker. Full conversation without touching your phone.

🔄

Burst capture

Frames aren't captured continuously. Bursts fire on a configurable interval or on a significant scene change, keeping API costs low while still building rich visual memory.

📱

Phone fallback

No glasses? Companion OS falls back to phone camera automatically. The same vision pipeline runs through the phone camera.

// GlassesStreamService WebSocket protocol
ws://{glassesIp}:{port}  →  JPEG frames
CompanionClaw receives frame  →  VisionMemory.processFrame()
→ Gemini Vision describes: "User at coffee shop, MacBook open, latte"
→ Entities extracted: { location: "coffee shop", activity: "working" }
→ Life Graph updated
→ Next session: companion knows you work from Brew & Co on Tuesdays
CompanionClaw

The intelligence backend

CompanionClaw is the multi-tenant backend that powers Companion OS. Every user gets a fully isolated workspace with self-heal, self-learn, cron jobs, memory, and skill execution.

Cron schedule (per user)

Job               Schedule     What it does
morning-brief     8am daily    Goals + market snapshot + narrative radar → push notification
task-executor     Every 2h     Executes highest-priority task in TASKS.md
memory-reconcile  3am daily    Distills session logs → structured memory + Life Graph extraction
task-planner      7am Monday   Reviews GOALS.md → seeds 3-5 tasks for the week
intel-sweep       11am daily   Web search on active goals → briefing
goal-auditor      Every 6h     GoalDriftDetector audit → nudge cards if stalled
weekly-review     6pm Sunday   Wins, gaps, next week focus → push notification
nightly-check     9pm daily    SkillWatcher scan + Narrative Twin refresh
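
For reference, the schedule above maps onto standard 5-field cron expressions. This mapping is illustrative, not the actual CompanionClaw config:

```typescript
// minute hour day-of-month month day-of-week (0 = Sunday)
const userCrons: Record<string, string> = {
  "morning-brief":    "0 8 * * *",   // 8am daily
  "task-executor":    "0 */2 * * *", // every 2 hours
  "memory-reconcile": "0 3 * * *",   // 3am daily
  "task-planner":     "0 7 * * 1",   // 7am Monday
  "intel-sweep":      "0 11 * * *",  // 11am daily
  "goal-auditor":     "0 */6 * * *", // every 6 hours
  "weekly-review":    "0 18 * * 0",  // 6pm Sunday
  "nightly-check":    "0 21 * * *",  // 9pm daily
};
```
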

Self-heal

HeartbeatMonitor runs every 5 minutes per active session. Checks gateway health, Gemini session, disk. Restarts and notifies on repeated failures.

Memory reconciliation

// 3am daily per user
1. Read all session logs from past 24h
2. Gemini extracts: new facts, updated preferences, resolved tasks
3. Merge into memory.json with confidence scores
4. Update USER.md and GOALS.md if new goals detected
5. Extract Life Graph entities from session log
6. Regenerate memory_for_model.json (flat projection for session injection)
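
Step 3's confidence-weighted merge can be sketched as below. The `Fact` shape and the keep-higher-confidence rule are assumptions for illustration; the real memory.json schema may differ:

```typescript
interface Fact {
  key: string;        // e.g. "preferred_coffee"
  value: string;
  confidence: number; // 0..1
}

// Merge incoming facts, keeping the higher-confidence value on key conflicts.
function mergeFacts(memory: Fact[], incoming: Fact[]): Fact[] {
  const byKey = new Map<string, Fact>(memory.map(f => [f.key, f]));
  for (const f of incoming) {
    const existing = byKey.get(f.key);
    if (!existing || f.confidence >= existing.confidence) byKey.set(f.key, f);
  }
  return [...byKey.values()];
}
```
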
$COMPANION Token

Utility and tiers

$COMPANION is the Solana token that gates access tiers, powers the VRM marketplace, and aligns builders with the platform's growth.

Free

0 $COMPANION

  • 3 preset avatars
  • 7-day memory
  • 1 companion
  • Managed API key (30 min/day)

Holder

≥100 $COMPANION

  • Full VRM library
  • 90-day memory
  • VRoid Hub import
  • Managed key (3h/day)

Staker

≥1,000 $COMPANION

  • Unlimited memory
  • 3 companions
  • Early access
  • BYOK support

Builder

≥10,000 $COMPANION

  • API access
  • Host for others
  • Referral revenue
  • Multi-agent swarms
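
Tier gating by balance reduces to the thresholds in the table above. A minimal sketch (function name is an illustrative assumption):

```typescript
type Tier = "Free" | "Holder" | "Staker" | "Builder";

// Resolve access tier from a $COMPANION balance, per the thresholds above.
function resolveTier(balance: number): Tier {
  if (balance >= 10_000) return "Builder";
  if (balance >= 1_000) return "Staker";
  if (balance >= 100) return "Holder";
  return "Free";
}
```
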

NFT Companion Archetypes

10 premium companions with unique personalities and rare VRMs minted as Solana NFTs. Holders earn $COMPANION when others run their archetype.

VRM Marketplace

Creators list custom 3D avatars priced in $COMPANION. 5% platform fee to treasury. VRoid Hub import free for first 90 days.

Roadmap

What's next

Live

Live now

  • Web demo (companion-web.vercel.app)
  • VRM avatar system — FBX idle animation, lip sync
  • Face scan → Gemini anime portrait + color match
  • Character Studio: Face Scan / Face & Body / Hair / Style / Outfit / Import VRM
  • VRoid Hub OAuth import
  • Custom companion persona creator
  • Privacy Center (Wave 1–3)
  • Image generation (Imagen 4)
  • Ray-Ban glasses streaming (VisionClaw)
  • Gemini web search + vision in chat
In progress

Pre-launch

  • CompanionClaw Railway deploy (backend ready, deploy pending)
  • Companion Twin V1–V3 + V5
  • Skill Forge
  • Morning brief + push delivery
  • Stable APK build
  • Windows desktop package
  • Video generation via Veo 3
  • Email + calendar UI
  • $COMPANION token launch (Pump.fun)
Planned

v2

  • iOS App Store submission
  • Multi-agent swarms (Staker+ tier)
  • Computer use agent (Builder tier)
  • Voice clone
  • Twitter @CompanionOS archetypes
  • Play Store submission
  • Skill marketplace
Planned

v3

  • Unity bridge (console-quality avatar rendering)
  • On-device memory (no cloud for privacy tier)
  • Companion-to-companion social graph
  • Group companion sessions
  • Developer SDK
API Reference

CompanionClaw REST API

CompanionClaw endpoints are hosted at https://api.companion-os.xyz (or your Railway URL). Pass the x-gemini-key header for BYOK (bring your own key).

Companion-web also exposes API routes at /api/* for the demo layer: /api/demo-chat, /api/avatar/stylize, /api/generate-persona, /api/vroid/* (OAuth + model browse + download).
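
A minimal request-builder for calling the /chat endpoint with the BYOK header might look like this. The request body shape (`{ message }`) is an assumption for illustration; only the endpoint path and the x-gemini-key header come from the docs above:

```typescript
interface ChatRequest {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

function buildChatRequest(baseUrl: string, message: string, geminiKey?: string): ChatRequest {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  if (geminiKey) headers["x-gemini-key"] = geminiKey; // BYOK: bring your own Gemini key
  return {
    url: `${baseUrl}/chat`,
    method: "POST",
    headers,
    body: JSON.stringify({ message }),
  };
}
```

The returned object can be passed straight to `fetch(r.url, r)` in a browser or Node client.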

GET
/health

Gateway health check

POST
/onboard

Provision a new user workspace from onboarding data

POST
/chat

Send a chat message. Detects forge intent and skill triggers automatically.

GET
/users/:id/twin

Get life graph context block + goal drift summary

POST
/users/:id/twin/extract

Trigger manual life graph extraction from text

GET
/users/:id/activation

Get pending proactive activation cards

POST
/users/:id/skills/forge

Create a skill from a natural language prompt

GET
/users/:id/skills

List all enabled skills

POST
/users/:id/skills/:skillId/run

Execute a skill by ID

PUT
/users/:id/skills/:skillId

Update a skill via NL or direct patch

DELETE
/users/:id/skills/:skillId

Disable or archive a skill

POST
/users/:id/outcome/simulate

Simulate the downstream effects of a decision

POST
/users/:id/counterparty/analyze

Analyze a wallet, founder, or protocol for trust signals

GET
/users/:id/narrative/radar

Get current narrative attention signals

POST
/users/:id/capture

Store a vision frame (glasses/phone/webcam) into VisionMemory + Life Graph

GET
/users/:id/avatar/config

Get saved avatar config (track, VRM URL, colors)

POST
/users/:id/avatar/config

Save avatar config after creator flow

GET
/users/:id/privacy/settings

Get privacy settings (retention, provider-lane, redaction flags)

PUT
/users/:id/privacy/settings

Update privacy settings

POST
/users/:id/privacy/export

Request full data export

POST
/users/:id/privacy/delete

Delete account and all associated data

GET
/users/:id/privacy/audit

Get audit trail of AI calls and data access events

GET
/market/brief

Solana market brief (no userId needed)

POST
/market/score

Memecoin risk score for a contract address

POST
/swarm

Multi-agent swarm for complex tasks (Staker+)