What is Hue

The pain

You run a public-affairs shop. Fifteen clients, each on their own stack — their PAC in one CRM, email platform in another, ad buying in a third, legislative tracker in a fourth, compliance in a fifth, news alerts in everyone's Gmail. Half the day is tab-hopping. The other half is assembling status updates by hand that go stale by dinner. New business development is a spreadsheet someone forgets to open. Knowledge lives in people's heads and leaves when they do.

Hue is the layer underneath the tabs. One space per client, plus a space for your agency itself. Each space ties together every service that client uses — their CRM, their email list, their ads, their legislative tracker, their compliance system, their news inbox — as one live graph your team and your AI can query. Nothing is copied; the source services keep their data. The graph holds the relationships.

The rest of this doc walks through a week in the life of an operator using Hue, stage by stage, so you can see exactly what it does and when you'd use it.

A week in the life

Monday morning — news triage

You open Claude (or GPT, or whatever LLM your shop routes through MCP) and point it at the agency space. You type: "Scan this morning's Politico Playbook and the Gmail alert folder. For each item that touches a current client, write a one-paragraph brief: what happened, what we already know about it, what action (if any) to take today."

The LLM scans both sources, matches each item against the client spaces in your session, and writes the briefs.

The LLM didn't need a custom "news monitor" product. Gmail was connected. Congress and census plugins were connected. Every client's CRM / email / ads / threads were connected. The LLM stitched them together on demand, because every skill is discoverable via registry.list and every reference knows how to re-fetch itself.
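
For the curious, here's roughly what that looks like on the wire — a Python sketch of the two-tool gateway from the client side. The JSON-RPC envelope is standard MCP; the skill name and argument shape in the invoke call are assumptions for illustration, not Hue's published schema.

```python
import json

def call_tool(name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request, as MCP clients send them."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# Find relevant skills, then invoke one. The skill name and argument
# shape below are illustrative, not Hue's actual schema.
discover_req = call_tool("discover", {"query": "list unread alert emails"})
invoke_req = call_tool("invoke", {
    "skill": "gmail.listMessages",
    "args": {"label": "alerts", "unreadOnly": True},
})

payload = json.loads(invoke_req)
print(payload["params"]["name"])  # → invoke
```

Whatever the skill, the outer message is always one of these two tools — which is the whole point of the gateway.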

Monday afternoon — capture a thread from the article

The most interesting item in the brief is a Pro scoop about a pending HHS proposed rule. Not a current client issue, but a business-development signal: it affects one of your prospects.

You tell the LLM: "Create a thread in the 'BD — pharma' space from that Pro article. Title it with the rule name. Summarize the substance. @mention the bill, every legislator named, and the agency. Tag it prospect:acme-pharma and issue:drug-pricing."

The LLM creates the thread as asked: rule name as title, a summary of the substance, every @mention resolved to a stable ref, both tags applied.

threads.create fires a hook. The graph listens. For every ref:// link in the body, an edge lands: (this thread) --[mentions]--> (that legislator), (this thread) --[mentions]--> (that bill). The tag applications are edges too. Two minutes later when graph.reembedPending runs, each new edge gets a 1536-dim embedding so semantic search can find it by phrase. You never have to look at that thread again. It's in the graph.
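
A minimal sketch of that hook, assuming a (source, relation, target) row shape — the real column layout and ref:// grammar may differ:

```python
import re

REF_PATTERN = re.compile(r"ref://[\w/-]+")

def edges_from_thread(thread_ref: str, body: str, tags: list[str]) -> list[tuple]:
    """Every ref:// link in a thread body becomes a (source, relation, target)
    edge row; tag applications become edge rows too. Illustrative schema."""
    edges = [(thread_ref, "mentions", ref) for ref in REF_PATTERN.findall(body)]
    edges += [(thread_ref, "tagged", f"ref://tags/{t}") for t in tags]
    return edges

body = ("HHS proposed rule on 340B. Affects ref://congress/bill/hr4873 "
        "and ref://congress/member/S000148.")
rows = edges_from_thread("ref://threads/42", body, ["prospect:acme-pharma"])
assert ("ref://threads/42", "mentions", "ref://congress/bill/hr4873") in rows
```

Two mentions plus one tag means three rows land in the space's edges table — each later picked up by graph.reembedPending for its embedding.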

Tuesday — a prospect asks "what would you bring?"

An Acme Pharma VP emails the agency: "We're considering hiring an outside firm on our 340B exposure. What would you bring?"

You open a new LLM conversation in the agency space. "Assemble everything we know internally about Acme Pharma, 340B, and the pharma-pricing landscape over the last six months. Group by source. I want a pitch tomorrow morning."

The LLM searches the agency graph by phrase, walks the edges out from every Acme-touching node, re-fetches each underlying record from its source service, and returns the dossier grouped by source.

Nothing about "Acme" was ever bulk-synced into Hue. Over six months you had read a handful of articles and written a handful of notes. Each one left a trace — a Gmail message, a thread body, a tag — and each trace became an edge. The graph made them reachable; the LLM assembled them in one session.
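
The "walks the edges out" step is an ordinary graph traversal. A toy sketch, with edges as in-memory tuples standing in for rows in the per-space edges table (the real traversal runs in SQL):

```python
from collections import deque

def walk(edges: list[tuple], seeds: set[str], depth: int = 2) -> set[str]:
    """Breadth-first walk over (source, relation, target) rows,
    treating edges as undirected for reachability."""
    neighbours: dict[str, set[str]] = {}
    for src, _rel, dst in edges:
        neighbours.setdefault(src, set()).add(dst)
        neighbours.setdefault(dst, set()).add(src)
    seen, frontier = set(seeds), deque((s, 0) for s in seeds)
    while frontier:
        node, d = frontier.popleft()
        if d == depth:
            continue
        for nxt in neighbours.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return seen

edges = [
    ("ref://threads/42", "mentions", "ref://orgs/acme-pharma"),
    ("ref://threads/42", "mentions", "ref://congress/bill/hr4873"),
    ("ref://gmail/msg/9", "mentions", "ref://orgs/acme-pharma"),
]
reachable = walk(edges, {"ref://orgs/acme-pharma"})
assert "ref://gmail/msg/9" in reachable
```

Seeding from the Acme node reaches the bill and the Gmail message in two hops — exactly the "handful of traces" the dossier was assembled from.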

Wednesday — a new client signs; provision their space

Acme signed. You provision their space: a new sub-space under the agency, one connection per service Acme uses, and explicit access grants for the staffers on the account.

That's the whole setup. Acme's data stays in Acme's services; Hue knows where to find each record and which staffer has which tickbox.
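
In skill terms, provisioning is a short sequence of invoke calls. The skill names and argument shapes below are assumptions for illustration — only threads.create and spaces.listPinned appear elsewhere in this doc:

```python
# Hypothetical provisioning sequence; not Hue's actual API surface.
provisioning_plan = [
    ("spaces.create", {"name": "Acme Pharma", "parent": "agency"}),
    ("services.connect", {"space": "acme-pharma", "service": "crm"}),
    ("services.connect", {"space": "acme-pharma", "service": "gmail"}),
    ("services.connect", {"space": "acme-pharma", "service": "ads"}),
    ("access.grant", {"space": "acme-pharma",
                      "user": "staffer@agency.example",
                      "skills": ["crm.listContacts", "threads.create"]}),
]

def describe(plan: list[tuple]) -> list[str]:
    """The skill names, in order — one connection per service, per space."""
    return [skill for skill, _args in plan]

assert describe(provisioning_plan)[0] == "spaces.create"
```

Note there is no "import Acme's data" step anywhere in the sequence: connecting a service registers pointers, not copies.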

Thursday — build a campaign

The Acme team kicks off a pharma-pricing advocacy push. First meeting: "Who are our targets, what assets do we have, what's the sequence?"

You type into the Acme space: "For the pharma-pricing push, pull every swing legislator on H.R. 4873 and cross-reference with our assets — PAC balance, email list by district, ad-account availability, op-ed pipeline, coalition contacts. Give me a ranked deployment plan per target."

The LLM pulls the swing legislators from the congress service, cross-references each against the assets held in Acme's connected services, and returns the ranked plan per target.

You spend an hour refining the plan with your team in a thread. Every tweak you type — "let's pull back on Sen. Brown, too close to AHIP" — you @mention Sen. Brown, which adds another edge to the graph. Future questions about Sen. Brown surface this thread automatically.

Thursday afternoon — activate

Your team runs the plan — the sends, the buys, the placements all go out as skill calls through the connected services.

Each activation leaves its own trace. The graph reflects state-of-the-world without anyone writing "record this in our tracker" — the activation is the record.

Friday — report

CEO wants an end-of-week status update across every client.

You open the agency space. "End-of-week client status report. For each active client, summarize what we did this week, what happened in the news on their issues, what metrics moved, what's due next week."

The LLM walks the agency's sub-space tree, touches each active client's graph, summarizes, and writes the report. Each paragraph cites the specific thread, email, or activation it's summarizing; the CEO can click through to verify.

Permission guarantee: if a less-privileged staffer runs this query, they only see sub-spaces they've been explicitly added to. Hue cannot leak Client A data into a Client B report because the AI literally cannot reach Client A's space from a Client B session — workspace_access is per-(human, space) and never cascades.
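
The rule reduces to a set-membership check. A sketch, with workspace_access as an in-memory set standing in for the real table:

```python
# Per-(human, space) access: membership is an explicit pair and never
# cascades down the sub-space tree. Table shape is illustrative.
workspace_access = {
    ("dana@agency.example", "agency"),
    ("dana@agency.example", "client-a"),
    # no row for ("dana@agency.example", "client-b")
}

def can_reach(user: str, space: str) -> bool:
    """No role ladders, no inheritance: a row exists or access is denied."""
    return (user, space) in workspace_access

assert can_reach("dana@agency.example", "client-a")
assert not can_reach("dana@agency.example", "client-b")
```

Because there is nothing to inherit, there is nothing to accidentally inherit — the Client B session simply has no row pointing at Client A.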

The three invariants that make this work for an agency

  1. No cross-client bleed. Each client lives in its own Postgres schema. An AI session in Client A's space cannot reach Client B's data. Staffers are added to specific spaces with specific ticked skills — no role ladders, no accidental inheritance.
  2. Data stays at source. Hue never copies a CRM record, an email, or a campaign metric into its own store. The graph stores addresses; every query re-fetches from the origin service. Disconnect a service and the edges remain but the pointers stop resolving — no orphaned data pile.
  3. Every action is logged. Every skill call is one row in a hash-chained audit log — who did what, when, in which space, with which inputs, producing what output. When a client asks who ran what against their data last month, the answer is one query.
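
The hash chain in invariant 3 is the standard construction: each row's hash covers the previous row's hash, so editing any historical row invalidates everything after it. A sketch with illustrative field names:

```python
import hashlib, json

def append_entry(log: list[dict], entry: dict) -> list[dict]:
    """Append an audit row whose hash covers the previous row's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({**entry, "prev": prev_hash}, sort_keys=True)
    return log + [{**entry, "prev": prev_hash,
                   "hash": hashlib.sha256(payload.encode()).hexdigest()}]

def verify(log: list[dict]) -> bool:
    """Recompute every hash in order; any mismatch breaks the chain."""
    prev = "0" * 64
    for row in log:
        body = {k: v for k, v in row.items() if k != "hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if row["prev"] != prev or digest != row["hash"]:
            return False
        prev = row["hash"]
    return True

log: list[dict] = []
log = append_entry(log, {"who": "dana", "skill": "gmail.listMessages",
                         "space": "client-a"})
log = append_entry(log, {"who": "dana", "skill": "threads.create",
                         "space": "client-a"})
assert verify(log)
log[0]["who"] = "mallory"   # tamper with history…
assert not verify(log)      # …and the chain no longer verifies
```

That property is what makes "who ran what against their data last month" a trustworthy one-query answer rather than a mutable table.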

The five primitives, very briefly

If you read docs/architecture.md you'll see these in detail; here's the thirty-second version. Think of the architecture as a three-layer mind — the cortex sits on top of the graph which sits on top of the services.

  1. Services. Each connected platform (Gmail, CRM, email tool, ads, congress, census, compliance) is a service whose typed functions ("skills") the LLM can call. One connection per space per service. Ground truth — every other layer re-fetches from here.
  2. Threads. Where your team captures what's happening. Prose with @mentions that resolve to stable refs. Writing a thread is how you feed the graph without thinking about it.
  3. The graph. Every user-authored ref-to-ref relationship — who mentioned whom, what touches what, who owns what, who's in which district — lands as one row in an edges table per space. Indexed for exact traversal and cosine similarity. The middle layer.
  4. The context lake (cortex). The top layer. Pin a reference — a legislator, a district, a bill, a PAC — and it becomes a space-level recommendation the LLM sees as "this is what matters here." Every list skill reads the cortex as a soft default for its filters, so once you pin a congressional district to a campaign space, every listMembers / listAssets / searchBills call on that space defaults to that district. Pin = soft default; scope = hard lock. Both live in the same store; opposite semantics.
  5. The LLM. Claude, GPT, or a local model over MCP. Reads the cortex first (spaces.listPinned), walks the graph when it needs to connect dots, hydrates nodes from the source services, synthesizes. The AI is whoever you bring; the infrastructure is Hue.
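
The pin/scope distinction in primitive 4 is easy to state in code. A sketch of how a list skill might merge its filters — field names are illustrative:

```python
def effective_filters(user_args: dict, pinned: dict, scoped: dict) -> dict:
    """Pins fill gaps the caller left open; scope locks always win."""
    merged = {**pinned, **user_args}   # soft default: caller may override
    merged.update(scoped)              # hard lock: caller may not
    return merged

pinned = {"district": "OH-11"}     # soft default from a pinned ref
scoped = {"space": "acme-pharma"}  # hard lock (!important)

assert effective_filters({}, pinned, scoped)["district"] == "OH-11"
assert effective_filters({"district": "OH-03"}, pinned, scoped)["district"] == "OH-03"
assert effective_filters({"space": "client-b"}, pinned, scoped)["space"] == "acme-pharma"
```

Same store, opposite merge order — which is all "opposite semantics" means in practice.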

What your LLM sees when it connects

Every LLM session over MCP hits the same flow. No custom glue, no per-space wiring — the LLM lands, gets a compressed surface (two tools), and uses them to pull the space's stylesheet.

         ┌─────────────────────────────┐
         │  LLM connects to /api/mcp   │
         │  Bearer token (OAuth 2.1)   │
         └──────────────┬──────────────┘
                        │
                        ▼
         ┌─────────────────────────────┐
         │     method: initialize      │
         │   ─────────────────────     │
         │   serverInfo { name: "j" }  │
         │   capabilities { tools: {} }│
         └──────────────┬──────────────┘
                        │
                        ▼
         ┌─────────────────────────────┐
         │     method: tools/list      │
         │   ─────────────────────     │
         │                             │
         │   ┌───────────────────┐     │    Gateway pattern —
         │   │  discover(...)    │     │    222 skills hidden
         │   │  invoke(skill,… ) │     │    behind 2 meta-tools
         │   └───────────────────┘     │    so MCP clients don't
         │                             │    choke on tool count.
         └──────────────┬──────────────┘
                        │
                        ▼
         ┌─────────────────────────────┐
         │  invoke: spaces.orient      │   ← the session seed
         │     ─────────────────       │
         │  returns a CSS-like         │
         │  stylesheet of the space:   │
         │                             │
         │  @service congress {        │
         │    congress.member {        │
         │      pinned: ref://…;       │   ← soft defaults
         │      !important: ref://…;   │   ← scope hard-locks
         │      fetchSkill: …getMember │
         │    }                        │
         │  }                          │
         │  @service census { … }      │
         │  @service gmail { … }       │
         └──────────────┬──────────────┘
                        │
                        ▼
    ┌───────────────────┴──────────────────┐
    │                                      │
    ▼                                      ▼
┌──────────────────────┐        ┌──────────────────────┐
│  discover / invoke   │        │  graph.walk /        │
│  any service skill   │        │  searchSemantic      │
│  (congress.*, …)     │        │  from pinned refs    │
└──────────┬───────────┘        └──────────┬───────────┘
           │                               │
           └───────────────┬───────────────┘
                           │
                           ▼
              ┌─────────────────────────┐
              │  Executor applies:      │
              │   • access check        │
              │   • scope substitute    │
              │   • classification +    │
              │     PII redaction       │
              │   • hash-chained audit  │
              └─────────────────────────┘

Why the pattern matters: every session starts blind and ends oriented in one call. The LLM learns (a) the space's editorial voice (pinned refs), (b) its hard boundaries (!important scope locks), (c) every targetType it can pin against, and (d) every service's fetchSkill — all before touching any real data. The rest of the session is just calling invoke with specific skills, and every call runs through the same security + audit pipeline regardless of whether it's Gmail, Congress.gov, or your in-house CRM.
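
For reference, here are the first two messages in the diagram as raw JSON-RPC 2.0. The envelope follows the MCP spec; the Hue-specific part — exactly two meta-tools in the tools/list result — comes from the diagram, not a published schema.

```python
import json

initialize = {"jsonrpc": "2.0", "id": 1, "method": "initialize",
              "params": {"protocolVersion": "2024-11-05",
                         "capabilities": {},
                         "clientInfo": {"name": "my-llm-client",
                                        "version": "0.1"}}}
tools_list = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

# What the gateway answers with: two meta-tools, however many
# skills sit behind them.
tools_result = {"tools": [{"name": "discover"}, {"name": "invoke"}]}

assert len(tools_result["tools"]) == 2
assert json.loads(json.dumps(initialize))["method"] == "initialize"
```

Everything after this handshake — orient, discover, invoke — travels inside `tools/call` envelopes like the ones above.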

Next steps

Ready to wire your LLM? Copy-pastable MCP + OAuth URLs →