What is Hue
The pain
You run a public-affairs shop. Fifteen clients, each on their own stack — their PAC in one CRM, email platform in another, ad buying in a third, legislative tracker in a fourth, compliance in a fifth, news alerts in everyone's Gmail. Half the day is tab-hopping. The other half is assembling status updates by hand that go stale by dinner. New business development is a spreadsheet someone forgets to open. Knowledge lives in people's heads and leaves when they do.
Hue is the layer underneath the tabs. One space per client, plus a space for your agency itself. Each space ties together every service that client uses — their CRM, their email list, their ads, their legislative tracker, their compliance system, their news inbox — as one live graph your team and your AI can query. Nothing is copied; the source services keep their data. The graph holds the relationships.
The rest of this doc walks through a week in the life of an operator using Hue, stage by stage, so you can see exactly what it does and when you'd use it.
A week in the life
Monday morning — news triage
You open Claude (or GPT, or whatever LLM your shop routes through MCP) and point it at the agency space. You type: "Scan this morning's Politico Playbook and the Gmail alert folder. For each item that touches a current client, write a one-paragraph brief: what happened, what we already know about it, what action (if any) to take today."
The LLM:
- Reads the Gmail messages from Politico, Axios, Bloomberg Gov since last night.
- Pulls out the entities it sees — legislators, bills, agencies, companies, districts.
- Resolves each with typed calls: `congress.searchBills("H.R. 4873")` for a stable bill id, `congress.listMembers({state: "AR"})` for a legislator id.
- For each resolved entity, walks the graph one hop out: does a client care about this? Did we meet with this legislator? Is there a thread tagged to this bill? Do we have an ad flight in that district?
- Writes a brief per item. You read it in three minutes. It flags two items for follow-up; you delegate the third.
The LLM didn't need a custom "news monitor" product. Gmail was connected. Congress and census plugins were connected. Every client's CRM / email / ads / threads were connected. The LLM stitched them together on demand because every skill is discoverable via `registry.list` and every reference has a way to re-fetch.
Monday afternoon — capture a thread from the article
The most interesting item in the brief is a Pro scoop about a pending HHS proposed rule. Not a current client issue, but a business-development signal: it affects one of your prospects.
You tell the LLM: "Create a thread in the 'BD — pharma' space from that Pro article. Title it with the rule name. Summarise the substance. @mention the bill, every legislator named, and the agency. Tag it prospect:acme-pharma and issue:drug-pricing."
The LLM:
- Calls `threads.create({workspaceId: "BD-pharma-space", title, body})`. The body uses the `[label](ref://targetType/id)` link syntax for every legislator and bill — the LLM resolved each via `congress.*` before dropping them in.
- Calls `tags.apply` twice.
- Returns the new thread id.
`threads.create` fires a hook. The graph listens. For every `ref://` link in the body, an edge lands: (this thread) --[mentions]--> (that legislator), (this thread) --[mentions]--> (that bill). The tag applications are edges too. Two minutes later, when `graph.reembedPending` runs, each new edge gets a 1536-dim embedding so semantic search can find it by phrase. You never have to look at that thread again. It's in the graph.
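The hook that turns ref:// links into mention edges can be sketched in a few lines. This is a minimal illustration, not Hue's actual implementation — the `Edge` shape and function name are assumptions; only the `[label](ref://targetType/id)` link syntax comes from the doc.

```typescript
// Hypothetical sketch: extract [label](ref://targetType/id) links from a
// thread body and emit one "mentions" edge per link.
interface Edge {
  sourceType: string;
  sourceId: string;
  relation: string;
  targetType: string;
  targetId: string;
}

const REF_LINK = /\[([^\]]+)\]\(ref:\/\/([^/)]+)\/([^)]+)\)/g;

function edgesFromThreadBody(threadId: string, body: string): Edge[] {
  const edges: Edge[] = [];
  for (const m of body.matchAll(REF_LINK)) {
    const [, , targetType, targetId] = m; // skip full match and label
    edges.push({
      sourceType: "thread",
      sourceId: threadId,
      relation: "mentions",
      targetType,
      targetId,
    });
  }
  return edges;
}

// Example: one legislator and one bill mentioned in a thread body.
const body =
  "[Sen. Brown](ref://congress.member/B000944) opposes " +
  "[H.R. 4873](ref://congress.bill/hr4873-118).";
const edges = edgesFromThreadBody("thread-42", body);
// edges[0]: thread-42 --[mentions]--> congress.member/B000944
```

The point is that edge creation is a side effect of writing prose — the author never fills in a form; the link syntax is the form.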
Tuesday — a prospect asks "what would you bring?"
An Acme Pharma VP emails the agency: "We're considering hiring an outside firm on our 340B exposure. What would you bring?"
You open a new LLM conversation in the agency space. "Assemble everything we know internally about Acme Pharma, 340B, and the pharma-pricing landscape over the last six months. Group by source. I want a pitch tomorrow morning."
The LLM:
- `graph.searchSemantic({query: "acme pharma 340B"})` — ranks edges by fit.
- `graph.walk` from the top matches, two hops. Returns: the thread you wrote yesterday, three earlier clippings in Gmail, a competitor's op-ed forwarded by a journalist you know, two internal notes from meetings on adjacent issues, the full bill record for H.R. 4873.
- `congress.getBill` + `congress.getMember` for the bills and legislators — live status.
- `census.getData({geography: "state:AR"})` because one sponsor is from Arkansas and demographics matter for the pitch.
- Writes a 600-word brief with citations. You read, edit, ship.
Nothing about "Acme" was ever bulk-synced into Hue. Over six months you had read a handful of articles and written a handful of notes. Each one left a trace — a Gmail message, a thread body, a tag — and each trace became an edge. The graph made them reachable; the LLM assembled them in one session.
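The semantic-ranking step is ordinary cosine similarity over the edge embeddings. A minimal sketch, assuming tiny 3-dim vectors in place of Hue's real 1536-dim embeddings, with all names illustrative:

```typescript
// Score stored edge embeddings against a query embedding by cosine
// similarity and return the top matches. embed() calls to a real model
// are assumed to have happened already.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

interface EmbeddedEdge { id: string; embedding: number[]; }

function searchSemantic(query: number[], edges: EmbeddedEdge[], topK = 3) {
  return edges
    .map((e) => ({ id: e.id, score: cosine(query, e.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, topK);
}

const stored: EmbeddedEdge[] = [
  { id: "thread-acme-340b", embedding: [0.9, 0.1, 0.0] },
  { id: "gmail-clipping", embedding: [0.7, 0.3, 0.1] },
  { id: "unrelated-note", embedding: [0.0, 0.1, 0.9] },
];
const hits = searchSemantic([1, 0, 0], stored, 2);
// best match first: "thread-acme-340b"
```

In production this runs as an index lookup inside Postgres rather than a linear scan, but the ranking semantics are the same.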
Wednesday — a new client signs; provision their space
Acme signed. You provision their space:
- `spaces.create({name: "Acme Pharma", parentId: "<agency-space>"})` — a sub-space under the agency space that inherits every service connection and config you already set up.
- Wire Acme's own services: Gmail for inbox, iCloud for calendar, Asana for task tracking, GA + GSC for their web presence, Congress.gov + OpenStates + Federal Register for legislative tracking, Census for demographic lookups, DDC Web Library for advocacy components. Each is a click at `/spaces/<acme>/services/connect`. For each one, you tick the skills on the per-human access grid that you want staffers and the AI to have. (Future first-party connectors — CRM, email platform, ad network, compliance tooling — follow the same pattern: one plugin folder, a `connection` block in its PluginDef, and it's wired.)
- Toggle any services Acme doesn't use via the Services tab. Pin any scope values via the Graph tab — for example, `scope.ga.propertyId` locked to Acme's one GA property, so every `ga.*` call on the Acme space is confined to that property.
That's the whole setup. Acme's data stays in Acme's services; Hue knows where to find each record and which staffer has which tickbox.
Thursday — build a campaign
The Acme team kicks off a pharma-pricing advocacy push. First meeting: "Who are our targets, what assets do we have, what's the sequence?"
You type into the Acme space: "For the pharma-pricing push, pull every swing legislator on H.R. 4873 and cross-reference with our assets — PAC balance, email list by district, ad-account availability, op-ed pipeline, coalition contacts. Give me a ranked deployment plan per target."
The LLM:
- `graph.searchSemantic("pharma pricing H.R. 4873")` → bill node.
- `congress.getBill(hr-4873)` → sponsor, cosponsors, committee.
- For each swing legislator, walks outward: `congress.getMember({bioguideId})` for committee + office; `congress.listCosponsors({qualifiedId})` for the bill's coalition; `census.getData({level: "congressional-district", state, cd})` for district demographics; `openstates.searchBills({jurisdiction})` for state-level companion bills; `gmail.searchMessages({query: member.state})` for any correspondence; `threads.related({targetType: "congress.member", targetId: bioguideId})` for internal notes on that legislator.
- Pairs targets with the assets that can reach them. Returns a matrix.
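The per-target walk is a bounded breadth-first traversal over the edges table. A minimal in-memory sketch — Hue's real `graph.walk` runs against a per-space Postgres table, and the node ids and edge shape here are illustrative:

```typescript
// Hypothetical sketch: collect every node reachable within maxHops of a
// start node, treating edges as undirected for discovery.
interface GraphEdge { source: string; target: string; relation: string; }

function walk(edges: GraphEdge[], start: string, maxHops: number): Set<string> {
  const seen = new Set<string>([start]);
  let frontier = [start];
  for (let hop = 0; hop < maxHops; hop++) {
    const next: string[] = [];
    for (const node of frontier) {
      for (const e of edges) {
        const neighbor =
          e.source === node ? e.target : e.target === node ? e.source : null;
        if (neighbor && !seen.has(neighbor)) {
          seen.add(neighbor);
          next.push(neighbor);
        }
      }
    }
    frontier = next;
  }
  seen.delete(start); // report only what the walk discovered
  return seen;
}

const campaignEdges: GraphEdge[] = [
  { source: "bill:hr4873", target: "member:B000944", relation: "sponsored-by" },
  { source: "thread:plan", target: "member:B000944", relation: "mentions" },
  { source: "thread:plan", target: "district:AR-3", relation: "mentions" },
];
const reachable = walk(campaignEdges, "bill:hr4873", 2);
// two hops out: the sponsor, then the thread that mentions them —
// the district is three hops away and stays out of scope
```

Bounding the hop count is what keeps "pull everything related" queries from dragging in the whole graph.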
You spend an hour refining the plan with your team in a thread. Every tweak you type — "let's pull back on Sen. Brown, too close to AHIP" — you @mention Sen. Brown, which adds another edge to the graph. Future questions about Sen. Brown surface this thread automatically.
Thursday afternoon — activate
Your team runs the plan:
- A teammate drops a thread: "Sent the AR-3 segmented email this morning — results at the end of the week." The thread's `@mention` of the congressional district adds an edge. Real campaign-send tracking will land when a first-party email-platform connector ships; until then the thread is the activation record.
- Ad-flight updates live in the ad platform's own UI; a teammate drops a thread noting the update and @-mentions the district. Same edge story.
- PAC treasurer wires $3,300 to Sen. Y through the CRM. Edge: (PAC) --[contributed to]--> (legislator).
- Coalition partner agrees to co-sign. You `@mention` them in a thread. Edge: (thread) --[mentions]--> (partner).
Each activation leaves its own trace. The graph reflects state-of-the-world without anyone writing "record this in our tracker" — the activation is the record.
Friday — report
CEO wants an end-of-week status update across every client.
You open the agency space. "End-of-week client status report. For each active client, summarize what we did this week, what happened in the news on their issues, what metrics moved, what's due next week."
The LLM walks the agency's sub-space tree, touches each active client's graph, summarizes, and writes the report. Each paragraph cites the specific thread, email, or activation it's summarizing; the CEO can click through to verify.
Permission guarantee: if a less-privileged staffer runs this query, they only see sub-spaces they've been explicitly added to. Hue cannot leak Client A data into a Client B report because the AI literally cannot reach Client A's space from a Client B session — workspace_access is per-(human, space) and never cascades.
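The non-cascading rule is simple enough to sketch: `workspace_access` is a flat set of (human, space) pairs, and a check either finds the exact pair or fails — no role ladder, no parent lookup. The shapes and names below are illustrative, not Hue's schema:

```typescript
// Hypothetical sketch of per-(human, space) access with no inheritance.
const workspaceAccess = new Set<string>([
  "casey:agency",
  "casey:client-a",
  "jordan:client-b",
]);

function canReach(human: string, spaceId: string): boolean {
  // Either the exact pair was granted or the answer is no. Membership in
  // a parent space deliberately proves nothing about a child space.
  return workspaceAccess.has(`${human}:${spaceId}`);
}

// jordan works Client B but was never added to Client A:
const ok = canReach("jordan", "client-b");      // true
const blocked = canReach("jordan", "client-a"); // false
```

Because the check is a flat lookup, there is no path by which a Client B session accumulates Client A privileges over time.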
The three invariants that make this work for an agency
- No cross-client bleed. Each client lives in its own Postgres schema. An AI session in Client A's space cannot reach Client B's data. Staffers are added to specific spaces with specific ticked skills — no role ladders, no accidental inheritance.
- Data stays at source. Hue never copies a CRM record, an email, or a campaign metric into its own store. The graph stores addresses; every query re-fetches from the origin service. Disconnect a service and the edges remain but the pointers stop resolving — no orphaned data pile.
- Every action is logged. Every skill call is one row in a hash-chained audit log — who did what, when, in which space, with which inputs, producing what output. When a client asks who ran what against their data last month, the answer is one query.
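The hash-chain mechanic is standard: each row's hash covers its own fields plus the previous row's hash, so altering any row invalidates every hash after it. A self-contained sketch using Node's built-in crypto — field names and function shapes are illustrative:

```typescript
import { createHash } from "node:crypto";

interface AuditRow {
  actor: string;
  skill: string;
  spaceId: string;
  at: string;
  prevHash: string;
  hash: string;
}

// Append a row whose hash binds the entry to its predecessor.
function appendRow(
  log: AuditRow[],
  entry: Omit<AuditRow, "prevHash" | "hash">
): AuditRow[] {
  const prevHash = log.length ? log[log.length - 1].hash : "genesis";
  const hash = createHash("sha256")
    .update(JSON.stringify({ ...entry, prevHash }))
    .digest("hex");
  return [...log, { ...entry, prevHash, hash }];
}

// Recompute every hash; any edit anywhere breaks the chain.
function verifyChain(log: AuditRow[]): boolean {
  return log.every((row, i) => {
    const prevHash = i === 0 ? "genesis" : log[i - 1].hash;
    const { hash, prevHash: _, ...entry } = row;
    const expected = createHash("sha256")
      .update(JSON.stringify({ ...entry, prevHash }))
      .digest("hex");
    return row.prevHash === prevHash && row.hash === expected;
  });
}

let log: AuditRow[] = [];
log = appendRow(log, {
  actor: "casey",
  skill: "gmail.searchMessages",
  spaceId: "acme",
  at: "2024-06-07T09:00Z",
});
log = appendRow(log, {
  actor: "ai",
  skill: "graph.walk",
  spaceId: "acme",
  at: "2024-06-07T09:01Z",
});
// verifyChain(log) holds; mutate any field and it fails
```

This is why "who ran what last month" is one query: the log is both the record and its own tamper-evidence.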
The five primitives, very briefly
If you read docs/architecture.md you'll see these in detail; here's the thirty-second version. Think of the architecture as a three-layer mind — the cortex sits on top of the graph which sits on top of the services.
- Services. Each connected platform (Gmail, CRM, email tool, ads, congress, census, compliance) is a service whose typed functions ("skills") the LLM can call. One connection per space per service. Ground truth — every other layer re-fetches from here.
- Threads. Where your team captures what's happening. Prose with `@mentions` that resolve to stable refs. Writing a thread is how you feed the graph without thinking about it.
- The graph. Every user-authored ref-to-ref relationship — who mentioned whom, what touches what, who owns what, who's in which district — lands as one row in an `edges` table per space. Indexed for exact traversal and cosine similarity. The middle layer.
- The context lake (cortex). The top layer. Pin a reference — a legislator, a district, a bill, a PAC — and it becomes a space-level recommendation the LLM sees as "this is what matters here." Every list skill reads the cortex as a soft default for its filters, so once you pin a congressional district to a campaign space, every `listMembers` / `listAssets` / `searchBills` call on that space defaults to that district. Pin = soft default; scope = hard lock. Both live in the same store; opposite semantics.
- The LLM. Claude, GPT, or a local model over MCP. Reads the cortex first (`spaces.listPinned`), walks the graph when it needs to connect dots, hydrates nodes from the source services, synthesises. The AI is whoever you bring; the infrastructure is Hue.
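"Pin = soft default; scope = hard lock" reduces to merge order when a skill's filters are resolved. A minimal sketch, with all names (`resolveFilters`, the config shape) illustrative rather than Hue's real API:

```typescript
// Hypothetical filter resolution: a pinned value fills a filter the caller
// left empty; a scoped value overrides the caller no matter what.
interface SpaceConfig {
  pinned: Record<string, string>; // soft defaults
  scoped: Record<string, string>; // hard locks
}

function resolveFilters(
  space: SpaceConfig,
  callArgs: Record<string, string>
): Record<string, string> {
  // Merge order is the whole policy: pins lose to caller args,
  // scope beats both.
  return { ...space.pinned, ...callArgs, ...space.scoped };
}

const campaign: SpaceConfig = {
  pinned: { district: "AR-3" },          // pinned congressional district
  scoped: { propertyId: "GA-ACME-001" }, // locked GA property
};

// Caller omitted district, so the pin fills it in; the caller tried to
// point at another GA property, but the scope lock wins.
const filters = resolveFilters(campaign, { propertyId: "GA-OTHER" });
// { district: "AR-3", propertyId: "GA-ACME-001" }
```

One merge expression gives you both "editorial voice" (pins) and "hard boundary" (scope) without two code paths.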
What your LLM sees when it connects
Every LLM session over MCP hits the same flow. No custom glue, no per-space wiring — the LLM lands, gets a compressed surface (two tools), and uses them to pull the space's stylesheet.
┌─────────────────────────────┐
│ LLM connects to /api/mcp │
│ Bearer token (OAuth 2.1) │
└──────────────┬──────────────┘
│
▼
┌─────────────────────────────┐
│ method: initialize │
│ ───────────────────── │
│ serverInfo { name: "j" } │
│ capabilities { tools: {} }│
└──────────────┬──────────────┘
│
▼
┌─────────────────────────────┐
│ method: tools/list │
│ ───────────────────── │
│ │
│ ┌───────────────────┐ │ Gateway pattern —
│ │ discover(...) │ │ 222 skills hidden
│ │ invoke(skill,… ) │ │ behind 2 meta-tools
│ └───────────────────┘ │ so MCP clients don't
│ │ choke on tool count.
└──────────────┬──────────────┘
│
▼
┌─────────────────────────────┐
│ invoke: spaces.orient │ ← the session seed
│ ───────────────── │
│ returns a CSS-like │
│ stylesheet of the space: │
│ │
│ @service congress { │
│ congress.member { │
│ pinned: ref://…; │ ← soft defaults
│ !important: ref://…; │ ← scope hard-locks
│ fetchSkill: …getMember │
│ } │
│ } │
│ @service census { … } │
│ @service gmail { … } │
└──────────────┬──────────────┘
│
▼
┌───────────────────┴──────────────────┐
│ │
▼ ▼
┌──────────────────────┐ ┌──────────────────────┐
│ discover / invoke │ │ graph.walk / │
│ any service skill │ │ searchSemantic │
│ (congress.*, …) │ │ from pinned refs │
└──────────┬───────────┘ └──────────┬───────────┘
│ │
└───────────────┬───────────────┘
│
▼
┌─────────────────────────┐
│ Executor applies: │
│ • access check │
│ • scope substitute │
│ • classification + │
│ PII redaction │
│ • hash-chained audit │
└─────────────────────────┘
Why the pattern matters: every session starts blind and ends oriented in one call. The LLM learns (a) the space's editorial voice (pinned refs), (b) its hard boundaries (!important scope locks), (c) every targetType it can pin against, and (d) every service's fetchSkill — all before touching any real data. The rest of the session is just calling invoke with specific skills, and every call runs through the same security + audit pipeline regardless of whether it's Gmail, Congress.gov, or your in-house CRM.
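The gateway itself is a small dispatch layer. A sketch of the two meta-tools over a skill registry — the registry contents and function shapes here are stand-ins, not Hue's real skills:

```typescript
// Hypothetical sketch: 222 skills hidden behind two meta-tools, so the MCP
// client only ever sees `discover` and `invoke`.
type Skill = (args: Record<string, unknown>) => unknown;

const registry = new Map<string, Skill>([
  ["congress.getBill", ({ id }) => ({ id, title: "Stub bill record" })],
  ["spaces.listPinned", () => ["ref://congress.bill/hr4873-118"]],
  // ...220 more skills in the real registry
]);

// Meta-tool 1: discover — list skill names matching a query string.
function discover(query: string): string[] {
  return [...registry.keys()].filter((name) => name.includes(query));
}

// Meta-tool 2: invoke — dispatch to a registered skill by name. In Hue this
// is also where access checks, scope substitution, and audit logging run.
function invoke(skill: string, args: Record<string, unknown> = {}): unknown {
  const fn = registry.get(skill);
  if (!fn) throw new Error(`unknown skill: ${skill}`);
  return fn(args);
}

const found = discover("congress"); // ["congress.getBill"]
const bill = invoke("congress.getBill", { id: "hr4873-118" }) as { id: string };
```

Because every call funnels through one `invoke`, the security and audit pipeline wraps a single choke point instead of 222 separate tool handlers.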
Next steps
- Create a client space and connect their first service (usually email or CRM).
- Walk the generated graph — `/spaces/[id]/graph` shows edges populate as your team writes threads and runs skills.
- Build Guides if you're wiring a custom in-house service.
- Architecture reference if you want the schema, index, and traversal details.