The privacy and context-minimization architecture used across LUMARA and SwarmSpace. This page is for developers building plugins and for security reviewers evaluating the platform. PRISM is a proper noun, not an acronym.
PRISM is the privacy architecture that governs how user data flows through SwarmSpace and LUMARA. Its core principle: plugins receive the minimum necessary context to perform their function. The goal is structural enforcement wherever possible — the system is being built so that undeclared data paths are not available to plugin code, not merely disallowed by policy. Some enforcement layers described below are operational today; others are planned and clearly marked.
Three enforcement boundaries work together:
| Boundary | Role |
|---|---|
| Manifest declaration | The plugin declares required context fields in `privacy_data_required`, a `string[]` of dot-notation CHRONICLE field names. **Current:** the router enforces this; only declared fields are forwarded to the plugin, and calls to non-ANONYMOUS plugins are blocked unless the user provides `_prism_consent`. |
| Sandbox enforcement | **Current:** a Cloudflare Worker (V8 isolate) runs the invocation. These are static, pre-deployed workers, not dynamically created per request. **Planned:** context injection (passing only the declared subset into the `context` parameter); the router currently forwards parameters verbatim. |
| Network control | **Planned:** `globalOutbound: null` (default posture) will block all outbound network access except hostnames allowlisted in `network_domains` on the manifest. This layer is not yet enforced; workers currently have standard Cloudflare Worker outbound access. |
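The two manifest fields named above anchor all three boundaries. A minimal sketch of what a manifest fragment might look like; only `privacy_data_required` and `network_domains` come from this document, while the `name` key, the interface shape, and all values are illustrative assumptions:

```typescript
// Hypothetical manifest fragment. Only privacy_data_required and
// network_domains are documented fields; everything else is assumed.
interface PluginManifest {
  name: string;
  privacy_data_required: string[]; // dot-notation CHRONICLE field names
  network_domains: string[]; // allowlisted outbound hostnames (enforcement planned)
}

const manifest: PluginManifest = {
  name: "weather-helper", // illustrative
  privacy_data_required: ["profile.location.city"], // illustrative field name
  network_domains: ["api.weather.example"],
};
```

An empty `privacy_data_required` array places the plugin in the ANONYMOUS tier described below.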
**Structural vs. policy:** context shape, sandbox APIs, outbound fetch allowlists, and credential injection at the network layer are the planned structural enforcement targets. **Current:** V8 isolate sandboxing is operational. **Planned:** context filtering, outbound allowlists, and boundary credential injection are not yet enforced. Safety review, Developer Agreement terms, runtime anomaly review, and third-party API compliance are policy and contract; they complement structural enforcement rather than replace it.
**Planned architecture:** the following describes the intended context minimization flow. The router currently forwards invocation parameters verbatim to plugin workers without field-level filtering.
The worker's `context` argument is built from `privacy_data_required` in the plugin manifest. Example request:

```jsonc
// privacy_data_required: [] (ANONYMOUS tier)
{
  "query": "What's the weather like this afternoon?",
  "metadata": {
    "caller": "lumara",
    "request_id": "7b2c9f1a-4e8d-4b3c-9f01-123456789abc",
    "tier": "free"
  }
}
```
**Current:** the router enforces context field filtering: only fields declared in `privacy_data_required` are forwarded. Undeclared fields are stripped before the request reaches the worker. Plugins with `privacy_data_required: []` (ANONYMOUS tier) receive no personal context.
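The filtering step can be sketched as a pure function that copies only the declared dot-notation paths into a fresh object. This is an illustrative sketch, not the actual router implementation; the function and helper names are assumptions:

```typescript
// Sketch of router-side context filtering over dot-notation field names,
// as declared in privacy_data_required. Hypothetical implementation.
type Ctx = Record<string, unknown>;

// Read a nested value by dot-notation path; undefined if any segment is missing.
function getPath(obj: Ctx, path: string): unknown {
  return path.split(".").reduce<unknown>(
    (cur, key) => (cur && typeof cur === "object" ? (cur as Ctx)[key] : undefined),
    obj,
  );
}

// Write a nested value by dot-notation path, creating objects as needed.
function setPath(obj: Ctx, path: string, value: unknown): void {
  const keys = path.split(".");
  let cur = obj;
  for (const key of keys.slice(0, -1)) {
    if (typeof cur[key] !== "object" || cur[key] === null) cur[key] = {};
    cur = cur[key] as Ctx;
  }
  cur[keys[keys.length - 1]] = value;
}

// Copy only declared fields into the outgoing context; everything else is stripped.
function filterContext(full: Ctx, declared: string[]): Ctx {
  const out: Ctx = {};
  for (const path of declared) {
    const value = getPath(full, path);
    if (value !== undefined) setPath(out, path, value);
  }
  return out;
}
```

With `declared = []` (ANONYMOUS tier) the result is always `{}`, which matches the empty `context` in the example below.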
```jsonc
// privacy_data_required: [] (ANONYMOUS tier)
{
  "query": "Summarize recent papers on transformer scaling laws",
  "context": {},
  "metadata": {
    "caller": "lumara",
    "request_id": "9d0e5a33-1111-2222-3333-444455556666",
    "tier": "free"
  }
}
```
**Current:** each plugin call executes inside a Cloudflare Worker (V8 isolate). These are static, pre-deployed workers, not dynamically created per request. Each invocation runs in its own isolate with no shared mutable state between concurrent calls or between different plugins. Only platform-declared bindings and APIs are exposed: the plugin cannot import arbitrary modules, open raw sockets, or reach internal SwarmSpace services outside the published worker surface.
**Network control (Planned):** the intended default posture is `globalOutbound: null`, where outbound HTTP(S) is permitted only to origins whose hostnames appear in the manifest `network_domains` array. This is not yet enforced; workers currently have standard Cloudflare Worker outbound access.
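Conceptually, the planned check reduces to comparing the target hostname against the manifest allowlist. A minimal sketch, assuming exact hostname matching; the real mechanism would be Cloudflare's `globalOutbound` configuration rather than application code:

```typescript
// Sketch of the planned outbound allowlist semantics: a fetch is permitted
// only when the target hostname appears in network_domains. Illustrative only.
function isAllowedOutbound(url: string, networkDomains: string[]): boolean {
  const { hostname } = new URL(url); // normalized, lowercase hostname
  return networkDomains.includes(hostname);
}
```

Whether subdomains of an allowlisted hostname would also be permitted is not specified here; the sketch assumes exact matches.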
**Credential injection (Planned):** the intended design has SwarmSpace hold secrets for third-party APIs and attach authorization at the network boundary, so that plugin code issues a standard `fetch()` and the API key is injected transparently. Currently, workers access secrets via environment variables bound at deploy time.
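The boundary-injection idea can be sketched as a request rewrite the platform applies between the worker and the network. The `SecretStore` shape, the bearer-token scheme, and the function name are all assumptions for illustration; the document only specifies that authorization is attached at the boundary:

```typescript
// Sketch of boundary credential injection: the platform, not the plugin,
// holds the secret and attaches it per target host. Hypothetical design.
type SecretStore = Map<string, string>; // hostname -> bearer token (assumed scheme)

function withInjectedAuth(req: Request, secrets: SecretStore): Request {
  const token = secrets.get(new URL(req.url).hostname);
  if (!token) return req; // no secret registered for this host; pass through
  const headers = new Headers(req.headers);
  headers.set("Authorization", `Bearer ${token}`);
  return new Request(req, { headers });
}
```

The key property is that plugin code never observes the credential: it issues a plain `fetch()`, and the rewrite happens outside the sandbox.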
**Runtime monitoring (Partial):** execution logging is implemented. Outbound target comparison, volume tracking, and anomaly detection against declared behavior are planned. Monitoring is intended as operational enforcement, not a substitute for the structural blocks.
**Planned:** three manifest fields determine how PRISM and the orchestrator treat your plugin in automated and chained execution contexts. These fields are defined in the manifest schema but are not yet read or enforced by the router or orchestrator.
| Field | Meaning |
|---|---|
| `is_read_only` | Plugin only reads data. Does not write to external systems, send messages, or modify state. Read-only plugins auto-approve in scheduled auto execution mode. |
| `is_destructive` | Plugin modifies, deletes, publishes, or sends data to external systems. Destructive plugins require explicit user confirmation even inside an already-approved workflow chain, and are blocked entirely in headless auto mode. |
| `headless` | Plugin is designed to run without a user-facing confirmation step. Required for Durable Object dispatch. Verified tier only. Incompatible with `is_destructive: true`. |
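The rules in the table above can be sketched as a single gating function. This is an illustrative sketch of the planned semantics, not orchestrator code; the `Decision` names, mode names, and the function itself are assumptions:

```typescript
// Sketch of how the three flags could gate automated execution,
// following the table above. Hypothetical names and shape.
interface ExecFlags {
  is_read_only: boolean;
  is_destructive: boolean;
  headless: boolean;
}

type Mode = "scheduled_auto" | "headless_auto" | "interactive";
type Decision = "auto_approve" | "require_confirmation" | "blocked";

function gate(flags: ExecFlags, mode: Mode): Decision {
  if (flags.is_destructive) {
    // Destructive plugins are blocked in headless auto mode, and otherwise
    // always require explicit confirmation, even mid-chain.
    return mode === "headless_auto" ? "blocked" : "require_confirmation";
  }
  // Headless dispatch is only available to plugins declaring headless: true.
  if (mode === "headless_auto" && !flags.headless) return "blocked";
  // Read-only plugins auto-approve in scheduled auto execution.
  if (flags.is_read_only && mode === "scheduled_auto") return "auto_approve";
  return "require_confirmation";
}
```

Note that `is_destructive: true` together with `headless: true` is an invalid manifest per the table; the sketch does not separately validate that combination.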
The following describes the planned architecture for recurring agents. Durable Objects are not yet deployed; this section documents the intended privacy boundaries for when they are implemented.
On each scheduled execution, the DO requests fresh context from LUMARA at run time, injects it into the workflow run, uses it for synthesis, then discards it. Personal context exists only for the duration of that execution path — it is not durably held in the DO as a user profile cache.
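The lifecycle above can be sketched as a function whose context binding is scoped to a single run. The function and parameter names (`fetchFreshContext`, `runWorkflow`) are hypothetical; the point is that nothing is written to Durable Object storage:

```typescript
// Sketch of the planned scheduled-run lifecycle: fetch fresh context,
// use it for this run only, and let it go out of scope. Hypothetical names.
async function runScheduledExecution(
  fetchFreshContext: () => Promise<Record<string, unknown>>, // from LUMARA, at run time
  runWorkflow: (ctx: Record<string, unknown>) => Promise<string>,
): Promise<string> {
  const ctx = await fetchFreshContext(); // requested fresh for this execution
  const result = await runWorkflow(ctx); // injected into this run's synthesis only
  // ctx is a local binding: it is discarded when this function returns,
  // never persisted to the DO as a user profile cache.
  return result;
}
```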
The following describes the planned catalogue discovery architecture. When implemented, LUMARA will call SwarmSpace's proactive discovery endpoint (/catalogue/updates), where interest tags are derived on the client from CHRONICLE and related local signals. SwarmSpace will receive tag hashes, not raw CHRONICLE documents. With only hashed tags, SwarmSpace cannot reconstruct the user's full profile from that request. LUMARA will perform additional relevance filtering using full local context before surfacing goal cards.
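Client-side tag hashing can be sketched as follows. The document says only that SwarmSpace receives tag hashes; the choice of SHA-256 and the lowercase/trim normalization here are assumptions for illustration:

```typescript
// Sketch of client-side interest-tag hashing before calling
// /catalogue/updates. Hash algorithm and normalization are assumed.
import { createHash } from "node:crypto";

function hashTags(tags: string[]): string[] {
  return tags.map((tag) =>
    createHash("sha256").update(tag.toLowerCase().trim()).digest("hex"),
  );
}
```

Note that plain hashing of low-entropy tags is not irreversible in the cryptographic sense (a short tag vocabulary can be brute-forced); the privacy claim in this section rests on the server receiving hashes instead of raw CHRONICLE documents, not on the hashes being unguessable.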
Services listed in `network_domains` handle data after the request leaves SwarmSpace; the Developer Agreement and vendor terms address this contractually. Developer obligations:

- Declare `privacy_data_required` honestly. Request only fields your implementation actually uses. Over-declaration is a safety-review failure.
- Declare `network_domains` completely. Any hostname contacted during normal operation must appear. Undeclared egress will be blocked structurally once outbound allowlists are enforced (currently planned).
- Violations can result in rejection, de-listing, or account action regardless of technical minimization.