Security and governance.
An AI that runs commands on production endpoints is only deployable if you can trust it. Here is how the trust model works.
Approval policies
You define what the AI engineer can do on its own and what needs a human to approve. The rules are yours. Configure them at the client level, by issue class, or by risk tier.
- Low-risk actions (restart a service, clear temp files) can auto-execute
- Higher-risk actions require human sign-off, with single or multi-approver strategies
- Policies configurable per organization and per endpoint group
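As a sketch of how such a policy might be expressed (the action names, risk tiers, and schema here are illustrative, not the product's actual configuration format):

```python
# Hypothetical policy table: each action maps to a risk tier and the
# number of human approvals it needs (0 = auto-execute under policy).
POLICY = {
    "restart_service":       {"risk": "low",    "approvers_required": 0},
    "clear_temp_files":      {"risk": "low",    "approvers_required": 0},
    "rotate_credentials":    {"risk": "medium", "approvers_required": 1},
    "modify_firewall_rules": {"risk": "high",   "approvers_required": 2},
}

def approvals_needed(action: str) -> int:
    """Return how many human approvals an action needs; unknown actions fail closed."""
    rule = POLICY.get(action)
    if rule is None:
        raise ValueError(f"action not in policy: {action}")
    return rule["approvers_required"]

print(approvals_needed("restart_service"))        # → 0 (auto-executes)
print(approvals_needed("modify_firewall_rules"))  # → 2 (multi-approver)
```

The important property is the fail-closed default: an action absent from the policy is an error, never an auto-execution.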
AI safety guardrails
The AI engineer does not run on trust alone. Every proposed action passes through guardrails before anything touches an endpoint.
- Every action is risk-classified before execution. Low risk runs under policy; medium and high risk wait for human approval; the top tier is blocked outright
- Agents can only invoke allow-listed actions. Anything outside that list is rejected, including attempts via prompt injection
- Sensitive values (emails, phone numbers, card numbers, API keys, bearer tokens) are redacted before any prompt is sent to an LLM provider
Full audit trail
Diagnostic runs, policy decisions, and approvals are all written down. Exportable for compliance reviews.
- Diagnostic commands and outputs recorded verbatim
- Approval decisions with timestamp and approver identity
- Post-remediation verification results
- Audit records are append-only: the application offers no path to modify or delete them
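Append-only storage can also be made verifiable: if each record carries a hash of the previous one, any after-the-fact edit breaks the chain. A minimal illustration of the idea, not the product's actual storage format:

```python
import hashlib
import json

GENESIS = "0" * 64

class AuditLog:
    """Hash-chained, append-only log: each record embeds the previous
    record's hash, so tampering is detectable on verification."""

    def __init__(self):
        self._records = []

    def append(self, event: dict) -> None:
        prev = self._records[-1]["hash"] if self._records else GENESIS
        body = {"event": event, "prev": prev}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self._records.append({**body, "hash": digest})

    def verify(self) -> bool:
        prev = GENESIS
        for rec in self._records:
            body = {"event": rec["event"], "prev": rec["prev"]}
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev"] != prev or rec["hash"] != expected:
                return False
            prev = rec["hash"]
        return True
```

Exporting such a chain alongside the records lets a compliance reviewer confirm independently that nothing was modified or deleted.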
Platform foundations.
The governance story above rests on a platform built with the same discipline.
Identity and access
TOTP and FIDO2 passkeys protect user accounts. The client portal supports SSO via OpenID Connect. New users are added by signed, time-limited invitation, not shared credentials.
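TOTP itself is an open standard (RFC 6238, built on RFC 4226's HOTP). A minimal stdlib implementation, shown purely to illustrate the mechanism rather than the product's code, reproduces the RFC's published test vectors:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, unix_time: int, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP over a 30-second time counter."""
    return hotp(secret, unix_time // step)

# RFC 6238 test vector (SHA-1, ASCII secret "12345678901234567890", T=59):
print(totp(b"12345678901234567890", 59))  # → 287082
```

Because the code is a pure function of the shared secret and the clock, the server can verify it without the secret ever crossing the wire at login time.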
Tenant isolation
Each tenant has its own database credentials and its own object-storage credentials. Crossing the boundary between two tenants requires crossing a credential boundary, not just a code-level filter.
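The shape of that boundary, sketched with hypothetical tenant names and an in-code registry (real credentials would live in a secrets manager, never in source):

```python
# Hypothetical per-tenant credential registry.
TENANT_CREDS = {
    "acme":   {"db_user": "acme_app",   "bucket": "acme-objects"},
    "globex": {"db_user": "globex_app", "bucket": "globex-objects"},
}

def dsn_for(tenant: str) -> str:
    """Build a connection string from that tenant's own credentials.
    A bug in query filtering cannot reach another tenant's rows,
    because the connection itself has no grant on them."""
    creds = TENANT_CREDS[tenant]  # KeyError = unknown tenant, fail closed
    return f"postgresql://{creds['db_user']}@db.internal/{tenant}"
```

The contrast is with a shared-credential design, where isolation depends on every query remembering a `WHERE tenant_id = ?` filter.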
Encryption
TLS on every connection, with HSTS preload so browsers cannot downgrade to HTTP. Sensitive fields are encrypted at the application layer, with per-tenant key separation.
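Per-tenant key separation is commonly achieved by deriving each tenant's data key from a master key, so one compromised tenant key reveals nothing about another's. A minimal sketch of that derivation, with a placeholder master key standing in for one fetched from a KMS:

```python
import hashlib
import hmac

MASTER_KEY = b"\x00" * 32  # placeholder; a real deployment loads this from a KMS

def tenant_key(tenant_id: str) -> bytes:
    """Derive a distinct 32-byte data-encryption key per tenant via
    HMAC-SHA256 of a labelled context over the master key."""
    context = f"field-encryption/{tenant_id}".encode()
    return hmac.new(MASTER_KEY, context, hashlib.sha256).digest()
```

The derivation is deterministic (the same tenant always gets the same key) but one-way, so no tenant key can be worked backwards to the master or sideways to a neighbour.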
Signed binaries
Windows agents are signed with an Extended Validation Authenticode certificate. macOS agents are code-signed and notarized by Apple. You can verify the signature chain on every endpoint you install.
Security questions, a detailed overview under NDA, or a vulnerability to report? Write to [email protected].