
Capability security levels

Security levels 0–3 control how much oversight a capability gets, from silent execution all the way up to supervisor approval. This article also covers identity verification, redaction, and consent tracking.

Capabilities have four security levels (0 through 3) that decide how much supervision the AI gets when calling them. Levels exist because not every capability deserves the same trust — looking up your store’s opening hours is one thing; refunding a charge is another.

The four levels

  • Level 0: no restrictions, no special logging; the AI runs the capability silently. Examples: read-only lookups against safe data, such as order status, opening hours, and public catalogue queries.
  • Level 1: logged, but no human approval required. Examples: most read operations on customer data, such as looking up the customer's own orders, account info, and invoices.
  • Level 2: requires confirmation from the AI's caller (the customer in chat, or the agent if the AI is in copilot mode) before execution. Examples: mutations the customer wants to take but should explicitly agree to, such as cancellations, plan changes, and address updates.
  • Level 3: requires supervisor approval before execution. The capability pauses until a human on your team approves it. Examples: high-stakes actions such as refunds above a threshold, account deletions, and anything you'd want a second pair of eyes on.

The default for new capabilities is level 0. Raise the level any time the action has real consequences — the cost of an extra confirmation step is far less than the cost of a wrong action taken silently.
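As a sketch, the oversight each level implies could be modelled like this. The names, and the assumption that each level adds its check on top of the levels below it, are illustrative only, not Atender's actual API:

```python
from enum import IntEnum

# Illustrative sketch: enum names and the "cumulative checks" assumption
# are ours, not Atender's API.
class SecurityLevel(IntEnum):
    SILENT = 0               # runs silently, no special logging
    LOGGED = 1               # logged, no human approval
    CALLER_CONFIRMATION = 2  # the AI's caller must confirm first
    SUPERVISOR_APPROVAL = 3  # a supervisor must approve first

def required_checks(level: SecurityLevel) -> dict:
    """Which oversight steps a capability call at this level must pass."""
    return {
        "log": level >= SecurityLevel.LOGGED,
        "caller_confirmation": level >= SecurityLevel.CALLER_CONFIRMATION,
        "supervisor_approval": level >= SecurityLevel.SUPERVISOR_APPROVAL,
    }
```

Under this reading, a level-0 capability passes no checks at all, while a level-3 capability is logged, confirmed, and approved.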

Identity verification

On top of the security level, you can require the AI to verify the customer’s identity before the capability runs. Verification methods include:

  • Email — the customer states the email on the account; the capability checks it matches.
  • Order number — the customer states a recent order number.
  • Zip code — the customer states their billing zip.
  • Custom identity fields — anything you’ve stored on the contact record.

Verification happens conversationally — the AI asks, the customer answers, the system validates against your records. The capability won’t execute if the answer doesn’t match.
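The match check itself can be sketched as a simple comparison against the contact record. The field names and the normalisation step below are illustrative assumptions, not Atender's actual verification logic:

```python
# Hypothetical helper: contact fields and normalisation are assumptions.
def verify_identity(contact: dict, field: str, answer: str) -> bool:
    """Compare the customer's stated answer against the stored record."""
    stored = contact.get(field)
    if stored is None:
        return False
    # Normalise casing and whitespace so "Jane@Example.com " still matches.
    return str(stored).strip().lower() == answer.strip().lower()

contact = {"email": "jane@example.com", "zip": "94107"}
verify_identity(contact, "email", " Jane@Example.com ")  # matches
verify_identity(contact, "zip", "90210")                 # does not match
```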

Redaction rules

Define patterns that should be automatically masked in capability inputs and outputs. Common targets:

  • Credit card numbers
  • Social security numbers
  • API tokens or passwords leaking through error messages
  • Anything matching a regex you specify

Redacted values never appear in logs or in the capability’s execution history — the AI sees only the masked form.
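A redaction pass along these lines can be sketched with regular expressions. The patterns below are illustrative examples, not Atender's built-in rules:

```python
import re

# Illustrative patterns only; real rules would be configured per workspace.
REDACTION_PATTERNS = [
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),          # credit-card-shaped numbers
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US social security numbers
    re.compile(r"\b(?:sk|pk)_[A-Za-z0-9]{16,}\b"),  # API-token-shaped strings
]

def redact(text: str) -> str:
    """Mask sensitive values before they reach logs or execution history."""
    for pattern in REDACTION_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

For example, `redact("Card 4111 1111 1111 1111, SSN 123-45-6789")` masks both values while leaving the surrounding text intact.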

Consent tracking

For capabilities that act on the customer’s behalf (cancellations, subscription changes, account modifications), Atender records the customer’s consent before the action runs. The consent record includes:

  • The customer’s identifier
  • The capability that was called
  • The action that was about to happen
  • The customer’s affirmative response
  • A timestamp

Consent records are part of your audit trail and can be exported for compliance reviews.
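The record could be modelled roughly like this. The field names mirror the list above but are assumptions, not Atender's export schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Hypothetical shape of a consent record; field names are assumptions.
@dataclass
class ConsentRecord:
    customer_id: str
    capability: str
    action: str
    response: str
    timestamp: str

def record_consent(customer_id: str, capability: str,
                   action: str, response: str) -> ConsentRecord:
    """Capture the customer's affirmative response with a UTC timestamp."""
    return ConsentRecord(
        customer_id=customer_id,
        capability=capability,
        action=action,
        response=response,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

record = record_consent("cus_123", "cancel_subscription",
                        "Cancel the Pro plan effective today", "Yes, cancel it")
json.dumps(asdict(record))  # exportable for compliance reviews
```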

How they combine

Identity verification, security levels, and consent tracking are independent dials. A typical configuration for a high-stakes capability:

  • Auth: OAuth2 + Full Access
  • Identity verification: email + order number
  • Security level: 3 (supervisor approval)
  • Redaction rules: credit card numbers, full bank details
  • Consent tracking: on

That capability won’t run until (1) the customer proves identity, (2) a supervisor approves, (3) the customer confirms — and even then, sensitive values never reach the logs.
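Put together, the final gate is effectively a conjunction of independent checks. A minimal sketch, with hypothetical names:

```python
# Hypothetical gate: every dial must be satisfied before execution.
def gate(checks: dict) -> None:
    """Raise if any independent check is still outstanding."""
    missing = [name for name, passed in checks.items() if not passed]
    if missing:
        raise PermissionError("blocked: waiting on " + ", ".join(missing))

# All three dials satisfied: the call proceeds (redaction still applies to logs).
gate({"identity_verified": True,
      "supervisor_approved": True,
      "customer_confirmed": True})
```

If any check is still pending, the capability stays blocked and the error names what is missing.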
