FAQ — What data does Self-Learning use?
Short answers to the most common questions about Self-Learning and your data.
What gets used
- Flagged AI responses from your conversations — flags submitted from the Conversations inbox, the Agent Stack test sandbox, and the Monitor.
- The annotations your team writes on those flags — the comment text and the intent category (missing info, too verbose, incorrect, tone, routing error, knowledge gap, other).
- The conversation context around the flagged message — the messages immediately before and after, and the specialist that produced the flagged reply.
That’s it. Nothing else feeds the proposal generator.
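The three inputs above can be pictured as a single report record. The sketch below is illustrative only: the class and field names are hypothetical, not the product's actual schema.

```python
from dataclasses import dataclass, field

# The intent categories listed above.
INTENT_CATEGORIES = {
    "missing info", "too verbose", "incorrect",
    "tone", "routing error", "knowledge gap", "other",
}

@dataclass
class FeedbackReport:
    """Hypothetical shape of one flag fed to the proposal generator."""
    flagged_response: str                                     # the flagged AI reply
    annotation: str                                           # reviewer's comment text
    intent: str                                               # one of INTENT_CATEGORIES
    context_before: list[str] = field(default_factory=list)   # messages just before
    context_after: list[str] = field(default_factory=list)    # messages just after
    specialist: str = ""                                      # agent that produced the reply

    def __post_init__(self) -> None:
        # Reject categories outside the documented set.
        if self.intent not in INTENT_CATEGORIES:
            raise ValueError(f"unknown intent category: {self.intent}")
```

Everything the generator sees fits in a record like this; there is no hidden channel for other conversation data.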
What does NOT get used
- Conversations from other tenants. Self-Learning is scoped to your tenant. Your data never shapes another tenant’s AI, and theirs never shapes yours.
- Unflagged conversations. Conversations that nobody marks as needing improvement are never fed into the proposal pipeline. The AI doesn’t second-guess responses on its own.
- Customer PII for training. Proposals are generated from the substance of the issue, not from raw customer identifiers. Anything matching the redaction rules configured on your capabilities is masked before it reaches Self-Learning.
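Masking of this kind can be sketched as a substitution pass over the text before it enters the pipeline. This is a minimal illustration, not the product's actual redaction engine; the patterns and placeholder tokens are assumptions, and real rules are configured per tenant.

```python
import re

# Two illustrative PII patterns; a real configuration would carry many more.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def mask_pii(text: str) -> str:
    """Replace matches with placeholder tokens so only the substance survives."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text
```

The key property is ordering: masking runs before Self-Learning ever sees the text, so identifiers never reach the proposal generator.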
Where do approved changes apply?
Only to your tenant’s stack. An approved prompt edit modifies your specialist agent. An approved KB-article proposal creates a draft in your Knowledge Base. Nothing leaks across tenants.
Can I disable it?
Yes. Self-Learning is configurable per tenant. If you turn it off, no new feedback reports get processed and no new proposals are generated. Existing groups, proposals, and audit-log entries are preserved — you can re-enable later without losing history.
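The toggle behaves like a gate in front of report processing, as this hypothetical sketch shows (the setting name and function are illustrative, not the real API):

```python
# Hypothetical gate: when Self-Learning is off for a tenant, new feedback
# reports are skipped; existing groups, proposals, and audit entries are kept.
def process_report(tenant_settings: dict, report: dict, history: list) -> bool:
    if not tenant_settings.get("self_learning_enabled", False):
        return False          # report ignored; no new proposal generated
    history.append(report)    # history only grows while the feature is on
    return True
```

Note that disabling never deletes `history`; re-enabling simply lets new reports through again.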
Who can see flagged content?
Anyone on your tenant with access to Self-Learning. By default, that’s admins and supervisors. Review-only users can see Feedback Groups and the Audit Log; full-access users can also see and act on Staged Changes. Ask your admin to adjust roles if you need to widen or narrow that access.
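The default visibility rules above amount to a role-to-surface mapping. The sketch below is a simplified assumption for illustration; the real role names and permission model are whatever your admin has configured.

```python
# Illustrative defaults: full-access roles see everything, review-only
# roles see Feedback Groups and the Audit Log but not Staged Changes.
PERMISSIONS = {
    "admin":       {"feedback_groups", "audit_log", "staged_changes"},
    "supervisor":  {"feedback_groups", "audit_log", "staged_changes"},
    "review_only": {"feedback_groups", "audit_log"},
}

def can_view(role: str, surface: str) -> bool:
    """True if the given role may see the given Self-Learning surface."""
    return surface in PERMISSIONS.get(role, set())
```

Unknown roles fall through to an empty permission set, so nothing is visible by default.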