Designing the per-action AI consent model: tradeoffs and UX constraints
The problem with ambient AI
Most email clients that integrate AI features operate on an ambient model: the AI is always on, always watching, always processing. Suggestions appear unbidden. Summaries are generated before you ask. Smart replies are pre-computed for every incoming message.
This creates two problems. First, it normalizes the constant transmission of email content to third-party AI providers. Second, it makes it impossible for the user to reason about what data has been shared and with whom.
TwinMail takes a different approach: every AI action requires explicit, per-action consent.
The glass-box pattern
We call our AI interaction model the "glass-box pattern." Before any AI action executes, TwinMail presents a consent sheet that shows:
- Inputs — exactly which message content, metadata, or context will be sent to the AI provider
- Provider — which AI service will process the request (e.g., Claude, GPT-4)
- Action — what the AI is being asked to do (summarize, draft reply, extract action items)
- Outputs — after execution, the full AI response is displayed before any state change is applied
The user reviews the inputs, confirms the action, and then reviews the outputs before accepting or discarding the result. Nothing happens silently.
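The four fields above can be modeled as a single value that the consent sheet renders. A minimal sketch in Rust; all type and field names here are illustrative, not TwinMail's actual API:

```rust
/// Illustrative sketch of one glass-box consent request.
/// Field names are assumptions, not TwinMail's real types.
#[derive(Debug, Clone)]
pub struct ConsentRequest {
    /// Exactly which message content and metadata will be sent.
    pub inputs: Vec<String>,
    /// Which AI service will process the request, e.g. "Claude".
    pub provider: String,
    /// What the AI is being asked to do, e.g. "summarize".
    pub action: String,
}

impl ConsentRequest {
    /// Human-readable preview shown before anything leaves the device.
    pub fn preview(&self) -> String {
        format!(
            "{} via {}: {} input message(s)",
            self.action,
            self.provider,
            self.inputs.len()
        )
    }
}
```

The outputs field is absent by design: it only exists after execution, which is why the review step is a separate state rather than part of the request.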
The consent sheet architecture
The consent sheet is a modal UI component that intercepts every AI action. Its lifecycle has six steps:
- Trigger — user explicitly requests an AI action (button press, keyboard shortcut, or context menu)
- Preview — consent sheet opens showing the proposed inputs and the target provider
- Confirm — user reviews inputs and confirms, or cancels
- Execute — TwinMail sends the request to the AI provider
- Review — consent sheet displays the AI response
- Apply — user accepts the result (applies draft, saves summary) or discards it
There is no step where data leaves the device without the user seeing exactly what will be sent.
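The six steps above form a linear state machine, which could be encoded so that skipping a step is unrepresentable. A sketch under that assumption (names are illustrative; cancellation, allowed at any point before execution, is omitted for brevity):

```rust
/// The six lifecycle steps of the consent sheet, modeled as an
/// explicit state machine. Illustrative, not TwinMail's actual code.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ConsentState {
    Trigger,
    Preview,
    Confirm,
    Execute,
    Review,
    Apply,
}

impl ConsentState {
    /// The only legal forward transition from each state;
    /// `None` once the flow is complete.
    pub fn next(self) -> Option<ConsentState> {
        use ConsentState::*;
        match self {
            Trigger => Some(Preview),
            Preview => Some(Confirm),
            Confirm => Some(Execute),
            Execute => Some(Review),
            Review => Some(Apply),
            Apply => None,
        }
    }
}
```

Encoding the flow this way means the "execute before confirm" bug class is ruled out at compile time rather than caught in review.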
Input minimization
The consent sheet enforces input minimization. When the user requests a thread summary, the consent sheet shows precisely which messages will be included. TwinMail does not send the entire thread history by default — it sends only the messages visible in the current view, unless the user explicitly expands the context window.
This is a UX tradeoff. Sending more context often produces better AI outputs. But we believe the user should make that tradeoff explicitly, not have it made for them.
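The default-to-visible rule can be sketched as a small filter over the thread. This is a hypothetical helper, not TwinMail's actual input-assembly code; the `Message` shape and `expand_context` flag are assumptions:

```rust
/// A message in a thread; `visible` marks whether it is
/// in the current view. Illustrative type, not TwinMail's.
pub struct Message {
    pub body: String,
    pub visible: bool,
}

/// Input minimization: by default only messages visible in the
/// current view are included; `expand_context` is the explicit
/// opt-in to send the full thread history.
pub fn assemble_inputs(thread: &[Message], expand_context: bool) -> Vec<&str> {
    thread
        .iter()
        .filter(|m| expand_context || m.visible)
        .map(|m| m.body.as_str())
        .collect()
}
```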
UX tradeoffs we considered
Blanket consent with audit log
One alternative we evaluated was a "consent once, audit always" model: the user grants broad permission for AI features, and TwinMail maintains a detailed log of every AI interaction for later review.
We rejected this because:
- Audit logs are reviewed retroactively, after data has already been shared
- The cognitive overhead of reviewing logs is higher than the overhead of per-action consent
- Blanket consent normalizes the very behavior we are trying to make deliberate
Tiered consent with defaults
Another alternative was tiered consent: certain "low-risk" AI actions (e.g., subject line suggestions) proceed automatically, while "high-risk" actions (e.g., draft replies) require confirmation.
We rejected this because:
- The risk classification is subjective and context-dependent
- A subject line suggestion for a sensitive thread is not low-risk
- Users develop different expectations about what requires consent, leading to confusion
Per-action consent is the simplest correct model
Per-action consent has higher interaction cost than the alternatives. Every AI action requires two extra clicks (confirm inputs, accept outputs). We accept this cost because:
- It is predictable — the user always knows when AI is involved
- It is auditable — the consent sheet is the audit log
- It is revocable — the user can cancel at any point
- It scales down — if the user does not want AI features, they simply never trigger them
Provider configuration
TwinMail does not ship with a default AI provider. During setup, the user configures their preferred provider and API key. The consent sheet always shows the configured provider name and endpoint, so the user knows where their data is going.
Supported providers include any OpenAI-compatible API endpoint, Anthropic's Claude API, and local models via Ollama. The provider configuration is stored in the vault and never transmitted to Twindevs infrastructure.
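A configuration entry along these lines could back the consent sheet's provider display. The enum variants mirror the three supported options named above; the field and method names are assumptions for illustration:

```rust
/// Sketch of a provider configuration entry: stored in the vault,
/// never sent to Twindevs infrastructure. Names are illustrative.
pub enum Provider {
    OpenAiCompatible { endpoint: String },
    AnthropicClaude,
    Ollama { endpoint: String },
}

pub struct ProviderConfig {
    pub provider: Provider,
    pub api_key: String,
}

impl ProviderConfig {
    /// What the consent sheet shows: provider name and endpoint.
    /// The API key is deliberately never part of the display.
    pub fn display(&self) -> String {
        match &self.provider {
            Provider::OpenAiCompatible { endpoint } => {
                format!("OpenAI-compatible ({endpoint})")
            }
            Provider::AnthropicClaude => "Anthropic Claude API".to_string(),
            Provider::Ollama { endpoint } => format!("Ollama ({endpoint})"),
        }
    }
}
```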
What we built
The consent sheet is implemented as a Tauri window overlay with three states: preview, executing, and review. The executing state shows a progress indicator but does not allow dismissal — this prevents the user from accidentally missing the AI response.
The entire AI pipeline — input assembly, consent capture, provider communication, and output display — is managed by a single Rust module (ai_consent) that enforces the glass-box invariant at the type level. It is not possible to send data to an AI provider without going through the consent pipeline.
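One common way to enforce an invariant like this at the type level is a consent token: a type whose only constructor is the consent flow, demanded by the send function. The following is a sketch of that pattern under stated assumptions, not the actual ai_consent module:

```rust
/// Proof-of-consent token. The private field means code outside
/// this module cannot construct one directly; the only way to get
/// a token is through `capture_consent`. Illustrative names.
pub struct ConsentToken {
    _private: (),
}

/// In the real flow this decision would come from the consent
/// sheet UI; here it is a plain boolean for demonstration.
pub fn capture_consent(user_confirmed: bool) -> Option<ConsentToken> {
    user_confirmed.then(|| ConsentToken { _private: () })
}

/// Sending to a provider consumes a token by value, so it cannot
/// be called without first passing through `capture_consent`.
pub fn send_to_provider(_token: ConsentToken, inputs: &[&str]) -> String {
    // Placeholder for the actual provider request.
    format!("sent {} input(s)", inputs.len())
}
```

Because `send_to_provider` takes the token by value, a single consent also cannot be reused for a second request, matching the per-action model.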
The broader principle
Per-action consent is expensive in interaction cost. It is cheap in trust cost. For a product that handles the most sensitive digital communication most people produce, we believe that is the right tradeoff.