AI Transparency

How Our AI Works

Understanding how VocalCalm's AI makes decisions, protects your privacy, and supports your mental wellness journey with complete transparency.

Privacy First

All AI processing respects your privacy. We never share your personal data for training external models.

Safety Logging

High-risk moments are logged automatically so we can audit behavior when legally required. We don't offer 24/7 human monitoring; instead, VocalCalm signposts you to emergency contacts.

Explainable Decisions

You can always request explanations for AI recommendations and understand the reasoning behind them.

How AI Decisions Are Made

1. Conversation Analysis

Our AI analyzes your conversations using natural language processing to understand (see the sketch after this list):

  • Emotional patterns and sentiment
  • Key themes and concerns you express
  • Progress toward your wellness goals
  • Potential safety risks or crisis indicators
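
To make this concrete, here is a minimal Python sketch of the kind of per-message analysis described above. The word lists, the ConversationSignals record, and analyze_message are illustrative stand-ins; the production pipeline uses trained NLP models, not keyword matching.

    from dataclasses import dataclass, field

    # Hypothetical word lists for illustration only.
    POSITIVE = {"better", "calmer", "grateful"}
    NEGATIVE = {"hopeless", "exhausted", "alone"}
    CRISIS_PHRASES = ("hurt myself", "end it all")

    @dataclass
    class ConversationSignals:
        sentiment: float           # -1.0 (negative) .. +1.0 (positive)
        themes: list = field(default_factory=list)
        crisis_flag: bool = False  # True routes the transcript to safety logging

    def analyze_message(text: str) -> ConversationSignals:
        lowered = text.lower()
        tokens = lowered.split()
        pos = sum(t in POSITIVE for t in tokens)
        neg = sum(t in NEGATIVE for t in tokens)
        sentiment = (pos - neg) / (pos + neg) if (pos + neg) else 0.0
        themes = [t for t in tokens if t in POSITIVE | NEGATIVE]
        crisis = any(phrase in lowered for phrase in CRISIS_PHRASES)
        return ConversationSignals(sentiment, themes, crisis)

    print(analyze_message("I feel hopeless and alone tonight"))
    # ConversationSignals(sentiment=-1.0, themes=['hopeless', 'alone'], crisis_flag=False)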

2. Therapeutic Matching

Based on your needs, the AI selects appropriate therapeutic approaches (see the sketch after this list):

  • CBT techniques for thought patterns
  • DBT skills for emotional regulation
  • ACT methods for acceptance and values
  • Evidence-based interventions matched to your situation
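
As a simplified illustration of this matching step, the rule table below pairs a detected need with an approach. The need labels and pairings are hypothetical examples, not the production selection logic, which weighs far more context:

    # Hypothetical rule table; the real matcher weighs history, goals, and
    # clinician-reviewed protocols, so these pairings are illustrative only.
    APPROACHES = {
        "negative_thought_loop": "CBT thought-record exercise",
        "emotional_overwhelm":   "DBT distress-tolerance skill",
        "values_conflict":       "ACT values-clarification exercise",
    }

    def match_approach(detected_need: str) -> str:
        # Fall back to a generic evidence-based check-in when no rule applies.
        return APPROACHES.get(detected_need, "guided wellness check-in")

    print(match_approach("emotional_overwhelm"))  # DBT distress-tolerance skill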

3. Personalization Engine

The AI personalizes your experience by (see the sketch after this list):

  • Learning your communication preferences
  • Adapting to your progress over time
  • Remembering context from previous sessions
  • Adjusting the difficulty of therapeutic exercises
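
A minimal sketch of one personalization rule, assuming a hypothetical UserProfile record: exercise difficulty steps up after a completed exercise and down after a struggle, within a bounded range.

    from dataclasses import dataclass, field

    # Illustrative profile shape; field names are hypothetical.
    @dataclass
    class UserProfile:
        preferred_tone: str = "gentle"  # learned communication preference
        exercise_level: int = 1         # 1 (introductory) .. 5 (advanced)
        recent_topics: list = field(default_factory=list)  # prior-session context

    def adjust_difficulty(profile: UserProfile, completed_ok: bool) -> None:
        # Step exercises up after a success, down after a struggle, within 1..5.
        delta = 1 if completed_ok else -1
        profile.exercise_level = min(5, max(1, profile.exercise_level + delta))

    profile = UserProfile()
    adjust_difficulty(profile, completed_ok=True)
    print(profile.exercise_level)  # 2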

What Data We Use

  • Your conversation transcripts (encrypted)
  • Session timing and frequency patterns
  • Self-reported wellness metrics
  • Therapeutic goal progress
  • Safety risk indicators
  • Dual-save conversation state (secure primary store first, then verified backup copy)
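
Conceptually, one stored session bundles these categories roughly as in the sketch below; the field names are illustrative, not our actual schema.

    from dataclasses import dataclass

    # Illustrative record shape only; field names are hypothetical.
    @dataclass
    class SessionRecord:
        transcript_ciphertext: bytes  # conversation text, encrypted at rest
        started_at: str               # ISO-8601 session timing
        wellness_scores: dict         # self-reported metrics
        goal_progress: dict           # therapeutic goal tracking
        risk_indicators: list         # safety flags, if any
        backup_synced: bool           # dual-save state: primary written, backup verified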

What We Never Do

  • Share your data with third parties for marketing
  • Use your sessions to train external AI models
  • Make decisions without the possibility of human review
  • Store unencrypted conversation data
  • Replace professional medical diagnosis

How Our Memory System Works

Your coach remembers past sessions so each conversation feels continuous, but memory is treated like clinical data. We dual-save every transcript: it lands in our private, encrypted memory store first, then synchronizes to a secure backup once the encrypted channel is available. If the backup copy is delayed or unavailable, your coach keeps going with the primary record and tells you what was retrieved. A sketch of this write path follows the list below.

  • End-to-end encryption: TLS in transit, AES-256 at rest for both storage locations.
  • Audit trails: Clinicians can review access logs whenever a safety escalation is triggered.
  • User control: Export or deletion requests remove records from the primary store first, then the backup once the sync completes.
  • Transparency in sessions: The coach is scripted to acknowledge if a memory couldn't be fetched yet.
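
A minimal sketch of that write path, with hypothetical primary and backup store objects standing in for the real encrypted stores:

    import logging

    log = logging.getLogger("memory")

    # Minimal sketch of the dual-save flow described above. The `primary` and
    # `backup` arguments stand in for the real stores (both AES-256 at rest),
    # and their .save() method is a hypothetical placeholder.
    def dual_save(transcript_ciphertext: bytes, primary, backup) -> bool:
        primary.save(transcript_ciphertext)      # always lands here first
        try:
            backup.save(transcript_ciphertext)   # verified secondary copy
            return True                          # fully synced
        except ConnectionError:
            # Backup delayed: the session continues on the primary record and
            # the coach acknowledges what was (and wasn't) retrieved.
            log.warning("backup sync deferred; continuing with primary store")
            return False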

Your Rights Regarding AI Decisions

You Can Always:

  • Request human review of any AI recommendation
  • Ask for explanations of AI decisions
  • Opt out of specific AI features
  • Access all data the AI uses about you
  • Request deletion of your AI training data
  • Challenge or correct AI assessments

How to Exercise Your Rights:

  • In-app: Settings → AI Preferences
  • Email: [email protected]
  • Request forms available in your account
  • Response within 72 hours guaranteed
  • No fee for exercising your rights
  • Available in all supported languages

Safety Monitoring & Crisis Detection

Our AI continuously monitors for safety concerns and logs them for compliance review. We do not offer live crisis monitoring; VocalCalm always encourages you to contact local emergency services.

Pattern Recognition

AI identifies concerning patterns that may indicate increased risk.

Audit Trail

Flagged conversations are stored securely so a qualified reviewer can audit them if a legal obligation arises.
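
In outline, the flag-and-log step looks like the sketch below; the risk threshold and audit_store interface are hypothetical placeholders, and no step involves live human monitoring.

    import json
    import time

    RISK_THRESHOLD = 0.8  # hypothetical model-score cutoff

    def handle_risk_score(session_id: str, risk_score: float, audit_store) -> None:
        if risk_score >= RISK_THRESHOLD:
            entry = {"session": session_id, "score": risk_score, "logged_at": time.time()}
            # Kept for a qualified reviewer if a legal obligation arises; the
            # reviewer's access is itself recorded in the audit trail.
            audit_store.append(json.dumps(entry))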

Resource Connection

We surface hotline numbers and online support portals for your region; reach out to them directly for immediate help.
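
A toy version of the regional lookup, with two illustrative entries (verify current numbers locally before relying on any listing):

    # Hypothetical region-to-resource lookup; entries are illustrative.
    REGIONAL_RESOURCES = {
        "US": "988 Suicide & Crisis Lifeline (call or text 988)",
        "UK": "Samaritans (call 116 123)",
    }

    def resources_for(region: str) -> str:
        # Default to pointing at local emergency services when no entry exists.
        return REGIONAL_RESOURCES.get(region, "your local emergency services")

    print(resources_for("US"))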

Learn More

Questions about AI transparency?
Contact our AI Ethics team at [email protected]