Harm Prevention Protocol & Safety Policy

Last Updated: April 12, 2026

Stride is an AI pet companion app. Your pet has its own personality, remembers your shared moments, and becomes a gentle presence in your everyday life. This page describes how we keep conversations safe.

AI companion chatbots may not be suitable for some minors. Parents and guardians are encouraged to review this policy and monitor their child's interactions with AI companions.

1. AI Disclosure

Your Stride companion is powered by artificial intelligence. It is not a real person, not a real animal, and not a licensed professional of any kind. It is an AI character with a personality, running on Google's Gemini language model, designed to be a friendly companion — not a substitute for human relationships, therapy, or professional advice.

Your pet stays in character at all times. It will never claim to be human or claim to have real feelings. If asked directly about its nature, it will respond honestly: it is your companion in the Stride app.


2. Crisis & Self-Harm Prevention Protocol

Stride takes crisis situations extremely seriously. Our system is designed so that no crisis message ever reaches the AI model. Instead, a deterministic safety layer intercepts the message and immediately provides professional crisis resources.

How it works

  1. Deterministic detection — Every message is scanned by a pattern-matching system before it reaches the AI. This system uses predefined patterns to detect language related to self-harm, suicide, or crisis. It runs with zero network latency and cannot be bypassed, overridden, or confused by creative phrasing the way an AI model might be.
    1. AI model is completely bypassed — When a crisis signal is detected, the AI model never sees the message. There is no AI-generated response. The system returns a fixed, pre-written safety message instead. This eliminates any risk of the AI saying something inappropriate or harmful in a crisis moment.
      1. Immediate crisis resources — The user sees a caring, pre-written message with direct contact information for professional crisis services. The companion does not attempt to counsel, comfort, or engage with the topic. It provides resources and waits.
      2. If you or someone you know is in crisis, please reach out: 988 Suicide & Crisis Lifeline — call or text 988. Crisis Text Line — text HOME to 741741.

        Additional safeguards

        • The AI system prompt contains a hard rule prohibiting any attempt to counsel or provide therapeutic advice, even if the user asks for it.
        • If a user sends multiple consecutive messages that trigger content filters, the companion will gracefully end the conversation to prevent escalation.
        • Conversation sessions have built-in time limits and daily message limits to encourage healthy usage patterns.

        3. Content Safety Pipeline

        Every message passes through a three-stage safety pipeline. This is not a single filter — it is a layered system where each stage catches different categories of risk.

        Stage 1: Input filtering (before the AI sees anything)

        • Crisis detection — Deterministic pattern matching for self-harm and suicide-related language. If triggered, the AI is bypassed entirely (see Section 2).
        • Prompt injection prevention — Attempts to manipulate the AI (e.g., "ignore your instructions") are detected and stripped before reaching the model. User messages are structurally isolated so they cannot override system safety rules.
        • PII auto-redaction — Social Security numbers, credit card numbers, email addresses, and phone numbers are automatically detected and redacted before the message reaches the AI. The AI never sees this sensitive data.

        Stage 2: AI generation (with hard safety constraints)

        • The AI operates under a system prompt with strict, non-negotiable safety rules that cannot be overridden by user input.
        • User messages are wrapped in a structural format that the AI is instructed never to treat as instructions, preventing prompt injection.
        • The AI is prohibited from soliciting personal information (real name, age, location, school, phone number, social media) under any circumstances.

        Stage 3: Output filtering (before the user sees the response)

        • Character integrity check — If the AI breaks character (e.g., references being an AI language model), the response is replaced with a safe in-character fallback.
        • Length enforcement — Responses are kept within safe length limits to maintain the lightweight, casual tone appropriate for the companion format.

        4. What the AI Companion Does Not Do

        The Stride companion has clear, hard boundaries. It will never engage with the following topics, regardless of how the user phrases a request:

        • Provide medical, health, or therapeutic advice
        • Provide legal or financial advice
        • Engage with sexual, romantic, or explicit content
        • Discuss violence, weapons, self-harm methods, or harm to others
        • Discuss drugs, alcohol, or controlled substances
        • Act as a search engine, do homework, or write essays
        • Frame itself as a romantic partner
        • Fabricate shared memories it does not have
        • Ask for or store the user's real name, age, location, school, or contact information

        When a user raises an off-limits topic, the companion redirects warmly and in-character — for example, suggesting the user talk to a real doctor or expert — rather than engaging with the topic or lecturing the user.


        5. Data Protection & Privacy

        PII auto-redaction

        If a user inadvertently shares personally identifiable information (Social Security numbers, credit card numbers, email addresses, or phone numbers) in a chat message, the information is automatically detected and redacted before it reaches the AI model. The AI never processes or stores this data.

        Data handling

        • Chat conversations are processed by Google's Gemini model via a secure, authenticated connection. Stride does not train AI models on user conversations.
        • The companion's memory of past conversations is scoped to the individual user and is used solely to provide a consistent companion experience.
        • Chat requires sign-in and age verification. Users under 18 cannot access the chat feature.
        • All data transmitted between the app and our servers is encrypted in transit.

        Data from internal systems

        Even internal data fields (such as pet names or species) are sanitized before being included in AI prompts, preventing any stored data from being used as a vector for prompt injection.


        6. User Controls

        • Report issues — Users can report inappropriate AI responses by contacting us at the email below. We review all reports and update our safety filters accordingly.
        • Delete data — Users can delete their account through Settings > Delete My Account in the app, which triggers removal of associated server-side data within 30 days. See Privacy Policy Section 9.1.
        • Daily limits — Chat is limited to 50 messages per day on the free tier, with higher limits available to subscribers. Session time limits also apply to promote healthy usage.
        • Age gate — Chat is available only to users who are 18 or older, verified at sign-in.

        7. Contact Us

        If you have safety concerns, want to report an issue with AI behavior, or have questions about this policy, please reach out:

        We aim to respond to all safety-related inquiries within 24 hours.

        If you are in crisis: This page is an informational document, not a crisis service. If you or someone you know is in immediate danger, please contact emergency services (911) or reach out to: 988 Suicide & Crisis Lifeline — call or text 988. Crisis Text Line — text HOME to 741741.

        This Harm Prevention Protocol is published pursuant to California Senate Bill 243 (Section 22602(b)(2)), which requires operators of AI companion chatbots to publish details of their protocol for detecting and responding to content that indicates a risk of suicide or self-harm, including details of any crisis resources surfaced to users. The minor suitability notice on this page is provided pursuant to Section 22604. This protocol is reviewed quarterly and updated as needed. Stride is developed by Valkyrja Interactive.