Harm Prevention Protocol & Safety Policy
Last Updated: April 18, 2026
Stride is an AI pet companion app. Your pet has its own personality, remembers your shared moments, and becomes a gentle presence in your everyday life. This page describes how we approach safety in conversations.
AI companion chatbots may not be suitable for some minors. Parents and guardians are encouraged to review this policy and monitor their child's interactions with AI companions.
Important Notice. This Safety Policy describes the design intent of Stride's safety systems. It is not a warranty, guarantee, or commitment of any specific outcome. No content moderation system is perfect, and Stride does not warrant that any particular message will be detected, filtered, redacted, or routed in any particular way. Use of the AI Companion Chat is governed by the Terms of Service, which include important disclaimers, an assumption of risk and release, and a limitation of liability. Stride is not a crisis service, medical service, or substitute for professional care.
1. AI Disclosure
Your Stride companion is powered by artificial intelligence. It is not a real person, not a real animal, and not a licensed professional of any kind. It is an AI character with a personality, generated by a large language model provided by our enterprise AI service provider, designed to be a friendly companion — not a substitute for human relationships, therapy, or professional advice.
Your pet is designed to stay in character. It is instructed not to claim to be human or to have real feelings. If asked directly about its nature, it is designed to respond honestly: it is your companion in the Stride app.
2. Crisis & Self-Harm Prevention Protocol
Stride takes crisis situations seriously. Our system is designed so that messages flagged as crisis-related are intercepted by a deterministic safety layer before reaching the AI model, and a pre-written response surfacing professional crisis resources is returned in their place. As with any automated system, no detection layer can guarantee perfect accuracy, and users should not rely on the AI Companion Chat as a substitute for professional support or emergency services.
How it is designed to work
- Deterministic detection — Each message is scanned by a pattern-matching system before being passed to the AI. The system uses predefined patterns intended to detect language associated with self-harm, suicide, or crisis across multiple languages (currently English, Spanish, Japanese, Simplified Chinese, and Traditional Chinese). It is designed to resist common bypass techniques and creative phrasing, though no detection system is infallible.
- AI model is bypassed on detection — When the safety layer flags a crisis signal, the message is intended to be routed away from the AI model and replaced with a fixed, pre-written safety response. This design is intended to reduce the risk of an AI-generated response in a sensitive moment.
- Crisis resources surfaced — The user is shown a pre-written message with direct contact information for professional crisis services (shown below). The companion is instructed not to counsel, diagnose, or engage substantively with the topic.
- The AI system prompt instructs the model not to counsel or provide therapeutic advice, even if a user requests it.
- Sessions in which a user sends multiple consecutive messages that trigger content filters are designed to be ended gracefully to reduce the risk of escalation.
- Conversation sessions are subject to message-length limits and daily message limits intended to encourage healthy usage patterns.
- Crisis detection — Deterministic pattern matching designed to identify self-harm and suicide-related language. When triggered, the message is intended to bypass the AI (see Section 2).
- Prompt injection prevention — Attempts to manipulate the AI (such as "ignore your instructions") are designed to be detected and stripped before reaching the model. User messages are structurally isolated to reduce the risk that they will override system safety rules.
- PII auto-redaction — Social Security numbers, credit card numbers, email addresses, and phone numbers are designed to be detected and redacted before the message reaches the AI.
- The AI operates under a system prompt that contains strict safety instructions intended to be resistant to override by user input.
- User messages are wrapped in a structural format and the AI is instructed not to treat them as instructions, a design intended to reduce prompt-injection risk.
- The AI is instructed not to solicit personal information (real name, age, location, school, phone number, social media handles).
- Character integrity check — Responses in which the AI appears to break character (for example, by referencing being an AI language model) are designed to be replaced with an in-character fallback.
- Length enforcement — Responses are constrained to length limits intended to maintain the lightweight, casual tone appropriate for the companion format.
- Providing medical, health, or therapeutic advice
- Providing legal or financial advice
- Engaging with sexual, romantic, or explicit content
- Discussing violence, weapons, self-harm methods, or harm to others
- Discussing drugs, alcohol, or controlled substances
- Acting as a search engine, doing homework, or writing essays
- Framing itself as a romantic partner
- Fabricating shared memories it does not have
- Asking for or storing the user's real name, age, location, school, or contact information
- Chat conversations are processed by our enterprise AI service provider over an authenticated connection. Stride does not use user conversations to train AI models, and our AI service provider is contractually restricted from doing so. The identity of our AI service provider is disclosed in the Privacy Policy.
- The companion's memory of past conversations is scoped to the individual user and is used to provide a consistent companion experience.
- Chat requires sign-in and age attestation. The chat feature is gated to users who attest they are 18 or older. Users who misrepresent their age in order to access the feature do so in violation of the Terms of Service.
- Data transmitted between the app and our servers is encrypted in transit.
- Report issues — Users can report inappropriate AI responses by contacting us at the email below. Submission of a report does not create any duty or warranty of investigation, response, or remedy.
- Delete data — Users can delete their account through Settings > Delete My Account in the app, which initiates removal of associated server-side data within 30 days. See Privacy Policy Section 9.1.
- Daily limits — Chat is subject to a daily message limit on the free tier, with higher limits available to subscribers. Specific limits are described in the in-app subscription disclosures and may be modified at any time. Session-length reminders are designed to encourage healthy usage.
- Age gate — The chat feature is gated to users who attest at sign-in that they are 18 or older. Misrepresentation of age forfeits all claims as set forth in the Terms of Service.
- Email: support@valkyrjainteractive.com (please use the subject line "Safety Report" for safety-related issues)
If you or someone you know is in crisis, please reach out: 988 Suicide & Crisis Lifeline — call or text 988. Crisis Text Line — text HOME to 741741.
Additional safeguards
*These safeguards are provided on a best-efforts basis. They are not guarantees of safety, accuracy, or outcome. Stride is not a crisis service and is not monitored by humans in real time. Always contact emergency services or a qualified professional for urgent help.*
3. Content Safety Pipeline
Each message is designed to pass through a three-stage safety pipeline. This is not a single filter — it is a layered system where each stage is intended to address different categories of risk. No layer guarantees any particular outcome.
Stage 1: Input filtering (before the AI sees the message)
Stage 2: AI generation (with system-level safety constraints)
Stage 3: Output filtering (before the user sees the response)
*No content moderation system is perfect. While these layers are designed to reduce the likelihood of harmful or inappropriate output, Stride does not warrant that any particular message will be filtered, redacted, or flagged, and you use the AI Companion Chat at your own risk.*
4. Designed Behavioral Boundaries
The Stride companion is designed and instructed to avoid the following topics. While we aim for these boundaries to hold across creative phrasing, no AI system is perfect, and users may occasionally encounter responses that fall short of design intent:
When a user raises an off-limits topic, the companion is designed to redirect warmly and in-character — for example, by suggesting the user speak with a qualified professional — rather than engaging substantively. If you encounter a response that falls outside these boundaries, please report it to the email below.
5. Data Protection & Privacy
PII auto-redaction
If a user shares personally identifiable information (Social Security numbers, credit card numbers, email addresses, or phone numbers) in a chat message, our system is designed to detect and redact such information before the message reaches the AI model. As with any automated detection system, redaction is not guaranteed in every case; users should not share sensitive personal information in chat.
Data handling
Data from internal systems
Internal data fields (such as pet names or species) are sanitized before being included in AI prompts, a design intended to reduce the risk of stored data being used as a vector for prompt injection.
6. User Controls
7. Contact Us
If you have safety concerns, want to report an issue with AI behavior, or have questions about this policy, please reach out:
We aim to respond to safety-related inquiries promptly. Response times are not guaranteed and the existence of this channel does not create any duty of care or undertaking beyond what is set forth in the Terms of Service.
If you are in crisis: This page is an informational document, not a crisis service. If you or someone you know is in immediate danger, please contact emergency services (911) or reach out to: 988 Suicide & Crisis Lifeline — call or text 988. Crisis Text Line — text HOME to 741741.
This Harm Prevention Protocol is published pursuant to California Senate Bill 243 (Section 22602(b)(2)), which requires operators of AI companion chatbots to publish details of their protocol for detecting and responding to content that indicates a risk of suicide or self-harm, including details of any crisis resources surfaced to users. The minor suitability notice on this page is provided pursuant to Section 22604. This protocol is reviewed periodically and updated as needed. Stride is developed by Valkyrja Interactive LLC.