Safety & Moderation

Spec Source: Document 15 — Reporting & Flagging, Document 14 §3-4 | Last Updated: February 2026

Overview

DoCurious serves a mixed-age audience that includes children under 13 and students in school environments. Every moderation decision is filtered through five principles from the spec: Easy (reporting takes less than 30 seconds), Safe (reporters are never identified to the reported party), Clear (users know what happens after they report), Non-punitive (reporting is encouraged, not stigmatized), and Age-appropriate (under-18 users see simpler, more supportive language throughout the reporting and safety flows).

The safety system has three layers:

  1. User-initiated reporting -- any user can flag content or profiles through a three-step modal that collects a reason, optional details, and displays a confirmation with next-step options.
  2. Automated flagging -- content scanning catches prohibited material (explicit images, CSAM, severe profanity, personal information patterns, self-harm keywords) before or at the moment of publishing.
  3. Moderation review -- platform admins and community moderators review flagged items through dedicated queues and take graduated enforcement actions.

Supporting these layers are user-facing tools for blocking and muting, an appeals process for users who believe moderation actions were taken in error, and always-available safety resources for users in crisis.

The safety system is intentionally asymmetric in its design: it is very easy for any user to report (one tap to open the modal, one tap to select a reason, one tap to submit), but the moderation team has multiple graduated enforcement options to ensure proportional responses. This reflects the platform's stance that over-reporting is preferable to under-reporting, and that the cost of reviewing a false positive is far lower than the cost of missing a real safety issue.

Because DoCurious is used in school environments with children as young as those in elementary school (under 13 with parental consent), the safety infrastructure must satisfy COPPA, FERPA, and state student privacy laws simultaneously. The reporting system is designed so that a child can report something that makes them uncomfortable without needing to understand legal categories or policy terminology.

How It Works

Report Flow (Standard)

STATUS: BUILT

The ReportModal and ReportButton components implement the core report flow. The modal presents reason selection, an optional description textarea, an "Also block this user" checkbox (suppressed for community reports), and a confirmation state. The ReportButton wraps the modal trigger as a flag icon button that can be placed on any content card. Types are defined in src/types/report.types.ts.

Every piece of reportable content on DoCurious has a "Report" entry point -- either a flag icon or a "Report" option in a three-dot menu. The label is always "Report" rather than "Flag" because it is clearer for younger users.

The report flow has three steps:

Step 1 -- Reason Selection. The user sees the question "What's the problem?" (or "What's wrong?" for minors) and selects a single reason from a predefined list. The reasons vary by content type:

| Content Type | Reason Options |
| --- | --- |
| Track Records, Posts, Comments | Inappropriate or explicit content; Harassment or bullying; Spam or scam; False or misleading content; Sharing personal information (doxxing); Copyright violation; Self-harm or dangerous activity; Something else |
| User Profiles | Fake or impersonation account; Inappropriate profile content; Harassment or bullying; Spam account; Underage user (appears to be under 13 without parental consent); Something else |
| Challenges | Inaccurate or misleading information; Safety concerns (dangerous without proper warnings); Inappropriate content; Copyright violation; Pricing or availability issues; Something else |
| Communities | Inappropriate community purpose; Harassment or hate group; Spam community; Inappropriate content being shared; Something else |

The current frontend implementation uses a simplified reason set across all content types: Inappropriate content, Spam or misleading, Harassment or bullying, Misinformation, Safety concern, Other. The spec's per-content-type reason lists are defined but not yet wired to the UI.

Step 2 -- Additional Details (Optional). The user sees the prompt "Want to tell us more?" with a free text field (500 character limit in the spec, currently uncapped in the textarea). A "Submit without details" skip option is specified but not yet present in the UI -- the current implementation makes the details field visible after reason selection and submits with or without text.

Step 3 -- Confirmation. The user sees "Thanks for letting us know" (currently "Report Submitted") with three pieces of information: the team will review within 24 hours, the reported person will not know who reported them, and if a violation is found action will be taken. The confirmation also offers:

  • "Block this user" (for reports on user-created content -- currently implemented as an inline checkbox on step 2)
  • "Mute this community" (for community content reports -- not yet implemented as a post-report option)
  • "Learn about our Community Guidelines" link

Post-report behavior: After submission, the reported content remains visible to other users (unless auto-flagged or auto-hidden -- see Automated Flagging below). The reporter sees a subtle "You reported this" indicator visible only to them. No confirmation email is sent. The reporter receives an in-app notification if they opt in to report status updates.
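
The spec's per-content-type reason lists from Step 1 could be wired to the modal with a lookup keyed by content type. The sketch below is illustrative only: the reason strings are copied from the table above, but the key names go beyond the four types currently defined in ReportableContentType.

```typescript
// Illustrative sketch: the spec's per-content-type report reasons as a lookup.
// Keys beyond the four existing ReportableContentType values are assumptions.
const CONTENT_REASONS = [
  'Inappropriate or explicit content',
  'Harassment or bullying',
  'Spam or scam',
  'False or misleading content',
  'Sharing personal information (doxxing)',
  'Copyright violation',
  'Self-harm or dangerous activity',
  'Something else',
] as const;

const REPORT_REASONS_BY_TYPE: Record<string, readonly string[]> = {
  // Track Records, posts, and comments share one list per the spec.
  track_record: CONTENT_REASONS,
  feed_post: CONTENT_REASONS,
  comment: CONTENT_REASONS,
  user: [
    'Fake or impersonation account',
    'Inappropriate profile content',
    'Harassment or bullying',
    'Spam account',
    'Underage user (appears to be under 13 without parental consent)',
    'Something else',
  ],
  challenge: [
    'Inaccurate or misleading information',
    'Safety concerns (dangerous without proper warnings)',
    'Inappropriate content',
    'Copyright violation',
    'Pricing or availability issues',
    'Something else',
  ],
  community: [
    'Inappropriate community purpose',
    'Harassment or hate group',
    'Spam community',
    'Inappropriate content being shared',
    'Something else',
  ],
};
```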

Report Flow (Minors Under 18)

STATUS: BUILT

The minor-specific report flow is implemented in ReportModal.tsx. Under-13 users see kid-friendly language with simplified reason options. Users aged 13-17 see teen-appropriate language with supportive messaging. All minors see a post-report message encouraging them to talk to a parent, teacher, or trusted adult. School-linked students see their school name in the post-report message. The 'block user' checkbox is hidden for all minors.

For users under 18, the report modal uses simpler, more supportive language:

  • Question becomes "What's wrong?" instead of "What's the problem?"
  • Reason options are simplified: "It's mean or hurtful," "It's inappropriate," "It's scary or makes me uncomfortable," "Someone is pretending to be someone else," "It's spam or fake," "Something else"
  • After submitting, minors see an additional message: "If someone is bothering you or making you feel unsafe, talk to a parent, teacher, or trusted adult."
  • For school-linked students, the message includes specific names: "You can also talk to [Teacher Name] or [SA Name]."

Under-13 reporting uses the same simplified flow. If the report involves another user contacting them inappropriately, the system automatically flags it to admins as high priority. Parents are not automatically notified of reports filed by their child (to avoid discouraging reporting), unless the situation is escalated by the moderation team.
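
A minimal sketch of how copy selection by age band might look. The type and helper names here are assumptions for illustration, not the actual ReportModal implementation.

```typescript
// Illustrative only: selecting report-modal copy by age band.
// AgeBand and getReportCopy are hypothetical names, not the real ReportModal API.
type AgeBand = 'under13' | 'teen' | 'adult';

function getReportCopy(band: AgeBand, schoolContactName?: string) {
  const question = band === 'adult' ? "What's the problem?" : "What's wrong?";
  const postReportMessage =
    band === 'adult'
      ? null
      : 'If someone is bothering you or making you feel unsafe, talk to a parent, teacher, or trusted adult.' +
        (schoolContactName ? ` You can also talk to ${schoolContactName}.` : '');
  return { question, postReportMessage };
}
```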

Reportable Content Types

STATUS: BUILT

Report entry points exist for all seven spec-defined reportable content types -- track_record, feed_post, community, user, comment, challenge, and event -- and the ReportButton component is placed across all reportable surfaces. Only four of those types are currently defined in the frontend ReportableContentType union (see the table below). direct_message is deferred because DMs are not yet part of the platform.

| Content Type | Where It Appears | Report Trigger | FE Type Status |
| --- | --- | --- | --- |
| Track Record | TR gallery, community feed, challenge detail | "Report" on TR card | Defined as track_record |
| Community Post | Community feed | "Report" on post | Defined as feed_post |
| Community Comment | Below community posts | "Report" on comment | Not yet in ReportableContentType |
| User Profile | Profile page, member lists | "Report" on profile | Defined as user |
| Challenge | Challenge detail page | "Report" on challenge | Not yet in ReportableContentType |
| Community | Community detail page | "Report" on community | Defined as community |
| Event | Event detail page | "Report" on event | Not yet in ReportableContentType |
| Direct Message (future) | Message thread | "Report" on message | N/A -- DMs not in platform |

Automated Flagging

STATUS: BUILT

Automated content moderation with 8 rule types is implemented as middleware on community feed and track record routes. Rules include profanity detection, link filtering, excessive caps detection, duplicate content filtering, rapid posting detection, suspicious pattern detection, excessive length detection, and hate speech detection. Flagged content automatically creates ContentReport entries for admin review.

The spec defines eight auto-flag triggers that operate before or alongside user reports:

| Trigger | Action | Priority |
| --- | --- | --- |
| Image scanning: nudity/explicit | Auto-block from publishing + queue for review | Critical |
| Image scanning: CSAM match | Immediate block + NCMEC report + admin alert | Critical |
| Text: severe profanity/slurs | Auto-block from publishing + queue for review | High |
| Text: self-harm keywords | Queue for review + show support resources to author | High |
| Text: personal info patterns (phone, address, SSN) | Block from publishing + prompt user to edit | Medium |
| User receives 3+ reports in 7 days | Auto-flag user for review | High |
| Content receives 3+ unique reports | Auto-hide pending review | High |
| New account posts 10+ items in first hour | Auto-flag as potential spam | Medium |

Auto-block user experience: When content is auto-blocked, the user sees: "Your [post/Track Record] couldn't be published. Our system detected content that may not meet our guidelines. Please review and try again." The tone is deliberately non-accusatory -- no shaming language, no "violation" terminology. The user can edit and re-submit, or if they believe the flag is wrong, they can click "Think this is a mistake? Submit for manual review" which routes the content to the moderation queue.

CSAM handling: Content matching known CSAM databases triggers the most severe automated response: immediate block, mandatory NCMEC (National Center for Missing & Exploited Children) report, and an admin alert. This is a legal requirement under federal law (18 U.S.C. 2258A) and is not subject to appeal. The spec treats this as a separate track from the standard moderation pipeline.

Self-harm keyword detection: When self-harm keywords are detected in user-created content, the content is queued for review but the author is not blocked from publishing. Instead, the author is shown the SafetyResources component with crisis contacts. This reflects the platform's principle that a user expressing distress should receive support, not punishment.
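
The three counting-based triggers above (3+ unique reports on one item, 3+ reports against a user in 7 days, 10+ posts in a new account's first hour) reduce to simple threshold checks. The sketch below is illustrative only; the function and field names are not from the actual middleware.

```typescript
// Illustrative sketch of the counting-based auto-flag triggers described above.
// Function and field names are assumptions, not the actual middleware API.
interface ReportCounts {
  uniqueReportersOnContent: number;     // distinct users who reported this content
  reportsAgainstAuthorLast7Days: number;
  authorPostsInFirstHour: number;       // only meaningful for new accounts
  authorIsNewAccount: boolean;
}

type AutoFlagAction = 'auto_hide_pending_review' | 'flag_user_for_review' | 'flag_as_potential_spam';

function evaluateThresholds(c: ReportCounts): AutoFlagAction[] {
  const actions: AutoFlagAction[] = [];
  if (c.uniqueReportersOnContent >= 3) actions.push('auto_hide_pending_review');
  if (c.reportsAgainstAuthorLast7Days >= 3) actions.push('flag_user_for_review');
  if (c.authorIsNewAccount && c.authorPostsInFirstHour >= 10) actions.push('flag_as_potential_spam');
  return actions;
}
```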

Blocking

STATUS: BUILT

The BlockUserButton component implements block and mute actions with confirmation dialogs, success notices, and undo capability. Block can be triggered from the report confirmation flow (checkbox in ReportModal), from user profiles (three-dot menu), and from settings (blocked users list). The component accepts userId, username, isBlocked, and isMuted props and manages local state for both actions.

Blocking is available to all users. It is a silent, user-initiated action -- the blocked user is never notified.

What blocking does:

  • Blocked user's content is hidden from the blocker (Track Records, posts, comments)
  • Blocked user cannot see the blocker's public content
  • Blocked user cannot join communities created by the blocker
  • Blocking is mutual in terms of content visibility but unilateral in terms of initiation
  • Blocking does NOT affect school context -- teacher/student relationships are maintained regardless of blocks

How to block:

  • From the report confirmation screen: "Block this user" checkbox
  • From a user profile: three-dot menu, then "Block"
  • From Settings: "Blocked users" list, add by username

How to unblock:

  • Settings, then "Blocked users," then "Unblock" on the specific user
  • The BlockUserButton component provides an inline "Unblock" toggle
  • Unblocking is immediate
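
The visibility rules above amount to a symmetric check against stored block pairs (the user_blocks table in the data model below). The helper name and shapes here are illustrative, not existing code.

```typescript
// Illustrative only: content visibility under a block, per the rules above.
// Blocking hides content in both directions even though only one side initiated it.
interface BlockPair {
  blockerUserId: string;
  blockedUserId: string;
}

function isHiddenByBlock(viewerId: string, authorId: string, blocks: BlockPair[]): boolean {
  return blocks.some(
    (b) =>
      (b.blockerUserId === viewerId && b.blockedUserId === authorId) ||
      (b.blockerUserId === authorId && b.blockedUserId === viewerId)
  );
}
```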

Muting

STATUS: BUILT

Muting is implemented in the BlockUserButton component (user-level muting) and as a moderation action in CommunityModTools (moderator-imposed muting with configurable duration from 1 hour to 30 days). The community moderation mute includes a reason field and appears in the moderation log.

User-initiated muting (community level):

  • Community posts from the muted user no longer appear in the muter's feed
  • The user remains a community member but receives no notifications from the muted user
  • The user can still visit the community and see the muted user's content manually by navigating to their profile

Moderator-imposed muting:

  • A community moderator or creator can mute a member, preventing them from posting or commenting
  • The muted member can still view feeds but sees a banner indicating their muted status
  • The spec calls for indefinite muting (until manually unmuted); the current UI offers timed durations: 1 hour, 6 hours, 24 hours, 3 days, 7 days, 30 days

How to mute a community (user-initiated):

  • Community settings: "Mute this community"
  • From the report confirmation: "Mute this community" (specified but not yet in the post-report UI)
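
The timed durations offered by CommunityModTools translate to an expiry timestamp. An illustrative sketch follows; the constant and helper names are assumptions.

```typescript
// Illustrative only: mapping the CommunityModTools duration options to an expiry time.
const MUTE_DURATION_MS: Record<string, number> = {
  '1h': 1 * 60 * 60 * 1000,
  '6h': 6 * 60 * 60 * 1000,
  '24h': 24 * 60 * 60 * 1000,
  '3d': 3 * 24 * 60 * 60 * 1000,
  '7d': 7 * 24 * 60 * 60 * 1000,
  '30d': 30 * 24 * 60 * 60 * 1000,
};

function muteExpiresAt(duration: keyof typeof MUTE_DURATION_MS, from: Date = new Date()): Date {
  return new Date(from.getTime() + MUTE_DURATION_MS[duration]);
}
```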

Muting & Blocking Restrictions for Minors

STATUS: BUILT

Minor blocking restrictions are implemented in BlockUserButton.tsx. Under-13 users see a 'Report instead' UI instead of block/mute buttons. In school contexts, students cannot block teachers and are directed to report concerns instead. All minors see simplified confirmation language.

  • Under-13 users can block other users but see a simplified UI (per the spec; the current BlockUserButton implementation shows a "Report instead" prompt in place of block/mute for under-13 users, as noted above)
  • Within school communities, students cannot block teachers or classmates (but can report them)
  • The school-context restrictions apply only within school communities -- outside them, standard blocking rules apply

Report Status & Transparency

STATUS: NOT BUILT

The reporter feedback system (Settings > "Your Reports" with status tracking) is specified but not implemented. The ReportModal shows a confirmation message but does not persist the report or provide subsequent status updates.

Per the spec, reporters can check the status of their reports at Settings > "Your Reports":

| Status | Message Shown to Reporter |
| --- | --- |
| Submitted | "We received your report and will review it." |
| Under Review | "Our team is looking into this." |
| Action Taken | "We reviewed your report and took action. Thanks for helping keep DoCurious safe." (No specifics about the action, to protect the reported user's privacy.) |
| No Violation Found | "We reviewed your report and didn't find a violation of our guidelines. Thanks for looking out for our community." |

Status changes trigger in-app notifications (not email) to keep the system lightweight.
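
When the reporter feedback system is built, the status-to-message mapping above could be a simple lookup. The sketch below uses the backend spec status names and the spec wording; the constant name is illustrative.

```typescript
// Illustrative only: the spec's reporter-facing status messages as a lookup.
// Keys follow the backend spec statuses; the constant name is an assumption.
const REPORT_STATUS_MESSAGES: Record<string, string> = {
  submitted: 'We received your report and will review it.',
  under_review: 'Our team is looking into this.',
  action_taken:
    'We reviewed your report and took action. Thanks for helping keep DoCurious safe.',
  no_violation:
    "We reviewed your report and didn't find a violation of our guidelines. Thanks for looking out for our community.",
};
```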

Transparency Report (Future): The spec calls for a quarterly published report on the website with total reports received, breakdown by category, action taken rates, average response time, auto-flagging statistics, and appeal outcomes.

Moderation Queue (Admin)

STATUS: BUILT

Two admin moderation interfaces exist. The FlaggedContentQueue page (src/pages/admin/FlaggedContentQueue.tsx) provides a card-based queue with severity filtering (high/medium/low), content previews, report metadata (reason, reporter, date, report count), and four action buttons per item: Dismiss, Warn, Remove, Suspend User. The ReviewModeration page (src/pages/admin/ReviewModeration.tsx) is wired to useAdminStore for fetching flagged content and resolving flags, with search and status filtering. Both use mock data.

Platform admins review flagged content through a prioritized queue. Each flagged item displays:

  • Content type and preview
  • Author name and ID
  • Reporter name (or "AutoMod" / "System Scanner" for automated flags)
  • Report reason
  • Report date and time
  • Severity level (high/medium/low)
  • Number of reports received

Admin actions per item:

| Action | Effect |
| --- | --- |
| Dismiss | Report is closed. Content remains visible. No action against the author. |
| Warn | Author receives a warning notification. Content remains visible. Warning is logged. |
| Remove | Content is removed from the platform. Author is notified that their content was removed. |
| Suspend User | Author's account is suspended (temporary platform-wide restriction). Author loses access to all features for the suspension period. |
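
The four per-item actions could be modeled as a small union passed to a resolver. The sketch below only illustrates the shape of a graduated enforcement decision; the signature is an assumption, not the actual useAdminStore.resolveFlag API.

```typescript
// Illustrative only: the four per-item actions from the admin queue.
// The resolve signature is an assumption, not the real useAdminStore.resolveFlag.
type AdminQueueAction = 'dismiss' | 'warn' | 'remove' | 'suspend_user';

interface ResolveFlagInput {
  reportId: string;
  action: AdminQueueAction;
  resolutionNotes?: string;
}

async function resolveFlag(input: ResolveFlagInput): Promise<void> {
  // In the real store this would call the backend and update flaggedContent state;
  // here it only demonstrates the decision shape.
  console.log(`Resolving report ${input.reportId} with action: ${input.action}`);
}
```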

Community-Level Moderation

STATUS: BUILT

The CommunityModTools component (src/components/social/CommunityModTools.tsx) provides a full moderation panel accessible to community creators and moderators. It shows flagged content within the community, offers four action types (remove post, warn user, mute user with duration, ban user from community), requires confirmation dialogs with optional reason fields, and maintains a visible moderation action log with timestamps.

Community creators and moderators have a separate set of moderation tools scoped to their community:

| Action | Effect | Scope |
| --- | --- | --- |
| Remove Post | Removes the post and all its replies. Author is notified. | Community only |
| Warn User | Sends a formal warning notification to the user. | Community only |
| Mute User | User can view feeds but cannot post or comment for the selected duration. | Community only |
| Ban User | User loses access to all community feeds immediately. Previously shared content is hidden. | Community only |

Each action is recorded in the community's moderation log, which shows the action type, target user, moderator name, reason, and timestamp. The log is visible to all moderators and the community creator, providing transparency and accountability for moderation decisions within the community.
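
The log entries described above reduce to a small record type. The field names below are illustrative, not the actual CommunityModTools types.

```typescript
// Illustrative only: the fields shown in the community moderation log.
interface ModerationLogEntry {
  action: 'remove_post' | 'warn_user' | 'mute_user' | 'ban_user';
  targetUserId: string;
  moderatorName: string;
  reason?: string;        // optional reason captured in the confirmation dialog
  muteDuration?: string;  // only present for mute actions (e.g. '24h')
  timestamp: string;      // ISO 8601
}
```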

Escalation from community to platform: When community-level moderation is insufficient (e.g., a user is banned from a community but continues harassing members through other channels), community moderators can escalate to platform-level moderation by filing a report. Platform admins can then take account-wide actions (warnings, suspensions, permanent bans) that go beyond any single community's scope.

Appeals Process

STATUS: BUILT

The AppealForm page (src/pages/account/AppealForm.tsx) implements the full appeal lifecycle. It displays recent moderation actions taken against the user (content removal, warning, suspension), lets the user select an action to appeal, provides a reason textarea (minimum 50 characters), an optional file attachment for supporting evidence, and a good-faith confirmation checkbox. Existing appeals are shown with status badges (Pending Review, Under Review, Approved, Denied) and reviewer notes. One appeal per moderation action is enforced. All mock data.

Users whose content was removed or whose account was restricted can appeal the decision.

Appeal entry point: The user receives a notification that content was removed or their account was restricted. The notification includes an "Appeal this decision" link that navigates to /account/appeal.

Appeal flow:

  1. Select action to appeal -- User sees a list of recent moderation actions taken against them. Actions that already have an appeal are grayed out and cannot be appealed again.
  2. Provide reason -- Free text field (1000 character limit per spec, 50 character minimum enforced in UI). User explains why they believe the decision was wrong. Optional file attachment for supporting evidence.
  3. Good-faith confirmation -- Checkbox: "I confirm this appeal is made in good faith and that the information provided is accurate to the best of my knowledge."
  4. Submit -- Confirmation screen sets expectations: "We'll review your appeal within 48 hours." (The 48-hour window is per the spec; the current UI text says 5 business days.)
  5. Outcome notification -- Either upheld ("After reviewing your appeal, our original decision stands. [Brief reason].") or overturned ("We reviewed your appeal and have restored your [content/account]. We're sorry for the inconvenience.").

Appeal limits:

  • One appeal per moderation action (enforced in the UI via alreadyAppealed check)
  • No further appeals after decision (user can contact support for exceptional circumstances)
  • Appeals must be filed within 14 days of the moderation action (not yet enforced in the UI)
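
Pulling the appeal rules together (50-character minimum, 1,000-character limit, good-faith confirmation, one appeal per action, and the 14-day window the UI does not yet enforce), a validation sketch might look like this. Names are illustrative, not the actual AppealForm code.

```typescript
// Illustrative only: validating an appeal per the rules above.
interface AppealDraft {
  moderationActionId: string;
  actionDate: Date;
  reason: string;
  goodFaithConfirmed: boolean;
  alreadyAppealed: boolean;
}

const APPEAL_WINDOW_DAYS = 14;

function validateAppeal(draft: AppealDraft, now: Date = new Date()): string[] {
  const errors: string[] = [];
  if (draft.alreadyAppealed) errors.push('Only one appeal is allowed per moderation action.');
  if (draft.reason.trim().length < 50) errors.push('Please provide at least 50 characters.');
  if (draft.reason.length > 1000) errors.push('Appeals are limited to 1,000 characters.');
  if (!draft.goodFaithConfirmed) errors.push('Please confirm the appeal is made in good faith.');
  const ageInDays = (now.getTime() - draft.actionDate.getTime()) / (24 * 60 * 60 * 1000);
  if (ageInDays > APPEAL_WINDOW_DAYS) errors.push('Appeals must be filed within 14 days of the action.');
  return errors;
}
```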

Safety Resources

STATUS: BUILT

The SafetyResources component (src/components/social/SafetyResources.tsx) displays crisis contacts, online safety tips, and reporting guidance. It supports a compact mode for abbreviated display. Emergency contacts include the 988 Suicide & Crisis Lifeline, Crisis Text Line, and National Child Abuse Hotline. The component includes five numbered safety tips and a "Go to Report Form" call-to-action.

Safety resources are shown when:

  • A user reports content related to self-harm or dangerous activity
  • Auto-flagging detects self-harm keywords in user-created content
  • A user searches for self-harm related terms

Crisis contacts displayed:

| Resource | Contact Method | Details |
| --- | --- | --- |
| 988 Suicide & Crisis Lifeline | Call or text | 988 |
| Crisis Text Line | Text | HOME to 741741 |
| National Child Abuse Hotline | Call | 1-800-422-4453 |
| SAMHSA Helpline | Call | 1-800-662-4357 (in spec, not yet in component) |

The spec also includes the reminder: "For immediate danger, call 911."

Presentation rules:

  • Resources are shown non-intrusively (not a blocker or popup that must be dismissed)
  • Shown alongside content, not as a replacement for the user's action
  • Tone is caring, not clinical
  • For minors: "Talk to a parent, teacher, or trusted adult" appears first, before hotline numbers

Online safety tips (shown in the component):

  1. Never share personal information like your full name, address, phone number, or school name with people you meet online.
  2. Keep your passwords private and do not share them with friends or classmates.
  3. If someone online makes you feel uncomfortable or asks you to do something that does not feel right, tell a trusted adult immediately.
  4. Think before you post. Once something is online, it can be very difficult to remove completely.
  5. Be kind in your interactions. If you see bullying or hurtful behavior, report it and support the person being targeted.

The component supports two display modes: full (all five tips with detailed descriptions and all three emergency contacts with descriptions) and compact (four abbreviated tips and contacts without descriptions). The compact mode is used in inline callouts alongside content, while the full mode is used on dedicated safety pages.
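
Illustrative usage of the two modes follows. The import path and default export are assumptions; only the compact boolean prop is confirmed by the component description.

```tsx
// Illustrative usage only; the import path and default export are assumptions.
import SafetyResources from '../components/social/SafetyResources';

// Inline callout alongside flagged or self-harm-related content (compact mode).
export function SafetyCallout() {
  return <SafetyResources compact />;
}

// Dedicated safety page with full tips and contact descriptions (full mode).
export function SafetyPage() {
  return <SafetyResources />;
}
```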

Minor-Specific Safety Protections

STATUS: BUILT

Minor-specific protections are fully implemented. The ReportModal provides age-appropriate language for under-13 and 13-17 age groups, includes post-report supportive messaging, and shows school-linked contact information for school students. The BlockUserButton enforces under-13 report-only mode and school context restrictions.

The spec defines several safety behaviors specific to users under 18 and under 13:

Under-18 protections:

  • Simplified report language ("What's wrong?" instead of "What's the problem?")
  • Simplified reason options using age-appropriate vocabulary
  • Post-report support message: "If someone is bothering you or making you feel unsafe, talk to a parent, teacher, or trusted adult."
  • Safety tips prioritize "talk to a trusted adult" before hotline numbers
  • Community Guidelines are written in age-appropriate language for the under-18 audience

Under-13 protections (in addition to all under-18 protections):

  • Same simplified report flow as under-18
  • Reports involving inappropriate contact from another user are automatically flagged as high priority to admins
  • Parents are NOT automatically notified of reports filed (to avoid discouraging reporting), unless the moderation team escalates
  • Content created by under-13 users is auto-held for review before appearing in any public context (does not apply within school communities)
  • Blocking UI is simplified
  • No analytics cookies without parental consent (relevant to safety tracking)

School-linked student protections:

  • Post-report message includes specific names: "You can also talk to [Teacher Name] or [SA Name]"
  • Students cannot block teachers or classmates within school communities (but can always report)
  • Teacher-student relationships are maintained regardless of blocks in non-school contexts
  • Content in school communities is not auto-held (the school context provides its own oversight layer through teachers and School Admins)

AUP Enforcement

STATUS: BUILT

The AcceptableUsePolicy page (src/pages/public/AcceptableUsePolicy.tsx) and CommunityGuidelines page (src/pages/public/CommunityGuidelines.tsx) are implemented as static legal pages accessible from the public routes. Both display the platform values, prohibited content/behavior lists, and enforcement policies. The AUP is referenced from the Terms of Service and enforced through the moderation system.

The Acceptable Use Policy defines prohibited content and behavior, with graduated enforcement:

Prohibited content (users may not post, upload, or share content that):

  • Is sexually explicit or pornographic
  • Depicts or promotes violence or self-harm
  • Contains hate speech targeting race, ethnicity, gender, sexual orientation, religion, disability, or national origin
  • Harasses, bullies, or threatens other users
  • Contains personal information of others without consent (doxxing)
  • Is spam, scams, or commercial solicitation
  • Infringes copyright or intellectual property
  • Promotes illegal activities
  • Contains malware or malicious links
  • Impersonates another person or entity
  • Is deliberately misleading or deceptive

Prohibited behavior (users may not):

  • Create multiple accounts to evade restrictions
  • Use automated tools (bots, scrapers) without permission
  • Attempt to access other users' accounts or data
  • Exploit platform vulnerabilities
  • Manipulate gamification systems (fake completions, XP farming)
  • Circumvent content moderation or safety features
  • Use the platform to recruit for external commercial purposes
  • Engage in predatory behavior toward minors

Enforcement escalation:

| Level | Trigger | Action |
| --- | --- | --- |
| First offense (minor) | Single minor violation | Warning |
| Second offense | Repeat violation | Content removal + 7-day restriction |
| Severe violation | Serious single violation | Immediate suspension |
| Illegal content (CSAM, threats) | Any occurrence | Immediate permanent ban + law enforcement notification |

False reporting (abusive use of the report system) is itself an AUP violation.

Community Guidelines

STATUS: BUILT

The CommunityGuidelines page is implemented as a static public page. It covers the five core values (Be Encouraging, Be Authentic, Be Respectful, Be Safe, Be Original), community-specific rules, minor-specific guidelines, and vendor-specific guidelines.

The Community Guidelines set the positive tone for platform interactions. Unlike the AUP (which is legalistic and defines prohibitions), the Community Guidelines are written in a friendly, encouraging voice. They are structured around five core values:

  • Be Encouraging -- Celebrate others' achievements, give constructive feedback, recognize that everyone is at a different point in their learning journey
  • Be Authentic -- Share real experiences in Track Records, do not fake completions or misrepresent work, acknowledge that struggling and failing are part of learning
  • Be Respectful -- Treat everyone with kindness, disagree respectfully, no harassment, bullying, or exclusion
  • Be Safe -- Do not share personal information (address, phone number, school name for minors), report content that makes you uncomfortable, prioritize physical safety when doing real-world challenges
  • Be Original -- Share your own work and experiences, give credit when inspired by others, do not copy others' Track Records

Roles & Permissions

This matrix shows which platform roles can perform which safety and moderation actions.

| Action | General User | Student (T1) | Student (T2) | Parent | Teacher | School Admin | Platform Admin |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Report content | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Block user | Yes | Simplified | Yes | Yes | Yes | Yes | Yes |
| Mute community | Yes | School only | Yes | Yes | Yes | Yes | Yes |
| View "Your Reports" status | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| File appeal | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Moderate community content | Creator/mods only | No | No | No | Creator/mods | Creator/mods | Yes |
| Review platform flagged queue | No | No | No | No | No | No | Yes |
| Take platform mod action (warn/remove/suspend) | No | No | No | No | No | No | Yes |
| Review appeals | No | No | No | No | No | No | Yes |
| View transparency report | Public | Public | Public | Public | Public | Public | Public |

Admin role permissions for moderation (from src/types/admin.types.ts):

| Admin Role | Has content_moderation Access |
| --- | --- |
| Super | Yes |
| Content | Yes |
| Support | Yes |
| School | No |
| Vendor | No |
| Analytics | No |
| Engineering | No |
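
A permission check against ADMIN_PERMISSIONS might look like the sketch below. The exact shape and role-name casing in src/types/admin.types.ts may differ; this is illustrative only.

```typescript
// Illustrative only: checking content_moderation access per the table above.
type AdminRole = 'super' | 'content' | 'support' | 'school' | 'vendor' | 'analytics' | 'engineering';
type AdminFeatureArea = 'content_moderation'; // other feature areas omitted for brevity

const ADMIN_PERMISSIONS: Record<AdminRole, AdminFeatureArea[]> = {
  super: ['content_moderation'],
  content: ['content_moderation'],
  support: ['content_moderation'],
  school: [],
  vendor: [],
  analytics: [],
  engineering: [],
};

function canModerateContent(role: AdminRole): boolean {
  return ADMIN_PERMISSIONS[role].includes('content_moderation');
}
```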

Constraints & Limits

| Constraint | Value | Source |
| --- | --- | --- |
| Report completion time target | < 30 seconds | Doc 15 principle |
| Report reason selection | Single-select from predefined list | Doc 15 §3.2 |
| Report details text limit | 500 characters | Doc 15 §3.3 |
| Appeal reason text limit | 1,000 characters | Doc 15 §8.2 |
| Appeal minimum text | 50 characters | UI enforcement in AppealForm |
| Appeal filing window | 14 days from moderation action | Doc 15 §8.3 |
| Appeals per moderation action | 1 | Doc 15 §8.3 |
| Appeal review SLA | 48 hours (spec) / 5 business days (UI text) | Doc 15 §8.2 / AppealForm |
| Report review SLA | 24 hours | Doc 15 §3.4 |
| Auto-hide threshold | 3 unique flaggers on same content | Doc 15 §5.1 |
| Auto-flag user threshold | 3+ reports in 7 days | Doc 15 §5.1 |
| Spam detection threshold | 10+ posts in first hour (new accounts) | Doc 15 §5.1 |
| DMCA repeat infringer limit | 3 valid claims = account terminated | Doc 14 §9.4 |
| Community mute durations (UI) | 1h, 6h, 24h, 3d, 7d, 30d | CommunityModTools |
| AUP change notice period | 14 days via in-app notification | Doc 14 §11 |
| Terms/Privacy change notice | 30 days via email + in-app | Doc 14 §11 |
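
Several of these limits are natural candidates for shared frontend constants. No such module exists today; the sketch below is illustrative.

```typescript
// Illustrative only: the spec limits above collected as shared constants.
// No such constants module currently exists in the codebase.
export const SAFETY_LIMITS = {
  reportDetailsMaxChars: 500,
  appealReasonMaxChars: 1000,
  appealReasonMinChars: 50,
  appealWindowDays: 14,
  appealsPerAction: 1,
  autoHideUniqueReportThreshold: 3,
  autoFlagUserReportsIn7Days: 3,
  spamNewAccountPostsFirstHour: 10,
  dmcaRepeatInfringerClaims: 3,
} as const;
```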

Design Decisions

Why "Report" instead of "Flag"? The word "Report" is clearer for younger users and feels less technical. "Flag" could be confusing or feel like a negative mark on the reporter. The spec explicitly calls for "Report" as the user-facing label. Internally, the codebase uses both terms -- the admin queue is called FlaggedContentQueue (admin perspective) while the user-facing components use "Report" (user perspective).

Why are reporters never identified? Confidential reporting is fundamental to the system's safety, especially for minors. If a student knew that reporting a classmate's behavior would reveal their identity, they would be far less likely to report. The spec mandates that "The person you reported won't know who reported them" is shown on every report confirmation.

Why no immediate content removal on report? Immediately hiding reported content would create an abuse vector where users could report content they simply disagree with to get it temporarily removed. Instead, content remains visible until reviewed by a moderator -- except when it receives 3+ unique reports, which triggers auto-hide as a community self-moderation mechanism.

Why simplified language for minors? Under-18 users need to understand the reporting process without legal or technical jargon. Phrases like "It's mean or hurtful" and "It's scary or makes me uncomfortable" map directly to the adult equivalents (Harassment/bullying and Safety concern) but use vocabulary accessible to younger users. This is a COPPA-informed design choice.

Why no email notifications for reports? The spec explicitly avoids email for report status updates to prevent "email overload" and keep the system lightweight. In-app notifications are used instead. This also reduces the risk of sensitive report details appearing in email inboxes that may be shared (particularly relevant for family email accounts used by minors).

Why does blocking NOT affect school relationships? The school context requires that teacher-student and student-student relationships remain functional regardless of personal conflicts. A student blocking a teacher cannot prevent the teacher from assigning challenges, reviewing submissions, or managing the classroom. This is enforced by design: blocking affects content visibility and social interactions but never breaks the school administrative hierarchy.

Why one appeal per action with no further escalation? The single-appeal rule prevents the moderation team from being overwhelmed by repeated appeals on the same decision. Users who believe their situation is truly exceptional can contact support directly, but the formal appeals process is limited to one round. This keeps the system manageable while still providing due process.

Why is the appeal review SLA different between spec and UI? The spec says 48 hours; the AppealForm UI text says 5 business days. This discrepancy should be resolved before launch. The 48-hour SLA is more aggressive and may not be sustainable at scale; the 5-business-day window is more realistic but less user-friendly. This is a product decision that depends on staffing.

What to revisit:

  • Per-content-type report reason lists (spec defines different reasons for each type; current UI uses a single list)
  • Minor-specific simplified report language (not yet implemented)
  • 14-day appeal window enforcement (not yet enforced in the UI)
  • Appeal SLA discrepancy (48 hours vs. 5 business days)
  • Reporter feedback system (Settings > "Your Reports")
  • Auto-flagging parity with the spec (image scanning, CSAM matching, self-harm keyword detection, and personal-information pattern detection are not among the implemented rule types)
  • SAMHSA helpline addition to SafetyResources component
  • Mute community from report confirmation flow
  • ReportableContentType enum expansion (missing comment, challenge, event)

Technical Implementation

Type Definitions

| File | Description |
| --- | --- |
| src/types/report.types.ts | Core report types: ReportReason (6 values: inappropriate, spam, harassment, misinformation, safety_concern, other), ReportStatus (4 values: pending, reviewed, action_taken, dismissed), ReportableContentType (4 values: track_record, feed_post, community, user), and a ContentReport interface extending BaseEntity with reporterId, contentType, contentId, reason, optional description, and status. |
| src/types/admin.types.ts | Admin moderation types: AdminRole (7 roles), AdminFeatureArea (includes content_moderation), and ADMIN_PERMISSIONS mapping roles to feature areas. The FlaggedContentStatus type is exported from this file and used by ReviewModeration. |
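
Reconstructed from the description above, the report types look roughly like this. The actual file may differ in detail (for example, the fields inherited from BaseEntity).

```typescript
// Reconstructed from the description above; the actual src/types/report.types.ts
// may differ (e.g. the exact BaseEntity fields are assumed here).
export type ReportReason =
  | 'inappropriate'
  | 'spam'
  | 'harassment'
  | 'misinformation'
  | 'safety_concern'
  | 'other';

export type ReportStatus = 'pending' | 'reviewed' | 'action_taken' | 'dismissed';

export type ReportableContentType = 'track_record' | 'feed_post' | 'community' | 'user';

export interface ContentReport /* extends BaseEntity */ {
  id: string;            // assumed to come from BaseEntity
  reporterId: string;
  contentType: ReportableContentType;
  contentId: string;
  reason: ReportReason;
  description?: string;
  status: ReportStatus;
}
```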

State Management

| File | Description |
| --- | --- |
| src/store/useAdminStore.ts | Zustand store with flaggedContent state and fetchFlaggedContent and resolveFlag actions. Used by the ReviewModeration page for platform-level moderation. |

Page Components

| File | Description |
| --- | --- |
| src/pages/admin/FlaggedContentQueue.tsx | Card-based moderation queue. Local state with mock data (6 flagged items). FlaggedItem interface with contentType, contentPreview, authorName, reporterName, reportReason, reportDate, severity, reportCount. Severity filtering (all/high/medium/low). Four actions per card: Dismiss, Warn, Remove, Suspend User. Severity stats display. |
| src/pages/admin/ReviewModeration.tsx | Store-connected moderation page. Uses useAdminStore.fetchFlaggedContent and resolveFlag. Search by content text, status filter (all/pending/approved/flagged/removed/dismissed). Platform admin guard via selectIsPlatformAdmin. |
| src/pages/account/AppealForm.tsx | Full appeal flow. Mock moderation actions (content removal, warning, suspension). Mock existing appeals with status badges. Radio selection for action to appeal, textarea with 50-char minimum, file attachment placeholder, good-faith checkbox. One-appeal-per-action enforcement. Success banner on submission. |
| src/pages/public/AcceptableUsePolicy.tsx | Static AUP page with prohibited content/behavior lists and enforcement escalation. Public route. |
| src/pages/public/CommunityGuidelines.tsx | Static community guidelines page with core values (encouraging, authentic, respectful, safe, original), community-specific rules, minor-specific guidelines, and vendor-specific guidelines. Public route. |

UI Components

| File | Description |
| --- | --- |
| src/components/common/ReportModal.tsx | Three-step report modal using the Dialog component. Reason selection (6 buttons), optional textarea, "Also block this user" checkbox (hidden for community reports), submit button, confirmation state with auto-close. Props: contentType, contentId, onClose. |
| src/components/common/ReportButton.tsx | Flag icon trigger for ReportModal. Manages open/close state. Props: contentType, contentId, size. Renders FiFlag icon with ghost variant button. |
| src/components/social/BlockUserButton.tsx | Block and mute actions with confirmation dialogs. Block confirmation: "They will no longer be able to see your profile or interact with you." Mute confirmation: "Their posts will be hidden from your feed in this community." Success notices with undo for block. Props: userId, username, isBlocked, isMuted. |
| src/components/social/CommunityModTools.tsx | Community moderator panel. Flagged content cards with four actions (remove post, warn, mute with duration selector, ban). Confirmation dialogs with reason textarea. Moderation log with action type badges, timestamps, and moderator attribution. Props: communityId. |
| src/components/social/SafetyResources.tsx | Crisis support display. Three emergency contacts (988 Lifeline, Crisis Text Line, National Child Abuse Hotline). Five numbered safety tips (four in compact mode). "Go to Report Form" CTA. Warm "You Are Not Alone" header banner. Props: compact (boolean). |

Data Model (Backend)

The spec defines four database tables for the safety and moderation system:

| Table | Key Columns | Indexes |
| --- | --- | --- |
| content_reports | id (UUID PK), reporter_user_id (FK), content_type (enum: track_record, post, comment, profile, challenge, community, event), content_id (UUID polymorphic), reason (varchar 100), details (text, max 500), status (enum: submitted, under_review, action_taken, no_violation, dismissed), priority (enum: critical, high, medium, low), auto_flagged (boolean), reviewed_by_admin_id (FK nullable), reviewed_at, resolution_notes, created_at | (content_type, content_id), (status, priority, created_at), (reporter_user_id, created_at) |
| user_blocks | id (UUID PK), blocker_user_id (FK), blocked_user_id (FK), created_at | Unique on (blocker_user_id, blocked_user_id), index on blocked_user_id |
| community_mutes | id (UUID PK), user_id (FK), community_id (FK), created_at | Unique on (user_id, community_id) |
| moderation_appeals | id (UUID PK), user_id (FK), moderation_action_id (FK), reason (text, max 1000), status (enum: submitted, under_review, upheld, overturned), reviewed_by_admin_id (FK nullable), reviewed_at, resolution_notes, created_at | (status, created_at) |

Note on status enum mapping: The frontend ReportStatus type uses pending | reviewed | action_taken | dismissed while the backend spec uses submitted | under_review | action_taken | no_violation | dismissed. The adapter layer will need to map between these when the backend is connected. Similarly, the frontend AppealStatus type uses pending_review | under_review | approved | denied while the backend uses submitted | under_review | upheld | overturned.

Note on content_type enum: The backend content_reports table supports seven content types (track_record, post, comment, profile, challenge, community, event) while the frontend ReportableContentType currently defines only four (track_record, feed_post, community, user). The naming also differs -- the backend uses post where the frontend uses feed_post, and profile where the frontend uses user. The adapter layer in src/adapters/ will need to handle these mappings.
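
A sketch of the adapter mappings called out in the two notes above. The function name, the file it would live in, and the exact status correspondences are assumptions to be settled when the backend is connected.

```typescript
// Illustrative adapter sketch for the status and content_type mappings noted above.
// The mapping choices marked "assumption" are one possible resolution, not a decided one.
type BackendReportStatus = 'submitted' | 'under_review' | 'action_taken' | 'no_violation' | 'dismissed';
type FrontendReportStatus = 'pending' | 'reviewed' | 'action_taken' | 'dismissed';

const STATUS_MAP: Record<BackendReportStatus, FrontendReportStatus> = {
  submitted: 'pending',       // assumption
  under_review: 'pending',    // assumption: still unresolved from the reporter's perspective
  action_taken: 'action_taken',
  no_violation: 'reviewed',   // assumption: reviewed with no action taken
  dismissed: 'dismissed',
};

const CONTENT_TYPE_MAP: Record<string, string> = {
  track_record: 'track_record',
  post: 'feed_post',
  profile: 'user',
  community: 'community',
  // comment, challenge, and event have no frontend equivalent yet (see note above)
};

export function toFrontendStatus(status: BackendReportStatus): FrontendReportStatus {
  return STATUS_MAP[status];
}
```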

Note on priority field: The backend spec includes a priority field (critical/high/medium/low) on content_reports which maps to the severity field in the FlaggedContentQueue mock data. Auto-flagged items receive their priority from the trigger type (e.g., CSAM match = critical, profanity = high), while user-reported items default to medium and can be escalated by admin review.

  • Communities -- Community-level moderation (delete post, mute member, remove member, ban) is handled by community creators and moderators. The three-feed system (Bucket List, Track Record, Discussion) defines the surfaces where content can be reported. The CommunityModTools component provides the in-community moderation panel.
  • Accounts -- User roles determine moderation permissions. Age verification (under-13, under-18) triggers minor-specific safety flows. COPPA compliance affects how reports from minors are handled and whether parents are notified.
  • Track Records -- Track Records are a primary reportable content type. The Track Record gallery, community feeds, and challenge detail pages all surface ReportButton instances for user-created content.
  • Notifications -- Report status changes, moderation actions, appeal outcomes, and safety alerts are all delivered through the in-app notification system. No email notifications for reports.
  • School -- School context creates special moderation rules: students cannot block teachers, teacher-student relationships are maintained regardless of blocks, and under-13 content in school communities is not auto-held (unlike public contexts).
  • Vendor -- Vendor-created challenges are reportable for inaccuracy, safety concerns, inappropriate content, copyright violations, and pricing issues. Vendors have specific guidelines in the Community Guidelines around challenge accuracy, safety, and honest representation.
