This guide explains how to identify, report, and mitigate harmful content on our platform.

Key Concepts

  • What is Hateful Content?
    Content that promotes hate, violence, or discrimination based on race, religion, gender, or other protected attributes.

  • Why it Matters
    Protecting users from harmful content is essential for fostering a safe and inclusive community.


Usage Examples

  1. Reporting Hateful Content

    • Click the "Report" button below the post.
    • Select the category: Hate Speech / Harassment / Misinformation.
    • Provide specific details to help us address the issue (a hypothetical report structure is sketched after this list).
  2. Moderation Tools

    • Use automated filters to flag keywords such as "racist," "xenophobic," or "violent" (a minimal filter sketch follows this list).
    • Manual review by team members ensures context is considered, since keyword matches alone cannot.
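
As a companion to the reporting steps above, here is a minimal sketch of what a well-formed report might carry. The names (Report, CATEGORIES, submit_report) are hypothetical illustrations, not our actual reporting API.

    from dataclasses import dataclass

    # Hypothetical names for illustration; not our actual reporting API.
    CATEGORIES = {"Hate Speech", "Harassment", "Misinformation"}

    @dataclass
    class Report:
        post_id: str
        category: str   # one of CATEGORIES
        details: str    # free-text context for the moderation team

    def submit_report(report: Report) -> None:
        # Reject malformed reports before they reach the review queue.
        if report.category not in CATEGORIES:
            raise ValueError(f"Unknown category: {report.category}")
        if not report.details.strip():
            raise ValueError("Details are required so moderators can judge context.")
        print(f"Report queued: {report.post_id} ({report.category})")

    # Usage: specific details make the report actionable (see Best Practices below).
    submit_report(Report("post-123", "Hate Speech", "Slur aimed at another user in the first line."))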
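
And here is a minimal sketch of the kind of keyword pre-filter described in Moderation Tools, assuming a simple hand-picked word list; in practice, matches are queued for manual review rather than removed automatically.

    import re

    # Illustrative list only; a production list is maintained by the moderation team.
    FLAGGED_TERMS = {"racist", "xenophobic", "violent"}

    def flag_for_review(post_text: str) -> bool:
        """Return True if the post contains a flagged term and should be queued."""
        words = set(re.findall(r"[a-z']+", post_text.lower()))
        return not FLAGGED_TERMS.isdisjoint(words)

    # Matches go to manual review, since keyword hits alone cannot tell
    # quotation or counter-speech from abuse.
    print(flag_for_review("This comment is openly xenophobic."))  # True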

Best Practices

  • Be Specific
    Clearly describe the content’s harmful nature to improve moderation accuracy.
  • Avoid Over-reporting
    Reports should focus on content that violates our community guidelines.

For more details on community standards, visit our Community Guidelines.