This guide explains how to identify, report, and mitigate harmful content on our platform.
Key Concepts
What is Hateful Content?
Content that promotes hate, violence, or discrimination based on race, religion, gender, or other protected attributes.
Why it Matters
Protecting users from harmful content is essential for fostering a safe and inclusive community.
Usage Examples
Reporting Hateful Content
- Click the "Report" button below the post.
- Select the category: Hate Speech / Harassment / Misinformation.
- Provide details to help us address the issue.
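If it helps to picture what a report captures, here is a minimal sketch of how the information from the steps above could be structured. The `ContentReport` class, its field names, and the category strings are hypothetical illustrations only, not the platform's actual reporting API.

```python
from dataclasses import dataclass

# Hypothetical structure mirroring the reporting steps above.
# Field names and category values are illustrative, not the platform's API.
@dataclass
class ContentReport:
    post_id: str   # the post being reported
    category: str  # "Hate Speech", "Harassment", or "Misinformation"
    details: str   # free-text context that helps moderators assess the post

# Example report, filled in the same way the reporting steps describe.
report = ContentReport(
    post_id="12345",
    category="Hate Speech",
    details="The post targets a religious group with violent language.",
)
print(report)
```

The more specific the details field is, the easier it is for moderators to evaluate the report in context.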
Moderation Tools
- Use automated filters to detect keywords such as "racist," "xenophobic," or "violent" (a minimal sketch follows this list).
- Manual review by team members ensures context is considered.
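As a rough sketch of how an automated keyword filter might flag content for manual review, the example below scans post text against a small list of terms. The keyword list and the `flag_for_review` function are assumptions for illustration; they are not the platform's actual filter.

```python
import re

# Illustrative keyword list; a real filter would be far more nuanced and
# would be combined with context-aware models and human review.
FLAGGED_KEYWORDS = ["racist", "xenophobic", "violent"]

def flag_for_review(text: str) -> list[str]:
    """Return the flagged keywords found in the text (case-insensitive, whole words)."""
    found = []
    for keyword in FLAGGED_KEYWORDS:
        if re.search(rf"\b{re.escape(keyword)}\b", text, flags=re.IGNORECASE):
            found.append(keyword)
    return found

# Posts with matches are queued for manual review rather than removed outright.
post = "This comment contains violent threats against a minority group."
matches = flag_for_review(post)
if matches:
    print(f"Queued for manual review; matched keywords: {matches}")
```

Keyword matching alone produces false positives (for example, posts quoting or condemning hateful language), which is why flagged items go to manual review so context is considered before any action is taken.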
Best Practices
- Be Specific
Clearly describe the content’s harmful nature to improve moderation accuracy.
- Avoid Over-reporting
Reports should focus on content that violates our community guidelines.
For more details on community standards, visit our Community Guidelines.