Content Moderation Policy
Effective: February 2026
Last Updated: February 2026
1. Purpose & Scope
This policy governs all user-generated content on Pincident and applies to every user.
This policy governs all user-generated content on Pincident, including dents, comments, media uploads, profile information, net descriptions, and validation responses. It applies to every user regardless of role, brass score, or account age. The goal is to maintain the accuracy, reliability, and safety of location-based civic information shared on the platform.
This policy should be read alongside our Terms of Service and Governance Model.
2. Definitions
Key terms used throughout this policy.
- Dent: A user-submitted incident or event report anchored to a specific geographic location on the Pincident platform.
- Net: A user-defined geographic area (public or private) where dents and community activity are organized.
- Brass Score: A numeric credibility metric reflecting a user's history of accurate reporting and constructive platform participation.
- Validation: The community process through which users confirm, dispute, or mark dents as resolved.
- Protected Characteristics: Race, ethnicity, national origin, religion, gender, gender identity, sexual orientation, disability, age, caste, veteran status, and immigration status.
- Misinformation: Demonstrably false claims of fact that could cause real-world harm. This is distinct from honest mistakes, differences of opinion, speculation clearly presented as such, or good-faith inaccuracies in incident reporting.
- Doxxing: Publishing or threatening to publish another person's private or identifying information without their consent.
3. Prohibited Content Categories
Thirteen categories of content that are not permitted on Pincident.
3.1 Illegal Content
Content that violates applicable law in the jurisdiction where it is posted or viewed. Examples: soliciting illegal transactions, distributing controlled substances, facilitating theft or property crime.
3.2 Violence & Threats
Direct or implied threats of physical harm against individuals, groups, or communities. Content that glorifies, incites, or celebrates violence. Examples: threatening a person mentioned in a dent, calling for vigilante action at a reported location, posting content designed to intimidate residents of an area.
3.3 Hate Speech
Content that attacks, demeans, or dehumanizes individuals or groups based on protected characteristics. Examples: using slurs in dents or comments targeting a community, posting dents that blame incidents on ethnic or religious groups without factual basis, creating nets specifically to target marginalized communities.
3.4 Harassment & Bullying
Targeted, sustained, or severe abuse directed at specific individuals. Examples: repeatedly posting dents to track or surveil a specific person, coordinating with other users to flood someone's dents with hostile comments, using dents to publicly shame individuals by name for minor disputes.
3.5 Doxxing & Privacy Violations
Publishing private addresses, phone numbers, identification documents, medical records, financial information, or images of identifiable individuals in private settings without consent. Examples: posting a dent revealing someone's home address after a dispute, sharing photos of individuals taken in private spaces, publishing personal information of someone involved in a reported incident.
3.6 Sexual Exploitation & Abuse
Sexually explicit content, non-consensual intimate imagery, or any content that sexually exploits minors. Zero tolerance policy: content involving minors is removed immediately and reported to relevant authorities. Examples: posting explicit imagery in dents, sharing non-consensual intimate images, using the platform to solicit sexual contact with minors.
3.7 Deliberate Misinformation
Fabricated incident reports, manipulated media presented as authentic, or false safety claims designed to cause public harm. Examples: creating fake emergency dents to cause panic in an area, posting doctored images or video as evidence of a non-existent incident, systematically creating false reports to discredit a location or business.
3.8 Spam & Manipulation
Repetitive or bulk posting, coordinated inauthentic behavior, artificial engagement, validation manipulation, and brass score gaming. Examples: posting identical dents across multiple nets, using multiple accounts to validate your own reports, coordinated campaigns to inflate or deflate brass scores.
3.9 Impersonation
Posing as another user, public figure, organization, emergency service, or Pincident staff member. Examples: creating a username that mimics a government agency, pretending to be a Pincident moderator, impersonating a public official in comments.
3.10 Conflict of Interest
Posting dents about your own business or organization without disclosure, or using the platform to promote or damage competitors. Examples: creating dents praising your own business without identifying your relationship, using alternate accounts to validate your own business-related dents, posting false incident reports about a competitor's location.
3.11 Off-Topic Content
Content unrelated to the geographic location, incident context, or civic information purpose of the platform. Examples: purely political or religious commentary with no connection to a reportable event, personal advertisements, general social media content unrelated to location-based incidents.
3.12 Dangerous Content
Content that provides instructions for creating weapons or harmful substances, or that could directly facilitate real-world harm at a specific location. Examples: posting instructions for building explosive devices in a dent, sharing information designed to help someone cause harm at a named location, posting detailed security vulnerability information about a specific building.
3.13 Regulated Goods & Services
Content promoting the sale of controlled substances, firearms, alcohol, tobacco, gambling services, or other regulated items in a manner that does not comply with applicable local laws. Examples: advertising firearms sales in dents, promoting unlicensed gambling operations, soliciting controlled substances.
4. Content Verification States
Content progresses through defined states that help users assess reliability.
Content on Pincident progresses through defined states that help users assess reliability:
- Unverified: Newly posted; has not yet received sufficient community review.
- Corroborated: Multiple independent users have confirmed the report through validation.
- Disputed: Community members have submitted conflicting assessments.
- Resolved: The reported incident has concluded or been addressed; the dent is marked as historical.
These states are informational. They reflect community assessment and do not constitute verification, endorsement, or fact-checking by Pincident.
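For readers who prefer a concrete model, the lifecycle above can be sketched as a small state machine. The state names come from this policy; the transition rules and every identifier below are illustrative assumptions, not a description of Pincident's actual implementation.

```typescript
// Sketch of the dent verification lifecycle described above.
// State names mirror this policy; transition rules are assumptions.
type VerificationState = "unverified" | "corroborated" | "disputed" | "resolved";

// Community validation can move a dent among the first three states;
// any active state can be closed out as resolved (historical).
const transitions: Record<VerificationState, VerificationState[]> = {
  unverified: ["corroborated", "disputed", "resolved"],
  corroborated: ["disputed", "resolved"],
  disputed: ["corroborated", "resolved"],
  resolved: [], // historical: no further state changes
};

function canTransition(from: VerificationState, to: VerificationState): boolean {
  return transitions[from].includes(to);
}

// Example: a new dent confirmed by multiple independent users.
console.log(canTransition("unverified", "corroborated")); // true
console.log(canTransition("resolved", "disputed"));       // false
```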
5. Moderation Approach
Pincident uses a hybrid system combining automated detection with human moderators.
Pincident uses a hybrid moderation system combining automated detection tools with human moderators. Automated systems flag potentially harmful content for human review. Automated systems do not make final removal decisions, with two exceptions: content matching known child sexual abuse material (CSAM) hashes and content posing an imminent threat to physical safety.
Platform-level moderation operates independently from net-level moderation. We are committed to monitoring our automated systems for false positives, bias, and accuracy, and we regularly review their performance.
6. Net-Level Moderation
Net owners may establish additional community rules, but cannot weaken platform-wide policies.
Net owners may establish additional rules for their community beyond platform-wide policy. Net-specific rules cannot weaken or contradict platform-wide rules.
The moderator hierarchy within each net is as follows:
- Owner: Creates and controls the net.
- Admin: Manages day-to-day moderation, appoints moderators.
- Moderator: Reviews flagged content, issues warnings, removes content.
- Member: Posts, comments, participates in validation.
- Observer: Read-only access.
Net moderators can: remove content within their net, mute users, issue warnings, and set net-specific guidelines.
Net moderators cannot: access private user data, view activity in other nets, or override platform-level enforcement.
Pincident's platform team may override net-level moderation decisions that conflict with platform policy, or intervene when net governance fails to address serious violations.
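One way to picture the boundaries above is as a role-to-capability map. The roles and capabilities are taken from this section; the shape of the code and all identifiers in it are illustrative assumptions.

```typescript
// Hypothetical role/permission model for net-level moderation.
// Roles and capabilities mirror this section; identifiers are illustrative.
type NetRole = "owner" | "admin" | "moderator" | "member" | "observer";

type NetPermission =
  | "read"              // Observer and above
  | "post"              // Member and above
  | "validate"
  | "removeContent"     // Moderator and above
  | "muteUsers"
  | "issueWarnings"
  | "setNetGuidelines"
  | "appointModerators" // Admin and above
  | "controlNet";       // Owner only

// Each role inherits the capabilities of the roles below it.
const rolePermissions: Record<NetRole, NetPermission[]> = {
  observer: ["read"],
  member: ["read", "post", "validate"],
  moderator: [
    "read", "post", "validate",
    "removeContent", "muteUsers", "issueWarnings", "setNetGuidelines",
  ],
  admin: [
    "read", "post", "validate",
    "removeContent", "muteUsers", "issueWarnings", "setNetGuidelines",
    "appointModerators",
  ],
  owner: [
    "read", "post", "validate",
    "removeContent", "muteUsers", "issueWarnings", "setNetGuidelines",
    "appointModerators", "controlNet",
  ],
};

// Note: no net role grants access to private user data, activity in
// other nets, or overrides of platform-level enforcement (see above).
function hasPermission(role: NetRole, perm: NetPermission): boolean {
  return rolePermissions[role].includes(perm);
}
```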
7. Reporting Mechanisms
In-app reporting is available on every piece of content.
In-app reporting is available on every piece of content: dents, comments, media, and user profiles. When submitting a report, users select a category (matching the prohibited content categories above) and may provide an optional description.
Reports are prioritized based on:
- Severity of the potential violation
- Volume of reports received for the same content
- Potential for real-world harm
- The reporter's brass score
Reporter identity is not disclosed to the reported user. Abuse of the reporting system (such as coordinated mass-reporting of content that does not violate policies) is itself a violation subject to enforcement.
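As a purely illustrative sketch, a triage queue might combine the four factors above into a single priority score. The factors come from this policy; the weights, field names, and formula below are invented for illustration and do not describe Pincident's actual algorithm.

```typescript
// Hypothetical priority scoring over the four factors listed above.
// All weights and field names are illustrative assumptions.
interface Report {
  severity: number;           // 1 (minor) .. 4 (severe), per reported category
  reportCount: number;        // volume of reports for the same content
  harmPotential: number;      // 0..1 estimate of real-world harm
  reporterBrassScore: number; // credibility of the reporting user
}

function triagePriority(r: Report): number {
  // Severity and harm potential dominate; volume and reporter
  // credibility act as secondary signals.
  return (
    r.severity * 10 +
    Math.min(r.reportCount, 20) +    // cap so mass-reporting cannot dominate
    r.harmPotential * 25 +
    Math.log1p(r.reporterBrassScore) // diminishing returns on brass score
  );
}

// Higher scores are reviewed first.
const queue: Report[] = [
  { severity: 1, reportCount: 2, harmPotential: 0.1, reporterBrassScore: 50 },
  { severity: 4, reportCount: 1, harmPotential: 0.9, reporterBrassScore: 10 },
];
queue.sort((a, b) => triagePriority(b) - triagePriority(a));
```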
8. Enforcement Actions
Enforcement is proportional to the severity and frequency of violations.
| Severity | Action | Examples |
|---|---|---|
| Minor / First offense | Warning + content removal | Off-topic post, minor spam, first-time low-severity violation |
| Moderate / Repeat offense | Temporary suspension (24 hours to 30 days) + brass score reduction | Repeated policy violations after warning, harassment, misleading content |
| Serious | Extended suspension + significant brass score reduction | Hate speech, doxxing, deliberate misinformation causing real-world concern |
| Severe / Zero tolerance | Permanent ban + content purge + referral to authorities where required | CSAM, credible threats of imminent violence, content facilitating immediate physical harm |
Brass score reduction serves as both a consequence and a signal: violations reduce your credibility, which in turn reduces the visibility and influence of your contributions. Enforcement decisions consider: the severity of the violation, apparent intent, the user's history of violations, real-world impact or potential for harm, and whether the user took corrective action.
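The tiers in the table above can also be read as data. The tier names and consequences come from the table; the structure and field names below are illustrative assumptions.

```typescript
// Hypothetical encoding of the enforcement tiers in the table above.
type EnforcementTier = "minor" | "moderate" | "serious" | "severe";

interface EnforcementAction {
  warning: boolean;
  contentRemoved: boolean;
  suspension?: { minDays: number; maxDays: number } | "extended" | "permanent";
  brassScoreReduction: "none" | "standard" | "significant";
  referToAuthorities: boolean; // where required by law
}

const tierActions: Record<EnforcementTier, EnforcementAction> = {
  minor: {
    warning: true, contentRemoved: true,
    brassScoreReduction: "none", referToAuthorities: false,
  },
  moderate: {
    warning: false, contentRemoved: true,
    suspension: { minDays: 1, maxDays: 30 }, // 24 hours to 30 days
    brassScoreReduction: "standard", referToAuthorities: false,
  },
  serious: {
    warning: false, contentRemoved: true,
    suspension: "extended",
    brassScoreReduction: "significant", referToAuthorities: false,
  },
  severe: {
    warning: false, contentRemoved: true, // plus full content purge
    suspension: "permanent",
    brassScoreReduction: "significant", referToAuthorities: true,
  },
};
```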
9. Repeat Offender Policy
Consequences escalate with repeated violations.
Violations are tracked across the lifetime of an account. Consequences escalate with repeated violations. Three serious violations within any 12-month period trigger a mandatory review for permanent removal.
A pattern of moderate violations, even if no single incident is severe, may result in permanent action when the cumulative behavior demonstrates disregard for platform policies.
Ban evasion (creating new accounts after a permanent ban) results in immediate removal of the new account.
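A minimal sketch of the three-strikes check described above, assuming a rolling window for the policy's "any 12-month period". The threshold and window come from this section; the data model and identifiers are illustrative.

```typescript
// Hypothetical check for the mandatory-review trigger above: three
// serious violations falling within any rolling 12-month window.
interface Violation {
  severity: "minor" | "moderate" | "serious" | "severe";
  occurredAt: Date;
}

const TWELVE_MONTHS_MS = 365 * 24 * 60 * 60 * 1000;

function requiresRemovalReview(history: Violation[]): boolean {
  const serious = history
    .filter((v) => v.severity === "serious")
    .map((v) => v.occurredAt.getTime())
    .sort((a, b) => a - b);
  // For each run of three consecutive serious violations, check whether
  // the first and third fall within 12 months of each other.
  for (let i = 0; i + 2 < serious.length; i++) {
    if (serious[i + 2] - serious[i] <= TWELVE_MONTHS_MS) return true;
  }
  return false;
}
```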
10. Appeals Process
Users may appeal any enforcement action within 30 days.
Users may appeal any enforcement action within 30 days of the action being taken. Appeals are reviewed by a human moderator who was not involved in the original decision.
During the appeal period, enforcement remains in effect: removed content stays removed, and suspended accounts remain suspended.
Response timelines:
- 14 business days for content removal appeals
- 7 business days for account suspension appeals
One appeal is permitted per enforcement action. Appeal outcomes:
- Upheld: Original action stands.
- Overturned: Content restored or suspension lifted.
- Modified: Penalty adjusted.
A clear written explanation of the appeal outcome is provided to the user.
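For illustration, the constraints in this section (a 30-day filing window, one appeal per enforcement action, review by an uninvolved moderator, a written explanation of the outcome) might be captured in a record like the following. All identifiers are hypothetical.

```typescript
// Hypothetical appeal record reflecting the constraints above.
type AppealOutcome = "upheld" | "overturned" | "modified";

interface Appeal {
  enforcementActionId: string;
  appellantId: string;
  reviewerId: string;          // must differ from the original decision-maker
  filedAt: Date;               // must be within 30 days of the action
  outcome?: AppealOutcome;
  writtenExplanation?: string; // provided to the user with the outcome
}

const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

function canFileAppeal(
  actionTakenAt: Date,
  priorAppealsForAction: number,
  now = new Date()
): boolean {
  const withinWindow =
    now.getTime() - actionTakenAt.getTime() <= THIRTY_DAYS_MS;
  return withinWindow && priorAppealsForAction === 0; // one appeal per action
}
```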
11. Real-World Safety Protocol
Location-anchored content carries unique risks that require heightened scrutiny.
Location-anchored content carries unique risks not present on non-geographic platforms. We apply heightened scrutiny to:
- Content near sensitive locations (schools, hospitals, houses of worship, government buildings)
- Content that could facilitate targeted harm at a specific address or area
- Content that could trigger area-specific panic or dangerous public response
Content flagged as an imminent safety risk may be removed preemptively while under expedited review (target: within 4 hours of flagging).
When content indicates a credible and imminent threat to physical safety, we coordinate with local law enforcement.
12. Government & Legal Requests
We comply with valid legal orders and notify users when permitted.
Pincident may remove content or disclose user information pursuant to valid legal process, including court orders, subpoenas, or equivalent legal instruments recognized under South African law.
Users are notified of government content removal requests unless we are legally prohibited from providing notification. We will challenge requests that we determine to be legally insufficient or overbroad.
Government and legal requests are reported in aggregate in our annual transparency report.
13. Automated Moderation Transparency
Our automated systems flag content for human review; they do not act alone except in specific cases.
Our automated systems detect:
- Spam patterns and coordinated inauthentic behavior
- Content matching known CSAM hash databases
- Keyword and pattern-based indicators of policy violations
- Anomalous validation or brass score activity
With the exceptions noted in Section 5 (confirmed CSAM hash matches and content posing an imminent threat to physical safety), automated systems flag content for human review rather than removing it automatically.
We commit to regular audits of our automated systems to assess accuracy, false positive rates, and potential bias. Users may request human review of any action initiated or informed by automated systems.
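A sketch of how detection signals could be routed under the rules above: the signal categories mirror this section, while the routing function and all identifiers are illustrative assumptions.

```typescript
// Hypothetical routing of automated detection signals. Only the two
// exceptions from Section 5 bypass human review; everything else is
// flagged for a person.
type DetectionSignal =
  | "csamHashMatch"
  | "imminentSafetyThreat"
  | "spamPattern"
  | "coordinatedBehavior"
  | "keywordIndicator"
  | "anomalousValidation";

type Disposition = "automaticRemoval" | "flagForHumanReview";

function route(signal: DetectionSignal): Disposition {
  switch (signal) {
    case "csamHashMatch":
      // Removed immediately and reported to authorities (Sections 3.6, 15).
      return "automaticRemoval";
    case "imminentSafetyThreat":
      // May be removed preemptively, pending expedited human review
      // targeted within 4 hours (Section 11).
      return "automaticRemoval";
    default:
      // Spam patterns, keyword indicators, anomalous validation activity,
      // and similar signals are queued for human moderators.
      return "flagForHumanReview";
  }
}
```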
14. Transparency Reporting
Pincident publishes an annual transparency report covering moderation activity.
Pincident publishes an annual transparency report that includes:
- Total content removed, broken down by prohibited content category
- Enforcement actions taken (warnings, suspensions, permanent bans)
- Appeals received and their outcomes (upheld, overturned, modified)
- Government and legal requests received and the proportion complied with
- A breakdown of automated versus human moderation actions
The transparency report is published on the Pincident website and made freely accessible.
15. Cooperation with Authorities
Pincident cooperates with law enforcement investigating illegal activity on the platform.
Pincident cooperates with law enforcement agencies investigating illegal activity on the platform.
Content involving child sexual exploitation is reported to relevant national authorities and, where applicable, organizations such as the National Center for Missing & Exploited Children (NCMEC) or equivalent bodies.
Credible threats of imminent violence are reported to local law enforcement in the relevant jurisdiction. Data preservation requests are honored in accordance with applicable law.
16. Changes to This Policy
Material changes are communicated at least 30 days in advance.
We may update this Content Moderation Policy to reflect changes in our practices, community needs, or legal requirements. Material changes will be communicated at least 30 days in advance through platform notifications.
For the general framework governing policy modifications, refer to Section 17 of our Terms of Service.