How BotIdentifier Works
Last updated: March 1, 2026
1. The Problem
Automated and coordinated inauthentic accounts—commonly called bots, sock puppets, and astroturf networks—distort online discourse at scale. Individual users and researchers lack a shared, transparent repository to document and track these accounts across platforms.
BotIdentifier provides that repository. It is open to all users, does not require specialized technical knowledge, and publishes its methodology openly so every score can be understood and challenged.
2. Submitting a Report
Any registered user may file a report about a social-media account they believe exhibits inauthentic behavior. A report includes:
- Account handle & platform — the username and platform (e.g., X/Twitter, Facebook, Reddit).
- Behavior type — one or more categories such as spam, coordinated amplification, impersonation, or misleading identity.
- Description — a factual summary of the observed behavior.
- Evidence — supporting material such as screenshots, archive.org links, or direct post URLs.
Reports are validated against minimum-length and format rules upon submission. Invalid submissions are rejected immediately with a clear explanation.
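The submission checks above can be sketched as a simple validation pass. This is an illustrative sketch only: the field names, the minimum description length, and the platform list are assumptions for the example, not the production rules.

```python
# Hypothetical submission validation, mirroring the report fields listed
# above. MIN_DESCRIPTION_LEN and VALID_PLATFORMS are assumed values.
MIN_DESCRIPTION_LEN = 50
VALID_PLATFORMS = {"x", "facebook", "reddit"}

def validate_report(report: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the report is accepted."""
    errors = []
    if not report.get("handle"):
        errors.append("Account handle is required.")
    if report.get("platform", "").lower() not in VALID_PLATFORMS:
        errors.append("Unsupported or missing platform.")
    if not report.get("behavior_types"):
        errors.append("Select at least one behavior type.")
    if len(report.get("description", "")) < MIN_DESCRIPTION_LEN:
        errors.append(f"Description must be at least {MIN_DESCRIPTION_LEN} characters.")
    if not report.get("evidence"):
        errors.append("At least one piece of evidence is required.")
    return errors
```

Returning all errors at once, rather than failing on the first, matches the "rejected immediately with a clear explanation" behavior: the submitter sees everything to fix in one round trip.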
3. Content Moderation
Every report enters a moderation queue before it can affect an account’s score. Moderators verify that the report:
- Contains sufficient, relevant evidence.
- Does not target a private individual without justification.
- Complies with the Content Moderation Policy.
High-reputation reporters may have reports auto-approved when the administrator has enabled that option. Approved reports are published; rejected reports are returned with a reason. Our target review time is 48 hours.
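The routing rule above amounts to a two-branch decision. In this sketch the reputation threshold and the flag name are assumptions; the actual cutoff is set by the administrator.

```python
# Hypothetical moderation routing. AUTO_APPROVE_THRESHOLD is an assumed
# reporter-reputation cutoff, not the real configuration value.
AUTO_APPROVE_THRESHOLD = 80

def route_report(reporter_reputation: int, auto_approve_enabled: bool) -> str:
    """Decide whether a new report skips the manual moderation queue."""
    if auto_approve_enabled and reporter_reputation >= AUTO_APPROVE_THRESHOLD:
        return "auto-approved"
    return "queued"  # held for manual review (48-hour target)
```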
4. Reputation Scoring
Once approved, reports feed into the scoring engine, which computes a composite 0–100 Reputation Score for each listed account. The score aggregates six weighted components:
- Report volume (25%) — number of independent reports.
- Reporter credibility (20%) — track record of the reporters.
- Evidence strength (20%) — quality of supporting evidence.
- Behavior consistency (15%) — agreement across reports on behavior type.
- Account age anomaly (10%) — age-vs-activity ratio signals.
- Platform confirmation (10%) — whether the native platform has taken action.
Each score is accompanied by a confidence indicator (None, Low, Medium, High) and a breakdown of the top contributing factors. For the full technical specification, see the Scoring Methodology page.
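The weighted aggregation above can be sketched as follows. The component names and weights come directly from the list; how each raw signal is normalized onto a 0–100 scale is an assumption here, and the full details live on the Scoring Methodology page.

```python
# Weights taken from the six components listed above (they sum to 1.0).
WEIGHTS = {
    "report_volume": 0.25,
    "reporter_credibility": 0.20,
    "evidence_strength": 0.20,
    "behavior_consistency": 0.15,
    "account_age_anomaly": 0.10,
    "platform_confirmation": 0.10,
}

def reputation_score(components: dict[str, float]) -> float:
    """Combine per-component signals (each assumed pre-scaled to 0-100)
    into a single 0-100 composite score. Missing components count as 0."""
    return sum(WEIGHTS[name] * components.get(name, 0.0) for name in WEIGHTS)
```

For example, an account with strong evidence but no platform-level action scores lower than the same account after the platform confirms and suspends it, because the `platform_confirmation` component moves from 0 toward 100.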
5. Score Bands
Scores are grouped into five interpretive bands:
| Range | Band | Interpretation |
|---|---|---|
| 0–19 | Insufficient Evidence | Too little data to characterize; treat as unscored. |
| 20–39 | Low Suspicion | Some reports exist but evidence is limited or inconsistent. |
| 40–59 | Moderate Suspicion | Multiple reports with corroborating evidence. |
| 60–79 | High Suspicion | Substantial evidence from credible reporters. |
| 80–100 | Confirmed Bad Actor | Overwhelming evidence and/or platform-level action taken. |
A high score is a statistical indicator, not a determination of fact. See the Scoring Disclaimer.
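The band boundaries in the table translate directly into a lookup; the cutoffs below are taken from the table, and only the function shape is illustrative.

```python
# Band boundaries from the table above: each entry is (exclusive upper
# bound, band name). Scores are assumed to lie in 0-100.
BANDS = [
    (20, "Insufficient Evidence"),
    (40, "Low Suspicion"),
    (60, "Moderate Suspicion"),
    (80, "High Suspicion"),
    (101, "Confirmed Bad Actor"),
]

def score_band(score: float) -> str:
    """Map a 0-100 reputation score to its interpretive band."""
    for upper, band in BANDS:
        if score < upper:
            return band
    raise ValueError("score must be in the range 0-100")
```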
6. Campaigns
When multiple accounts appear to operate in coordination, moderators may group them into a campaign—a named cluster that links related accounts and provides a network-level view. Campaigns include a description, a network graph visualization, and the aggregate statistics of all linked accounts.
7. Reporter Reputation
Reporters accumulate their own reputation score based on the quality of their submissions:
- Approved, high-evidence reports increase reporter reputation.
- Rejected or flagged reports decrease it.
- Reporter reputation influences the weight their future reports carry in the scoring algorithm (the “Reporter credibility” component).
This creates a self-correcting incentive: contributors who consistently submit accurate, well-evidenced reports have greater influence over time.
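The incentive loop above can be sketched as a clamped delta update. The specific delta values and the 0–100 clamping range are assumptions for illustration; the real weighting feeds the "Reporter credibility" component of the scoring engine.

```python
# Hypothetical per-outcome reputation deltas (assumed values).
DELTAS = {
    "approved": +3,
    "approved_high_evidence": +5,
    "rejected": -4,
    "flagged": -8,
}

def update_reporter_reputation(current: int, outcome: str) -> int:
    """Apply the outcome's delta, clamped to an assumed 0-100 range."""
    return max(0, min(100, current + DELTAS[outcome]))
```

Asymmetric penalties (a flagged report costs more than an approved one earns) are one common way to make spamming low-quality reports a losing strategy.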
8. Watchlists & Alerts
Registered users can add accounts to a personal watchlist. When a watched account’s score changes or new reports are filed, the user receives a notification. Watchlists are private and visible only to the user who created them.
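The alert triggers described above reduce to comparing a watched account's stored state against its current state. This is a minimal sketch; the field names and notification text are assumptions.

```python
def watchlist_alerts(watched: dict, current: dict) -> list[str]:
    """Compare previously seen state against current state for each
    watched account and return notification messages for any changes."""
    alerts = []
    for handle, prev in watched.items():
        now = current.get(handle)
        if now is None:
            continue  # account no longer listed; no alert in this sketch
        if now["score"] != prev["score"]:
            alerts.append(f"{handle}: score changed {prev['score']} -> {now['score']}")
        if now["report_count"] > prev["report_count"]:
            new = now["report_count"] - prev["report_count"]
            alerts.append(f"{handle}: {new} new report(s) filed")
    return alerts
```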
9. Disputing a Score
If you believe a score is inaccurate or based on false reports, you have the right to challenge it. The Dispute & Data Removal page explains the process, including how to request a full listing removal, partial correction, or manual score review.
10. What BotIdentifier Does NOT Do
- It does not access private or platform-internal data. All inputs are user-contributed.
- It does not make legal findings or accusations against identifiable individuals.
- It does not ban, suspend, or restrict accounts on any platform.
- It does not encourage or endorse harassment, doxing, or retaliation against any person.
11. Contact
Questions about how BotIdentifier works: admin@botidentifier.com.