Modcord

Context-aware AI moderation: Detects toxicity, raids & spam. LLM-powered, fully customizable, and open source.


Modcord – AI-Powered Discord Moderation

Stop moderating like it's 2015. Modcord understands context.

What is Modcord?

Modcord is an AI-driven moderation bot that protects your Discord server using large language models to detect toxicity, spam, raids, and harmful behavior—not just keyword matching.

Unlike traditional bots that ban memes for containing "bad words," Modcord reads the conversation. It understands sarcasm, context, and intent. It knows the difference between a real threat and banter between friends.


Core Features

Context-Aware AI Detection

  • Analyzes complete conversations, not isolated messages
  • Detects hate speech, harassment, and toxic behavior with near-human accuracy
  • Works with any LLM-compatible API (OpenAI, Anthropic, local models, etc.)
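Since Modcord can talk to any LLM-compatible endpoint, the request it sends is just an OpenAI-style chat payload. The sketch below shows one way such a payload could be assembled from batched messages; the class name, system prompt, and hand-rolled JSON are illustrative only (the real bot's request code lives in the repo).

```java
import java.util.List;

// Hypothetical sketch: building an OpenAI-compatible chat payload from a
// batch of Discord messages. Names and prompt text are illustrative.
public class LlmRequestBuilder {

    /** Serialize a moderation request for any OpenAI-style /chat/completions endpoint. */
    public static String buildPayload(String model, List<String> messages) {
        StringBuilder sb = new StringBuilder();
        sb.append("{\"model\":\"").append(model).append("\",\"messages\":[");
        sb.append("{\"role\":\"system\",\"content\":\"You are a moderation assistant.\"}");
        for (String m : messages) {
            // Escape embedded quotes so the payload stays valid JSON.
            sb.append(",{\"role\":\"user\",\"content\":\"")
              .append(m.replace("\"", "\\\"")).append("\"}");
        }
        sb.append("]}");
        return sb.toString();
    }
}
```

In practice a JSON library would replace the manual string building; the point is only that the wire format is the standard chat-completions shape, so OpenAI, Anthropic-compatible proxies, and local servers are interchangeable.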

Behavioral Analysis

  • Tracks member activity patterns to flag coordinated raids in real-time
  • Identifies self-bots and malicious actors before they cause damage
  • Learns from your moderation decisions to improve over time
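Raid detection of the kind described above usually reduces to rate-limiting joins over a sliding window. This is a minimal standalone sketch of that idea, not Modcord's actual heuristic; the class name and thresholds are made up for illustration.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: flag a raid when too many members join within a short
// time window. Thresholds and names are illustrative, not Modcord's real code.
public class JoinRateMonitor {
    private final Deque<Long> joins = new ArrayDeque<>();
    private final long windowMillis;
    private final int threshold;

    public JoinRateMonitor(long windowMillis, int threshold) {
        this.windowMillis = windowMillis;
        this.threshold = threshold;
    }

    /** Record a join; returns true if the join rate now looks like a raid. */
    public boolean recordJoin(long timestampMillis) {
        joins.addLast(timestampMillis);
        // Drop joins that have fallen out of the sliding window.
        while (!joins.isEmpty() && timestampMillis - joins.peekFirst() > windowMillis) {
            joins.removeFirst();
        }
        return joins.size() >= threshold;
    }
}
```

For example, a monitor configured with a 10-second window and a threshold of 3 stays quiet on normal traffic but trips as soon as three joins land inside the window.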

Fully Customizable

  • Define server-specific rules and let the AI learn them
  • Set channel-specific guidelines (off-topic content in #general vs #random)
  • Exempt users and roles from automated moderation
  • Control which actions auto-trigger (warn, delete, timeout, kick, ban)
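One plausible shape for the per-server configuration implied by the list above is a simple record: free-text rules for the LLM prompt, per-channel guidance, exemptions, and an allow-list of actions. Every field name here is hypothetical; the real config format is documented in the repo.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical shape of a per-server moderation config; field names are
// illustrative and do not reflect Modcord's actual schema.
public record GuildConfig(
        List<String> serverRules,              // free-text rules the LLM is prompted with
        Map<String, String> channelGuidelines, // channel name -> extra guidance
        Set<Long> exemptUserIds,               // users never auto-moderated
        Set<String> enabledActions             // e.g. "warn", "delete", "timeout"
) {
    /** True if this user is exempt from automated moderation. */
    public boolean isExempt(long userId) {
        return exemptUserIds.contains(userId);
    }

    /** True if the bot is allowed to auto-trigger this action. */
    public boolean actionAllowed(String action) {
        return enabledActions.contains(action);
    }
}
```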

Transparent Moderation

  • Every action comes with a reason your mods can read
  • See exactly why the bot made a decision
  • Appeal system-friendly (no black-box bans)
  • Full audit logging for compliance
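The "every action comes with a reason" guarantee boils down to an audit entry that pairs each action with human-readable text. Modcord persists this to PostgreSQL; the in-memory sketch below (all names hypothetical) just shows the shape of such an entry.

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

// Sketch of an audit log where every entry carries a readable reason.
// In Modcord this is PostgreSQL-backed; this in-memory version shows the shape.
public class AuditLog {
    public record Entry(Instant at, long userId, String action, String reason) {}

    private final List<Entry> entries = new ArrayList<>();

    /** Record an action together with the explanation shown to mods and users. */
    public void record(long userId, String action, String reason) {
        entries.add(new Entry(Instant.now(), userId, action, reason));
    }

    /** All actions taken against one user, e.g. for handling an appeal. */
    public List<Entry> forUser(long userId) {
        return entries.stream().filter(e -> e.userId() == userId).toList();
    }
}
```

Keeping the reason alongside the action is what makes appeals tractable: a mod can pull every entry for a user and see exactly what the bot decided and why.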

Built for Developers

  • 100% open source – Audit the code yourself
  • Self-hosted – Your data never leaves your server
  • Java/Gradle – Production-grade, battle-tested stack
  • Database-backed – PostgreSQL for persistence and scale

How It Works

  1. Messages arrive → Bot batches them with recent history
  2. Context built → Pulls older messages for conversation context
  3. AI analyzes → Sends structured data to your LLM endpoint
  4. Decision made → Gets back moderation action (warn, delete, timeout, etc.)
  5. Action applied → Deletes messages, logs to audit channel, DMs user

All configurable. All transparent.
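The five steps above can be sketched as a small pipeline: batch the new message with history, hand the context to an LLM, and map its verdict to an action. The LLM call is stubbed behind an interface, and every name here is hypothetical rather than Modcord's real classes.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the five-step flow; the LLM call is stubbed out.
public class ModerationPipeline {
    public enum Action { ALLOW, WARN, DELETE, TIMEOUT }

    /** Steps 3-4: classify a conversation and return a verdict string. */
    public interface LlmClient {
        String classify(List<String> conversation); // e.g. "warn" or "allow"
    }

    private final LlmClient llm;

    public ModerationPipeline(LlmClient llm) { this.llm = llm; }

    /** Steps 1-2: batch the new message with recent history, then classify. */
    public Action moderate(List<String> recentHistory, String newMessage) {
        List<String> context = new ArrayList<>(recentHistory);
        context.add(newMessage);
        // Step 4: map the model's verdict onto a concrete moderation action.
        return switch (llm.classify(context)) {
            case "warn" -> Action.WARN;
            case "delete" -> Action.DELETE;
            case "timeout" -> Action.TIMEOUT;
            default -> Action.ALLOW;
        };
    }
}
```

Because the classifier hides behind an interface, the same pipeline works against any LLM endpoint, and step 5 (deleting, logging, DMing) can act on the returned `Action` however the server is configured.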


Perfect For:

Gaming communities – Catch toxicity before it escalates
Study/work servers – Keep discussions professional
Creator communities – Protect members from harassment
Large servers – Handle scale without manual review
Privacy-conscious admins – Self-host, stay in control


Getting Started

Quick Invite

Add Modcord to your server

Commands

  • /status health – Check bot health
  • /exclude add @user – Exempt user from moderation
  • /mod warn @user [reason] – Manual warning
  • /debug show-rules – View current server rules
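Under the hood, slash commands like these route to handlers by name. Real Discord bots register them through the Discord API (e.g. via JDA); this toy dispatcher, with made-up handler text, only illustrates the routing idea.

```java
import java.util.Map;
import java.util.function.Function;

// Toy dispatcher showing how slash commands could route to handlers.
// Handler names and reply strings are illustrative, not Modcord's real output.
public class CommandRouter {
    private final Map<String, Function<String, String>> handlers = Map.of(
            "/status health", args -> "Bot is healthy",
            "/exclude add", args -> "Excluded " + args + " from moderation",
            "/debug show-rules", args -> "Current rules: (server-defined)"
    );

    /** Look up the handler for a command and apply it to the arguments. */
    public String dispatch(String command, String args) {
        Function<String, String> h = handlers.get(command);
        return h == null ? "Unknown command" : h.apply(args);
    }
}
```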

Documentation

Full setup guide and architecture docs available on GitHub


Privacy & Security

  • ✅ Open source – Review every line of code
  • ✅ Self-hosted option – Run on your own infrastructure
  • ✅ No third-party tracking – Only connects to your LLM provider
  • ✅ Database-backed audit log – Full compliance trail
  • ✅ GDPR-friendly – Built to respect user data

Support


🎯 Why Modcord?

Feature                      Modcord      Keyword Filters   Manual Review
Understands context          ✅           ❌                ✅
Scales to 1000+ members      ✅           ✅                ❌
24/7 automatic               ✅           ✅                ❌
Explainable decisions        ✅           ❌                ✅
Open source                  ✅           Varies            N/A
Customizable to your rules   ✅           ⚠️ Limited        ✅

Status

Active Development – Latest updates at github.com/HoneyBerries/Modcord


Modcord: Moderation that thinks.

Ratings & Reviews


4.25 average · 4 reviews


5★: 3
4★: 0
3★: 0
2★: 1
1★: 0



abjcbf · 5 months ago

I want to see more improvements in the bot's ability to handle things like ticketing and appeals, but overall good job.


pepmon270 · 5 months ago

I also think it is good. Love that it is open source so I can use the program too!


straw_bunni_ · 5 months ago

The code is awesome; I did check out the repo. I think some improvements can be made by fine-tuning the LLM.
