Spam and abusive content are the scourge of the Internet. Communities, which intentionally provide low barriers to content contribution, can be particularly vulnerable. But Pluck has been in the abuse-fighting game alongside our brand and publisher customers for 7 years, and we’ve built an unparalleled array of Controls to keep your community free of content that lowers its value.

Spam Detection

All content sent to the Pluck platform can be passed through the TypePad or Akismet spam-detection services. These services are leaders at detecting spam, and they learn: every time a moderator approves content that was flagged as spam, the third-party service is notified, and its machine learning kicks in to cut down on future false positives.
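As a rough illustration of how this kind of third-party check typically works, the sketch below posts a submission to Akismet's public comment-check endpoint and reports a false positive back when a moderator approves flagged content. The API key, site URL, and helper names are placeholders, and this is only the standard Akismet REST convention, not a description of Pluck's internal integration.

```python
import requests

AKISMET_KEY = "your-akismet-api-key"        # placeholder
SITE_URL = "https://community.example.com"  # placeholder community URL

def looks_like_spam(user_ip, user_agent, author, content):
    """Ask Akismet whether a submission looks like spam (response body is "true"/"false")."""
    resp = requests.post(
        f"https://{AKISMET_KEY}.rest.akismet.com/1.1/comment-check",
        data={
            "blog": SITE_URL,
            "user_ip": user_ip,
            "user_agent": user_agent,
            "comment_type": "comment",
            "comment_author": author,
            "comment_content": content,
        },
    )
    return resp.text == "true"

def report_false_positive(user_ip, user_agent, author, content):
    """Tell Akismet that a flagged submission was actually legitimate (submit-ham)."""
    requests.post(
        f"https://{AKISMET_KEY}.rest.akismet.com/1.1/submit-ham",
        data={
            "blog": SITE_URL,
            "user_ip": user_ip,
            "user_agent": user_agent,
            "comment_author": author,
            "comment_content": content,
        },
    )
```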

IP and Metadata Blocking

Much spam originates from a limited range of IP addresses. The Pluck Moderation Workbench makes it easy to implement IP blocking. If there is identifying metadata in the request, such as a string that identifies the bot, it can be blocked as well.
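To make the idea concrete, here is a minimal sketch of the kind of check a moderation layer might run before accepting a submission: reject requests whose source IP falls inside a blocked range or whose user-agent string matches a known bot signature. The block lists and function name are illustrative, not part of the Pluck Moderation Workbench itself.

```python
import ipaddress

# Illustrative block lists; in practice these would be managed from the
# moderation tooling rather than hard-coded.
BLOCKED_RANGES = [ipaddress.ip_network("203.0.113.0/24")]
BLOCKED_AGENT_SUBSTRINGS = ["SpamBot", "MassPoster"]

def is_blocked(source_ip: str, user_agent: str) -> bool:
    """Return True if the request should be rejected outright."""
    addr = ipaddress.ip_address(source_ip)
    if any(addr in network for network in BLOCKED_RANGES):
        return True
    return any(sig in user_agent for sig in BLOCKED_AGENT_SUBSTRINGS)
```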

Flood Control and Word Filters

A flood of abusive content from both bots and humans can be turned into a trickle simply by using Pluck Flood Control, which stops rapid submissions from the same user or anonymous session. The interval between allowed submissions is configurable to fit your community’s needs.
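Conceptually, flood control is a rate limit keyed on the user or anonymous session. The sketch below enforces a configurable minimum interval between submissions; the default interval and the data structure are illustrative assumptions, not Pluck's implementation.

```python
import time

class FloodControl:
    """Reject submissions that arrive too soon after the previous one
    from the same user or anonymous session (illustrative sketch)."""

    def __init__(self, min_interval_seconds: float = 30.0):
        self.min_interval = min_interval_seconds
        self._last_seen: dict[str, float] = {}

    def allow(self, session_key: str) -> bool:
        now = time.monotonic()
        last = self._last_seen.get(session_key)
        if last is not None and now - last < self.min_interval:
            return False  # too soon: treat as a flood attempt
        self._last_seen[session_key] = now
        return True
```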

Dirty Word filters can also prevent offensive language from reaching your community. You control the list of words deemed unacceptable, and wildcards are supported.
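A word filter with wildcard support can be approximated by translating each wildcard pattern into a regular expression, as in the sketch below. The example patterns are obviously placeholders; you control the real list.

```python
import fnmatch
import re

# Illustrative patterns; "*" matches any run of characters.
BLOCKED_PATTERNS = ["badword*", "*slur*"]

_COMPILED = [re.compile(fnmatch.translate(p), re.IGNORECASE) for p in BLOCKED_PATTERNS]

def contains_blocked_word(text: str) -> bool:
    """Return True if any word in the submission matches a blocked pattern."""
    return any(
        pattern.match(word)
        for word in re.findall(r"\w+", text)
        for pattern in _COMPILED
    )
```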

“Pot Spoiling” Prevention

There is often a fine line between a highly engaged user and one who excessively dominates conversations. Pluck Quotas can be fine-tuned to prevent more than x contributions within n minutes. For instance, you can set reasonable limits on the number of submissions per day.
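In effect, a quota is a sliding-window counter: no more than x contributions within n minutes. A minimal sketch, with purely illustrative limits:

```python
import time
from collections import defaultdict, deque

class ContributionQuota:
    """Allow at most `max_posts` contributions per user within `window_minutes`
    (illustrative sliding-window sketch, not Pluck's implementation)."""

    def __init__(self, max_posts: int = 20, window_minutes: int = 60):
        self.max_posts = max_posts
        self.window = window_minutes * 60
        self._history: dict[str, deque] = defaultdict(deque)

    def allow(self, user_id: str) -> bool:
        now = time.monotonic()
        timestamps = self._history[user_id]
        # Drop contributions that have fallen out of the window.
        while timestamps and now - timestamps[0] > self.window:
            timestamps.popleft()
        if len(timestamps) >= self.max_posts:
            return False
        timestamps.append(now)
        return True
```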

Moreover, the Pluck Moderation Manager includes Listeners that detect patterns in text that may not be abusive but warrant additional review. For instance, reviews that mention competitors may be positive for your brand, but by capturing Listener hits and routing them to the appropriate team, you can be sure that your community doesn't become an advertising platform for competitors.
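One way to picture a Listener: a pattern that does not block content but tags it for routing to a specific review queue. The competitor names, queue labels, and function below are made-up examples of that idea, not Pluck's Listener configuration.

```python
import re

# Illustrative listener definitions: pattern -> review queue.
LISTENERS = {
    re.compile(r"\b(competitorbrand|rivalco)\b", re.IGNORECASE): "brand-team-review",
    re.compile(r"\b(refund|lawsuit)\b", re.IGNORECASE): "customer-care-review",
}

def route_for_review(text: str) -> list[str]:
    """Return the review queues whose listener patterns match this submission.
    Listeners only add a routing signal; normal moderation rules still apply."""
    return [queue for pattern, queue in LISTENERS.items() if pattern.search(text)]
```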

False Abuse Report Prevention

Users sometimes report abuse against content that they simply disagree with, potentially overloading your moderation team. When we introduced Scoring—specifically, the “thumbs down” button—we found that false abuse reports dropped dramatically.

You can also turn off anonymous reporting to allow only authenticated users to report abuse.

There’s more on this subject in our discussion of Pluck’s Moderation Workbench and Moderation Engine.

  
