The Pluck Moderation Engine powers the management and oversight of community contributions and users through automated processing, analysis and configurable rules applied to user content.

What it does for you

  • Protects brand image and voice
  • Enhances member and visitor community experiences
  • Promotes community health and interaction
  • Improves contribution quality

What’s in it

  • Content Policies (by Type, Section, Category and Tier)
  • Post-moderation
  • Listeners
  • User Tiers
  • Flood control
  • User management
  • Abuse immunity
  • Pre-moderation
  • Abuse reporting and management
  • Keyword, IP and Metadata Filters
  • Spam Service Integration
  • Automated content removal
  • Personalized ignore list

Details

User contributions are the foundation of any successful online community, but when left unmanaged, the quality of those contributions can drop and quickly erode a vibrant, thriving online experience. Whether it’s due to spam, trolls, misinformation or just a single flame war, the negativity can spread quickly and damage the community.

The Pluck Moderation Engine is a multi-layered set of tools, processes and features that help brands efficiently manage community contributions at scale. In addition to keeping content and users in check, it can be a source of insight into products, customers and markets.

At its most basic layer, the Engine offers keyword, IP and metadata filters that can be configured to stop content matching certain criteria from ever being submitted.
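
As a rough illustration only, a pre-submission filter of this kind might be sketched as follows. The class and field names are hypothetical and not part of any published Pluck API; they simply show a submission being rejected before it is stored when it matches a blocked keyword, a blocked IP range or a metadata rule.

    # Hypothetical sketch of a pre-submission filter layer; names are
    # illustrative, not part of any published Pluck API.
    from dataclasses import dataclass, field

    @dataclass
    class Submission:
        author_ip: str
        body: str
        metadata: dict = field(default_factory=dict)

    @dataclass
    class FilterConfig:
        blocked_keywords: list       # e.g. ["free money", "click here"]
        blocked_ip_prefixes: list    # e.g. ["203.0.113."]
        required_metadata: list     # e.g. ["client_id"]

    def passes_filters(sub: Submission, cfg: FilterConfig) -> bool:
        """Reject content matching any block criterion before it is stored."""
        lowered = sub.body.lower()
        if any(kw in lowered for kw in cfg.blocked_keywords):
            return False
        if any(sub.author_ip.startswith(p) for p in cfg.blocked_ip_prefixes):
            return False
        if any(key not in sub.metadata for key in cfg.required_metadata):
            return False
        return True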

For content that passes these tests, the Engine features User Tiers and Content Policies that let moderators impose different levels of moderation on different types of content, based on the trust level a community member or visitor has earned or been assigned. Those who are established or trusted members of the community may be permitted to contribute freely or even moderate on your behalf. Visitors or newer members can be shepherded through pre-moderation or proactive review. The Pluck Moderation Engine is highly flexible, supporting different levels of access across user tiers, content types and content sections.
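
A minimal sketch of how such a policy lookup could work, assuming a most-specific-match-wins table keyed by tier, content type and section (all names and values are hypothetical, not Pluck’s actual configuration model):

    # Hypothetical policy lookup: maps a member's tier and the content's
    # type and section to a moderation mode. Illustrative names only.
    from enum import Enum

    class Mode(Enum):
        TRUSTED = "publish immediately"
        POST_MODERATE = "publish, then review"
        PRE_MODERATE = "hold for review before publishing"

    # Most-specific match wins: (tier, content_type, section) -> Mode.
    POLICIES = {
        ("trusted", "*", "*"): Mode.TRUSTED,
        ("member", "comment", "*"): Mode.POST_MODERATE,
        ("member", "review", "products"): Mode.PRE_MODERATE,
        ("visitor", "*", "*"): Mode.PRE_MODERATE,
    }

    def moderation_mode(tier: str, content_type: str, section: str) -> Mode:
        for key in ((tier, content_type, section),
                    (tier, content_type, "*"),
                    (tier, "*", "*")):
            if key in POLICIES:
                return POLICIES[key]
        return Mode.PRE_MODERATE  # safe default for unknown combinations

With this table, moderation_mode("visitor", "comment", "news") falls through to the visitor wildcard and returns Mode.PRE_MODERATE.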

The Engine’s Spam Service integration provides another layer of analysis. The service can pass all content through TypePad, Akismet or similar systems and take automated action to flag, withhold or delete content those systems identify as spam.
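
By way of illustration, the sketch below routes content through Akismet’s public comment-check REST endpoint and maps a spam verdict to a configurable action. The endpoint and parameters follow Akismet’s documented API; the surrounding flow is an assumption for illustration, not Pluck’s actual integration code.

    # Sketch: route content through an external spam classifier and act on
    # the verdict. The endpoint and parameters follow Akismet's documented
    # REST API; the surrounding flow is illustrative, not Pluck's code.
    import requests  # third-party: pip install requests

    def is_spam(api_key: str, site: str, user_ip: str, text: str) -> bool:
        """Return True when the service classifies the content as spam."""
        resp = requests.post(
            f"https://{api_key}.rest.akismet.com/1.1/comment-check",
            data={"blog": site, "user_ip": user_ip, "comment_content": text},
        )
        return resp.text.strip() == "true"

    def handle_submission(api_key, site, user_ip, text, action="withhold"):
        """Flag, withhold or delete spam per configuration; else publish."""
        return action if is_spam(api_key, site, user_ip, text) else "publish"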

The Engine’s Listeners can also analyze content for words and phrases that warrant review. Review criteria can be moderation-focused, reflecting the likelihood that the content is problematic, but they can also be driven by analytic considerations: Listeners can be configured to detect brand, product or customer content. Content surfaced by Listeners can be queued for review and, depending on the Listener type, even routed beyond community management and moderation teams to, say, product management or brand marketing groups.
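
A simple way to picture a Listener is as a named phrase set tied to a destination queue, as in this hypothetical sketch (the Listener names, phrases and queue names are invented for illustration):

    # Illustrative Listener sketch: each Listener scans content for its
    # phrases and routes matches to a named review queue.
    from dataclasses import dataclass

    @dataclass
    class Listener:
        name: str
        phrases: tuple
        route_to: str  # destination review queue

    LISTENERS = [
        Listener("abuse-watch", ("flame", "troll"), "moderation-queue"),
        Listener("product-feedback", ("battery life", "stopped working"),
                 "product-mgmt-queue"),
        Listener("brand-mentions", ("acme pro", "acme classic"),
                 "brand-marketing-queue"),
    ]

    def route_content(text: str) -> list:
        """Return every queue this content should be surfaced to."""
        lowered = text.lower()
        return [listener.route_to for listener in LISTENERS
                if any(phrase in lowered for phrase in listener.phrases)]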

The Moderation Engine processes user-reported abuse and allows content that has received a designated number of complaints to be automatically removed until it has been reviewed and acted upon. Community participants can also choose to ignore other community members with whom they take personal issue but who may not be in violation of community standards.
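
Threshold-based withholding of reported content might look like the following sketch; the threshold value and all names are assumptions for illustration, not Pluck’s implementation.

    # Sketch of threshold-based abuse handling: after N distinct reports,
    # content is withheld until a moderator reviews it.
    from collections import defaultdict

    ABUSE_THRESHOLD = 3
    reports = defaultdict(set)  # content_id -> set of distinct reporter ids
    withheld = set()            # content hidden pending moderator review

    def report_abuse(content_id: str, reporter_id: str) -> bool:
        """Record a report; withhold the content once the threshold is met."""
        reports[content_id].add(reporter_id)
        if len(reports[content_id]) >= ABUSE_THRESHOLD:
            withheld.add(content_id)  # hidden until reviewed and acted upon
        return content_id in withheld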

While moderation is primarily centered on content, it also involves managing users and taking action against those who become problematic. Robust user management allows for the temporary or permanent removal or downgrading of user accounts and their associated content. All automated and manual moderation actions are recorded in the Pluck Activity Log for audit and reporting purposes.
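
For illustration, an audit entry for a moderation action could be recorded as in the sketch below; the entry layout is assumed, not the actual Pluck Activity Log format.

    # Sketch of writing moderation actions to an append-only audit log;
    # the entry layout is assumed, not the actual Pluck Activity Log format.
    import json
    import time

    def log_action(actor, action, target, automated=False, path="activity.log"):
        """Append one moderation action as a JSON line for audit/reporting."""
        entry = {
            "ts": time.time(),
            "actor": actor,        # moderator id, or "system" when automated
            "action": action,      # e.g. "suspend_user", "remove_content"
            "target": target,      # user or content identifier
            "automated": automated,
        }
        with open(path, "a") as fh:
            fh.write(json.dumps(entry) + "\n")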
