Geek Feminism Wiki



Moderation is the control of discussion forums. It's commonly used on the Internet by all kinds of groups, either to keep discussion focused on a certain topic or, even in free-ranging discussions, to remove undesirable behaviour (trolling, flaming, etc.). Many feminist and feminist-friendly geek spaces use it to remove evidence of the avalanche of online harassment directed towards them and their members. But moderation is certainly not restricted to such communities.

Techniques for moderation

  • Removing undesirable content. This is not possible on some older style online fora (email lists, Usenet) or real-time fora (chats), but is often used on blogs and web forums. Replies might also be removed.
    • Outright deletion. The content is removed entirely, and usually cannot be retrieved. 
    • Removing from public view. The content is no longer viewable by the public, but moderators can still view it and may be able to make it visible again. 
  • Making undesirable content hard to read. A common technique is Teresa Nielsen Hayden's "disemvowelling", in which all the vowels are removed from a blog comment (e.g. "all the vowels are removed" becomes "ll th vwls r rmvd").
  • Voting down undesirable content. 
  • Freezing an undesirable discussion. The discussion remains viewable, but no new replies are accepted. 
  • Pre-moderating for undesirable content. 
    • Global. All content requires the approval of a moderator before it is visible. This can be time-consuming. 
    • By poster. New contributors' content requires the approval of a moderator. Individuals can be cleared to post without approval, but this status can be revoked at any time. 
    • Banning. The banned user will not be able to submit comments. This may be temporary or permanent (see Removing disruptive posters). Some social media platforms consider "ban evasion" (making a second account for the purpose of evading a ban on the first account) a significant violation of platform rules, and will disable the accounts of offenders. 
    • Hellbanning. Moderating someone's comments while making it appear to them that they are unmoderated; that is, it will appear to them as though other participants can see but are ignoring their posts, whereas in fact other participants are not seeing them.
    • Content Blocklists. Content meeting certain pre-defined criteria is not allowed through (blocked outright or held for approval) but other items might be allowed through. Sometimes used against spammers with known bad links, or with words or phrases which don't belong in that forum. 
    • Source Blocklists. Contributors meeting certain pre-defined criteria (IP ranges, lists of usernames) are not allowed through (blocked outright or held for approval). 
  • Removing disruptive posters. A particular user is identified as behaving disruptively, and is removed.
    • Time-Out. The user behaving disruptively is temporarily prevented from posting, but after a relatively short cooling-off period, will be allowed to post again. 
    • Escalating time-outs. For each successive instance of disruption, the temporary ban grows longer and longer. 
    • Three Strikes. A user who consistently or repeatedly behaves disruptively will be permanently banned after they have exhausted the short and finite good graces of the moderator. Note that a "strikes" type system is inherently subject to inconsistent enforcement (however, just as a garden party is not major league baseball, so a personal blog perhaps does not need the sort of consistent rules that a corporate message forum might).
    • Venue-based ban. When a user is disruptive in one part of a large group of online spaces, but has not disrupted another part, some spaces have the tools to ban this user from one venue but not another. The same person who can talk respectfully about sensitive religious issues may not be able to discuss sports civilly, or vice-versa. 
    • Permanent ban. This particular disruptive user cannot return under the same internet-identifiable identity. (Generally this is by user account, by email address, by IP address, or similar.) Sometimes permanently banned users are allowed to create new accounts or return via another means, as long as they do not cause a new disruption. Sometimes they are not. 
    • Global permanent ban. If a user banned under one identity returns and can be identified as the same entity, this removal choice empowers the moderators to ban the new account even if the disruptive user has not behaved disruptively under the new account. 
  • Topicality. Some venues have topics which should be adhered to. Some multi-topic venues require labeling.
    • Redirection. Redirect an off-topic posting to a venue where it is appropriate.  
    • Labeling. Labeling items with their contents will help various users make their own decisions about whether they want to read those items.
      • General content labeling.  
      • Sensitive topic labeling. If a post contains topics which are known as things the venue is sensitive to, the moderator can apply labels to help users who are sensitive to the topic avoid it or brace for it.  
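Disemvowelling, as described above, is simple enough to implement as a one-line text filter. A minimal sketch in Python (the function name is our own):

```python
import re

def disemvowel(text: str) -> str:
    """Strip all vowels from a comment, leaving it readable
    with effort but robbed of its rhetorical force."""
    return re.sub(r"[aeiouAEIOU]", "", text)

# Example, matching the article's own illustration:
disemvowel("all the vowels are removed")  # → "ll th vwls r rmvd"
```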
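The escalating time-outs technique amounts to a geometric progression of ban lengths. A sketch in Python; the base length and growth factor here are arbitrary illustrative choices, not values any particular forum uses:

```python
def timeout_duration(strike: int, base_hours: int = 1, factor: int = 2) -> int:
    """Return the length in hours of the temporary ban for the
    nth instance of disruption (strike is 1-indexed): each
    successive strike doubles the previous ban length."""
    if strike < 1:
        raise ValueError("strike must be at least 1")
    return base_hours * factor ** (strike - 1)

# Strikes 1..4 yield bans of 1, 2, 4, and 8 hours respectively.
```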

Tips for moderation

"1. There can be no ongoing discourse without some degree of moderation, if only to kill off the hardcore trolls. It takes rather more moderation than that to create a complex, nuanced, civil discourse. If you want that to happen, you have to give of yourself. Providing the space but not tending the conversation is like expecting that your front yard will automatically turn itself into a garden...
"9. If you judge that a post is offensive, upsetting, or just plain unpleasant, it’s important to get rid of it, or at least make it hard to read. Do it as quickly as possible. There’s no more useless advice than to tell people to just ignore such things. We can’t. We automatically read what falls under our eyes.
"10. Another important rule: You can let one jeering, unpleasant jerk hang around for a while, but the minute you get two or more of them egging each other on, they both have to go, and all their recent messages with them. There are others like them prowling the net, looking for just that kind of situation. More of them will turn up, and they’ll encourage each other to behave more and more outrageously. Kill them quickly and have no regrets."
"What the blog world needs is not a universal 'Code of Conduct'; what it needs is for people to remind themselves that deleting comments from obnoxious dickheads is a good thing. It's simple: if someone's an obnoxious dickhead, then pop! goes their comment. You don't even have to explain why, although it is always fun to do so. The commenter will either learn to abide by your rules, or they will go away. Either way, your problem is solved. You don't need community policing or a code of conduct to make it happen. You just do it."

Tools for moderation by platform

IRC
Any IRC network can have any number of channels. Each channel can be controlled by Operators, or Ops. (Some ad-hoc channels have no ops.) In addition to channel operators, there are network operators, who are responsible for network-wide service and moderation.  

  • Keying. Controlling access to a channel based on a password. 
  • Kick. Remove a user from a channel. 
  • Ban. Prevent a user from joining a channel, based on any combination of their nick, hostname, and IP address. If the user is in the channel when banned, they will not be able to speak or change nick; if they leave, they will not be able to return unless the ban is lifted.
  • Mute. Restrict speech in a channel to those who have Voice or above. 
  • Quiet. Prevent a particular user from speaking in that channel (or nickchanging while in that channel).  
  • Ignore. Any given user can choose to not see another user's speech and actions (depending on the client, join/part and nick change may still be visible). This is across all channels on that network.
  • Caller ID. Some IRC networks (including freenode) allow users to pre-screen private messages (this doesn't apply to channel messages); see freenode's documentation, under "User and channel modes".
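As a rough illustration, the channel-level tools above map onto IRC commands along these lines. Exact syntax and mode letters vary between networks and clients (the `+q` quiet mode, for instance, is an extension found on freenode and similar networks), so check your network's documentation:

```
/mode #channel +k sekrit          keying: require the password "sekrit" to join
/kick #channel baduser reason     kick: remove baduser from the channel
/mode #channel +b *!*@bad.host    ban: block anyone matching the nick!user@host mask
/mode #channel +m                 mute: only voiced (+v) or op users may speak
/mode #channel +q *!*@bad.host    quiet: silence one mask without a full ban
/ignore baduser                   ignore: client-side, syntax varies by client
```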

Some IRC networks have policies related to ban or kline (global ban) evasion. Talk to your local network's ops to see how they can help you.

Twitter
Twitter is a mega-forum, with user-to-user interaction. Visibility is non-granular: an account may be public or private, but individual tweets cannot be hidden, only deleted (with the usual caveats: archive tools, search engines, and individuals may have kept copies). 

Twitter's moderation tools are a moving target, because of changes to the platform. This information may become inaccurate over time. 

  • Muting. You will not see tweets from the muted user in your timeline, and if you don't follow them, you will not see @-mentions from them either. If you do follow the muted user, you will still see direct messages and @-mentions from them, but their tweets will not appear in your timeline. The muted user may guess that you have muted them due to lack of response, but cannot confirm that. 
  • Blocking. In addition to hiding the blocked user's tweets (@-mentions or in your timeline), blocking forces the blocked account to stop following your account and prevents it from following again. The blocked user can confirm that they have been blocked.
  • Suspension. If Twitter's security enforcement team deems the account sufficiently rule-breaking, they can disable the account entirely. 

LiveJournal
LiveJournal is divided into individual journals and communities. Each community can be moderated by community owners, maintainers, and moderators. Each individual journal can only be moderated by that individual journal account. Security is very granular. 

The major relationship on LiveJournal is "friending", which is one-way: any journal can add any other journal as a "friend" without the agreement of the other. The "friend" relationship ties together access (them viewing your locked entries) and subscription (their entries appearing on your friends page).

  • Banning. A journal owner can ban another journal from any form of contact. The result is that the banned account cannot comment in that personal journal, and cannot send private messages to that account. However, the banned account can still add the journal as a friend, and read any entries according to the banned account's security accesses. The ban is per-journal, which means that the banned user can still reply to comments left by that journal elsewhere (in communities or other personal journals). Banning can conceal unwanted connections from your profile, but the connection will still appear elsewhere. 
  • Ban evasion. Any instances of ban evasion (using an alternate account to initiate contact after a previous account has been banned) should be reported to the LiveJournal Abuse Prevention Team, as they do not look kindly on it.  
  • Defriending. Defriending another user will remove their access to your locked entries, and remove their entries from appearing on your friends page. Note that banning will not automatically defriend someone. 
  • Friends-only commenting. To limit any comments even on a public entry to known friends, a journal owner can specify that comments can only be left by the members of the friends list. 
  • Custom friends groups (friends page). Custom friends groups can be created to control whose entries appear on the friends page without defriending an account. 
  • Custom friends groups (entry security). Custom friends groups can be created to control access to specific entries. 
  • Freezing. Any comment thread can be frozen so that it accepts no new comments under the parent comment; however, this does not prevent the creation of new threads.
  • Screening. Any comment can be hidden from view without deleting it. 
  • Deletion. Any comment or entry can be deleted by the journal owner. The comment creator can also delete their own comments. In communities, the community owner, maintainers, and entry creator can delete entries and comments to those entries.  
  • IP address viewing. Journal owners can choose to collect the IP addresses of commenters: anonymous-only, or anonymous and logged-in. While there are no security or automatic moderation tools that work off of this like some other platforms may have, it can still be used informationally (when a hateful anonymous comment is left from the same IP your logged-in acquaintance uses an hour later, it may be time to talk).  

Dreamwidth
Dreamwidth is divided into individual journals and communities. Each community can be moderated by community admins and moderators. Each individual journal can only be moderated by that individual journal account. Security is very granular. 

Dreamwidth's major relationships are Access and Subscription. Granting another journal Access allows them to view your locked entries. Subscribing to another journal lets any entries you have access to read appear on your Reading page. 

Dreamwidth's moderation tools are similar to LiveJournal's. 


Facebook
Facebook's major relationship is Friendship. Friendship must be mutual, but an individual account can allow other users to "follow" them instead (this is not enabled by default), which lets them see public posts only. Facebook has granular security with wacky user interface problems. 

Google+
The major relationship on Google+ is the circle. 


  • Profile Visibility. There is some sort of on/off toggle for profile visibility off-network in addition to on-network.
  • Noise. Can classify contact as Noise vs. Friend.
  • Notably lacking. A lot of privacy and anti-stalking/anti-harassment tools.
