Benefits of having "trusted / high quality" user recognition

A number of threads talk about disputes, POV wars, retaining good editors in the face of bad editorship, etc. I said in a post that creating an official usergroup of "trusted" (or "good quality") content editors might be the single biggest step towards helping here that we can make, given we have only 2-4 recommendations total and want the ones that will do the most. Here's my "thought experiment" on why:

Suppose there was a process similar to RFA (but easier!) for recognizing a user as "trusted" in their content work and article talk interactions only. By this I mean they consistently work well on content, edit neutrally, don't edit war, collaborate rather than filibuster, make generally good edits, cite sources, improve content, debate issues instead of attacking personalities, and have a good general ability at content work. A user asks the community if they are trusted this way, like RFA or like "rollback" or any other trust level the community can grant. No extra rights in editing, but...
1. We now have users with a specific standing. People who worked for that will guard their editor standing and not want to lose it through bad editing. A vested interest in high-standard content work. Incentive!
2. We can patrol articles and understand edit wars much more easily, because in article history we can highlight edits by "high quality" or trusted users, or bots can spot patterns involving trusted and other users (a rough sketch of what such a tool might look like follows this list). Of course not all "trusted users" are good and not all others are "bad", but it's valuable information on a dispute if you can see it in the history.
3. Not all editors are equal in approach (in their editing quality). Not all want or would get adminship. So we have something people can aim for as an ordinary editor. Enwiki has hundreds of thousands or millions of editors, but only about 1,700 admins. I'd like to see this become so popular, so much "something a new editor is guided towards" by newcomer help and the other things discussed, that most editors who stay around a while will ask for this trust. Maybe 20k such users. Think what this does! (It also retains editors at Wikipedia - gaining formal recognition/status is an incentive too.)
4. Finally, edit wars and entrenched problem areas. In a difficult edit war, ArbCom or the community can now say "any trusted content editor may edit the article. Others = talk page only". With many, many trusted content editors, there's no real POV or "too narrow a pool of editors" issue here. Anyone who wants to edit the mainspace and can get community agreement that they edit content well may join in (others only on the talk page). Instant stability, good decisions, consensus, and quality on problem and edit-war articles! No harm done, no bias added, articles still edited by a wide pool that anyone can join.
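
To illustrate point 2, here's a minimal sketch of the kind of history-annotation helper a patroller or bot could use. The list of flagged usernames and the simplified revision format are hypothetical stand-ins, not any existing MediaWiki API:

    # Hypothetical sketch: annotate a page history with the proposed "trusted" flag.
    # The data shapes are illustrative stand-ins, not a real MediaWiki API.

    def annotate_history(revisions, trusted_users):
        """Mark each revision according to whether its author holds the flag."""
        return [dict(rev, trusted=(rev["user"] in trusted_users)) for rev in revisions]

    def summarize_dispute(revisions, trusted_users):
        """Rough patrolling signal: how much of the churn involves flagged editors."""
        annotated = annotate_history(revisions, trusted_users)
        trusted_edits = sum(1 for rev in annotated if rev["trusted"])
        return {"total_edits": len(annotated),
                "trusted_edits": trusted_edits,
                "other_edits": len(annotated) - trusted_edits}

    history = [{"user": "Alice", "comment": "add cited paragraph"},
               {"user": "Bob", "comment": "restore my version"},
               {"user": "Alice", "comment": "rv, see talk"}]
    print(summarize_dispute(history, trusted_users={"Alice"}))
    # {'total_edits': 3, 'trusted_edits': 2, 'other_edits': 1}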

So if we only have 2-4 recommendations, this might be one with profound scope to help in many ways - stability, quality, entrenched edit wars, experienced editor enjoyment, and incentive to gain good content editing skills and edit well!

FT2 (Talk | email)19:22, 26 November 2009

I think it's a good idea if you can de-politicize it. Be mindful of factions trying to bolster their point of view and slam the points of view of others. The more we can quantify the better. For example, if someone has three FAs under their belt, then they clearly understand the processes and standards of quality that Wikipedia is looking for. That would get us halfway there, at least. The other half of being "trusted" on quality would be making sure they have a good track record of civility (not a jerk) and consensus building (not a partisan hack), which is harder to measure.

Randomran20:13, 26 November 2009
 

What you are proposing I see as an extension of autoreviewing. I like your argument in point 4. Such an improvement would help, but by itself I am afraid it will not change much. One technicality I see is that a user can be trusted in one field but have problems editing another. Another is what a colleague of mine noted some time ago: "no person interested in history is "adminnable" in Wikipedia." What he means is related to my mini-essay on "sticking mud". I am afraid that in the current system, quite a few editors who create good content would fail to be recognized as such in a community vote (just as they would fail at RfAdm), as their disruptive content opponents would tag-team, scream murder, sling mud and create enough disruption and mistrust at their application that they would fail to gather sufficient support to pass. That said, I have a potential solution: perhaps after an initial few months of open voting, voting should be limited to other "trusted content creators" only (the same should likely be done for voting for admins). In other words - I can trust the quality editors to make quality decisions, but I am increasingly disappointed with flaming and mistrust-sowing comments from "the peanut gallery" in various discussions I see.

I'll post some of my more specific thoughts soon.

Piotrus20:21, 26 November 2009

I like both these points (Randomran, Piotrus).

  1. The concept would need an effective removal process that's hard to game and not easily abused.
    Note some people will be prepared to build up a 3-month track record to "get into" a topic this way. But - if removal is fair and easy, this isn't a problem - it's like IP block exemption that way - it takes time and effort to get, and it's easy to lose if abused. It doesn't need to be perfect, and the odd gamer or exception isn't a problem either. It's enough if it cuts the problem right down, and those users who do evade it are then easy to deal with because they're very few, and because most other users on the topic are good, balanced users who'll handle it quite properly rather than edit war or dramatize. So it's "self-repairing".
  2. To address Piotrus' technicality, "trust" in this sense includes appropriate recognition and self-management where they have a COI or other problems/strong views. Anyone can be nice and neutral on an article they don't care about. This idea implies good (reliable, trustworthy) editorship on articles the user does care about too.
  3. I'm fine with self-selection after a while. But it encourages divergence between "trusted content editors" and "all editors". Maybe look at enwiki Mediation Committee for a better way - section for mediators to comment, section for anyone else to comment, and set criteria for acceptance/veto.

    So for example it might need a user to fill in a template of evidence about their editing, and get >= 50% support at community feedback from editors with > 100 mainspace edits, plus >= 75% support from at least 10 trusted content editors (a rough sketch of that check follows). We want to encourage mass involvement and good standards, so keep it based on pre-defined data and agreed percentages and acceptance/veto criteria. I'll work on this a bit.
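
A minimal sketch of that acceptance check, assuming the thresholds above and a made-up vote format (supports, voter's mainspace edit count, whether the voter already holds the flag). Nothing here is an existing tool:

    # Hypothetical sketch of the acceptance thresholds floated above.
    # votes: iterable of (supports: bool, mainspace_edits: int, is_trusted: bool).

    def passes(votes, general_min=0.50, trusted_min=0.75,
               trusted_quorum=10, edit_floor=100):
        general = [s for s, edits, _ in votes if edits > edit_floor]
        trusted = [s for s, _, is_trusted in votes if is_trusted]

        if not general or len(trusted) < trusted_quorum:
            return False  # not enough eligible feedback or trusted participation

        general_support = sum(general) / len(general)
        trusted_support = sum(trusted) / len(trusted)
        return general_support >= general_min and trusted_support >= trusted_min

    votes = ([(True, 150, False)] * 6 + [(False, 200, False)] * 4 +
             [(True, 5000, True)] * 9 + [(False, 3000, True)])
    print(passes(votes))  # True: 75% general support, 90% support from 10 trusted editors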
FT2 (Talk | email)20:53, 26 November 2009

How about becoming a trusted editor automatically after having written 1 FA / 5 GAs / 50 DYKs? No need for community voting, just show quality content that was recognized by others (FA reviewers/GA reviewers/DYK reviewers). In addition, the trusted editor would be trusted only for content areas recognized by appropriate WikiProject tags on content he has created. The approval procedure could be held on a given WikiProject's talk page (and perhaps centralised via transclusion to a more general forum), and the voting should be open to members of that WikiProject as well as all other trusted editors. As such, the approval would be discussed by experts (specific and general), with little chance of the status being disrupted by wikipolitics or trolls.

Piotrus22:04, 26 November 2009

No. Writing X many of this, that, or the other doesn't necessarily correlate with trust in editorial approach generally. It doesn't evidence appropriate neutrality on pet subjects, nor talk page discussion approaches. Nor is the reverse true; not writing these doesn't in any way deny trust. I'd take high-level content as evidence, but other stuff counts under "trust". I'll try to write up a brief idea on this in a bit.

FT2 (Talk | email)22:21, 26 November 2009
 

FT2 is right that some of these processes are a little too vulnerable to whims and personal opinion. The reason that FAs are great is that they're the only status that's given by consensus. I've actually seen apparent GAs try for FA and get slammed hard, with people arguing they should be demoted.

Not to say that FA should be the only way to tell if an editor understands quality. But going with other measures could make it possible for a faction to pump up the credentials of their own narrow-minded editors.

Randomran23:15, 26 November 2009
 

I like the overall feel of this. I think we can work out details of what percentages are appropriate. Can someone tag this as something we want to further refine?

Bhneihouse03:45, 27 November 2009
 
 

PS. An alternative - or complementary - way of addressing this is to recognize non-anonymous experts. See my essay.

Piotrus21:23, 26 November 2009

Non-anonymous experts are of course (very) desirable. But that doesn't mean they always make desirable editors. There are non-anonymous experts who can't "get" the idea of how we edit, can't collaborate, can't handle NPOV, have their own (non-policy) view on sources, write fine on their pet topics and edit war on others, .....

I wouldn't recognize expertise in this idea. Not because I'm anti-expert (I'm not) but because even in scientific publishing, expertise does not necessarily mean balance, neutrality, non-fringe, appropriate conduct towards others they don't agree with, and the like. An expert needs to be able to show these basic editing skills like anyone else, because Wikipedia is not a publisher of original research and so on. This is about recognizing people who know how to edit and are accepted as good at doing so. Nurture those, and the project benefits for all -- including for experts.

I'd tell the expert that (like joining any new project) they need to learn how to edit Wikipedia, which will be different from how they edit their own papers. But they can edit freely, and (since they are bright and used to scientific collaboration) they'll surely be recognized as a trusted content editor for our purposes nice and quickly, if they take a few minutes to understand how we work here. In fact I'd make that part of the "New user wizard" ("Do you have formal credentials in any field you plan to edit?" + guidance).

FT2 (Talk | email)21:40, 26 November 2009

Note: there are also "less formal" credentials that work, e.g. while Master Gardener is a formal credential (one that I have), many people don't know or understand the certification. Can we start compiling, somewhere, a list of potential certifications for admins to utilize in screening?

Bhneihouse03:49, 27 November 2009

I could have said that better: informal and lesser-known credentials, available in a list, may help admins.

Bhneihouse03:50, 27 November 2009
 
 
 

Grrrr: LT has eaten my post twice, and apparently I didn't learn the first time to compose in OO :-(

So: suffice to say, very quickly .... based on my own experiences as a new editor, I can tell you I would have found trust-labelling-of-editors very very helpful. So yes, I can imagine it being useful for non-new-editors, and it would also be useful for new people. It's a good idea.

I would however make a plea for trustworthiness being automatically derived from on-wiki actions, rather than being manually assessed on a case-by-case basis. 1) It scales better, 2) it eliminates popularity contests, and 3) it prevents people from needing to waste a bunch of time debating particularly controversial / edge cases. I know there are lots of challenges inherent in an automated approach, but I think the benefits would outweigh the drawbacks.

Sue Gardner22:28, 26 November 2009

I want to think carefully on this. You wouldn't add a post like this lightly, so I'm going to think on it.

My first thought is, roughly: keep it simple, and use automation to cut down the number of cases that need more than cursory human review (the 80/20 rule).

However, the benefits of something like this flow from the idea that some users can generally be trusted to do right in articles and article discussions of any kind. Those are the users we want in this pool, because once identified, they provide a large population of experienced content writers not needing much guidance or checking, and capable of being handed heavy disputes to put into good editorial order.

I don't see a way to judge it on any automated basis, though. The benefits here flow from their acknowledged trust to do right (broadly speaking) on any content matter, up to and including self-management of bias, interaction style, and the like.

That should be what we're guiding people towards. That's actually what we want and need -- and it's not hard to do either. It'll become a norm that people will want to attain. Make it valuable, and people will value it. But I don't think it's open to easy automation. It would probably be easier to design a system with a simpler human element instead. I agree the intensity, drama, and diversion of communal resources seen at RFA isn't where something like this, or on this scale, would end up. Probably very different.

I'll think about it harder, but those are my initial thoughts on why I've positioned it as I have.

FT2 (Talk | email)23:58, 26 November 2009

I agree about not automating this process. I also agree about popularity contests, thanks Piotrus. So the admins surveying these users themselves have to be pretty much beyond reproach. Perhaps a pivotal point in quality control is how Wikipedia "approves" and trusts editors? Perhaps another pivotal point is the actual "structure" of this process? I apologize if what I am saying is obvious and derived from all of your statements -- I am trying to get us to bullet points that become more cohesive as we work.

Bhneihouse03:57, 27 November 2009
 

Hi FT2.

I think I may have overstated the strength of my position in my earlier post; I was having some LT problems, which made me, in the end, probably overly succinct :-)

So let me make a couple of longer comments now:

It sounds to me like what you want to create is a marker/label for people who are particularly trusted to have good judgment. Probably these would be people who've been around for a while, understand the policies well, and are reasonable and thoughtful.

I think that's a great idea. I think new editors would really appreciate being able to tell at-a-glance if an editor they didn't know was someone they should trust and listen to. I think also that one of the big points of pain for new people is that when their edits are reverted, they automatically assume "the Wikimedia community" has rejected their edit -- they don't understand that it's the act of an individual, and not necessarily a good or wise decision. If some editors were labelled as particularly trusted then A) new people might not be so quick to assume that everyone speaks with consensus authority, and B) they might actually be motivated to seek out advice and counsel from the ones who are specifically labelled as known to have good judgment.

I can also see how labelling-of-particularly-trusted-editors would be helpful for other experienced editors -- for the ArbCom, for people who do OTRS work, and so on. I don't have that kind of experience myself, but I can imagine how this would be useful for the people who do.

(((Basically, we currently all have our own informal mechanisms for assessing people's reputations, and learning who to trust. I trust people I know personally, like Philippe. And I trust people who my trusted people trust -- like, I first began trusting you, because Jimmy told me I could :-) But that's limited: it doesn't scale very well, and it takes a long time for those networks to develop, which makes things especially hard for new people.)))

So I think your idea makes a lot of sense: it would enable trust to scale better than it currently does.

So, after thinking it through some more, I have two comments/suggestions for you:

1) I worry about the word “trusted.” If some editors are labelled as trustworthy, by implication other editors will be seen, or will feel as though they're seen, as not trustworthy. Which I think would make lots of people feel bad, and could make the “trusted” people targets for envy and anger.

In my old world (journalism) we used the designation "senior editor" for exactly this kind of person, and I'd recommend you think about using it for this. A senior editor in a newsroom generally does the same work as other editors, but the designation is an acknowledgement, and a signal to others, that they are especially seasoned and credible and wise. It's a label that new people would understand. And it wouldn't alienate other people – they can still be good, constructive, useful editors, and they can aspire to earn senior status without feeling diminished by not having it.

2) I really worry about the system for attaining this status being gameable. I don't think you could afford to have an open voting component, because there will always be a few trolls and cranks who are super-motivated to game the system, and practically anybody can rustle up a few dozen friends to help them do that. I worry that if there's a voting component, it'd be a magnet for posturing and rabble-rousing and drama, which would end up wasting tons of good people's time.

I also think voting might make it impossible for anyone who edits controversial topics to gain this status. That would be bad, because as I understand it, some of our best and wisest editors focus on trying to bring neutrality to difficult topics. In a voting-based system they would get voted down, I think, by power blocs of editors with strong POVs.

So I don't think voting would work. You want these decisions to be made thoughtfully, by experienced people. So I would suggest this instead: I think you could have an (automated, low) bar that people need to reach before they're considered for this status. Like, maybe one year of editing experience, and a minimum of 200 edits. That would screen out people who simply haven't been around long enough, or edited enough, to have developed a good understanding of the policies. And then I think you would need a trusted team of people, who would investigate people's edit histories on a more qualitative basis, and seek out people who are particularly thoughtful and wise and constructive.
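
For what it's worth, a minimal sketch of that two-stage idea, with the automated floor (one year, 200 edits) only producing a queue for the qualitative reviewers. The thresholds and data shapes are just the illustrative examples above, not a real system:

    # Hypothetical sketch: automated pre-screen feeding a human review queue.
    # Thresholds are the illustrative ones above; nothing here is an existing tool.
    from datetime import date, timedelta

    def eligible_for_review(account_created, edit_count, today,
                            min_tenure_days=365, min_edits=200):
        """Passing only puts a candidate in front of the review team."""
        return (today - account_created >= timedelta(days=min_tenure_days)
                and edit_count >= min_edits)

    def review_queue(candidates, today):
        """candidates: iterable of (username, account_created, edit_count)."""
        return [name for name, created, edits in candidates
                if eligible_for_review(created, edits, today)]

    print(review_queue([("LongTimer", date(2008, 1, 15), 4200),
                        ("Newcomer", date(2009, 9, 1), 350)],
                       today=date(2009, 11, 27)))
    # ['LongTimer'] - Newcomer fails the one-year tenure floor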

Who could make those decisions? I don't know the internal workings of the projects well enough to say – but I would guess the ArbCom could, or the ArbCom could nominate a group to do it, or the community could nominate a group that the ArbCom could then approve. Or maybe the ArbCom could nominate the first dozen senior editors, who could then set up a system for expanding their own ranks. It seems to me that the kind of people who create Featured Articles would be a good starting point – but like I said, I don't know the internal workings well enough to really say.

Sorry this is so long, but I hope it's helpful. It's a good idea: I would really like to see it work :-)

Sue Gardner05:24, 27 November 2009

Agreed on the controversial topics issue. I've seen reasonable people caught in the middle, and ending up with nothing but harassment (from both sides) to show for it. The more we can rely on other measures the better. The human component can be there as a screen, as a veto, but people should really be judged by accomplishments that the widest number of Wikipedians cannot deny.

Randomran05:45, 27 November 2009
 

Framing is important, so I support the "senior editor" name.

ArbCom doesn't have time to deal with that stuff; instead, how about we make all editors who have written 2+ FAs "senior"?

Piotrus05:48, 27 November 2009

I think that's a good place to start. Simple, objective, hard to game, but achievable within 6 months to a year if you want it.

Randomran05:51, 27 November 2009

Urgh. No.

Good content writers may or may not be good at interacting and working with other users, may not be good in other topic areas, and so on. That group has its fair proportion of users who couldn't fill the kind of role we're talking about, though.

FT2 (Talk | email)05:56, 27 November 2009
 

I'd suggest if anything this:

  • Users will be assessed on trust in their editorship. They must submit a portfolio of significant experiences and skills covering:

  • Basic and peer reviewed article writing - typically at least 10 non-stub articles and 1 GA/FA.
  • Specific editor skills - responses to at least 120 noticeboard issues spread across the major noticeboards, including FRINGE, RS, NPOV, BLP/N, COI/N, EDITWAR/WQA, content RFC/3O, and xFD (including at least one "rescued" AFD), plus basic template skills.
  • Peer review skills - typically at least 5 GA and 2 FA reviews
  • Collaboration skills - significant active involvement in a Wikiproject for at least 2 months, or equivalent.
  • Editor dispute skills - addressing 10 or more disputes with a mix of amicable and hostile/improper editorship.
  • Your own showcase - at least 3 items (not otherwise used here) that showcase your interests and abilities in any wiki area. These could be unusual or interesting content or editorial matters, media work, admin or patrolling, or any other area - your choice!
  • Users wishing to object must show diffs that clearly evidence any of the following in the last 9 months:

  • Two or more instances of clear poor judgement (not just legitimate disagreement) related to NPOV, OR, CITE, V, RS, or COPYRIGHT.
  • Two or more clear demonstrations of a personal attack, attacking the person rather than the evidence, threats, or a thread in which the user filibustered, "gamed", edit warred or obstructed consensus in an unreasonable manner.
  • A history of poor xFD or other content process contribution
  • A pattern of undue serious incivility covering at least 5 instances.
  • Evidence that the portfolio grossly misrepresents their content editorship.
  • Any blocks or other formal warnings or sanctions by an administrator.
  • Gross bad faith, breach of trust, deception, or any access removal related to poor conduct (including but not limited to puppetry, faked content or citations, concealed POV warring, conspiracy to disrupt the wiki, and the like) - without time restriction.
  • Any diffs should be self evident, with minimal context or explanation, and clearly show the behavior concerned.

  • Claims (portfolio or concern) not evidenced as above are disqualified except in exceptional circumstances.

That allows users a rough criterion for "evidence that should be publicly shown".

It's not even that demanding - 10 basic articles, one decent peer-reviewed article, half a dozen peer reviews, a couple of days' work on noticeboards (to show specific areas), and some dispute resolution and collaboration.
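
As a rough illustration only, those minimums could be expressed as a simple checklist; the field names and structure here are invented for the sketch, not part of any existing process:

    # Hypothetical sketch of the portfolio thresholds above as a checklist.
    # Field names are invented for illustration only.

    PORTFOLIO_MINIMUMS = {
        "non_stub_articles": 10,
        "ga_or_fa_articles": 1,
        "noticeboard_responses": 120,
        "ga_reviews": 5,
        "fa_reviews": 2,
        "wikiproject_months": 2,
        "disputes_addressed": 10,
        "showcase_items": 3,
    }

    def missing_requirements(portfolio):
        """Return each unmet requirement as (submitted, minimum)."""
        return {key: (portfolio.get(key, 0), minimum)
                for key, minimum in PORTFOLIO_MINIMUMS.items()
                if portfolio.get(key, 0) < minimum}

    draft = {"non_stub_articles": 14, "ga_or_fa_articles": 1, "ga_reviews": 6,
             "fa_reviews": 2, "wikiproject_months": 3, "disputes_addressed": 12,
             "showcase_items": 3, "noticeboard_responses": 80}
    print(missing_requirements(draft))
    # {'noticeboard_responses': (80, 120)}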

I don't like being prescriptive, and I think it's gameable, but it's still better than 1-2 FAs. It'd probably work.

FT2 (Talk | email)06:09, 27 November 2009

I'm not against formalizing the status of quality users (the German wiki has already experimented with something like that in the form of 'quality revisions', revisions that can be flagged by experts only). However, a user who is an expert in Indian cooking isn't an expert in quantum physics, for example. The most it would give us is a very general indication of whether we can assume good faith in a certain contributor.

Woodwalker05:53, 29 November 2009

The essence of wiki working is that certain approaches are key. Approaches (or their lack) aren't the same as expertise (or its lack).

If given the choice between an expert who could not show good editorial approaches and a good editor with the right approaches who lacked specific topic expertise, then for this project, choose the latter, not the expert. Why? Many reasons:

  • The expert (in such a scenario) may have his own foibles and bias, an unwillingness to hear others, or a non-neutral stance.
  • The expert doesn't know how to collaborate or work with others in a mass-edited project. He may drive others off, absorb immense time, and harm the communal fabric. We get one perfect article (only we don't know if it's biased, because every community dialog about it descends into argument and name calling) and not a lot more.
  • The non-expert with good approaches will listen to others, consider their views, research them and check the details. They may not know the subject, but they know how to examine others' work and check facts. They encourage others to work with them, and as a community the work gets done to a high standard even so.

Part of Wikipedia is that although we want high quality, we aren't a cutting-edge academic source. We'd like to have some of that, but it's not (as I understand it) our actual core goal.

If we're assessing what kind of editor can be broadly trusted to work on all kinds of difficult articles unsupervised and do so in a proper way (as this thread considers), then the qualities I've outlined will get you that person, and fairly high-quality (though not cutting-edge) writing. A PhD won't.

$0.02 :)

FT2 (Talk | email)06:18, 29 November 2009

I agree, but let's not call them 'experts' when in fact they're 'good editors'. I like the idea of creating some kind of special user status whenever an editor reaches the requirements you mentioned (06:09, 27 November 2009). What do you think about quality revisions, though? It's also a way of showing the reader how the best editors rate the quality of an article.

Woodwalker07:42, 29 November 2009
 
 

I don't necessarily see it as better, but I am not opposed to making this more complex (still, I like KISS...). Anyway, regarding opposing, I would strongly suggest taking editors' activity into consideration. In other words: editors who are very active and have edited for a long, long time (and logically would be likely to be good or trusted editors) are also more likely to have enemies, or at least more "exception-to-the-rule" dirt that can be brought up against them (lookie, a warning from 2006, a PA from 2007, an ArbCom from 2008... etc.). As such I'd suggest that, assuming the editor applying has been reasonably active in the past 6 months, examples (diffs) of poor judgment should be no older than that period.

Piotrus05:22, 30 November 2009
  1. My own view is that most "professions" have some kind of "continuing professional development" after initial qualification. While wiki isn't a "qualification", we could well ask "what are we offering users to stretch their skills and serve as a means of self-evaluation and development as editors?". Something like this, a "recognized wiki-editor qualification", would be good for the ethos that way too.
  2. Piotrus - the "reasons to object" were crafted to require both that specific types of bad activity or judgment be shown via evidence alone, and that they fall within a time limit of the last 9 months, as drafted. With luck that solves your concern?
FT2 (Talk | email)11:02, 30 November 2009
 
 
 
 

Long thread, quick replies:

  1. Per Piotrus: Never underestimate the importance of choosing terminology well. "Senior editor" works just fine for me. We can debate terminology if this goes ahead, but the basic point's good.
  2. A panel's fine, but you hit the old problem then: this is a panel that indirectly controls who's designated as a "senior editor". So that becomes a focus of allegations, games, and so on, as ArbCom can be. We know where that path goes, and if avoidable, let's avoid it.
  3. This area's easier than ArbCom because editorial behaviors are almost all public record (even alliances emailed in private become obvious on-wiki a lot of the time), so bad conduct's visible. The community was founded on open decision making, and for all other senior roles it works just fine. Admins, arbs: both done by the open community. Let's see if we can avoid losing or diminishing that. It's part of the "trust" model to trust the wider community where we can (with suitable precautions).
  4. Instead of a panel, or trying to be super-ideal on selection, we can have a "pretty good" selection, if we also have ways to effectively catch the exceptions. Don't let the minority fringe case distort what's a fine and simple process for the majority of cases. So we might back a 95% valid (but slightly gameable, if someone is determined) nomination system by also having:
    • A clear and standardized removal process
    • Perhaps some kind of scrutineers' panel that can review cases claimed to be grossly affected by bias and canvassing, or where the results don't reflect appropriately on the user.
  5. ArbCom's definitely the wrong body for this.
FT2 (Talk | email)05:51, 27 November 2009

Agree with FT2's no. 3; I think a panel isn't necessary. The ArbCom isn't supposed to be involved in matters of content anyway (at least, the Dutch one isn't; I'm not totally sure about other ArbComs). Let's trust that, after a careful analysis of the portfolio and hearing the opinion of at least, say, three other 'senior editors', the community is able to choose a new 'senior'. Having the community choose them is more in line with the spirit of Wikimedia projects than having an elite panel.

Woodwalker07:51, 29 November 2009
 
 
 

"it eliminates popularity contests" - very, very important, per my comments above.

Piotrus03:32, 27 November 2009
 

I've had a go (per Piotrus/Sue Gardner) at designing a hybrid approach. Rough concept is here, and please comment on the talk page if it's too off-topic for this thread.

It doesn't have to be "perfect", but it should be hard to game, fairly good at identifying good-quality content editors, simple, and low-overhead for individuals and the community.

The key aims are automation, low gameability, simplicity of experience for users, very low scope for politicking/dramatizing/popularity contests, and low time needed by participants. I feel very strongly that automation alone (metrics for "trusted users") isn't viable, despite Sue's valid point. What we can easily do with existing tools is streamline it so far that it's almost as efficient and substantively keeps all the benefits of both.

This one's a concept (rough only, I'm afraid) - a hybrid of the enwiki Mediation Committee's nomination method (which demands filtering of good-quality users and has historically operated with no drama whatsoever) and a modification of the SecurePoll tool already in place.

That's the direction I'm thinking. It's a bit more involved than 100% automation, but it is simple (once set up) and keeps almost all the benefits of automation, all the benefits of user involvement, and very few of the drawbacks of either, when merged.

FT2 (Talk | email)04:11, 27 November 2009

One thing I strongly encourage people to keep in mind: any system that is set up will be gameable. All of them. When I was doing corporate training, we had a rule that you "train to the norm, not to the exception." The idea was, of course, that you write a process or a training scheme that will work MOST of the time. Someone's always going to be an exception. Someone's always going to game the system, but if we can make it work 90% of the time, that's good enough.

The perfect is the enemy of the good.

~Philippe (WMF)23:30, 27 November 2009

Covered. I think I said almost the same above - you design it to be 90-95% good, which means it's slightly gameable. But you counter that by making sure removal is also to-the-point, and that some kind of scrutineers exist for "surprising" results where there is widespread suspicion of gaming or an undue conclusion.

FT2 (Talk | email)23:55, 27 November 2009

Sounds good to me. How about adding a process for dealing with the exceptions? I think you probably know what they are by now.

Bhneihouse19:11, 28 November 2009