| Thread title | Replies | Last modified |
| Yh-- | 0 | 17:29, 11 March 2011 |
| Fundraising charts | 0 | 02:06, 17 August 2010 |
| Alliances and Partnerships | 0 | 17:23, 4 December 2009 |
| interview | 3 | 15:27, 23 November 2009 |
| Commons | 6 | 08:43, 23 November 2009 |
| interviews and data | 2 | 23:25, 16 November 2009 |
| data and analysis | 1 | 21:55, 29 October 2009 |
| Interview updates | 1 | 20:34, 26 October 2009 |
| Barnstar | 0 | 20:34, 26 October 2009 |
Fundraising charts
They're very useful and I would like to see more of them! See Thread:Talk:Wikimedia Foundation fundraising/Chapters. Thank you.
Alliances and Partnerships
Hi John -
I've left a few of the mandate questions on the task force page at Talk:Task force/Alliances and Partnerships. I hope members of the task force will go in and start to work on these questions - our due dates are actually starting to be a little scary! Thanks... -- Philippe 17:23, 4 December 2009 (UTC)
interview
Hey John. I read this report and contacted the author. We conducted an interview through email, just to follow up on some of his conclusions. I was going to upload the interview, but I was hoping that maybe you could do it, just to make it official. Maybe even contact the author and verify that the interview is accurate? We will want to use this as an authoritative source of data, like the other interviews. Should I email the .pdf of the interview off to you?
There's no need to have John upload it to make it official... just make sure that you cite that you did the interview, and upload away. :) (I'm answering for John, since he's traveling).
Thanks, Philippe. :) Randomran, I'd just want to be sure that Jose has the chance to see the notes and make any edits as he sees fit. You could shoot him a link to the PDF!
Commons
I'm not so sure they're within Commons' role. They're not media (graphics, movies, sound), they're text. And they're very specific text about the strategy project. I'm not sure they fit on Commons, but I'm willing to be convinced. ;)
I do agree with you, though, that they're incredibly interesting and educational files.
As I think local uploads should be banned completely (besides en.wikipedia fair use, etc.), Commons is the place to go. You never know when you'll want to use the file within another project. Btw, I think they are highly educational.
I'm looking at http://commons.wikimedia.org/wiki/Commons:Contributing_your_own_work which says that only media files in particular formats are accepted. I'm not sure this meets the requirements: first because it's not photography, illustration, audio or video, and second because it's in none of the file types that are allowed. :)
Agreed that these are very educational files!
The other issue with uploading to Commons is that interviewees agreed only to their notes being posted to the strategy site, as opposed to agreeing to post their notes in a more public and visible setting.
Okay, this is a reason for not using Commons. Btw, Philippe, PDFs are perfectly fine for Commons; lots of them have been uploaded already.
One more question - shouldn't the interviews be released under a free license? Or does this site's license automatically make that happen (a PDF is text, after all)? Or are citations of copyrighted text okay? If citations are okay, remixing surely is not... some ideas from the interviews definitely might be used further.
interviews and data
Hey John, no problem. I can get a little too into it at times. But I found myself reading as much as I could about the problem. I read everything. But I think a big problem is that people don't know what information is out there. Even me, I'm finding it to be a challenge to identify the gaps in our research, and figure out what we still don't know.
It might be useful to just summarize a few key sources of data that we have. We have our fact base. We have the interviews. We have the summaries of the interviews. And we have people who have started flagging outside research that's relevant to our task force. I think we kind of have this little list of resources here and there, but some stuff slips through the cracks.
Maybe what we really need is to just create a suggested list of action items for every task force (similar to the suggested templates), and show every task force some of the things they could be doing. The broad point: understand the research, and help flag key findings for other people on your task force. The smaller points -- read the fact base, read the interview summaries, read the interviews, modify the interview summaries, look for other research, add key findings from the other research to the fact base, and (finally) identify questions that cannot be answered with the data we have right now. Not that every project needs to do every one of these. But it's good just to show people the kinds of research activities that they can be doing. Judging by the scattered activity here, I suspect some people feel lost with what they could do in terms of research.
Thinking out loud.
I really like this idea. What do you think the best way of presenting this "action plan" is? A simple post on LiquidThreads for each TF?
I think it would be useful to make a little essay/template like the one we have for roles. Just some suggestions. But maybe the title would be "suggested sources of research". Have some links to some of the general research sources we've flagged and included in the site. But then the last one or two points should be to (1) look for additional research that is relevant to your task force, and (2) be ready to identify gaps in research (e.g.: questions we cannot answer). We could then post this little essay/template at every task force, listing it on the front page, and maybe on the talk page too to be safe.
data and analysis
Hey John, thanks for all your hard work assembling data about people's contributions. There are two areas related to the community health task force that I'd like to learn more about. (1) Why some new users survive their first few edits, and some don't. (2) Why veteran users eventually leave.
On the first topic, I think it might be helpful to compare the first two or three groups of editors. What topics/areas do the different groups (1 edit, 10 edits, 100 edits) focus their time on, and is there a trend that certain areas make for an easier socialization process? Compare the first edit of the three groups -- is the one-off group making lousier contributions than the 10 group or the 100 group, or do they have worse luck with the community? What other data can we assemble about people's early edits, and what trends can we find in how editors survive past the 10-edit threshold?
On the second topic, it's a bit more feasible to do more detailed surveys that target veterans who left Wikipedia. I think the most obvious reason for turnover is that they start devoting their time to other things. But we can dig deeper. Maybe we can ask where the bulk of their time was being used by the end of their Wikipedia life, and how that changed from their favorite days at Wikipedia. We can also ask general questions about what, in their mind, made them leave. But we want to be careful that it doesn't become a soapbox for where they personally think Wikipedia should go. We'll have to ask good questions that look for underlying patterns, rather than specific viewpoints.
I know this is a lot of work, and I haven't the faintest idea how much you might be able to manage. At this point, I'm just throwing out some ideas. Randomran 17:35, 29 October 2009 (UTC)
Hi Randomran. I'm a big fan of the ideas you're playing with. That said, I wonder how we could track users over time to assess their contributions--this strikes me as something that may be too resource-intensive (time, analysis) for task force members.
That said, we've pulled together some information here, but there may be more things to explore.
Looking forward to seeing how this all progresses! I wonder if a wide-scale survey is feasible in our timeframe; are there other options to pursue, like drawing on the task force members' contacts and doing more informal surveying?
Apologies--it looks like you've already seen the analysis we posted up. Looking forward to hearing your thoughts!
Thanks John. I figured some of my ideas weren't very feasible. An informal survey could give us a starting point. I think a lot of us have some casual understanding of the problem, but my hope is to have something a bit more scientific. Instead of a long-term study (which would take a lot of time and analysis), maybe we could do a sample? Look at a random 20 one-shot users, and examine their edits. Then compare it to a random 20 users who have contributed 10-100 edits, and see if their first edits were noticeably different in some way? I wouldn't know the first thing about finding those users, and picking a random sample. But I know the analysis could be relatively quick and easy once we found our sample. Randomran 21:00, 29 October 2009 (UTC)
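Purely as an illustration of the 20-and-20 comparison described above, a sketch like this could work. Everything here is an assumption: the editor records are made up (real data would have to come from a database dump or the MediaWiki API), and the revert flags and cohort bounds are placeholders, not real findings.

```python
import random

# Hypothetical records: (editor_id, total_edits, first_edit_reverted).
# These are fabricated for illustration; real data would come from a
# database dump or the MediaWiki API.
random.seed(0)
editors = (
    [(f"one_{i}", 1, random.random() < 0.6) for i in range(500)]
    + [(f"vet_{i}", random.randint(10, 100), random.random() < 0.3) for i in range(500)]
)

def sample_cohort(pool, lo, hi, n, seed=1):
    """Randomly sample n editors whose total edit count falls in [lo, hi]."""
    eligible = [e for e in pool if lo <= e[1] <= hi]
    return random.Random(seed).sample(eligible, min(n, len(eligible)))

def first_edit_revert_rate(cohort):
    """Fraction of a cohort whose first edit was reverted."""
    return sum(e[2] for e in cohort) / len(cohort)

# Compare 20 one-shot users against 20 users with 10-100 edits.
one_shot = sample_cohort(editors, 1, 1, 20)
veterans = sample_cohort(editors, 10, 100, 20)
print(f"one-shot first-edit revert rate: {first_edit_revert_rate(one_shot):.2f}")
print(f"veteran  first-edit revert rate: {first_edit_revert_rate(veterans):.2f}")
```

The point of the sketch is only that once a random sample is drawn, the per-cohort comparison itself is quick: any per-editor attribute (first-edit revert status, topic area, edit size) can be aggregated the same way.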
I wonder if this kind of analysis is an absolute necessity. We know that the new editors' edits are being reverted at high rates, which is the issue we're wrestling with here, but importantly (in my eyes), we know that the revert rate for every type of editor (experienced, inexperienced, etc.) is increasing. That, to me, suggests a broader trend of unfriendliness (as opposed to declining quality).
What do you think?
It's hard to get to a specific solution without identifying the specific problem. I think that some of it is general unfriendliness. But we want to know if the hostility is concentrated in a particular topic area. It would also be helpful to know if people are being reverted under the guise of enforcing one particular policy, or if they're being reverted on the whim of a few jerks. And where are they most likely to be reverted -- featured articles, stub articles, popular articles, or what? And are these edits even reasonably useful? I think we want to work on how we treat new users and socialize them into Wikipedia's culture, but it's hard to know what we need to work on without more specifics. It will be a lot of subjective arguments of "I think they hate what I hate", versus "no, I think they hate what *I* hate". It would genuinely be helpful to see what it is that the 100-edit editors are doing differently than the 1-edit editors, so that we can figure out how to close the gap.
Mostly thinking out loud. We may be stuck with the data we have, and we can still do a lot with just that.
Interview updates
Hi there, I notice you've just uploaded lots of updated interviews. I had already read all the interviews. Are the updates substantive enough to warrant me reading them all again, or are they very minor changes? Feel free to reply here as I will keep an eye on this page. Thanks.