Thursday, June 21, 2012

Tally-ho! How Citizen Science Projects Rank Using My Keys to Success


Photo Courtesy: Richard-G
The saga of classifying all 80 citizen science projects highlighted on this blog is complete.  All have now been ranked according to how well they use my previously-identified keys to successful citizen science projects.  I've also had a chance to review the results and have come up with some interesting observations.

But before we get started...

I realize this is all based on rankings from a population of one.  There is no thought that these are universal or indicate anything more than my personal opinion.  Many things could make my results different from yours or anyone else's...being more knowledgeable in a specific scientific area, being a visual vs. auditory learner, having a natural interest in a particular subject, or just ranking items at a different time.  But it does have value as a first validation of the proposed keys, and it sets up a framework for much broader analysis.  So when I compare these numbers to other data (such as web popularity or number of published papers), it should demonstrate how well these qualitative success keys predict quantitative success.

First, the top five:
  1. Great Backyard Bird Count
  2. Christmas Bird Count
  3. Project NOAH
  4. Valley of the Khans
  5. AAVSO Variable Star Observation
Next, some initial observations:
  • Separating Project Types for Comparison: Comparing Distributed Computing (DC) projects to non-DC projects is just not fair.  While the keys to success are useful for distinguishing between similar projects, the non-interactive nature of DC projects renders many of the criteria moot.  This puts DC projects at an unfair disadvantage.  Future analyses need to reflect this fact.
  • Even Distribution and Minimal Variation: We'll do some statistical analyses in a later post (along with additional grouping by project type), but the main thing to notice is there is not much difference between projects and most are in the "fair" range of scores between 3.5-5.5.  To me that means most projects exemplify some keys to success, and each has at least some aspect they do extremely well (in the 6-7 range).  Also, very few are poor...all have something going for them.
  • High Scores for Birdwatching: The two highest-ranking projects using my keys to success were both birdwatching projects.  In some ways this should not be a surprise...birding has a long, proud history of involving everyday hobbyists in research.  Scientists and amateurs have worked together for over a hundred years, and much current research still relies on amateur observations.  This seems to be both cause and effect.  On the one hand, this long collaboration period has provided valuable experience that is now being tapped to create successful projects.  They've had many years to practice.  On the flip side, amateur birdwatching has been around so long because it lends itself so easily to citizen science.
  • Zooniverse Projects not in the Top Five:  Again, this is just a quick ranking from a population of one.  But I was surprised that projects created by the Zooniverse team ranked in the middle of the reviewed projects.  In the past I've touted these as model projects, and I still enjoy participating long after my initial reviews.  But this exercise made me realize that the narrow focus these projects have on an individual scientific question is both a benefit and a curse.  On the one hand, it allows project designers to focus on doing a few things right without sacrificing anything for the sake of flexibility.  But on the other hand, participants only have to learn to perform a few simple tasks, and without much variability that can lead to waning interest as it becomes the same thing over and over.  Although developing different projects in various scientific areas eases the problem some, there may still be too many similarities that prevent these projects from rising to the top.
  • "Average" Projects Are Still Great Innovators: The average ranking for each project does not tell the whole story.  Even a low- to middle-scoring project still includes many successful aspects.  Some of these are the broad ones identified in my study; others are smaller-scale but just as important.  Examples include the innovative design features in some mobile apps, or unique ways of presenting options for identification-type projects.  These are worthy of a much larger discussion...maybe a future "Tips, Tricks, and Cool Techniques" series of posts.  You'll probably see much of the Zooniverse in those posts too.
So those are some of my immediate thoughts.  With this complete, I'm looking at the next step...testing my own analysis against some independent, quantifiable criteria.

In this case I've predicted the projects that should be the most "successful".  But does that translate into scientific success (peer-reviewed publications), popular success (numbers of participants), or public success (web popularity)?  Time to run the numbers and find out!
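For readers who want to follow along with that next step, one straightforward way to "run the numbers" is a Spearman rank correlation between my key-based scores and an independent measure.  Here's a minimal sketch in Python; the project scores and paper counts below are made-up placeholder numbers, not real data from my rankings:

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation for two equal-length lists without ties."""
    n = len(xs)

    def ranks(vals):
        # Rank 1 = smallest value
        order = sorted(range(n), key=lambda i: vals[i])
        r = [0] * n
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical key-based scores for five projects vs. a made-up
# count of peer-reviewed papers for each of those same projects:
key_scores = [6.8, 6.5, 5.9, 5.4, 5.1]
papers = [40, 55, 12, 3, 20]
rho = spearman_rho(key_scores, papers)  # 0.6 for these made-up numbers
```

A rho near +1 would mean the keys to success predict the quantitative measure well; a value near 0 would mean they don't.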
