Thursday, June 21, 2012

Tally-ho! How Citizen Science Projects Rank Using My Keys to Success


Photo Courtesy: Richard-G
The saga of classifying all 80 citizen science projects highlighted on this blog is complete. All have now been ranked according to how well they use my previously identified keys to successful citizen science projects. I've also had a chance to review the results and have come up with some interesting observations.

But before we get started...

I realize this is all based on rankings from a population of one. I make no claim that these are universal or indicate anything more than my personal opinion. Many things could make my results different from yours or anyone else's...being more knowledgeable in a specific scientific area, being a visual vs. auditory learner, having a natural interest in a particular subject, or just ranking items at a different time. But it does have value as a first validation of the proposed keys and sets up a framework for much broader analysis. So when I compare these numbers to other data sources (such as web popularity or number of published papers), it should demonstrate how well these qualitative success keys predict quantitative success.

First, the top five:
  1. Great Backyard Bird Count
  2. Christmas Bird Count
  3. Project NOAH
  4. Valley of the Khans
  5. AAVSO Variable Star Observation
Next, some initial observations:
  • Separating Project Types for Comparison: Comparing Distributed Computing (DC) projects to non-DC projects just is not fair. While the keys to success are useful for distinguishing between similar projects, the non-interactive nature of DC projects renders many of the criteria moot. This puts DC projects at an unfair disadvantage. Future analyses need to reflect this fact.
  • Even Distribution and Minimal Variation: We'll do some statistical analyses in a later post (along with additional grouping by project type), but the main thing to notice is that there is not much difference between projects; most fall in the "fair" range of scores between 3.5 and 5.5 (see the sketch after this list). To me that means most projects exemplify some keys to success, and each has at least some aspect it does extremely well (in the 6-7 range). Also, very few are poor...all have something going for them.
  • High Scores for Birdwatching: The two highest-ranking projects using my keys to success were both birdwatching projects. In some ways this should not be a surprise...birding has a long, proud history of involving everyday hobbyists in scientific research. Scientists and amateurs have worked together for over a hundred years, and much current research still relies on amateur observations. This seems to be both cause and effect. On the one hand, this long collaboration has provided valuable experience that is now being tapped to create successful projects; they've had many years to practice. On the flip side, amateur birdwatching has been around so long because it lends itself so easily to citizen science.
  • Zooniverse Projects not in the Top Five: Again, this is just a quick ranking from a population of one. But I was surprised that projects created by the Zooniverse team ranked in the middle of the reviewed projects. In the past I've touted these as model projects, and I still enjoy participating long after my initial reviews. But this exercise made me realize that their narrow focus on an individual scientific question is both a benefit and a curse. On the one hand it allows project designers to focus on doing a few things right without sacrificing anything for the sake of flexibility. On the other hand, participants only have to learn a few simple tasks, and without much variability their interest can wane as they do the same thing over and over. Although developing different projects in various scientific areas eases the problem some, there may still be too many similarities for these projects to rise to the top.
  • "Average" Project Are Still Great Innovators: The average ranking for each project does not tell teh whole story.  Even a low- to middle-scoring project still includes many successful aspects. Some of these are the braod ones identified in my study, others are smaller-scale but jsut as important.  For example, some of the innovative design features in some mobile apps, or unique ways of presenting options for identification-type projects.  These are worthy of a much larger discussion...maybe a future "Tips, Tricks, and Cool Techniques" series of posts.  You'll probably see much of the Zooniverse in those posts too.
So those are some of my immediate thoughts. With this complete, I'm looking at the next step...testing my own analysis against some independent, quantifiable criteria.

In this case I've predicted the projects that should be the most "successful". But does that translate into scientific success (peer-reviewed publications), popular success (number of participants), or public success (web popularity)? Time to run the numbers and find out!
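As a preview of that step, here is a rough sketch of how the comparison might run, using SciPy's Spearman rank correlation; the scores and publication counts are hypothetical stand-ins, not real project data:

```python
# A rough sketch of the planned validation: correlate my qualitative,
# key-based scores with one quantitative success measure. All numbers
# below are hypothetical placeholders, not real project data.
from scipy.stats import spearmanr

key_scores = [6.4, 6.1, 5.8, 5.5, 5.2]  # average rankings from my keys
pub_counts = [12, 15, 3, 1, 4]          # e.g. peer-reviewed papers per project

rho, p_value = spearmanr(key_scores, pub_counts)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```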

Tuesday, June 12, 2012

Ranking Citizen Science Projects by Hypothesized Success Traits

A few months ago I posted a long series of blog articles on the keys to successful citizen science projects. It identified a number of major and minor themes that help projects attract users, develop good data, and lead to publishable results. But now that I've created this model...does it really work?

To start testing the theory I've organized all the projects previously featured on this blog and ranked each from 1-7 on the various success criteria. I chose 1-7 since it allows a firm mid-point (for "Average" success on that trait) and some differentiation between the poor performers (1-3) and high performers (5-7). Unfortunately I'm not confident that I can personally fine-tune the rankings enough to justify a 1-9 or 1-11 scale, but this should do for now. All this will let us create a mathematical model that will hopefully predict the overall success of a project, as discussed below. But first we need to validate the rankings.
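As a rough illustration of what that model could look like (the trait names and rankings below are hypothetical, and the overall score is just an unweighted mean of the 1-7 rankings):

```python
# A minimal sketch of the scoring model: each project is ranked 1-7 on
# every success trait, and the overall score is the unweighted mean.
# Trait names and rankings here are hypothetical, for illustration only.
ratings = {
    "Great Backyard Bird Count": {"ease_of_entry": 7, "feedback": 6, "community": 7},
    "Hypothetical DC Project":   {"ease_of_entry": 6, "feedback": 2, "community": 3},
}

def overall_score(traits):
    """Unweighted mean of the per-trait 1-7 rankings."""
    return sum(traits.values()) / len(traits)

for project, traits in ratings.items():
    print(f"{project}: {overall_score(traits):.2f}")
```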

You can find a full document with the traits and individual rankings HERE. I've created locked-down and public versions of the document, so feel free to mark it up and add any important notes you'd like. Whether you disagree with the rankings or just want to know how each was evaluated, let me know in the comments section below or in the document itself. Only by working together can we create a model that works.

Finally, last week I asked everyone how to define "Success" in a citizen science project. I received a few good answers, like looking at the number of publications (success from a scientific viewpoint), active users (from a participant viewpoint), or search popularity (from a marketing viewpoint). But I'm curious to hear if there are any more ideas. Let me know as I look to evaluate the success of projects through some quantitative measures.


Tuesday, June 5, 2012

How to Define Citizen Science Success

Photo Courtesy: EU Social
I'm working on another long-term thought piece and wanted to get your opinions first.  I'll provide much more background later (to get your initial reactions without bias), but here are the initial questions I'm trying to answer:

  1. How does one define "Success" for a citizen science project?
  2. How can the "Success" of a citizen science project be measured? 
I'm honestly not even sure if there is a common quantifiable measure for citizen science, but for this exercise let's assume there is. 

What are your thoughts?  I'm interested to hear them in the comments below.