|Photo Courtesy: Richard-G|
Now that we've discussed many of the important elements, how do they actually operate in practice? Let's look at two citizen science communities that incorporate many of them well. Both are extremely popular and should be familiar to most of you, but I bet you've never thought of them in this light before.
Cornell Lab of Ornithology:
The Cornell Lab of Ornithology has been at the forefront of modern citizen science for over 40 years. The lab specializes in bird behavior, biology, and ecology, drawing on close ties with the birdwatching community to gather data. So it's only natural that researchers relying on amateur field observations look to Cornell's successes when developing their own projects.
A quick scan of the group's website shows the wide variety of research performed at Cornell. Each project has different goals and methods for collecting data, but that's an asset when appealing to the wide range of interests and skill levels among citizen scientists. Users can simply visit the site and pick the project that best fits their interests. They can even move between projects as their interests change and their experience grows.
Currently over 200,000 citizen scientists participate in the 7-10 projects available each year. Together they produce a large amount of scientifically useful data, contributing to over 60 peer-reviewed scientific papers in the last ten years. So let's see how Cornell has incorporated the keys to success we've described over the last few weeks:
- Benefit the User: As a university-sponsored lab, educating participants is an important part of every project. All help users identify birds in their areas and offer teaching techniques suited to every learning style. In the Great Backyard Bird Count, observers can use the project spotter guides or join a group to learn from knowledgeable experts; in Project FeederWatch, detailed information is given on the target species; and in eBird, information is available on the participant's mobile phone (just to name a few). Cornell has also begun rewarding participants with game-style "badges" in the Round-Robin project. So they take the extra step to not just educate their participants, but to actively thank and reward them too.
- Engage the User: Cornell's long-standing support for citizen science has created a community of users willing to work on its many projects. The birdwatching community is a loyal one, and the tools created for it build a framework for long-lasting cooperation. People keep coming back not only for this support, but because they can grow together: tools and projects get more advanced as users gain experience. From the moment users join, Cornell engages closely with them and builds a relationship meant to last.
- Trust the User: One of the many things that set the Cornell projects apart from other citizen science efforts is the extensive amount of data provided back to the public. Great examples include opening the eBird tool to users by publishing its APIs (letting others adapt it to their own purposes), the online Breeding Bird Atlas, and the extensive Avian Knowledge Network data sets. So Cornell doesn't just ask participants to collect information; they develop tools to use it and provide access to everyone. This level of trust and transparency not only benefits the scientific community (the more minds the merrier) but reinforces the relationship between amateur and professional scientists. We may have different roles to play, but both are equally valuable, and both can use the data for exciting discoveries.
- Keep it Simple: The wide variety of projects (currently nine, though the number varies over time) lets scientists stay laser-focused on their topics and design projects optimized for the research. This keeps the data accurate, simplifies the project for users, and makes it easy to learn. It also allows users to pick projects that best appeal to their interests and experience level. Embracing mobile technologies also helps make participation convenient. The eBird tool and others let birdwatchers participate on their own terms, providing easy-to-use tools that can go anywhere and can be used not just for Cornell's projects, but for the individual's own purposes.
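To make the "published APIs" point concrete, here is a minimal sketch of how a birdwatcher-programmer might build a request to eBird's public web API. The endpoint path follows the publicly documented v2 "recent observations" service, but treat the details (path, parameters, and the region code shown) as illustrative; a real call also requires a free API token from ebird.org, sent in a request header.

```python
# Sketch: build a request URL for recent bird sightings in a region,
# following the publicly documented eBird API v2 layout. Details are
# illustrative -- check the current eBird API docs before relying on them.
from urllib.parse import urlencode

BASE = "https://api.ebird.org/v2"

def recent_observations_url(region_code: str, back_days: int = 7) -> str:
    """Build the URL for recent sightings in a region (e.g. 'US-NY')."""
    query = urlencode({"back": back_days})  # how many days back to search
    return f"{BASE}/data/obs/{region_code}/recent?{query}"

# An actual request would add the token as the X-eBirdApiToken header.
url = recent_observations_url("US-NY", back_days=3)
print(url)
```

This is exactly the kind of reuse the open API enables: the same data stream feeding Cornell's research can drive a hobbyist's own sightings map or alert script.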
The Zooniverse:
The Zooniverse is a collection of web-based projects that crowdsource routine data analysis tasks to citizen scientists. Participants analyze images from NASA telescopes, transcribe old naval climate records, and listen for patterns in recorded whale songs. Once the analysis is complete, project scientists use this data to identify larger patterns and develop theories to fit the data. New projects are constantly being added as older ones wrap up their data collection and publish their findings.
The Zooniverse team includes scientists from multiple research institutions who have developed a common project interface and infrastructure. While each project is unique, all are built around the same general design of a web-based interface for analyzing the scientific materials. There are no field-based observations or independent studies, just raw data supplied by the researchers and a set of tools for interpreting it.
The Zooniverse also claims to be home to the Internet's largest, most popular, and most successful citizen science projects. Let's see why:
- Benefit the User: The Zooniverse scientists understand their audience well and choose research projects designed to challenge them. This brings in large projects answering big questions (what happens when galaxies collide?!), which are always popular with citizen scientists. But with this comes the need to educate users in the science they are getting involved with, including the theoretical background, the method for gathering data, how it will be measured, and what it ultimately means. It may seem like a side issue to some, but educating your participants is an important way to keep them interested in the project. After all, if they don't understand it, why would they donate their time to it?
- Engage the User: Participant interaction is a key way Zooniverse scientists engage their users. Every project has a way for participants to tag any interesting items they come across. Participants can also start discussions with the project researchers and other citizen scientists as they explore what the data means. Not only does this keep users motivated by showing respect for their opinions, but some of the most interesting findings come from these unlikely sources (as did the often-mentioned Hanny's Voorwerp, discovered by a project volunteer). Another large motivator is continued scientific success, as peer-reviewed papers are published and the discoveries are promoted through the popular press. Not only can people see the fruits of their effort, but they can take pride in the praise, knowing they played an important role in the findings.
- Trust the User: All of the Zooniverse projects "Think Big" by tackling very difficult problems. The work is parceled into small pieces for each user, but the overall scope is quite large. It's not easy to map every set of boulders on the Moon (Moon Zoo), translate the language of whale songs (Whale FM), or discover how galaxies form (Galaxy Zoo: The Hunt for Supernovae). But the Zooniverse is audacious enough to let us try, and their record of scientific papers shows we are up for the challenge. Much of this success is due to their methodology for combining data from the large number of participating users. Although not everyone is an expert, the theory of crowdsourcing says the combined brainpower of many people can overcome the errors introduced by individuals. For example, the Milky Way Project assigns everyone the task of finding "bubbles" in infrared telescope images. Not everyone can find them, and some people may find ones where they don't exist, but the project compensates for that: only bubbles identified by at least five separate users are counted. This weeds out false data and maintains data quality on an ambitious but ultimately successful project.
- Keep it Simple: Last but not least, a huge key to the Zooniverse's success is its ingeniously simple interfaces and user-friendly tutorials. Since all are web-based projects, each has an interface showing an image to be analyzed and a small set of tools for marking the image appropriately. For example, in the Milky Way Project users find the aforementioned bubbles, circle their outlines, and answer a few simple questions about them. This records the finding and allows it to be compared with those from other users. Similarly, the Galaxy Zoo: Hubble project displays images of far-away galaxies and walks users through a series of questions about each galaxy's shape and size. There are many questions, but the guided nature means you don't have to remember multiple things at once...just focus on the task at hand. Making this even easier are the tutorials built into each project. All are short, which helps keep people from being overwhelmed, and each provides full examples of exactly what the user will see at each step. There are often practice sessions too, which show illustrative examples of a phenomenon and ask users to analyze it. If you get the right answer you move on; if not, it provides hints on what you did wrong. Other projects even have short video tutorials where participants can watch an expert walk through each step. So there is no guessing involved when users participate for real...they've already seen the possibilities and can feel comfortable staying involved.
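The five-user rule from the Milky Way Project can be sketched as a simple consensus filter. This is an illustration with made-up user and bubble IDs, not the project's actual pipeline (which also has to cluster overlapping outlines before counting them as the "same" bubble):

```python
# Sketch of crowdsourced consensus: keep only items that enough
# independent users have flagged. IDs below are invented for the example.
from collections import Counter

def consensus_items(classifications, threshold=5):
    """Return the item IDs marked by at least `threshold` distinct users.

    `classifications` maps each user to the items they marked; converting
    to a set means one user tagging the same bubble twice counts once.
    """
    counts = Counter()
    for items in classifications.values():
        counts.update(set(items))
    return {item for item, n in counts.items() if n >= threshold}

# Bubble "b1" is marked by five users, "b2" by two, "b3" by one.
marks = {
    "u1": {"b1", "b2"}, "u2": {"b1"}, "u3": {"b1"},
    "u4": {"b1", "b2"}, "u5": {"b1"}, "u6": {"b3"},
}
print(consensus_items(marks))  # only "b1" clears the 5-user threshold
```

The design choice is the one described above: individual mistakes (missed bubbles, imagined ones) wash out, because a spurious mark is unlikely to be repeated independently five times.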
These are a few of the ways researchers have created scientifically and popularly successful citizen science projects. They certainly aren't the only ones, but I wanted to highlight groups that display a wide variety of approaches. We need to learn from each other to design more successful projects.
But what are your thoughts? Are there other things we should highlight from these two groups? Are there other keys that are important to projects you've participated in? Are there problems with these approaches? Let me know in the comments section below and we'll keep this conversation going!
FOR MORE ON THIS SERIES: