One of the best trends in science outreach over the past decade has been the emergence of ‘citizen science’ – the idea that the layperson can actually contribute to real science. But I find myself dissatisfied with the current state of the art. The public are used, let’s be honest, as pack mules: one step up from SETI@home, which used their computers, and a step to the side from Amazon’s Mechanical Turking.
The majority of research papers published using this crowdsourcing model of citizen science do not have many – or any – members of the public as co-authors. The famous “Hanny’s Voorwerp” paper is the counter-example that proves the rule. Perhaps there is a feeling that the contributing public don’t deserve credit for such menial work. But many of these people will have spent hundreds of hours over the course of a year sifting through the data, which is more than I have done on some of the papers I have co-authored; and I am certainly not unique in that regard. To be honest, it reminds me a little of the “computers” of Hubble’s time: women who got little or no recognition for the important work they did. As an aside, this happened even in the 1950s, as the Fermi-Pasta-Ulam paper showed.
The low level of intellectual expertise required by many current citizen science programs was brought home to me when I was able to train an (admittedly smart) six-year-old to classify roughly as well as me in about ten minutes on Galaxy Zoo.
Is that as much as the interested public can give us? I don’t think so.
I certainly don’t mean to overly critique the ground-breaking Zooniverse model. I just think that so much more is possible. A lot of this is based on my experience with the JEDI workshops. But it is also inspired by wonderful examples of published research conducted by children as young as eight.
I think we can add choice to the citizen science portfolio by filling the desert in the spectrum of large-scale human research that lies between professional researchers and current citizen science.
I recently watched the animated film Ratatouille again, and its main theme – “Anyone can cook!” – struck a chord with me. I feel like going out on a limb and saying “Anyone can do research!” It might not be great, or even very good, but I think anyone can do research that engages them in scientific ideas and the scientific method and that is significantly more than pack-mule mechanical turking. And if teaching people to think scientifically is our goal, then even if not a single paper were ever published, we would have succeeded.
So what am I proposing? I propose a website which operates on the following basic principles:
– Guidelines: the website is community-run along the lines of the scientific method. Users must abide by the guidelines if they want to sign up and keep their accounts. No flaming, etc.
– Object Upload: anyone can contribute an object. The object can be a piece of writing, an idea, a calculation, a piece of code, a figure, etc. This relates to an earlier posting of mine.
– Collaboration: anyone can use any previously uploaded object for any purpose (assigning appropriate credit, as per e.g. a GNU license). They can modify, combine, resample, adapt, extend or alter any uploaded object. Teams can work together on projects: sets of objects with a common purpose.
– Community Moderation: users can rate, like and vote up good objects or projects, and users can debate, comment on, etc. any object or project, providing community refereeing. In a way it could be like Wikipedia, with its debate pages and conflict-resolution processes.
– Science Interface: there is a core of PhD scientists who voluntarily give comments and suggestions on objects or projects and moderate debates regarding the guidelines of the community. In particular, they may suggest interesting projects to work on or may act as internal referees before papers are submitted to standard journals.
– Publication: products can be submitted formally to a standard journal or the arXiv at any time by group consensus, and anyone who has contributed to a project or object has the right to be a co-author or to have their name removed from anything using the project.
– Totally Open: unlike most science currently done, all components of a project would be open to scrutiny by anyone. All code, all calculations. All reproducible.
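The credit rule implicit in the Collaboration and Publication principles – that authorship propagates through every object a project builds on – can be sketched in a few lines. This is only a hypothetical illustration of the idea, not a design for the site; all class, field and user names here are invented.

```python
from dataclasses import dataclass, field

@dataclass
class ResearchObject:
    """A contributed item: a piece of writing, an idea, code, a figure, etc."""
    title: str
    author: str
    kind: str                                    # e.g. "idea", "code", "figure"
    sources: list = field(default_factory=list)  # objects this one builds on
    ratings: list = field(default_factory=list)  # community votes, e.g. 1-5

    def contributors(self):
        """Everyone with a claim to credit: this object's author plus,
        recursively, the authors of every object it derives from."""
        names = {self.author}
        for src in self.sources:
            names |= src.contributors()
        return names

    def score(self):
        """Simple community rating: the mean of all votes (0 if none)."""
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0

# A figure derived from one user's code, which in turn built on another's idea:
idea = ResearchObject("Lensing hypothesis", "alice", "idea")
code = ResearchObject("Fitting script", "bob", "code", sources=[idea])
figure = ResearchObject("Figure 1", "carol", "figure", sources=[code])

figure.ratings.extend([5, 4, 5])
print(sorted(figure.contributors()))  # ['alice', 'bob', 'carol']
```

The point of the recursive `contributors()` walk is that credit cannot silently be dropped: anyone whose object fed into the final product appears in the author list unless they ask to be removed.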
If I had to guess what the criticisms of colleagues would be, they would probably include:
- Cranks. We already have to deal with crazy people suggesting Einstein made a simple mistake or, as one email I received claimed, that the author had taken a photograph of a wormhole! Won’t the site be overrun by cranks? So what? If their objects have no value, they will simply be ignored because of low ratings, just like on the Matlab community website, for example.
- Quality Control. Surely a bunch of laypeople cannot produce research of the same quality as “real” scientists? Perhaps not individually. But together, I am not so sure. In the Zooniverse, quality is achieved by cross-correlating many people’s classifications. The same isn’t possible with the complex, analogue nature of general research. But because all research objects would be completely open, one could demand that every result be reproduced by at least two different users.
- Real research is hard. Most people are not interested in hard work. True. But we are talking about reaching and engaging a small section of the public who are fascinated by science and want to be a part of this incredible quest. We are talking about the people who inspired many of us “real” scientists to take up science: mothers, fathers, uncles… The people who don’t want to take part can simply continue to donate their spare CPU cycles or do pattern matching. This is about adding choice at the top end of the citizen science spectrum.
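The quality-control idea above – a result only counts once it has been independently reproduced – is simple enough to state as a rule. A minimal sketch, with an invented function name and a threshold of two reproducers chosen purely for illustration:

```python
def is_verified(author, reproduced_by, min_independent=2):
    """A result counts as verified once at least `min_independent` distinct
    users, excluding the original author, have reproduced it from the
    openly available code and calculations."""
    independent = {user for user in reproduced_by if user != author}
    return len(independent) >= min_independent

# The author re-running their own analysis does not count:
print(is_verified("alice", ["alice", "bob"]))   # False
print(is_verified("alice", ["bob", "carol"]))   # True
```

Because every object in a project is open, checking this rule is cheap: any user can rerun the code and register a reproduction, and the community can see at a glance which results still rest on a single pair of hands.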
I am the first to admit that this just might not work. Perhaps it is far too ambitious but I don’t think so. Or rather, I think there is a framework which would allow it to work.
I remember when I switched back to doing observational astronomy as part of the SDSS-II supernova survey. With great trepidation I joined the hand-scanning team, looking for new supernovae in data taken only a day or two earlier on the other side of the world at APO. I was so concerned about doing this right that I wrote a guide which later formed part of the hand-scanning guide.
This was good research, I thought. A couple of years later, the Galaxy Zoo Supernova project showed that untrained members of the public could find supernovae as well as I, or any professional for that matter, could! Perhaps that is one of the reasons we “real” scientists haven’t really engaged citizen science at the highest intellectual levels: fear of competition, and of being matched by those who didn’t spend years sweating for the PhD!
Update, 7 March 2012: Carolina has pointed me to the website Figshare. While it appears to be aimed at the professional researcher, it is not too far off what I have been proposing, although it is not clear that groups of subscribers could collaborate together online to produce something bigger than the component objects, which is key. There is also ResearchGate, a social network site for science and research.