Michael Nielsen has posted a fantastic essay on the Future of Science over on his blog:
Science is an example par excellence of creative collaboration, yet scientific collaboration still takes place mainly via face-to-face meetings. With the exception of email, few of the new social tools have been broadly adopted by scientists, even though it is these tools which have the greatest potential to improve how science is done.
Why have scientists been so slow to adopt these remarkable tools? Is it simply that they are too conservative in their habits, or that the new tools are no better than what we already have? Both these glib answers are wrong. We’ll resolve this puzzle by looking in detail at two examples where excellent online tools have failed to be adopted by scientists. What we’ll find is that there are major cultural barriers which are preventing scientists from getting involved, and so slowing down the progress of science.
*snip*
We should aim to create an open scientific culture where as much information as possible is moved out of people’s heads and labs, onto the network, and into tools which can help us structure and filter the information. This means everything - data, scientific opinions, questions, ideas, folk knowledge, workflows, and everything else - the works. Information not on the network can’t do any good.
Ideally, we’ll achieve a kind of extreme openness. This means: making many more types of content available than just scientific papers; allowing creative reuse and modification of existing work through more open licensing and community norms; making all information not just human readable but also machine readable; providing open APIs to enable the building of additional services on top of the scientific literature, and possibly even multiple layers of increasingly powerful services. Such extreme openness is the ultimate expression of the idea that others may build upon and extend the work of individual scientists in ways they themselves would never have conceived.
And lots more. The whole essay is well worth reading for its provocative insights and well-reasoned arguments. Nielsen is writing a book on the future of science, and it seems pretty likely to me that the themes he explores in this essay - openness, collaboration and networking - will form the backbone of that book. I can't wait to see more!
That being said, it seems to me that the core issue running through a lot of what he talks about is trust and/or/versus incentives. Trust in the sense of trusting that your own openness won't be abused. Incentives in the sense of incentives to be open and to trust others. I think they're really interrelated, and the essay kind of dances around the relationship without really nailing it down.
After all, people need incentives to participate in open peer review, but couldn't a kind of radical trust actually be that incentive? In formal peer review the incentives are pretty obvious. You do peer review for my paper and I'll do peer review for someone else's. It works because everyone knows/trusts that the load is spread around.
With open peer review, people can't trust that the load is spread around, so they're hesitant to jump in and be the first to effectively take on both the formal and informal roles. The incentive to review someone else's paper is the knowledge/trust that my paper will be reviewed too; in open peer review, how do I know someone else will reciprocate my contributions?
It's the same with collaboration markets. What's the incentive to trust the other players in the market? Normally, the incentive to trust someone in a collaborative relationship is reciprocity. Basically, the parties in the collaboration help solve each other's problems with the goal of solving a larger problem in the process. How do you maintain those incentives in an open system?
It seems to me that a lot of the issues around open science / science 2.0 are really about incentives and trust. Solving the issues requires a certain leap of faith. The key point in Nielsen's essay is the line: "The danger of free riders who will take advantage for their own benefit (and to Alice’s detriment) is just too high."
The potential for a tragedy of the commons situation is always there in an "unregulated" system where no one is organizing to make sure contributions to the common good (i.e. performing peer review or volunteering to collaborate) are more or less evenly distributed.
Of course, therein lies the challenge at the heart of the essay: the challenge to build an open community that somehow maintains and builds on the incentive/trust structures in place for Science 1.0 while at the same time being more open, trusting and collaborative. Nielsen has a lot to say about those structures, and keeping the conversation going is the way to help them get built.