
Sunday, November 28, 2010

Soft peer review? Social software and distributed scientific evaluation

Posted on Academic Productivity on February 21st, 2007 by dario.

“The solution I’d like to suggest is that online reference management systems implement an idea similar to that of anonymous refereeing, while making the most of their social software nature. The most straightforward way to achieve this would be, I believe, a wiki-like system coupled with anonymous rating of user contributions. Each item in the reference database would be matched to a wiki page where users would freely contribute their comments and annotations. Crucial is the fact that each annotation would be displayed anonymously to other users, who would then have the possibility to save it in their own library if they consider it useful. This behavior (i.e. importing useful annotations) could then be taken as an indicator of a positive rating for the author of the annotation, whose overall score would result from the number of anonymous contributions she wrote that other users imported. Now it’s easy to see how user expertise could be measured with respect to different topics. If user A got a large number of positive ratings for comments she posted on papers massively tagged with tag “dna”, this will be an indicator of her expertise for the “dna” topic within the user community. User A will have different degrees of expertise for topics “tag1”, “tag2”, “tag3”, as a function of the usefulness other users found in her anonymous annotations to papers tagged respectively with “tag1”, “tag2”, “tag3”.”
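
A minimal sketch of how the per-topic expertise score described above could be computed, assuming hypothetical records of annotation authorship, paper tags, and import events; the names, data layout, and self-import rule below are illustrative assumptions, not part of the original proposal:

from collections import defaultdict

def expertise_scores(annotation_authors, paper_tags, imports):
    """Return {author: {tag: score}}, counting one positive rating per import
    of an author's anonymous annotation, for every tag of the annotated paper."""
    scores = defaultdict(lambda: defaultdict(int))
    for importer, annotation_id in imports:
        author = annotation_authors[annotation_id]
        if importer == author:  # ignore users importing their own annotations
            continue
        for tag in paper_tags[annotation_id]:
            scores[author][tag] += 1
    return {author: dict(tags) for author, tags in scores.items()}

# Toy example: user A's annotation on a paper tagged "dna" is imported twice,
# so A's expertise score for "dna" becomes 2.
annotation_authors = {"a1": "A", "a2": "B"}           # annotation id -> author
paper_tags = {"a1": ["dna"], "a2": ["dna", "rna"]}    # annotation id -> paper tags
imports = [("B", "a1"), ("C", "a1"), ("A", "a2")]     # (importing user, annotation id)
print(expertise_scores(annotation_authors, paper_tags, imports))
# {'A': {'dna': 2}, 'B': {'dna': 1, 'rna': 1}}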


I'd like to suggest a problem with this system: annotations are often used to evaluate the original text upon which they're written. If an anonymous commentator writes an annotation which has little to do with the original text, this might not be obvious to anyone who hasn't read the original article. For someone relying on annotations to find useful information among many articles, an annotation written by a less than competent annotator may be misleading, yet attractive if it is simpler than the complex article it annotates.
An intriguing post! Metadata-based scientific evaluation seems like a suitable alternative to the current peer review process. Thank you!

Saturday, November 27, 2010

Reducing the peer-reviewer's burden

Published at Peer-To-Peer, a nature.com blog, on May 10th, 2010, by Maxine Clarke:
"Nature Chemical Biology ( 6, 307; 2010) asks in its May Editorial: what can be done to reduce the burden on scientific referees while ensuring the continuity and quality of peer review?"
That's a good question, but why limit ourselves to "thinking inside the box"? Is peer review the only quality assurance possible in scholarship?
"Researchers profit from the peer review process in their roles as authors, where it improves their published papers. They also benefit as referees by getting a broad view of leading studies in their field and by enhancing the rigor of their discipline's published literature."
This is very altruistic of them. It seems to me that reviewing papers would be an unwanted burden for most scientists.
"As technological advances have equipped scientists with new tools to probe scientific questions in unprecedented ways, the pace of research has expanded significantly, particularly in interdisciplinary areas. This more competitive landscape has placed increasing pressure on scientists to publish their research in leading journals. More manuscripts are being written and a higher burden for each manuscript to include more and better data puts corresponding pressure on the peer review system. Unfortunately, this is happening in an environment where the 'to do' lists of scientists are already becoming unmanageable."
I think the key term here is competitive. This competition benefits science by driving scientists to strive for high-quality publications. But can't this competition be disconnected from the subjective assessments of any one reviewer? For instance, by tying a paper's fitness for publication to the tendencies of the scientific community as a whole: a sort of group peer review?
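One possible reading of such group peer review, as a rough sketch: a paper is deemed fit for publication once a large enough share of independent community ratings endorse it. The minimum number of reviews and the approval threshold below are illustrative assumptions, not values proposed anywhere in this post.

def group_review_decision(ratings, min_reviews=10, approval_threshold=0.7):
    """ratings: list of booleans, one per community reviewer (True = endorse).
    The paper passes only if enough reviews have accumulated and a
    sufficiently large share of them endorse publication."""
    if len(ratings) < min_reviews:
        return False  # not enough community input yet
    return sum(ratings) / len(ratings) >= approval_threshold

# Example: 9 endorsements out of 12 reviews -> 0.75 >= 0.7 -> fit for publication.
print(group_review_decision([True] * 9 + [False] * 3))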
"Despite everyone's best efforts, the slowest step in the publication process remains the evaluation of manuscripts by anonymous experts. All journals, including Nature Chemical Biology, strive to balance the desires of authors for expeditious review with our need for the high quality referee feedback necessary for making informed editorial decisions. Given these competing demands, the scientific community needs to find ways to reduce the burden of peer review, while making sure that it fulfills its central role in the advancement of science."
A solution for that is brevity. By keeping articles SHORT we can shorten the time needed for their review and, hence, their publication.
"We urge principal investigators to work with their colleagues and institutions to establish formal peer review training in their curricula, focusing on the intellectual aspects of review, such as how to assess the aims and technical merit of scientific studies. However, they also need to include training on the practical matters of how to express constructive criticism clearly in writing and should examine the professional and ethical dimensions of peer review. Such approaches will help young scientists develop peer reviewing skills, which will also shape students' views of how to design and evaluate their own scientific work."
A very good idea. However, we need a system that allows all of these scientists to apply their newly learnt reviewing skills, rather than allowing just a select few to express their opinion on any given paper.

Wednesday, November 24, 2010

NYTimes > Scholars Test Web Alternative to Peer Review

Published in "Scholarship 2.0: An Idea Whose Time Has Come", on August 25, 2010.
"Now some humanities scholars have begun to challenge the monopoly that peer review has on admission to career-making journals and, as a consequence, to the charmed circle of tenured academe. They argue that in an era of digital media there is a better way to assess the quality of work. Instead of relying on a few experts selected by leading publications, they advocate using the Internet to expose scholarly thinking to the swift collective judgment of a much broader interested audience."
How would such a system avoid rating publications according to their writers' popularity, and instead truly rate each publication's merit?
"Mixing traditional and new methods, the journal posted online four essays not yet accepted for publication, and a core group of experts — what Ms. Rowe called “our crowd sourcing” — were invited to post their signed comments on the Web site MediaCommons, a scholarly digital network. Others could add their thoughts as well, after registering with their own names. In the end 41 people made more than 350 comments, many of which elicited responses from the authors. The revised essays were then reviewed by the quarterly’s editors, who made the final decision to include them in the printed journal, due out Sept. 17."

Does this really shift the emphasis from a closed peer review system to an open, web-based mass peer review?

"Clubby exclusiveness, sloppy editing and fraud have all marred peer review on occasion. Anonymity can help prevent personal bias, but it can also make reviewers less accountable; exclusiveness can help ensure quality control but can also narrow the range of feedback and participants. Open review more closely resembles Wikipedia behind the scenes, where anyone with an interest can post a comment. This open-door policy has made Wikipedia, on balance, a crucial reference resource. "

Why do reviewers need to be more accountable? Their responsibilities lie in judging the merit of an article and nothing more. Exclusiveness in reviewing is essential to make sure that qualified people judge the merit of a paper. However, exclusiveness does not limit readership (especially in the case of open access journals). How can we make open review a useful part of the peer review process?

"Advocates of more open reviewing, like Mr. Cohen at George Mason argue that other important scholarly values besides quality control — for example, generating discussion, improving works in progress and sharing information rapidly — are given short shrift under the current system. "

Generating discussion can be measured using citation counts (the basis of metrics such as the impact factor): the number of times a paper is cited by other papers indicates that a process of discussion exists, albeit a very slow one, given the pace at which papers are published.
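For reference, the standard two-year journal impact factor, which aggregates exactly these citation counts at the journal level, is computed as

\[
\mathrm{IF}_y = \frac{C_y(y-1) + C_y(y-2)}{N(y-1) + N(y-2)}
\]

where C_y(x) is the number of citations received in year y by items the journal published in year x, and N(x) is the number of citable items the journal published in year x.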

"To Mr. Cohen, the most pressing intellectual issue in the next decade is this tension between the insular, specialized world of expert scholarship and the open and free-wheeling exchange of information on the Web. “And academia,” he said, “is caught in the middle.”  "

Would expert scholarship still count as expert if so many non-specialists are involved in generating it?