Open Peer Review: Fast Forward for a New Science

Peer review has been with humans for a long time. Its effective inception dates back to the information overload that followed World War II, which imposed a quantitative and qualitative screening of publications. Peer review has been beset by accusations and criticisms, arising largely from the biases and subjective aspects of the process, including the secrecy in which it came to be conducted as standard practice. The advent of the Internet in the early 1990s provided a way to open peer review up, to make it more transparent, less iniquitous, and more objective. This chapter investigates whether this openness has led to a more objective manner of judging scientific publications. Three sites are examined: Electronic Transactions on Artificial Intelligence (ETAI), Atmospheric Chemistry and Physics (ACP), and Faculty of 1000 (F1000). These sites practice open peer review, wherein reviewers and authors, and their reviews and rebuttals, are available for all to see. The chapter examines the different steps taken to allow reviewers and authors to interact and how this allows the entire community to participate. This new prepublication reviewing of papers has, to some extent, alleviated the biases that were previously preponderant and, furthermore, seems to give positive results and feedback. Although recent, these experiences seem to have won scientists' acceptance, because openness allows for a more objective and fair judgment of research and scholarship. Yet it will undoubtedly raise new questions, which are also examined in this chapter.


I. Introduction
Peer review has been with us as long as human beings have tried to communicate. We all try to get approval from others just by talking, moving, and undertaking actions. If this kind of approval is sought unconsciously on a daily basis, science, on the other hand, seeks such approval explicitly, relying heavily on peer review to advance on a solid and agreed-upon basis. Scientists and researchers advance and are promoted on the basis of their work: their papers, chapters, books, inventions, and research. It is like an intellectual competition in which only the brightest, who present the most valuable works, are rewarded. This is why editors, key players in the journal publication process, and referees have been called "the linchpin about which the whole business of Science is pivoted" (Ziman, 1968, p. 111) or the "Gatekeepers of science" (Crane, 1967, p. 195). All of this shows the importance and ineluctability of peer review in the conduct of science. Consequently, and also as a result of it, peer review has been subjected to a wide and quarrelsome body of literature, most of it criticizing its implementation. Among the numerous issues mentioned are the inadequacy of reviews, the slowness of the process, the rejection of innovative results, generally conservative biases, and the secrecy in which reviews were conducted in a paper-oriented and pre-networked world.
When one says secrecy, one thinks automatically of impunity and cronyism. One thinks also of a possible old-boy network exchanging favors to advance each other's agendas. All this has changed with the advent of networks, mainly the Internet, allowing for increased openness and less manipulation. Peer review, as a result, can be performed practically live on the Internet by whole communities of researchers rather than by chosen referees. Has this new way of undertaking peer review changed how it is conducted? Has it made it more equitable, just, and fair? Does the openness the Internet allows bring only advantages? This open process is quite recent and still at an experimental stage, with only a few sites and domains having undertaken it. Among these, Electronic Transactions on Artificial Intelligence (ETAI), Atmospheric Chemistry and Physics (ACP), and Faculty of 1000 (F1000) have been highly active in promoting complete openness of the process. This chapter presents these three sites as prototypes of what peer review could look like in the future, and of what has, in fact, already begun in some domains and areas of research.

II. Peer Review: A Brief History
The inception date of peer review as we know it today is difficult to establish, although The Philosophical Transactions, considered to be the first scientific journal, explicitly speaks of peer review in an excerpt of the minutes of March 1, 1665, stating "The Philosophical Transactions to be composed by Mr. Oldenburg … being first reviewed (emphasis added) by some of the members of the same" (Beaudry, 2011, p. 129).
The Royal Society of London is frequently assigned credit for having introduced the concept of refereeing or reviewing scientific manuscripts for publication in 1752. At that time, the Society finally, after almost 100 years of its existence, took over fiscal responsibility for The Philosophical Transactions. It established what they called a Committee on Papers, whose function it was to review all articles that were published in The Transactions.
The new regulation stipulated that five members of the committee would constitute a quorum. It also provided that the committee could call on "any other members of the Society who are knowing and well skilled in that particular branch of Science that shall happen to be the subject matter of any paper which shall be then to come under their deliberations" (Kronick, 1990, p. 1321). Along the same lines, Kronick, an expert on the first scientific journals of the 17th and 18th centuries, wrote: Some current editorial practices, such as peer review, began in the methods these early societies devised for accepting communications for publication. Booth argues that the Royal Society of London first "introduced the concept of refereeing" in the middle of the 18th century by setting up a committee to review all papers before they were published in the Philosophical Transactions. There were, however, many antecedents to this practice. Oldenburg (the first editor of Philosophical Transactions) screened communications for presentation to the Society, but after the papers were read, they were "ordered to be reviewed by several of the Fellows." The Académie des Sciences in Paris, early in its history, established select committees to determine whether a member could or could not publish under its auspices. The peer review process almost as we know it today is described in the preface to the French edition of the Medical essays and observations published by a Society in Edinburgh in 1731. Papers submitted, it informs us, are distributed according to their subject content to those members of the society who are more versed in these matters for their review. It also specifies that the identity of the reviewer is not made known to the author, an early example of the controversial anonymous reviewer.
The Société Royale de Médecine, soon after its institution in 1776, inaugurated a system by which two members examined each paper submitted to the society and provided the other members with a summary and critique. Validation of scientific work through review and discussion was in fact a major function of early scientific societies. (1984, pp. 869–870) This imprecise and somewhat contradictory dating becomes even more imprecise if one considers Spier's (2002) quotation describing a doctor whose practice was reviewed by some sort of local group of physicians. The process is described in the following way: This work, and its later variants or manuals, requires that it is the duty of the visiting physician to make notes of the condition of the patient (in duplicate, with one copy staying with the patient) on the occasion of each encounter. When the patient had been cured or had died, the notes of the physician would be examined by a local council of physicians who would adjudicate as to whether the physician had performed according to the standards that then prevailed. On the basis of their rulings, the practicing physician may or may not have been sued for damages by a maltreated patient. (pp. 1–2) The reason Kronick (1990) and Spier (2002) date the inception of peer review so differently is that, while they describe the same process, it is applied to different settings. When Spier cites the Ethics of the Physician, he speaks of how peers judge one of their own in the practice of medicine. Kronick, on the other hand, speaks specifically about the publication of papers and research in scientific journals. It is this kind of refereeing and reviewing that modern research has undertaken and studied. Moreover, it judges the value, acceptability, and scientific rigor of a particular piece of research, which is the point of the following discussion.
If peer review is now the standard means of judging science and research, a screen, both quantitative and qualitative, to select the best for publication, this was not the case in the beginning, when the differences between books and journals were not as clearly distinguishable as they are today. This confusion resulted from the newness of the medium as well as from the slowness of its progression in replacing books. According to Bazerman (1988), "The appearance of the scientific journal in 1645 [sic] did not immediately displace books as the primary means of communicating scientific findings. Books remained the more substantial source for scientific information for many years, interacting with the emerging journals" (p. 80).
On the same subject, Meadows (1974) states that "major research continued to be written up in monograph form throughout the eighteenth century, but the habit began to die out in the nineteenth century, at least among the physical sciences" (p. 67). This is quite consistent with Kuhn (1962), who details how scientific revolutions proceed through three phases (paraphrased here):
• The pre-paradigm phase, in which there is no consensus on any particular theory and several incompatible and incomplete theories coexist;
• The normal science phase, when a group agrees on a particular set of ideas and a framework of thought, leading to a consensus; and
• The revolutionary science phase, when the underlying assumptions of the field are reexamined and a new paradigm is established.
The scientific journal went through these phases when the previous revolutions (printing being the foremost) became obsolete. The printed book, which represented an extraordinary advance compared to manuscripts, soon became outdated when journals appeared and gradually superseded books in their speed of publication.
Although peer review nowadays acts as a quantitative and qualitative sieve to publish the best research, this was not always the case. At the beginning, the scientific journal was not as important as it has since become. Its newness, competition from books, the lack of research material, and the weak educational levels of the time made editors, or whoever performed that function then, look for material to fill up their journals. As Spier (2002) explains: One should notice though that peer review was not as important or even implemented with the first steps of the scientific periodical in mid-17th century. That period saw the journal space outstrip, and by far, the number of submissions … From the mid-1800s, there was more journal space than articles to print. When journals set up a board of assistant editors, their primary responsibility was to elicit articles and reviews to fill the pages of the publication. Peer review for the next 100 years consisted of the editor's opinion fortified when necessary by special committees, set up by societies to assess incoming manuscripts. (p. 3) One could say that peer review as we know it today was not implemented linearly but rather on a case-by-case basis. The first journals implemented it, but neither uniformly nor across the board; they carried it out whenever they could. If its implementation became unanimously accepted much later, the reason was the extraordinary explosion of information observed after World War II.

III. Information Overload: A Prerequisite of Modern Peer Review
If this is not peer review as we know it today, it represents at least an embryo of what would become a sine qua non condition of published scientific research. The reason for its launch and unanimous implementation was the exponential and tremendous research effort undertaken by the United States and Europe to rebuild what the war had destroyed. Price (1963) documented the issue in a seminal, now classic, book as early as the 1960s. Among his most striking statistics: between 80% and 90% of all the scientists who had ever lived were still alive in 1963; the gross size of science, whether measured in workforce or publications, doubled every 10–15 years; and the number of scientific journals was around 50,000, of which 30,000 were still publishing some 6,000,000 articles, increasing by some 500,000 articles per year. His last interesting statistic was that there were at that time around 1,000,000 scientists in the United States, a figure that had grown from 1,000 in 1800 to 100,000 in 1900, reaching a million in the 1960s. These figures are compounded and confirmed in the United States President's science report (The President's Science Advisory Committee, 1963). In this report, the Committee details how the scientific community sought to deal with the issue. For example, it found that Chemical Abstracts held 54,000 abstracts in 1930, which rose to 165,000 in 1962, and estimated that by 1970 the figure would reach 200,000. Another source estimated that the four biggest bibliographical databases (Chemical Abstracts, Biological Abstracts, Excerpta Medica, and MEDLARS) would exceed 200,000 abstracts (Loosjes, 1973). In a similar vein, the number of abstracts and journals was estimated in 1963 at 1,000,000 and 100,000, respectively (Shilling, 1963). Finally, another estimate put the number of journals in science and technology at 41,000 and the number of articles at 1,000,000.
All the other disciplines would add up to 1,000,000 articles (Bourne, 1962). This "Operation Deluge," as the Navy's librarian described it in the 1950s (Loosjes, 1973), hastened the implementation of ways to manage it. Besides the various programs geared toward managing this information overload, peer review was, and remains, a rather qualitative type of selection. It selects the best, or at least classifies and ranks submissions to be published, avoiding a glut that was not, and still is not, manageable.

IV. Modern Peer Review
As it was difficult to determine exactly when peer review was first implemented in its early forms, it is similarly difficult to date its implementation in its modern, emerging form. Generally, modern peer review began when editors began sending manuscripts to external referees. Most refereeing in the 19th and early 20th centuries was done internally, by members of editorial boards if not by the editors themselves. This was possible because the quantity of research to be reviewed was still relatively manageable and the extreme specialization that characterizes science today had not yet set in. Weller (2001), in a comprehensive retrospective account of peer review, dates the first occurrence of modern peer review to 1942, when The Journal of Clinical Investigation began using editorial peer review "and the editor, Gamble, instituted the policy of sending papers to experts outside the editorial board for evaluation" (p. 4). Burnham (1990) reinforces this, saying: Practically no historical accounts of the evolution of peer review exist. Biomedical journals appeared in the 19th century as personal organs, following the model of more general journalism … . The practice of editorial peer reviewing did not become general until sometime after World War II … Editorial peer review procedures did not spread in an orderly way; they were not developed from editorial boards and passed on from journal to journal. Instead, casual referring out of articles on an individual basis may have occurred at any time, beginning in the early to mid-19th century. Institutionalization of the process, however, took place mostly in the 20th century, either to handle new problems in the numbers of articles submitted or to meet the demands for expert authority and objectivity in an increasingly specialized world. (p. 1323)
This is a clear indication that peer review is of recent implementation in both its quantitative and qualitative functions. While modern peer review basically dates to the mid-20th century, after World War II, it has been applied neither uniformly nor with the approval of all those who were supposed to implement it. A number of editors used peer review unevenly and in no orderly manner. Ingelfinger, the well-known editor of The New England Journal of Medicine, maintains that the first editor of The American Journal of Medicine decided on the vast majority of submissions himself and gave an answer within 1 or 2 weeks of a manuscript's receipt (Ingelfinger, 1974). In a study published in 1963, almost 25% of the journals surveyed did not clearly use peer review: 16% did not use it at all and 8% gave equivocal answers. The sample consisted of 156 well-known scientific journals from 10 countries "where research is considered good" (Porter, 1963, p. 1014). The editor of The American Journal of Psychiatry from 1965 to 1978 recognized that peer review was not used prior to his tenure, although he did implement it during his years as editor (Braceland, 1978). As late as 1977, the editor of The Lancet questioned the viability of peer review in these terms: "I am a convinced opponent of routine peer review of articles. The experts' pronouncements tend toward cautious conservatism; they are not invariably beyond misplacing the big with the bogus …" (Douglas-Wilson, 1977, p. 877). In a 1989 editorial, The Lancet went even farther, claiming "that in the United States, far too much is being demanded of peer review … peer review works best when you do not ask too much of it" (Peers reviewed [Editorial], 1989, p. 1116). All these pronouncements indicate an uneasy situation pertaining to the place and importance of peer review in science.
In clearer terms, peer review is seen from two angles: that of the author and that of the reviewer. The first tries to publish; the second tries to choose the best and most acceptable manuscripts for publication. According to Spier (2002): The peer-review process is a turf battle. What knowledge, science or doctrine may appear in the realm of the published is the prize to be won. On the one side, we have the writers and originators of ideas, on the other, we have the gatekeepers and critics. (p. 1) This competition has resulted in a keen antagonism that has marred the publishing scene and made it a source of contention, one that has at times boiled over into a nasty business that does not honor science at all. Literature on the subject is led by authors who feel they have not been treated fairly and who think they are unjustly kept out of the publishing game. It is well known that careers hinge on the famous publish-or-perish syndrome, which makes scholars scramble for publication to attain tenure, advantageous jobs, financing, and the other perks that come with publication. This competition has led to epic exchanges between scholars, as shown in the following section.

A. Bias in Peer Review
If peer review has been beset by a war of words between the protagonists of the publishing game, one issue has focused the complaints scientists direct at the process: biases seem to be inherently embedded in the review process, or at least are part of it. One could say that bias is normal, since reviewing is a human undertaking and not a mechanical one. Humans tend to react according to their beliefs, feelings, fondnesses, tendencies, and so forth. They have agendas, goals, and orientations that may not be compatible with objectivity. Starting from this, an abundant literature has tried to untangle the intricacies of bias in peer review. Lee, Sugimoto, Zhang, and Cronin (2013) indicated that: In the context of quantitative research on bias in peer review, it is understood as the violation of impartiality in the evaluation of a submission. We define impartiality in peer evaluations as the ability for any reviewer to interpret and apply evaluative criteria in the same way in the assessment of a submission. That is, impartial reviewers arrive at identical evaluations of a submission in relation to evaluative criteria because they see the relationship of the criteria to the submission in the same way. (pp. 3–4) According to this definition, therefore, peer review hinges strongly on the impartiality of reviewers, and bias on its absence or insufficiency. Mahoney (1977) speaks of confirmatory bias, "which is … a tendency for humans to seek out, attend to, and sometimes embellish experiences which support or 'confirm' their beliefs" (p. 1). Gilbert, Williams, and Lundberg (1994) investigated the influence of gender on publication and acceptance and found no apparent effect on the final outcome of the peer-review process or on acceptance for publication. In another study, the outcome was completely different: double-blind review was associated with an increase in the acceptance of female first-authored papers in the journal Behavioral Ecology, leading to the conclusion that review that was not blinded was biased against female authors (Budden et al., 2007). In the same vein, a study by Einav and Yariv (2006) found a correlation between surname initials and promotion, tenure, and nomination to prestigious awards. The authors suspected that this alphabetical discrimination was linked to the norm in the economics profession prescribing alphabetical ordering of credits on coauthored publications. The same analysis was replicated as a test in 35 top North American psychology departments, and no relation was found between alphabetical placement and tenure status.
In a controversial study, Link (1998) found that "reviewers from the United States and outside the United States evaluate non-US papers similarly and evaluate papers submitted by U.S. authors more favorably, with U.S. reviewers having a significant preference for American papers" (p. 8). On the often-asked question of whether the Science Citation Index (SCI) is biased toward US publications, one study found no support for the contention, concluding that … no significant correlation has been found between the ratio of the average number of citations per publication for publications with at least one EU address and at least one US address, respectively, on the one hand, and, on the other hand, the ratio of the corresponding number of publications per journal. (Luwel, 1999, p. 549) Surprisingly, Smart and Waldfogel found (among other results) a bias in favor of low-status institutions in a study of seven major economics and finance journals in the United States for the years 1980–1985 (NBER Working Paper Series, 1996). Are articles published in so-called "A" list journals better than those in less prestigious journals? It seems that this is not substantiated by the results of Starbuck's (2005) study. In a rather bold move, the proposed "as-is" journal model would shorten the process and, more importantly, let authors own their ideas in the publication. It would also reduce the publication decision to accept or reject, letting an article's fate be determined in a single round of review (Tsang & Frey, 2007).
All of these criticisms, problems, and shortcomings were supposed to be addressed by peer review, which is an important and undeniably unavoidable part of the scientific enterprise. The advent of the Internet, however, has opened a new way of reviewing scientific research in total openness, which only a few have so far undertaken.

V. Open Processes
A. Sol Tax: The First Open Peer Reviewer?
The idea of opening the peer-review process has long been present, even while secrecy remained preponderant. The reasoning went this way: if secret review allowed all these shortcomings to develop, openness would simply be the perfect antidote. With the review conducted in the open, reviewers would not dare act inappropriately. Authors would be able to see reviewers' remarks, acquaintances, and potential conflicts of interest. They would be able to judge reviewers' first-hand knowledge of topics, and any foul play could be sanctioned live on the network. The first experiment with an open peer process took place in 1959, when Sol Tax founded Current Anthropology. He launched what he called a social experiment: an academic journal configured "to exchange and pool ideas, information, research materials and new knowledge. We shall review for one another the major results of past research, as a basis for more fruitful intercommunication on current developments" (Tax, 1959a, p. 3). He explains his reviewing manner in the following words, reproduced in extenso because it was truly pioneering for its era, and remains so even today: … because manuscripts will vary so in approach, compass, and complexity, the editors may handle them in a variety of ways: 1. Some manuscripts may be read by a few referees, and accepted and published.
2. A paper may be an important nucleus for intercommunication among specialists in the area covered by that paper. This should serve as a technique for combining the advantage of symposia (without having to travel) with the advantage of the kind of discussion found in the Letters to the Editors (without having to wait); for bringing specialists together, for pooling capabilities in areas which are increasingly difficult for one person to cover single-handed, and for drawing in people at the borderline of our science. In the case of such manuscripts, after a paper has been read and provisionally accepted here, it will be duplicated and sent to a list of readers. This list will include names suggested by the author and will have two general categories of people: (a) Readers who are also experts in the area under consideration. They may add material, argue the interpretation, or say nothing. In every case, the author will see the readers' comments and advise us on the best way to handle each reply; by incorporation in the original (with acknowledgement); by inclusion (with appropriate rejoinder); or however seems best. Thus, in one issue we shall have the core statement, the additional relevant information, the principal argument, and the rebuttal.
(b) Readers whose interest lies at the edges of the material covered by the paper but to whom it is not so central. For example, people who approach the material either as a part of a larger whole, or as the whole of which they are primarily concerned with the parts. Thus, we shall have an inclusive and expanding framework and an opportunity to learn from other sciences and to share our findings with them. (Tax, 1959b, p. 8) This is the kind of open peer review that some sites are practicing currently, with one notable difference: it was done without the Internet. And this undoubtedly gives Sol Tax the title of father of open peer review, a title all the more deserved given that the extreme openness and connectivity that characterize today's world were not available in his era.

VI. Internet Era's Two Pioneering Experiences
Founded respectively in 1978 and 1996, Behavioral and Brain Sciences (BBS) and Psycoloquy represent two instances of open peer review, with different outcomes. Psycoloquy is no longer functioning, while BBS can be considered the prototype of open peer-review processes.

A. Psycoloquy
This was to be an electronic counterpart of BBS according to Stevan Harnad (personal communication, August 5, 2014), but it was suspended, with Carr restoring its archive, which is meant to remain permanent. Psycoloquy became financially unsustainable, unlike BBS, which was funded by subscriptions. As of now, only an old home page appears at http://www.cogsci.ecs.soton.ac.uk/cgi/psyc/newpsy.

B. Behavioral and Brain Sciences
This represents one of the most successful and innovative journals to implement open peer review. Its home page (http://journals.cambridge.org/action/displayJournal?jid=BBS) presents the journal as follows: BBS is the internationally renowned journal with the innovative format known as Open Peer Commentary. Particularly significant and controversial pieces of work are published from researchers in any area of psychology, neuroscience, behavioral biology or cognitive science, together with 10 to 25 commentaries on each article from specialists within and across these disciplines, plus the author's response to them. The result is a fascinating and unique forum for the communication, criticism, stimulation, and particularly the unification of research in behavioral and brain sciences from molecular neurobiology to artificial intelligence and the philosophy of the mind. (BBS home page) Particularly significant and controversial pieces of work singled out for open peer commentary are known as "target articles." The process is explained in the journal's Instructions to Target Article Authors as follows: If a manuscript is judged by BBS referees and editors to be appropriate for Commentary (see Criteria below), it is circulated electronically to a large number of potential commentators selected (with the aid of systematic bibliographic searches and e-mail Calls for Commentators) from the BBS Associateship and the worldwide biobehavioral science community, including individuals recommended by the author.
On the same page, under Criteria for Acceptance, it goes on to explain these criteria: To be eligible for publication, a paper should not only meet the standards of a journal such as Psychological Review or the International Review of Neurobiology in terms of conceptual rigor, empirical grounding, and clarity of style, but the author should also offer an explicit 500 word rationale for soliciting Commentary, and a list of suggested commentators (complete with e-mail addresses). (http://journals.cambridge.org/action/displaySpecialPage?pageId=5544) As one can see, the process is rather selective: only articles that have already passed the sieve of traditional peer review and been judged worthy of open commentary go through this rather unique process. The result, an article published along with the commentaries it has elicited and its author's rebuttals, if any, gives new meaning to peer review and opens it to the whole community in a completely transparent manner.

VII. Three Examples of Open Peer Review
The above-mentioned examples could all be considered variants of open peer review, each with its specificities but lacking the cement that now makes the opening of peer review possible. Some, such as Current Anthropology, or even BBS when it was founded in 1978, did not have the advantage of the Internet, with all its features, capabilities, speed, and ubiquity. These features now enable a new way of reviewing science, geared toward making it less iniquitous and unfair. The three examples presented below have implemented openness in peer review, but in different ways.

A. Electronic Transactions on Artificial Intelligence (ETAI)
The site http://www.etaij.org/ presents ETAI as "organized to make the best use of Internet technology, in particular by using a new and different peer-review system [emphasis added]" (ETAI home page). It goes further in the "How ETAI works" tab, giving the characteristic features of ETAI:
• It provides a process for open discussion about articles and feedback to authors before an article is accepted. This discussion is shown and preserved on the ETAI web site, and participants in the discussion are not anonymous.
This rather lengthy and sometimes complex process can be summarized in the following steps.
Once one clicks on "Annual journal volumes" on the left-hand side of the page, a figure detailing the different volumes of the journal and their contents appears. Upon clicking on the chosen issue (Vol. 2, 1998 was chosen as a working example), the following information is displayed:

• Volume and year
• Editor
• ISSN (printed and electronic versions)
• ETAI webpage

Articles are then displayed. In the example there were four articles, the first of which was "The Complexity of Model Checking in Modal Event Calculi with Quantifiers" by I. Cervesato, M. Franceschet, and A. Montanari. The "official citation" link under the article leads to another page, which is the main access to the submission, made up of three distinct parts:

• Text in Postscript
Preamble and Body: this part is unusable because it leads to a format that is no longer supported and therefore cannot be exploited.

• Publication Record
Cover Page: this is the metadata of the article, including Full text, Authors, Article title, Publication type, Volume, Article number, Language, and Abstract. Under "available" are the initial date of submission and the dates of subsequent revisions. For example, the Cervesato et al. article was submitted on December 19, 1997, with a first revision on March 28, 1998 and a second revision on July 29, 1998.

• Review Discussion
The Interaction Page and Further Links parts of the web site are the most important, as they detail the reviewing process live for everyone to see. Discussion about the paper is open to any person with an Internet connection, and the exchange between authors and reviewers can be followed as questions are asked and answers given. For example, question "Q1a" (by Paolo Liberatore) is answered by one of the authors (Montanari) in "A1a." The six questions by three different scientists (Paolo Liberatore, Peter Jonnson, and Rob Miller) are all answered in the same manner, in complete openness. Examination of the whole process indicates a very lively and open discussion, with authors and open reviewers exchanging thoughts and ideas, referring each other to papers, links, and mathematical formulas to explain either their methodology or their view of how the subject should be dealt with.
This first phase of the reviewing process is completely open. It is followed by a more traditional phase where the referees recommend acceptance or rejection and, unlike the first phase, they do it anonymously. Additional remarks are welcome and published with the article regardless of the outcome of the decision.
The whole process is summarized in Fig. 1. ETAI ceased operation in 2006 for numerous reasons explained by its editor, Sandewall (2012). He explains that the structure of Artificial Intelligence (a federation of specific research areas) led to the creation of different areas inside the journal. One problem of this extreme specialization was that a paper that did not fit into a specific area could not be submitted. Sandewall recognizes that he should have built the computational software before starting the endeavor, but his eagerness to begin a new experience prevailed. The added editorial work involved led to exhaustion and was probably one of the factors behind the discontinuation of the journal after a few years of relatively successful existence. He adds that merely posting discussions on the site did not make it take off, and some of the blame may lie in the way discussions were launched. Finally, Sandewall notes that, in retrospect, he would have scaled up the approach by having a complete set of rules applying to all the different areas of Artificial Intelligence, as well as having a computational structure ready before the start. Besides these reasons, it seems to this author that the date of ETAI's founding could be another reason for its faltering: that date coincides with the first steps of the Internet, and such revolutionary ideas may have doomed the experience. On the other hand, peer review is known to be very conservative, and introducing such changes was, and still is, rather courageous. Despite all that, ETAI's experience represents a highly original and open forum for reviewing, all the more original in that it combines a two-step reviewing system. The first step shows the process of live discussion between authors and reviewers; the second is closer to traditional peer reviewing, but it relies heavily on the first, where most of the reviewing is done.

B. Atmospheric Chemistry and Physics
Atmospheric Chemistry and Physics (ACP) (http://www.atmospheric-chemistry-and-physics.net/) is described as an "interactive open access journal of the European Geosciences Union," an "international scientific journal dedicated to the publication and public discussion of high quality studies investigating the Earth's atmosphere and the underlying chemical and physical processes. It covers the altitude range from the land and ocean surface up to the turbopause, including the troposphere, stratosphere and mesosphere" (ACP home page). It presents its peer-review process as "… an innovative two-stage publication process involving the scientific discussion forum Atmospheric Chemistry and Physics Discussions (ACPD), which has been designed to foster and provide a lasting record of scientific discussion; maximize the effectiveness and transparency of scientific quality assurance; enable rapid publication of new scientific results [and] make scientific publications freely accessible. In the first stage, papers that pass a rapid access peer-review are immediately published on the Atmospheric Chemistry and Physics Discussions (ACPD) website. They are then subject to Interactive Public Discussion, during which the referees' comments (anonymous or attributed), additional short comments by other members of the scientific community (attributed) and the authors' replies are also published in ACPD. In the second stage, the peer-review process is completed and, if accepted, the final revised papers are published in ACP" (see http://www.atmospheric-chemistry-and-physics.net/review/review_process_and_interactive_public_discussion.html). Once a paper is submitted, it goes through a quick peer review to verify a minimum of methodological soundness and is immediately placed in Atmospheric Chemistry and Physics Discussions, where the information given is traditional, pertaining to the title, author(s), date of submission, and so forth.
Beside this, an abstract, the full paper, and the interactive discussion are all on the site and can be freely accessed. At the end, a "Manuscript under submission for ACP" notice is posted. One of the first papers to elicit both a comment and an answer was put on the site on March 12, 2014. Upon clicking on the "Interactive Discussion" button, one obtains the discussion page with the following information: the full text in PDF or XML, the title, the authors, and more. Under the "Interactive Discussion" button, one finds a number of abbreviations, such as: AC: Author Comment; RC: Referee Comment; SC: Short Comment; EC: Editor Comment. The status and date of the discussion paper are indicated on the right-hand side of the page (for the example presented, the paper's status was open until May 7, 2014). This submission elicited four RCs and four ACs; the first RC details whether its remarks are general, specific major, minor, or technical. The AC responds to the different questions and explains the different phases and questionings with pictures, graphs, and the like. There was a short time span between the RC and the AC (March 29, 2014 and April 8, 2014, respectively), which is important, as delays and slowness are among the most cited and dreaded shortcomings of traditional peer review.
After a paper is submitted, a discussion period of 8 weeks is given for referees and the scientific community to comment on the paper. Each paper receives at least two commentaries from referees to be considered for discussion. The authors then have up to 4 weeks to respond to the commentaries, and papers are published only if the authors have satisfactorily responded to them. The Co-Editor can then either directly accept/reject the revised manuscript for publication in ACP or consult with referees in a traditional peer-review process. If necessary, additional revisions may be requested during peer review until a final decision is reached. In case of acceptance, the final revised paper is published on the ACP web site with a direct link to the preceding original paper and interactive discussion in ACPD. In addition, all referee and Co-Editor reports, the authors' responses, and the different manuscript versions from the completion of peer review are published. All publications (original paper, interactive comments, and final revised paper) are permanently archived and remain accessible to the open public via the Internet.
The whole, rather complex process is summarized in Figs. 2 and 3. The revolutionary aspect of this process is made even more innovative by the fact that some papers can elicit additional post-peer-review commentaries that can themselves achieve publication in ACP. The site does not flag this type of publication, as it does not distinguish between papers published directly in ACP and papers published as a result of subsequent post-peer-review commentaries. The managing editor was asked about a possible difference or hint for these kinds of papers, but the answer was "[they] do not have any data on which interactive comments later on resulted in a peer-reviewed comment/reply" (Martin Rasmussen, personal communication, July 23-24, 2014).

C. Faculty of 1000
The site http://f1000.com/ states that it is composed of 5000 faculty members, senior scientists, and leading experts in all areas of biology and medicine, and their associates. The Faculty recommends the most important articles, rating them and providing short explanations for their selections (F1000 home page). It therefore practices what is known as post-peer review: it selects, among other features, already published and reviewed articles and reappraises them. This is done thanks to the work of the 5000 faculty members undertaking this task.
The site is made up of three primary sections: F1000 Prime, F1000 Research, and F1000 Posters.

F1000 Prime
This is a collection of over 145,000 recommendations covering more than 3700 peer-reviewed journals in biology and medicine, contributed by the F1000 Faculty. This section has many features, among which the most important are: Article recommendations, Rankings, F1000 Prime reports, F1000 Faculty, Journal Clubs, and Blog.

Samir Hachani
Article recommendations. This section recommends articles that members of F1000 have chosen as important. As an example, on July 20, 2014 the following information was displayed: Fabio Bulleri, Università di Pisa, Italy, F1000 Ecology, recommended "Invasive Plants as Drivers of Regime Shifts: Identifying High-Priority Invaders that Alter Feedback Relationships" by Gaertner et al. (2014) and gave it a rating of 1. The top-rated article for the week of July 13-17, 2014, which rated 10, was "Collective Invasion in Breast Cancer Requires a Conserved Basal Epithelial Program" by Cheung, Gabrielson, Werb, and Ewald (2013). It received five recommendations, the most recent by Arthur Mercurio, from the University of Massachusetts Medical School, who classified it as Good for teaching, Interesting hypothesis, New finding, Novel drug target, and Technical advance.
Another feature is the statistics of recommendations for the last 7 days and the last 30 days. On July 20, 2014, there were 246 new recommendations and 194 newly recommended articles for the past 7 days. For the past 30 days, the statistics read as follows: 300 articles classified as good for teaching, 918 as new finding, 16 as refutation, 256 as confirmation, 285 as interesting hypothesis, 92 as novel drug target, 100 as controversial, 178 as technical advance, 62 as review/commentary, and 3 as changes clinical practice.
Article recommendations are highly precise, detailed, and open. They post the recommendations, the recommenders with their affiliations (sometimes with pictures), and also the degree to which they recommend the article. For example, the article cited above had a rating of 10, on a scale running from good to very good to exceptional.
Beside the recommendations, F1000 Prime has a system for ranking articles: when members of F1000 rate an article, the total score it receives determines the rankings. Among the rankings are: Current top 10: on July 20, 2014, an article in Nature from March 2013 by Hansen, Jensen, Clausen, Bramsen, Finsen, Damgaard, and colleagues, with a score of 12, was the most read at that date.
All-time top 10: on July 20, 2014, an article in Nature from 2005 by Lolle, Victor, Young, and Pruitt was the most accessed and read of all time, with a score of 55. This article had a dissenting opinion by Alejandro Sanchez-Alvarez, which demonstrates the revolutionary side of the system made possible by the Internet and the web: Sanchez-Alvarez dissented knowing that his name would be seen by all readers, and knowing the consequences to which this could lead in a specialized and closed field where everybody knows everybody. This would have been impossible and unthinkable in the closed paper world, where peers could express opinions without fear of reprisals or of being specifically named.
On July 20, 2014, three articles were classified as "hidden jewels": articles that deserved more attention but initially slipped the attention of the community, and are therefore rediscovered. They had all received a score of 6 and a practically equal number of recommendations (between 2 and 4).
It should be noted that the last three features (All time most viewed, Current most viewed, and Hidden jewels) are accessible only by subscription, contrary to Current top 10 and All-time top 10.

F1000 Prime Reports
F1000 Prime reports are more like an open access review journal, but one practicing closed peer review. One of its specificities is that at least one author of each article must be, or become, an approved Faculty Member of F1000Prime, which lends the articles prestige, given who contributes to F1000. On July 20, 2014, there were 597 articles, all freely accessible as HTML or PDF.

F1000 Research
The F1000 Research section is made up of different features which are: articles, collections, for authors, for referees, blog, advisory editorial board, about/contact, submit an article, and My F1000.
Articles. This is the list of articles published by F1000. It is the most complete part of F1000 Research, as it shows published articles, their status, their peer review, and more. On July 20, 2014, there were 522 articles, of which 370 were indexed and had gone through open peer review. For a submission not yet reviewed, the complete data on the page are as follows: title, version, referee status (awaiting peer review), and authors. On the right-hand side of the page, an "Open Peer Review-Invited Referee Responses-Awaiting Peer Review-Comments-No Comments-Add Comments" box indicates that the submission is still awaiting review.
The Add Comments button allows a reviewer to add his comments after signing in or registering.
Indexed articles. This lists articles that have gone through open peer review and are openly accessible on the site. The complete data on the page are title, version, referee status with a link to the openly accessible reviewer reports, and authors. On the right-hand side of the page, an "Open Peer Review-Invited Referee Responses" box details the different phases the submitted article has gone through. It cites referees' remarks by name, institution, and date, as well as their decisions pertaining to the submission. A referee puts an approval sign next to his or her name upon agreeing to the submission. If one of the referees has asked for changes or given remarks, a version "2" of the article is put on the site, with the dates of version "1" and subsequent submissions.
The referee can also accept the submission with a sign meaning that he or she has reservations and has asked the authors for changes or additions. Last but very important, a referee can reject a submission and give his or her reasons live on the site, putting the corresponding sign next to his or her name. All this is done openly, showing the name of the referee, his or her institution, and the changes or remarks the reviewer has asked for, along with the author's response. One should note that referees are either chosen from the F1000 Research referee panel (indicated by the names of referees on the page) or suggested by the authors (indicated by "peer reviewers invited"). To avoid conflicts of interest in this new way of reviewing, with peers invited and chosen by the authors, F1000 insists that submitters not choose colleagues or people they have worked with in the last 5 years. All these criteria are checked by F1000 to ensure a bias-free, and as objective as possible, review.
This phase of reviewing in F1000 denotes the extreme openness of the system. The whole process is openly accessible, with authors, referees, and reviews seen and read by everybody. This is made even clearer in the outright rejection of articles, or acceptance with reservations, with the referees' names shown.
For referees. In this section, prospective referees are given indications as to how the review is performed, organized under the following points: Pre-refereeing checks, Refereeing process, Versioning, and Citation and Indexing.
Pre-refereeing checks. Submitted articles go through a quick check to see whether they are scientifically sound and written in acceptable English. If this initial check is not satisfactory, the article is returned to the authors for amendment; if the amendments do not satisfy the editorial team, the article is rejected. If it does answer the issues raised, it is published in less than 7 days, clearly marked as "awaiting peer review."

Refereeing process. When submitting manuscripts, authors are asked to suggest the names of five referees who will not have a conflict of interest with the work reviewed. They can also request, by name, a list of referees they do not want to review their manuscripts (a request F1000 will try to respect wherever possible). Referees decide whether the work seems scientifically sound. They also provide a report and a status, which will be displayed with the article together with their names and affiliations. Registered users (bona fide research scientists or clinicians) providing a name and affiliation for public display are also allowed to comment on articles or referee reports at any time.
Versioning. During reviewing, amendments are made to the original manuscript following reviewers' remarks. All versions of an article are accessible and may be cited individually, while the most recent version is the one displayed. All articles carry, and are indexed by, the CrossMark logo (CrossMark Identification Service™). This service records the history of any given article and, when clicked upon, shows newer versions of the article and the referees' reports.
All the steps described in this section are summarized in Fig. 4, which takes into account the different steps a submission goes through for review.

VIII. Conclusion
Peer review has produced an abundance of literature, all geared to explaining what the whole review process is about, the problems it has yielded, and the solutions proposed. The subject is a highly sensitive one, as it involves not only access to publication but also, and even more than that, the advantages and advancement that scientists get from publishing. The information overload that took place after World War II hastened peer review's reform, as the process became a source of contention and an ever-increasing crisis in the publishing world. Among the most frequently cited grievances is the secrecy in which the whole process was conducted: the different steps from the inception of research to publication were undertaken in a closed and secretive manner, opening the door to numerous, and very often recurring, dysfunctions. Biases, especially when combined with this secrecy, were documented as the most prominent cause of this situation.
The advent of both the Internet and open access has allowed a much-needed overhaul of the review process. From a closed and somewhat biased operation, peer review has become an open process subject to scrutiny. Reviewers, a pivotal part of the process, have become more accountable and can themselves become the subject of reviews. Increasingly they undertake their work while the whole community observes, which has made them more cautious and more attentive. With flaws out in the open, the process is as close as possible to an objective operation. Peer review is unquestionably changed as science is performed live on networks; gatekeepers are no longer the mythic and sometimes wicked figures dreaded by authors.
On the other hand, various studies have not been able to determine with precision whether this new openness has made peer review more equitable (Fisher, Friedman, & Strauss, 1994; McNutt, Evans, Fletcher, & Fletcher, 1990; Van Rooyen, Godlee, Evans, Black, & Smith, 1999), principally because of its recent inception. It would be logical to think that openness would lead to more moral behavior, but because the open process is still in its initial phases, definitive conclusions cannot be drawn. For example, many lay persons (and even professionals) see this unbridled openness as an opportunity for others to steal ideas. Some authors might not accept the fact that their submissions (with all their flaws) can be seen by everybody, with corrections requested in public. Some might still prefer the comfortable and cozy anonymity of the closed paper world, wherein a sub-par submission never sees the light of day and would simply be rejected, sent back for substantial revision, or redirected to another journal. Finally, peer review has been known (among other criticisms) to be conservative and prone to rejecting new ideas, because the gatekeepers, being the leaders of the field, would lose their preeminence over those they judge. One can readily envision reluctance by senior scientists to accept revolutionary schemes, because doing so could affect their hard-earned fame, status, and standing in their communities.
One of the most memorable exchanges related to this turf battle, as Spier (2002) described it, was an epic one between Stanley Fish and Jerry Skoblow. Fish, a seasoned scholar associated with postmodernism, argues with Skoblow about blind review. Fish sees his status, his name, and his achievements as part of the submission, and insists on being judged as himself, not as an anonymous submitter. Skoblow, who incidentally was Fish's student, argues against his professor's ideas, comparing them to "scholarly Reaganomics." At the same time, Fish ironically sums up Skoblow's view of him as "the work of a hoarder who wishes to dine alone at his own table while millions starve" (Fish, 1989, p. 163). The lines are not clearly drawn, and it is too early to see all the drawbacks and advantages of this open reviewing process.
The three sites studied are, at this time, among the most revolutionary undertakings on the subject of peer review. They open peer review to the community of researchers and allow the process to become more transparent. Even if ETAI's experience was not able to continue, it exemplifies live interactivity between the different protagonists and offers an example of how peer review may be performed in the future. ACP pioneered a rather original process that combines open peer review with continuing discussion of research, making articles living entities that do not stop growing once they have been published; they may in fact give rise to a completely new article as a result of continuing open commentaries. Faculty of 1000 seems to be the most complete and most innovative site of those discussed in this chapter. It practices open peer review but singles itself out by also practicing post-publication peer review, as its articles are already reviewed but are chosen again to be showcased and reevaluated a second time. F1000 is also at the forefront of the open peer-review process owing to the domains it covers, biology and medicine, which have been highly active on the subject. These experiences should, no doubt, be replicated and developed, and already have been in cases like BMJ Rapid Responses, the Journal of Medical Internet Research, and Biology Direct. They may well be the way science is judged in the future, because in a world as networked as today's, secrecy, bias, cronyism, manipulation, and the other perceived flaws of the paper world cannot and will not be accepted.