id,ProjectName,DateAdded,Description,FunctionsOfTheProject,AreasOfInnovation,TypesOfOutputs,Format,Transparency,Process,DisciplinaryFields,OutcomesConsidered,DemographicVariablesOfUsers,TagLine,TypeOfProject,Website,ProjectStartDate,ProjectEndDate,OperatedBy,SourceCode,VideoURL,Goals,WhatIsReviewed,ReviewOfCodeOrData,ManuscriptHosting,ReviewFeatures,EligibleReviewers,TagsOrBadges,CriteriaForInclusion,Cost,ExplanationOfCost,NumberOfComments,Metrics,ResultsSummary,ResultsURL,PreregistrationLink,x_declarations,Verified,x_status
491,"PREreview",20181228,"PREreview is a community and a platform for the collaborative writing of preprint reviews. Our mission is to facilitate a cultural change in academia towards more transparent, effective, and inclusive ways of evaluating and improving science. Our strategy includes crowd-sourcing timely feedback to preprints from community members, such that the feedback can be used by the authors to improve the work, and potentially by journal editors to expedite their peer review process. We encourage the feedback to be constructive and ask members to adhere to our community guidelines. We offer peer review templates and other resources to find preprints and review them collaboratively at journal clubs. We are now working towards a new version of the platform that will help members build review skills, share their contributions safely, and create groups for collaborative reviews.",Feedback to authors,Reviewer training|Transparency,Preprints,Free-form commenting,,Comment indexing|Reviewer or editor initiates review,Life Sciences,"",NaN,"Changing the 'who', 'when', and 'how' of scientific peer review",Project,https://www.prereview.org,,,"Daniela Saderi, Samantha Hindle, Monica Granados",,,"Lack of comments to preprints; lack of transparency and diversity in peer review; lack of formal peer review training; lack of wide adoption of preprints.",NaN,No,No,"Full report, quick report (short paragraph). 
For PREreview 2.0 and Rapid PREreview (Outbreak Science) we will expand these into structured templates. Currently PREreviews receive DOIs and are linked under respective abstracts on bioRxiv.org.","Currently, anyone with an email address who signs up on Authorea. For PREreview 2.0, anyone with an ORCID iD. We will allow users to display pseudonyms.",No,"",0,"","Unknown","Yes, we will do more with the new PREreview 2.0 (under development).","",,,NaN,Unverified,Active
520,"preLights",20190106,"The project aims to help scientists navigate the ever-growing preprint literature and facilitate commenting on preprints. A group of early-career researchers (the 'preLighters') select preprints that they think are interesting for the community, and write digests about them. These posts also feature personal opinions of preLighters about why the work is important, and engage preprint authors who often reply to questions raised in the preprint highlight.",Curation of interesting work|Feedback to authors,Reviewer training,Preprints,Free-form commenting,Open identities,Pre-publication review|Reviewer or editor initiates review,Life Sciences,"",NaN,"A community platform for highlighting and commenting on preprints",Project,https://prelights.biologists.com/,2018-02-20,,"The Company of Biologists",,,"1) Since the launch of bioRxiv, the number of posted preprints has been exponentially growing and is expected to further increase in the future. This will make it more and more difficult for researchers to keep up with the preprint literature. preLights aims to help scientists by selecting and summarizing interesting preprints for them. 2) Despite the growing popularity of preprints, public commenting on preprints hasn't taken off. At preLights, early-career researchers give personal opinions about preprints (which are often in their field of expertise). 
In addition, anyone can comment on the preLights posts, and by also encouraging preprint authors to provide 'author responses', preLights aims to facilitate discussions about preprints. 3) Finally, preLights aims to promote early-career researchers and give them a platform where they can practice their scientific writing and reviewing skills. These ECRs also play an important role in raising awareness and advocating for preprints.",NaN,,No,"'News and views' type of summary with personal opinion + questions to the authors; freeform commenting by anyone","preLighters are selected through an application process, in which they have to write a preprint highlight, a short bio and explain their motivation to join",No,"",0,"","Unknown","We track usage of the service and participation of the selected 'preLighters' (currently we receive ca. 2K views/week on the website and have 130 active 'preLighters'). We also track the number of preprint author comments (currently ca. 30% of posts have author comments)","",,,NaN,Unverified,Active
748,"Peeriodicals.com",20190205,"A peeriodical is a lightweight virtual journal with you as the Editor-in-chief, giving you complete freedom in setting editorial policy to select the most interesting and useful manuscripts for your readers. The manuscripts you will evaluate and select are existing publications—preprints and papers. 
Thus, a peeriodical replicates all the functions of a traditional journal, including discovery, selection and certification, except publication itself.",Curation of interesting work|Feedback to authors,Costs|Incentives and recognition for reviewing|Speed|Transparency,Journal accepted manuscripts|Other scholarly outputs|Preprints,Free-form commenting|Structured review form,Open interaction,Comment indexing|Comment moderation|Post-publication review|Reviewer or editor initiates review,,"",NaN,"Select the best science",Project,https://peeriodicals.com/,2018-02-01,,"The PubPeer Foundation",,,"The traditional journal has changed remarkably little in centuries and many people feel that scientific publishing is stuck in a rut, subject to a corporatist drift, and is not serving science optimally. The advent of preprints in many fields beyond those served by the arXiv is liberating the dissemination of research, but most other journal functions have not been replaced effectively. Now you—all researchers—have the opportunity to select and certify research according to your own criteria. We expect peeriodical subject matters and editorial policies to be extremely varied. Some peeriodicals may wish to target narrow domains, while others will adopt a generalist approach. Some peeriodicals will be inclusive, focusing on discovery, whereas others may aim to enforce stringent quality criteria, prioritising certification. The point is that all approaches are permitted and supported—we hope you will innovate! You can create multiple peeriodicals. It will be users and readers who decide which peeriodicals they find useful and interesting. Users can sign up to receive alerts from any peeriodical they wish.",NaN,,,"A peeriodical has one or more editors. Anybody can set up a peeriodical and either operate it alone or invite colleagues to form an editorial board or community. 
The editors can select ""manuscripts""—existing papers or preprints—to consider, either spontaneously or through suggestions from other researchers, including of course the authors. Note that there is no obligation that the manuscript be recent; for instance, we expect that some peeriodicals could focus on underappreciated classics. After all, predictions about scientific impact are generally more accurate for the past than the future. If the editors wish, they can solicit reviews for the manuscript via the Peeriodicals interface. Reviews will be published and the referees will have the option of posting anonymously or signing their review. Editors may decide at any time to accept, reject or comment on the manuscript, taking into account the comments received. They may of course suggest improvements to the manuscript or underlying study. If they justify their decision, that decision will also be published.","",,"How will Peeriodicals fit into the publishing landscape? We see them as a space without entry barriers in which researchers can innovate and explore new approaches to scientific dissemination, in parallel to the traditional publishing industry. There are related and complementary initiatives, notably the overlay journals promoted by Tim Gowers, exemplified by Discrete Analysis, but also Science Open Collections, PLoS Channels, the APPRAISE initiative and Peer Community in... Each of these projects has its own specificities and goals. 
Nobody yet knows exactly what the future will look like, but we strongly believe that we are about to experience a period of rapid evolution in the dissemination of science and we hope that Peeriodicals will inspire and help you to share your imagination and expertise with the whole research community.",0,"","100-1,000","","",,,NaN,Verified,Active
771,"Hypothesis",20190212,"Hypothesis is a new effort to implement an old idea: a conversation layer over the entire web that works everywhere, without needing implementation by any underlying site. Our team creates open source software, pushes for standards, and fosters community. Using annotation, we enable sentence-level note taking or critique on top of news, blogs, scientific articles, books, terms of service, ballot initiatives, legislation and more. Everything we build is guided by our principles -- that it be open, neutral, and lasting. We are a non-profit organization funded through the generosity of sponsors like the Knight, Mellon, Shuttleworth, Sloan, Helmsley, and Omidyar Foundation. Our efforts are based on the annotation standards for digital documents developed by the W3C Web Annotation Working Group. 
We are partnering broadly with developers, publishers, academic institutions, researchers, journalists, and individuals to develop a platform for the next generation of read-write web applications.",Curation of interesting work|Feedback to authors,Quality of review|Reviewer training|Speed|Transparency,Journal accepted manuscripts|Other scholarly outputs|Preprints|Privately shared manuscripts,Annotations|Free-form commenting|Structured review form,Double blind|Open identities|Open interaction|Open participation|Open reports|Single blind,Author initiates review|Comment indexing|Comment moderation|Journal integration|Post-publication review|Pre-publication review|Professional editors|Reviewer or editor initiates review,All disciplines,"",NaN,"Creating a conversation over all knowledge",Project,http://hypothes.is,,,"",https://github.com/hypothesis,https://www.youtube.com/watch?v=aY9yj2uoMiU&feature=youtu.be,"Open annotation enables a conversation across web-based content through the creation of private, group, or public in-line annotations. Integration with manuscript submission systems can enable editors, reviewers, and authors to engage in all types of peer review (from traditional single-blind or double-blind to open) in either pre-publication or post-publication form. The tool can also be used outside of a manuscript submission system by publications that wish to undertake peer review experiments. One example is In Review, via which BioMed Central and Research Square offer authors the ability to opt into community feedback in parallel with traditional peer review. Feedback on preprints is an ideal use case for providing suggestions (public or in private groups) on early versions of articles. Because it works anywhere on the web, open annotation is an ideal tool to enable overlay journals. 
It can also raise the visibility of peer review reports posted as supplementary materials.",NaN,Yes,No,"- Works with either pre-publication or post-publication peer review
- Enables open peer review
- Enables traditional peer review either within manuscript submission system integrations or as a standalone tool. Publisher can assign reviewer accounts for reviewer anonymity.
- Can be used to post peer review reports or summaries
- Can be used to provide feedback on content anywhere on the web in HTML, PDF, EPUB, or data formats.","This depends on the site integration (publisher, preprint service, etc.). They may select an Open Group (world-readable, world-writeable) or a Restricted Group (world-readable, but writeable only by those indicated by the group owner).",Yes,"Anyone can use the tool through implementation of public or private groups. Should a publisher wish to host their own branded and moderated layer, there is a cost. Please contact us for details.",NaN,"We offer a “freemium” service for those wanting to sign up and use the basic annotation tool. If publishers wish to host their own branded and moderated annotation layer, there is a charge. There is also a charge to integrate with SSO accounts. We also offer commercial products and services including SaaS hosting, software development and consulting services for organizations and enterprises wanting to implement a more robust digital annotation solution. We charge an annual SaaS hosting fee that covers 12 months of service. We use the number of documents added per year as a proxy for publisher size. Publishers can deploy across all content or specify content types or specific journals, for example. Deployment is back to volume 1, issue 1 or earliest copyright year for books. Pricing for software development and consulting services is negotiable, based on the deliverables and resource requirements of the proposed scope of work. 
For more information on pricing, contact Heather Staines, Director of Partnerships, at heather@hypothes.is","10,000+","We use Metabase, a data analytics tool, to track usage and participation of the Hypothesis tool on an aggregated basis, by sector, and by customer, including total # of annotations, # of active annotators, # of new registered users, and # of new groups created. We also generate reports for customers that include information on publicly visible annotations (annotator, date, content annotated, text selection, annotation content) as well as private and group annotations (date, content annotated). We can also provide information on the top public annotators, top annotated documents, and more. We constantly communicate with our partners to see how they are using the tool and to solicit feedback for improvements and new features.","Hypothesis has integrations with all major hosting platforms, but an integration is not necessary for the tool to be utilized by end users. Content which is currently annotated (4.8M as of 2/28/2019) includes scholarly content and mainstream web content.",,,NaN,Verified,Active
774,"Plaudit",20190213,"Plaudit allows researchers to publicly endorse academic research, and makes this data openly available. 
This provides a clear, simple and accessible signal about the quality of an academic work that builds on the knowledge of your academic peers rather than the reputation of the journal it is published in.",Curation of interesting work|Validation of soundness,Costs|Incentives and recognition for reviewing|Speed|Transparency,Journal accepted manuscripts|Other scholarly outputs|Preprints,Annotations|Quantitative scores,Open identities|Open interaction|Open participation|Open reports,Journal integration|Post-publication review|Reviewer or editor initiates review,All disciplines,"",NaN,"Open endorsements from the academic community",Project,https://plaudit.pub,2019-02-04,,"Flockademic",https://gitlab.com/Flockademic/Plaudit,,"Plaudit was created to improve the incentive structure in academia, in which researchers feel pressured to publish their work in ""top-tier"", but often paywalled, journals, or journals with a substantial publishing fee. By providing an alternative signal of quality that still relies on the expertise of your academic peers, the hope is that that pressure can be relieved.",NaN,No,No,"","Anyone with an ORCID iD, willing to expose their endorsement to public scrutiny.",,"",0,"","Unknown","All endorsements are made publicly available as open data through Crossref Event Data","",,,NaN,Verified,Active
781,"Peer Community In",20190213,"The “Peer Community in” (PCI) is a non-profit scientific organization that aims to create specific communities of researchers reviewing and recommending, for free and at the request of their authors, preprints in their field (articles not submitted to - or published in - journals and deposited on open online archives like bioRxiv.org). These specific communities are entitled PCI X, e.g. PCI Evolutionary Biology, PCI Ecology, PCI Paleontology.... 
Once recommended by an editor of one of these PCIs on the basis of rigorous and transparent peer reviews, preprints become valid references and may be considered to be articles of high value. Recommended preprints can be used by scientists and cited in the scientific literature. There is no need for these recommended preprints to be submitted for publication in classic journals (although they can be, according to the authors’ preferences). The PCI system is at no cost for readers and authors.",Curation of interesting work|Feedback to authors|Validation of soundness,Costs|Incentives and recognition for reviewing|Quality of review|Transparency,Preprints,Free-form commenting,Double blind|Open identities|Open reports|Single blind,Author initiates review,All disciplines,"",NaN,"A free recommendation process of unpublished scientific papers based on peer reviews",Project,https://peercommunityin.org,2017-01-01,,"Peer Community In non-profit organization",,https://www.youtube.com/watch?v=4PZhpnc8wwo&t=75s,"The motivation behind the “Peer Community in” project is the establishment of a high-quality, free, public system for identifying high-quality preprints by a specific recommendation obtained after rigorous peer review. The aim is for this system to be recognized both within and, subsequently, beyond the community, including by funding and research agencies. This project should lead to a new scientific publication system, in which preprints are deposited in free, open archives, and, if appropriate, reviewed and awarded a recommendation publicly guaranteeing their scientific quality. This recommendation could replace the current evaluation and editing processes of scientific journals, which are often opaque and very costly for research institutions.",NaN,Yes,No,"Editorial process (reviews, editorial decisions, authors' replies) is published if the paper is accepted. Nothing is published if the paper is rejected. Reviewers can be anonymous if they want to. 
Authors can be anonymous to reviewers if they want. Editors must sign a recommendation text in case of acceptance.","New editors of a PCI are nominated by a current editor of this PCI and approved by the Managing Board of the PCI. Reviewers are selected by Editors.",No,"At least one Editor of the PCI has to find the article interesting (for any reason: interesting question, interesting data, methods, results, discussion, etc.) to handle its evaluation.",0,"","100-1,000","> 500 Editors, 3 PCIs, > 50,000 pages visited/year across all PCIs","",,,NaN,Verified,Active
790,"SciRate",20190214,"SciRate is a collaborative open science platform that encourages communication between researchers and the general public. It presently acts as a front for the arXiv, allowing people to vote and comment on papers.",Curation of interesting work|Feedback to authors|Validation of soundness,Quality of review|Transparency,Preprints,Free-form commenting,Open identities|Open interaction|Open participation,Pre-publication review,All disciplines,"",NaN,"Front for the arXiv with voting and commenting.",Project,https://scirate.com/,,,"SciRate Collaboration",,,"To create a collaborative network of people evaluating and discussing papers.",NaN,,No,"","",No,"No criteria; papers from the arXiv are displayed.",0,"","10,000+","","",,,NaN,Verified,Active
792,"Author-led peer-review trial by eLife",20190215,"Building on ideas presented by Erin O’Shea and colleagues (ASAPbio meeting, February 2018), once an editor has invited a paper for peer review, eLife is committed to publishing the work (https://elifesciences.org/inside-elife/2905802e/peer-review-elife-trials-a-new-approach). 1) New submissions were evaluated by Senior Editors to identify the papers to be invited for peer review. 2) After peer review and consultation between the reviewers, a Reviewing Editor compiles the feedback and peer reviews. 
3) The authors decide how to respond, and submit a revision and their response to the reviews (unless they wish to withdraw their submission). 4) The Reviewing Editor evaluates the revised submission and responses, and the paper is published along with the decision letter, peer reviews, author responses, and an editorial rating (to indicate whether all issues had been addressed, or whether major or minor issues remain unresolved). The trial was closed once 300 authors had opted in.",Feedback to authors,Incentives and recognition for reviewing|Quality of review|Speed|Transparency,Journal accepted manuscripts,Structured review form,Open identities|Open reports|Single blind,Pre-publication review|Reviewer or editor initiates review,Life Sciences,"",NaN,"Authors decide how to respond to issues raised during review",Trial,https://elifesciences.org/articles/36545,2018-06-26,2018-08-08,"eLife",,,"Three main motivations:
  1. Builds on the consultative approach to peer review that eLife has developed by removing the gatekeeping role of peer review
  2. Could reduce waste and unnecessary work when articles are submitted and reviewed by multiple journals before publication
  3. Encourages evaluation of research based on more than just the name of the journal in which the work is published
Examples of data we hope to gather:
  1. percentage of authors who undertake the trial process
  2. percentage of reviewers who agree to participate, and who sign their reviews
  3. percentage of revised papers that address all the concerns raised during review
  4. effect on decision-making before in-depth review
  5. whether more papers are ultimately published
  6. speed to publication
",NaN,,,"The gatekeeping role of peer review is removed. All peer-reviewed articles are published (unless the author chooses to withdraw their article).","",Yes,"The articles are labelled as Research Communications to indicate that they are part of the trial.",NaN,"There is a publication fee as there is for all published research articles in eLife","100-1,000","","In “Peer Review: First results from a trial at eLife” (linked below), we compare the outcomes of 313 trial submissions with 665 regular submissions received during the same period of time. That is, almost a third of authors opted in to the trial approach. So far we have evaluated the first part of the editorial process (whether a paper is sent for in-depth review). The success rates at this first step for male and female last authors in the trial were similar (22.6% and 22.1% respectively), but late-career last authors fared better than their early- and mid-career colleagues. Further data will be posted and discussed covering the rest of the trial process.",https://elifesciences.org/inside-elife/262c4c71/peer-review-first-results-from-a-trial-at-elife,,NaN,Verified,Complete 794,"mSphereDirect",20190215,"mSphereDirect was a trial that explored a new publishing pathway in mSphere, a multidisciplinary open-access journal spanning the microbial sciences. mSphereDirect put authors in control of their own peer review process allowing them to select their own reviewers and submit scientific reviews and the peer review history simultaneously with the original research. mSphere Senior Editors validated the reviewers’ credentials and evaluated the quality of the submitted reviews and research. 
This pathway allowed for rapid decisions, speed to publication, and transparency in review.",Feedback to authors|Validation of soundness,Speed|Transparency,Journal accepted manuscripts,Free-form commenting|Structured review form,Open identities|Open interaction|Open participation,Author initiates review|Pre-publication review,Life Sciences,"",NaN,"An open, author-driven peer review experiment at mSphere",Trial,https://msphere.asm.org/content/1/6/e00307-16,2017-01-11,2019-02-28,"mSphere",,https://vimeo.com/198243261,"mSphereDirect was a response to the scientific community’s concerns about the current scholarly publishing process: peer review is too slow, the publishing process is too opaque, and journal editors may be too far outside their field when evaluating manuscripts. mSphereDirect put authors in the driver’s seat to secure reviews from suitable experts as rapidly as possible. Its objectives were to support the rapid communication of science, to give authors more control over the peer review process, and to provide guaranteed fast decisions. Additionally, mSphereDirect  increased transparency by publishing reviewer names with original research papers accepted via the mSphereDirect pathway. In this way, mSphereDirect merged two crucial publication concepts: relevant, quality peer review and vetted, rapid decisions.",NaN,Yes,No,"","Authors selected their own reviewers.",No,"",NaN,"","10-100","","This experiment is ending February 2019. A number of issues and concerns surfaced during the experiment. Uptake was low. Authors felt awkward asking colleagues to review their manuscripts, and conversely, some scientists were uncomfortable serving as nonblind reviewers. Additionally, a significant number of the reviews were superficial and uncritical. 
To maintain editorial standards, some manuscripts were rejected even though their authors had technically followed all the rules, because the significance of the work was in question.",https://msphere.asm.org/content/3/6/e00678-18,,NaN,Verified,Complete
796,"bims: Biomed news",20190217,"Among the many things researchers have to do are (1) staying up-to-date in their field and (2) developing name recognition as an expert. We aim to help with both at the same time. Bims is organized as a series of topic-specific reports. Each report is curated by a selector. Each week we make new PubMed papers available to selectors. By default, it's 7% of the new papers added to PubMed, something between 1000 and 2000 papers. Each selector decides what papers go into the report. This sounds like a masochist's idea of fun, and the first time it probably is. But after that the system uses machine learning. Each week, it gets better at knowing what the selector wants. When you become a selector, you will quickly realize that our machine-learning-based relevance order is more flexible and more precise than PubMed searches. Report issues can be freely circulated to user communities. When we get our own server we will start building email distribution lists.",Curation of interesting work,Incentives and recognition for reviewing|Speed,Journal accepted manuscripts,,Open identities|Open participation|Open reports,,Life Sciences,"",NaN,"Enable reports on new biomedical research papers",Project,http://biomed.news/,2017-02-04,,"The Open Library Society, Inc.",,,"We aim to disseminate biomedical information through regularly scheduled subject-specific reports. We want a system for fast dissemination of research results. We want to promote high-quality research by having expert reviewers. At this time, Biomed news is based on PubMed. At this time, our reports are mainly geared to the academic and research communities. This does not have to be the case. PubMed has a lot of papers of more general interest. 
We can, for example, think of reports aimed at medical doctors that filter for papers with clinical relevance. Such reports may be run by MDs who normally don't do any research. We can think of reports aimed at patients that filter for advice on how to cope with a disease. Such reports may be run by patients, who have no prior biomedical training.",NaN,No,No,"Selectors initially choose abstracts relevant to the topic report; these inform the learning for the subsequent issues of the report. Selectors can also customize the abstracts sent to e-mail subscribers by removing abstracts and also by providing a ranking order.","Topic experts are selected based on topic interest.",No,"At this time, we consider all new items that have been made available during the last week. We remove items that we cannot date to the current or the past year.",0,"","1,000-10,000","The amount of time a selector spends on each issue can be tracked.","Since our founder fixed a major bug on 2018-11-14, no selector has lost interest in maintaining their report. Some selectors maintain several reports; this allows reports of different scopes to be compared by the same selector.",http://biomed.news/reports,,NaN,Verified,Active
799,"In Progress: Select Crowd Review",20190219,"A selection of about 70-100 experts, who are exclusive members of the crowd, receives a link to the manuscript and can comment on it anonymously via a secure web interface. Only the editor knows who the reviewers are while monitoring the process. Each reviewer decides if he or she has time and expertise to comment on the respective article. Participating reviewers see each other's comments and can discuss the research featured in the paper to improve the manuscript further. They can respond, interact, and enhance it in parallel. After 48-72 hours (on average) the review period ends, and the manuscript is taken off the platform. 
In the next step, the editor evaluates the comments of the reviewers, decides about accepting (with or without revision) or rejecting the article, and sends the feedback of the crowd to the author for consideration and implementation.",Feedback to authors|Validation of soundness,Bias in review|Quality of review|Reviewer training|Speed,Journal accepted manuscripts,Annotations|Free-form commenting,Single blind,Journal integration,Chemistry,"",NaN,"A selected group of highly qualified peers reviews manuscripts interactively",Trial,https://www.thieme.de/de/thieme-chemistry/select-crowd-review-136859.htm,2017-01-01,,"Thieme",,,"In the first year after the introduction of Select Crowd Review for SYNLETT, experience has proved that the innovative process delivers substantive feedback to the authors, with the same or higher quality than classical peer review does, in only a few days. This allows editors to decide about the acceptance of a manuscript much faster and therefore shortens the time from submission to publication. We will continue to improve Select Crowd Review together with editors, authors, and reviewers so that the innovative process will benefit not just SYNLETT and SynOpen, but potentially the wider scientific community.",NaN,Yes,Yes,"","",,"",0,"","100-1,000","","",https://www.nature.com/news/crowd-based-peer-review-can-be-good-and-fast-1.22072,,NaN,Unverified,NA
815,"biOverlay",20190225,"biOverlay is an overlay journal for the life sciences. As at an academic journal, editors solicit peer reviews of scientific literature (mostly, but not exclusively, preprints). 
Manuscripts are selected for general interest, and reviews are posted at biOverlay.org",Curation of interesting work|Feedback to authors|Validation of soundness,Bias in review|Costs|Incentives and recognition for reviewing|Quality of review|Reviewer training|Speed|Transparency,Journal accepted manuscripts|Other scholarly outputs|Preprints,Free-form commenting|Structured review form,Open identities|Open participation|Open reports|Single blind,Post-publication review|Pre-publication review|Reviewer or editor initiates review,All disciplines|Computer science|Life Sciences,"",NaN,"A peer-review overlay for the life sciences",Project,https://bioverlay.org,2018-02-21,,"",,,"",NaN,No,No,"","Reviewers are selected by editors based on relevant expertise, similar to journal peer review. Editors self-nominate by submitting a CV and are reviewed by the board of managing editors.",No,"Associate Editors are asked to consider what the most impactful item for review is at the time that they select a preprint or publication to send out for review. An editor has viewed it as the scholarly output most worthy of review at that time.",0,"","10-100","","",,,NaN,Verified,Active
822,"Nature open peer review trial",20190226,"""[B]etween 1 June and 30 September 2006, we invited authors of newly submitted papers that survived the initial editorial assessment to have them hosted on an open server on the Internet for public comment. For those who agreed, we simultaneously subjected their papers to standard peer review. We checked all comments received for open display for potential legal problems or inappropriate language, and in the event none was held back. All comments were required to be signed. 
Once the standard process was complete (that is, once all solicited referees' comments had been received), we also gathered the comments received on the server, and removed the paper.""",Curation of interesting work|Feedback to authors|Validation of soundness,Quality of review|Transparency,Preprints,Free-form commenting,Open identities|Open participation|Open reports,Author initiates review|Comment moderation|Journal integration|Pre-publication review,All disciplines,"",NaN,"2006 experiment with open commenting on prepublication manuscripts",Trial,https://www.nature.com/nature/peerreview/debate/nature05535.html,2006-06-01,2006-09-30,"Nature",,,"""[P]eer review is never perfect and we need to keep it subjected to scrutiny as community expectations and new opportunities evolve. In particular, we felt that it was time to explore a more participative approach.""",NaN,,Yes,"","Anyone",,"",NaN,"","10-100","The fraction of authors opting in, the number of comments and page views, and editor and author surveys","""We sent out a total of 1,369 papers for review during the trial period. The authors of 71 (or 5%) of these agreed to their papers being displayed for open comment. Of the displayed papers, 33 received no comments, while 38 (54%) received a total of 92 technical comments. Of these comments, 49 were to 8 papers. The remaining 30 papers had comments evenly distributed. The most commented-on paper received 10 comments (an evolution paper about post-mating sexual selection). There is no obvious time bias: the papers receiving most comments were evenly spread throughout the trial, and recent papers did not show any waning of interest. The trial received a healthy volume of online traffic: an average of 5,600 html page views per week and about the same for RSS feeds. 
However, this reader interest did not convert into significant numbers of comments.""",https://www.nature.com/nature/peerreview/debate/nature05535.html,,NaN,Verified,Complete 823,"Nature double blind peer review trial",20190226,"""The proportion of authors that choose double-blind review is higher when they submit to more prestigious journals, they are affiliated with less prestigious institutions, or they are from specific countries; the double-blind option is also linked to less successful editorial outcomes.""",Curation of interesting work|Feedback to authors|Validation of soundness,Bias in review,,,Double blind|Single blind,Pre-publication review|Professional editors,All disciplines,"",NaN,"2015-2017 trial in 25 Nature-branded journals",Trial,https://researchintegrityjournal.biomedcentral.com/articles/10.1186/s41073-018-0049-z,2015-03-01,2017-02-28,"Nature",,https://www.youtube.com/watch?v=umDWxTlBtIg,"""Double-blind peer review has been proposed as a possible solution to avoid implicit referee bias in academic publishing. The aims of this study are to analyse the demographics of corresponding authors choosing double-blind peer review and to identify differences in the editorial outcome of manuscripts depending on their review model.""",NaN,,,"","",,"",NaN,"","10,000+","Authors' preference for single- or double-blind review by journal, gender, country, and institution. Editorial decision outcomes","""Author uptake for double-blind submissions was 12% (12,631 out of 106,373). We found a small but significant association between journal tier and review type (p value < 0.001, Cramer’s V = 0.054, df = 2). We had gender information for 50,533 corresponding authors and found no statistically significant difference in the distribution of peer review model between males and females (p value = 0.6179). 
We had 58,920 records with normalised institutions and a THE rank, and we found that corresponding authors from the less prestigious institutions are more likely to choose double-blind review (p value < 0.001, df = 2, Cramer’s V = 0.106). In the ten countries with the highest number of submissions, we found a large significant association between country and review type (p value < 0.001, df = 10, Cramer’s V = 0.189). The outcome both at first decision and post review is significantly more negative (i.e. a higher likelihood for rejection) for double-blind than single-blind papers (p value < 0.001, df = 1, Cramer’s V = 0.112 for first decision; p value < 0.001; df = 1, Cramer’s V = 0.082 for post-review decision).""",https://researchintegrityjournal.biomedcentral.com/articles/10.1186/s41073-018-0049-z,,NaN,Unverified,Complete 824,"Interactive Public Peer Review",20190226,"Copernicus Publications has been offering the innovative Interactive Public Peer Review since 2001. In a first stage, manuscripts that pass a swift access review by the handling editor are posted as preprints (discussion papers) in the journal’s discussion forum. They are then subject to interactive public discussion, during which the referees' reports are posted as comments (anonymous or attributed), short comments can be posted by members of the scientific community (attributed), and the authors' replies are published. In a second stage, the peer-review process is completed and, if accepted, the final revised papers are published in the journal. All referee and editor reports, the authors' response, as well as the different manuscript versions of the post-discussion review of the revised submission, are published too. 
To ensure publication precedence for authors, and to provide a lasting record of scientific discussions, the discussion forum and the journal are both permanently archived and citable.",Feedback to authors|Validation of soundness,Quality of review|Speed|Transparency,Journal accepted manuscripts|Preprints,Free-form commenting|Structured review form,Open interaction|Open participation|Open reports,Author initiates review|Comment indexing|Journal integration|Pre-publication review|Reviewer or editor initiates review,All disciplines,"",NaN,"Enhancing effectiveness and transparency of scientific quality assurance",Project,https://publications.copernicus.org/services/public_peer_review.html,2001-09-03,,"Copernicus Publications",,,"The process aims to provide both rapid scientific exchange and thorough quality assurance. Through the immediate posting of manuscripts after a swift access review, scientists receive a fast record of their research as a preprint (discussion paper). The Interactive Public Peer Review enhances transparency as referee comments, author comments, and the comments of the scientific community are published online and are openly accessible. However, the process meets the criteria of traditional quality assurance as papers undergo revisions and are only published as final revised papers in the journal after final acceptance by the editor. In summary, the process fosters scientific discussion, maximizes the effectiveness and transparency of scientific quality assurance, enables rapid dissemination of new scientific results, and makes scientific publications freely accessible (Pöschl 2012, 10.3389/fncom.2012.00033, van Edig 2016, https://doi.org/10.3233/978-1-61499-649-1-28). 
This process is applied successfully in 20 of the 42 journals we publish.",NaN,,Yes,"In the access review, editors need to assess whether a manuscript is within the scope of the journal, whether it meets basic scientific quality, and whether it contributes something new to the respective field. After approval of a manuscript for Interactive Public Peer Review, it is posted as a discussion paper (preprint) in the journal’s discussion forum and is citable through a DOI. During the discussion phase different parties can engage in an iterative and developmental reflective process. In the interactive discussion, comments are posted by designated referees (anonymous or named). In addition, all interested members of the scientific community (named) can contribute as well. All participants are encouraged to stimulate further debate rather than simply preserve their position. The overall goal is to enhance the manuscript and thereby maximize the impact of the article. Usually, every discussion paper receives at least two comments by the referees, which are requested by the editor. Authors are encouraged to participate actively by posting author comments as a direct reply to referee comments and short comments from the scientific community. After the end of the open discussion, authors need to publish a response to all comments in case they have not done so during the discussion phase. In journals with post-discussion editor decision, the editor – based on the authors’ responses – either invites the authors to submit a revised manuscript or directly rejects the manuscript. If necessary the editor may also consult referees as is done during traditional peer review. In journals without post-discussion editor decision, authors need to submit their revised manuscript 4 to 8 weeks after the discussion. 
Taking the access peer review and interactive public discussion into account, the editor either directly accepts/rejects the revised manuscript for publication in the journal or consults referees again. If needed, revisions may be requested during peer-review completion until a final decision about acceptance/rejection is reached.","At least two referees, nominated by the editor, review the discussion paper. In addition, the scientific community is invited to join the interactive public discussion prior to publication. The following types of interactive comments can be submitted for instant non-peer-reviewed appearance alongside the discussion paper: a) Short comments (SCs) can be posted by any registered member of the scientific community (free online registration). Such comments are posted under the name of the person commenting. b) Referee comments (RCs) can only be posted by the referees peer-reviewing the discussion paper. Referees can choose to stay anonymous or to disclose their name. c) Editor comments (ECs) can only be posted by the editor handling the review process of the respective discussion paper. d) Author comments (ACs) can only be posted by the contact author of the discussion paper on behalf of all co-authors. All interactive comments are fully citable, paginated, and archived. All comments receive DOIs.",,"Journals have to select whether they want to apply the Interactive Public Peer Review or not. Hence, if a journal opts for it, all manuscripts submitted to this journal will follow this process.",0,"If a paper is accepted after Interactive Public Peer Review, the article processing charges of the respective journal apply.","10,000+","We provide the article level metrics (downloads, views), impact (citations), saves (bookmarks), and discussion information (social media) for discussion papers as well as final revised papers.","Interactive Public Peer Review has been successfully applied since 2001. 
Since then, the basic concept has remained unchanged even though additional features have been implemented (accelerated access review, post-discussion editor decision, post-discussion report publication, no more typesetting of the discussion papers). At the moment, 20 out of 42 journals we publish apply this peer-review process. Several journals have switched from traditional peer review to the Interactive Public Peer Review.",,,NaN,Verified,Active 827,"Episciences",20190226,"Episciences.org is an innovative combination of the two routes of open access: the gold route by hosting journals in full open access and the green route where articles are submitted to these journals by depositing them in an open archive (overlay journals). The editorial boards of such epijournals organize peer reviewing and scientific discussion of selected or submitted preprints. Epijournals are “overlay journals” built on top of selected open archives; they add value to these archives by attaching a scientific endorsement to the validated papers. Epijournals can be either new titles created from scratch, or existing ones wishing to join the platform with a compatible publishing policy.",Curation of interesting work|Feedback to authors|Validation of soundness,Costs|Speed|Transparency,Preprints,Free-form commenting|Quantitative scores|Structured review form,Single blind,Journal integration|Post-publication review|Pre-publication review,All disciplines,"",NaN,"Overlay Journal Platform",Project,https://www.episciences.org/,2013-01-01,,"CCSD (Center for Direct Scientific Communication) [CNRS ; INRA ; INRIA ; University of Lyon]",,,"The objectives are to achieve open journals and implement open access to digital versions of articles. The epijournals could be either new titles or existing ones wishing to join the platform. The Episciences.org platform could host epijournals of any scientific discipline. 
All data handled and generated by the platform or by its users are under the full control of the academic trustees.",NaN,Yes,No,"","",No,"",0,"","1,000-10,000","","",,,NaN,Verified,Active 831,"F1000Prime",20190226,"Launched in 2002, F1000Prime is an article recommendation service, powered by F1000. Prime was designed to help researchers to navigate an expanding corpus of scholarly literature, providing an easy way for researchers to discover emerging content that might have relevance for their research and/or to support the formulation of new ideas and potential collaboration. Prime effectively provides an expert (peer) curated service of published research that 'shouldn't be missed'. The F1000 faculty is a virtual collection of peer-nominated scientific experts (n=8000) from across the globe; Faculty Members (FMs) are required to write a short summary of the key interest points in any article that they find either interesting, novel, or challenging (among other categories/classifiers of interest); alongside the summary, FMs provide a semantic rating - 'Exceptional', 'Very good', or 'Good'. The recommendations provided by F1000Prime complement quantitative article-based citation metrics, providing a qualitative expert (human) view of the potential importance, interest and impact of a piece of research. It is estimated that around 1.5-2% of articles in PubMed are recommended in F1000Prime per annum. All recommendations receive a DOI and are fully citable, thus allowing transparency and visibility of the peer review activity involved and allowing the recommendations to potentially inform other research; all articles receiving a recommendation in F1000Prime are flagged in the article's Crossref record. We are working hard to secure ORCID iDs for all FMs and already push recommendations to an individual's ORCID iD to help provide recognition of the efforts of FMs. Traditionally, the audience and subscriber base for F1000Prime has been research and institutional libraries. 
Latterly, F1000Prime is used by funding agencies and institutions to support research impact assessment, evaluation and practical services such as peer review selection and grant management. Currently F1000Prime is a service focusing on recommendations in the life and medical sciences; however, during 2019 F1000Prime's scope is being broadened to cover psychology and the physical sciences, with more to come.",Curation of interesting work|Validation of soundness,Incentives and recognition for reviewing|Transparency,Journal accepted manuscripts|Other scholarly outputs|Preprints,Annotations|Free-form commenting|Quantitative scores|Structured review form,Open identities|Open reports,Comment indexing|Post-publication review,Life Sciences,"",NaN,"'Science you shouldn't miss' - expert-based article recommendation service",Project,https://f1000.com/prime,,,"F1000",,https://www.youtube.com/watch?v=IUNz6bE3c-A&t=2s,"F1000Prime intends to provide a route for qualitative (post-publication) review and commenting - the USP of F1000Prime is that the recommendations are provided by independent experts. F1000Prime intends to serve as a complement to more quantitative article-based citation metrics.",NaN,,No,"All recommendations are assigned a DOI and are citable and usable in other platforms, and pushed openly to the Crossref article record; user commenting functionality is in development and coming soon in 2019.","All recommendations have to be provided by an appointed Faculty Member. There are currently c. 8,000 virtual Faculty Members – all are peer nominated and approved by the appointed Heads of each Faculty – all must be Associate Professor (or equivalent) and above. F1000 manages the services but has no influence over who is appointed or what is recommended – experts are independent. 
",,"A research output (could be an article, a preprint, or another type of research output – and we are working to encourage recommendation of a broader range of research output types) has to be recommended by an appointed Faculty Member to be included in F1000Prime; any conflicts of interest associated with recommending a specific output must be openly declared.",NaN,"It varies depending on the type of the institution and number of active users; though F1000 offers free 30-day trials to all potentially interested clients. Please contact us for more information. https://f1000.com/subscriptions ","10,000+","","",,,NaN,Verified,Active 847,"reffit",20190227,"We are about empowering everyday scientists by giving them a platform to amplify their view of what makes new science great and what findings need to be evaluated more carefully.",Curation of interesting work|Feedback to authors|Validation of soundness,Bias in review|Incentives and recognition for reviewing|Quality of review|Speed|Transparency,Journal accepted manuscripts|Other scholarly outputs,Annotations|Free-form commenting|Quantitative scores,Open identities|Open participation|Single blind,Comment rating (meta-evaluation)|Post-publication review,Computer science|Life Sciences,"",NaN,"Giving scientists a platform for expressive peer review",Trial,https://reffit.com,,,"Greg Hale",https://github.com/imalsogreg/reffit,,"Mainstream adoption of a critical attitude toward published science; a beautiful, impactful, featureful medium for expression. Reffit's goal is to absorb the low-key, informal and honest tone of journal club criticism onto a platform where the discussion can be shared and distilled. The comments are free-form, but the site allows commenters to specify whether their comment was positive or negative, and whether it referred to a paper's reproducibility, novelty or coolness; that metadata is used to produce a numerical summary of those assessments.",NaN,No,No,"","Completely open. 
A reputation system for reviews is being planned.",Yes,"Any user can initiate a discussion about any paper. Inclusion doesn't mean much, but frontpage sorting suggests broad engagement.",0,"","10-100","","",,,NaN,Unverified,In Preparation 851,"F1000 Research",20190227,"F1000 Research is an innovative and dynamic publisher and technology provider, operating open research publishing platforms for researchers that offer fast publication, invited open peer review (post-publication) and full data deposition and sharing. The peer review takes place after publication. Once the article is published, expert reviewers (who are selected by the author) are invited on the authors’ behalf. The peer review is administered on behalf of the authors by the F1000Research editorial team. The peer review is entirely transparent: the reviewers’ names and affiliations, their reviews and the approval status they choose are published alongside the article. Reviews are posted as soon as they are received and the peer review status of the article is updated with every published review. For more detail, see the website.",,Bias in review|Costs|Incentives and recognition for reviewing|Quality of review|Reviewer training|Speed|Transparency,,Structured review form,Open identities|Open interaction|Open reports,Author initiates review|Post-publication review,Life Sciences,"",NaN,"Open Research Publishing model",Project,https://f1000research.com,,,"",,,"One of the key tenets of the open research publishing model is to improve the peer review process. Under this model, the role of the invited reviewers is not to accept or reject an article for publication – their remit is to help authors improve their work. This transparent process should encourage more constructive reviews and civil dialogue between authors and reviewers, who communicate directly with each other, without an editor arbitrating decisions, or acting as gatekeeper. 
The peer review process becomes a central part of the article and, because it is open, it allows readers to understand the full context of the article and see how other experts viewed the work.",NaN,,,"","",,"",NaN,"","1,000-10,000","","",,,NaN,Verified,Complete 859,"Involving patients and the public in the peer review process",20190227,"By playing a key part in determining what research gets funded and published, peer review has a pivotal role in determining the treatment and interventions patients receive and how medicine is practised. To ensure research is appropriate and relevant to end users, The BMJ now systematically invites patients and the public to review research manuscripts (and some other article types) alongside its peer reviewers. We have a series of research projects planned to evaluate this important initiative.",,Quality of review|Transparency,Journal accepted manuscripts|Privately shared manuscripts,Free-form commenting,Open identities|Open reports,Journal integration|Reviewer or editor initiates review,Life Sciences,"",NaN,"Patient and public involvement",Trial,https://www.bmj.com/campaign/patient-partnership,,,"BMJ",,https://www.youtube.com/watch?v=ZAFzvsmD-v8&list=PL6IxO3PNBkC_eemHQI3uYvP6Nd2j3L9QV&index=9,"Involving patients and the public in the research process has the potential to increase the quality and value of the research and reduce waste. Many grant funding organisations now incorporate members of the public in their review processes or panels, for example, the UK’s National Institute for Health Research (NIHR) and the Patient-Centered Outcomes Research Institute (PCORI) in the USA. By contrast, biomedical journals still do very little to involve patients and the public in their peer review processes. In 2014 The BMJ adopted an innovative strategy to coproduce its content with patients and the public. 
To ensure research is appropriate and relevant to end users, The BMJ now systematically incorporates patient and public review of research manuscripts (and some other article types) alongside traditional scientific peer review. Reviews are signed and published alongside accepted articles. We have a series of research projects planned to evaluate this important initiative.",NaN,,,"Patient and public reviewers are registered on the manuscript tracking software (ScholarOne) and invited by editors to review at the same time as peer reviewers, directly from the platform.","All patients and members of the public can sign up as a reviewer by completing our registration form (https://www.bmj.com/about-bmj/resources-reviewers/guidance-patient-reviewers). There are no inclusion/exclusion criteria.",,"",NaN,"","0","We recently surveyed patient reviewers to hear their views on the experience (see below). We also track time taken to review and acceptance to review rates.","We have received a high level of engagement with the initiative. Acceptance to review rates are good and reviews are timely. Reviewers' experiences are largely positive but they have also given us feedback on how to improve their experience. We have published the results from a survey describing early experiences (https://bmjopen.bmj.com/content/8/9/e023357). Further research into the value added by patient and public review is planned.  ",https://bmjopen.bmj.com/content/8/9/e023357,,NaN,Verified,Active 873,"Publons",20190227,"Publons was founded to address the static state of peer-reviewing practices in scholarly communication, with a view to encourage collaboration and speed up scientific development. The world's researchers now use Publons to keep track of their publications, citation metrics, peer reviews and journal editing work in a single, easy-to-use profile. 
Publons also partners with academic publishers to provide peer review solutions designed to bring greater transparency, efficiency, quality, and recognition to their peer review processes. The Publons Academy is a free peer review training course developed together with expert academics and editors, to teach early career researchers the core competencies of peer-reviewing.",,Incentives and recognition for reviewing|Quality of review|Reviewer training|Transparency,,,,,,"",NaN,"Make your research contributions more visible",Project,https://publons.com/,,,"",,https://vimeo.com/304505048,"",NaN,,,"","",,"",NaN,"","0","","",,,NaN,Verified,Active 878,"Effects of training on quality of peer review: randomised controlled trial",20190227,"Single blind randomised controlled trial at The BMJ, a general medical journal, to test the effect of two types of training on the quality of review, number of errors detected, time taken to review and recommendation for publication. There were two intervention groups (workshop versus self-taught package) and a control group receiving no training.",,Quality of review|Reviewer training,Privately shared manuscripts,,,,Life Sciences,"",NaN,"Training peer reviewers",Trial,https://www.bmj.com/content/328/7441/673,,,"BMJ",,,"Many studies have illustrated the inadequacies of peer review and its limitations in improving the quality of research papers. However, few studies have evaluated interventions that try to improve peer review. We conducted a randomised controlled trial to examine the effects of training. Training that would be feasible for reviewers to undergo and for a journal to provide would have to be short or provided at a distance. Although the effectiveness of short educational interventions is questionable, some brief interventions have been shown to be successful (depending on what is being taught and the methods used). 
We aimed to determine whether reviewers for The BMJ who underwent training would produce reviews of better quality than those who received no training; whether face-to-face training would be more beneficial than a self-taught package; and whether any training effect would last at least six months. The training materials for the trial are freely available on bmj.com (https://www.bmj.com/about-bmj/resources-reviewers/training-materials).",NaN,,,"","All UK reviewers who had recently completed a review at the time of the study were eligible and invited to take part in a research project.",,"",NaN,"","1-10","","Reviewers in the self-taught group scored higher in review quality after training than did the control group, but the difference was not of editorial significance and was not maintained in the long term. Both intervention groups identified significantly more major errors after training than did the control group. The evidence for benefit of training was no longer apparent on further testing six months after the interventions. Training had no impact on time taken to review but was associated with an increased likelihood of recommending rejection. (1) Schroter S, Black N, Evans S, Smith R, Carpenter J, Godlee F. Effects of training on the quality of peer review: A randomised controlled trial. BMJ 2004;328:673-5. (2) Schroter S, Black N, Evans S, Godlee F, Osorio L, Smith R. What errors do peer reviewers detect, and does training improve their ability to detect them? 
J R Soc Med 2008;101:507-14.",https://www.bmj.com/content/328/7441/673,,NaN,Verified,Complete 895,"Impact of assessing during peer review the consistency between the CONSORT checklist & the submitted manuscript on the completeness of reporting",20190228,"Randomised controlled trial in collaboration with BMJ Open, a general medical journal, to evaluate in a real editorial context whether assessing during peer review the consistency between the submitted CONSORT checklist and the information reported in the manuscripts of randomised trials, as well as providing feedback to authors on the inconsistencies found, improves the completeness of reporting of published trials.",Feedback to authors,Quality of review|Transparency,,Free-form commenting,Open identities|Open reports,Journal integration|Reviewer or editor initiates review,Life Sciences,"",NaN,"Improving the transparency of randomised controlled trials",Trial,https://clinicaltrials.gov/ct2/show/NCT03751878,2018-12-01,2019-07-31,"David Blanco's PhD project under the supervision of Erik Cobo at Universitat Politècnica de Catalunya and Jamie Kirkham at Liverpool University, in collaboration with BMJ Open.",,,"Randomised trials are considered the gold standard in medical research. The CONSORT Statement aims to improve the quality of reporting of randomised trials. Without transparent reporting, readers cannot judge the reliability of trial findings; therefore, these findings cannot inform clinical practice. Different stakeholders, including biomedical journals, have taken different actions to try to improve author adherence to CONSORT. The most popular one is to instruct authors to submit a completed CONSORT checklist with page numbers indicating where the CONSORT items are addressed when they submit their manuscript. However, this measure alone has proven not to be effective. 
In this study, we intend to assess whether evaluating the checklists submitted by authors and providing feedback to them as part of the peer review process improves the completeness of reporting of published trials.",NaN,,Yes,"Randomised controlled trials in the intervention group will receive an additional peer review (conducted by the principal investigator) focused entirely on the consistency of reporting between the submitted manuscript and the completed CONSORT checklist. Where inconsistencies are found, authors of these manuscripts will be asked for changes in relation to the items. Trials in the control group will undergo the usual peer review process.","Eligibility criteria: Manuscripts are eligible for our study if (i) they have been submitted to BMJ Open, (ii) they are original research submissions reporting the results of a randomised trial and (iii) they have passed the first editorial filter and have been subsequently sent out for peer review.",,"",NaN,"","0","","This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 676207. Results will be available in September 2019.",,https://clinicaltrials.gov/ct2/show/NCT03751878,NaN,Verified,Active 903,"Impact of a short form of the CONSORT checklist for peer reviewers to improve the reporting of randomised controlled trials published in biomedical journals: a randomised controlled trial",20190228,"Despite improvement with the implementation of the CONSORT Statement (CONsolidated Standards for Reporting Trials), there remain major reporting deficiencies in published randomised controlled trials (RCTs). We plan to conduct, in collaboration with a number of journals, an RCT evaluating our hypothesis that reminding peer reviewers of the most important CONSORT items (including a short explanation of those items) will result in higher adherence to CONSORT guidelines in published RCTs. 
During the standard peer-review process, peer reviewers will be randomly allocated to use either (i) a short version of the CONSORT checklist including the ten most important and poorly reported CONSORT items as defined by a group of experts of the CONSORT Statement (C-Short); or (ii) no checklist. The primary outcome will be the overall frequency of adequate reporting of the ten most important CONSORT items in published articles. Within an innovative RCT, we plan to randomise peer review of submitted manuscripts to an intervention or control group (i.e. use of C-Short or the usual approach) and evaluate whether reporting can be improved by raising peer reviewers' awareness of the most important CONSORT items.",,,Journal accepted manuscripts,,,,,"",NaN,"Randomising peer reviewers to a CONSORT checklist to improve adequate reporting",Trial,http://www.nowebsite.ch,,,"",,,"Within an innovative RCT, we plan to randomise peer review of submitted manuscripts to an intervention or control group (i.e. use of C-Short or the usual approach) and evaluate whether reporting can be improved by raising peer reviewers' awareness of the most important CONSORT items. If this intervention proves effective, it could be easily implemented by all medical journals without needing additional resources at a journal level.",NaN,,,"","",,"",0,"","100-1,000","","",,,NaN,Verified,In Preparation 904,"Transparent Peer Review",20190228,"A collaborative project involving Wiley together with Publons and ScholarOne (both part of Clarivate Analytics). The project gives authors the choice of transparent peer review on submission to a journal regardless of the current model of peer review the journal is operating (for example, single blind, double blind). If the article is published and the authors have opted for transparent peer review, the peer reviewers’ reports, authors’ responses, and editors’ decisions will accompany publication. 
Reviewers have the option to disclose their names alongside their reports. The peer review history is openly available on a page hosted by Publons via a link from the article. Each component has a DOI, ensuring each element is fully citable. For those reviewers who choose to sign their reviews, the DOIs can also be added to their ORCID records.",,Incentives and recognition for reviewing|Quality of review|Transparency,Journal accepted manuscripts|Other scholarly outputs,Structured review form,Double blind|Open identities|Open reports|Single blind,Professional editors,All disciplines,"",NaN,"Developing an automated and scalable approach to transparent peer review.",Project,https://hub.wiley.com/community/exchanges/discover/blog/2019/01/22/progressing-towards-transparency-more-journals-join-our-transparent-peer-review-pilot,2018-09-13,,"Operated by Wiley, together with Publons and ScholarOne (both part of Clarivate Analytics).",,,"Wiley is committed to moving towards greater openness and reproducibility of research, including increasing transparency in peer review. A transparent peer review workflow shows readers the process behind editorial decision making, increases accountability, and helps recognise the work of editors and peer reviewers. Journals joining the pilot have the flexibility to incorporate transparency into their existing workflows and processes without making other major changes to how they conduct peer review. When combined with author choice (and peer reviewer choices), this approach provides a very flexible way to open up peer review. Recognising the need to change the conversation around transparent peer review and to learn more, it’s vital that any initiative has the capability to scale and be compatible with different peer review models, across diverse subject disciplines and publisher workflows. We have taken a collaborative approach working with Publons and ScholarOne to enable this.",NaN,,Yes,"","Editors invite reviewers to participate.
Reviewers are then asked to agree to transparent peer review; disclosing their name is optional.",,"",0,"","100-1,000","It is possible to track how many people view the peer review history, linked from the article to Publons.","The pilot started in September 2018 with the inclusion of the first journal, Clinical Genetics. As of end January 2019, 77% of authors have agreed to transparent peer review (417 out of 528 submissions) and 23% of reviewers have signed their reports. These initial results will be presented at the Open Science Conference in Berlin (March 2019). The pilot was extended in January 2019 to include a further ten journals, and we will be reviewing how the pilot is progressing in due course.",https://www.open-science-conference.eu/,,NaN,Verified,Active 907,"Effect of open peer review on quality of reviews and on reviewers' recommendations: a randomised trial",20190228,"In 1998 we conducted a randomised trial at The BMJ, a large general medical journal, to examine the effect on peer review of asking reviewers to have their identity revealed to the authors of the paper. Outcome measures included review quality (using the validated Review Quality Instrument), the recommendation regarding publication, and the time taken to review.",,Quality of review|Speed|Transparency,,Free-form commenting,Open identities,Journal integration|Reviewer or editor initiates review,Life Sciences,"",NaN,"Randomised trial of open review",Trial,https://www.bmj.com/content/318/7175/23,,,"BMJ",,,"Previous RCTs at The BMJ failed to confirm that blinding of reviewers to authors' identities improved the quality of reviews, so we decided to experiment with open identities peer review. To us it seemed unjust that authors should be “judged” by reviewers hiding behind anonymity: either both should be unknown or both known, and it is impossible to blind reviewers to the identity of authors all of the time.
We therefore conducted a randomised trial to confirm that open review did not lead to poorer quality opinions than traditional review and that reviewers would not refuse to review openly (because open review would then be unworkable).",NaN,No,,"","Consecutive manuscripts received by the BMJ and sent by editors for peer review during the first seven weeks of 1998 were eligible for inclusion. Four potential clinical reviewers were selected by one of the BMJ's 13 editors. Two of these four reviewers were chosen to review the manuscript. The other two were kept in reserve in case a selected reviewer declined. The selected reviewers were randomised either to be asked to have their identity revealed to authors (intervention group) or to remain anonymous (control group), forming a paired sample.",,"",NaN,"","100-1,000","A questionnaire survey was undertaken of the authors of a cohort of manuscripts submitted for publication to find out their views on open peer review.","Asking reviewers to consent to being identified to the author had no important effect on the quality of the review, the recommendation regarding publication, or the time taken to review, but it significantly increased the likelihood of reviewers declining to review. Most authors were in favour of open peer review. (van Rooyen S, Godlee F, Evans S, Black N, Smith R. Effect of open peer review on quality of reviews and on reviewers' recommendations: A randomised trial. BMJ 1999; 318: 23 – 27.)",https://www.bmj.com/content/318/7175/23,,NaN,Verified,Complete 911,"Axios Review",20190228,"Axios Review exists to eliminate the huge inefficiency at the heart of scientific publishing: Authors cannot currently submit to a journal that wants their paper. The resulting search for a suitable journal wastes researcher time, inflates the cost of journals, and ultimately slows the progress of science. Axios applies the long-established brokerage model to scientific publishing. 
We conduct a rigorous editor-led peer review process and then find a journal that wants the paper. This approach is highly effective: 85% of Axios papers get accepted at the journal (compared to <30% for the current system). Half of the accepted Axios papers are not re-reviewed by the journal. Our papers are published with under 2 rounds of review (vs. 5 in the current system) without any loss of rigour.",Curation of interesting work|Feedback to authors|Validation of soundness,Bias in review|Costs|Quality of review|Speed,Preprints|Privately shared manuscripts,Annotations|Free-form commenting|Structured review form,Double blind|Single blind,Author initiates review|Journal integration|Pre-publication review,Life Sciences,"",NaN,"A brokerage between authors and journals",Project,https://dynamicecology.wordpress.com/2015/03/05/axios-review-is-working-you-should-try-it/,2013-02-01,2017-02-23,"",,,"The current process for publishing scientific research is highly inefficient: - Researchers need to publish their papers in high profile journals to win funding. - High profile journals receive many thousands of research papers every year. - 70 to 95% of these papers are rejected, the majority for poor fit or lack of novelty. - Researchers must then submit their papers to another journal. - Most papers are assessed again and again by different journals before being published. - This takes up researcher time, raises the cost of journals, and slows the progress of science.",NaN,No,No,"Axios Review applies the long-established brokerage model to facilitate the interaction between authors and journals: - Scientists send us their papers and suggest target journals. - We use standard editor-led anonymous peer review to evaluate the research. - We offer the paper to the appropriate target journals. - The journals use our evaluation to decide whether they want the paper. - Once we have found an interested journal, authors revise their paper and send it in. 
- Journals either re-review the paper or accept it for publication immediately. Scientists can thus submit to a journal that wants their paper, and journals receive papers they actually want.","Axios staff recruited academic Editors; the Editors picked reviewers (with occasional help from Axios staff)",No,"Articles reviewed by Axios generally appeared in normal academic journals. Authors were asked to add a line thanking Axios Review for feedback to their acknowledgements.",250,"The fee covered i) editorial office & staff salaries, ii) overheads, and iii) promotional activities.","100-1,000","We collected lots of metrics.","We received 470 submissions, with almost 200 of those arriving in 2016. At the last time of checking (early 2017), 130 had been published in academic journals, mostly high profile journals in ecology and evolution. We consistently found that 85% of Axios papers were accepted at the interested journal (compared to under 30% for the current system). Half of the accepted Axios papers are not re-reviewed by the journal, greatly increasing the speed of the publishing process. Most Axios papers were published with fewer than 2 rounds of review, compared to over 5 in the current system. Excluding the time authors spend on revisions, we averaged 3 months between the paper arriving at Axios and being published in a journal.",,,NaN,Verified,Complete 963,"Peerage of Science",20190305,"How it works: Free for scientists, journals pay. 1) Author submits manuscript, chooses Anonymous or Onymous mode, chooses process deadlines. 2) Manuscript becomes available for any validated community member to engage as reviewer, or recommend others as reviewers. Reviewer Essays must have sections Merit, Critique and Discussion, max 1000 words, may include list of references. 3) Reviewers must judge, score and give feedback to each other’s Essays, for accuracy and fairness in each of the three sections.
Scores build the reviewer’s performance metric, which can be shown publicly in the reviewer’s profile. 4) Author uploads revised version. 5) Reviewers give final assessment on whether revision addresses valid critique, and if it is Publishable, Revisable, or Unpublishable. 6) Editors of multiple (currently 70) journals can access any peer reviews and can send authors a direct, private offer to publish, or an invitation to submit. Authors can also click a journal logo to initiate correspondence.",Feedback to authors|Validation of soundness,Bias in review|Costs|Incentives and recognition for reviewing|Quality of review|Reviewer training|Speed|Transparency,Other scholarly outputs|Preprints|Privately shared manuscripts,Free-form commenting|Quantitative scores|Structured review form,Double blind|Open interaction|Open participation|Single blind,Author initiates review|Comment rating (meta-evaluation)|Journal integration,All disciplines|Life Sciences,"",NaN,"Open engagement, judged review quality, multiple journals simultaneously",Project,https://peerageofscience.org,2011-11-01,,"Peerage of Science Ltd",,https://www.youtube.com/watch?v=pZ82MHskV6M&feature=youtu.be,"Once upon a time in a land probably far, far away from where you are now, a postdoc was (as they often are) sadly noticing another year commencing on the peer review process of his manuscript. It was his own fault of course: it had been his own darn choice to start the descent down the journal prestige ladder from the top, where the steps are most slippery. Yet, it was dawning on the young scientist that his illusions about scientific peer review were meeting reality. Perhaps he should have paid more attention to the words of Richard Horton: ""We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller.
But we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong."" Let’s change that.",NaN,Yes,Yes,"https://www.peerageofscience.org/how-it-works/overview/ https://www.peerageofscience.org/how-it-works/process-flow/","Reviewers: external verification that there exists at least one article where the person is the first or corresponding author, published in a PubMed- or ISI-indexed journal. Once verified, the person gets Peer status, and thereafter can access and engage any manuscript, except those submitted by authors affiliated with the Peer via institution or co-authorship within the last 3 years. Editors: appointed by participating journals.",,"Peerage of Science itself does not publish or publicly display the outcomes, but facilitates the selection processes of participating publishers. Access to manuscripts and peer reviews is strictly limited to validated Peers and editors. Submissions are pre-screened before posting to the community, and articles judged pseudoscience are rejected. Peerage of Science itself does not appoint or solicit reviewers, nor does it limit the number of reviewers. Some manuscripts fail to get reviews at all (ca 26% of submissions), most get at least one, while some have received eight full Essays.",0,"","1,000-10,000","","",,,NaN,Verified,Active 976,"protocols.io",20190306,"The mission of protocols.io is to make it easy to share method details before, during, and after publication. Published research protocols get community annotation at the protocol and step level.
Authors often improve the originals based on this feedback, easily creating updated versions with a few clicks.",Feedback to authors|Validation of soundness,Costs|Speed|Transparency,Other scholarly outputs,Annotations|Free-form commenting,Open identities|Open interaction|Open participation,Post-publication review|Reviewer or editor initiates review,,"",NaN,"dynamic and interactive research protocols with community review",Project,https://www.protocols.io/,2014-02-11,,"protocols.io",,https://www.youtube.com/watch?v=wvqw6PPl0eY&t=3s,"The protocols.io platform was inspired by the frustration of one of the co-founders regarding lack of a central space to share corrections and optimizations of previously-published methods. After launching, the team behind protocols.io realized that most research papers do not contain adequate descriptions of the methods. So the mission of the company evolved into encouraging culture change towards better reporting of the details upon publication and then enabling scientists to keep the methods up-to-date.",NaN,,,"Handled by competent reviewers with good editors, pre-publication peer review improves the submitted papers. However, improving quality is not the same as ensuring it. A typical reviewer is unlikely to verify the scripts and rerun analyses reported in a genomics paper. Similarly, it would be highly unusual for a reviewer of an experimental molecular biology paper to dash to the lab and attempt to reproduce the reported results as part of checking the paper. The peer review that can truly verify any given paper is the gradual process of replication and extension of the published work over time. When authors of a given paper or other researchers follow up on a publication – then and only then – do we really catch mistakes, improve, and verify the work. This is why it is so important to report not only the results, but the code, data, and the detailed methods used to produce these results. 
This is why the efforts of publishers like GigaScience are so important. We were driven to create protocols.io by the desire to establish a central place where scientists can share and discover optimizations and corrections of published methods; our hope is that through versioning and collaborative sharing post-publication, methods can evolve and remain up-to-date, long after the original publication. Excellent example of such community review: https://www.protocols.io/view/cut-run-targeted-in-situ-genome-wide-profiling-wit-mgjc3un","Any registered user is allowed to comment.",Yes,"We try to expose all possible metrics that may be helpful in assessing whether a protocol is likely to work. Each public protocol has a dedicated ""metrics"" tab with this information. Example: https://www.protocols.io/view/cut-run-targeted-in-situ-genome-wide-profiling-wit-mgjc3un/metrics",0,"The protocols.io platform is free to read and share for public content. For private collaboration, there is a limit of 5 free private protocols per user, requiring a $5/month subscription above that.","100-1,000","","",,,NaN,Verified,Active 982,"Publishing review reports (PRR)",20190306,"For detailed article please check or OA article and its supplementary data: https://www.nature.com/articles/s41467-018-08250-2 In November 2014, five Elsevier journals agreed to be involved in the Publication of Peer Review reports as articles (from now on, PPR) pilot. During the pilot, these five journals openly published typeset peer review reports with a separate DOI, fully citable and linked to the published article on ScienceDirect. Review reports were published freely available regardless of the journal’s subscription model (two of these journals were open access, while three were published under the subscription-based model). For each accepted article, all revision round review reports were concatenated under the first round for each referee, with all content published as a single review report. 
Different sections were used in cases of multiple revision rounds. For the sake of simplicity, once they had agreed to review, referees were not given any opt-out choice and were asked to give their consent to reveal their identity. In agreement with all journal editors, a text was added to the invitation letter to inform referees about the PPR pilot and their options. At the same time, authors themselves were fully informed about the PPR when they submitted their manuscripts. Note that while one of these journals started the pilot earlier in 2012, for all journals the pilot ended in 2017.",,Bias in review|Incentives and recognition for reviewing|Quality of review|Speed|Transparency,Journal accepted manuscripts,Free-form commenting,Open identities|Open reports|Single blind,Comment indexing|Professional editors|Reviewer or editor initiates review,,"",NaN,"Transparency in peer review and credit to reviewers",Trial,https://www.nature.com/articles/s41467-018-08250-2,2014-11-10,2017-12-31,"Elsevier",,,"Our aim was to understand whether knowing that their report would be published affected the referees’ willingness to review, the type of recommendations, the turn-around time and the tone of the report. These are all aspects that must be considered when assessing the viability and sustainability of open peer review. By reconstructing the gender and academic status of referees, we also wanted to understand whether these innovations were perceived differently by certain categories of scholars.",NaN,Yes,Yes,"","Editors are appointed for the journals. They choose reviewers and manage the peer review process.",No,"",NaN,"","10,000+","Yes, article page usage data on ScienceDirect shows that one out of three clicks on the pilot journal article pages is on the peer review reports.
Surveying editors of participating journals shows editors are using published peer review reports as resource material for training young reviewers.","We measured changes both before and during the pilot and found that publishing reports did not significantly compromise referees’ willingness to review, recommendations, or turn-around times. Younger and non-academic scholars were more willing to agree to review and provided more positive and objective recommendations. Male referees tended to write more constructive reports during the pilot. Only 8.1% of referees agreed to reveal their identity in the published report. These findings suggest that open peer review does not compromise the process, at least when referees are able to protect their anonymity.",https://www.nature.com/articles/s41467-018-08250-2,,NaN,Verified,Complete 988,"External Tests of Peer Review Validity Via Impact Measures",20190306,"A review of the current literature on validating peer review decisions with research output impact measures is presented here; only 48 studies were identified, about half of which were US based, and sample size per study varied greatly. 69% of the studies employed bibliometrics as a research output. While 52% of the studies employed alternative measures (like patents and technology licensing, post-project peer review, international collaboration, future funding success, securing tenure track positions, and career satisfaction), only 25% of all projects used more than one measure of research output. Overall, 91% of studies with unfunded controls and 71% of studies without such controls provided evidence for at least some level of predictive validity of review decisions. However, several studies reported observing sizable type I and II errors as well.
Moreover, many of the observed effects were small, and several studies suggest a coarse ability to discriminate poor proposals from better ones, but not amongst the top tier proposals or applicants (although discriminatory ability depended on the impact metric). This is of particular concern in an era of low funding success, where many top tier proposals are unfunded.",Curation of interesting work,Quality of review,Journal accepted manuscripts,,,,Life Sciences,"",NaN,"Literature Review of Grant Review Validity Studies",Project,https://www.frontiersin.org/articles/10.3389/frma.2018.00022/full,,,"American Institute of Biological Sciences",,,"There is a need to consolidate evidence surrounding grant peer review's validity, at least based on inputs and outputs into the system, as the evidence base is sparse, spread out, and siloed across the literature. So this was an attempt to get a consensus view, again using quality measures of inputs into the review system and productivity/impact measures of outputs of the system to get a sense as to whether reviewers can make valid decisions and what are some of the limitations. It is clear from the compiled results that while there is a lot of subjectivity in the system which keeps reliability for any individual proposal low, reviewers can discriminate between good and bad proposals to some degree but have a much more difficult time separating out excellent from outstanding ones. Yet, in the age of low success rates, that is exactly what we have tasked reviewers to do.",NaN,,,"","",,"",NaN,"","0","","",,,NaN,Verified,Complete 1015,"Grassroots Journals",20190306,"Grassroots Journals assess the quality of scientific studies and aim to make it irrelevant where a study was originally published. A grassroots journal collects, assesses, categorizes and ranks all articles relevant for a specific topic/community.
Thus it brings together articles that are now spread over many publications and provides better access to the scientific literature. It does not publish articles, but only reviews them. Post-publication review means that the assessments are up to date. It includes articles of all quality levels by quantifying the quality; this avoids the need for multiple reviews at multiple journals. All assessments are open access and the code is open source.",,Costs|Quality of review|Transparency,Other scholarly outputs,Annotations|Free-form commenting|Quantitative scores|Structured review form,Open identities|Open interaction|Open participation|Open reports|Single blind,Comment moderation|Post-publication review|Professional editors|Reviewer or editor initiates review,All disciplines|Earth & space sciences,"",NaN,"Open Post-publication review organized like a journal",Project,https://grassroots.is/,,,"Victor Venema",,,"Breaking the power of the traditional publishers and bringing the assessment of what good science is back to the scientific community.",NaN,,No,"","The editors of a Grassroots Journal invite experts to review, but everyone else is also welcome to add comments.",,"",0,"","1-10","","",,,NaN,Verified,In Preparation 1022,"Science Open Reviewed",20190306,"Science Open Reviewed is an online community of researchers in the natural sciences, health sciences, and social sciences using a unique model for promoting: efficient and accountable author-directed open (non-blind) peer review; effective reviewer participation incentives and reputation metrics; and rapid dissemination of discovery and commentary. SciOR is free to everyone because it is not operated by a profit-driven commercial enterprise, nor by a traditional paywall-based journal — but by front-line researchers, using a novel library-based scholarly communication system, sponsored by a leading university research institution.
",Curation of interesting work|Feedback to authors|Validation of soundness,Bias in review|Costs|Incentives and recognition for reviewing|Quality of review|Transparency,Journal accepted manuscripts|Preprints,Free-form commenting,Open identities,Author initiates review|Journal integration|Pre-publication review,All disciplines,"",NaN,"Connecting authors with reviewers for journals",Trial,https://science-open-reviewed.com/webapp/,,,"",,,"The mission of SciOR is to promote the progress of science as a mission for discovery, rather than a mission for elitism.  Specifically, SciOR provides support for extension and refinement of mechanisms and opportunities for: (1) increasing reviewer incentive and reputation rewards, and thus increasing the pool of available and willing reviewers; and (2) promoting transparent and accountable open peer-review, and thus limiting the incidence of biased and poor quality reviews often associated with blind reviewing.",NaN,,No,"","Authors invite reviewers or respond to potential reviewers who have contacted the author. Reviewers are eligible if they hold a PhD, or are registered in a PhD program at an accredited institution",No,"Authors submit their reviewed paper to the journal of their choice, together with the reviews, and the identification of the reviewers.",0,"","0","","",,,NaN,Verified,NA 1062,"In Review",20190307,"Powered by Research Square and developed in partnership with BMC, In Review aims to open up the peer review process. The first service of its kind, In Review provides authors with on-demand access to the status of their manuscript and tracks all major editorial events such as when reviewers have been invited, have agreed, and have submitted a report. By allowing authors to post their work early, In Review gives authors the opportunity to showcase their work to funders and others, and to engage the wider community for comment while their manuscript is under review. 
The result is earlier feedback, and the potential for more citations and collaboration opportunities.",,Speed|Transparency,Preprints,Annotations|Free-form commenting,,Journal integration|Professional editors|Reviewer or editor initiates review,,"",NaN,"A new service enabling prepublication sharing and peer review tracking.",Project,https://www.springernature.com/gp/authors/campaigns/in-review,,,"Springer Nature and Research Square",,https://www.youtube.com/watch?v=J9PSGPYZxJs&feature=youtu.be,"Experimenting with new initiatives to improve peer review and advance research has been something long championed at BMC. Created in partnership with Research Square, there were two important goals for In Review. First, giving researchers the ability to share their work with funders and others in a citeable way on submission to journals. This allows them to engage the community in discussion to help improve their article either through the public commenting tool or the Hypothes.is open annotation tool. We anticipate early sharing will help authors strengthen their manuscripts as well as spur collaboration opportunities.  Secondly, In Review would give authors a way to track the status of their manuscript in real time through the peer review process, including when reviewers have been invited, and reviewer reports have been received. This increased transparency has been well received by authors who have opted-in to the service since October 2018.",NaN,,Yes,"In Review is integrated with journals. The platform hosts information about the peer review timeline and status publicly. In addition, there is a private author dashboard which will enable authors to read the peer reviews as they are received.  This is under development and will launch in 2019. Authors can also use the dashboard to recommend reviewers. One important feature is that manuscripts that are not accepted for publication at the journal will remain on the platform as preprints. 
This allows authors to continue to share and get feedback on their manuscript, but also gives them the freedom to submit to other journals. The peer review history is only available privately through the author dashboard. As peer review becomes more open over time, full peer reviews may be made available publicly on the platform.","Anyone is able to comment on an article.",Yes,"All submissions undergo an Integrity Check, which includes plagiarism, ethics and conflict of interest checks. This is denoted with a checkmark icon on the article page. A drop-down list shows the checks that have been completed.",0,"","1,000-10,000","We track author uptake of the service (averages 56% as at Mar 2019) and unique article views.","As at March 2019, the project has been rolled out on eight BMC journals. There have been over 1000 submissions, with an average 56% opt-in rate. Springer Nature will be rolling this out across many more journals in 2019.",,,NaN,Verified,Active 1140,"WikiJournal User Group",20190319,"The three WikiJournals (Medicine, Science and Humanities) are Wikipedia-integrated academic journals. This firstly means that in addition to research articles, they also publish Wikipedia-style encyclopedic review articles. These review articles can be written from scratch or based on Wikipedia articles, and can be integrated back into Wikipedia after peer review. Secondly, the WikiJournals are hosted and supported by the Wikimedia Foundation and so are free to both authors and to readers. And third, the WikiJournals use the same MediaWiki software as Wikipedia.
Articles automatically benefit from features like version control and discussion pages, and anyone can contribute to them, including the peer reviewers, and the journals' academic editors.",Feedback to authors|Validation of soundness,Costs|Quality of review|Transparency,Journal accepted manuscripts|Preprints,Annotations|Free-form commenting,Double blind|Open identities|Open interaction|Open participation|Open reports|Single blind,Author initiates review|Post-publication review|Professional editors,,"",NaN,"Open access, free to publish, Wikipedia-integrated",Project,https://en.wikiversity.org/wiki/WikiJournal_User_Group,2014-03-28,,"Thomas Shafee, EiC WikiJSci",,,"The main original goal of WikiJournals is to encourage and recognize contributions to Wikipedia by academics. To do this, the WikiJournals allow Wikipedia articles to have named authors, DOIs, a stable PDF version of record, and to undergo academic peer review. Moreover, WikiJournals strive to achieve a broad thematic scope that covers practically all fields of research, and to provide a zero cost, open access publishing venue for the fields that lack one. And finally, WikiJournals implement recognized best practices, in particular a form of open peer review where not only the reviews are made public, but also anyone can contribute.",NaN,,Yes,"Thanks to the MediaWiki platform, peer review at WikiJournals is extremely flexible. While comments themselves are always made public, the authors, reviewers and other participants may be named, pseudonymous or anonymous. Interactions between all the participants are possible before, during and after the review process. Reviewers may choose to perform small corrections themselves, rather than suggesting them to authors. 
And authors are able to revert modifications thanks to version control.","Editors are selected for their professional credentials as researchers, their experience with academic publishing, and their experience with open projects and in particular Wikipedia. Reviewers are selected for their status as experts in the reviewed article's subject matter. As in Wikipedia, anyone is eligible to comment: it is not even necessary to create an account.",,"Articles are accepted for publication (incl. PDF, DOI, indexing, etc.) once they have been assessed by at least two external peer reviewers (invited by the editorial board) and their responses and edits have been agreed to by both the peer reviewers and the editorial board. Suitable material is integrated into Wikipedia, as a 'living version' where it is available for post-publication review and freely editable by anyone.",0,"All editors are currently volunteers; hosting costs are supported by the Wikimedia Foundation","10-100","Currently tracked are: the number of times that an article is read in the journal and on Wikipedia (for those integrated into Wikipedia); the number of pre-publication peer reviewers; post-publication peer review; the social media impact of each article by AltMetric.com. Later, an impact factor will be sought by ISI Web of Science.","As of 2019: 49 submitted articles, 36 accepted for publication, 12 declined, 1 withdrawn, 25 currently in review. Of the 36 published, 27 integrated into Wikipedia (of which 8 were adapted by updating/overhauling existing Wikipedia content). Mean number of external peer reviewers = 3.1 per article, additional suggestions from editors = 1.2 per article, spontaneous pre-publication suggestions from readers = 0.1 per article. Extensive post-publication commentary performed on 'living' Wikipedia versions of articles. 75% of peer reviewers choose to have their identity open. WikiJournal articles integrated in full or part into Wikipedia receive 6.9 million views per annum so far.
Combined citations: 121 per Google Scholar. Combined AltMetric score >200.",,,NaN,Verified,Active 1150,"Karma Credits (and Blockchain/Cryptocurrency Incentives) for Reviewers, Editors, Authors",20190326,"Karma Points are a measure of your contributions to the scholarly community. Karma implies that every good deed will be paid back - and in fact, JMIR Publications has created an innovative system that rewards productive members of the scientific community for contributions such as peer-reviewing or editing for JMIR journals and partner journals. A new cryptocurrency (R-Coin) and blockchain project will make karma credits portable to other publishers - contact us if you are interested in collaborating.",,Incentives and recognition for reviewing,,,,,,"",NaN,"Get points for peer-review and editing, redeemable for APC credits",Project,https://jmir.zendesk.com/hc/en-us/articles/115001104247,2016-03-01,,"JMIR Publications",,,"Incentivize peer-reviewers, editors, and authors, and provide fair remuneration in the form of APC discounts to our community members, commensurate with their productivity and involvement.",NaN,,,"","",,"",NaN,"","1,000-10,000","","",,,NaN,Verified,Active 1155,"Blockchain for Peer Review",20190326,"Digital Science, Springer Nature, Taylor & Francis, Cambridge University Press, ORCID and Katalysis, with the Wellcome Trust as advisors, joined forces in an industry-wide, not-for-profit initiative with the intent of solving the challenges around the peer review process. By building a shared, fully GDPR-compliant infrastructure for sharing peer review information while fully respecting demands around confidentiality and privacy, and supporting single-blind and double-blind alongside open review models, the process can be made more transparent, recognizable, transportable and efficient. The initiative was launched in March 2018. 
Having successfully delivered and tested our PoC, we plan to launch our first MVP in 2019 on top of the infrastructure, focused on validating and disclosing the review process through Crossmark.",,Incentives and recognition for reviewing|Quality of review|Speed|Transparency,,,,,,"",NaN,"Making the review process more transparent, efficient, recognizable and transportable",Trial,https://www.blockchainpeerreview.org/,,,"",,,"We believe that most of the problems around the peer review process are caused by the fact that, due to the confidential nature of the process, peer review is traditionally siloed and a 'black box'. The lack of transparency can lead to fraud and manipulation, results in a lack of recognition for reviewers, and hinders the exchange of review history between journals. Moreover, the lack of coordination between journals and publishers, especially on shared reviewers, leads to many inefficiencies. The project's aim is to build a shared infrastructure on a not-for-profit basis that will allow all participants in the ecosystem (publishers, institutions, funders, and of course editors, authors and reviewers themselves) to exchange information around the process, while respecting its confidential nature.",NaN,,,"","",,"",NaN,"","0","","",,,NaN,Verified,In Preparation 1159,"IRRIDs: International Registered Report Identifier (IRRID)",20190326,"IRRIDs are a proposal - implemented at JMIR Research Protocols and all JMIR journals - to link protocols to results papers and vice versa. The IRRID is a simple identifier consisting of the DOI of the published protocol paper (or another unique identifier), preceded by RR1 (for the protocol) or RR2 (for the results paper). There are some other attributes identifying whether the protocol was written before data collection or during/after. 
We hope to form an international collaboration with other publishers who adopt this format and encourage the publication of protocols/registered reports to improve the accountability and quality of research. If you are an editor/publisher who wishes to adopt this format and wants to be listed as a partner journal/publisher, please contact support@jmir.org.",,Quality of review,Journal accepted manuscripts|Other scholarly outputs,,,,,"",NaN,"Linking research result papers to protocols",Project,https://jmir.zendesk.com/hc/en-us/articles/360003797672,2018-09-01,,"JMIR Publications",,,"Improve accountability and facilitate the linkage of registered reports. Reviewers can then more easily identify protocols and judge whether there have been protocol deviations or post-hoc conclusions that were not part of the original hypotheses.",NaN,,,"","",,"",NaN,"","100-1,000","","",,,NaN,Verified,Active 1178,"Research: Gender bias in scholarly peer review",20190330,"Peer review is the cornerstone of scholarly publishing and it is essential that peer reviewers are appointed on the basis of their expertise alone. However, it is difficult to check for any bias in the peer-review process because the identity of peer reviewers generally remains confidential. Here, using public information about the identities of 9000 editors and 43000 reviewers from the Frontiers series of journals, we show that women are underrepresented in the peer-review process, that editors of both genders operate with substantial same-gender preference (homophily), and that the mechanisms of this homophily are gender-dependent. 
We also show that homophily will persist even if numerical parity between genders is reached, highlighting the need for increased efforts to combat subtler forms of gender bias in scholarly publishing.",,Bias in review,Privately shared manuscripts,,Open identities,,All disciplines,"",NaN,"",Project,https://elifesciences.org/articles/21718,2007-01-01,2015-12-31,"Demian Battaglia",,,"",NaN,,,"","",,"",NaN,"","Unknown","","In this study, we found that apart from a few outliers depending on country and discipline, women are underrepresented in the scientific community with a very slow trend towards balance, which is consistent with earlier studies (Larivière et al., 2013; Fox et al., 2016; Topaz and Sen, 2016; Lerback and Hanson, 2017; Nature Neuroscience, 2006; Shen, 2013; Nature, 2012). In addition, we found that women contribute to the system-relevant peer-reviewing chain even less than expected by their numerical underrepresentation, revealing novel and subtler forms of bias than numeric disproportion alone. We reported clear evidence for homophily beyond the expected baseline levels in both genders (Figure 3) using a very large trans-disciplinary data set that allowed us to clarify a previously ambiguous picture (Lloyd, 1990; Gilbert et al., 1994; Borsuk et al., 2009; Buckley et al., 2014; Fox et al., 2016). This network-level inbreeding homophily is driven by a large fraction of male editors, together with only a few highly homophilic female editors.",,,NaN,,Complete 1182,"Peer Feedback",20190403,"We propose the creation of a scientist-driven, journal-agnostic peer review service that facilitates subsequent publication in a journal after creation of an ""Evaluated Preprint."" Review would be conducted by members of an extensive review board and be coordinated by professional editors. We anticipate that the majority of papers that pass through Peer Feedback will not need to be reviewed again, or could be reviewed only for journal suitability. 
We also hope that the system will promote better journal matching with less rejection, since editors will have better information on the submission.",Feedback to authors|Validation of soundness,Quality of review|Speed|Transparency,Preprints,Free-form commenting,Open reports,Author initiates review|Journal integration|Professional editors,Life Sciences,"",NaN,"Journal-agnostic peer review service that produces an “Evaluated Preprint”",Project,https://asapbio.org/peer-feedback,,,"Ron Vale, Tony Hyman, and Jessica Polka",,,"By conducting peer review before journal submission, Peer Feedback aims to make review more constructive by focusing attention on the science in a manuscript, not its suitability for a given journal. It also aims to reduce the repeated peer reviewing that can occur during serial journal submission.",NaN,,No,"","",,"Upon receipt, submissions are subjected to an initial screening process that checks for issues such as plagiarism and for adherence to relevant publishing guidelines. However, the existence of an Evaluated Preprint by itself does not imply quality; readers would have to evaluate the reviews themselves.",NaN,"","0","","",,,NaN,Verified,NA 1185,"HIRMEOS - open peer review experiment",20190404,"As part of the European project HIRMEOS, OpenEdition is proposing an annotation service (with our partner Hypothesis) for SSH open access monographs, on OpenEdition Books. The implementation of this feature is accompanied by a post-publication open peer review experiment which lasts four months, from February to June 2019. 13 books are currently open for the experiment, in agreement with their authors and publishers (https://oep.hypotheses.org/2122). Every reader can annotate or consult annotations. 
We benefit from moderation rights (via the Hypothesis publisher groups feature), and rules of good conduct have been drafted to guide this experiment (https://www.openedition.org/22530).",Feedback to authors|Validation of soundness,Bias in review|Quality of review|Speed|Transparency,Other scholarly outputs,Annotations,Open identities|Open interaction|Open participation|Open reports,Comment moderation|Post-publication review,Humanities|Social science,"",NaN,"Open annotations to explore scholarly communication on SSH books",Trial,https://oep.hypotheses.org/2122,2019-02-07,2019-06-01,"OpenEdition (CNRS, EHESS, Aix-Marseille Université)",,https://www.youtube.com/watch?v=d38bzzQmpow,"The objective is to create a space for scientific conversation around publications. We want to allow readers to share their opinions and their knowledge of the books' subjects directly at the time of reading. We expect participants to write scientific, constructive and well-argued annotations in order to: offer a critique of the ideas that make up the texts, fact-check the books, open new avenues for reflection, add elements that enrich the publications, and interact with each other (reply feature). We also want to offer an innovative service that could be useful for exploring the value of opening up new peer review processes, allowing users from across the humanities and social sciences to develop new uses.",NaN,,,"","",,"",NaN,"","10-100","","",,,NaN,Verified,Active 1254,"PLOS Channels",20190411,"Curated by experts, PLOS Channels highlight the latest research from across the published literature, preprints, commentary and news of interest to a community. The community can nominate content via email or Twitter. Channels are not limited to traditional journal articles; they also include preprints, AV content, news, conferences and grey literature. 
Channels include: Antimicrobial Resistance, Cholera, Complexity, Disease Forecasting & Surveillance, Microbiome, Open Source Toolkit, Responding to Climate Change, Tuberculosis, Veterans Disability & Rehabilitation Research, and Ebola Outbreak.",Curation of interesting work,,Journal accepted manuscripts|Other scholarly outputs|Preprints,,,Post-publication review,All disciplines,"",NaN,"Channels are a community resource to discover and explore content chosen by experts",Project,https://channels.plos.org/,2017-03-06,,"PLOS",,,"The project is the result of feedback from community members who wanted a single location to discover research and news in their field, and from authors of PLOS ONE who were concerned their papers were being ""lost"" in a megajournal. Users reported being overwhelmed by the amount of literature (both peer-reviewed and not) and wanted guidance from trusted and respected leaders in their field as to the most interesting and impactful content, on an easy-to-use and regularly updated platform. User-led design resulted in the decision to be publisher-agnostic: publishers have previously created showcases for their own content, but users reported these as being of limited use given that they showed just a fraction of the relevant material.",NaN,,No,"Channel Editors are provided with a regular content report to review. The content report is generated via PubMed and PLOS search. When submitting to PLOS journals, authors can nominate their submission for inclusion in the Channel once published. Any user can nominate content for the Channel by email or via social media. Direct nominations are flagged to Channel Editors in the report.","Channel Editors are recruited by PLOS.",,"Any content that is judged to be in the Channel's scope can be included. 
Inclusion is entirely at the discretion of Channel Editors, who are volunteers and experts in their field.",0,"","1,000-10,000","We track site visits and interactions; author nominations when submitting to PLOS; impact on ALMs for PLOS content.","",,,NaN,Verified,Active 1262,"SciPost",20190413,"SciPost (https://scipost.org) is a top-quality next-generation Genuine Open Access publication portal managed by professional scientists. Its principles, ideals and implementation can be found at https://scipost.org/about and https://scipost.org/FAQ. SciPost operates on an entirely not-for-profit basis, and charges neither subscription fees nor article processing charges; instead, its activities are financed through a cost-slashing consortial model. Details of the sponsorship scheme and how to join can be found at https://scipost.org/sponsors or by emailing admin@scipost.org.",Curation of interesting work|Feedback to authors|Validation of soundness,Costs|Incentives and recognition for reviewing|Quality of review|Transparency,Journal accepted manuscripts|Other scholarly outputs|Preprints,Free-form commenting|Structured review form,Open participation|Open reports,Comment indexing|Comment moderation|Comment rating (meta-evaluation)|Post-publication review|Pre-publication review|Professional editors|Reviewer or editor initiates review,All disciplines|Physics,"",NaN,"The complete, by-and-for scientists publication portal.",Project,https://scipost.org,2016-04-20,,"SciPost Foundation",,https://www.youtube.com/watch?v=Pgvd7EvehCI,"SciPost was born out of the recognition that the currently available publishing infrastructure does not properly serve the interests of science, and that these are best served by scientists themselves. The initiative aims to reform all aspects related to scientific publishing. In particular, SciPost's mission is to:",NaN,Yes,Yes,"Our peer-witnessed refereeing procedure is described in detail on our author guidelines page. 
https://scipost.org/submissions/author_guidelines","Editors are Fellows of our Editorial Colleges. Reviewers are invited from among the community. Professional scientists are further able to volunteer reports and comments.",Yes,"",0,"We operate using a consortial sponsorship model.","100-1,000","","",,,NaN,Verified,Active 1277,"Open Peer Review, BMC Group",20190417,"In 2001, BioMed Central was the first publisher to openly post named peer reviewer reports alongside published articles as part of a ‘pre-publication history’ for all medical journals in the BMC series. The BMC Group now publishes over 500 journals and of those, 70 operate a fully open peer review process. In the open peer review process practiced by BMC, authors know who the reviewers are for all manuscripts during the peer review process, and if the manuscript is accepted for publication the named reviewer reports accompany the published article. Open peer review facilitates accountability and recognition, and may help in training early career researchers about the peer review process.",Feedback to authors|Validation of soundness,Bias in review|Incentives and recognition for reviewing|Quality of review|Transparency,Journal accepted manuscripts|Other scholarly outputs,Free-form commenting|Structured review form,Open identities|Open reports,Pre-publication review|Professional editors|Reviewer or editor initiates review,,"Editorial decision outcomes|Author opt-in rate or submission rate>Reviewer invitation acceptance rate|Author opt-in rate or submission rate>Reviewer recommendation outcomes – i.e. accept, accept with revisions, reject, etc. (if applicable)",NaN,"Learning from 18 years of open peer review",Project,https://www.biomedcentral.com/about/advancing-peer-review,2001-03-01,,"BMC Group, Springer Nature",,,"Peer review is central to the publishing process and has a fundamental role to play in maintaining the integrity of the published literature and advancing discovery. 
At BMC, we have always supported innovation in peer review. The intention with open peer review is to make the reviewer (and the Editor) more accountable for peer review, and to provide recognition of the work that peer reviewers do. There have been relatively few large-scale studies looking at the impact of open peer review, and so data from the BMC Group is useful for learning about the effectiveness and benefits of open peer review.",NaN,,,"","",,"",NaN,"","10,000+","","A study in BMJ Open in 2015 found that the quality of peer review reports was slightly higher in BMC Infectious Diseases (open peer review) compared with BMC Microbiology (single-blind peer review). However, no effect was found in the Journal of Inflammation when it operated open vs single-blind review. This study also showed that reviewer recommendations were similar in BMC Infectious Diseases and BMC Microbiology, suggesting no difference between open and single-blind peer review. However, in the Journal of Inflammation, it was found that reviewers were more likely to recommend acceptance under open peer review compared with single-blind review.",https://bmjopen.bmj.com/content/5/9/e008707,,NaN,Verified,Active