Published December 30, 2012 | Version v1

Knowledge Freedom in computational science: a two stage peer-review process with KF Eligibility Access Review

  • 1. Maieutike Research Initiative

Description

The peer-review process applied by Notes on Transdisciplinary Modelling for Environment (NTMe) is explained below. Authors and readers are encouraged to understand not only the process, but also its rationale, which is closely connected with the peculiar challenges posed by the scientific problems in the scope of the journal (computational-science modelling under uncertainty, in broad and heterogeneous contexts).

 

\(\mathsf{\Large\text{The rationale behind the peer-review process}}\)

Computational research problems for more than one discipline. Research specific to a particular disciplinary domain is typically authored, and then studied, by experts in that domain. Several conventions, assumptions, fundamental protocols and methodological practices are shared by domain experts as common ground. Given that proficiency in these aspects is a precondition for domain experts, this kind of knowledge is often implicit and not well communicated within domain-specific literature. "Typical" data preprocessing, model settings and quality-assessment assumptions (and their "default" simplifications and shortcuts) are often minimally reported or omitted in this literature, because the domain-specific community of researchers and practitioners knows them so well that they may resemble a sort of acquired conditioned response. However, when a broader research effort connects more than one disciplinary domain, implicit unexpressed knowledge may lead to ambiguity, misuse of results, and avoidable errors - sometimes remarkably difficult to detect. The more numerous and diverse the domains, the higher this risk may be.

Useful cross-domain building blocks. "Transdisciplinary" research may risk being misinterpreted as merely the final summary of results collected from autonomous disciplinary silos run in parallel. However, a poor ability to communicate computational methods and data - and their limitations - between multiple disciplinary domains would jeopardise the collective (cross-domain) ability to distill truly integrated computational science. Computational science for transdisciplinary problems requires a pragmatic, engineered, but reasonably simple and modular approach to break and connect disciplinary silos - when appropriate for the problems investigated. Publishing useful cross-domain building blocks of this approach is the aim of the peer-review process in NTMe.

Connecting disciplines, space and time scales under uncertainty: the need for a shared semantics. Wide-scale transdisciplinary modelling (WSTM) relies on methods proper to computational science for connecting multiple scientific disciplines, and shedding light on broad, complex problems. Several of the most pressing problems we face as a human society are only partially known in their chain of consequences, and hopelessly difficult to address by a single discipline and a narrow perspective. Some of these problems attract the specific interest of particularly exposed regions, but their mechanism is more general, with essential knowledge often lying in other spatial areas - or even other time periods - hence calling for a wider perspective. Multiple spatial and temporal scales, with dissimilar data resolution, are frequent in WSTM problems, since they are often dictated by the complexity of reality and the available data. A wide-scale extent typically implies uneven quality and availability of data, and many sources of uncertainty - which increase when the assumptions, methods and semantics of different disciplines or research institutions do not easily merge. A truly transdisciplinary communication of this essential semantics requires the freedom to communicate scientific knowledge, accurately and transparently, across disciplinary and corporate barriers.

Real knowledge sharing in computational science. As a consequence, wide-scale transdisciplinary modelling demands a focus on reproducible research and real scientific knowledge freedom. Data and software freedom are essential aspects of knowledge freedom in computational science. Therefore, published articles should ideally also provide readers with the data and source code of the described mathematical modelling. To maximise transparency, replicability, reproducibility and reusability, published data should be made available as open data, while source code should be made available as free software. Communicating semantics even to non-experts in a given domain requires that mathematical assumptions, otherwise obvious within a specific discipline, are duly annotated in a portable and concise way. Accordingly, brief but semantically clear documentation of data, methods and software is a key precondition. Here, a two-stage peer-review process is described in which scientific knowledge freedom is considered through a dedicated Eligibility Access Review. This new peer-review process is applied by Notes on Transdisciplinary Modelling for Environment with a focus on WSTM for environment.

 

\(\mathsf{\Large\text{The peer-review process}}\)

A two-stage peer review process to avoid single-use, disposable computational science. The two-stage peer-review process requires discussion papers to be published so as to receive feedback from the scientific community before their possible finalisation. Initial manuscript submission is subject to a soundness review, which also ensures that eligibility criteria are fulfilled so as to support scientific knowledge freedom. Although this concept is multifaceted, a few dimensions can be emphasised that broadly apply in computational science and engineering (CSE). Among the many possible eligibility criteria in CSE, at least the following should be highlighted:

  • free software to be published so that it is persistently available;
  • appropriate licensing and source-code review to have been carried out (portable modularisation, with the semantics of mathematical data-transformation methods clearly annotated);
  • free data to be published so that they are persistently available (with the semantics of quantities clearly annotated);
  • a minimal share of free-access core references to be selected, so that scientists and research organisations are not discriminated against on the basis of their funding availability when they try to access the core literature cited in the manuscript.
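The annotation of semantics required by these criteria can be as lightweight as pairing each quantity with its meaning and unit, both in the source code and in portable metadata published alongside the free data. A minimal sketch in Python (all names, quantities and units are hypothetical, for illustration only):

```python
# Minimal sketch of a data-transformation module with annotated semantics.
# All names and quantities are hypothetical examples, not NTMe requirements.

def runoff_coefficient(precipitation_mm, runoff_mm):
    """Ratio between cumulative runoff and cumulative precipitation.

    Semantics of quantities (annotated so that non-experts can reuse them):
      precipitation_mm : cumulative precipitation over the period, millimetres [mm]
      runoff_mm        : cumulative surface runoff over the period, millimetres [mm]
      returns          : dimensionless ratio, clipped to [0, 1]
    """
    if precipitation_mm <= 0:
        raise ValueError("precipitation must be positive")
    return min(max(runoff_mm / precipitation_mm, 0.0), 1.0)

# Portable, machine-readable metadata describing the module's inputs and
# output, suitable for publication next to the free data and source code.
SEMANTICS = {
    "inputs": {
        "precipitation_mm": {"unit": "mm", "meaning": "cumulative precipitation"},
        "runoff_mm": {"unit": "mm", "meaning": "cumulative surface runoff"},
    },
    "output": {"unit": "1", "meaning": "runoff coefficient, clipped to [0, 1]"},
}
```

Keeping the semantic annotation both human-readable (docstring) and machine-readable (metadata dictionary) is one way to make a data-transformation module portable across disciplinary domains.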

Before acceptance for discussion, a manuscript (and its potential data, parameters and software) may be revised, without exposing immature versions to the public, until it is able to fulfil the eligibility criteria. These iterations between author(s) and editors/reviewers remain confidential, and a potential manuscript rejection does not preclude resubmission. Acceptance of the manuscript to the discussion stage is followed by the permanent publication of the accepted version. This does not preclude - and actually encourages - further revised versions to be resubmitted. Any new revision is subject to the eligibility access review, which takes into account the submission history of the manuscript and the already accepted, publicly available older versions.

Not a static publication. The discussion stage of the peer-review process allows short comments to be submitted by referees and the scientific community, while authors are encouraged to interact with pending comments by providing their responses. During this stage, the paper accepted for discussion is already citable. Depending on the specific goals of each research problem, the discussion may remain open for a short time interval or for a noticeably longer period (even indefinitely, if appropriate: for example, whenever authors consider it important to publish cumulative milestone versions). There is no a priori limit to the duration of this first stage, nor to the number of intermediate public revisions (which are optional), for a paper accepted for discussion. The second stage of peer review concludes the discussion stage with the submission of a revised manuscript, with final review and corrections. Fulfilment of the eligibility criteria is required over all the publication stages.
The published materials might be updated from time to time (at the authors' discretion) and - after peer review of the changes - published so that their evolution is clear and available to others. Although preferred and encouraged in the discussion stage, updates are possible even after the final stage. Revisions offering noticeably different content, compared with the previous versions, might sometimes be recommended by editors/reviewers as deserving the status of a new publication, so as not to confuse readers. This implies that the full two-stage review process has to be applied to the new publication (as an independent publication), while the previously accepted publication is considered the final permanent version. This recalls an analogy with the typical evolution of free software: a software package is frequently subject to future improvements and corresponding new versions, and if a new version differs too much from the original package, it may become a new independent package.

Scientific opinions, perspectives, and overviews. Manuscripts focusing on computational-science methods, data, parameters, and software are expected to contribute practical components of scientific knowledge which can be adapted, modified, or generally integrated within the future body of knowledge - potentially in unexpected ways. Therefore, the aforementioned two-stage review is, overall, meant to ease the future creation of derivative works. On the other hand, manuscripts offering expert opinions on these topics do not contribute "practical" components to be directly modified. Instead, they contribute organised ideas to support the evolution of scientific knowledge. Therefore, the peer-review process for this type of manuscript focuses on:

  • the broad understandability and potential interest of the expert opinions, and
  • the factual correctness of the objective elements included in the opinion (with adequate bibliographic, tabular, visual support where appropriate). 

Preregistration, protocols and methods. NTMe accepts a third type of manuscript, focusing on anticipated results (hence not yet computed) that are expected to follow from a modelling procedure transforming input data and parameters into the desired output. As data and software are only anticipated in this type of manuscript, the peer-review process focuses on:

  • the broad understandability and potential interest of the proposed protocol or methodology,
  • the correctness and clarity of the mathematical formulation of the proposed computational-modelling steps (data-transformation modules),
  • the discussion of anticipated results, pitfalls, sources of uncertainty and their proper management, and of bifurcations of the methodology depending on the potential variants that a user may wish to apply (depending on the available data and their quality, and on the desired specialisation of the proposed general method), and
  • the adequacy of bibliographic, tabular, visual supports where appropriate. 

Abstract, and plain language summary. Each article includes a technical abstract. Although the published content might deal with domain-specific aspects of computational/environmental science, its focused analysis is expected to support wider research in a transdisciplinary context. Therefore, the abstract is expected to be accessible beyond the domain-specific research community. In addition to a technical abstract, each article, whether in the discussion or in the final stage, needs a plain language summary so that non-experts and readers with a basic scientific and technical literacy can understand its main contributions. The aim of the plain language summary is to support educational dissemination. The summary is not a simpler abstract, and typically offers a longer, more comprehensive description. It is prepared with editorial support to authors, so as to respect a general structure (unless otherwise agreed):

  • first, the general context of the work is presented (the general topics in which the work is situated);
  • second, the general open questions are introduced to which the work aims to contribute;
  • third, the specific way is described in which the work contributes to progress on the general open questions;
  • fourth, the implications, new opportunities or suggested lines of research, and potential limitations and unanswered questions are summarised.

Constructive peer review. The ultimate aim of the peer-review process is to support authors, and future readers, in sharing useful, durable, and understandable notes of transdisciplinary knowledge. Given the very nature of the topics in NTMe, a cogent peer review spanning a multiplicity of fields of expertise is essential. Rarely is a single reviewer able to cover all the aspects of transdisciplinary computational science. Even so, different opinions by different reviewers may emerge on specific points. Therefore, a constructive approach is promoted during each step of the peer-review process, so that authors are guided through the peer review, and a compendium of recommendations may be presented in a consistent way.

 

© 2012-2020 Daniele de Rigo. This work is licensed under a Creative Commons Attribution No Derivatives 4.0 International license. Please cite as:
de Rigo, D., 2012.
Knowledge Freedom in computational science: a two stage peer-review process with KF eligibility access review. Zenodo, CERN. https://doi.org/10.5281/zenodo.7578

 

Files

NTMe_TwoStage_PeerReview.png
