Open and Reproducible Research Glossary

The Framework for Open and Reproducible Research Training (FORRT) has launched a new glossary. You can find a paywalled paper introducing the glossary here.

I did not find the glossary easy to skim through, so I decided to download the glossary from GitHub and make my own table of the 261 entries. The tabulation step itself is simple; roughly, assuming the entries have already been parsed from the repository files (the two shown below are truncated placeholders), it looks like this:
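```python
# A rough sketch of the tabulation step: build a two-column table from
# (title, definition) pairs. The two entries are truncated placeholders;
# fetching and parsing the glossary files from GitHub is omitted here.
import pandas as pd

entries = [
    {"Title": "Abstract Bias", "Definition": "The tendency to report only ..."},
    {"Title": "Academic Impact", "Definition": "The contribution that a ..."},
]
table = pd.DataFrame(entries, columns=["Title", "Definition"])
print(table)
```

Here is the resulting table: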

Title Definition
Abstract Bias The tendency to report only significant results in the abstract, while reporting non-significant results within the main body of the manuscript (not reporting non-significant results altogether would constitute selective reporting). The consequence of abstract bias is that studies reporting non-significant results may not be captured with standard meta-analytic search procedures (which rely on information in the title, abstract and keywords), thus biasing the results of meta-analyses.
Academic Impact The contribution that a research output (e.g., published manuscript) makes in shifting understanding and advancing scientific theory, method, and application, across and within disciplines. Impact can also refer to the degree to which an output or research programme influences change outside of academia, e.g. societal and economic impact (cf. ESRC: https://esrc.ukri.org/research/impact-toolkit/what-is-impact/).
Accessibility Accessibility refers to the ease of access and re-use of materials (e.g., data, code, outputs, publications) for academic purposes, particularly the ease of access afforded to people with a chronic illness, disability and/or neurodivergence. These groups face numerous financial, legal and/or technical barriers within research, including (but not limited to) the acquisition of appropriately formatted materials and physical access to spaces. Accessibility also encompasses structural concerns about diversity, equity, inclusion, and representation (Pownall et al., 2021). Interfaces, events and spaces should be designed with accessibility in mind to ensure full participation, such as by ensuring that web-based images are colorblind friendly and have alternative text, or by using live captions at events (Brown et al., 2018; Pollet & Bond, 2021; World Wide Web Consortium, 2021).
Ad hominem bias From Latin meaning “to the person”; Judgment of an argument or piece of work influenced by the characteristics of the person who forwarded it, not the characteristics of the argument itself. Ad hominem bias can be negative, as when work from a competitor or target of personal animosity is viewed more critically than the quality of the work merits, or positive, as when work from a friend benefits from overly favorable evaluation.
Adversarial collaboration A collaboration where two or more researchers with opposing or contradictory theoretical views —and likely diverging predictions about study results— work together on one project. The aim is to minimise biases and methodological weaknesses as well as to establish a shared base of facts for which competing theories must account.
Adversarial (collaborative) commentary A commentary in which the original authors of a work and critics of said work collaborate to draft a consensus statement. The aim is to draft a commentary that is free of ad hominem attacks and communicates a common understanding or at least identifies where both parties agree and disagree. In doing so, it provides a clear take-home message and path forward, rather than leaving the reader to decide between opposing views conveyed in separate commentaries.
Affiliation bias This bias occurs when one’s opinions or judgements about the quality of research are influenced by the affiliation of the author(s). When publishing manuscripts, a potential example of an affiliation bias could be when editors prefer to publish work from prestigious institutions (Tvina et al., 2019).
Aleatoric uncertainty Variability in outcomes due to unknowable or inherently random factors. The stochastic component of outcome uncertainty that cannot be reduced through additional sources of information. For example, when flipping a coin, uncertainty about whether it will land on heads or tails.
Altmetrics Departing from traditional citation measures, altmetrics (short for “alternative metrics”) provide an assessment of the attention and broader impact of research work based on diverse sources such as social media (e.g. Twitter), digital news media, number of preprint downloads, etc. Altmetrics have been criticized in that sensational claims usually receive more attention than serious research (Ali, 2021).
AMNESIA AMNESIA is a free anonymization tool to remove identifying information from data. After uploading a dataset that contains personal data, the original dataset is transformed by the tool, resulting in a dataset that is anonymized regarding personal and sensitive data.
Analytic Flexibility Analytic flexibility is a type of researcher degrees of freedom (Simmons, Nelson, & Simonsohn, 2011) that refers specifically to the large number of choices made during data preprocessing and statistical analysis. “[T]he range of analysis outcomes across different acceptable analysis methods” (Carp, 2012, p. 1). Analytic flexibility can be problematic, as this variability in analytic strategies can translate into variability in research outcomes, particularly when several strategies are applied, but not transparently reported (Masur, 2021).
Anonymity Anonymising data refers to removing, generalising, aggregating or distorting any information which may potentially identify participants, peer-reviewers, and authors, among others. Data should be anonymised so that participants are not personally identifiable. The most basic level of anonymisation is to replace participants’ names with pseudonyms (fake names) and remove references to specific places. Anonymity is particularly important for open data, and data may not be made open due to anonymity concerns. Anonymity and open data have been discussed within qualitative research, which often focuses on personal experiences and opinions, and in quantitative research that includes participants from clinical populations.
ARRIVE Guidelines The ARRIVE guidelines (Animal Research: Reporting of In Vivo Experiments) are a checklist-based set of reporting guidelines developed to improve reporting standards, and enhance replicability, within living (i.e. in vivo) animal research. The second generation ARRIVE guidelines, ARRIVE 2.0, were released in 2020. In these new guidelines, the clarity has been improved, items have been prioritised and new information has been added, with an accompanying “Explanation and Elaboration” document providing a rationale for each item and a recommended set to add context to the study being described.
Article Processing Charge (APC) An article (sometimes author) processing charge (APC) is a fee charged to authors by a publisher in exchange for publishing and hosting an open access article. APCs are often intended to compensate for a potential loss of revenue the journal may experience when moving from traditional publication models, such as subscription services or pay-per-view, to open access. APCs vary widely: from about US$300 at some journals, to around US$1,000 (Advances in Methods and Practices in Psychological Science), to over US$10,000 (Nature). While some publishers offer waivers for researchers from certain regions of the world or who lack funds, some APCs have been criticized for being disproportionate compared to actual processing and hosting costs (Grossmann & Brembs, 2021) and for creating possible inequities with regard to which scientists can afford to make their works freely available (Smith et al., 2020).
Authorship Authorship assigns credit for research outputs (e.g. manuscripts, data, and software) and accountability for content (McNutt et al. 2018; Patience et al. 2019). Conventions differ across disciplines, cultures, and even research groups, in their expectations of what efforts earn authorship, what the order of authorship signifies (if anything), how much accountability for the research the corresponding author assumes, and the extent to which authors are accountable for aspects of the work that they did not personally conduct.
Auxiliary Hypothesis All theories contain assumptions about the nature of constructs and how they can be measured. However, predictions are not derived from theories alone; additional assumptions, sometimes drawn from other premises, are needed to deduce a prediction and link it to observable data. These additional assumptions are auxiliary hypotheses, and they are sometimes invoked to explain why a replication attempt has failed.
Badges (Open Science) Badges are symbols that editorial teams add to published manuscripts to acknowledge open science practices and act as incentives for researchers to share data, materials, or to embed study preregistration. As clearly-visible symbols, they are intended to signal to the reader that content has met the standard of open research required to receive the badge (typically from that journal). Different badges may be assigned for different practices, such as research having been made available and accessible in a persistent location (“open material badge” and “open data badge”), or study preregistration (“preregistration badge”).
Bayes Factor A continuous statistical measure for model selection used in Bayesian inference, describing the relative evidence for one model over another, regardless of whether the models are correct. Bayes factors (BF) range from 0 to infinity, indicating the relative strength of the evidence, and where 1 is a neutral point of no evidence. In contrast to p-values, Bayes factors allow for 3 types of conclusions: a) evidence for the alternative hypothesis, b) evidence for the null hypothesis, and c) no sufficient evidence for either. Thus, BF are typically expressed as BF10 for evidence regarding the alternative compared to the null hypothesis, and as BF01 for evidence regarding the null compared to the alternative hypothesis.
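In symbols, the Bayes factor for the alternative over the null is the ratio of the probability of the data under each hypothesis:

```latex
\mathrm{BF}_{10} = \frac{p(D \mid H_1)}{p(D \mid H_0)}, \qquad \mathrm{BF}_{01} = \frac{1}{\mathrm{BF}_{10}}
```

So, for example, BF10 = 5 means the data are five times more likely under the alternative hypothesis than under the null.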
Bayesian Inference A method of statistical inference based upon Bayes’ theorem, which makes use of epistemological (un)certainty using the mathematical language of probability. Bayesian inference is based on allocating (and reallocating, based on newly-observed data or evidence) credibility across possibilities. Two existing approaches to Bayesian inference include “Bayes factors” (BF) and Bayesian parameter estimation.
Bayesian Parameter Estimation A Bayesian approach to estimating parameter values by updating a prior belief about model parameters (i.e., prior distribution) with new evidence (i.e., observed data) via a likelihood function, resulting in a posterior distribution. The posterior distribution may be summarised in a number of ways including: point estimates (mean/mode/median of a posterior probability distribution), intervals of defined boundaries, and intervals of defined mass (typically referred to as a credible interval). In turn, a posterior distribution may become a prior distribution in a subsequent estimation. A posterior distribution can also be sampled using Monte-Carlo Markov Chain methods which can be used to determine complex model uncertainties (e.g. Foreman-Mackey et al., 2013).
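The updating step is Bayes’ theorem itself: the posterior is proportional to the likelihood times the prior.

```latex
\underbrace{p(\theta \mid D)}_{\text{posterior}}
  \;=\;
  \frac{\overbrace{p(D \mid \theta)}^{\text{likelihood}} \times \overbrace{p(\theta)}^{\text{prior}}}
       {\underbrace{p(D)}_{\text{evidence}}}
```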
BIDS data structure The Brain Imaging Data Structure (BIDS) describes a simple and easy-to-adopt way of organizing neuroimaging, electrophysiological, and behavioral data (i.e., file formats, folder structures). BIDS is a community effort developed by the community for the community and was inspired by the format used internally by the OpenfMRI repository known as OpenNeuro. Having initially been developed for fMRI data, the BIDS data structure has been extended for many other measures, such as EEG (Pernet et al., 2019).
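As a rough illustration (the subject and task names below are made up; the authoritative rules are in the BIDS specification at https://bids.neuroimaging.io), a minimal BIDS-style dataset is laid out like this:

```python
# Canonical top-level files and one subject's folders in a minimal
# BIDS-style dataset; subject and task names are illustrative only.
layout = [
    "dataset_description.json",                  # required dataset metadata
    "participants.tsv",                          # one row per participant
    "sub-01/anat/sub-01_T1w.nii.gz",             # anatomical scan
    "sub-01/func/sub-01_task-rest_bold.nii.gz",  # functional (BOLD) run
    "sub-01/func/sub-01_task-rest_bold.json",    # sidecar acquisition metadata
]
for path in layout:
    print(path)
```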
BIZARRE This acronym refers to Barren, Institutional, Zoo, and other Rare Rearing Environments (BIZARRE). Most chimpanzee research is conducted on this specific type of sample, which limits the generalizability of a large number of research findings to the wider chimpanzee population. BIZARRE samples have been argued to be treated as reflecting a universal concept of what a chimpanzee is (see also WEIRD, which has been argued to play the analogous role for humans).
Bottom-up approach (to Open Scholarship) Within academic culture, an approach focusing on the intrinsic interest of academics to improve the quality of research and research culture, for instance by making it supportive, collaborative, creative and inclusive. Usually indicates leadership from early-career researchers acting as the changemakers driving shifts and change in scientific methodology through enthusiasm and innovation, compared to a “top-down” approach initiated by more senior researchers. “Bottom-up approaches take into account the specific local circumstances of the case itself, often using empirical data, lived experience, personal accounts, and circumstances as the starting point for developing policy solutions.”
Bracketing Interviews Bracketing interviews are commonly used within qualitative approaches. During these interviews researchers explore their personal subjectivities and assumptions surrounding their ongoing research. This allows researchers to be aware of their own interests and helps them to become both more reflective and critical about their research, considering how their own experiences may impact the research process. Bracketing interviews can also be subject to qualitative analysis.
Bropenscience A tongue-in-cheek expression intended to raise awareness of the lack of diverse voices in open science (Bahlai, Bartlett, Burgio et al. 2019; Onie, 2020), in addition to the presence of behavior and communication styles that can be toxic or exclusionary. Importantly, not all bros are men; rather, they are individuals who demonstrate rigid thinking, lack self-awareness, and tend towards hostility, unkindness, and exclusion (Pownall et al., 2021; Whitaker & Guest, 2020). They generally belong to dominant groups who benefit from structural privileges. To address #bropenscience, researchers should examine and address structural inequalities within academic systems and institutions.
CARKing Critiquing After the Results are Known (CARKing) refers to presenting a criticism of a design as one that you would have made in advance of the results being known. It usually forms a reaction or criticism to unwelcome or unfavourable results, whether the critic is conscious of this fact or not.
Center for Open Science (COS) A non-profit technology organization based in Charlottesville, Virginia with the mission “to increase openness, integrity, and reproducibility of research.” Among other resources, the COS hosts the Open Science Framework (OSF) and the Open Scholarship Knowledge Base.
Citation bias A biased selection of papers or authors cited and included in the references section. When citation bias is present, it is often in a way which would benefit the author(s) or reviewers, over-represents statistically significant studies, or reflects pervasive gender or racial biases (Brooks, 1985; Jannot et al., 2013; Zurn et al., 2020). One proposed solution is the use of Citation Diversity Statements, in which authors reflect on their citation practices and identify biases which may have emerged (Zurn et al., 2020).
Citation Diversity Statement A current effort trying to increase awareness and mitigate the citation bias in relation to gender and race is the Citation Diversity Statement, a short paragraph where “the authors consider their own bias and quantify the equitability of their reference lists. It states: (i) the importance of citation diversity, (ii) the percentage breakdown (or other diversity indicators) of citations in the paper, (iii) the method by which percentages were assessed and its limitations, and (iv) a commitment to improving equitable practices in science” (Zurn et al., 2020, p. 669).
Citizen Science Citizen science refers to projects that actively involve the general public in the scientific endeavour, with the goal of democratizing science. Citizen scientists can be involved in all stages of research, acting as collaborators, contributors or project leaders. An example of a major citizen science project involved individuals identifying astronomical bodies (Lintott, 2008).
CKAN The Comprehensive Knowledge Archive Network (CKAN) is an open-source data platform and free software that aims to provide tools to streamline publishing and data sharing. CKAN supports governments, research institutions and other organizations in managing and publishing large amounts of data.
Co-production An approach to research where stakeholders who are not traditionally involved in the research process are empowered to collaborate, either at the start of the project or throughout the research lifecycle. For example, co-produced health research may involve health professionals and patients, while co-produced education research may involve teaching staff and pupils/students. This is motivated by principles such as respecting and valuing the experiences of non-researchers, addressing power dynamics, and building mutually beneficial relationships.
COAR Community Framework for Good Practices in Repositories A framework which identifies best practices for scientific repositories and evaluation criteria for these practices. Its flexible and multidimensional approach means that it can be applied to different types of repositories, including those which host publications or data, across geographical and thematic contexts.
Code review The process of checking another researcher’s programming (specifically, computer source code), including but not limited to statistical code and data modelling. This process is designed to detect and resolve mistakes, thereby improving code quality. In practice, a modern peer review process may take place via a hosted online repository such as GitHub, GitLab or SourceForge. Related terms: Reproducibility; Version control
Codebook A codebook is a high-level summary that describes the contents, structure, nature and layout of a data set. A well-documented codebook contains information intended to be complete and self-explanatory for each variable in a data file, such as the wording and coding of the item, and the underlying construct. It provides transparency to researchers who may be unfamiliar with the data but wish to reproduce analyses or reuse the data.
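A machine-readable sketch of one hypothetical codebook entry (the variable name, wording and codes are all illustrative):

```python
# One hypothetical codebook entry: enough detail that a researcher
# unfamiliar with the data set could understand and reuse the variable.
codebook_entry = {
    "variable": "lifesat_1",
    "wording": "All things considered, how satisfied are you with your life?",
    "construct": "Life satisfaction",
    "values": {1: "Very dissatisfied", 2: "Dissatisfied", 3: "Neutral",
               4: "Satisfied", 5: "Very satisfied"},
    "missing_codes": [-99],  # code recorded for item non-response
}
print(codebook_entry["wording"])
```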
Collaborative Replication and Education Project (CREP) The Collaborative Replication and Education Project (CREP) is an initiative designed to organize and structure replication efforts of highly-cited empirical studies in psychology to satisfy the dual needs for more high-quality direct replications and more training in empirical research techniques for psychology students. CREP aims to address the need for replications of highly cited studies, and to provide training, support and professional growth opportunities for academics completing replication projects.
Committee on Best Practices in Data Analysis and Sharing (COBIDAS) The Organization for Human Brain Mapping (OHBM) neuroimaging community has developed a guideline for best practices in neuroimaging data acquisition, analysis, reporting, and sharing of both data and analysis code. It contains eight elements that should be included when writing up or submitting a manuscript, in order to improve the reporting of methods and the resulting neuroimages and to optimize transparency and reproducibility.
Communality The common ownership of scientific results and methods and the consequent imperative to share both freely. Communality is based on the fact that every scientific finding is seen as a product of the effort of a number of agents. This norm is followed when scientists openly share their new findings with colleagues.
Community Projects Collaborative projects that involve researchers from different career levels, disciplines, institutions or countries. Projects may have different goals including peer support and learning, conducting research, teaching and education. They can be short-term (e.g., conference events or hackathons) or long-term (e.g., journal clubs or consortium-led research projects). Collaborative culture and community building are key to achieving project goals.
Compendium A collection of files prepared by a researcher to support a report or publication that include the data, metadata, programming code, software dependencies, licenses, and other instructions necessary for another researcher to independently reproduce the findings presented in the report or publication.
Computational reproducibility Ability to recreate the same results as the original study (including tables, figures, and quantitative findings), using the same input data, computational methods, and conditions of analysis. The availability of code and data facilitates computational reproducibility, as does preparation of these materials (annotating data, delineating software versions used, sharing computational environments, etc.). Ideally, computational reproducibility should be achievable by a second researcher (or the original researcher, at a future time), using only a set of files and written instructions. Also referred to as analytic reproducibility (LeBel et al., 2018).
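A minimal sketch of two habits that support this, assuming numpy is installed: fixing random seeds and recording the software versions used.

```python
import random
import sys

import numpy as np

random.seed(2024)                   # fix Python's built-in RNG
rng = np.random.default_rng(2024)   # fix numpy's RNG

# Record the environment so a second researcher can match it.
print("Python:", sys.version.split()[0])
print("NumPy:", np.__version__)
print("Sample draw:", rng.normal(size=3))  # identical on every run
```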
Conceptual replication A replication attempt whereby the primary effect of interest is the same but tested in a different sample and captured in a different way to that originally reported (i.e., using different operationalisations, data processing and statistical approaches and/or different constructs; LeBel et al., 2018). The purpose of a conceptual replication is often to explore what conditions limit the extent to which an effect can be observed and generalised (e.g., only within certain contexts, with certain samples, using certain measurement approaches) towards evaluating and advancing theory (Hüffmeier et al., 2016).
Confirmation bias The tendency to seek out, interpret, favor and recall information in a way that supports one’s prior values, beliefs, expectations, or hypothesis.
Confirmatory analyses Part of the confirmatory-exploratory distinction (Wagenmakers et al., 2012), where confirmatory analyses refer to analyses that were set a priori and test existent hypotheses. The lack of this distinction within published research findings has been suggested to explain replicability issues and is suggested to be overcome through study preregistration which clearly distinguishes confirmatory from exploratory analyses. Other researchers have questioned these terms and recommended a replacement with ‘discovery-oriented’ and ‘theory-testing research’ (Oberauer & Lewandowsky, 2019; see also Szollosi & Donkin, 2019).
Conflict of interest A conflict of interest (COI, also ‘competing interest’) is a financial or non-financial relationship, activity or other interest that might compromise objectivity or professional judgement on the part of an author, reviewer, editor, or editorial staff. The Principles of Transparency and Best Practice in Scholarly Publishing by the Committee on Publication Ethics (COPE), the Directory of Open Access Journals (DOAJ), the Open Access Scholarly Publishers Association (OASPA), and the World Association of Medical Editors (WAME) state that journals should have policies on publication ethics, including policies on COI (DOAJ, 2018). COIs should be made transparent so that readers can properly evaluate research and assess for potential or actual bias(es). Outside publishing, academic presenters, panel members and educators should also declare COIs. Purposeful failure to disclose a COI may be considered a form of misconduct.
Consortium authorship Only the name of the consortium or organization appears in the author column, and the individuals’ names do not appear in the literature: For example, ‘FORRT’ as an author. This can be seen in the products of collaborative projects with a very large number of collaborators and/or contributors. Depending on the journal policy, individual researchers may be recorded as one of the authors of the product in literature databases such as ORCID and Scopus. Consortium authorship can also be termed group, corporate, organisation/organization or collective authorship (e.g. https://www.bmj.com/about-bmj/resources-authors/article-submission/authorship-contributorship), or collaborative authorship (e.g. https://support.jmir.org/hc/en-us/articles/115001449591-What-is-a-group-author-collaborative-author-and-does-it-need-an-ORCID)
Constraints on Generality (COG) A statement that explicitly identifies and justifies the target population, and conditions, for the reported findings. Researchers should be explicit about potential boundary conditions for their generalisations (Simons et al., 2017). Researchers should provide detailed descriptions of the sampled population and/or contextual factors that might have affected the results such that future replication attempts can take these factors into account (Brandt et al., 2014). Conditions not explicitly listed are assumed not to have theoretical relevance to the replicability of the effect.
Construct validity When used in the context of measurement and testing, construct validity refers to the degree to which a test measures what it claims to be measuring. In fields that study hypothetical unobservable entities, construct validation is essentially theory testing, because it involves determining whether an objective measure (a questionnaire, lab task, etc.) is a valid representation of a hypothetical construct (i.e., conforms to a theory).
Content validity The degree to which a measurement includes all aspects of the concept that the researcher claims to measure; “A qualitative type of validity where the domain of the concept is made clear and the analyst judges whether the measures fully represent the domain” (Bollen, 1989, p.185). It is a component of construct validity and can be established using both quantitative and qualitative methods, often involving expert assessment.
Contribution A formal addition or activity in a research context. Contribution and contributor statements, including acknowledgments sections in journal articles, are attached to research products to better classify and recognize the variety of labor beyond “authorship” that any intellectual pursuit requires. Contribution is an evolving “source of data for understanding the relationship between authorship and knowledge production.” (Lariviere et al., p.430). In open source software development, a contribution may count as changes committed onto a project’s software repository following a peer-review (known technically as a pull request). An example of an open-source project accepting contributions is NumPy (Harris et al., 2020).
Corrigendum A corrigendum (pl. corrigenda, Latin: ‘to correct’) documents one or multiple errors within a published work that do not alter the central claim or conclusions and thus does not rise to the standard of requiring a retraction of the work. Corrigenda are typically available alongside the original work to aid transparency. Some publishers refer to this document as an erratum (pl. errata, Latin: ‘error’), while others draw a distinction between the two (corrigenda as author-errors and errata as publisher-errors).
Creative Commons (CC) license A set of free and easy-to-use copyright licences that define the rights of the authors and users of open data and materials in a standardized way. CC licenses enable authors or creators to share copyright-law-protected work with the public and come in different varieties with more or fewer clauses. For example, the Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license allows you to share and adapt the material, under the conditions that you give credit to the original creators, indicate if changes were made, and share under the same license as the original; you may not use the material for commercial purposes.
Creative destruction approach to replication Replication efforts should seek not just to support or question the original findings, but also to replace them with revised, stronger theories with greater explanatory power. This approach therefore involves ‘pruning’ existing theories, comparing all the alternative theories, and making replication efforts more generative and engaged in theory-building (Tierney et al. 2020, 2021).
Credibility revolution The problems and the solutions resulting from a growing distrust in scientific findings, following concerns about the credibility of scientific claims (e.g., low replicability). The term has been proposed as a more positive alternative to the term replicability crisis, and includes the many solutions to improve the credibility of research, such as preregistration, transparency, and replication.
CRediT The Contributor Roles Taxonomy (CRediT; https://casrai.org/credit/) is a high-level taxonomy used to indicate the roles typically adopted by contributors to scientific scholarly output. There are currently 14 roles that describe each contributor’s specific contribution to the scholarly output. They can be assigned multiple times to different authors and one author can also be assigned multiple roles. CRediT includes the following roles: Conceptualization, Data curation, Formal Analysis, Funding acquisition, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing. A description of the different roles can be found in the work of Brand et al., (2015).
Criterion validity The degree to which a measure corresponds to other valid measures of the same concept. Criterion validity is usually established by calculating regression coefficients or bivariate correlations estimating the direction and strength of relation between test measure and criterion measure. It is often confused with construct validity although it differs from it in intent (merely predictive rather than theoretical) and interest (predicting an observable outcome rather than a latent construct). Unreliability in either test or criterion scores usually diminishes criterion validity. Also called criterion-related or concrete validity.
Crowdsourced Research Crowdsourced research is a model of the social organisation of research as a large-scale collaboration in which one or more research projects are conducted by multiple teams in an independent yet coordinated manner. Crowdsourced research aims at achieving efficiency and scalability gains by pooling resources, promoting transparency and social inclusion, as well as increasing the rigor, reliability, and trustworthiness by enhancing statistical power and mutual social vetting. It stands in contrast to the traditional model of academic research production, which is dominated by the independent work of individual or small groups of researchers (‘small science’). Examples of crowdsourced research include so-called ‘many labs replication’ studies (Klein et al., 2018), ‘many analysts, one dataset’ studies (Silberzahn et al., 2018), distributive collaborative networks (Moshontz et al., 2018) and open collaborative writing projects such as Massively Open Online Papers (MOOPs) (Himmelstein et al., 2019; Tennant et al., 2019). Alternatively, crowdsourced research can refer to the use of a large number of research “crowdworkers” in data collection hired through online labor markets like Amazon Mechanical Turk or Prolific, for example in content analysis (Benoit et al., 2016; Lind et al., 2017) or experimental research (Peer et al., 2017). Crowdsourced research that is both open for participation and open through shared intermediate outputs has been referred to as crowd science (Franzoni & Sauermann, 2014).
Cultural taxation The additional labor expected or demanded of members of underrepresented or marginalized minority groups, particularly scholars of color. This labor often comes from service roles providing ethnic, cultural, or gender representation and diversity. These roles can be formal or informal, and are generally unrewarded or uncompensated. Such labor includes providing expertise on matters of diversity, educating members of majority groups, acting as a liaison to minority communities, and formal and informal roles as mentor and support system for minority students.
Cumulative science The goal of any empirical science: the pursuit of “the construction of a cumulative base of knowledge upon which the future of the science may be built” (Curran, 2009, p. 1). The idea that science will create more complete and accurate theories as a function of the amount of evidence and data that has been collected. Cumulative science develops in gradual and incremental steps, as opposed to one abrupt discovery. While revolutionary science occurs rarely, cumulative science is the most common form of science.
Data Access and Research Transparency (DA-RT) Data Access and Research Transparency (DA-RT) is an initiative aimed at increasing data access and research transparency in the social sciences. It is a multi-epistemic and multi-method initiative, created in 2014 by the Council of the American Political Science Association (APSA), to bolster the rigor of empirical social inquiry. In addition to other activities, DA-RT developed the Journal Editors’ Transparency Statement (JETS), which requires subscribing journals to (a) making relevant data publicly available if the study is published, (b) following a strict data citation policy, (c) transparently describing the analytical procedures and, if possible, providing public access to analytical code, and (d) updating their journal style guides and codes of ethics to include improved data access and research transparency requirements.
Data management plan (DMP) A structured document that describes the process of data acquisition, analysis, management and storage during a research project. It also describes data ownership and how the data will be preserved and shared during and upon completion of a project. Data management templates also provide guidance on how to make research data FAIR and where possible, openly available.
Data sharing The collection of practices, technologies, cultural elements and legal frameworks that are relevant to the practice of making data used for scholarly research available to other investigators. Gollwitzer et al. (2020) describe two types of data sharing: Type 1: data that are necessary to reproduce the findings of a published research article. Type 2: data that have been collected in a research project but have not (or have only partly) been analysed or reported after the completion of the project, and that are hence typically shared under a specified embargo period.
Data visualisation Graphical representation of data or information. Data visualisation takes advantage of humans’ well-developed visual processing capacity to convey insight and communicate key information. Data visualisations often display the raw data, descriptive statistics, and/or inferential statistics.
Decolonisation Coloniality can be described as the naturalisation of concepts such as imperialism, capitalism, and nationalism. Together these concepts can be thought of as a matrix of power (and power relations) that can be traced to the colonial period. Decoloniality seeks to break down and decentralize those power relations, with the aim to understand their persistence and to reconstruct the norms and values of a given domain. In an academic setting, decolonisation refers to the rethinking of the lens through which we teach, research, and co-exist, so that the lens generalises beyond Western-centred and colonial perspectives. Decolonising academia involves reconstructing the historical and cultural frameworks being used, redistributing a sense of belonging in universities, and empowering and including voices and knowledge types that have historically been excluded from academia. This is done when people engage with their past, present, and future whilst holding a perspective that is separate from the socially dominant perspective, and by including, not rejecting, an individual’s internalised norms and taboos from the specific colony.
Demarcation criterion A criterion for distinguishing science from non-science which aims to indicate an optimal way for knowledge of the world to grow. In a Popperian approach, the demarcation criterion was falsifiability and the application of a falsificationist attitude. Alternative approaches include that of Kuhn, who believed that the criterion was puzzle solving with the aim of understanding nature, and Lakatos, who argued that science is marked by working within a progressive research programme.
Direct replication As ‘direct replication’ does not have a widely-agreed technical meaning, nor is there a clear-cut distinction between a direct and a conceptual replication, several contributions towards a consensus have been offered. Rather than debating the ‘exactness’ of a replication, it is more helpful to discuss the relevant differences between a replication and its target, and their implications for the reliability and generality of the target’s results.
Diversity Diversity refers to between-person (i.e., interindividual) variation in humans, e.g. ability, age, beliefs, cognition, country, disability, ethnicity, gender, language, race, religion or sexual orientation. Diversity can refer to diversity of researchers (who do the research), the diversity of participant samples (who is included in the study), and diversity of perspectives (the views and beliefs researchers bring into their work; Syed & Kathawalla, 2020).
DOI (digital object identifier) Digital Object Identifiers (DOI) are alpha-numeric strings that can be assigned to any entity, including: publications (including preprints), materials, datasets, and feature films – the use of DOIs is not restricted to just scholarly or academic material. The DOI system “provides a system for persistent and actionable identification and interoperable exchange of managed information on digital networks” (https://doi.org/hb.html). There are many different DOI registration agencies that operate DOIs, but the two that researchers would most likely encounter are Crossref and Datacite.
DORA The San Francisco Declaration on Research Assessment (DORA) is a global initiative aiming to reduce dependence on journal-based metrics (e.g. journal impact factor and citation counts) and, instead, promote a culture which emphasises the intrinsic value of research. The DORA declaration targets research funders, publishers, research institutes and researchers and signing it represents a commitment to aligning research practices and procedures with the declaration’s principles.
Double-blind peer review Evaluation of research products by qualified experts where both the author(s) and reviewer(s) are anonymous to each other. “This approach conceals the identity of the authors and their affiliations from reviewers and would, in theory, remove biases of professional reputation, gender, race, and institutional affiliation, allowing the reviewer to avoid bias and to focus on the manuscript’s merit alone.” (Tvina et al., 2019, p. 1082). Like all types of peer review, double-blind peer review is not without flaws. Anonymity can be difficult, if not impossible, to achieve for certain researchers working in a niche area.
Double consciousness An identity confusion in which the individual feels they have two distinct identities: one that assimilates to the dominant culture at university, when with colleagues and professors, and another when with their families. This continuous shift may cause a lack of certainty about the individual’s identity and a belief that the individual does not fully belong anywhere. This lack of belonging can lead to poor social integration within the academic culture, which can manifest in fewer opportunities and more mental health issues for the individual (Rubin, 2021; Rubin et al., 2019).
Early career researchers (ECRs) A label given to researchers who “range from senior doctoral students to postdoctoral workers who may have up to 10 years postdoctoral education; the latter group may therefore include early career or junior academics” (Eley et al., 2012, p. 3). What specifically (e.g. age, time since PhD inclusive or exclusive of career breaks and leave, title, funding awarded) constitutes an ECR can vary across funding bodies, academic organisations, and countries.
Economic and societal impact The contribution a research item makes to the broader economy and society. It also captures the benefits of research to individuals, organisations, and/or nations.
Embargo Period Applied to Open Scholarship, in academic publishing, the period of time after an article has been published and before it can be made available as Open Access. If an author decides to self-archive their article (e.g., in an Open Access repository) they need to observe any embargo period a publisher might have in place. Embargo periods vary from instantaneous up to 48 months, with 6 and 12 months being common (Laakso & Björk, 2013). Embargo periods may also apply to pre-registrations, materials, and data, when authors decide to only make these available to the public after a certain period of time, for instance upon publication or even later when they have additional publication plans and want to avoid being scooped (Klein et al., 2018).
Epistemic uncertainty Systematic uncertainty due to limited data, measurement precision, model or process specification, or lack of knowledge. That is, uncertainty due to lack of knowledge that could, in theory, be reduced through conducting additional research to increase understanding. Such uncertainty is said to be personal, since knowledge differs across scientists, and temporary since it can change as new data become available.
Epistemology Alongside ethics, logic, and metaphysics, epistemology is one of the four main branches of philosophy. Epistemology is largely concerned with nature, origin, and scope of knowledge, as well as the rationality of beliefs.
Equity Different individuals have different starting positions (cf. “opportunity gaps”) and needs. Whereas equal treatment focuses on treating all individuals equally, equitable treatment aims to level the playing field by actively increasing opportunities for under-represented minorities. Equitable treatment aims to attain equality through “fairness”: taking into account different needs for support for different individuals, instead of focusing merely on the needs of the majority.
Equivalence Testing Equivalence tests statistically assess the null hypothesis that a given effect exceeds a minimum criterion to be considered meaningful. Thus, rejection of the null hypothesis provides evidence of a lack of (meaningful) effect. Based upon frequentist statistics, equivalence tests work by specifying equivalence bounds: a lower and upper value that reflect the smallest effect size of interest. Two one-sided t-tests are then conducted against each of these equivalence bounds to assess whether effects that are deemed meaningful can be rejected (see Schuirmann, 1972; Lakens et al., 2018; 2020).
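A minimal one-sample TOST sketch, assuming illustrative data and equivalence bounds of +/-0.5 raw units:

```python
# One-sample TOST: reject both one-sided nulls (mean <= low, mean >= upp)
# to conclude the mean lies within the equivalence bounds. Data and bounds
# are illustrative.
import numpy as np
from scipy import stats

x = np.array([0.10, -0.20, 0.30, 0.00, 0.15, -0.10, 0.05, 0.20])
low, upp = -0.5, 0.5                 # smallest effect sizes of interest
n = len(x)
se = x.std(ddof=1) / np.sqrt(n)

p_lower = stats.t.sf((x.mean() - low) / se, df=n - 1)   # H0: mean <= low
p_upper = stats.t.cdf((x.mean() - upp) / se, df=n - 1)  # H0: mean >= upp

p_tost = max(p_lower, p_upper)       # both one-sided tests must reject
print(f"TOST p = {p_tost:.4f}")      # p < .05 -> statistically equivalent
```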
Error detection Broadly refers to examining research data and manuscripts for mistakes or inconsistencies in reporting. Commonly discussed approaches include: checking inconsistencies in descriptive statistics (e.g. summary statistics that are not possible given the sample size and measure characteristics; Brown & Heathers, 2017; Heathers et al. 2018), inconsistencies in reported statistics (e.g. p-values that do not match the reported F statistics and accompanying degrees of freedom; Epskamp, & Nuijten, 2016; Nuijten et al. 2016), and image manipulation (Bik et al., 2016). Error detection is one motivation for data and analysis code to be openly available, so that peer review can confirm a manuscript’s findings, or if already published, the record can be corrected. Detected errors can result in corrections or retractions of published articles, though these actions are often delayed, long after erroneous findings have influenced and impacted further research.
Evidence Synthesis This is a type of research method which aims to draw general conclusions to address a research question on a certain topic, phenomenon or effect by reviewing research outcomes and information from a range of different sources. Information which is subject to synthesis can be extracted from both qualitative and quantitative studies. The method used to synthesise the gathered information can be qualitative (narrative synthesis), quantitative (meta-analysis) or mixed (meta-synthesis, systematic mapping). Evidence synthesis has many applications and is often used in the context of healthcare and public policy, as well as in the understanding and advancement of specific research fields.
Exploratory data analysis Exploratory Data Analysis (EDA) is a well-established statistical tradition that provides conceptual and computational tools for discovering patterns in data to foster hypothesis development and refinement. These tools and attitudes complement the use of hypothesis tests used in confirmatory data analysis (CDA). Even when well-specified theories are held, EDA helps one interpret the results of CDA and may reveal unexpected or misleading patterns in the data.
External Validity Whether the findings of a scientific study can be generalized to other contexts outside the study context (different measures, settings, people, places, and times). Statistically, threats to external validity may reflect interactions whereby the effect of one factor (the independent variable) depends on another factor (a confounding variable). External validity may also be limited by the study design (e.g., an artificial laboratory setting or a non-representative sample).
Face validity A subjective judgement of how suitable a measure appears to be on the surface, that is, how well a measure is operationalized. For example, judging whether questionnaire items should relate to a construct of interest at face value. Face validity is related to construct validity, but since it is subjective/informal, it is considered an easy but weak form of validity.
FAIR principles Describes making scholarly materials Findable, Accessible, Interoperable and Reusable (FAIR). ‘Findable’ and ‘Accessible’ are concerned with where materials are stored (e.g. in data repositories), while ‘Interoperable’ and ‘Reusable’ focus on the importance of data formats and how such formats might change in the future.
Feminist psychology With a particular focus on gender and sexuality, feminist psychology is inherently concerned with representation, diversity, inclusion, accessibility, and equality. Feminist psychology initially grew out of a concern for representing the lived experiences of girls and women, but has since evolved into a more nuanced, intersectional and comprehensive concern for all aspects of equality (e.g., Eagly & Riger, 2014). Feminist psychologists have advocated for more rigorous consideration of equality, diversity, and inclusion within Open Science spaces (Pownall et al., 2021).
First-last-author-emphasis norm (FLAE) An authorship system that assigns the order of authorship depending on the contributions of a given author while simultaneously valuing the first and last position of the authorship order most. According to this system, the two main authors are indicated as the first and last author – the order of the authors between the first and last position is determined by contribution in a descending order.
FORRT Framework for Open and Reproducible Research Training. It aims to provide a pedagogical infrastructure designed to recognize and support the teaching and mentoring of open and reproducible research in tandem with prototypical subject matters in higher education. FORRT strives to be an effective, evolving, and community-driven organization raising awareness of the pedagogical implications of open and reproducible science and its associated challenges (i.e., curricular reform, epistemological uncertainty, methods of education). FORRT also advocates for the opening of teaching and mentoring materials as a means to facilitate access, discovery, and learning to those who otherwise would be educationally disenfranchised.
Free Our Knowledge Platform A collective action platform aiming to support the open science movement by obtaining pledges from researchers that they will implement certain research practices (e.g., pre-registration, pre-print). Pledges are initially anonymous; once a sufficient number of people have pledged, the names of those who pledged are released. The initiative is a grassroots movement instigated by early career researchers.
G*Power Free to use statistical software for performing power analyses. The user specifies the desired statistical test (e.g. t-test, regression, ANOVA), and three of the following: the number of groups/observations, effect size, significance level, or power, in order to calculate the unspecified aspect.
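The same calculation can be scripted; a minimal sketch using statsmodels (rather than G*Power itself) to solve for the per-group sample size of an independent-samples t-test:

```python
# Solve for the unspecified quantity (here, sample size per group) given
# effect size, alpha and power -- the same logic G*Power implements.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,  # Cohen's d
    alpha=0.05,       # significance level
    power=0.80,       # desired power
)
print(f"n per group: {n_per_group:.1f}")  # roughly 64
```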
Gaming (the system) Adopting questionable research practices (QRPs, e.g., salami slicing of an academic paper) that would align with academic incentive structures that benefit the academic (e.g. in prestige, hiring, or promotion) regardless of whether they support the process of scholarship. If systems rely on metrics to determine an outcome (e.g. academic credit) those metrics can be subject to intentional manipulation (Naudet et al., 2018) or “gamed”. Where promotions, hiring, and tenure are based on flawed metrics they may disfavor openness, rigor, and transparent work (Naudet et al., 2018) – for example favoring “quantity over quality” – and exacerbate existing inequalities.
Garden of forking paths The typically-invisible decision tree traversed during operationalization and statistical analysis given that ‘there is a one-to-many mapping from scientific to statistical hypotheses’ (Gelman and Loken, 2013, p. 6). In other words, even in absence of p-hacking or fishing expeditions and when the research hypothesis was posited ahead of time, there can be a plethora of statistical results that can appear to be supported by theory given data. “The problem is there can be a large number of potential comparisons when the details of data analysis are highly contingent on data, without the researcher having to perform any conscious procedure of fishing or examining multiple p-values” (Gelman and Loken, 2013, p. 1). The term aims to highlight the uncertainty ensuing from idiosyncratic analytical and statistical choices in mapping theory-to-test, and contrasting intentional (and unethical) questionable research practices (e.g. p-hacking and fishing expeditions) versus non-intentional research practices that can, potentially, have the same effect despite not having intent to corrupt their results. The garden of forking paths refers to the decisions during the scientific process that inflate the false-positive rate as a consequence of the potential paths which could have been taken (had other decisions been made).
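To see how quickly the paths multiply, consider a handful of individually defensible analysis choices (all names below are illustrative):

```python
# Four decision points with a few defensible options each already yield
# dozens of distinct analysis paths.
from itertools import product

choices = {
    "outlier rule":   ["none", "2.5 SD", "3 SD"],
    "transformation": ["raw", "log"],
    "covariates":     ["none", "age", "age + gender"],
    "exclusions":     ["keep all", "drop failed attention checks"],
}
paths = list(product(*choices.values()))
print(len(paths), "analysis paths")  # 3 * 2 * 3 * 2 = 36
```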
General Data Protection Regulation (GDPR) A legal framework of seven principles implemented across the European Union (EU) that aims to safeguard individuals’ information. The framework seeks to commission citizens with control over their personal data, whilst regulating the parties involved in storing and processing these data. This set of legislation dictates the free movement of individuals’ personal information both within and outside the EU and must be considered by researchers when designing and running studies.
Generalizability Generalizability refers to how applicable a study’s results are to broader groups of people, settings, or situations beyond those studied, and how the findings relate to this wider context (Frey, 2018; Kukull & Ganguli, 2012).
Gift (or Guest) Authorship The inclusion in an article’s author list of individuals who do not meet the criteria for authorship. As authorship is associated with benefits including peer recognition and financial rewards, there are incentives for inclusion as an author on published research. Gifting authorship, or extending authorship credit to an individual who does not merit such recognition, can be intended to help the gift recipient, repay favors (including reciprocal gift authorship), maintain personal and professional relationships, and enhance chances of publication. Gift authorship is widely considered an unethical practice.
Git A software package for tracking changes in a local set of files (local version control), initially developed by Linus Torvalds. In general, it is used by programmers to track and develop computer source code within a set directory, folder or file system. Git can access remote repository hosting services (e.g. GitHub) for remote version control, enabling collaborative software development by uploading contributions from a local system. This workflow has found its way into science, enabling open data, open code and reproducible analyses.
Goodhart’s Law A term coined by economist Charles Goodhart to refer to the observation that measuring something inherently changes the behaviour of those being measured. In relation to examination performance, Strathern (1997) stated that “when a measure becomes a target, it ceases to be a good measure” (p. 308). Applied to open scholarship and the structure of incentives in academia, Goodhart’s Law predicts that metrics of scientific evaluation will likely be abused and exploited, as evidenced by Muller (2019).
H-index Hirsch’s index, abbreviated as H-index, intends to measure both productivity and research impact by combining the number of publications and the number of citations to these publications. Hirsch (2005) defined the index as “the number of papers with citation number ≥ h” (p. 16569). That is, the greatest number such that an author (or journal) has published at least that many papers that have been cited at least that many times. The index is perceived as a superior measure to measures that only assess, for instance, the number of citations and number of publications but this index has been criticised for the purpose of researcher assessment (e.g. Wendl, 2007).
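Hirsch’s definition translates directly into code; a minimal sketch:

```python
def h_index(citations):
    """Greatest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

print(h_index([10, 8, 5, 4, 3]))  # -> 4: four papers cited at least 4 times
```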
Hackathon An organized event where experts, designers, or researchers collaborate for a relatively short amount of time to work intensively on a project or problem. The term is originally borrowed from computer programmer and software development events whose goal is to create a fully fledged product (resources, research, software, hardware) by the end of the event, which can last several hours to several days.
HARKing A questionable research practice termed ‘Hypothesizing After the Results are Known’ (HARKing). “HARKing is defined as presenting a post hoc hypothesis (i.e., one based on or informed by one’s results) in a research report as if it was, in fact, a priori” (Kerr, 1998, p. 196). For example, performing subgroup analyses, finding an effect in one subgroup, and writing the introduction with a ‘hypothesis’ that matches these results.
Hidden Moderators Contextual conditions that can, unbeknownst to researchers, make the results of a replication attempt deviate from those of the original study. Hidden moderators are sometimes invoked to explain (away) failed replications. Also called hidden assumptions.
Hypothesis A hypothesis is an unproven statement about the relationship between variables (Glass & Hall, 2008) and can be based on prior experiences, scientific knowledge, preliminary observations, theory and/or logic. In scientific testing, a hypothesis can usually be formulated with a direction (e.g. a positive correlation) or without one (e.g. there will be a correlation). Popper (1959) posits that hypotheses must be falsifiable, that is, it must be conceivably possible to prove the hypothesis false. However, hypothesis testing based on falsification has been argued to be vague, as it is contingent on many other untested assumptions in the hypothesis (i.e., auxiliary hypotheses). Longino (1990, 1992) argued that ontological heterogeneity should be valued more than ontological simplicity for the biological sciences, meaning that differences between and within biological organisms should be investigated.
i10-index A research metric created by Google Scholar that represents the number of publications a researcher has with at least 10 citations.
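In code it is a simple count (the citation numbers are illustrative):

```python
citations = [25, 12, 10, 9, 3]          # citation counts per publication
i10 = sum(c >= 10 for c in citations)   # publications with >= 10 citations
print(i10)                              # -> 3
```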
Ideological bias The idea that pre-existing opinions about the quality of research can depend on the ideological views of the author(s). One of the many biases in the peer review process, it predicts that favourable opinions of research are more likely when the authors are friends or collaborators of, or scientists who share the political viewpoints of, the editor or reviewer (Tvina et al. 2019). This could potentially lead to a variety of conflicts of interest that undermine diverse perspectives, for example: speeding or delaying peer review, or influencing the chances of an individual being invited to present their research, thus promoting their work.
Incentive structure The set of evaluation and reward mechanisms (explicit and implicit) for scientists and their work. Incentivised areas within the broader structure include hiring and promotion practices, track record for awarding funding, and prestige indicators such as publication in journals with high impact factors, invited presentations, editorships, and awards. It is commonly believed that these criteria are often misaligned with the telos of science, and therefore do not promote rigorous scientific output. Initiatives like DORA aim to reduce the field’s dependency on evaluation criteria such as journal impact factors in favor of assessments based on the intrinsic quality of research outputs.
Inclusion Inclusion, or inclusivity, refers to a sense of welcome and respect within a given collaborative project or environment (such as academia). Whereas diversity simply indicates a wide range of backgrounds, perspectives, and experiences, efforts to increase inclusion go further to promote engagement and equal valuation among diverse individuals, who might otherwise be marginalized. Increasing inclusivity often involves minimising the impact of, or even removing, systemic barriers to accessibility and engagement.
Induction “Reasoning by drawing a conclusion not guaranteed by the premises; for example, by inferring a general rule from a limited number of observations. Popper believed that there was no such logical process; we may guess general rules but such guesses are not rendered even more probable by any number of observations. By contrast, Bayesians inductively work out the increase in probability of a hypothesis that follows from the observations.” (Dienes, 2008, p. 164)
Interaction Fallacy A statistical error in which the difference between a significant and a non-significant result (e.g., two correlation coefficients, or odds ratios) is itself taken to be statistically significant even though that comparison has not been formally tested.
Interlocking An analysis at the core of intersectionality used to analyse power, inequality and exclusion, since efforts to reform academic culture cannot succeed by investigating only one avenue in isolation (e.g. race, gender or ability) but must consider all the systems of exclusion together. In contrast to intersectionality (which refers to the individual having multiple social identities), interlocking is usually used to describe the systems that combine to oppress the individual based on these identities.
Internal Validity An indicator of the extent to which a study’s findings are representative of the true effect in the population of interest and not due to research confounds, such as methodological shortcomings. In other words, whether the observed evidence or covariation between the independent (predictor) and dependent (criterion) variables can be taken as a bona fide relationship and not a spurious effect owing to uncontrolled aspects of the study’s set up. Since it involves the quality of the study itself, internal validity is a priority for scientific research.
Intersectionality A term which derives from Black feminist thought and broadly describes how social identities exist within ‘interlocking systems of oppression’ and structures of (in)equalities (Crenshaw, 1989). Intersectionality offers a perspective on the way multiple forms of inequality operate together to compound or exacerbate each other. Multiple concurrent forms of identity can have a multiplicative effect and are not merely the sum of the component elements. One implication is that identity cannot be adequately understood through examining a single axis (e.g., race, gender, sexual orientation, class) at a time in isolation, but requires simultaneous consideration of overlapping forms of identity.
JabRef An open-source, cross-platform citation and reference management tool that is available free of charge. It allows editing BibTeX files, importing data from online scientific databases, and managing and searching BibTeX files.
Jamovi Free and open source software for data analysis based on the R language. The software has a graphical user interface and provides the R code underlying the analyses. Jamovi supports computational reproducibility by saving the data, code, analyses, and results in a single file.
JASP Named after Sir Harold Jeffreys, JASP stands for Jeffreys’s Amazing Statistics Program. It is a free and open source software for data analysis. JASP offers a graphical user interface and provides both null hypothesis tests and their Bayesian counterparts. JASP supports computational reproducibility by saving the data, code, analyses, and results in a single file.
Journal Impact Factor™ The mean number of citations received in a given year by research articles that a journal published over the preceding two years. It is a proprietary and opaque calculation marketed by Clarivate™. Journal Impact Factors are not associated with the content quality or the peer review process.
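As an illustration, the standard two-year formulation can be sketched as follows (Clarivate’s exact rules for what counts as a “citable item” are proprietary):

$$\mathrm{JIF}_{Y} = \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{N_{Y-1} + N_{Y-2}}$$

where $C_{Y}(y)$ is the number of citations received in year $Y$ by items the journal published in year $y$, and $N_{y}$ is the number of citable items published in year $y$.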
JSON file JavaScript Object Notation (JSON) is a data format for structured data that can be used to represent attribute-value pairs. Values can themselves contain further JSON notation (i.e., nested information). JSON files are formally encoded as strings of text and are thus human-readable. Beyond storing information, this feature makes them suitable for annotating other content. For example, JSON files are used in the Brain Imaging Data Structure (BIDS) to describe dataset metadata following a standardized format (dataset_description.json).
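A minimal sketch of writing and reading such a file with Python’s standard json module (the field values below are hypothetical; BIDS requires at least the Name and BIDSVersion fields in dataset_description.json):

```python
import json

# Hypothetical BIDS-style dataset metadata as attribute-value pairs.
metadata = {
    "Name": "Example fMRI dataset",
    "BIDSVersion": "1.8.0",
    "Authors": ["A. Researcher", "B. Researcher"],
}

# Write the (human-readable) JSON file ...
with open("dataset_description.json", "w") as f:
    json.dump(metadata, f, indent=2)

# ... and read it back; the nested attribute-value pairs are recovered.
with open("dataset_description.json") as f:
    print(json.load(f)["Name"])
```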
Knowledge acquisition The process by which the mind decodes or extracts, stores, and relates new information to existing information in long term memory. Given the complex structure and nature of knowledge, this process is studied in the philosophical field of epistemology, as well as the psychological field of learning and memory.
Likelihood function A statistical model of the data used in frequentist and Bayesian analyses, defined up to a constant of proportionality. A likelihood function represents the plausibility of different parameter values for your distribution given the data. Given that probability distributions have unknown population parameters, the likelihood function indicates how well the sample data summarise these parameters. As such, the likelihood function gives an idea of the goodness of fit of a model to the sample data for a given set of values of the unknown population parameters.
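As a simple textbook illustration (not from the glossary): for $k$ successes observed in $n$ independent Bernoulli trials with unknown success probability $\theta$, the likelihood function is

$$\mathcal{L}(\theta \mid k, n) \propto \theta^{k}(1-\theta)^{\,n-k},$$

which is maximised at $\hat{\theta} = k/n$, the parameter value under which the observed data are most probable.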
Likelihood Principle The notion that all information relevant to inference contained in data is provided by the likelihood. The principle suggests that the likelihood function can be used to compare the plausibility of various parameter values. While Bayesians and likelihood theorists subscribe to the likelihood principle, Neyman-Pearson theorists do not, as significance tests violate the likelihood principle because they take into account information not in the likelihood.
Literature Review Researchers often review research records on a given topic to better understand effects and phenomena of interest before embarking on a new research project, to understand how theory links to evidence or to investigate common themes and directions of existing study results and claims. Different types of reviews can be conducted depending on the research question and literature scope. To determine the scope and key concepts in a given field, researchers may want to conduct a scoping literature review. Systematic reviews aim to access and review all available records for the most accurate and unbiased representation of existing literature. Non-systematic or focused literature reviews synthesise information from a selection of studies relevant to the research question although they are uncommon due to susceptibility to biases (e.g. researcher bias; Siddaway et al., 2019).
Manel Portmanteau of ‘male panel’, usually referring to speaker panels at conferences composed entirely of (usually Caucasian) males. Typically discussed in the context of gender disparities in academia (e.g., women being less likely to be recognised as experts by their peers and, subsequently, having fewer opportunities for career development).
Many authors Large-scale collaborative projects involving tens or hundreds of authors from different institutions. This kind of approach, as opposed to research carried out by small teams of authors, has become increasingly common in psychology and other sciences in recent years, following earlier trends observed in, e.g., high-energy physics and biomedical research in the 1990s. These large international scientific consortia work on a research project to bring together a broader range of expertise and work collaboratively to produce manuscripts.
Many Labs A crowdsourcing initiative led by the Open Science Collaboration (2015) whereby several hundred separate research groups from various universities run replication studies of published effects. This initiative is also known as “Many Labs I” and was subsequently followed by a “Many Labs II” project that assessed variation in replication results across samples and settings. Similar projects include ManyBabies, EEGManyLabs, and the Psychological Science Accelerator.
Massive Open Online Courses (MOOCs) Exclusively online courses which are accessible to any learner at any time, are typically free to access (while not necessarily openly licensed), and provide video-based instructions and downloadable data sets and exercises. The “massive” aspect describes the high volume of students that can access the course at any one time, owing to the flexibility, low or no cost, and online nature of the materials.
Massively Open Online Papers (MOOPs) Unlike the traditional collaborative article, a MOOP follows an open participatory and dynamic model that is not restricted by a predetermined list of contributors.
Matthew effect (in science) Named for the ‘rich get richer; poor get poorer’ paraphrase of the Gospel of Matthew. Eminent scientists and early-career researchers with a prestigious fellowship are disproportionately attributed greater levels of credit and funding for their contributions to science while relatively unknown or early-career researchers without a prestigious fellowship tend to get disproportionately little credit for comparable contributions. The impact is a substantial cumulative advantage that results from modest initial comparative advantages (and vice versa).
Meta-analysis A meta-analysis is a statistical synthesis of results from a series of studies examining the same phenomenon. A variety of meta-analytic approaches exist, including random or fixed effects models or meta-regressions, which allow for an examination of moderator effects. By aggregating data from multiple studies, a meta-analysis could provide a more precise estimate for a phenomenon (e.g. type of treatment) than individual studies. Results are usually visualized in a forest plot. Meta-analyses can also help examine heterogeneity across study results. Meta-analyses are often carried out in conjunction with systematic reviews and similarly require a systematic search and screening of studies. Publication bias is also commonly examined in the context of a meta-analysis and is typically visually presented via a funnel plot.
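For illustration, a standard fixed-effect formulation (textbook notation, not from the glossary): each study’s effect size $y_i$ with sampling variance $v_i$ receives an inverse-variance weight, and the pooled estimate is

$$\hat{\theta} = \frac{\sum_i w_i\, y_i}{\sum_i w_i}, \qquad w_i = \frac{1}{v_i},$$

so more precise studies contribute more to the synthesis.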
Meta-science or Meta-research The scientific study of science itself with the aim to describe, explain, evaluate and/or improve scientific practices. Meta-science typically investigates scientific methods, analyses, the reporting and evaluation of data, the reproducibility and replicability of research results, and research incentives.
Metadata Structured data that describes and synthesises other data. Metadata can help find, organize, and understand data. Examples of metadata include creator, title, contributors, keywords, and tags, as well as any kind of information necessary to verify and understand the results and conclusions of a study, such as a codebook describing data labels, the sample, and the data collection process.
Model (computational) Computational models aim to mathematically translate the phenomena under study to better understand, communicate and predict complex behaviours.
Model (philosophy) The process by which a verbal description is formalised to remove ambiguity, while also constraining the dimensions a theory can span. The model is thus data derived. “Many scientific models are representational models: they represent a selected part or aspect of the world, which is the model’s target system” (Frigg & Hartmann, 2020).
Model (statistical) A mathematical representation of observed data that aims to reflect the population under study, allowing for the better understanding of the phenomenon of interest, identification of relationships among variables and predictions about future instances. A classic example would be the application of the chi-square test to understand the relationship between smoking and cancer (Doll & Hill, 1954).
Multi-Analyst Studies In typical empirical studies, a single researcher or research team conducts the analysis, which creates uncertainty about the extent to which the choice of analysis influences the results. In multi-analyst studies, two or more researchers independently analyse the same research question or hypothesis on the same dataset. According to Aczel and colleagues (2021), a multi-analyst approach may be beneficial in increasing our confidence in a particular finding; uncovering the impact of analytical preferences across research teams; and highlighting the variability in such analytical approaches.
Multiplicity Potential inflation of Type I error rates (incorrectly rejecting the null hypothesis) because of multiple statistical testing, for example, multiple outcomes, multiple follow-up time points, or multiple subgroup analyses. To overcome issues with multiplicity, researchers will often apply controlling procedures (e.g., Bonferroni, Holm-Bonferroni, Tukey) that correct the alpha value to control for inflated Type I errors. However, by controlling for Type I errors, one can increase the possibility of Type II errors (i.e., incorrectly accepting the null hypothesis).
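A minimal sketch of such a correction using statsmodels (the p-values are hypothetical):

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical p-values from four tests of the same family.
pvals = [0.01, 0.04, 0.03, 0.20]

# Across m independent tests at alpha = .05, the familywise error rate
# is 1 - (1 - .05)**m, hence the need for a controlling procedure.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
print(reject)  # which nulls are still rejected after Holm-Bonferroni
print(p_adj)   # the corrected p-values
```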
Multiverse analysis Multiverse analyses are based on all potentially equally justifiable data processing and statistical analysis pipelines that can be employed to test a single hypothesis. In a data multiverse analysis, a single set of raw data is processed into a multiverse of data sets by applying all possible combinations of justifiable preprocessing choices. Model multiverse analyses apply equally justifiable statistical models to the same data to answer the same hypothesis. The statistical analysis is then conducted on all data sets in the multiverse and all results are reported, which promotes transparency and illustrates the robustness of results against different data processing (data multiverse) or statistical (model multiverse) pipelines. Multiverse analysis differs from specification curve analysis with regard to the graphical displays (a histogram and tile plot rather than a specification curve plot).
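A minimal sketch of a data multiverse, assuming hypothetical data and two preprocessing decisions (an outlier cutoff and an optional log transform); the same one-sample t-test is run on every resulting data set, and all results are reported:

```python
from itertools import product
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
raw = rng.normal(loc=0.3, scale=1.0, size=100)  # hypothetical raw data

def preprocess(x, sd_cutoff, log_transform):
    # Two justifiable preprocessing choices applied to the same raw data.
    if log_transform:
        x = np.log1p(x - x.min())
    z = (x - x.mean()) / x.std()
    return x[np.abs(z) < sd_cutoff]  # exclude outliers beyond the cutoff

# The multiverse: every combination of the justifiable choices.
for sd_cutoff, log_transform in product([2.5, 3.0], [False, True]):
    sample = preprocess(raw, sd_cutoff, log_transform)
    res = stats.ttest_1samp(sample, popmean=0.0)
    print(f"cutoff={sd_cutoff}, log={log_transform}: p={res.pvalue:.3f}")
```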
Name Ambiguity Problem An attribution issue arising from two related problems: authors may use multiple names or monikers to publish work, and multiple authors in a single field may share full names. This makes accurate identification of authors based on names and specialisms alone a difficult task. This can be addressed through the creation and use of unique digital identifiers that act akin to digital fingerprints, such as ORCID.
Named entity-based Text Anonymization for Open Science (NETANOS) A free, open-source anonymisation software that identifies and modifies named entities (e.g. persons, locations, times, dates). Its key feature is that it preserves critical context needed for secondary analyses. The aim is to assist researchers in sharing their raw text data, while adhering to research ethics.
Non-Intervention, Reproducible, and Open Systematic Reviews (NIRO-SR) A comprehensive set of tools to facilitate the development, preregistration and dissemination of systematic literature reviews for non-intervention research. Part A represents detailed guidelines for creating and preregistering a systematic review protocol in the context of non-intervention research whilst preparing for transparency. Part B represents guidelines for writing up the completed systematic review, with a focus on enhancing reproducibility.
Null Hypothesis Significance Testing (NHST) A frequentist approach to inference used to test the probability of an observed effect against the null hypothesis of no effect/relationship (Pernet, 2015). Such a conclusion is arrived at through use of an index called the p-value. Specifically, researchers will conclude an effect is present when an a priori alpha threshold, set by the researchers, is satisfied; this determines the acceptable level of uncertainty and is closely related to Type I error.
Objectivity The idea that scientific claims, methods, results and scientists themselves should remain value-free and unbiased, and thus not be affected by cultural, political, racial or religious bias as well as any personal interests (Merton, 1942).
Ontology (Artificial Intelligence) A set of axioms in a subject area that help classify and explain the nature of the entities under study and the relationships between them.
Open access “Free availability of scholarship on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these research articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself” (BOAI, 2002). Different methods of achieving open access (OA) are often referred to by color, including Green Open Access (when the work is openly accessible from a public repository), Gold Open Access (when the work is immediately openly accessible upon publication via a journal website), and Platinum (or Diamond) Open Access (a subset of Gold OA in which all works in the journal are immediately accessible after publication from the journal website without the authors needing to pay an article processing charge [APC]).
Open Code Making computer code (e.g., programming, analysis code, stimuli generation) freely and publicly available in order to make research methodology and analysis transparent and allow for reproducibility and collaboration. Code can be made available via open code websites, such as GitHub, the Open Science Framework, and Codeshare (to name a few), enabling others to evaluate and correct errors and re-use and modify the code for subsequent research.
Open Data Open data refers to data that is freely available and readily accessible for use by others without restriction: “Open data and content can be freely used, modified, and shared by anyone for any purpose” (https://opendefinition.org/). Open data are, at most, subject to the requirement to attribute and share alike, so it is important to consider appropriate Open Licenses. Sensitive or time-sensitive datasets can be embargoed or shared with more selective access options to ensure data integrity is upheld.
Open Educational Resources (OER) Commons OER Commons (with OER standing for open educational resources) is a freely accessible online library allowing teachers to create, share and remix educational resources. The goal of the OER movement is to stimulate “collaborative teaching and learning” (https://www.oercommons.org/about) and provide high-quality educational resources that are accessible for everyone.
Open Educational Resources (OERs) Learning materials that can be modified and enhanced because their creators have given others permission to do so. The individuals or organizations that create OERs—which can include materials such as presentation slides, podcasts, syllabi, images, lesson plans, lecture videos, maps, worksheets, and even entire textbooks—waive some (if not all) of the copyright associated with their works, typically via legal tools like Creative Commons licenses, so others can freely access, reuse, translate, and modify them.
Open Licenses Open licenses are provided with open data and open software (e.g., analysis code) to define how others can (re)use the licensed material. In setting out the permissions and restrictions, open licenses often permit the unrestricted access, reuse and redistribution of an author’s original work. Datasets are typically licensed under a type of open licence known as a Creative Commons license, whereas software is typically released under licenses such as MIT, Apache, or GPL. These can differ in relatively subtle ways, with GPL licenses (and their variants) being copyleft licenses that require that any derivative work is licensed under the same terms as the original.
Open Material An author’s public sharing of materials that were used in a study, “such as survey items, stimulus materials, and experiment programs” (Kidwell et al., 2016, p. 3). Digitally-shareable materials are posted on open access repositories, which makes them publicly available and accessible. Depending on licensing, the material can be reused by other authors for their own studies. Components that are not digitally-shareable (e.g. biological materials, equipment) must be described in sufficient detail to allow reproducibility.
Open Peer Review A scholarly review mechanism providing disclosure of any combination of author and referee identities, as well as peer-review reports and editorial decision letters, to one another or publicly at any point during or after the peer review or publication process. It may also refer to the removal of restrictions on who can participate in peer review and the platforms for doing so. Note that ‘open peer review’ has been used interchangeably to refer to any, or all, of the above practices.
Open Scholarship Knowledge Base The Open Scholarship Knowledge Base (OSKB) is a collaborative initiative to share knowledge on the what, why and how of open scholarship to make this knowledge easy to find and apply. Information is curated and created by the community. The OSKB is a community under the Center for Open Science (COS).
Open Scholarship ‘Open scholarship’ is often used synonymously with ‘open science’, but extends to all disciplines, drawing in those which might not traditionally identify as science-based. It reflects the idea that knowledge of all kinds should be openly shared, transparent, rigorous, reproducible, replicable, accumulative, and inclusive (allowing for all knowledge systems). Open scholarship includes all scholarly activities that are not solely limited to research such as teaching and pedagogy.
Open Science Framework A free and open source platform for researchers to organize and share their research project and to encourage collaboration. Often used as an open repository for research code, data and materials, preprints and preregistrations, while managing a more efficient workflow. Created and maintained by the Center for Open Science.
Open Science An umbrella term reflecting the idea that scientific knowledge of all kinds, where appropriate, should be openly accessible, transparent, rigorous, reproducible, replicable, accumulative, and inclusive, all of which are considered fundamental features of the scientific endeavour. Open science consists of principles and behaviors that promote transparent, credible, reproducible, and accessible science. Open science has six major aspects: open data, open methodology, open source, open access, open peer review, and open educational resources.
Open Source software A type of computer software in which source code is released under a license that permits others to use, change, and distribute the software to anyone and for any purpose. Open source is more than openly accessible: the distribution terms of open-source software must comply with 10 specific criteria (see: https://opensource.org/osd).
Open washing Open washing, termed after “greenwashing”, refers to the act of claiming openness to secure perceptions of rigor or prestige associated with open practices. It has been used to characterise the marketing strategy of software companies that have the appearance of open-source and open-licensing, while engaging in proprietary practices. Open washing is a growing concern for those adopting open science practices as their actions are undermined by misleading uses of the practices, and actions designed to facilitate progressive developments are reduced to ‘ticking the box’ without clear quality control.
OpenNeuro A free platform where researchers can freely and openly share, browse, download and re-use brain imaging data (e.g., MRI, MEG, EEG, iEEG, ECoG, ASL, and PET data).
Optional Stopping The practice of (repeatedly) analyzing data during the data collection process and deciding to stop data collection if a statistical criterion (e.g. p-value, or Bayes factor) reaches a specified threshold. If appropriate methodological precautions are taken to control the Type I error rate, this can be an efficient analysis procedure (e.g. Lakens, 2014). However, without transparent reporting or appropriate error control the Type I error rate can increase greatly, and optional stopping could be considered a Questionable Research Practice (QRP) or a form of p-hacking.
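A minimal simulation sketch (hypothetical parameters) of why uncorrected optional stopping inflates the Type I error rate: data are generated under a true null, and collection “stops” as soon as any of ten interim t-tests reaches p < .05:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_max = 5000, 100
looks = range(10, n_max + 1, 10)  # ten interim analyses

false_positives = 0
for _ in range(n_sims):
    x = rng.normal(size=n_max)  # the true effect is exactly zero
    for n in looks:
        if stats.ttest_1samp(x[:n], popmean=0.0).pvalue < 0.05:
            false_positives += 1
            break  # stop collecting as soon as p < .05

# Well above the nominal 5% despite the null being true.
print(false_positives / n_sims)
```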
ORCID (Open Researcher and Contributor ID) An organisation that provides a registry of persistent unique identifiers (ORCID iDs) for researchers and scholars, allowing these users to link their digital research documents and other contributions to their ORCID record. This avoids the name ambiguity problem in scholarly communication. It is free to register for an ORCID iD at https://orcid.org/register.
Overlay Journal Open access electronic journals that collect and curate articles available from other sources (typically preprint servers, such as arXiv). Article curation may include (post-publication) peer review or editorial selection. Overlay journals do not publish novel material; rather, they organize and collate articles available in existing repositories.
P-curve P-curve is a tool for identifying potential publication bias and makes use of the distribution of significant p-values in a series of independent findings. The deviation from the expected right-skewed distribution can be used to assess the existence and degree of publication bias: if the curve is right-skewed, there are more low, highly significant p-values, reflecting an underlying true effect. If the curve is left-skewed, there are many barely significant results just under the 0.05 threshold. This suggests that the studies lack evidential value and may be underpinned by questionable research practices (QRPs; e.g., p-hacking). If no true effect is present (a true null hypothesis) and p-values are reported without bias, the p-curve should be a flat, horizontal line, reflecting the uniform distribution of p-values under the null.
P-hacking Exploiting flexibility in data collection and analysis in ways that may artificially increase the likelihood of obtaining a result that meets the standard statistical significance criterion (typically α = .05). Examples include performing multiple analyses and reporting only those at p < .05, selectively removing data until p < .05, and selecting variables for use in analyses based on whether those parameters are statistically significant.
p-value A statistic used to evaluate the outcome of a hypothesis test in Null Hypothesis Significance Testing (NHST). It refers to the probability of observing an effect, or more extreme effect, assuming the null hypothesis is true (Lakens, 2021b). The American Statistical Association’s statement on p-values (Wasserstein & Lazar, 2016) notes that p-values are not an indicator of the truth of the null hypothesis and instead defines p-values in this way: “Informally, a p-value is the probability under a specified statistical model that a statistical summary of the data (e.g., the sample mean difference between two compared groups) would be equal to or more extreme than its observed value” (p. 131).
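In symbols (standard notation, not from the glossary): for an observed test statistic $t_{\mathrm{obs}}$ and a one-sided test,

$$p = \Pr\left(T \geq t_{\mathrm{obs}} \mid H_0\right),$$

with two-sided tests using $\Pr\left(|T| \geq |t_{\mathrm{obs}}| \mid H_0\right)$.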
Papermill An organization that is engaged in scientific misconduct wherein multiple papers are produced by falsifying or fabricating data, e.g. by editing figures or numerical data or plagiarizing written text. Papermills are “alleged to offer products ranging from research data through to ghostwritten fraudulent or fabricated manuscripts and submission services” (Byrne & Christopher, 2020, p. 583). A papermill relates to the fast production and dissemination of multiple allegedly new papers. These often pass undetected through the scientific publishing process and are therefore either never discovered, or retracted if discovered (e.g. through plagiarism software).
Paradata Data that are captured about the characteristics and context of primary data collected from an individual – distinct from metadata. Paradata can be used to investigate a respondent’s interaction with a survey or an experiment on a micro-level. They can be most easily collected during computer mediated surveys but are not limited to them. Examples include response times to survey questions, repeated patterns of responses such as choosing the same answer for all questions, contextual characteristics of the participant such as injuries that prevent good performance on tasks, the number of premature responses to stimuli in an experiment. Paradata have been used for the investigation and adjustment of measurement and sampling errors.
PARKing PARKing (preregistering after results are known) is defined as the practice where researchers complete an experiment (possibly with unlimited re-experimentation) before preregistering. This practice invalidates the purpose of preregistration, and is a QRP (or even scientific misconduct) that seeks only the credibility conferred by having preregistered.
Participatory Research Participatory research refers to incorporating the views of people from relevant communities in the entire research process to achieve shared goals between researchers and the communities. This approach takes a collaborative stance that seeks to reduce the power imbalance between the researcher and those researched through a “systematic cocreation of new knowledge” (Andersson, 2018).
Patient and Public Involvement (PPI) Active research collaboration with the population of interest, as opposed to conducting research “about” them. Researchers can incorporate the lived experience and expertise of patients and the public at all stages of the research process. For example, patients can help to develop a set of research questions, review the suitability of a study design, approve plain English summaries for grant/ethics applications and dissemination, collect and analyse data, and assist with writing up a project for publication. This is becoming highly recommended and even required by funders (Boivin et al., 2018).
Paywall A technological barrier that permits access to information only to individuals who have paid – either personally, or via an organisation – a designated fee or subscription.
PCI (Peer Community In) PCI is a non-profit organisation that creates communities of researchers who review and recommend unpublished preprints based upon high-quality peer review from at least two researchers in their field. These preprints are then assigned a DOI, similarly to a journal article. PCI was developed to establish a free, transparent and public scientific publication system based on the review and recommendation of preprints.
PCI Registered Reports An initiative launched in 2021 dedicated to receiving, reviewing, and recommending Registered Reports (RRs) across the full spectrum of science, technology, engineering, and mathematics (STEM), medicine, social sciences and humanities. Peer Community In (PCI) RRs are overseen by a ‘Recommender’ (equivalent to an Action Editor) and reviewed by at least two experts in the relevant field. It provides free and transparent pre- (Stage 1) and post-study (Stage 2) reviews across research fields. A network of PCI RR-friendly journals endorse the PCI RR review criteria and commit to accepting, without further peer review, RRs that receive a positive final recommendation from PCI RR.
Plan S Plan S is an initiative, launched in September 2018 by cOAlition S, a consortium of research funding organisations, which aims to accelerate the transition to full and immediate Open Access. Participating funders require recipients of research grants to publish their research in compliant Open Access journals or platforms, or make their work openly and immediately available in an Open Access repository, from 2021 onwards. cOAlition S funders have committed to not financially supporting ‘hybrid’ Open Access publication fees in subscription venues. However, authors can comply with Plan S by publishing Open Access in a subscription journal under a “transformative arrangement”, as further described in the implementation guidance. The “S” in Plan S stands for shock.
Positionality Map A reflexive tool for practicing explicit positionality in critical qualitative research. The map is to be used “as a flexible starting point to guide researchers to reflect and be reflexive about their social location. The map involves three tiers: the identification of social identities (Tier 1), how these positions impact our life (Tier 2), and details that may be tied to the particularities of our social identity (Tier 3).” (Jacobson & Mustafa, 2019, p. 1). The aim of the map is “for researchers to be able to better identify and understand their social locations and how they may pose challenges and aspects of ease within the qualitative research process.”
Positionality The contextualization of both the research environment and the researcher, to define the boundaries within which the research was produced (Jafar, 2018). Positionality is typically centred and celebrated in qualitative research, but there have been recent calls for it to be used in quantitative research as well. Positionality statements, whereby a researcher outlines their background and ‘position’ within and towards the research, have been suggested as one method of recognising and centring researcher bias.
Post Hoc Post hoc is borrowed from Latin, meaning “after this”. In statistics, post hoc (or post hoc analysis) refers to the testing of hypotheses not specified prior to data analysis. In frequentist statistics, the procedure differs based on whether the analysis was planned or post hoc, for example by applying more stringent error control. In contrast, Bayesian and likelihood approaches do not differ as a function of when the hypothesis was specified.
Post Publication Peer Review Peer review that takes place after research has been published. It is typically posted on a dedicated platform (e.g., PubPeer). It is distinct from the traditional commentary which is published in the same journal and which is itself usually peer reviewed.
Posterior distribution A way to summarize one’s updated knowledge in Bayesian inference, balancing prior knowledge with observed data. In statistical terms, posterior distributions are proportional to the product of the likelihood function and the prior. A posterior probability distribution captures (un)certainty about a given parameter value.
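In symbols (standard notation, not from the glossary), Bayes’ theorem gives the posterior for a parameter $\theta$ after observing data $x$:

$$p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{p(x)} \propto p(x \mid \theta)\, p(\theta),$$

i.e., the posterior is proportional to the likelihood times the prior.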
Predatory Publishing Predatory (sometimes “vanity”) publishing describes a range of business practices in which publishers seek to profit, primarily by collecting article processing charges (APCs), from publishing scientific works without necessarily providing legitimate quality checks (e.g., peer review) or editorial services. In its most extreme form, predatory publishers will publish any work, so long as charges are paid. Other less extreme strategies, such as sending out high numbers of unsolicited requests for editing or publishing in fee-driven special issues, have also been characterised as predatory (Crosetto, 2021).
PREPARE Guidelines The PREPARE guidelines and checklist (Planning Research and Experimental Procedures on Animals: Recommendations for Excellence) aim to help in the planning of animal research, support adherence to the 3Rs (Replacement, Reduction, Refinement), and facilitate the reproducibility of animal research.
Preprint A publicly available version of any type of scientific manuscript/research output preceding formal publication, considered a form of Green Open Access. Preprints are usually hosted on a repository (e.g. arXiv) that facilitates dissemination by sharing research results more quickly than through traditional publication. Preprint repositories typically provide persistent identifiers (e.g. DOIs) to preprints. Preprints can be published at any point during the research cycle, but are most commonly published upon submission (i.e., before peer-review). Accepted and peer-reviewed versions of articles are also often uploaded to preprint servers, and are called postprints.
Preregistration Pledge A campaign from the project Free Our Knowledge, framed as a “collective action in support of open and reproducible research practices”, that asks researchers to commit to preregistering at least one study in the next two years (https://freeourknowledge.org/about/). The project is a grassroots movement initiated by early career researchers (ECRs).
Preregistration The practice of publishing the plan for a study, including research questions/hypotheses, research design, and data analysis plan, before the data have been collected or examined. It is also possible to preregister secondary data analyses (Merten & Krypotos, 2019). A preregistration document is time-stamped and typically registered with an independent party (e.g., a repository) so that it can be publicly shared with others (possibly after an embargo period). Preregistration provides a transparent documentation of what was planned at a certain time point, and allows third parties to assess what changes may have occurred afterwards. The more detailed a preregistration is, the better third parties can assess these changes and, with that, the validity of the performed analyses. Preregistration aims to clearly distinguish confirmatory from exploratory research.
Prior distribution Beliefs held by researchers about the parameters in a statistical model before further evidence is taken into account. A ‘prior’ is expressed as a probability distribution and can be determined in a number of ways (e.g., previous research, subjective assessment, principles such as maximising entropy given constraints), and is typically combined with the likelihood function using Bayes’ theorem to obtain a posterior distribution.
PRO (peer review openness) initiative The agreement made by several academics that they will not provide a peer review of a manuscript unless certain conditions are met. Specifically, the manuscript authors should ensure the data and materials will be made publicly available (or give a justification as to why they are not freely available or shared), provide documentation detailing how to interpret and run any files or code, and detail where these files can be located via the manuscript itself.
Pseudonymisation Pseudonymisation refers to a technique that involves replacing or removing any information that could lead to the identification of research subjects, whilst still allowing them to be re-identified through the combination of a code number and identifiers. This process comprises the following steps: removal of all identifiers from the research dataset; attribution of a specific identifier (pseudonym) for each participant and using it to label each research record; and maintenance of a cipher that links the code number to the participant in a document physically separate from the dataset. Pseudonymisation is typically a minimum requirement from ethical committees when conducting research, especially on human participants or involving confidential information, in order to ensure that data privacy is upheld.
Pseudoreplication When there is a lack of statistical independence in the data, artificially inflating the number of samples (i.e. replicates); for instance, when more than one data point is collected from the same experimental unit (e.g. participant or crop). Numerous methods can overcome this, such as averaging across replicates (e.g., taking the mean RT for a participant) or implementing mixed effects models with the random effects structure accounting for the pseudoreplication (e.g., specifying each individual RT as belonging to the same subject). Note, the former option is associated with a loss of information and statistical power.
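A minimal sketch (hypothetical data) of the averaging approach: several reaction times per participant are collapsed to one mean RT per experimental unit before analysis, so trials are no longer treated as independent samples:

```python
import pandas as pd

trials = pd.DataFrame({
    "participant": ["p1", "p1", "p1", "p2", "p2", "p3"],
    "rt_ms": [512, 498, 530, 610, 595, 470],  # hypothetical RTs
})

# One row per experimental unit removes the pseudoreplication
# (at the cost of discarding trial-level information).
per_participant = trials.groupby("participant")["rt_ms"].mean()
print(per_participant)
```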
Psychometric meta-analysis Psychometric meta-analyses aim to correct for attenuation of the effect sizes of interest due to measurement error and other artifacts by using procedures based on psychometric principles, e.g. reliability of the measures. These procedures should be implemented before using the synthesised effect sizes in correlational or experimental meta-analysis, as making these corrections tends to lead to larger and less variable effect sizes.
Public Trust in Science Trust in the knowledge, guidelines and recommendations that have been produced or provided by scientists to the benefit of civil society (Hendriks et al., 2016). This may also refer to trust in science-based recommendations on public health (e.g., universal health-care, stem cell research, federal funds for women’s reproductive rights, preventive measures for contagious diseases, and vaccination), climate change, economic policies (e.g., welfare, inequality- and poverty-control) and their intersections. The trust a member of the public has in science has been shown to be influenced by a vast number of factors such as age (Anderson et al., 2012), gender (Von Roten, 2004), rejection of scientific norms (Lewandowsky & Oberauer, 2021), political ideology (Azevedo & Jost, 2021; Brewer & Ley, 2012; Leiserowitz et al., 2010), right-wing authoritarianism and social dominance (Kerr & Wilson, 2021), education (Bak, 2001; Hayes & Tariq, 2000), income (Anderson et al., 2012), science knowledge (Evans & Durant, 1995; Nisbet et al., 2002), social media use (Huber et al., 2019), and religiosity (Azevedo, 2021; Brewer & Ley, 2013; Liu & Priest, 2009).
Publication bias (File Drawer Problem) The failure to publish results based on the “direction or strength of the study findings” (Dickersin & Min, 1993, p. 135). The bias arises when the evaluation of a study’s publishability disproportionately hinges on the outcome of the study, often with the inclination that novel and significant results are worth publishing more than replications and null results. This bias typically materializes through a disproportionate number of significant findings and inflated effect sizes. This process leads to the published scientific literature not being representative of the full extent of all research, and specifically underrepresents null findings. Such findings, in turn, land in the so-called “file drawer”, where they are never published and have no findable documentation.
Publish or Perish An aphorism describing the pressure researchers feel to publish academic manuscripts, often in high prestige academic journals, in order to have a successful academic career. This pressure to publish a high quantity of manuscripts can come at the expense of the quality of the manuscripts. This institutional pressure is exacerbated by hiring procedures and funding decisions strongly focusing on the number and impact of publications.
PubPeer A website that allows users to post anonymous peer reviews of research that has been published (i.e. post-publication peer review).
Python An interpreted general-purpose programming language, intended to be user-friendly and easily readable, originally created by Guido van Rossum in 1991. Python has an extensive library of additional features with accessible documentation for tasks ranging from data analysis to experiment creation. It is a popular programming language in data science, machine learning and web development. Similar to R Markdown, Python can be presented in an interactive online format called a Jupyter notebook, combining code, data, and text.
Qualitative research Research which uses non-numerical data, such as textual responses, images, videos or other artefacts, to explore in-depth concepts, theories, or experiences. There are a wide range of qualitative approaches, from micro-detailed exploration of language or focusing on personal subjective experiences, to those which explore macro-level social experiences and opinions.
Quantitative research Quantitative research encompasses a diverse range of methods to systematically investigate a range of phenomena via the use of numerical data which can be analysed with statistics.
Questionable Measurement Practices (QMP) Decisions researchers make that raise doubts about the validity of measures used in a study, and ultimately the study’s final conclusions (Flake & Fried, 2020). Issues arise from a lack of transparency in reporting measurement practices, a failure to address construct validity, negligence, ignorance, or deliberate misrepresentation of information.
Questionable Research Practices or Questionable Reporting Practices (QRPs) A range of activities that intentionally or unintentionally distort data in favour of a researcher’s own hypotheses – or omissions in reporting such practices – including: selective inclusion of data, hypothesising after the results are known (HARKing), and p-hacking. Popularized by John et al. (2012).
R R is a free, open-source programming language and software environment that can be used to conduct statistical analyses and plot data. R was created by Ross Ihaka and Robert Gentleman at the University of Auckland. R enables authors to share reproducible analysis scripts, which increases the transparency of a study. Often, R is used in conjunction with an integrated development environment (IDE) that simplifies working with the language, for example RStudio, Visual Studio Code, or Tinn-R.
Red Teams An approach that integrates external criticism by colleagues and peers into the research process. Red teams are based on the idea that research that is more critically and widely evaluated is more reliable. The term originates from a military practice: One group (the red team) attacks something, and another group (the blue team) defends it. The practice has been applied to open science, by giving a red team (designated critical individuals) financial incentives to find errors in or identify improvements to the materials or content of a research project (in the materials, code, writing, etc.; Coles et al., 2020).
Reflexivity The process of reflexivity refers to critically considering the knowledge that we produce through research, how it is produced, and our own role as researchers in producing this knowledge. There are different forms of reflexivity: personal reflexivity, whereby researchers consider the impact of their own personal experiences, and functional reflexivity, whereby researchers consider the way in which research tools and methods may have impacted knowledge production. Reflexivity aims to bring attention to underlying factors which may impact the research process, including development of research questions, data collection, and the analysis.
Registered Report A scientific publishing format that includes an initial round of peer review of the background and methods (study design, measurement, and analysis plan); sufficiently high quality manuscripts are accepted for in-principle acceptance (IPA) at this stage. Typically, this stage 1 review occurs before data collection, however secondary data analyses are possible in this publishing format. Following data analyses and write up of results and discussion sections, the stage 2 review assesses whether authors sufficiently followed their study plan and reported deviations from it (and remains indifferent to the results). This shifts the focus of the review to the study’s proposed research question and methodology and away from the perceived interest in the study’s results.
Registry of Research Data Repositories A global registry of research data repositories from different academic disciplines. It includes repositories that enable permanent storage of data sets, their description via metadata, and access to them by researchers, funding bodies, publishers, and scholarly institutions.
Reliability The extent to which repeated measurements lead to the same results. In psychometrics, reliability refers to the extent to which respondents have similar scores when they take a questionnaire on multiple occasions. Notably, reliability does not imply validity. Furthermore, additional types of reliability besides internal consistency exist, including test-retest reliability, parallel-forms reliability and interrater reliability.
Repeatability Synonymous with test-retest reliability. It refers to the agreement between the results of successive measurements of the same measure. Repeatability requires the same experimental tools, the same observer, the same measuring instrument administered under the same conditions, the same location, repetition over a short period of time, and the same objectives (Joint Committee for Guidelines in Metrology, 2008).
Replicability An umbrella term, used differently across fields, covering concepts of: direct and conceptual replication, computational reproducibility/replicability, generalizability analysis and robustness analyses. Some of the definitions used previously include: a different team arriving at the same results using the original author’s artifacts (Barba 2018); a study arriving at the same conclusion after collecting new data (Claerbout and Karrenbach, 1992); as well as studies for which any outcome would be considered diagnostic evidence about a claim from prior research (Nosek & Errington, 2020).
Replication Markets A replication market is an environment where users bet on the replicability of certain effects. Forecasters are incentivized to make accurate predictions and the top successful forecasters receive monetary compensation or contributorship for their bets. The rationale behind a replication market is that it leverages the collective wisdom of the scientific community to predict which effect will most likely replicate, thus encouraging researchers to channel their limited resources to replicating these effects.
RepliCATs project Short for ‘Collaborative Assessment for Trustworthy Science’. The repliCATS project’s aim is to crowdsource predictions about the reliability and replicability of published research in eight social science fields: business research, criminology, economics, education, political science, psychology, public administration, and sociology.
Reporting Guideline A reporting guideline is a “checklist, flow diagram, or structured text to guide authors in reporting a specific type of research, developed using explicit methodology.” (EQUATOR Network, n.d.). Reporting guidelines provide the minimum guidance required to ensure that research findings can be appropriately interpreted, appraised, synthesized and replicated. Their use often differs per scientific journal or publisher.
Repository An online archive for the storage of digital objects including research outputs, manuscripts, analysis code and/or data. Examples include preprint servers such as bioRxiv, MetaArXiv, PsyArXiv, institutional research repositories, data repositories that collect and store datasets such as zenodo.org and PsychData, code repositories such as GitHub, and more general repositories for all kinds of research data, such as the Open Science Framework (OSF). Digital objects stored in repositories are typically described through metadata, which enables discovery across different storage locations.
ReproducibiliTea A grassroots initiative that helps researchers create local journal clubs at their universities to discuss a range of topics relating to open research and scholarship. Each meeting usually centres around a specific paper that discusses, for example, reproducibility, research practice, research quality, social justice and inclusion, and ideas for improving science.
Reproducibility crisis (aka Replicability or replication crisis) The finding, and related shift in academic culture and thinking, that a large proportion of scientific studies published across disciplines do not replicate (e.g. Open Science Collaboration, 2015). This is considered to be due to a lack of quality and integrity of research and publication practices, such as publication bias, QRPs and a lack of transparency, leading to an inflated rate of false positive results. Others have described this process as a ‘Credibility revolution’ towards improving these practices.
Reproducibility Network A reproducibility network is a consortium of open research working groups, often peer-led. The groups operate on a wheel-and-spoke model across a particular country, in which the network connects local cross-disciplinary researchers, groups, and institutions with a central steering group, who also connect with external stakeholders in the research ecosystem. The goals of reproducibility networks include: advocating for greater awareness, promoting training activities, and disseminating best practices at grassroots, institutional, and research ecosystem levels. Such networks exist in the UK, Germany, Switzerland, Slovakia, and Australia (as of March 2021).
Reproducibility A minimum standard on a spectrum of activities (“reproducibility spectrum”) for assessing the value or accuracy of scientific claims based on the original methods, data, and code. For instance, where the original researcher’s data and computer codes are used to regenerate the results (Barba, 2018), often referred to as computational reproducibility. Reproducibility does not guarantee the quality, correctness, or validity of the published results (Peng, 2011). In some fields, this meaning is, instead, associated with the term “replicability” or ‘repeatability’.
Research Contribution Metric (p) Type of semantometric measure assessing the similarity of publications connected in a citation network. This method uses a simple formula to assess authors’ contributions: the contribution of publication p can be estimated based on the semantic distance from the publications cited by p to the publications citing p.
Research Cycle Describes the circular process of conducting scientific research, with “researchers working at various stages of inquiry, from more tentative and exploratory investigations to the testing of more definitive and well-supported claims” (Lieberman, 2020, p. 42). The cycle includes literature research and hypothesis generation, data collection and analysis, as well as dissemination of results (e.g. through publication in peer-reviewed journals), which again informs theory and new hypotheses/research.
Research Data Management Research Data Management (RDM) is a broad concept that includes processes undertaken to create organized, documented, accessible, and reusable quality research data. Adequate research data management provides many benefits including, but not limited to, reduced likelihood of data loss, greater visibility and collaborations due to data sharing, demonstration of research integrity and accountability.
Research integrity Research integrity is defined by a set of good research practices based on fundamental principles: honesty, reliability, respect and accountability (ALLEA, 2017). Good research practices —which are based on fundamental principles of research integrity and should guide researchers in their work as well as in their engagement with the practical, ethical and intellectual challenges inherent in research— refer to areas such as: research environment (e.g., research institutions and organisations promote awareness and ensure a prevailing culture of research integrity), training, supervision and mentoring (e.g., research institutions and organisations develop appropriate and adequate training in ethics and research integrity to ensure that all concerned are made aware of the relevant codes and regulations), research procedures (e.g., researchers report their results in a way that is compatible with the standards of the discipline and, where applicable, can be verified and reproduced), safeguards (e.g., researchers have due regard for the health, safety and welfare of the community, of collaborators and others connected with their research), data practices and management (e.g., researchers, research institutions and organisations provide transparency about how to access or make use of their data and research materials), collaborative working, publication and dissemination (e.g., authors and publishers consider negative results to be as valid as positive findings for publication and dissemination), reviewing, evaluating and editing (e.g., researchers review and evaluate submissions for publication, funding, appointment, promotion or reward in a transparent and justifiable manner).
Research Protocol A detailed document prepared before conducting a study, often written as part of ethics and funding applications. The protocol should include information relating to the background, rationale and aims of the study, as well as hypotheses that reflect the researchers’ expectations. The protocol should also provide a “recipe” for conducting the study, including methodological details and clear analysis plans. Best-practice guidelines for creating a study protocol should be used where they exist for specific methodologies and fields. Research protocols can be publicly shared to attract new collaborators or to facilitate efficient collaboration across labs (e.g. https://www.protocols.io/). In medical and educational fields, protocols are often a separate article type suitable for publication in journals. Where protocol sharing or publication is not common practice, researchers can opt for preregistration instead.
Research workflow The process of conducting research from conceptualisation to dissemination. A typical workflow starts with conceptualisation, i.e. identifying a research question and designing a study. After study design, researchers gain ethical approval (if necessary) and may decide to preregister the final version of their study plan. Researchers then collect and analyse their data. Finally, the process ends with dissemination, moving between pre-print and post-print stages as the manuscript is submitted to a journal.
Researcher degrees of freedom Refers to the flexibility often inherent in the scientific process, from hypothesis generation and the design and conduct of a study to processing, analysing, interpreting and reporting the data. Because theories are often imprecisely defined and/or empirical evidence is lacking, multiple decisions are frequently equally justifiable. The term is sometimes used to refer to the opportunistic (ab)use of this flexibility to achieve desired results, e.g. when including or excluding certain data, even though, strictly speaking, the term is not inherently value-laden.
Responsible Research and Innovation An approach that considers societal implications and expectations, relating to research and innovation, with the aim to foster inclusivity and sustainability. It accounts for the fact that scientific endeavours are not isolated from their wider effects and that research is motivated by factors beyond the pursuit of knowledge. As such, many parties are important in fostering responsible research, including funding bodies, research teams, stakeholders, activists, and members of the public.
Reverse p-hacking Exploiting researcher degrees of freedom during statistical analysis in order to increase the likelihood of retaining the null hypothesis (e.g., obtaining p > .05).
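As an illustration, here is a toy Python simulation on assumed data; the “justifiable” exclusion rules are hypothetical examples of the analytic flexibility a reverse p-hacker might exploit to report a non-significant result.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(0.4, 1, 40)  # treatment group with a genuine effect
b = rng.normal(0.0, 1, 40)  # control group

# Candidate pipelines, each defensible in isolation.
pipelines = {
    "all data": lambda x: x,
    "trim extremes": lambda x: np.sort(x)[2:-2],
    "drop > 2 SD": lambda x: x[np.abs(x - x.mean()) < 2 * x.std()],
}

for name, clean in pipelines.items():
    p = stats.ttest_ind(clean(a), clean(b)).pvalue
    print(f"{name}: p = {p:.3f}")
# A reverse p-hacker reports only whichever pipeline yields p > .05.
```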
RIOT Science Club The RIOT Science Club is a multi-site seminar series that raises awareness and provides training in Reproducible, Interpretable, Open & Transparent science practices. It provides regular talks, workshops and conferences, all of which are openly available and rewatchable on the respective locations’ websites and YouTube.
Robustness (analyses) The persistence of support for a hypothesis under perturbations of the methodological/analytical pipeline. In other words, applying different methods/analysis pipelines to examine whether the same conclusion is supported under different analytical conditions, as in the sketch below.
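A minimal sketch of a robustness check on simulated data, assuming an OLS effect estimate re-computed under perturbed pipelines; the covariate set and outlier rule here are illustrative choices, not a prescribed recipe.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 200
x = rng.normal(size=n)
z = rng.normal(size=n)                      # a plausible covariate
y = 0.5 * x + 0.3 * z + rng.normal(size=n)  # simulated outcome

# Perturbation 1: with and without the covariate.
for label, cols in {"y ~ x": [x], "y ~ x + z": [x, z]}.items():
    fit = sm.OLS(y, sm.add_constant(np.column_stack(cols))).fit()
    print(f"{label}: beta_x = {fit.params[1]:.2f}, p = {fit.pvalues[1]:.4f}")

# Perturbation 2: an alternative outlier rule.
keep = np.abs(x) < 2
fit = sm.OLS(y[keep], sm.add_constant(x[keep])).fit()
print(f"y ~ x, |x| < 2: beta_x = {fit.params[1]:.2f}, p = {fit.pvalues[1]:.4f}")
```

If the estimate and the conclusion persist across these perturbations, the finding is robust to them.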
Salami slicing A questionable research/reporting practice, often applied post hoc, of increasing the number of publishable manuscripts by ‘slicing’ up the data from a single study; one example of ‘gaming the system’ of academic incentives. For instance, this may involve publishing multiple studies based on a single dataset, or publishing multiple studies from different data collection sites without transparently stating where the data originally derive from. Such practices distort the literature, and particularly meta-analyses, because it is not apparent that the findings were obtained from the same dataset, thereby concealing the dependencies across the separately published papers.
Scooping The act of reporting or publishing a novel finding before another researcher/team. Survey-based research indicates that fear of being scooped is an important barrier to data sharing in psychology, and agent-based models suggest that competition for priority harms scientific reliability (Tiokhin et al., 2021).
Semantometrics A class of metrics for evaluating research that uses the full text of publications to measure their semantic similarity and highlight an article’s contribution to the progress of scholarly discussion. It is an extension of tools such as bibliometrics, webometrics, and altmetrics.
Sensitive research Research that poses a potential threat to those who are or have been involved in it, including the researchers, the participants, and the wider society. This threat can be physical danger (e.g. suicide) or a negative emotional response (e.g. depression) in those involved in the research process. For instance, in research on victims of suicide, the researcher might be emotionally traumatised by descriptions of suicidal behaviours, and the communication itself might lead participants to re-experience traumatic memories, triggering negative psychological responses.
Sequence-determines-credit approach (SDC) An authorship system that assigns authorship order based on the contribution of each author: names are listed in descending order of contribution, with the author who contributed most listed first and the author who contributed least listed last.
Sherpa Romeo An online resource that collects and presents open access policies from publishers across the world, providing summaries of individual journals’ copyright and open access archiving policies.
Single-blind peer review Evaluation of research products by qualified experts where the reviewer(s) knows the identity of the author(s), but the reviewer(s) remains anonymous to the author(s).
Slow science A movement that opposes publish-or-perish culture and describes an academic system allowing the time and resources to produce fewer, higher-quality and more transparent outputs. Adopting open scholarship practices lengthens the research process overall, with more focus on transparency, reproducibility, replicability and quality over the quantity of outputs; slow science prioritises researcher time for collecting more data, reading the literature, thinking about how findings fit that literature, and documenting and sharing research materials, rather than running additional studies.
Social class Social class is usually measured using both objective and subjective measurements, as recommended by the American Psychological Association (American Psychological Association, Task Force on Socioeconomic Status, 2007). Unlike the conventional concept, which considers only a single factor such as education or income (e.g., economic variables), an individual’s social class is considered to be a combination of their education, income, occupational prestige, subjective social status, and self-identified social class; an individual may have a high socio-economic status yet identify as working class. Social class is partly a cultural variable, as it is stable and likely to change only slowly over the years. Social class can have important implications for academic outcomes. Working-class students tend to have different life circumstances and often more restrictive commitments than middle-class students, which makes their integration with other students more difficult (Rubin, 2021). A lack of time and money is obstructive to their social experience at university: working-class students are more likely to work to support themselves, resulting in less time for academic activities and for socialising with other students, as well as less money to purchase items linked to social experiences (e.g. food).
Social integration Social integration is a multi-dimensional construct. In an academic context, it relates to the quantity and quality of social interactions with staff and students, as well as the sense of connection and belonging to the university and the people within the institution. More specifically, social support, trust, and connectedness are all variables that contribute to social integration. Social integration has important implications for academic outcomes and mental wellbeing (Evans & Rubin, 2021). Working-class students are less likely to integrate with other students, since they have differing social and economic backgrounds and less disposable income; thus they are not able to experience as many educational and fiscal opportunities as others. In turn, this can lead to poor mental health and feelings of ostracism (Rubin, 2021).
Society for Open, Reliable, and Transparent Ecology and Evolutionary biology (SORTEE) SORTEE (https://www.sortee.org/) is an international society with the aim of improving the transparency and reliability of research results in the fields of ecology, evolution, and related disciplines through cultural and institutional changes. SORTEE was launched in December 2020 and is open to anyone interested in improving research in these disciplines, regardless of experience. The society is international in scope, membership, and objectives. As of May 2021, SORTEE comprises over 600 members.
Society for the Improvement of Psychological Science (SIPS) A membership society founded to promote improved methods and practices in psychological research. The society pursues its mission by enhancing the training of psychological researchers; by promoting research cultures that are more conducive to high-quality research; by quantifying and empirically assessing the impact of such reforms; and by leading outreach events within and outside psychology to improve the current state of research norms.
Specification Curve Analysis An analytic approach that consists of identifying, calculating, visualising and interpreting results (through inferential statistics) for all reasonable specifications of a particular research question (see Simonsohn et al. 2015). Specification curve analysis makes transparent the influence of presumably arbitrary decisions made by the researcher during the scientific process (e.g., experimental design, construct operationalisation, statistical models, or several of these) by comprehensively reporting all non-redundant, sensible tests of the research question; a minimal sketch is given below. Voracek et al. (2019) suggest that SCA differs from multiverse analysis with regard to the graphical displays (a specification curve plot rather than a histogram and tile plot) and the use of inferential statistics to interpret findings.
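The sketch below runs a specification curve on simulated data, assuming just two analytic decisions (covariate inclusion and an outlier cut-off); real applications enumerate many more specifications and add inferential statistics on top of the curve.

```python
from itertools import product

import matplotlib.pyplot as plt
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=n)
z = rng.normal(size=n)
y = 0.3 * x + 0.2 * z + rng.normal(size=n)

# Enumerate all combinations of the analytic decisions.
estimates = []
for use_covariate, cutoff in product([False, True], [None, 2.5, 2.0]):
    keep = np.ones(n, bool) if cutoff is None else np.abs(x) < cutoff
    X = np.column_stack([x, z] if use_covariate else [x])
    fit = sm.OLS(y[keep], sm.add_constant(X[keep])).fit()
    estimates.append(fit.params[1])

# The specification curve: effect estimates sorted across specifications.
plt.plot(sorted(estimates), "o")
plt.xlabel("specification (sorted)")
plt.ylabel("estimated effect of x")
plt.show()
```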
Statistical Assumptions Analytical approaches and models assume certain characteristics of one’s data (e.g., statistical independence, random sampling, normality, equal variance). Before running an analysis, these assumptions should be checked, since their violation can change the results and conclusions of a study. Good practice in open and reproducible science is to report assumption testing, i.e. which assumptions were checked, the results of those checks, and any corrections applied.
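For instance, a minimal sketch of checking and reporting assumptions before a two-sample t-test on simulated data; the Shapiro-Wilk and Levene tests used here are common choices, not the only ones.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(0.0, 1, 50)
b = rng.normal(0.5, 1, 50)

# Normality within each group (Shapiro-Wilk).
_, p_norm_a = stats.shapiro(a)
_, p_norm_b = stats.shapiro(b)
print(f"normality p-values: {p_norm_a:.3f}, {p_norm_b:.3f}")

# Homogeneity of variance across groups (Levene).
_, p_var = stats.levene(a, b)
print(f"equal-variance p-value: {p_var:.3f}")

# If variances look unequal, a common correction is Welch's t-test.
_, p = stats.ttest_ind(a, b, equal_var=False)
print(f"Welch t-test p-value: {p:.4f}")
```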
Statistical power Statistical power is the long-run probability that a statistical test correctly rejects the null hypothesis if the alternative hypothesis is true. It ranges from 0 to 1, but is often expressed as a percentage. Power can be estimated from the significance criterion (alpha), the effect size, and the sample size used for a specific analysis technique. There are two main applications of statistical power: a priori power analysis, where the researcher asks “given an effect size, how many participants would I need for X% power?”, and sensitivity power analysis, which asks “given a known sample size, what effect size could I detect with X% power?”.
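Both applications can be sketched with a power-analysis routine; here is a minimal example assuming an independent-samples t-test, using the statsmodels power module (the effect size, alpha and power values are arbitrary illustrations).

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# A priori: sample size per group for d = 0.5, alpha = .05, 80% power.
n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"required n per group: {n:.1f}")

# Sensitivity: smallest effect detectable with n = 40 per group at 80% power.
d = analysis.solve_power(nobs1=40, alpha=0.05, power=0.8)
print(f"detectable effect size: {d:.2f}")
```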
Statistical significance A property of a result obtained using Null Hypothesis Significance Testing (NHST): given a significance level, the result is deemed unlikely to have occurred under the null hypothesis. Tenny and Abdelgawad (2017) defined it as “a measure of the probability of obtaining your data or more extreme data assuming the null hypothesis is true, compared to a pre-selected acceptable level of uncertainty regarding the true answer” (p. 1). Conventions for determining the threshold vary between applications and disciplines but ultimately depend on the researcher’s considerations about an appropriate error margin. The American Statistical Association’s statement (Wasserstein & Lazar, 2016) notes that “Researchers often wish to turn a p-value into a statement about the truth of a null hypothesis, or about the probability that random chance produced the observed data. The p-value is neither. It is a statement about data in relation to a specified hypothetical explanation, and is not a statement about the explanation itself” (p. 131).
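A minimal NHST sketch on simulated data, comparing the obtained p-value to a pre-selected significance level (alpha = .05 here purely by convention):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
a = rng.normal(0.6, 1, 30)
b = rng.normal(0.0, 1, 30)

alpha = 0.05
result = stats.ttest_ind(a, b)
print(f"p = {result.pvalue:.4f}; significant at alpha = {alpha}: {result.pvalue < alpha}")
```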
Statistical validity The extent to which conclusions from a statistical test are accurate and reflective of the true effect found in nature; in other words, whether a relationship between two variables exists and can be accurately detected with the conducted analyses. Threats to statistical validity include low power, violation of assumptions, and unreliable measures, all of which affect the reliability and generality of the conclusions.
STRANGE The STRANGE “framework” is a proposal and series of questions to help animal behaviour researchers consider sampling biases when planning, performing and interpreting research with animals. STRANGE is an acronym highlighting several possible sources of sampling bias in animal research: the animals’ Social background; Trappability and self-selection; Rearing history; Acclimation and habituation; Natural changes in responsiveness; Genetic make-up; and Experience.
StudySwap A free online platform through which researchers post brief descriptions of research projects or resources that are available for use (“haves”) or that they require and another researcher may have (“needs”). StudySwap is a crowdsourcing approach to research which can ensure that fewer research resources go unused and more researchers have access to the resources they need.
Systematic Review A form of literature review and evidence synthesis. A systematic review will usually include a thorough, repeatable (reproducible) search strategy including key terms and databases in order to find relevant literature on a given topic or research question. Systematic reviewers follow a process of screening the papers found through their search, until they have filtered down to a set of papers that fit their predefined inclusion criteria. These papers can then be synthesised in a written review, which may optionally include statistical synthesis in the form of a meta-analysis as well. A systematic review should follow a standard set of guidelines to ensure that bias is kept to a minimum, for example PRISMA (Moher et al., 2009; Page et al., 2021), Cochrane Systematic Reviews (Higgins et al., 2019), or NIRO-SR (Topor et al., 2021).
Tenzing tenzing is an online web app and R package that helps researchers track and report the contributions of each team member using the CRediT taxonomy in an efficient way. Team members of a research project can indicate their contributions to each CRediT role using an online spreadsheet template, and provide additional author information (e.g., name, affiliation, order in the publication, email address, and ORCID iD). When writing the manuscript, tenzing can automatically create a list of contributors for each CRediT role to be included in the contributions section, and can create the manuscript’s title page.
The Troubling Trio Described as a combination of low statistical power, a surprising result, and a p-value only slightly lower than .05.
Theory building The process of creating and developing a statement of concepts and their interrelationships to show how and/or why a phenomenon occurs. Theory building leads to theory testing.
Theory A theory is a unifying explanation or description of a process or phenomenon, which is amenable to repeated testing and verifiable through scientific investigation, using various experiments led by several independent researchers. A theory may be rejected or deemed an unsatisfactory explanation of a phenomenon when rigorous testing supports a new hypothesis that explains the phenomenon better, or that contradicts the theory while generalising to a wider array of findings.
Transparency Checklist The transparency checklist is a consensus-based, comprehensive checklist of 36 items that cover preregistration; methods; results and discussion; and data, code and materials availability. A shortened 12-item version of the checklist is also available. Checklist responses can be submitted alongside a manuscript for review. While the checklist can also serve educational purposes, it mainly aims to support researchers in identifying concrete actions that can increase the transparency of their research, while a disclosed checklist helps readers and reviewers gain critical information about different aspects of the transparency of the submitted research.
Transparency Having one’s actions open and accessible for external evaluation. Transparency pertains to researchers being honest about theoretical, methodological, and analytical decisions made throughout the research cycle. Transparency can be usefully differentiated into “scientifically relevant transparency” and “socially relevant transparency”. While the former has been the focus of early Open Science discourses, the latter is needed to provide scientific information in ways that are relevant to decision makers and members of the public (Elliott & Resnik, 2019).
Triple-blind peer review Evaluation of research products by qualified experts where the author(s) are anonymous to both the reviewer(s) and editor(s). “Blinding of the authors and their affiliations to both editors and reviewers. This approach aims to eliminate institutional, personal, and gender biases” (Tvina et al., 2019, p. 1082).
TRUST Principles A set of guiding principles that consider Transparency, Responsibility, User focus, Sustainability, and Technology (TRUST) as the essential components for assessing, developing, and sustaining the trustworthiness of digital data repositories (especially those that store research data). They are complementary to the FAIR Data Principles.
Type I error “Incorrect rejection of a null hypothesis” (Simmons et al., 2011, p. 1359), i.e. finding evidence for an effect when the evidence actually favours retaining the null hypothesis that there is no effect (for example, a judge imprisoning an innocent person). In other words, concluding that there is a significant effect and rejecting the null hypothesis when the findings actually occurred by chance.
Type II error A false negative result occurs when the alternative hypothesis is true in the population but the null hypothesis is accepted as part of the analysis (Hartgerink et al., 2017). That is, finding a non-significant statistical result when the effect actually exists (for example, a judge acquitting a guilty person). False negatives are less likely to be the subject of replications than positive results (Fiedler et al., 2012), and remain an unresolved issue in scientific research (Hartgerink et al., 2017).
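Both error rates can be illustrated with a toy simulation, assuming normal data, alpha = .05, and an arbitrary true effect of d = 0.5 with n = 30 per group; the Type I rate should approximate alpha, and the Type II rate equals one minus power.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, sims = 0.05, 30, 2000

def rejection_rate(true_effect):
    """Proportion of simulated t-tests that reject the null hypothesis."""
    rejections = 0
    for _ in range(sims):
        a = rng.normal(true_effect, 1, n)
        b = rng.normal(0.0, 1, n)
        rejections += stats.ttest_ind(a, b).pvalue < alpha
    return rejections / sims

print(f"Type I error rate (no effect): {rejection_rate(0.0):.3f}")     # ~ alpha
print(f"Type II error rate (d = 0.5): {1 - rejection_rate(0.5):.3f}")  # 1 - power
```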
Type M error A Type M error occurs when a researcher concludes that the magnitude of an observed effect is smaller or larger than it really is. For example, a Type M error occurs when a researcher claims that an effect of small magnitude was observed when the true effect is large, or vice versa.
Type S error A Type S error occurs when a researcher concludes that an observed effect has the opposite sign to the real one. For example, a Type S error occurs when a researcher claims that a positive effect was observed when the true effect is negative, or vice versa.
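Both error types can be illustrated with a toy simulation in the spirit of design analysis: with a small true effect and a small sample, the statistically significant estimates exaggerate the true magnitude (Type M) and occasionally carry the wrong sign (Type S). The effect size and sample size below are arbitrary choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect, n, sims = 0.1, 20, 5000

# Keep only the mean differences that reached p < .05.
significant = []
for _ in range(sims):
    a = rng.normal(true_effect, 1, n)
    b = rng.normal(0.0, 1, n)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        significant.append(a.mean() - b.mean())

sig = np.array(significant)
print(f"exaggeration ratio (Type M): {np.abs(sig).mean() / true_effect:.1f}")
print(f"sign error rate (Type S): {(sig < 0).mean():.2f}")
```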
Under-representation The situation in which not all voices, perspectives, and members of the community are adequately represented. Under-representation typically occurs when the voices or perspectives of one group dominate, resulting in the marginalization of another. This often affects groups who are in a minority with respect to certain personal characteristics.
Universal design for learning (UDL) A framework for improving learning and optimising teaching based upon scientific insights into how humans learn. It aims to make learning inclusive and transformative for all people, with a focus on catering to the differing needs of different students. It is often regarded as an evidence-based and scientifically valid framework to guide educational practice, consisting of three key principles: engagement, representation, and action and expression. In addition, UDL is included in the Higher Education Opportunity Act of 2008 (Edyburn, 2010).
Validity Validity refers to the application of statistical principles to arrive at well-founded concepts, conclusions or measurements, i.e. ones likely to correspond accurately to the real world. In psychometrics, validity refers to the extent to which something measures what it intends or claims to measure. Under this generic term there are different types of validity (e.g., internal validity, construct validity, face validity, criterion validity, diagnostic validity, discriminant validity, concurrent validity, convergent validity, predictive validity, external validity).
Version control The practice of managing and recording changes to digital resources (e.g. files, websites, programmes, etc.) over time so that you can recall specific versions later. Version control systems are designed to record the history of changes (who, what and when), and help to avoid human errors (e.g. working on the wrong version). For example, the Git version control system is a widely used software tool that originally helped software developers to version control shared code and is now used across many scientific disciplines to manage and share files.
Webometrics The study of online content, focusing on the numbers and types of hyperlinks between different online sites. Such approaches have been considered a type of altmetrics. “The study of the quantitative aspects of the construction and use of information resources, structures and technologies on the Web drawing on bibliometric and informetric approaches” (Björneborn & Ingwersen, 2004).
WEIRD This acronym refers to Western, Educated, Industrialized, Rich and Democratic societies. Most research is conducted on, and conducted by, relatively homogeneous samples from WEIRD societies. This limits the generalizability of a large number of research findings, particularly given that WEIRD people are often psychological outliers. It has been argued that “WEIRD psychology” started to evolve culturally as a result of societal changes and religious beliefs in the Middle Ages in Europe. Critics of this term suggest it presents a binary view of the global population and erases variation that exists both between and within societies, and that other aspects of diversity are not captured.
Z-Curve A statistical approach mainly used to obtain the ‘Estimated Replication Rate’ (ERR) and ‘Expected Discovery Rate’ (EDR) for a set of reported studies. Calculating a z-curve for a set of statistically significant studies involves converting the reported p-values to z-scores, fitting a finite mixture model to the distribution of those z-scores, and estimating mean power from the mixture model. Z-curve analysis can be performed in R through a dedicated package: https://cran.r-project.org/web/packages/zcurve/index.html.
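The first step, converting two-sided p-values to z-scores, is simple to sketch; the mixture-model fitting and the ERR/EDR estimation are best left to the dedicated zcurve package. The p-values below are hypothetical inputs.

```python
import numpy as np
from scipy import stats

p_values = np.array([0.001, 0.004, 0.012, 0.030, 0.049])  # hypothetical significant results
z_scores = stats.norm.ppf(1 - p_values / 2)                # two-sided p to absolute z
print(z_scores)
# A finite mixture model fitted to these z-scores then yields mean power, ERR and EDR.
```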
Zenodo An open science repository where researchers can deposit research papers, reports, data sets, research software, and any other research-related digital artifacts. Zenodo creates a persistent digital object identifier (DOI) for each submission to make it citable. The platform was developed under the European OpenAIRE programme and is operated by CERN.