Continuous publication
Qualis CAPES 2017-2020 quadrennium: B1 in Medicine I, II and III, and Collective Health
Online ISSN: 1806-9804
Print ISSN: 1519-3829

All journal content, except where otherwise identified, is licensed under a Creative Commons license.

SPECIAL ARTICLES


Open access | Peer reviewed

The New Biostatistics Required by Medical Journals (2015–2025) and Its Impact on Research and Scientific Publishing

Melania Maria Ramos Amorim1,2; Alex Sandro Rolland Souza3,4,5; João Lucas Brito Freitas3,6; Emídio Cavalcanti de Albuquerque3; Anna Catharina Carneiro da Cunha3

DOI: 10.1590/1806-9304202620250451 e20250451

ABSTRACT

From 2015 to 2025, a profound shift took hold as medical journals progressively adopted stricter methodological and statistical standards, redefining expectations about study design, transparency and reporting. Driven by the reproducibility crisis, the critique of the ritualistic use of p-values and the expansion of Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network guidelines, this movement led to the emergence of a "new biostatistics" - one that integrates statistical reasoning throughout the research cycle rather than restricting it to data analysis. Formerly peripheral elements, such as precise outcome definitions, justified sample size calculations, transparent handling of missing data, multiplicity control, pre-specified statistical analysis plans and structured risk-of-bias assessments, have become central editorial requirements. Open-science policies, study registration and data-sharing statements further amplify expectations regarding transparency and reproducibility. In Latin America, such demands encounter structural challenges but also create opportunities to strengthen scientific integrity. This article outlines the conceptual and normative pillars shaping this "new biostatistics", examines its practical consequences for study design, analysis and publication, and discusses implications for researchers, statisticians, editors and graduate programs.

Keywords: Biostatistics, Statistics as topic, Reproducibility of results

RESUMO

Entre 2015 e 2025, consolidou-se uma inflexão profunda na forma como revistas médicas avaliam a qualidade metodológica e o rigor estatístico dos manuscritos submetidos. Esse movimento, impulsionado pela crise de reprodutibilidade, pela crítica ao uso ritualístico do valor de p e pela expansão das diretrizes da Enhancing the QUAlity and Transparency Of health Research (EQUATOR) Network, levou à emergência de uma "nova bioestatística", que desloca o foco da mera execução de testes para a integração da estatística em todo o ciclo da pesquisa. Elementos antes periféricos como a definição rigorosa de desfechos, tamanho amostral justificado, manejo transparente de dados ausentes, controle de multiplicidade, planos de análise estatística pré-especificados e avaliação de risco de viés tornaram-se critérios centrais de editoração. Políticas editoriais de ciência aberta, registro de estudos e compartilhamento de dados ampliaram ainda mais as expectativas em relação à transparência e reprodutibilidade. No contexto latino-americano, essas exigências encontram desafios estruturais importantes, mas também oportunidades para fortalecer a integridade científica. Este artigo descreve os pilares conceituais e normativos que moldam a "nova bioestatística", analisa seus impactos concretos sobre desenho, análise e publicação de estudos e discute implicações para pesquisadores, estatísticos, editores e programas de pós-graduação.

Palavras-chave: Bioestatística, Estatística como assunto, Reprodutibilidade dos testes

Introduction: What constitutes the "new" about biostatistics in medical journals?

The notion that statistics is central to medicine is not new. Since the consolidation of evidence-based medicine, and the routine adoption of randomized clinical trials, biostatistics has assumed a prominent role in the production and validation of medical knowledge.1,2 However, until recently, the statistician's involvement was frequently restricted to the final analytical stage – at times consisting of a mere "signing off" on a significance test.

Between 2015 and 2025, this arrangement has become increasingly incompatible with the requirements of leading medical journals, both general and specialty-specific. Four converging trends help explain this shift:
1. The reproducibility crisis and metascience. Landmark publications drew attention to the high rate of non-reproducible results and to flaws in study design, analysis, and reporting in biomedical and psychological studies.3-5
2. Critique of the ritualistic use of p-values. The 2016 American Statistical Association (ASA) statement and subsequent debates questioned the role of a fixed threshold (p<0.05) as an absolute arbiter between "truth" and "falsehood," stressing the importance of estimates, confidence intervals, context, and scientific plausibility.6,7
3. Expansion and institutionalization of reporting guidelines. Spearheaded by the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) network, a burgeoning ecosystem of checklists tailored to various study designs – CONsolidated Standards Of Reporting Trials (CONSORT), for clinical trials; Strengthening the Reporting of Observational Studies in Epidemiology (STROBE), for observational studies; Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA), for systematic reviews; Transparent Reporting of a multivariable prediction model for Individual Prognosis OR Diagnosis (TRIPOD)/TRIPOD + Artificial Intelligence (AI) for predictive models; and the Prediction model Risk Of Bias Assessment Tool (PROBAST), for the assessment of risk of bias and applicability of predictive models, among others.2,8-12
4. The transparency, open data, and open science agenda. Driven by editorials and recommendations from committees such as the International Committee of Medical Journal Editors (ICMJE), there is mounting pressure for prospective study registration, publication of protocols, statistical analysis plans (Statistical Analysis Plan – SAP), and sharing of individual participant data.13-17

These vectors converge on a scenario in which biostatistics moves beyond being merely a set of techniques for "analyzing the data at hand" and becomes a normative quality framework, integrated into the design, registration, conduct, analysis, reporting, and reuse of evidence.

In this article, we term this rearrangement the "new biostatistics" required by medical journals. It is not a new discipline, but a shift of emphasis: from statistics as isolated computation to biostatistics as a common language among clinicians, epidemiologists, reviewers, and editors, mediated by robust editorial guidelines and policies.

From significance-based statistics to the biostatistics of reproducibility

The legacy of critique: why "old" biostatistics is insufficient?

The Ioannidis critique (2005) – "Why most published research findings are false" – synthesized a range of concerns regarding low statistical power, multiple testing, publication bias and analytical flexibility ("p-hacking") in biomedical research. Subsequent investigations, such as Simmons et al.4 in experimental psychology, empirically demonstrated how small analytical decisions, often unreported, may substantially inflate the false-positive rate.
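The arithmetic behind this inflation is simple to illustrate. A minimal sketch (the numbers of analyses are illustrative, not drawn from the cited studies) of the family-wise false-positive probability under k independent looks at null data, each tested at α = 0.05:

```python
# Probability of at least one "significant" result when there is no true effect,
# for k independent analyses each conducted at alpha = 0.05.
alpha = 0.05
family_wise = {k: 1 - (1 - alpha) ** k for k in (1, 5, 20)}
for k, p in family_wise.items():
    print(f"{k} analyses -> P(at least one false positive) = {p:.3f}")
# 1 -> 0.050, 5 -> 0.226, 20 -> 0.642
```

Five undisclosed analytical forks already push the effective false-positive rate above 20%, which is the core of the Simmons et al. argument.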

The Reproducibility Project: Psychology, which involved the replication of 100 studies, reinforced the perception that significant results in high-impact journals do not guarantee robust effects.5 Although this project stems from another field, its methodological implications resonate deeply within both clinical medicine and epidemiology.

The ASA statement and the debate over "p<0.05"

In 2016, the ASA published a formal statement regarding the use of p-values, emphasizing that they:6

1. Do not measure the probability that the null hypothesis is true;
2. Do not quantify effect size or the importance of a finding;
3. Can be easily distorted by research bias and flexible analytical practices.

Shortly thereafter, Benjamin et al.7 provocatively proposed reducing the standard significance threshold from 0.05 to 0.005 for studies claiming novel discoveries, arguing that this could reduce the proportion of false-positive findings in fields with low signal-to-noise ratios.

Whether or not this proposal is accepted, it succeeded in shifting the focus away from "p<0.05" as an editorial fetish toward a broader framework, in which effect sizes, confidence intervals, clinical plausibility, and cumulative evidence gain relevance in the evaluation of manuscripts.
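The quantitative core of the Benjamin et al. argument can be sketched as follows; the power and prior-probability values below are illustrative assumptions, not figures from the paper. The positive predictive value (PPV) of a "significant" finding depends on α, statistical power, and the prior probability that the tested hypothesis is true:

```python
# PPV of a significant result: share of significant findings that reflect true effects.
# power and prior are illustrative (0.80 power, 10% of tested hypotheses true).
def ppv(alpha, power=0.80, prior=0.10):
    true_pos = power * prior          # true hypotheses correctly detected
    false_pos = alpha * (1 - prior)   # null hypotheses wrongly "detected"
    return true_pos / (true_pos + false_pos)

print(round(ppv(0.05), 2), round(ppv(0.005), 2))  # 0.64 vs 0.95
```

Under these assumptions, lowering the threshold from 0.05 to 0.005 cuts the false-discovery proportion among "significant" results from roughly one third to about 5%.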

Reproducibility, transparency, and the culture of science

Beginning in 2015, a burgeoning body of literature emerged on how to reform scientific practices to enhance reproducibility. Nosek et al.13 discussed how author guidelines, open-data policies, and editorial incentives can promote transparency. Munafò et al.18 proposed a "manifesto for reproducible science", listing measures related to methods, dissemination, incentives, and evaluation.

In medicine, the discussion concerning transparency is linked to the mandatory registration of clinical trials and, more recently, to requirements for data sharing and statistical analysis plans in both trial registries and the manuscripts themselves.15-17 This convergence of critiques and proposals redefines the "minimum threshold" of statistical acceptability for contemporary medical journals.

Reporting guidelines and editorial policies: the grammar of the "new biostatistics"

The EQUATOR Network and the proliferation of checklists

The EQUATOR Network established a comprehensive repository of reporting guidelines for various study designs and has taken a leading role in promoting transparency in health research.8 Among the key guidelines for contemporary biostatistics, the following are particularly noteworthy:

• CONSORT 2010 for randomized trials;2
• STROBE for observational studies;9
• PRISMA for systematic reviews and meta-analyses;10
• TRIPOD (2015) and TRIPOD+AI (2024) for diagnostic or prognostic prediction models, including those based on machine learning;11,12
• PROBAST (2019) for risk-of-bias assessment and applicability of predictive models.19

Such guidelines are not "statistical" in the strict sense, yet they contain multiple items that hinge on sound statistical decisions: definition of primary and secondary outcomes, sample size justification, description of analytical methods, handling of missing data, assessment of statistical assumptions, presentation of estimates and measures of uncertainty, and subgroup and sensitivity analyses, among others.

SAMPL and the specifications for statistical reporting

A specific landmark of the "new biostatistics" is the set of SAMPL (Statistical Analyses and Methods in the Published Literature) recommendations, by Lang and Altman,20 which outline, in accessible language, what should be reported for each basic statistical method.

Since 2015, these recommendations have been incorporated, formally or informally, into the author guidelines and peer-review processes of various biomedical journals. Recent studies suggest, however, that their adoption is still partial, and that systematic use of SAMPL in statistical peer review improves the clarity and completeness of manuscripts.21

In practice, SAMPL represents a shift in expectations: merely declaring "a t-test was used" is insufficient; it is necessary to specify hypotheses, underlying assumptions, the test version, the nature of the data, units of measurement, summary statistics, and effect estimates with confidence intervals.
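As an illustration of the difference, here is a hypothetical two-group comparison (invented data) reported in the SAMPL spirit: the test variant is named, and the mean difference is given with a 95% confidence interval alongside the p-value. This sketch assumes scipy is available:

```python
# SAMPL-style report for a two-group comparison on invented data:
# test variant, effect estimate, 95% CI, and p-value -- not a bare p-value.
from scipy import stats

control = [5.1, 4.8, 5.6, 5.0, 4.7, 5.3, 5.2, 4.9]
treated = [5.9, 6.1, 5.4, 6.3, 5.8, 6.0, 5.7, 6.2]

# Welch's two-sample t-test (does not assume equal variances)
res = stats.ttest_ind(treated, control, equal_var=False)

# Mean difference with a 95% CI on the Welch degrees of freedom
n1, n2 = len(treated), len(control)
m1, m2 = sum(treated) / n1, sum(control) / n2
v1 = sum((x - m1) ** 2 for x in treated) / (n1 - 1)
v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
se = (v1 / n1 + v2 / n2) ** 0.5
df = (v1 / n1 + v2 / n2) ** 2 / (
    (v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1)
)
diff = m1 - m2
t_crit = stats.t.ppf(0.975, df)
ci = (diff - t_crit * se, diff + t_crit * se)

print(f"Welch's t-test: difference = {diff:.2f} "
      f"(95% CI {ci[0]:.2f} to {ci[1]:.2f}), df = {df:.1f}, p = {res.pvalue:.4f}")
```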

Emerging guidelines for general medical journals

High-impact general journals have been outlining their statistical expectations in an increasingly sophisticated manner. In 2019, the New England Journal of Medicine published "New Guidelines for Statistical Reporting in the Journal", emphasizing the need to:22

• Describe statistical methods with sufficient detail to ensure reproducibility;
• Present effect sizes and confidence intervals;
• Discuss assumptions, sensitivity analyses and clinical implication of results;
• Avoid the use of p-values as the sole metric of significance.

The JAMA (Journal of the American Medical Association) Network journals maintain a continuous series of "Guides to Statistics and Methods" and have published specific recommendations for the content of statistical analysis plans for clinical trials, including a 55-item checklist.14

Specialty journals, such as the Journal of Thoracic Oncology, have issued statistical reporting guidelines adapted to their scope, reinforcing the importance of clarity and precision in the description of analyses.23

The ICMJE, data sharing, and open science

Since 2015, the ICMJE has updated its recommendations, including a 2017 editorial requiring a data-sharing statement in clinical-trial manuscripts submitted to member journals as of 2018.15 In addition:

• Trials initiated as of 2019 must include an individual participant data sharing plan in the public trial registry;
• The absence of effective access may not preclude publication, but it must be explicitly justified.15

Audit studies show that the implementation of these policies is heterogeneous, yet growing, both across journals and clinical trial registries.16,17 The underlying message is clear: for contemporary medical journals, statistical quality and transparency are inseparable dimensions of scientific credibility.

Biostatistics in the application of diverse study designs

Randomized clinical trials

In clinical trials, the "new biostatistics" is evident at several levels:
• Registration and protocol. CONSORT and the ICMJE request prospective trial registration and the explicit description of primary and secondary outcomes, randomization methods, blinding, and analysis plans.2,4
• Statistical analysis plans (SAP). The JAMA guidelines (2017) recommend that the SAP be a standalone document, finalized prior to unblinding, specifying models, adjustments, missing data management, multiplicity, interim and subgroup analyses.14
• Outcomes and sample size. Guidelines such as Difference ELicitation in TriALs (DELTA2) refine the choice of the target difference and the sample size calculation, emphasizing the need to justify clinical relevance rather than just statistical significance.24,25
• Non-inferiority and equivalence analyses. Journals now expect clear justifications for non-inferiority margins, population selection (intention to treat vs. per protocol) and a discussion of the sensitivity of these choices.
• Adaptive and Bayesian trials. Although still less common, adaptive designs and Bayesian analyses are gaining ground, requiring familiarity from editors and peer-reviewers with concepts such as posterior probability, sequential updates, and Type I error control in more complex contexts.
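The DELTA2 logic above can be sketched with the standard normal-approximation sample size formula for a two-arm parallel trial with a continuous outcome; the target difference and standard deviation below are illustrative numbers, and in a real trial the target difference must be justified clinically:

```python
# n per arm for a two-sided test of a difference in means (normal approximation).
import math
from statistics import NormalDist

def n_per_arm(delta, sd, alpha=0.05, power=0.80):
    """Sample size per group for detecting a target difference `delta`."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# e.g. a clinically justified target difference of 5 units, SD of 10:
print(n_per_arm(delta=5, sd=10))    # 63 per arm under these assumptions
print(n_per_arm(delta=2.5, sd=10))  # halving delta quadruples n: 252 per arm
```

The second call makes the DELTA2 point concrete: choosing an overly optimistic target difference, rather than a clinically justified one, silently quarters the required sample size.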

Observational studies and causal inference

For observational studies, STROBE9 and the literature on causal inference emphasize:

• Clarity in defining the source population, exposures, outcomes, and confounders;
• The strengths and vulnerabilities of observational designs (cohort, case-control) and of quasi-experimental designs (interrupted time series, regression discontinuity);
• Explicit use of directed acyclic graphs (DAGs) to outline causal assumptions and adjustment decisions;26
• Methods such as propensity scores, inverse probability weighting, and marginal structural models, which are becoming increasingly common in widely circulated clinical articles.
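Inverse probability weighting can be illustrated with a minimal synthetic example (all coefficients invented, a single binary confounder): weighting each subject by the inverse of the estimated propensity of the treatment actually received removes the confounding that biases the naive group contrast.

```python
# Synthetic confounding example: L raises both treatment probability and outcome.
import random

random.seed(42)
TRUE_ATE = 2.0  # invented causal effect of treatment A on outcome Y

data = []
for _ in range(20000):
    L = 1 if random.random() < 0.5 else 0                # binary confounder
    A = 1 if random.random() < (0.8 if L else 0.2) else 0  # treatment depends on L
    Y = TRUE_ATE * A + 3.0 * L + random.gauss(0, 1)        # outcome depends on A and L
    data.append((L, A, Y))

def mean(xs):
    return sum(xs) / len(xs)

# Naive contrast E[Y|A=1] - E[Y|A=0], confounded by L (biased upward here)
naive = mean([y for _, a, y in data if a == 1]) - mean([y for _, a, y in data if a == 0])

# Propensity score P(A=1 | L) estimated within each stratum of L
p_hat = {lv: mean([a for l, a, _ in data if l == lv]) for lv in (0, 1)}

# Hajek-style weighted mean in each arm: weight 1/p for treated, 1/(1-p) for controls
def weighted_mean(arm):
    num = den = 0.0
    for l, a, y in data:
        if a == arm:
            w = 1 / p_hat[l] if arm == 1 else 1 / (1 - p_hat[l])
            num += w * y
            den += w
    return num / den

ipw = weighted_mean(1) - weighted_mean(0)
print(f"naive = {naive:.2f}, IPW = {ipw:.2f} (true effect = {TRUE_ATE})")
```

The naive estimate lands near 3.8 in this setup, while the weighted contrast recovers the true effect of 2.0, which is the explicit causal logic journals now expect authors to state rather than imply.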

The "new biostatistics" here is not just about applying more sophisticated models, but about the requirement to make the underlying causal logic explicit, avoiding implicit causal interpretations based on weak associations.

Systematic reviews and meta-analyses

PRISMA 2020 updated the checklist for systematic reviews, aligning it with improvements in search, selection, and quantitative synthesis.10 From a statistical perspective, journals tend to require:

• Clear justification for fixed-effect vs. random-effects models;
• Discussion of heterogeneity, including I² and τ², and their implications for interpretation;
• Evaluation of publication bias and small-study effects;4
• Sensitivity and subgroup analyses, with caution to avoid p-hacking.
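The heterogeneity statistics above can be sketched on invented study-level data: Cochran's Q, I² as the proportion of variability beyond chance, and the DerSimonian-Laird moment estimator of the between-study variance τ²:

```python
# Invented study-level effect estimates (y) and within-study variances (v).
y = [0.30, 0.10, 0.45, 0.20, 0.60]
v = [0.010, 0.020, 0.015, 0.025, 0.010]

w = [1 / vi for vi in v]  # fixed-effect (inverse-variance) weights
fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)

q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
df = len(y) - 1
i2 = max(0.0, (q - df) / q)  # share of variation beyond sampling error

c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)  # DerSimonian-Laird between-study variance

w_re = [1 / (vi + tau2) for vi in v]  # random-effects weights
pooled_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)

print(f"Q = {q:.2f} (df = {df}), I2 = {i2:.0%}, tau2 = {tau2:.4f}")
print(f"fixed-effect pooled = {fixed:.3f}, random-effects pooled = {pooled_re:.3f}")
```

On these invented data I² is around 64%, so the fixed-effect and random-effects pooled estimates diverge, and a journal would expect the authors to justify the model choice rather than report whichever is "significant".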

Predictive models, machine learning, and artificial intelligence

The explosion of predictive and artificial intelligence studies in healthcare has led to the development of TRIPOD, TRIPOD+AI, and PROBAST.11,12,19 The new requirements include:

• Clearly distinguishing between development, internal validation, and external validation;
• Reporting discrimination (e.g., area under the curve – AUC), calibration, and clinical utility (decision curve analysis);
• Avoiding overfitting through cross-validation, bootstrapping, or independent validation samples;
• For machine learning models, outlining architecture, hyperparameters, and tuning strategy, as well as ensuring reproducibility via code and seeds.
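The discrimination requirement can be illustrated with the Mann-Whitney interpretation of the AUC: the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen non-case. A minimal sketch on toy labels and scores:

```python
# AUC computed directly from its probabilistic definition: the fraction of
# (case, non-case) pairs where the case gets the higher score (ties count 0.5).
def auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```

Note that a good AUC alone does not satisfy TRIPOD+AI: calibration (agreement between predicted and observed risks) and clinical utility must be reported separately, since a model can discriminate well yet be badly miscalibrated.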

Medical journals that publish AI diagnostic or prognostic studies generally require adherence to TRIPOD/TRIPOD+AI, and, for algorithm-based intervention trials, to extensions such as CONSORT-AI and SPIRIT-AI.

Implications for medical research and scientific publishing

Design stage: Statistics "under the hood" of the protocol

The rise in editorial standards means that many decisions previously made after data collection are now defined during study design:
• The formulation of primary, secondary, and exploratory outcomes must be compatible with the sample size and the desired statistical power;
• The strategy for handling missing data (e.g., multiple imputation) must be planned a priori, rather than improvised;
• Criteria for subgroup analyses, clinically relevant thresholds, and multiplicity adjustments are discussed in the SAP, not just in the article's Discussion section.

This increases the value of early collaboration between clinical researchers and biostatisticians, shifting the relationship from post-hoc consultation to a partnership that begins at the design stage.

Submission and peer review: the rise of specialized statistical review

Many journals now rely on dedicated statistical reviewers or utilize specific checklists for methodological assessment.1,23 In practice, this means:

• Manuscripts with flawed design, analysis, or statistical reporting are more likely to be rejected, regardless of the clinical relevance of the research question;
• Common errors – such as the misapplication of statistical tests, lack of adjustment for multiple comparisons, post-hoc outcome switching, and failure to account for clustering or repeated measures – are increasingly less tolerated;
• Peer-review turnaround times increase when reviewers request further methodological detail, re-analyses, or additional supplemental material.

For research groups with limited statistical expertise, this may be experienced as journals becoming stricter; at a systemic level, however, it reduces the circulation of statistically weak evidence.

Post-publication stage: data, code, and evidence reuse

The requirement for data-sharing statements, public protocols, and SAPs creates an environment in which:
• Other groups can re-analyze data, reproduce analyses, and test the robustness of conclusions;16,17
• Systematic reviews and meta-analyses have access to more complete information, reducing selective reporting bias;
• Questionable research practices (p-hacking, selective exclusion of outliers, post-hoc outcome switching) become more easily detectable.

This transparency, however, requires institutional infrastructure for secure data storage, curation, and legal and ethical support, which is not always available equitably across institutions and countries.

The Brazilian and Latin American context

Reporting guidelines and open science in national journals

In Brazil, the advancement of open science and reporting guidelines is reflected in initiatives by funding agencies, bibliographic databases, and journals:
• Galvão et al.27 synthesized the role of reporting guidelines in enhancing the quality and transparency of health research, highlighting the EQUATOR Network and checklists such as CONSORT, STROBE, and PRISMA;27
• Montagna et al.28 discuss the adoption of standardized protocols to improve the quality of medical research, arguing that such protocols strengthen scientific production and the dialogue between researchers;28
• The recent update of selection and indexing criteria for journals in the Latin American and Caribbean Health Sciences Literature (LILACS) explicitly includes the adoption of international guidelines for the reporting of results, aligned with the recommendations of the Pan American Health Organization (PAHO)/World Health Organization (WHO) and the open science agenda.29

Studies on the knowledge and use of reporting guidelines among leaders of health research groups show an increase in familiarity with EQUATOR, yet significant gaps in systematic implementation persist.30

Specific challenges

Some particular challenges emerge in the national context:
• Unequal access to qualified statistical support. Several graduate programs and smaller institutions do not have full-time statisticians or methodological support centers, which hinders the full adoption of editorial requirements;
• Statistical training of health professionals. Undergraduate and graduate curricula often emphasize analysis techniques, but overlook design-based reasoning, causal inference, and reproducibility. This favors an instrumental and reactive use of biostatistics;
• Pressure for publication and productivity metrics. The combination of stricter methodological requirements with evaluation systems based on article counts may generate tension between quality and quantity;
• Technical and legal barriers to data sharing. Ethical-regulatory aspects (the General Data Protection Law – LGPD, ethics committees, and informed consent forms) and limited infrastructure for secure repositories may delay the practical adoption of open-data policies, notwithstanding their formal inclusion.

These challenges point to the need for institutional policies that go beyond the "minimalist" compliance with journal requirements, promoting a culture of integrity and interdisciplinary collaboration.

Risks and misunderstandings: when the new biostatistics becomes mere bureaucracy

Not every adoption of guidelines automatically results in better science. Some risks deserve critical attention:
• "Checklistization" of science. The mechanical application of checklists without internalization of the underlying principles can produce formally correct articles that are conceptually weak;
• Epistemic exclusion. Requirements for high levels of statistical sophistication and infrastructure may marginalize research groups in resource-limited settings, further concentrating evidence production in well-resourced centers;
• Confusion between complexity and quality. The use of sophisticated statistical models (e.g., machine learning) does not replace sound design, data quality, or clinical relevance. Medical journals are rightly starting to require that complexity be justified and that models be interpretable and useful for decision-making;12,19
• Defensive use of statistics. In some contexts, the proliferation of sensitivity analyses, adjustments, and tests may serve more to insulate the study from criticism than to illuminate the research question.

Recognizing these risks is part of the maturity of the "new biostatistics": the goal is not to increase the quantity of statistics in articles, but the quality of inferential reasoning.

The way forward: toward collaborative biostatistics

In light of the described shifts, some strategies may appear particularly promising for researchers and institutions:
1. Early involvement of statisticians and methodologists in projects. Participation from the initial formulation of the research question and design reduces the likelihood of insoluble problems at the analysis stage;
2. Establishment of institutional methodological support centers. Structures that bring together statisticians, epidemiologists, qualitative-methods experts, and data managers can provide transversal support to multiple research groups;
3. Continued training in methods and reporting guidelines. Regular courses on CONSORT, STROBE, PRISMA, TRIPOD, SAPs, and open science – aimed at faculty, graduate students, and editorial teams – help translate guidelines into everyday practice;27,28
4. Institutional registration and sharing policies. Requiring pre-registration of trials and review protocols, alongside data-management and sharing plans, aligns institutions with editorial expectations and reduces redundant work during submission;
5. Valuing reproducibility in academic evaluation. Promotion and funding criteria that account for methodological quality, open access to data, and cumulative impact can counterbalance the incentive for purely quantitative publication.

In the Brazilian context, these strategies resonate with recent guidelines for graduate program evaluation and open science policies from agencies such as the Brazilian Federal Agency for Support and Evaluation of Graduate Education (CAPES) and the National Council for Scientific and Technological Development (CNPq), in addition to initiatives from bibliographic networks like Scientific Electronic Library Online (SciELO) and LILACS.

Final considerations

Between 2015 and 2025, the biostatistics required by medical journals evolved from a collection of analysis techniques positioned at the tail end of the study to a normative structure that encompasses the entire research cycle, from design through publication and data reuse. This "new biostatistics" is embodied in reporting guidelines, analysis plans, registration and data-sharing policies, specialized statistical review, and a culture of reproducibility.

The impact on medical research is broad: it improves transparency, clarity, and the reliability of evidence, while also increasing planning complexity, the need for interdisciplinary collaboration, and the risk of marginalizing groups with less methodological support. In Brazil and other middle-income countries, the challenge is to translate these requirements into opportunities to strengthen the culture of scientific integrity, rather than allowing them to become a mere bureaucratic barrier.

Responding to this challenge means recognizing biostatistics as a partner rather than a "gatekeeper" of publication. For researchers, statisticians, editors, and reviewers, the emerging horizon is one of biostatistics that is more collaborative, critical, and focused on reproducibility – a fundamental condition for the medical literature to fulfill its promise of reliably guiding clinical practice and health policy.

REFERENCES

1. Altman DG, Gore SM, Gardner MJ, Pocock SJ. Statistical guidelines for contributors to medical journals. Br Med J (Clin Res Ed). 1983; 286 (6376): 1489–93.

2. Moher D, Hopewell S, Schulz KF, Montori V, Gøtzsche PC, Devereaux PJ, et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ. 2010; 340: c869.

3. Ioannidis JPA. Why most published research findings are false. PLoS Med. 2005; 2 (8): e124.

4. Simmons JP, Nelson LD, Simonsohn U. False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol Sci. 2011;22 (11): 1359–66.

5. Open Science Collaboration. Estimating the reproducibility of psychological science. Science. 2015; 349 (6251): aac4716.

6. Wasserstein RL, Lazar NA. The ASA's statement on p-values: context, process, and purpose. Am Stat. 2016;70(2):129–33.

7. Benjamin DJ, Berger JO, Johannesson M, Nosek BA, Wagenmakers EJ, Berk R, et al. Redefine statistical significance. Nat Hum Behav. 2018; 2 (1): 6–10.

8. Simera I, Altman DG, Moher D, Schulz KH, Hoey J. The EQUATOR Network: facilitating transparent and accurate reporting of health research. Serials. 2008; 21 (3): 183–7.

9. Vandenbroucke JP, von Elm E, Altman DG, Gøtzsche PC, Mulrow CD, Pocock SJ, et al. Strengthening the reporting of observational studies in epidemiology (STROBE): explanation and elaboration. Epidemiology. 2007; 18 (6): 805–35.

10. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021; 372: n71.

11. Collins GS, Reitsma JB, Altman DG, Moons KGM. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD statement. Ann Intern Med. 2015; 162 (1): 55–63.

12. Collins GS, Moons KGM, Dhiman P, et al. TRIPOD+AI statement: updated guidance for reporting clinical prediction models that use regression or machine learning methods. BMJ. 2024; 385: e078378.

13. Nosek BA, Alter G, Banks GC, Borsboom D, Bowman SD, Breckler SJ, et al. Promoting an open research culture. Science. 2015; 348 (6242): 1422–5.

14. Gamble C, Krishan A, Stocken D, Lewis S, Juszczak E, Dore C, et al. Guidelines for the content of statistical analysis plans in clinical trials. JAMA. 2017; 318 (23): 2337–43.

15. Taichman DB, Sahni P, Pinborg A, Peiperl L, Laine C, James A, et al. Data sharing statements for clinical trials: a requirement of the International Committee of Medical Journal Editors. N Engl J Med. 2017; 376 (23): 2277–9.

16. Zarin DA, Fain KM, Dobbins H, Tse T, Williams RJ. 10-year update on study results submitted to ClinicalTrials.gov. N Engl J Med. 2019; 381 (20): 1966–74.

17. Siebert M, Gaba JF, Caquelin L, Gouraud H, Dupuy A, Moher D, et al. Data-sharing recommendations in biomedical journals and randomised controlled trials: an audit of journals following the ICMJE recommendations. BMJ Open. 2020; 10 (5): e038887.

18. Munafò MR, Nosek BA, Bishop DVM, Button KS, Chambers CD, Percie du Sert N, et al. A manifesto for reproducible science. Nat Hum Behav. 2017; 1 (1): 0021.

19. Wolff RF, Moons KGM, Riley RD, Whiting PF, Westwood M, Collins GS, et al. PROBAST: a tool to assess the risk of bias and applicability of prediction model studies. Ann Intern Med. 2019; 170 (1): 51–8.

20. Lang TA, Altman DG. Basic statistical reporting for articles published in biomedical journals: the "Statistical Analyses and Methods in the Published Literature" or the SAMPL guidelines. Int J Nurs Stud. 2015; 52 (1): 5–9.

21. Ordak M. Implementation of SAMPL guidelines: recommendations for improving statistical reporting in biomedical journals. Clin Med (Lond). 2025; 25 (3): 100304.

22. Harrington D, D'Agostino RB Sr, Gatsonis C, Hogan JW, Hunter DJ, Normand SLT, et al. New guidelines for statistical reporting in the journal. N Engl J Med. 2019; 381 (3): 285–6.

23. Ou FS, Moore KL, George SL, Zhou Y, O'Brien PC, Halyard MY, et al. Guidelines for statistical reporting in medical journals. J Thorac Oncol. 2020; 15 (5): 587–93.

24. International Committee of Medical Journal Editors (ICMJE). Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals. Updated 2019–2025. [accessed 2025 Oct 4]. Available from: https://www.icmje.org/recommendations/

25. Cook JA, Julious SA, Sones W, Hampson LV, Hewitt CE, Berlin JA, et al. DELTA2 guidance on choosing the target difference and undertaking and reporting the sample size calculation for a randomised controlled trial. BMJ. 2018; 363: k3750.

26. Hernán MA, Robins JM. Causal inference: what if. Boca Raton: Chapman & Hall/CRC; 2020.

27. Galvão TF, Silva MT, Garcia LP. Ferramentas para melhorar a qualidade e a transparência dos relatos de pesquisa em saúde: guias de redação científica. Epidemiol Serv Saúde. 2016; 25 (2): 427–36.

28. Montagna E, Zaia V, Laporta GZ. Adoption of protocols to improve quality of medical research. Einstein (São Paulo). 2019; 18: eED5316.

29. BIREME/OPAS/OMS. Critérios de Seleção e Permanência de Periódicos LILACS Brasil 2025. São Paulo: BIREME; 2025.

30. Galdino-Santos L, Alves Lucena CP, Barreto Segundo JD, Cenci MS, Silva ICM, Moher D, et al. Knowledge and usage of the EQUATOR reporting guidelines: a survey among Brazilian health research group leaders. Encontros Bibli. 2025; 30: e104033.

Acknowledgments
We thank the National Council for Scientific and Technological Development (CNPq) for financial support through a Research Productivity Fellowship (Level 1C).

AI use: Declared use of generative AI tool (ChatGPT) for editorial and stylistic refinement.

Authors' contribution
Amorim MMR: conceptualization, methodology, project administration, supervision; writing, review, and editing of the manuscript. Freitas JLB: conceptualization, validation, supervision; writing, review, and editing of the manuscript. Albuquerque EC: data curation, investigation, visualization, writing of the manuscript. Cunha ACMC, Souza ASR: visualization, writing, review, and editing of the manuscript. All authors approved the final version of the article and declared no conflicts of interest.

Data availability
All datasets supporting the study are included in the article.

Received on December 16, 2025
Final version presented on January 26, 2026
Approved on January 30, 2026

Associate Editor: Alex Sandro Souza

Copyright © 2026 Revista Brasileira de Saúde Materno Infantil. All rights reserved.
