• Mashup Score: 0

    Background Health and social care interventions are often complex and can be decomposed into multiple components. Multicomponent interventions are often evaluated in randomised controlled trials. Across trials, interventions often share components that are given alongside other components which differ between trials. Multicomponent interventions can be synthesised using component network meta-analysis (CNMA). CNMA is limited by the structure of the available evidence, and it is not always straightforward to visualise such complex evidence networks. The aim of this paper is to develop tools to visualise the structure of complex evidence networks to support CNMA.
    Methods We performed a citation review of two key CNMA methods papers to identify existing published CNMA analyses and reviewed how they graphically represent intervention complexity and comparisons across trials. Building on identified shortcomings of existing visualisation approaches, we propose three approaches to standardise visualis…

    Tweets with this article
    • As component network meta-analyses becomes more widely used for the evaluation of multicomponent interventions, novel specific visualisations will be important to aid understanding of the complex data structure and facilitate interpretation of results https://t.co/owzQsR22q1 https://t.co/Cfu9dFGSIa
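The additive assumption at the heart of CNMA — that the effect of a combination equals the sum of its component effects — can be sketched on synthetic data. A minimal illustration (all trial comparisons, component labels and effect values below are invented for demonstration; a real CNMA would use weighted network meta-regression, not plain least squares):

```python
import numpy as np

# Hypothetical network: four trials comparing multicomponent interventions
# built from three components A, B, C against a common control.
# Columns of X indicate which components the active arm contains: [A, B, C].
X = np.array([
    [1, 0, 0],   # trial 1: A vs control
    [0, 1, 0],   # trial 2: B vs control
    [1, 1, 0],   # trial 3: A+B vs control
    [1, 0, 1],   # trial 4: A+C vs control
], dtype=float)
y = np.array([0.50, 0.30, 0.85, 0.95])  # synthetic relative-effect estimates

# Additive CNMA sketch: least-squares fit of per-component effects,
# so that d(A+B) is modelled as d(A) + d(B).
d, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip("ABC", np.round(d, 3))))

# Under additivity, the model can predict an unobserved combination (B+C)
print(round(d[1] + d[2], 3))
```

The design matrix `X` is exactly the kind of structure the paper's visualisations aim to expose: which component combinations the trial network actually identifies, and which predictions rest purely on the additivity assumption.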

  • Mashup Score: 0

    Background Intensive care unit (ICU) length of stay (LOS) and the risk-adjusted equivalent (RALOS) have been used as quality metrics. The latter measures entail either ratio or difference formulations or ICU random effects (RE), which have not previously been compared.
    Methods From calendar year 2016 data of an adult ICU registry-database (Australia & New Zealand Intensive Care Society (ANZICS) CORE), LOS predictive models were established using linear (LMM) and generalised linear (GLMM) mixed models. Model fixed-effects quality-metric formulations were estimated as RALOSR for LMM (geometric mean derived from log(ICU LOS)) and GLMM (day) and observed minus expected ICU LOS (OMELOS from GLMM). Metric confidence intervals (95% CI) were estimated by bootstrapping; random effects (RE) were predicted for LMM and GLMM. Forest-plot displays of ranked quality-metric point estimates (95% CI) were generated for ICU hospital classifications (metropolitan, private, rural/regional, and tertiary). Rob…

    Tweets with this article
    • Modelling of intensive care unit (ICU) length of stay as a quality measure: a problematic exercise https://t.co/RA8hG4YkF4
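The ratio and difference formulations named in the abstract can be illustrated on synthetic data. A minimal sketch (the LOS values, the unpaired resampling, and the percentile bootstrap below are illustrative assumptions, not the paper's fitted LMM/GLMM):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ICU stays for one unit: observed LOS (days) and model-predicted
# LOS; a real analysis would take predictions from the fitted mixed models.
observed = rng.lognormal(mean=1.0, sigma=0.6, size=200)
predicted = rng.lognormal(mean=0.9, sigma=0.5, size=200)

# Ratio formulation (RALOSR-style, log scale): ratio of geometric means,
# matching an LMM built on log(ICU LOS).
ralosr = np.exp(np.mean(np.log(observed)) - np.mean(np.log(predicted)))

# Difference formulation (OMELOS-style): observed minus expected mean LOS.
omelos = np.mean(observed) - np.mean(predicted)

# Simple percentile bootstrap 95% CI for the difference metric.
boot = [
    np.mean(rng.choice(observed, 200, replace=True))
    - np.mean(rng.choice(predicted, 200, replace=True))
    for _ in range(1000)
]
lo, hi = np.percentile(boot, [2.5, 97.5])

print(round(ralosr, 3), round(omelos, 3), (round(lo, 3), round(hi, 3)))
```

The contrast between the two metrics is the point: a geometric-mean ratio and an arithmetic-mean difference can rank the same ICU differently when the LOS distribution is skewed, which is why the paper compares the formulations.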

  • Mashup Score: 0

    Background Stringent requirements exist regarding the transparency of the study selection process and the reliability of results. A 2-step selection process is generally recommended; this is conducted by 2 reviewers independently of each other (conventional double-screening). However, the approach is resource intensive, which can be a problem, as systematic reviews generally need to be completed within a defined period with a limited budget. The aim of the following methodological systematic review was to analyse the evidence available on whether single screening is equivalent to double screening in the screening process conducted in systematic reviews.
    Methods We searched Medline, PubMed and the Cochrane Methodology Register (last search 10/2018). We also used supplementary search techniques and sources (“similar articles” function in PubMed, conference abstracts and reference lists). We included all evaluations comparing single with double screening. Data were summarized in a structu…

    Tweets with this article
    • Single screening versus conventional double screening for study selection in systematic reviews: a methodological systematic review https://t.co/6ySFXH0i8Y via @PieperDawid et al

  • Mashup Score: 2

    Background Pragmatic clinical trials (PCTs) are designed to reflect how an investigational treatment would be applied in clinical practice. As such, unlike their explanatory counterparts, they measure therapeutic effectiveness and are capable of generating high-quality real-world evidence. However, the conduct of PCTs remains extremely rare. The scarcity of such studies has contributed to the emergence of the efficacy-effectiveness gap and has led to calls for launching more of them, including in the field of oncology. This analysis aimed to identify self-labelled pragmatic trials of antineoplastic interventions and to evaluate whether their use of this label was justified.
    Methods We searched PubMed® and Embase® for publications corresponding with studies that investigated antitumor therapies and that were tagged as pragmatic in their titles, abstracts and/or index terms. Subsequently, we consulted all available source documents for the included trials and extracted relevant informati…

    Tweets with this article
    • Proud to share EORTC fellow @robbesaesen's article discussing the learnings from his recent #publication on the use of #PragmaticTrials in #oncology. Publication available here: https://t.co/doJfV97vyV #ClinicalTrials #CancerResearch https://t.co/X3H8VvvIhA

  • Mashup Score: 10

    Background Having an appropriate sample size is important when developing a clinical prediction model. We aimed to review how sample size is considered in studies developing a prediction model for a binary outcome.
    Methods We searched PubMed for studies published between 01/07/2020 and 30/07/2020 and reviewed the sample size calculations used to develop the prediction models. Using the available information, we calculated the minimum sample size that would be needed to estimate overall risk and minimise overfitting in each study and summarised the difference between the calculated and used sample size.
    Results A total of 119 studies were included, of which nine studies provided sample size justification (8%). The recommended minimum sample size could be calculated for 94 studies: 73% (95% CI: 63–82%) used sample sizes lower than required to estimate overall risk and minimise overfitting, including 26% of studies that used sample sizes lower than required to estimate overall risk only. A si…

    Tweets with this article
    • Sample size requirements are not being considered in studies developing prediction models for binary outcomes: a systematic review https://t.co/U0piAtyJyT via @pauladhiman et al @Argenscore @ovidiogarciav @pomyers @FaisalBakaeen @mmamas1973 https://t.co/XujeMwUAsU
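Of the two requirements the review checks — estimating overall risk and minimising overfitting — the first can be sketched with the standard binomial precision formula: the sample must be large enough that the confidence interval for the overall outcome proportion is acceptably narrow. A minimal sketch (the margin and prevalence values are illustrative; the review's full calculation also covers overfitting criteria not reproduced here):

```python
import math

def n_for_overall_risk(p: float, margin: float, z: float = 1.96) -> int:
    """Minimum n so the 95% CI for the overall outcome proportion p
    has half-width <= margin (normal approximation to the binomial)."""
    return math.ceil((z / margin) ** 2 * p * (1 - p))

# e.g. anticipated outcome prevalence 0.2, desired precision +/- 0.05
print(n_for_overall_risk(0.2, 0.05))  # -> 246
```

Note that the required n peaks at p = 0.5, so when the outcome prevalence is unknown, using 0.5 gives a conservative bound; the overfitting criteria typically demand substantially larger samples than this precision criterion alone.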