Welcome to our research page featuring recent publications in biostatistics and epidemiology! These fields play a crucial role in understanding the causes, prevention, and treatment of health conditions, and our team advances them through innovative studies and cutting-edge statistical analyses. On this page you will find our research publications describing the development of new statistical methods and their application to real-world data. Please feel free to contact us with any questions or comments.
Various statistical and machine learning algorithms can be used to predict treatment effects at the patient level using data from randomized clinical trials (RCTs). Such predictions can facilitate individualized treatment decisions. Recently, a range of methods and metrics has been developed for assessing the accuracy of such predictions. Here, we extend these methods, focusing on the case of survival (time-to-event) outcomes. We start by providing alternative definitions of participant-level treatment benefit; subsequently, we summarize existing measures and propose new ones for assessing the performance of models that estimate participant-level treatment benefit. We explore metrics assessing discrimination and calibration for benefit, as well as decision accuracy. These measures can be used to assess the performance of statistical as well as machine learning models and can be useful during model development (i.e., for model selection or internal validation) or when testing a model in new settings (i.e., in an external validation). We illustrate the methods using simulated data and real data from the OPERAM trial, an RCT in multimorbid older people that randomized participants to either standard care or a pharmacotherapy optimization intervention. We provide R code implementing all models and measures.
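A commonly used discrimination-for-benefit measure in this literature is the c-for-benefit, computed on treated-control pairs matched by predicted benefit. The abstract does not list the exact measures used, and the paper provides R code, so the Python sketch below is only a minimal illustration of the general idea for binary outcomes (the function name and pair encoding are ours, not the authors'):

```python
def c_for_benefit(pairs):
    """Concordance-for-benefit over matched treated-control pairs.

    Each element of `pairs` is (predicted_benefit, observed_benefit),
    where observed_benefit = control outcome - treated outcome
    (1, 0, or -1 for a binary adverse outcome).
    """
    concordant = 0.0
    comparable = 0
    for i in range(len(pairs)):
        for j in range(i + 1, len(pairs)):
            pred_i, obs_i = pairs[i]
            pred_j, obs_j = pairs[j]
            if obs_i == obs_j:
                continue  # only pairs with different observed benefit are comparable
            comparable += 1
            # concordant when the pair with larger observed benefit also
            # has the larger predicted benefit; predicted ties count half
            if (pred_i - pred_j) * (obs_i - obs_j) > 0:
                concordant += 1.0
            elif pred_i == pred_j:
                concordant += 0.5
    return concordant / comparable if comparable else float("nan")
```

For survival outcomes, the observed pairwise benefit would instead have to be defined from (possibly censored) event times, which is part of what the extensions described in the abstract address.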
Small-study effects are a common threat in systematic reviews and may indicate publication bias. Their existence is often verified by visual inspection of the funnel plot. Formal tests to assess the presence of funnel plot asymmetry typically estimate the association between the reported effect size and its standard error, the total sample size, or the inverse of the total sample size. In this paper, we demonstrate that the application of these tests may be less appropriate in meta-analysis of survival data, where censoring influences the statistical significance of the hazard ratio. We subsequently propose two new tests that are based on the total number of observed events and adopt a multiplicative variance component.
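The classical regression test of this kind is Egger's test, which regresses the standardized effect size on precision and checks whether the intercept differs from zero. As a self-contained illustration of that baseline (not the event-based tests proposed in the paper), a plain version for log hazard ratios might look as follows; the normal approximation for the p-value is an additional simplification:

```python
import math

def egger_test(log_hr, se):
    """Egger-style funnel plot asymmetry test.

    Regresses the standardized effect (log_hr / se) on precision (1 / se);
    a non-zero intercept suggests funnel plot asymmetry.
    Returns (intercept, z-statistic, two-sided p-value).
    """
    y = [t / s for t, s in zip(log_hr, se)]
    x = [1.0 / s for s in se]
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    # residual variance and standard error of the intercept (OLS formulas)
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    sigma2 = sum(r ** 2 for r in resid) / (n - 2)
    se_intercept = math.sqrt(sigma2 * (1.0 / n + xbar ** 2 / sxx))
    z = intercept / se_intercept
    # two-sided p-value via a normal approximation (a t distribution
    # with n - 2 df would be used in practice)
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return intercept, z, p
```

The new tests described above replace the standard-error-based precision with one based on the total number of observed events; the regression machinery itself stays similar.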
We compare the performance of the various funnel plot asymmetry tests in an extensive simulation study where we varied the true hazard ratio (HR=0.5 to 1), the number of published trials (N=10 to 100), the degree of censoring within trials (0 to 90%) and the mechanism leading to participant drop-out (non-informative versus informative).
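As a rough sketch of how one trial in such a simulation might be generated, the following uses exponential event times with a hazard ratio applied to the treated arm, and an independent exponential censoring process for the non-informative drop-out scenario (all parameter names and design choices here are illustrative assumptions, not the authors' simulation code):

```python
import random

def simulate_trial(n, hr, base_rate=0.1, censor_rate=0.0, seed=0):
    """Simulate one two-arm trial with exponential event times.

    Returns a list of (treated, observed_time, event_indicator) tuples.
    Censoring is drawn independently of the event time, i.e. the
    non-informative drop-out mechanism; an informative mechanism would
    instead link censoring to the event process.
    """
    rng = random.Random(seed)
    data = []
    for i in range(n):
        treated = i % 2  # alternating 1:1 allocation, for simplicity
        # under exponential event times the hazard ratio is a rate ratio
        rate = base_rate * (hr if treated else 1.0)
        event_time = rng.expovariate(rate)
        censor_time = rng.expovariate(censor_rate) if censor_rate > 0 else float("inf")
        observed_time = min(event_time, censor_time)
        event = event_time <= censor_time
        data.append((treated, observed_time, event))
    return data
```

Raising `censor_rate` relative to `base_rate` increases the expected proportion of censored participants, which is how a degree-of-censoring factor like the 0 to 90% range above could be varied.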
Results demonstrate that previously proposed, well-known tests for detecting funnel plot asymmetry suffer from low power or excessive type-I error rates in meta-analysis of survival data, particularly when trials are affected by participant drop-out. Because our novel test (adopting estimates of the asymptotic precision as study weights) yields reasonable power and maintains appropriate type-I error rates, we recommend its use for evaluating funnel plot asymmetry in meta-analysis of survival data. Funnel plot asymmetry tests should, however, be avoided when only a few trials are available for a meta-analysis.