Welcome to our research page featuring recent publications in biostatistics and epidemiology. These fields play a crucial role in understanding the causes, prevention, and treatment of health conditions, and our team advances them through innovative studies and cutting-edge statistical analyses. On this page you will find our research publications describing the development of new statistical methods and their application to real-world data. Please feel free to contact us with any questions or comments.
To the Editor: We recently became aware of the study by Chen et al. Its results for the 3-month confirmed disability progression (CDP3M) outcome differ notably from those of a previously published network meta-analysis (NMA). More specifically, the CDP3M results differ greatly for interferon (IFN) beta-1a 30 mcg every week, IFN beta-1a 44 mcg 3 times a week, IFN beta-1a 22 mcg 3 times a week, natalizumab 300 mg every 4 weeks, and ocrelizumab 600 mg every 24 weeks. The comparative estimates published by Chen et al. may compromise the external validity of their SUCRA rankings, given that these estimates are inconsistent with the totality of the existing body of published evidence. For example, the NMA of Chen et al. includes only one trial assessing the efficacy of natalizumab (compared with placebo). Because no trials compare natalizumab with other active treatments, the pooled effect estimate for natalizumab versus placebo should remain close to the treatment effect estimate from the original trial (hazard ratio = 0.58); in the review of Chen et al., however, it is reported as 0.85. A similar discrepancy appears for ponesimod: the original trial reported a hazard ratio versus teriflunomide 14 mg of 0.83 (0.58; 1.18), whereas the NMA reported a hazard ratio of 1.39 (0.55; 3.57). We therefore respectfully request additional transparency from Chen et al. regarding the NMA methods and additional clarity supporting their results.
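The letter's core argument, that a treatment connected to the network by only a single placebo-controlled trial can inherit nothing but that trial's estimate, can be sketched with a toy inverse-variance fixed-effect pooling. The 0.58 hazard ratio is the one quoted above; the standard error is an invented value used purely for illustration:

```python
import math

def pool_fixed_effect(estimates):
    """Inverse-variance fixed-effect pooling of (log_hr, se) pairs."""
    weights = [1.0 / se ** 2 for _, se in estimates]
    pooled = sum(w * lhr for w, (lhr, _) in zip(weights, estimates)) / sum(weights)
    return pooled

# Single trial comparing natalizumab with placebo (HR 0.58 from the letter;
# the standard error of 0.12 is a made-up illustrative value).
single_trial = [(math.log(0.58), 0.12)]
print(round(math.exp(pool_fixed_effect(single_trial)), 2))  # 0.58
```

With only one trial contributing, the weighted average is that trial's estimate, and with no other path connecting natalizumab to the network there is no indirect evidence that could shift it; a pooled hazard ratio of 0.85 therefore points to a methodological inconsistency.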
A common problem in the analysis of multiple data sources, including individual participant data meta-analysis (IPD-MA), is the misclassification of binary variables. Misclassification may bias estimators of model parameters, even when it is entirely random. We aimed to develop statistical methods that enable unbiased estimation of adjusted and unadjusted exposure-outcome associations and of between-study heterogeneity in IPD-MA, where the extent and nature of exposure misclassification may vary across studies.
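The claim that even entirely random (nondifferential) misclassification biases the estimated association can be demonstrated with a small simulation. All numeric values below (exposure prevalence, risks, sensitivity, specificity) are invented for illustration and are not taken from the paper:

```python
import random

random.seed(1)

# Simulate a binary exposure X and outcome Y with a true risk difference of
# 0.20, then nondifferentially misclassify X into a surrogate X* with
# sensitivity 0.8 and specificity 0.9 (all values are illustrative).
n = 100_000
sens, spec = 0.8, 0.9
risk = {1: 0.40, 0: 0.20}  # P(Y=1 | X)

def risk_difference(pairs):
    """Risk difference from (exposure, outcome) pairs."""
    exposed = [y for x, y in pairs if x == 1]
    unexposed = [y for x, y in pairs if x == 0]
    return sum(exposed) / len(exposed) - sum(unexposed) / len(unexposed)

data = []
for _ in range(n):
    x = int(random.random() < 0.5)
    y = int(random.random() < risk[x])
    # The surrogate depends only on X, not on Y: entirely random error.
    x_star = int(random.random() < (sens if x == 1 else 1 - spec))
    data.append((x, y, x_star))

rd_true = risk_difference([(x, y) for x, y, _ in data])
rd_naive = risk_difference([(xs, y) for _, y, xs in data])
print(round(rd_true, 3), round(rd_naive, 3))  # naive estimate is attenuated toward 0
```

Even though the error is independent of the outcome, the naive analysis based on the surrogate is attenuated toward the null, which is the bias the proposed methods are designed to remove.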
We present Bayesian methods that allow misclassification of binary exposure variables to depend on study- and participant-level characteristics. In an example concerning the differential diagnosis of dengue using two variables, in which the gold standard measurement of the exposure was unavailable for some studies that measured only a misclassification-prone surrogate, our methods yielded more accurate estimates than either misclassification-naive analyses or analyses based on gold standard measurements alone. In a simulation study, the evaluated misclassification model yielded valid estimates of the exposure-outcome association and was more accurate than analyses restricted to gold standard measurements.
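As a rough illustration of why correcting for misclassification recovers the association, the sketch below applies the classical matrix method, back-correcting the observed exposure counts within each outcome stratum using known sensitivity and specificity. This is not the paper's random-effects Bayesian model, only a simpler deterministic analogue, and every numeric value is invented:

```python
import random

random.seed(7)

# Hypothetical data: true risk difference 0.20, exposure misclassified with
# sensitivity 0.8 and specificity 0.9 (values invented for illustration).
n = 100_000
sens, spec = 0.8, 0.9
risk = {1: 0.40, 0: 0.20}  # P(Y=1 | X)

counts = {(y, xs): 0 for y in (0, 1) for xs in (0, 1)}
for _ in range(n):
    x = int(random.random() < 0.5)
    y = int(random.random() < risk[x])
    x_star = int(random.random() < (sens if x == 1 else 1 - spec))
    counts[(y, x_star)] += 1

def correct(n_star_exposed, n_total):
    """Matrix-method estimate of the truly exposed count in an outcome stratum."""
    return (n_star_exposed - (1 - spec) * n_total) / (sens + spec - 1)

cases = counts[(1, 1)] + counts[(1, 0)]
noncases = counts[(0, 1)] + counts[(0, 0)]
exp_cases = correct(counts[(1, 1)], cases)        # truly exposed cases
exp_noncases = correct(counts[(0, 1)], noncases)  # truly exposed non-cases

rd_naive = counts[(1, 1)] / (counts[(1, 1)] + counts[(0, 1)]) \
         - counts[(1, 0)] / (counts[(1, 0)] + counts[(0, 0)])
rd_corrected = exp_cases / (exp_cases + exp_noncases) \
             - (cases - exp_cases) / ((cases - exp_cases) + (noncases - exp_noncases))
print(round(rd_naive, 3), round(rd_corrected, 3))
```

The matrix method needs sensitivity and specificity as known constants; the appeal of the Bayesian formulation described above is that these quantities can instead be estimated from the studies that supply both the surrogate and the gold standard, with uncertainty propagated.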
Our proposed framework appropriately accounts for binary exposure misclassification in IPD-MA. It requires that some studies supply IPD on both the surrogate and the gold standard exposure, and it allows misclassification to follow a random-effects distribution across studies, conditional on observed covariates (and the outcome). The proposed methods are most beneficial when only a few large studies measuring the gold standard are available and when misclassification is frequent.