no code implementations • 23 Apr 2024 • Haozhe Cheng, Cheng Ju, Haicheng Wang, Jinxiang Liu, Mengting Chen, Qiang Hu, Xiaoyun Zhang, Yanfeng Wang
The denoised text classes help OVAR models classify visual samples more accurately; in return, the classified visual samples enable better denoising of the text classes.
no code implementations • 16 Feb 2020 • Cheng Ju, Yan Qin, Chunjiang Fu
The iterative linear quadratic regulator (iLQR) has become a benchmark method for nonlinear stochastic optimal control problems.
no code implementations • 18 Jun 2018 • Cheng Ju, David Benkeser, Mark J. Van Der Laan
Many estimators of the average effect of a treatment on an outcome require estimation of the propensity score, the outcome regression, or both.
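A common estimator that uses both nuisance components is the augmented inverse-probability-weighted (AIPW, "doubly robust") estimator. As a hedged illustration of the idea (not the specific estimator studied in this paper), a minimal sketch, assuming fitted propensity scores and outcome-regression predictions are supplied:

```python
import numpy as np

def aipw_ate(y, a, e_hat, m1_hat, m0_hat):
    """Doubly robust (AIPW) estimate of the average treatment effect.

    y      -- observed outcomes
    a      -- binary treatment indicator (0/1)
    e_hat  -- estimated propensity scores P(A=1 | X)
    m1_hat -- outcome-regression predictions E[Y | A=1, X]
    m0_hat -- outcome-regression predictions E[Y | A=0, X]
    """
    # Plug-in difference of regressions, corrected by inverse-probability-
    # weighted residuals; consistent if either nuisance model is correct.
    return np.mean(
        m1_hat - m0_hat
        + a * (y - m1_hat) / e_hat
        - (1 - a) * (y - m0_hat) / (1 - e_hat)
    )
```

In practice the nuisance functions `e_hat`, `m1_hat`, and `m0_hat` would come from fitted regression or machine-learning models rather than being known.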
no code implementations • 18 May 2018 • Cheng Ju, James Li, Bram Wasti, Shengbo Guo
We show that the HELP algorithm improves predictive performance across multiple tasks and produces semantically meaningful embeddings that are discriminative for downstream classification or regression tasks.
no code implementations • 31 Mar 2018 • Cheng Ju, Antoine Chambaz, Mark J. Van Der Laan
Say that the above product is not fast enough and the algorithm for the $G$-component is fine-tuned by a real-valued $h$.
1 code implementation • 18 Jul 2017 • Cheng Ju, Joshua Schwab, Mark J. Van Der Laan
Even if the positivity assumption holds, practical violations of this assumption may jeopardize the finite sample performance of the causal estimator.
no code implementations • 30 Jun 2017 • Cheng Ju, Richard Wyss, Jessica M. Franklin, Sebastian Schneeweiss, Jenny Häggström, Mark J. Van Der Laan
Collaborative targeted minimum loss-based estimation (C-TMLE) is a novel methodology for causal inference that takes into account information on the causal parameter of interest when selecting a PS model.
1 code implementation • 5 Apr 2017 • Cheng Ju, Aurélien Bibaut, Mark J. Van Der Laan
In this work, we investigated multiple widely used ensemble methods, including unweighted averaging, majority voting, the Bayes Optimal Classifier, and the (discrete) Super Learner, for image recognition tasks, with deep neural networks as candidate algorithms.
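The two simplest of the ensemble schemes named above can be sketched directly. A minimal illustration, assuming each candidate network outputs per-class probabilities (the helper names are hypothetical, not from the paper):

```python
import numpy as np

def unweighted_average(probs):
    """Average the models' class-probability outputs, then take the argmax.

    probs -- array of shape (n_models, n_samples, n_classes)
    """
    return np.mean(probs, axis=0).argmax(axis=1)

def majority_vote(probs):
    """Each model casts one vote for its top class; the plurality class wins."""
    votes = probs.argmax(axis=2)                      # (n_models, n_samples)
    n_classes = probs.shape[2]
    # Count votes per class for every sample -> (n_classes, n_samples)
    counts = np.apply_along_axis(np.bincount, 0, votes, minlength=n_classes)
    return counts.argmax(axis=0)
```

Averaging uses the full probability vectors and so retains each model's confidence, while majority voting discards it; the two can disagree when a confident minority outweighs a hesitant majority.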
no code implementations • 7 Mar 2017 • Cheng Ju, Mary Combs, Samuel D Lendle, Jessica M. Franklin, Richard Wyss, Sebastian Schneeweiss, Mark J. Van Der Laan
In this study, we applied and evaluated the performance of the SL in its ability to predict treatment assignment using three electronic healthcare databases.