On nearly assumption-free tests of nominal confidence interval coverage for causal parameters estimated by machine learning

For many causal effect parameters of interest, doubly robust machine learning (DRML) estimators $\hat{\psi}_{1}$ are the state-of-the-art, incorporating the good prediction performance of machine learning; the decreased bias of doubly robust estimators; and the analytic tractability and bias reduction of sample splitting with cross fitting. Nonetheless, even in the absence of confounding by unmeasured factors, the nominal $(1 - \alpha)$ Wald confidence interval $\hat{\psi}_{1} \pm z_{\alpha / 2} \widehat{\mathsf{se}}[\hat{\psi}_{1}]$ may still undercover even in large samples, because the bias of $\hat{\psi}_{1}$ may be of the same or even larger order than its standard error of order $n^{-1/2}$. In this paper, we introduce essentially assumption-free tests that (i) can falsify the null hypothesis that the bias of $\hat{\psi}_{1}$ is of smaller order than its standard error, (ii) can provide an upper confidence bound on the true coverage of the Wald interval, and (iii) are valid under the null without smoothness/sparsity assumptions on the nuisance parameters. The tests, which we refer to as \underline{A}ssumption \underline{F}ree \underline{E}mpirical \underline{C}overage \underline{T}ests (AFECTs), are based on a U-statistic that estimates part of the bias of $\hat{\psi}_{1}$.
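To make the object under study concrete, the sketch below (not the authors' code) constructs a cross-fit doubly robust (AIPW) estimator $\hat{\psi}_{1}$ and the nominal $(1-\alpha)$ Wald interval whose coverage the AFECTs are designed to audit. The choice of the average treatment effect as the target, the random-forest nuisance models, the two-fold split, and the function name `crossfit_aipw` are all illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a cross-fit AIPW (doubly robust) estimator of an
# average treatment effect, with its nominal 95% Wald interval.
# Everything below is an illustrative assumption; the paper's AFECTs
# test whether the bias of psi_hat is small relative to se_hat.
import numpy as np
from scipy.stats import norm
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

def crossfit_aipw(X, A, Y, n_splits=2, alpha=0.05, seed=0):
    """Cross-fit AIPW estimate of E[Y(1)] - E[Y(0)] with a Wald CI."""
    n = len(Y)
    phi = np.empty(n)  # influence-function values, filled fold by fold
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        # Fit nuisance models on the training folds only (sample splitting).
        ps = RandomForestClassifier(random_state=seed).fit(X[train], A[train])
        out1 = RandomForestRegressor(random_state=seed).fit(
            X[train][A[train] == 1], Y[train][A[train] == 1])
        out0 = RandomForestRegressor(random_state=seed).fit(
            X[train][A[train] == 0], Y[train][A[train] == 0])
        # Evaluate on the held-out fold (cross fitting); clip extreme
        # propensities for numerical stability in this toy example.
        e = np.clip(ps.predict_proba(X[test])[:, 1], 0.01, 0.99)
        m1, m0 = out1.predict(X[test]), out0.predict(X[test])
        phi[test] = (m1 - m0
                     + A[test] * (Y[test] - m1) / e
                     - (1 - A[test]) * (Y[test] - m0) / (1 - e))
    psi_hat = phi.mean()
    se_hat = phi.std(ddof=1) / np.sqrt(n)  # standard error of order n^{-1/2}
    z = norm.ppf(1 - alpha / 2)
    return psi_hat, (psi_hat - z * se_hat, psi_hat + z * se_hat)
```

The Wald interval returned here attains $(1-\alpha)$ coverage only if the bias of `psi_hat` is of smaller order than `se_hat`; the AFECTs of the paper probe exactly this condition via a U-statistic that estimates part of that bias, which this sketch does not attempt to reproduce.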
