Stochastic Treatment Choice with Empirical Welfare Updating
This paper proposes a novel method for estimating individualised treatment assignment rules. The method is designed to find rules that are stochastic, reflecting uncertainty both in the estimation of an assignment rule and about its welfare performance. Our approach is to form a prior distribution over assignment rules, not over data generating processes, and to update this prior based upon an empirical welfare criterion rather than a likelihood. The social planner then assigns treatment by drawing a policy from the resulting posterior. We derive analytically a welfare-optimal way of updating the prior using empirical welfare; since this posterior is infeasible to compute, we propose a variational Bayes approximation to it. We characterise the welfare regret of the assignment rule based upon this variational Bayes approximation, showing that it converges to zero at a rate of ln(n)/sqrt(n). We illustrate the implementation of our methods with experimental data from the Job Training Partnership Act Study.
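To make the high-level recipe concrete, the sketch below illustrates one simple instance of updating a prior over assignment rules with an empirical welfare criterion: a Gibbs-style exponential tilting of a uniform prior over a finite class of threshold policies, with empirical welfare estimated by inverse-propensity weighting on simulated experimental data. The finite policy class, the IPW welfare estimate, and the sqrt(n) tilting temperature are all illustrative assumptions; the paper's actual update and its variational Bayes approximation may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated experimental data: covariate X, randomised treatment D, outcome Y.
# (Hypothetical DGP: treatment is more beneficial for larger X.)
n = 500
X = rng.uniform(0, 1, n)
D = rng.integers(0, 2, n)            # randomised treatment, P(D = 1) = 0.5
Y = X * D + rng.normal(0, 1, n)

# Finite class of threshold policies: treat iff X >= t.
thresholds = np.linspace(0, 1, 21)

def empirical_welfare(t):
    """IPW estimate of the welfare of the rule 'treat iff X >= t'."""
    assign = (X >= t).astype(float)
    # Reweight outcomes of units whose realised treatment matches the rule.
    return np.mean(Y * (D * assign / 0.5 + (1 - D) * (1 - assign) / 0.5))

W = np.array([empirical_welfare(t) for t in thresholds])

# Gibbs-style update: tilt a uniform prior over policies by exp(beta * W_n(t)),
# with beta growing like sqrt(n) (an illustrative calibration, not the paper's).
prior = np.full(len(thresholds), 1.0 / len(thresholds))
beta = np.sqrt(n)
posterior = prior * np.exp(beta * (W - W.max()))   # subtract max for stability
posterior /= posterior.sum()

# The planner assigns treatment stochastically by drawing a policy
# from the resulting posterior over rules.
drawn_t = rng.choice(thresholds, p=posterior)
```

The posterior concentrates on high-empirical-welfare thresholds as n grows, while the residual spread reflects uncertainty about which rule is welfare-best in finite samples.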