Demystifying Statistical Learning Based on Efficient Influence Functions
Evaluation of treatment effects and more general estimands is typically achieved via parametric modeling, which is unsatisfactory since model misspecification is likely. Data-adaptive model building (e.g., statistical/machine learning) is commonly employed to reduce the risk of misspecification. Naïve use of such methods, however, delivers estimators whose bias may shrink too slowly with sample size for inferential methods to perform well, including those based on the bootstrap. Bias arises because standard data-adaptive methods are tuned toward minimal prediction error rather than, for example, minimal mean squared error (MSE) of the estimator. Moreover, the complexity of such strategies may cause excess variability that is difficult to acknowledge. Building on results from nonparametric statistics, targeted learning and debiased machine learning overcome these problems by constructing estimators using the estimand’s efficient influence function under the nonparametric model. These increasingly popular methodologies typically assume that the efficient influence function is given, or that the reader is familiar with its derivation. In this article, we focus on derivation of the efficient influence function and explain how it may be used to construct statistical/machine-learning-based estimators. We discuss the requisite conditions for these estimators to perform well and use diverse examples to convey the broad applicability of the theory.
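As a concrete illustration of the estimator construction the abstract describes, consider the average treatment effect (ATE) ψ = E[Y(1) − Y(0)]. Under the nonparametric model its efficient influence function is the well-known augmented inverse-probability-weighted function φ(O) = m₁(X) − m₀(X) + A(Y − m₁(X))/e(X) − (1 − A)(Y − m₀(X))/(1 − e(X)) − ψ, where e(X) is the propensity score and m_a(X) = E[Y | A = a, X]; the one-step (debiased machine learning) estimator is the empirical mean of the uncentered part of φ with cross-fitted nuisance estimates, and the sample variance of φ yields a standard error. The sketch below is not taken from the article: the simulated data, the choice of learners (logistic regression for e, gradient boosting for m₀ and m₁), and all names are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Simulated observational data: X confounds treatment A and outcome Y.
n = 2000
X = rng.normal(size=(n, 3))
p_true = 1 / (1 + np.exp(-X[:, 0]))          # true propensity score
A = rng.binomial(1, p_true)
Y = X[:, 0] + 2 * A + rng.normal(size=n)     # true ATE = 2

# One-step (AIPW) estimator of the ATE with 2-fold cross-fitting:
# average over i of
#   m1(Xi) - m0(Xi) + Ai(Yi - m1(Xi))/e(Xi) - (1-Ai)(Yi - m0(Xi))/(1-e(Xi))
phi = np.zeros(n)
for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
    # Nuisance models are fitted on the training fold only.
    e_fit = LogisticRegression().fit(X[train], A[train])
    m1_fit = GradientBoostingRegressor().fit(X[train][A[train] == 1],
                                             Y[train][A[train] == 1])
    m0_fit = GradientBoostingRegressor().fit(X[train][A[train] == 0],
                                             Y[train][A[train] == 0])
    e_hat = np.clip(e_fit.predict_proba(X[test])[:, 1], 0.01, 0.99)
    m1_hat, m0_hat = m1_fit.predict(X[test]), m0_fit.predict(X[test])
    phi[test] = (m1_hat - m0_hat
                 + A[test] * (Y[test] - m1_hat) / e_hat
                 - (1 - A[test]) * (Y[test] - m0_hat) / (1 - e_hat))

ate_hat = phi.mean()
se_hat = phi.std(ddof=1) / np.sqrt(n)        # variance of the EIF drives the CI
print(f"ATE estimate: {ate_hat:.3f} (SE {se_hat:.3f})")
```

Cross-fitting (estimating the nuisance functions on one fold and evaluating φ on the other) is what lets flexible, data-adaptive learners be used without the slow-shrinking bias the abstract warns about, provided both nuisance estimators converge sufficiently fast.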
| Field | Value |
|---|---|
| Item Type | Article |
| Elements ID | 163687 |
| Official URL | http://dx.doi.org/10.1080/00031305.2021.2021984 |