Empirical Bayes in Bayesian learning: understanding a common practice

Biostatistics seminar with Sonia Petrone, Professor, Department of Decision Sciences, Bocconi University, Milan, Italy.

Abstract

In applications of Bayesian procedures, even when the prior law is carefully specified, eliciting the prior hyperparameters may be delicate, so it is often tempting to fix them from the data, usually by their maximum marginal likelihood estimates (MMLE), obtaining a so-called empirical Bayes posterior distribution. Although questionable, this is a common practice, often regarded as a computationally convenient approximation of a genuine Bayesian procedure. However, whether it actually is such, and which Bayesian inference it would approximate, is unclear, and most theoretical results seem available only on a case-by-case basis. In the talk we will discuss this empirical Bayes practice, suggesting a theoretical framework that allows us to give formal content to the above common beliefs and to prove general results for parametric models.
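
As a concrete illustration of the plug-in procedure described above, here is a minimal sketch in Python (not part of the talk): a conjugate normal model whose prior variance is treated as an unknown hyperparameter, estimated by maximizing the marginal likelihood and then plugged back into the posterior. The model, variable names, and numerical settings are illustrative assumptions, not taken from the abstract.

```python
# Minimal sketch (illustrative only): empirical Bayes via maximum marginal
# likelihood in a conjugate normal model with known observation variance.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n, sigma2, theta_true = 50, 1.0, 2.0          # illustrative settings
x = rng.normal(theta_true, np.sqrt(sigma2), size=n)

def neg_log_marginal(log_tau2):
    """-log m(x | tau^2) for the prior theta ~ N(0, tau^2):
    marginally x ~ N(0, sigma^2 I + tau^2 11')."""
    tau2 = np.exp(log_tau2)
    cov = sigma2 * np.eye(n) + tau2 * np.ones((n, n))
    return -multivariate_normal(mean=np.zeros(n), cov=cov).logpdf(x)

# MMLE of the hyperparameter (optimized on the log scale to keep it positive)
res = minimize_scalar(neg_log_marginal, bounds=(-10, 10), method="bounded")
tau2_hat = np.exp(res.x)

# Plug-in ("empirical Bayes") posterior for theta, by conjugacy: N(mean, var)
post_var = 1.0 / (n / sigma2 + 1.0 / tau2_hat)
post_mean = (x.sum() / sigma2) * post_var
print(f"MMLE tau^2 = {tau2_hat:.3f}, EB posterior: N({post_mean:.3f}, {post_var:.4f})")
```
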

We first establish the limit behavior of the MMLE in quite general settings; we also conceptualize the frequentist setting as a so far unexplored case of maximum likelihood estimation under model misspecification. Finally, we show that, in regular cases, the empirical Bayes posterior is a fast approximation to the "oracle" Bayesian posterior distribution, namely the one corresponding to the prior law that, within the given class, is most informative about the true values of the model parameters. This approximation is faster than the one given by classic Bernstein-von Mises results.
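
To fix ideas, the following schematic notation (introduced here, not taken from the abstract) may help in reading these statements: the MMLE maximizes the marginal likelihood over the hyperparameter, and the resulting plug-in posterior tracks the posterior under an "oracle" value of the hyperparameter.

```latex
% Schematic notation, introduced only for illustration (not from the abstract).
% Prior family \pi(\theta \mid \lambda), \lambda \in \Lambda; data X_{1:n} from p_\theta.
\[
  \hat{\lambda}_n = \arg\max_{\lambda \in \Lambda} m(X_{1:n} \mid \lambda),
  \qquad
  m(X_{1:n} \mid \lambda) = \int p_{\theta}(X_{1:n}) \, \pi(\theta \mid \lambda) \, d\theta,
\]
\[
  \pi(\theta \mid X_{1:n}, \hat{\lambda}_n) \;\approx\; \pi(\theta \mid X_{1:n}, \lambda^{*}),
\]
% where \lambda^{*} indexes the "oracle" prior, the member of the given class most
% informative about the true parameter value; per the abstract, this approximation
% is faster than the classic Bernstein--von Mises one.
```
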

These results assume that the class of priors is given; choosing the class of priors is a broader problem, extensively studied in Bayesian statistics.

This is joint work with Judith Rousseau and Stefano Rizzelli.
