School of Mathematical Sciences

Bayesian shrinkage prediction and its application to regression problems

Speaker: 
Kei Kobayashi
Department of Mathematical Analysis and Statistical Inference, The Institute of Statistical Mathematics
Date/Time: 
Thu, 04/03/2010 - 16:30
Room: 
M203
In this talk, we consider Bayesian shrinkage predictions for the Normal regression problem under the frequentist Kullback-Leibler risk function. These results extend Komaki (2001, Biometrika) and George (2006, Ann. Statist.).
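
For reference, the basic setting can be written as follows (notation ours, not from the abstract): one observes $y \sim N_d(\mu, \Sigma)$ with $\Sigma$ known and evaluates a predictive density $\hat{p}(\tilde{y})$ for a future observation $\tilde{y} \sim N_d(\mu, \tilde{\Sigma})$ by the Kullback-Leibler risk

\[
R(\mu, \hat{p}) = \int p(\tilde{y} \mid \mu, \tilde{\Sigma}) \, \log \frac{p(\tilde{y} \mid \mu, \tilde{\Sigma})}{\hat{p}(\tilde{y})} \, d\tilde{y},
\]

where the Bayesian predictive density based on a prior $\pi$ is $\hat{p}_\pi(\tilde{y}) = \int p(\tilde{y} \mid \mu, \tilde{\Sigma}) \, \pi(\mu \mid y) \, d\mu$.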

First, we consider the multivariate Normal model with an unknown mean and a known covariance, where the covariance matrix of the future observations may differ from that of the observed samples. We assume priors that are rotation invariant with respect to both the current and the future covariance matrices, and we show that the shrinkage predictive density based on a rescaled rotation-invariant superharmonic prior is minimax under the Kullback-Leibler risk. Moreover, if the prior is not constant, the Bayesian predictive density based on it dominates the one based on the uniform prior.
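
For readers less familiar with the terminology, the two key notions are standard (our notation): a prior $\pi$ on $\mathbb{R}^d$ is superharmonic if

\[
\Delta \pi(\mu) = \sum_{i=1}^{d} \frac{\partial^2 \pi}{\partial \mu_i^2}(\mu) \le 0,
\]

and a predictive density $\hat{p}_\pi$ dominates $\hat{p}_U$, the one based on the uniform prior, if $R(\mu, \hat{p}_\pi) \le R(\mu, \hat{p}_U)$ for all $\mu$, with strict inequality for some $\mu$.
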
In this case, the rescaled priors do not depend on the covariance matrix of the future samples. Therefore, we can calculate the posterior distribution and the mean of the predictive distribution (i.e., the posterior mean, which is also the Bayes estimate under quadratic loss) based on some of the rescaled Stein priors without knowledge of the future covariance. Since the predictive density based on the uniform prior is minimax, the one based on each rescaled Stein prior is also minimax.
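
A concrete example, standard in this literature (the exact rescaling used in the talk is not stated here), is the Stein prior

\[
\pi_S(\mu) = \|\mu\|^{-(d-2)}, \qquad d \ge 3,
\]

which is superharmonic and non-constant; the resulting posterior mean shrinks the observation toward the origin, in the spirit of the James-Stein estimator.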

Next, we consider Bayesian predictions whose priors may depend on the future covariance. In this case, we prove that the Bayesian prediction based on a rescaled superharmonic prior dominates the one based on the uniform prior, without assuming rotation invariance.
Applying these results to the prediction of response variables in the Normal regression model, we show that there exists a prior distribution such that the corresponding Bayesian predictive density dominates the one based on the uniform prior. Since this prior depends on the future explanatory variables, both the posterior distribution and the mean of the predictive distribution may also depend on the future explanatory variables.
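
For orientation, a minimal sketch of the reduction (our notation, assuming the standard Normal linear model with known error variance): with past responses $y = X\beta + \varepsilon$ and future responses $\tilde{y} = \tilde{X}\beta + \tilde{\varepsilon}$, where $\varepsilon$ and $\tilde{\varepsilon}$ are Gaussian, the least-squares estimator satisfies

\[
\hat{\beta} = (X^\top X)^{-1} X^\top y \sim N\big(\beta, \; \sigma^2 (X^\top X)^{-1}\big),
\]

so predicting $\tilde{y}$ amounts to the Normal mean-prediction problem above, with the "future covariance" determined by the future design matrix $\tilde{X}$. This is why priors, and hence posteriors, may depend on the future explanatory variables.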

The Stein effect is robust in the sense that it depends on the loss function rather than on the true distribution of the observations. Our results show that the Stein effect is also robust with respect to the covariance of the true distribution of the future observations.