Bayesian empirical likelihood for linear regression and penalized regression
The likelihood function plays an essential role in statistical analysis: it is used to estimate the parameters of interest. To make inferences, one usually must specify a parametric model for the data, which is a challenging task because it requires the correct distribution to be specified, and an incorrectly specified parametric model can bias both the parameter estimates and the resulting inference. Non-parametric approaches avoid such misspecification but can be computationally costly. In this dissertation, we proposed an alternative approach based on Bayesian empirical likelihood for linear regression and penalized regression. The method is semi-parametric because it combines a non-parametric likelihood with a parametric prior. Its advantage is that it requires neither a parametric model nor linearity of the estimators; that is, we avoided the problems caused by model misspecification. By using Hamiltonian Monte Carlo, we averted convergence problems and the daunting task of finding an adequate proposal density in the Metropolis-Hastings method. We also showed that the maximum empirical likelihood estimator is consistent. Finally, because the posterior density under the Bayesian empirical likelihood framework lacks a closed form, the exact posterior distribution is difficult to obtain; we therefore derived the asymptotic distribution of the regression parameters in the linear regression model, along with Bayesian credible intervals.
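To illustrate the empirical likelihood ingredient underlying the approach, the following is a minimal sketch (not the dissertation's implementation) of the profile empirical likelihood ratio for the slope of a no-intercept linear model, computed via the standard Lagrangian dual; the simulated data, variable names, and bounds are illustrative assumptions.

```python
# Sketch: profile empirical likelihood for a no-intercept linear model
# y_i = beta * x_i + e_i, using the estimating function g_i(beta) = x_i (y_i - beta x_i).
# Everything below (data, seed, search bounds) is an illustrative assumption.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 1.5 * x + rng.normal(size=n)        # true slope 1.5

def log_el_ratio(beta):
    """Log empirical likelihood ratio log R(beta).

    The EL weights maximize prod(n w_i) subject to sum(w_i) = 1 and
    sum(w_i g_i) = 0.  By Lagrangian duality,
    log R(beta) = -max_lam sum(log(1 + lam * g_i)),
    with lam restricted so that every 1 + lam * g_i stays positive.
    """
    g = x * (y - beta * x)
    if g.max() <= 0 or g.min() >= 0:    # 0 outside the convex hull: R(beta) = 0
        return -np.inf
    lo, hi = -1.0 / g.max(), -1.0 / g.min()
    eps = 1e-9 * (hi - lo)
    res = minimize_scalar(lambda lam: -np.sum(np.log1p(lam * g)),
                          bounds=(lo + eps, hi - eps), method="bounded")
    return res.fun                      # res.fun = -h(lam*) = log R(beta)

# Maximum empirical likelihood estimate of the slope (grid-free 1-D search)
opt = minimize_scalar(lambda b: -log_el_ratio(b),
                      bounds=(0.0, 3.0), method="bounded")
beta_hat = opt.x
```

In this just-identified case the maximum empirical likelihood estimate coincides with least squares, where the log ratio attains its maximum of zero; in a Bayesian empirical likelihood analysis, `log_el_ratio` would replace the parametric log-likelihood inside the (e.g., Hamiltonian Monte Carlo) posterior sampler.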