This paper implements a parameter-expansion technique for Markov chain Monte Carlo (MCMC) estimation of the multivariate probit (MVP) model. Since the covariance matrix is not identifiable from multivariate ordinal data, the MVP model restricts it to a correlation matrix. This restriction complicates Bayesian inference: the structural constraints on correlation matrices make it difficult to choose a good prior and to sample from the posterior.
The main advantage of parameter expansion for this model is that the covariance matrix can be drawn in the expanded parameter space from an inverse Wishart distribution with a simple Gibbs sampler. At each iteration, all parameters are drawn in the expanded space and then rescaled back to their original form. Since each draw depends only on the most recent value of the latent variable, the dependence between iterations is reduced, which improves the mixing and convergence rate of the MCMC.
Parameter expansion for the MVP model also makes stochastic search variable selection (SSVS) easy to implement. An indicator vector $\lambda$ is included as a parameter, where $\lambda_i = 1$ indicates that covariate $i$ is selected for the model; a toy illustration follows.
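As a minimal illustration (not from the paper), the indicator simply masks the design matrix; the SSVS update of $\lambda$ itself is not shown, and all names are made up:

```python
import numpy as np

# Toy illustration: lam[i] == 1 means covariate i enters the regression.
lam = np.array([1, 0, 1, 1, 0])      # hypothetical indicator for p = 5 covariates
X = np.random.randn(5, 100)          # design matrix, covariates in rows (p x n)
X_selected = X[lam == 1, :]          # only the selected covariates are kept
```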
Gibbs sampler algorithm:
Let the $q$-variate vector $Z_i \sim MVN( \beta X_i, R )$ be a latent variable, truncated in each dimension according to the corresponding cutpoints, where $R$ is a $q \times q$ correlation matrix.
Let $\beta$ be a $q \times p$ matrix of regression coefficients, $\gamma$ be the set of cutpoints with $\gamma_0 = -\infty$, $\gamma_1 = 0$, and $\gamma_k = \infty$, and $k$ be the number of response categories.
Map $Z$ onto $W$:
Cycle through draws of the univariate truncated normal $Z_{i,j} \sim N\!\left(\mu_{i,j} - \frac{1}{R^{-1}_{j,j}}R^{-1}_{j,-j}(Z_{i,-j}-\mu_{i,-j}),\ \frac{1}{R^{-1}_{j,j}}\right)$, where $\mu_i = \beta X_i$, to obtain a draw from the truncated multivariate normal distribution of $W_i$.
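A minimal sketch of one such univariate conditional draw, assuming NumPy/SciPy and that the truncation bounds for $Z_{i,j}$ come from the cutpoints of the observed category $Y_{i,j}$ (function and argument names are illustrative):

```python
import numpy as np
from scipy.stats import truncnorm

def draw_z_ij(z_i, mu_i, R_inv, j, lower, upper):
    """Draw Z_{i,j} from its univariate truncated normal full conditional.

    z_i, mu_i    : length-q vectors (current latent values and mean beta @ x_i)
    R_inv        : precision matrix, inverse of the correlation matrix R
    lower, upper : truncation bounds implied by the cutpoints for Y_{i,j}
    """
    idx = np.arange(len(z_i)) != j                       # the "-j" indices
    cond_var = 1.0 / R_inv[j, j]
    cond_mean = mu_i[j] - cond_var * R_inv[j, idx] @ (z_i[idx] - mu_i[idx])
    # truncnorm expects bounds on the standardized scale
    a = (lower - cond_mean) / np.sqrt(cond_var)
    b = (upper - cond_mean) / np.sqrt(cond_var)
    return truncnorm.rvs(a, b, loc=cond_mean, scale=np.sqrt(cond_var))
```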
$R \rightarrow \Sigma$:
Draw $\Sigma \mid W \sim IW(n-p+q-1,\ WW^T - WX^T(XX^T)^{-1}XW^T)$
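A sketch of this draw using SciPy's `invwishart`, assuming $W$ is stored as a $q \times n$ matrix and $X$ as a $p \times n$ design matrix:

```python
import numpy as np
from scipy.stats import invwishart

def draw_sigma(W, X):
    """Draw Sigma | W from the inverse-Wishart full conditional above.

    W : (q x n) latent matrix in the expanded space
    X : (p x n) design matrix
    """
    q, n = W.shape
    p = X.shape[0]
    XXt_inv = np.linalg.inv(X @ X.T)                     # (p x p)
    scale = W @ W.T - W @ X.T @ XXt_inv @ X @ W.T        # residual cross-product (q x q)
    return invwishart.rvs(df=n - p + q - 1, scale=scale)
```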
$\beta \rightarrow \alpha$:
Draw $\alpha | W, \Sigma \sim N_{q,p}(WX^T(XX^T)^{-1}, \Sigma \otimes (XX^T)^{-1})$
where $N_{q,p}$ is a matrix variate normal distribution.
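A sketch of this draw using SciPy's `matrix_normal`, with row covariance $\Sigma$ (between responses) and column covariance $(XX^T)^{-1}$ (between covariates) standing in for the Kronecker covariance above; shapes are as in the previous snippet:

```python
import numpy as np
from scipy.stats import matrix_normal

def draw_alpha(W, X, Sigma):
    """Draw alpha | W, Sigma from the matrix-variate normal full conditional above."""
    XXt_inv = np.linalg.inv(X @ X.T)          # (p x p)
    mean = W @ X.T @ XXt_inv                  # (q x p) least-squares fit of W on X
    return matrix_normal.rvs(mean=mean, rowcov=Sigma, colcov=XXt_inv)
```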
$\gamma \rightarrow \theta$:
Draw $\theta_{j,c+1} \mid W, Y \sim Unif\!\left(\max_i\{W_{i,j} \mid Y_{i,j}=c\},\ \min_i\{W_{i,j} \mid Y_{i,j} = c + 1\}\right)$ for all free cutpoints.
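A sketch of one such uniform cutpoint draw, assuming `W_j` and `Y_j` hold the latent values and observed categories for response $j$ across all observations (names are illustrative):

```python
import numpy as np

def draw_cutpoint(W_j, Y_j, c):
    """Draw the free cutpoint theta_{j,c+1} | W, Y ~ Unif(lower, upper)."""
    lower = np.max(W_j[Y_j == c])        # largest latent value in category c
    upper = np.min(W_j[Y_j == c + 1])    # smallest latent value in category c + 1
    return np.random.uniform(lower, upper)
```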
Rescale the parameters back to the original space:
$Z_{i,j} = \dfrac{W_{i,j}}{\sqrt{\Sigma_{j,j}}}$
$R_{i,j} = \dfrac{\Sigma_{i,j}}{\sqrt{\Sigma_{i,i}\cdot\Sigma_{j,j}}}$
$\beta_{k,j} = \dfrac{\alpha_{k,j}}{\sqrt{\Sigma_{j,j}}}$
$\gamma_{j,c+1} = \dfrac{\theta_{j,c+1}}{\sqrt{\Sigma_{j,j}}}$
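A minimal sketch of this rescaling step, assuming the same shapes as in the earlier snippets and a hypothetical $q \times K$ array `theta` of cutpoints per response:

```python
import numpy as np

def rescale(W, Sigma, alpha, theta):
    """Map the expanded-space draws back to the identified (correlation) scale.

    W     : (q x n) latent matrix
    Sigma : (q x q) covariance draw
    alpha : (q x p) coefficient draw
    theta : (q x K) cutpoints for each response
    """
    d = np.sqrt(np.diag(Sigma))          # per-response scales sqrt(Sigma_{jj})
    Z = W / d[:, None]                   # latent variables on the identified scale
    R = Sigma / np.outer(d, d)           # correlation matrix
    beta = alpha / d[:, None]            # regression coefficients
    gamma = theta / d[:, None]           # cutpoints
    return Z, R, beta, gamma
```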
Repeat.