We use a Markov chain Monte Carlo algorithm called Gibbs sampling (Gelfand and Smith 1990, Gilks et al. 1996) to draw samples from joint posterior distributions of model parameters. The Gibbs sampler is well suited to the model described in Section 2.2 because the conditional posterior distributions of its parameters are relatively easy to sample. For example, when inferences are based on only 1 year of data (as in Sections 2.5.1 and 2.5.2), the joint posterior density is formed by taking the product of the likelihood function
where τ = 1/σ², and the prior. Assuming mutually independent, Uniform(0,1) priors for each component of β, a Uniform(-1,1) prior for ρ, and a conjugate Gamma(1,2) prior for τ, the prior density function of model parameters is
Forming the product of Eq. A.1 and Eq. A.2 and excluding terms without parameters yields the joint posterior density (modulo the normalizing constant):
Random draws of β, ρ, and τ are difficult to compute by sampling (A.3) directly. Gibbs sampling provides a sample from (A.3) by computing random draws from each parameter's full conditional posterior distribution, in which the values of the other parameters are held constant. By alternating among the parameters, the Gibbs sampler yields a stochastic sequence (actually a Markov chain) whose stationary distribution is the joint posterior; thus, a sample from the stationary Markov chain is also a sample from (A.3). The conditional posterior densities needed for Gibbs sampling are readily derived from the joint posterior density. For example, ignoring terms in (A.3) that do not include τ yields the full conditional density for τ (modulo its normalizing constant)
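As a concrete illustration of the alternating scheme (a toy example of ours, not the model of Section 2.2), the sketch below uses Gibbs sampling to draw from a standard bivariate normal distribution with correlation r, alternating between the two full conditionals x | y ~ N(ry, 1 - r²) and y | x ~ N(rx, 1 - r²):

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_bivariate_normal(r, n_iter=20000):
    """Gibbs sampler for a standard bivariate normal with correlation r.
    Each step draws one coordinate from its full conditional while the
    other coordinate is held fixed, as described in the text."""
    x, y = 0.0, 0.0
    sd = np.sqrt(1.0 - r**2)          # conditional standard deviation
    chain = np.empty((n_iter, 2))
    for t in range(n_iter):
        x = rng.normal(r * y, sd)     # draw from [x | y]
        y = rng.normal(r * x, sd)     # draw from [y | x]
        chain[t] = (x, y)
    return chain

chain = gibbs_bivariate_normal(r=0.8)
# After discarding burn-in, the empirical correlation should be near 0.8
corr = np.corrcoef(chain[1000:].T)[0, 1]
```

Although each conditional draw uses only one coordinate at a time, the chain's stationary distribution is the full joint distribution, which is exactly the property exploited for (A.3).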
This function is proportional to the density function of a Gamma distribution, so random draws from (A.4) are relatively easy to compute as follows:
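Since Eq. A.4 is not reproduced here, the sketch below assumes the standard conjugate update for a normal model with known mean: with a Gamma(a, b) prior (shape a, rate b) on the precision τ = 1/σ² and a residual vector e, the full conditional is Gamma(a + n/2, b + Σeᵢ²/2). The function name and the residual vector are illustrative placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_tau(resid, a=1.0, b=2.0):
    """Draw tau = 1/sigma^2 from its Gamma full conditional, assuming a
    Gamma(a, b) prior (shape a, rate b) and normally distributed residuals.
    Conditional: Gamma(a + n/2, b + sum(resid**2)/2)."""
    shape = a + 0.5 * resid.size
    rate = b + 0.5 * np.sum(resid**2)
    # numpy parameterizes the gamma by shape and scale = 1/rate
    return rng.gamma(shape, 1.0 / rate)

# Illustrative check: residuals with sd 0.5 imply tau near 1/0.25 = 4
resid = rng.normal(0.0, 0.5, size=1000)
draws = np.array([draw_tau(resid) for _ in range(2000)])
```

The draw is exact (no accept/reject step), which is why a conjugate conditional such as (A.4) is the easiest component of the Gibbs cycle.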
The full conditional densities of β and ρ are derived from (A.3) in the same way as that of τ:
These conditional densities do not have a familiar form, but they may still be sampled using adaptive rejection sampling (Gilks 1992, Gilks and Wild 1992) or other algorithms for computing random draws from univariate densities (Carlin and Louis 2000, p. 131-137). When inferences are based on 2 years of data (as in Section 2.5.3), we form the joint posterior density as before and obtain
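Adaptive rejection sampling is lengthy to implement from scratch; as a simple stand-in (our substitute illustration, not the algorithm of Gilks and Wild 1992), the sketch below draws approximately from an unnormalized univariate log density by evaluating it on a grid and inverting the empirical CDF, an approach sometimes called "griddy Gibbs":

```python
import numpy as np

rng = np.random.default_rng(2)

def grid_draw(log_density, lo, hi, m=1024):
    """Approximate draw from an unnormalized density on [lo, hi]:
    evaluate the log density on a grid, exponentiate stably, and
    invert the resulting empirical CDF."""
    x = np.linspace(lo, hi, m)
    logp = log_density(x)
    p = np.exp(logp - logp.max())   # subtract max for numerical stability
    cdf = np.cumsum(p)
    u = rng.random() * cdf[-1]
    return x[np.searchsorted(cdf, u)]

# Illustrative check against a standard normal log density
draws = np.array([grid_draw(lambda x: -0.5 * x**2, -6.0, 6.0)
                  for _ in range(4000)])
```

Only the unnormalized conditional density is required, which is all that a Gibbs step such as (A.5) or (A.6) provides; a bounded support, such as (-1, 1) for an autoregressive parameter, makes the grid approach especially convenient.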
Gibbs sampling may be used to sample (A.7) by computing random draws from the following full conditional distributions (modulo their normalizing constants):
Given a sample from the joint posterior distribution of model parameters, the method of composition (Tanner 1996) may be used to compute a sample from the posterior predictive distribution (Eq. 7) associated with a particular set of management actions; then, Monte Carlo integration may be used to estimate the expected loss (Eq. 8) associated with this set of management actions. We demonstrate these calculations, which are rather trivial for the autoregressive model, using the example in Section 2.5.3.

Suppose Gibbs sampling has been used to compute an arbitrarily large sample from the joint posterior distribution (Eq. 5), and let θ(r) = (β(r), σ²(r), ρ(r)) denote the rth element in this sample. We require a sample of the posterior predictive distribution of vegetation responses [y3 | X3, y1, y2, X1, X2] associated with the proposed management actions specified in X3. By applying the method of composition to (Eq. 7), the rth element y3(r) is easily obtained by computing a random draw from the following n-variate normal distribution: N(X3β(r) + ρ(r)(y2 - X2β(r)), σ²(r)I), where I is the n×n identity matrix.

The loss function used in the example of Section 2.5.3 is absolute error. To estimate the expected loss associated with the proposed management actions X3, we use Monte Carlo integration to average over the posterior uncertainty expressed in the predictions of y3:
where R denotes the number of draws computed from the posterior predictive distribution of y3.
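The two steps can be sketched as follows; the posterior draws, design matrices, and the target vector inside the absolute-error loss are placeholders for the quantities of Section 2.5.3, not the paper's actual data:

```python
import numpy as np

rng = np.random.default_rng(3)

def predictive_draw(beta, sigma2, rho, X2, X3, y2):
    """Method of composition: given one posterior draw (beta, sigma2, rho),
    draw y3 from N(X3 beta + rho (y2 - X2 beta), sigma2 I)."""
    mean = X3 @ beta + rho * (y2 - X2 @ beta)
    return rng.normal(mean, np.sqrt(sigma2))

def expected_abs_loss(posterior_draws, X2, X3, y2, target):
    """Monte Carlo estimate of expected loss: average an absolute-error
    loss |y3 - target| over R posterior predictive draws of y3."""
    losses = [np.mean(np.abs(predictive_draw(b, s2, r, X2, X3, y2) - target))
              for (b, s2, r) in posterior_draws]
    return float(np.mean(losses))

# Tiny illustrative check with a degenerate (near-zero variance) posterior:
# when y2 = X2 beta, the predictive mean is X3 beta, so the loss against
# that same target should be essentially zero.
X2 = X3 = np.eye(2)
beta = np.array([1.0, 1.0])
draws = [(beta, 1e-12, 0.5)] * 100            # R = 100 identical draws
loss = expected_abs_loss(draws, X2, X3, X2 @ beta, np.array([1.0, 1.0]))
```

Each predictive draw propagates both parameter uncertainty (through the sampled β, σ², ρ) and sampling variability (through the normal noise), so averaging the loss over the R draws approximates the expectation in Eq. 8.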
Carlin, B. P., and T. A. Louis. 2000. Bayes and empirical Bayes methods for data analysis, second edition. Chapman and Hall, Boca Raton, Florida, USA.
Gelfand, A. E., and A. F. M. Smith. 1990. Sampling-based approaches to calculating marginal densities. Journal of the American Statistical Association 85:398-409.
Gilks, W. R. 1992. Derivative-free adaptive rejection sampling for Gibbs sampling. Pages 641-649 in J. M. Bernardo, J. O. Berger, A. P. Dawid, and A. F. M. Smith, editors. Bayesian statistics 4. Oxford University Press, Oxford, UK.
Gilks, W. R., S. Richardson, and D. J. Spiegelhalter. 1996. Markov chain Monte Carlo in practice. Chapman and Hall, London, UK.
Gilks, W. R., and P. Wild. 1992. Adaptive rejection sampling for Gibbs sampling. Applied Statistics 41:337-348.
Tanner, M. A. 1996. Tools for statistical inference: methods for the exploration of posterior distributions and likelihood functions, third edition. Springer-Verlag, New York, New York, USA.