### J. Andrew Royle, Marc Kéry, Roland Gautier, and Hans Schmid. 2007. Hierarchical spatial models of abundance and occurrence from imperfect survey data. Ecological Monographs 77:465–481.

Appendix A. Integrated likelihood and Bayesian analysis.

### 1 The Integrated Likelihood

For purposes of estimating model parameters, it is convenient to remove the collection of abundance parameters, {N_i; i = 1, …, R}, from the likelihood by integration, yielding a likelihood that is a function only of the detection parameters α and the intensity parameters {λ_i(z_i, β); i = 1, …, R}. In the present case, the integrated likelihood of the data from quadrat i is

$$
[\mathbf{x}_i \mid \alpha, \lambda_i] \;=\; \sum_{N_i \ge n_i} [\mathbf{x}_i \mid N_i, \alpha]\,[N_i \mid \lambda_i(\mathbf{z}_i, \beta)],
$$

where [N_i | λ_i(z_i, β)] is the Poisson (prior) distribution on abundance and [x_i | N_i, α] is the multinomial distribution of the detection-history counts, with cell probabilities π_h(α). In this expression, n_i = ∑_{h>0} x_ih is the number of observed individuals at sample location i. In this case, the integrated likelihood of the data for quadrat i simplifies to a product-Poisson likelihood:

$$
[\mathbf{x}_i \mid \alpha, \lambda_i] \;=\; \prod_{h>0} \frac{\left(\lambda_i \pi_h(\alpha)\right)^{x_{ih}}\, e^{-\lambda_i \pi_h(\alpha)}}{x_{ih}!} \tag{A.1}
$$

and the joint likelihood is

$$
L(\alpha, \beta) \;=\; \prod_{i=1}^{R} \prod_{h>0} \frac{\left(\lambda_i \pi_h(\alpha)\right)^{x_{ih}}\, e^{-\lambda_i \pi_h(\alpha)}}{x_{ih}!}. \tag{A.2}
$$
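The product-Poisson form is straightforward to evaluate numerically. The sketch below (Python, standard library only; the data layout, function names, and the cell probabilities π_h(α) passed as a plain list are illustrative assumptions, not the authors' R code) computes the integrated log-likelihood of Eqs. A.1 and A.2: each observed detection-history count x_ih is simply Poisson with mean λ_i π_h(α).

```python
import math

def poisson_logpmf(x, mu):
    # log of the Poisson probability mass function with mean mu
    return x * math.log(mu) - mu - math.lgamma(x + 1)

def integrated_loglik(x, lam, pi):
    """Integrated log-likelihood of Eqs. A.1-A.2 (hypothetical helper).

    x[i][h] : count of individuals with observable detection history h (h > 0)
              at quadrat i
    lam[i]  : Poisson intensity lambda_i for quadrat i
    pi[h]   : multinomial cell probability pi_h(alpha) for history h > 0
    """
    ll = 0.0
    for i, lam_i in enumerate(lam):
        for h, pi_h in enumerate(pi):
            # product-Poisson: x_ih ~ Poisson(lambda_i * pi_h(alpha))
            ll += poisson_logpmf(x[i][h], lam_i * pi_h)
    return ll
```

In practice this function would be maximized over (α, β) with λ_i = λ_i(β); here λ and π are supplied directly to keep the sketch self-contained.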

#### 1.1 Maximum Likelihood Estimation and Prediction

When the log-intensity function contains only fixed effects, so that λ_i(z_i, β) ≡ λ_i(β), one may obtain parameter estimates by maximizing the integrated likelihood, yielding estimates α̂ and β̂. In this case, estimates or predictions of N for any site (sampled or not) can be obtained using a conventional "plug-in" type of procedure wherein α̂ and β̂ are substituted into the Best Unbiased Predictor (BUP) (see Royle and Nichols 2003; Royle 2004). In the present case, note that the conditional distribution of N_i given n_i = ∑_{h>0} x_ih is

$$
[N_i \mid n_i, \alpha, \lambda_i] \;=\; \frac{\left\{\lambda_i (1 - p(\alpha))\right\}^{N_i - n_i}\, e^{-\lambda_i (1 - p(\alpha))}}{(N_i - n_i)!}, \qquad N_i \ge n_i, \tag{A.3}
$$

where

$$
p(\alpha) \;=\; \sum_{h>0} \pi_h(\alpha)
$$

is the probability that an individual is detected at least once. That is, N_i − n_i has a Poisson distribution with mean λ_i(1 − p(α)).

Thus, an estimate of N_i can be taken as the mean of Eq. A.3, n_i + λ_i(β̂)(1 − p(α̂)), or its quantiles can be used to construct an interval estimate for N_i.
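The plug-in predictor is just the mean and quantiles of a Poisson distribution shifted by n_i. A minimal sketch (Python; function names are hypothetical, p denotes ∑_{h>0} π_h(α̂), and the inputs are assumed already estimated):

```python
import math

def conditional_N_pmf(N, n, lam, p):
    """Pr(N_i = N | n_i = n) from Eq. A.3: N - n ~ Poisson(lam * (1 - p))."""
    if N < n:
        return 0.0
    mu = lam * (1.0 - p)
    k = N - n
    return math.exp(k * math.log(mu) - mu - math.lgamma(k + 1))

def bup(n, lam, p):
    # Best Unbiased Predictor: the mean of Eq. A.3
    return n + lam * (1.0 - p)

def interval(n, lam, p, level=0.95):
    # equal-tailed interval from the cumulative distribution of Eq. A.3
    tail = (1.0 - level) / 2.0
    cdf, N = 0.0, n
    lo = hi = None
    while cdf < 1.0 - 1e-12:
        cdf += conditional_N_pmf(N, n, lam, p)
        if lo is None and cdf >= tail:
            lo = N
        if cdf >= 1.0 - tail:
            hi = N
            break
        N += 1
    return lo, hi
```

For example, with n_i = 3 observed individuals, λ̂_i = 10, and detection probability p = 0.8, the BUP is 3 + 10 × 0.2 = 5.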

However, a formal Bayesian procedure may be more appealing in order to better characterize the uncertainty that arises from having estimated the parameters α and β. Moreover, direct analysis by integrated likelihood is impractical when additional model structure containing random effects is considered (i.e., under the likelihood specified by Eq. A.2). We next consider a more general framework for estimation and prediction based on Markov chain Monte Carlo (MCMC).

### 2 Bayesian Analysis by MCMC

For the case where we have a spatial model specified on the parameters of the data model, the convenient form of the integrated likelihood given by Eq. A.2 is helpful, but the spatial effects pose some difficulty when attempting to numerically integrate the likelihood (as noted also by Diggle et al. 1998). To deal with these spatial effects, we devised an MCMC algorithm that iteratively updates each parameter in the model. The algorithm was implemented in the free software package R (Ihaka and Gentleman 1996) using conventional methods based on the Metropolis-Hastings algorithm and Gibbs sampling, yielding a hybrid algorithm referred to generically as Metropolized Gibbs sampling (Robert and Casella 1999, sec. 7.3). Because the structure of the algorithm is fairly conventional, we do not describe each of its components in detail.

Each iteration of the algorithm proceeds by sequentially drawing samples of the log-intensity parameters log(λ(s_i)), the vector of spatial effects z, and the remaining structural parameters (α, β, σ_ε², σ_z², θ) from their conditional posterior distributions. Operationally, the Poisson log-intensities u(s_i) ≡ log(λ(s_i)) (i.e., for the sampled quadrats) are updated using a Metropolis-Hastings step based on the integrated likelihood. Given u = {u(s_i); i = 1, …, R}, the vector z is updated by drawing a sample from a multivariate normal distribution using the Cholesky factorization method. Then β can be updated as a draw from a low-dimensional multivariate normal distribution. The detection probability parameters α are sampled by Metropolis-Hastings, and the variance components from their conjugate inverse-gamma distributions. Finally, the correlation parameter θ is updated on the log scale using a Metropolis step, with candidate values generated by perturbing the current value of log(θ). These steps are repeated a large number of times after a suitably long burn-in period, required to ensure that sampling from the target posterior distribution is achieved.
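To fix ideas, a single Metropolis update for one log-intensity u(s_i) under the integrated likelihood and its Gaussian prior Gau(z(s_i), σ_ε²) might look like the following. This is a simplified stdlib-Python sketch, not the authors' R implementation; the random-walk proposal scale and all function names are illustrative assumptions.

```python
import math, random

def log_target(u, x_i, pi, z_i, sigma_eps):
    """Log integrated likelihood for quadrat i (Eq. A.1) plus Gaussian prior on u."""
    lam = math.exp(u)
    ll = sum(x * math.log(lam * p) - lam * p - math.lgamma(x + 1)
             for x, p in zip(x_i, pi))
    # prior contribution: u ~ Gau(z_i, sigma_eps^2), up to an additive constant
    ll += -0.5 * ((u - z_i) / sigma_eps) ** 2
    return ll

def metropolis_update_u(u, x_i, pi, z_i, sigma_eps, step=0.2, rng=random):
    """One random-walk Metropolis step for u(s_i) = log(lambda(s_i))."""
    u_prop = u + rng.gauss(0.0, step)
    log_ratio = (log_target(u_prop, x_i, pi, z_i, sigma_eps)
                 - log_target(u, x_i, pi, z_i, sigma_eps))
    # symmetric proposal, so the acceptance ratio is the target ratio
    if log_ratio >= 0 or rng.random() < math.exp(log_ratio):
        return u_prop  # accept
    return u           # reject
```

A full sweep would apply this update to every sampled quadrat before moving on to z, β, α, the variance components, and θ.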

If desired, draws of N_i (for the sampled quadrats) may be obtained by adding a draw of the "unobserved" component of the population to the number of unique individuals observed:

$$
N_i \;=\; n_i + N_i^{(0)}, \qquad N_i^{(0)} \sim \text{Poisson}\!\left(\lambda(\mathbf{s}_i)\,(1 - p(\alpha))\right), \tag{A.4}
$$

with α and λ(s_i) ≡ exp{u(s_i)} set to their current values in the Markov chain.

#### 2.1 Posterior Prediction

Having obtained a large sample of parameters from the posterior distribution, one may use these samples to obtain predictions of various quantities of interest, such as N(s) for each (unsampled) quadrat s. For some unsampled quadrat s, note that the conditional posterior distribution of z(s) depends only on z (the spatial effects at the data locations) and the structural parameters β, θ, and σ_z². Consequently, after we have obtained a large posterior sample from the estimation component of the algorithm, draws of z(s) may be obtained from the full-conditional distribution [z(s) | z, …], which is univariate Gaussian. Then, conditional on z(s), [u(s) | z(s), …] = Gau(z(s), σ_ε²), and λ(s) ≡ exp{u(s)}. Finally, draw N(s) from

$$
N(\mathbf{s}) \sim \text{Poisson}(\lambda(\mathbf{s})). \tag{A.5}
$$

Several functions of N(s) may be of particular importance. For example, occurrence is equivalent to the event N(s) > 0, and the posterior probability of occurrence can be computed by obtaining a posterior sample {N(s)^(k); k = 1, …, K} and computing ψ(s) = K⁻¹ ∑_k I(N(s)^(k) > 0). Also, the total number of territories within any region B is N(B) = ∑_{s∈B} N(s), and posterior samples of this quantity can also be computed directly from the MCMC output.
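The prediction step thus reduces to a conditional Gaussian draw for z(s), a lognormal intensity, and a Poisson draw, with occurrence probabilities obtained as Monte Carlo averages. A sketch (stdlib Python; the conditional mean and standard deviation of [z(s) | z, …] are taken as given inputs here, and all names are hypothetical):

```python
import math, random

def predict_N(cond_mean, cond_sd, sigma_eps, rng=random):
    """One posterior-predictive draw of N(s) for an unsampled quadrat s."""
    z_s = rng.gauss(cond_mean, cond_sd)   # z(s) | z and structural parameters
    u_s = rng.gauss(z_s, sigma_eps)       # u(s) | z(s) ~ Gau(z(s), sigma_eps^2)
    lam_s = math.exp(u_s)                 # lambda(s) = exp{u(s)}
    # Poisson draw by CDF inversion (adequate for moderate lambda; capped loop)
    k = 0
    term = cdf = math.exp(-lam_s)
    p = rng.random()
    while p > cdf and k < 1000:
        k += 1
        term *= lam_s / k
        cdf += term
    return k

def occurrence_prob(draws):
    """psi(s): Monte Carlo estimate of Pr(N(s) > 0) from posterior draws."""
    return sum(1 for N in draws if N > 0) / len(draws)
```

Regional totals follow the same pattern: sum the per-quadrat draws of N(s) within the region at each MCMC iteration, then summarize across iterations.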

LITERATURE CITED

Diggle, P. J., J. A. Tawn, and R. A. Moyeed. 1998. Model-based geostatistics (with discussion). Applied Statistics 47:299–350.

Ihaka, R., and R. Gentleman. 1996. R: A language for data analysis and graphics. Journal of Computational and Graphical Statistics 5:299–314.

Royle, J. A., and J. D. Nichols. 2003. Estimating abundance from repeated presence–absence data or point counts. Ecology 84:777–790.

Royle, J. A. 2004. N-Mixture models for estimating population size from spatially replicated counts. Biometrics 60:108–115.

Robert, C. P., and G. Casella. 1999. Monte Carlo Statistical Methods. Springer-Verlag, New York, New York, USA.
