Contents
Preface
Preface to the Second Edition
R Software and Functions
Data Sets
Open Problems in Mixed Models
1 Introduction: Why Mixed Models?
1.1 Mixed effects for clustered data
1.2 ANOVA, variance components, and the mixed model
1.3 Other special cases of the mixed effects model
1.4 Compromise between Bayesian and frequentist approaches
1.5 Penalized likelihood and mixed effects
1.6 Healthy Akaike information criterion
1.7 Penalized smoothing
1.8 Penalized polynomial fitting
1.9 Restraining parameters, or what to eat
1.10 Ill-posed problems, Tikhonov regularization, and mixed effects
1.11 Computerized tomography and linear image reconstruction
1.12 GLMM for PET
1.13 Maple leaf shape analysis
1.14 DNA Western blot analysis
1.15 Where does the wind blow?
1.16 Software and books
1.17 Summary points
2 MLE for the LME Model
2.1 Example: weight versus height
2.1.1 The first R script
2.2 The model and log-likelihood functions
2.2.1 The model
2.2.2 Log-likelihood functions
2.2.3 Dimension-reduction formulas
2.2.4 Profile log-likelihood functions
2.2.5 Dimension-reduction GLS estimate
2.2.6 Restricted maximum likelihood
2.2.7 Weight versus height (continued)
2.3 Balanced random-coefficient model
2.4 LME model with random intercepts
2.4.1 Balanced random-intercept model
2.4.2 How a random effect affects the variance of the MLE
2.5 Criterion for MLE existence
2.6 Criterion for the positive definiteness of matrix D
2.6.1 Example of an invalid LME model
2.7 Pre-estimation bounds for variance parameters
2.8 Maximization algorithms
2.9 Derivatives of the log-likelihood function
2.10 Newton–Raphson algorithm
2.11 Fisher scoring algorithm
2.11.1 Simplified FS algorithm
2.11.2 Empirical FS algorithm
2.11.3 Variance-profile FS algorithm
2.12 EM algorithm
2.12.1 Fixed-point algorithm
2.13 Starting point
2.13.1 FS starting point
2.13.2 FP starting point
2.14 Algorithms for restricted MLE
2.14.1 Fisher scoring algorithm
2.14.2 EM algorithm
2.15 Optimization on nonnegative definite matrices
2.15.1 How often can one hit the boundary?
2.15.2 Allow matrix D not to be nonnegative definite
2.15.3 Force matrix D to stay nonnegative definite
2.15.4 Matrix D reparameterization
2.15.5 Criteria for convergence
2.16 lmeFS and lme in R
2.17 Appendix: proof of the existence of MLE
2.18 Summary points
3 Statistical Properties of the LME Model
3.1 Introduction
3.2 Identifiability of the LME model
3.2.1 Linear regression with random coefficients
3.3 Information matrix for variance parameters
3.3.1 Efficiency of variance parameters for balanced data
3.4 Profile-likelihood confidence intervals
3.5 Statistical testing of the presence of random effects
3.6 Statistical properties of MLE
3.6.1 Small-sample properties
3.6.2 Large-sample properties
3.6.3 ML and RML are asymptotically equivalent
3.7 Estimation of random effects
3.7.1 Implementation in R
3.8 Hypothesis and membership testing
3.8.1 Membership test
3.9 Ignoring random effects
3.10 MINQUE for variance parameters
3.10.1 Example: linear regression
3.10.2 MINQUE for σ²
3.10.3 MINQUE for D*
3.10.4 Linear model with random intercepts
3.10.5 MINQUE for the balanced model
3.10.6 lmevarMINQUE function
3.11 Method of moments
3.11.1 lmevarMM function
3.12 Variance least squares estimator
3.12.1 Unbiased VLS estimator
3.12.2 Linear model with random intercepts
3.12.3 Balanced design
3.12.4 VLS as the first iteration of ML
3.12.5 lmevarUVLS function
3.13 Projection on D+ space
3.14 Comparison of the variance parameter estimation
3.14.1 lmesim function
3.15 Asymptotically efficient estimation for β
3.16 Summary points
4 Growth Curve Model and Generalizations
4.1 Linear growth curve model
4.1.1 Known matrix D
4.1.2 Maximum likelihood estimation
4.1.3 Method of moments for variance parameters
4.1.4 Two-stage estimation
4.1.5 Special growth curve models
4.1.6 Unbiasedness and efficient estimation for β
4.2 General linear growth curve model
4.2.1 Example: calcium supplementation for bone gain
4.2.2 Variance parameters are known
4.2.3 Balanced model
4.2.4 Likelihood-based estimation
4.2.5 MM estimator for variance parameters
4.2.6 Two-stage estimator and asymptotic properties
4.2.7 Analysis of misspecification
4.3 Linear model with linear covariance structure
4.3.1 Method of maximum likelihood
4.3.2 Variance least squares
4.3.3 Statistical properties
4.3.4 LME model for longitudinal autocorrelated data
4.3.5 Multidimensional LME model
4.4 Robust linear mixed effects model
4.4.1 Robust estimation of the location parameter with estimated σ and c
4.4.2 Robust linear regression with estimated threshold
4.4.3 Robust LME model
4.4.4 Alternative robust functions
4.4.5 Robust random effect model
4.5 Appendix: derivation of the MM estimator
4.6 Summary points
5 Meta-analysis Model
5.1 Simple meta-analysis model
5.1.1 Estimation of random effects
5.1.2 Maximum likelihood estimation
5.1.3 Quadratic unbiased estimation for σ²
5.1.4 Statistical inference
5.1.5 Robust/median meta-analysis
5.1.6 Random effect coefficient of determination
5.2 Meta-analysis model with covariates
5.2.1 Maximum likelihood estimation
5.2.2 Quadratic unbiased estimation for σ²
5.2.3 Hypothesis testing
5.3 Multivariate meta-analysis model
5.3.1 The model
5.3.2 Maximum likelihood estimation
5.3.3 Quadratic estimation of the heterogeneity matrix
5.3.4 Test for homogeneity
5.4 Summary points
6 Nonlinear Marginal Model
6.1 Fixed matrix of random effects
6.1.1 Log-likelihood function
6.1.2 nls function in R
6.1.3 Computational issues of nonlinear least squares
6.1.4 Distribution-free estimation
6.1.5 Testing for the presence of random effects
6.1.6 Asymptotic properties
6.1.7 Example: log-Gompertz growth curve
6.2 Varied matrix of random effects
6.2.1 Maximum likelihood estimation
6.2.2 Distribution-free variance parameter estimation
6.2.3 GEE and iteratively reweighted least squares
6.2.4 Example: logistic curve with random asymptote
6.3 Three types of nonlinear marginal models
6.3.1 Type I nonlinear marginal model
6.3.2 Type II nonlinear marginal model
6.3.3 Type III nonlinear marginal model
6.3.4 Asymptotic properties under distribution misspecification
6.4 Total generalized estimating equations approach
6.4.1 Robust feature of total GEE
6.4.2 Expected Newton–Raphson algorithm for total GEE
6.4.3 Total GEE for the mixed effects model
6.4.4 Total GEE for the LME model
6.4.5 Example (continued): log-Gompertz curve
6.4.6 Photodynamic tumor therapy
6.5 Summary points
7 Generalized Linear Mixed Models
7.1 Regression models for binary data
7.1.1 Approximate relationship between logit and probit
7.1.2 Computation of the logistic-normal integral
7.1.3 Gauss–Hermite numerical quadrature for multidimensional integrals in R
7.1.4 Log-likelihood and its numerical properties
7.1.5 Unit step algorithm
7.2 Binary model with subject-specific intercept
7.2.1 Consequences of ignoring a random effect
7.2.2 ML logistic regression with a fixed subject-specific intercept
7.2.3 Conditional logistic regression
7.3 Logistic regression with random intercept
7.3.1 Maximum likelihood
7.3.2 Fixed sample likelihood approximation
7.3.3 Quadratic approximation
7.3.4 Laplace approximation to the likelihood
7.3.5 VARLINK estimation
7.3.6 Beta-binomial model
7.3.7 Statistical test of homogeneity
7.3.8 Asymptotic properties
7.4 Probit model with random intercept
7.4.1 Laplace and PQL approximations
7.4.2 VARLINK estimation
7.4.3 Heckman method for the probit model
7.4.4 Generalized estimating equations approach
7.4.5 Implementation in R
7.5 Poisson model with random intercept
7.5.1 Poisson regression for count data
7.5.2 Clustered count data
7.5.3 Fixed intercepts
7.5.4 Conditional Poisson regression
7.5.5 Negative binomial regression
7.5.6 Normally distributed intercepts
7.5.7 Exact GEE for any distribution
7.5.8 Exact GEE for balanced count data
7.5.9 Heckman method for the Poisson model
7.5.10 Tests for overdispersion
7.5.11 Implementation in R
7.6 Random intercept model: overview
7.7 Mixed models with multiple random effects
7.7.1 Multivariate Laplace approximation
7.7.2 Logistic regression
7.7.3 Probit regression
7.7.4 Poisson regression
7.7.5 Homogeneity tests
7.8 GLMM and simulation methods
7.8.1 General form of GLMM via the exponential family
7.8.2 Monte Carlo for ML
7.8.3 Fixed sample likelihood approach
7.9 GEE for clustered marginal GLM
7.9.1 Variance least squares
7.9.2 Limitations of the GEE approach
7.9.3 Marginal or conditional model?
7.9.4 Implementation in R
7.10 Criteria for MLE existence for a binary model
7.11 Summary points
8 Nonlinear Mixed Effects Model
8.1 Introduction
8.2 The model
8.3 Example: height of girls and boys
8.4 Maximum likelihood estimation
8.5 Two-stage estimator
8.5.1 Maximum likelihood estimation
8.5.2 Method of moments
8.5.3 Disadvantage of two-stage estimation
8.5.4 Further discussion
8.5.5 Two-stage method in the presence of a common parameter
8.6 First-order approximation
8.6.1 GEE and MLE
8.6.2 Method of moments and VLS
8.7 Lindstrom–Bates estimator
8.7.1 What if matrix D is not positive definite?
8.7.2 Relation to the two-stage estimator
8.7.3 Computational aspects of penalized least squares
8.7.4 Implementation in R: the function nlme
8.8 Likelihood approximations
8.8.1 Linear approximation of the likelihood at zero
8.8.2 Laplace and PQL approximations
8.9 One-parameter exponential model
8.9.1 Maximum likelihood estimator
8.9.2 First-order approximation
8.9.3 Two-stage estimator
8.9.4 Lindstrom–Bates estimator
8.10 Asymptotic equivalence of the TS and LB estimators
8.11 Bias-corrected two-stage estimator
8.12 Distribution misspecification
8.13 Partially nonlinear marginal mixed model
8.14 Fixed sample likelihood approach
8.14.1 Example: one-parameter exponential model
8.15 Estimation of random effects and hypothesis testing
8.15.1 Estimation of the random effects
8.15.2 Hypothesis testing for the NLME model
8.16 Example (continued)
8.17 Practical recommendations
8.18 Appendix: proof of the theorem on equivalence
8.19 Summary points
9 Diagnostics and Influence Analysis
9.1 Introduction
9.2 Influence analysis for linear regression
9.3 The idea of infinitesimal influence
9.3.1 Data influence
9.3.2 Model influence
9.4 Linear regression model
9.4.1 Influence of the dependent variable
9.4.2 Influence of the continuous explanatory variable
9.4.3 Influence of the binary explanatory variable
9.4.4 Influence on the predicted value
9.4.5 Case or group deletion
9.4.6 R code
9.4.7 Influence on regression characteristics
9.4.8 Example 1: women's body fat
9.4.9 Example 2: gypsy moth study
9.5 Nonlinear regression model
9.5.1 Influence of the dependent variable on the LSE
9.5.2 Influence of the explanatory variable on the LSE
9.5.3 Influence on the predicted value
9.5.4 Influence of case deletion
9.5.5 Example 3: logistic growth curve model
9.6 Logistic regression for binary outcome
9.6.1 Influence of the covariate on the MLE
9.6.2 Influence on the predicted probability
9.6.3 Influence of the case deletion on the MLE
9.6.4 Sensitivity to misclassification
9.6.5 Example: Finney data
9.7 Influence of correlation structure
9.8 Influence of measurement error
9.9 Influence analysis for the LME model
9.9.1 Example: weight versus height
9.10 Appendix: MLE derivative with respect to σ²
9.11 Summary points
10 Tumor Regrowth Curves
10.1 Survival curves
10.2 Double-exponential regrowth curve
10.2.1 Time to regrowth, TR
10.2.2 Time to reach a specific tumor volume, T*
10.2.3 Doubling time, TD
10.2.4 Statistical model for regrowth
10.2.5 Variance estimation for tumor regrowth outcomes
10.2.6 Starting values
10.2.7 Example: chemotherapy treatment comparison
10.3 Exponential growth with fixed regrowth time
10.3.1 Statistical hypothesis testing
10.3.2 Synergistic or supra-additive effect
10.3.3 Example: combination of treatments
10.4 General regrowth curve
10.5 Double-exponential transient regrowth curve
10.5.1 Example: treatment of cellular spheroids
10.6 Gompertz transient regrowth curve
10.6.1 Example: tumor treated in mice
10.7 Summary points
11 Statistical Analysis of Shape
11.1 Introduction
11.2 Statistical analysis of random triangles
11.3 Face recognition
11.4 Scale-irrelevant shape model
11.4.1 Random effects scale-irrelevant shape model
11.4.2 Scale-irrelevant shape model on the log scale
11.4.3 Fixed or random size?
11.5 Gorilla vertebrae analysis
11.6 Procrustes estimation of the mean shape
11.6.1 Polygon estimation
11.6.2 Generalized Procrustes model
11.6.3 Random effects shape model
11.6.4 Random or fixed (Procrustes) effects model?
11.6.5 Maple leaf analysis
11.7 Fourier descriptor analysis
11.7.1 Analysis of a star shape
11.7.2 Random Fourier descriptor analysis
11.7.3 Potato project
11.8 Summary points
12 Statistical Image Analysis
12.1 Introduction
12.1.1 What is a digital image?
12.1.2 Image arithmetic
12.1.3 Ensemble and repeated measurements
12.1.4 Image and spatial statistics
12.1.5 Structured and unstructured images
12.2 Testing for uniform lighting
12.2.1 Estimating light direction and position
12.3 Kolmogorov–Smirnov image comparison
12.3.1 Kolmogorov–Smirnov test for image comparison
12.3.2 Example: histological analysis of cancer treatment
12.4 Multinomial statistical model for images
12.4.1 Multinomial image comparison
12.5 Image entropy
12.5.1 Reduction of a gray image to binary
12.5.2 Entropy of a gray image and histogram equalization
12.6 Ensemble of unstructured images
12.6.1 Fixed-shift model
12.6.2 Random-shift model
12.6.3 Mixed model for gray images
12.6.4 Two-stage estimation
12.6.5 Schizophrenia MRI analysis
12.7 Image alignment and registration
12.7.1 Affine image registration
12.7.2 Weighted sum of squares
12.7.3 Nonlinear transformations
12.7.4 Random registration
12.7.5 Linear image interpolation
12.7.6 Computational aspects
12.7.7 Derivative-free algorithm for image registration
12.7.8 Example: clock alignment
12.8 Ensemble of structured images
12.8.1 Fixed affine transformations
12.8.2 Random affine transformations
12.9 Modeling spatial correlation
12.9.1 Toeplitz correlation structure
12.9.2 Simultaneous estimation of variance and transform parameters
12.10 Summary points
13 Appendix: Useful Facts and Formulas
13.1 Basic facts of asymptotic theory
13.1.1 Central limit theorem
13.1.2 Generalized Slutsky theorem
13.1.3 Pseudo-maximum likelihood
13.1.4 Estimating equations approach and the sandwich formula
13.1.5 Generalized estimating equations approach
13.2 Some formulas of matrix algebra
13.2.1 Some matrix identities
13.2.2 Formulas for generalized matrix inverse
13.2.3 Vec and vech functions; duplication matrix
13.2.4 Matrix differentiation
13.3 Basic facts of optimization theory
13.3.1 Criteria for unimodality
13.3.2 Criteria for global optimum
13.3.3 Criteria for minimum existence
13.3.4 Optimization algorithms in statistics
13.3.5 Necessary condition for optimization and criteria for convergence
References
Index