Preface    xvii

I Introduction to models and packages    1
1.2 The 1980s Munich rent data    4
1.3 The linear regression model (LM)    6
1.4 The generalized linear model (GLM)    10
1.5 The generalized additive model (GAM)    16
1.6 Modelling the scale parameter    20
1.7 The generalized additive model for location, scale and shape (GAMLSS)    23

2 Introduction to the gamlss packages    31
2.3 A simple example using the gamlss packages    33
2.3.1 Fitting a parametric model    34
2.3.2 Fitting a nonparametric smoothing model    40
2.3.3 Extracting fitted values    46
2.3.4 Modelling both μ and σ    46
2.3.6 Fitting different distributions    49
2.3.7 Selection between models    50

II Algorithms, functions and inference    57
3.2 Estimating β and γ for fixed λ    62
3.2.1.1 The outer iteration (GAMLSS iteration)    63
3.2.1.2 The inner iteration (GLM or GLIM iteration)    64
3.2.1.3 The modified backfitting algorithm    68
3.2.2.1 The outer iteration    70
3.2.2.2 The inner iteration    70
3.2.2.3 The modified backfitting algorithm    72
3.2.3 Fish species example    72
3.2.4 Remarks on the GAMLSS algorithms    75
3.3 MAP estimators of β and γ for fixed λ    76
3.4 Estimating the hyperparameters λ    77
3.4.1.1 Maximum likelihood    79
3.4.1.2 Generalized Akaike information criterion    79
3.4.2.1 Maximum likelihood    81
3.4.2.2 Generalized Akaike information criterion    82
3.4.2.3 Generalized cross validation    82

4.1 Introduction to the gamlss() function    87
4.2 The arguments of the gamlss() function    88
4.2.1 The algorithmic control functions    91
4.2.2 Weighting out observations: the weights and data=subset() arguments    94
4.3 The refit and update functions    98
4.5 Methods and functions for gamlss objects    108

5 Inference and prediction    113
5.1.1 Asymptotic behaviour of a parametric GAMLSS model    114
5.1.2 Types of inference in a GAMLSS model    114
5.1.3 Likelihood-based inference    116
5.2 Functions to obtain standard errors    118
5.2.1 The gen.likelihood() function    118
5.2.2 The vcov() and rvcov() functions    120
5.2.3 The summary() function    123
5.3 Functions to obtain confidence intervals    126
5.3.1 The confint() function    126
5.3.2 The prof.dev() function    127
5.3.3 The prof.term() function    130
5.4 Functions to obtain predictions    135
5.4.1 The predict() function    135
5.4.2 The predictAll() function    143
5.5 Appendix: Some theoretical properties of GLM and GAMLSS    144

6 The GAMLSS family of distributions    153
6.2 Types of distribution within the GAMLSS family    156
6.2.1 Explicit GAMLSS family distributions    156
6.2.2 Extending GAMLSS family distributions    161
6.3 Displaying GAMLSS family distributions    168
6.3.1 Using the distribution demos    168
6.3.2 Using the pdf.plot() function    169
6.4 Amending an existing distribution and constructing a new distribution    172
6.4.1 Definition of the link functions    173
6.4.2 The fitting information    174
6.4.3 The S3 class definition    176
6.4.4 Definition of the d, p, q and r functions    176
6.4.5 Example: reparameterizing the NO distribution    177
6.5.1 How to display the available link functions    179
6.5.2 Changing the default link function    180
6.5.3 Defining a link function    180
6.5.4 Creating a link function    181
6.5.5 Using the own link function    181

7 Finite mixture distributions    191
7.1 Introduction to finite mixtures    191
7.2 Finite mixtures with no parameters in common    193
7.2.1 The likelihood function    193
7.2.2 Maximizing the likelihood function using the EM algorithm    194
7.2.3 Modelling the mixing probabilities    196
7.2.4 Estimating the total number of components    197
7.3 The gamlssMX() function    197
7.4 Example using gamlssMX(): Reading glasses data    200
7.5 Finite mixtures with parameters in common    207
7.6 The gamlssNP() function    209
7.7 Example using gamlssNP(): Animal brain data    211

8 Linear parametric additive terms    223
8.1 Introduction to linear and additive terms    223
8.2 Linear additive terms    225
8.2.1 Linear main effects    227
8.2.2 Linear interactions    227
8.4 Fractional polynomials    233
8.5 Piecewise polynomials and regression splines    235
8.8 Example: the CD4 data    243
8.8.1 Orthogonal polynomials    245
8.8.2 Fractional polynomials    247
8.8.3 Piecewise polynomials    249

9 Additive smoothing terms    255
9.2 What is a scatterplot smoother?    258
9.3 Local regression smoothers    261
9.4 Penalized smoothers: Univariate    265
9.4.1 Demos on penalized smoothers    269
9.4.2 The pb(), pbo() and ps() functions for fitting a P-splines smoother    270
9.4.3 The pbz() function for fitting smooth curves which can shrink to a constant    274
9.4.4 The pbm() function for fitting monotonic smooth functions    275
9.4.5 The pbc() and cy() functions for fitting cyclic smooth functions    277
9.4.6 The cs() and scs() functions for fitting cubic splines    278
9.4.7 The ri() function for fitting ridge and lasso regression terms    282
9.4.8 The pcat() function for reducing levels of a factor    287
9.4.9 The gmrf() function for fitting Gaussian Markov random fields    293
9.5 Penalized smoothers: Multivariate    296
9.5.1 The pvc() function for fitting varying coefficient models    296
9.5.2 Interfacing with gam(): The ga() function    301
9.5.2.2 Smooth surface fitting    302
9.6.1 Interfacing with nnet(): nn()    305
9.6.2 Interfacing with rpart(): tr()    308
9.6.3 Interfacing with loess(): lo()    310
9.6.4 Interfacing with earth(): ma()    314

10.1.1 Random effects at the observational and at the factor level    323
10.1.2 Marginal and joint likelihood    324
10.1.3 Functions available for fitting random effects    324
10.2 Nonparametric random effect models    327
10.2.1 Nonparametric random intercept model for μ at the factor level    327
10.2.2 Fitting the nonparametric random intercept model for μ at the factor level    328
10.2.3 Nonparametric random intercept and slopes model for μ    331
10.3 Normal random effect models    334
10.3.1 Summary of the (r + 1)st iteration of the EM algorithm    335
10.4 The function gamlssNP() for random effects    336
10.4.1 Fitting a normal random intercept for μ    337
10.4.2 Fitting nonparametric random effects    337
10.4.2.1 Fitting a nonparametric random intercept in the predictor for μ    337
10.4.2.2 Fitting nonparametric random intercept and slopes in the predictor for μ    337
10.4.2.3 Fitting nonparametric random coefficients in the predictor for other distribution parameters    338
10.5 Examples using gamlssNP()    339
10.5.1 Example: Binary response with normal random intercept    339
10.5.2 Example: Binomial response with nonparametric random intercept and slope    341
10.6 The function random()    346
10.7 Examples using random()    347
10.7.2 Revisiting the respiratory infection in children    352
10.8 The function re(), interfacing with lme()    354
10.9.1 Refitting Hodges data using re()    358
10.9.2 Fitting a P-spline smoother using re()    359
10.10 Bibliographic notes    366

V Model selection and diagnostics    375

11 Model selection techniques    377
11.1 Introduction: Statistical model selection    377
11.2 GAMLSS model selection    380
11.2.1 Component D: Selection of the distribution    381
11.2.2 Component G: Selection of the link functions    381
11.2.3 Component T: Selection of the additive terms in the model    382
11.2.4 Component λ: Selection of the smoothing parameters    383
11.2.5 Selection of all components using a validation data set    384
11.2.6 Summary of the GAMLSS functions for model selection    384
11.3 The addterm() and dropterm() functions    385
11.4 The stepGAIC() function    392
11.4.1 Selecting a model for μ    393
11.4.2 Selecting a model for σ    396
11.5 Strategy A: The stepGAICAll.A() function    397
11.6 Strategy B: The stepGAICAll.B() function    399
11.7 K-fold cross validation    401
11.8 Validation and test data    402
11.8.1 The gamlssVGD() and VGD() functions    402
11.8.2 The getTGD() and TGD() functions    404
11.8.3 The stepTGD() function    404
11.8.4 The stepTGDAll.A() function    406
11.9 The find.hyper() function    408
11.10 Bibliographic notes    411

12.2 Normalized (randomized) quantile residuals    418
12.3 The plot() function    422
12.4.2 Multiple worm plot    428
12.4.3 Arguments of the wp() function    432
12.5.1 Arguments of the dtop() function    434
12.6 The Q.stats() function    435
12.6.2 Arguments of the Q.stats() function    439
12.7 The rqres.plot() function    439
12.7.2 Arguments of the rqres.plot() function    440
12.8.1 Proof of probability integral transform: Continuous case    441
12.8.2 Proof of calibration: Calibrating the pdf    441

13.1.1 Quantile regression    451
13.1.2 The LMS method and extensions    452
13.1.3 Example: The Dutch boys BMI data    455
13.2 Fitting centile curves    455
13.2.1 The lms() function    456
13.2.2 Estimating the smoothing degrees of freedom using a local GAIC    458
13.2.3 The find.hyper() function    459
13.2.4 Residual diagnostics    460
13.2.5 The fittedPlot() function    462
13.3 Plotting centile curves    465
13.3.5 Comparing centile curves: centiles.com()    473
13.3.6 Plot of distribution of y for specific values of x    474
13.4 Predictive centile curves: centiles.pred(), z.scores()    475
13.4.1 Case 1: Centile for y given x and centile percentage    476
13.4.2 Case 2: Centile for y given x and centile z-score    477
13.4.3 Case 3: z-score given y and x    478
13.5 Quantile sheets: quantSheets()    480
13.5.1 Smoothing parameters    481

14.2 Count data: The fish species data    498
14.3 Binomial data: The hospital stay data    508
14.4 Continuous data: Revisiting the 1990s film data    513
14.4.1 Preliminary analysis    513
14.4.2 Modelling the data using the normal distribution    514
14.4.3 Modelling the data using the BCPE distribution    518

Bibliography    523
Index    543