
Handbook of Mixture Analysis [Hardcover]

Edited by Sylvia Frühwirth-Schnatter (Vienna University of Economics and Business, Austria), Gilles Celeux (Université Paris-Sud, France), and Christian P. Robert (Université Paris-Dauphine, France)
  • Format: Hardback, 522 pages, height x width: 254x178 mm, weight: 1104 g, 34 Tables, black and white; 2 Line drawings, color; 85 Line drawings, black and white; 87 Illustrations, black and white
  • Series: Chapman & Hall/CRC Handbooks of Modern Statistical Methods
  • Publication date: 07-Jan-2019
  • Publisher: Chapman & Hall/CRC
  • ISBN-10: 1498763812
  • ISBN-13: 9781498763813
Mixture models have been around for over 150 years and, as a versatile and multifaceted tool, are found in many branches of statistical modelling. They can be applied to a wide range of data: univariate or multivariate, continuous or categorical, cross-sectional, time series, networks, and much more. Mixture analysis is a very active research topic in statistics and machine learning, with new developments in methodology and applications taking place all the time.

The Handbook of Mixture Analysis is a very timely publication, presenting a broad overview of the methods and applications of this important field of research. It covers a wide array of topics, including the EM algorithm, Bayesian mixture models, model-based clustering, high-dimensional data, hidden Markov models, and applications in finance, genomics, and astronomy.
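To give a flavour of the kind of method the handbook covers, here is a minimal, self-contained sketch (not taken from the book) of the EM algorithm fitting a two-component univariate Gaussian mixture; the function name `em_gmm_1d` and all numerical choices are illustrative assumptions, not the handbook's own implementation.

```python
import math
import random

def em_gmm_1d(data, n_iter=50):
    """Fit a two-component univariate Gaussian mixture by EM.

    Returns (weights, means, variances) after n_iter EM sweeps.
    """
    # Crude initialisation: split the sorted data in half.
    xs = sorted(data)
    half = len(xs) // 2
    mu = [sum(xs[:half]) / half, sum(xs[half:]) / (len(xs) - half)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]

    def normal_pdf(x, m, v):
        return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

    for _ in range(n_iter):
        # E step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            p = [w[k] * normal_pdf(x, mu[k], var[k]) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M step: responsibility-weighted maximum-likelihood updates.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)  # guard against degenerate components

    return w, mu, var

# Simulated data: two well-separated clusters around 0 and 6.
random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(200)] + \
       [random.gauss(6.0, 1.0) for _ in range(200)]
weights, means, variances = em_gmm_1d(data)
```

On data like this the fitted means land near the two cluster centres and the weights near one half; chapters 1 and 2 of the handbook treat the same algorithm in full generality, including its convergence behaviour and initialisation strategies.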

Features:
  • Provides a comprehensive overview of the methods and applications of mixture modelling and analysis
  • Divided into three parts: Foundations and Methods; Mixture Modelling and Extensions; and Selected Applications
  • Contains many worked examples using real data, together with computational implementation, to illustrate the methods described
  • Includes contributions from the leading researchers in the field

The Handbook of Mixture Analysis is targeted at graduate students and young researchers new to the field. It will also serve as an important reference for anyone working in the field, whether developing new methodology or applying mixture models to real scientific problems.

Reviews

"This is another handbook in the remarkable series by CRC Press, which now consists of 21 volumes. The book has 19 chapters, split into three parts: Foundations and Methods, Mixture Modelling and Extensions, and Selected Applications. Many first-class statisticians have contributed to the book. The editors have taken their coordinating task seriously, even organizing a workshop in Vienna. The balance between the chapters is quite good and the notation is streamlined... To summarize, I can recommend this handbook... The technical quality of typesetting and printing is high." - Paul Eilers, Erasmus University Medical Centre, Netherlands, Appeared in ISCB News, January 2020

Preface xv
Editors xvii
Contributors xix
List of Symbols xxi
I Foundations and Methods 1(154)
1 Introduction to Finite Mixtures
3(18)
Peter J. Green
1.1 Introduction and Motivation
3(5)
1.1.1 Basic formulation
4(2)
1.1.2 Likelihood
6(1)
1.1.3 Latent allocation variables
6(2)
1.1.4 A little history
8(1)
1.2 Generalizations
8(5)
1.2.1 Infinite mixtures
8(1)
1.2.2 Continuous mixtures
9(1)
1.2.3 Finite mixtures with nonparametric components
10(1)
1.2.4 Covariates and mixtures of experts
11(1)
1.2.5 Hidden Markov models
11(1)
1.2.6 Spatial mixtures
12(1)
1.3 Some Technical Concerns
13(1)
1.3.1 Identifiability
13(1)
1.3.2 Label switching
13(1)
1.4 Inference
14(4)
1.4.1 Frequentist inference, and the role of EM
14(1)
1.4.2 Bayesian inference, and the role of MCMC
15(1)
1.4.3 Variable number of components
16(1)
1.4.4 Modes versus components
16(1)
1.4.5 Clustering and classification
17(1)
1.5 Concluding Remarks
18(1)
Bibliography
19(2)
2 EM Methods for Finite Mixtures
21(20)
Gilles Celeux
2.1 Introduction
21(1)
2.2 The EM Algorithm
22(3)
2.2.1 Description of EM for finite mixtures
22(2)
2.2.2 EM as an alternating-maximization algorithm
24(1)
2.3 Convergence and Behavior of EM
25(1)
2.4 Cousin Algorithms of EM
26(4)
2.4.1 Stochastic versions of the EM algorithm
27(2)
2.4.2 The Classification EM algorithm
29(1)
2.5 Accelerating the EM Algorithm
30(2)
2.6 Initializing the EM Algorithm
32(2)
2.6.1 Random initialization
33(1)
2.6.2 Hierarchical initialization
33(1)
2.6.3 Recursive initialization
33(1)
2.7 Avoiding Spurious Local Maximizers
34(1)
2.8 Concluding Remarks
35(2)
Bibliography
37(4)
3 An Expansive View of EM Algorithms
41(12)
David R. Hunter
Prabhani Kuruppumullage Don
Bruce G. Lindsay
3.1 Introduction
41(1)
3.2 The Product-of-Sums Formulation
42(2)
3.2.1 Iterative algorithms and the ascent property
43(1)
3.2.2 Creating a minorizing surrogate function
43(1)
3.3 Likelihood as a Product of Sums
44(2)
3.4 Non-standard Examples of EM Algorithms
46(4)
3.4.1 Modes of a density
47(1)
3.4.2 Gradient maxima
47(1)
3.4.3 Two-step EM
48(2)
3.5 Stopping Rules for EM Algorithms
50(1)
3.6 Concluding Remarks
50(2)
Bibliography
52(1)
4 Bayesian Mixture Models: Theory and Methods
53(20)
Judith Rousseau
Clara Grazian
Jeong Eun Lee
4.1 Introduction
53(1)
4.2 Bayesian Mixtures: From Priors to Posteriors
54(7)
4.2.1 Models and representations
54(2)
4.2.2 Impact of the prior distribution
56(1)
4.2.2.1 Conjugate priors
56(1)
4.2.2.2 Improper and non-informative priors
57(1)
4.2.2.3 Data-dependent priors
59(1)
4.2.2.4 Priors for overfitted mixtures
60(1)
4.3 Asymptotic Properties of the Posterior Distribution in the Finite Case
61(7)
4.3.1 Posterior concentration around the marginal density
61(2)
4.3.2 Recovering the parameters in the well-behaved case
63(1)
4.3.3 Boundary parameters: overfitted mixtures
64(4)
4.3.4 Asymptotic behaviour of posterior estimates of the number of components
68(1)
4.4 Concluding Remarks
68(2)
Bibliography
70(3)
5 Computational Solutions for Bayesian Inference in Mixture Models
73(24)
Gilles Celeux
Kaniav Kamary
Gertraud Malsiner-Walli
Jean-Michel Marin
Christian P. Robert
5.1 Introduction
73(2)
5.2 Algorithms for Posterior Sampling
75(8)
5.2.1 A computational problem? Which computational problem?
75(1)
5.2.2 Gibbs sampling
76(4)
5.2.3 Metropolis-Hastings schemes
80(1)
5.2.4 Reversible jump MCMC
81(1)
5.2.5 Sequential Monte Carlo
82(1)
5.2.6 Nested sampling
82(1)
5.3 Bayesian Inference in the Model-Based Clustering Context
83(2)
5.4 Simulation Studies
85(3)
5.4.1 Known number of components
86(1)
5.4.2 Unknown number of components
87(1)
5.5 Gibbs Sampling for High-Dimensional Mixtures
88(4)
5.5.1 Determinant coefficient of determination
89(2)
5.5.2 Simulation study using the determinant criterion
91(1)
5.6 Concluding Remarks
92(1)
Bibliography
93(4)
6 Bayesian Nonparametric Mixture Models
97(20)
Peter Müller
6.1 Introduction
97(3)
6.2 Dirichlet Process Mixtures
100(4)
6.2.1 The Dirichlet process prior
100(2)
6.2.2 Posterior simulation in Dirichlet process mixture models
102(2)
6.2.3 Dependent mixtures - the dependent Dirichlet process model
104(1)
6.3 Normalized Generalized Gamma Process Mixtures
104(4)
6.3.1 NRMI construction
104(2)
6.3.2 Posterior simulation for normalized generalized gamma process mixtures
106(2)
6.4 Bayesian Nonparametric Mixtures with Random Partitions
108(2)
6.4.1 Locally weighted mixtures
108(1)
6.4.2 Conditional regression
109(1)
6.5 Repulsive Mixtures (Determinantal Point Process)
110(2)
6.6 Concluding Remarks
112(2)
Bibliography
114(3)
7 Model Selection for Mixture Models - Perspectives and Strategies
117(38)
Gilles Celeux
Sylvia Frühwirth-Schnatter
Christian P. Robert
7.1 Introduction
118(1)
7.2 Selecting G as a Density Estimation Problem
119(12)
7.2.1 Testing the order of a finite mixture through likelihood ratio tests
121(1)
7.2.2 Information criteria for order selection
122(1)
7.2.2.1 AIC and BIC
122(1)
7.2.2.2 The Slope Heuristics
123(1)
7.2.2.3 DIC
124(1)
7.2.2.4 The minimum message length
125(1)
7.2.3 Bayesian model choice based on marginal likelihoods
126(1)
7.2.3.1 Chib's method, limitations and extensions
126(1)
7.2.3.2 Sampling-based approximations
127(4)
7.3 Selecting G in the Framework of Model-Based Clustering
131(10)
7.3.1 Mixtures as partition models
133(1)
7.3.2 Classification-based information criteria
134(1)
7.3.2.1 The integrated complete-data likelihood criterion
134(1)
7.3.2.2 The conditional classification likelihood
135(1)
7.3.2.3 Exact derivation of the ICL
137(1)
7.3.3 Bayesian clustering
138(2)
7.3.4 Selecting G under model misspecification
140(1)
7.4 One-Sweep Methods for Cross-model Inference on G
141(7)
7.4.1 Overfitting mixtures
142(1)
7.4.2 Reversible jump MCMC
142(1)
7.4.3 Allocation sampling
142(1)
7.4.4 Bayesian nonparametric methods
143(2)
7.4.5 Sparse finite mixtures for model-based clustering
145(3)
7.5 Concluding Remarks
148(1)
Bibliography
149(6)
II Mixture Modelling and Extensions 155(336)
8 Model-Based Clustering
157(36)
Bettina Grün
8.1 Introduction
158(6)
8.1.1 Heuristic clustering
159(1)
8.1.2 From k-means to Gaussian mixture modelling
160(1)
8.1.3 Specifying the clustering problem
161(3)
8.2 Specifying the Model
164(9)
8.2.1 Components corresponding to clusters
165(3)
8.2.2 Combining components into clusters
168(2)
8.2.3 Selecting the clustering base
170(2)
8.2.4 Selecting the number of clusters
172(1)
8.3 Post-processing the Fitted Model
173(7)
8.3.1 Identifying the model
173(1)
8.3.2 Determining a partition
173(2)
8.3.3 Characterizing clusters
175(1)
8.3.4 Validating clusters
175(2)
8.3.5 Visualizing cluster solutions
177(3)
8.4 Illustrative Applications
180(4)
8.4.1 Bioinformatics: Analysing gene expression data
181(1)
8.4.2 Marketing: Determining market segments
181(1)
8.4.3 Psychology and sociology: Revealing latent structures
182(1)
8.4.4 Economics and finance: Clustering time series
183(1)
8.4.5 Medicine and biostatistics: Unobserved heterogeneity
183(1)
8.5 Concluding Remarks
184(1)
Bibliography
185(8)
9 Mixture Modelling of Discrete Data
193(26)
Dimitris Karlis
9.1 Introduction
194(1)
9.2 Mixtures of Univariate Count Data
194(3)
9.2.1 Introduction
194(1)
9.2.2 Finite mixtures of Poisson and related distributions
195(2)
9.2.3 Zero-inflated models
197(1)
9.3 Extensions
197(3)
9.3.1 Mixtures of time series count data
197(1)
9.3.2 Hidden Markov models
198(1)
9.3.3 Mixture of regression models for discrete data
199(1)
9.3.4 Other models
200(1)
9.4 Mixtures of Multivariate Count Data
200(9)
9.4.1 Some models for multivariate counts
200(1)
9.4.1.1 Multivariate reduction approach
200(1)
9.4.1.2 Copulas approach
202(1)
9.4.1.3 Other approaches
203(1)
9.4.2 Finite mixture for multivariate counts
204(1)
9.4.2.1 Conditional independence
204(1)
9.4.2.2 Conditional dependence
205(1)
9.4.2.3 Finite mixtures of multivariate Poisson distributions
205(1)
9.4.3 Zero-inflated multivariate models
206(1)
9.4.4 Copula-based models
206(2)
9.4.5 Finite mixture of bivariate Poisson regression models
208(1)
9.5 Other Mixtures for Discrete Data
209(3)
9.5.1 Latent class models
209(1)
9.5.2 Mixtures for ranking data
209(1)
9.5.3 Mixtures of multinomial distributions
210(1)
9.5.4 Mixtures of Markov chains
211(1)
9.6 Concluding Remarks
212(1)
Bibliography
213(6)
10 Continuous Mixtures with Skewness and Heavy Tails
219(20)
David Rossell
Mark F.J. Steel
10.1 Introduction
219(2)
10.2 Skew-t Mixtures
221(3)
10.3 Prior Formulation
224(1)
10.4 Model Fitting
225(4)
10.5 Examples
229(5)
10.5.1 Simulation study
229(4)
10.5.2 Experimental data
233(1)
10.6 Concluding Remarks
234(1)
Bibliography
235(4)
11 Mixture Modelling of High-Dimensional Data
239(32)
Damien McParland
Thomas Brendan Murphy
11.1 Introduction
240(1)
11.2 High-Dimensional Data
240(2)
11.2.1 Continuous data: Italian wine
241(1)
11.2.2 Categorical data: lower back pain
241(1)
11.2.3 Mixed data: prostate cancer
242(1)
11.3 Mixtures for High-Dimensional Data
242(3)
11.3.1 Curse of dimensionality/modeling issues
243(1)
11.3.2 Data reduction
244(1)
11.4 Mixtures for Continuous Data
245(6)
11.4.1 Diagonal covariance
246(1)
11.4.2 Eigendecomposed covariance
247(1)
11.4.3 Mixtures of factor analyzers and probabilistic principal components analyzers
248(1)
11.4.4 High-dimensional models
249(1)
11.4.5 Sparse models
250(1)
11.5 Mixtures for Categorical Data
251(2)
11.5.1 Local independence models and latent class analysis
251(1)
11.5.2 Other models
252(1)
11.6 Mixtures for Mixed Data
253(1)
11.7 Variable Selection
254(3)
11.7.1 Wrapper-based methods
254(1)
11.7.2 Stepwise approaches for continuous data
255(2)
11.7.3 Stepwise approaches for categorical data
257(1)
11.8 Examples
257(6)
11.8.1 Continuous data: Italian wine
258(2)
11.8.2 Categorical data: lower back pain
260(2)
11.8.3 Mixed data: prostate cancer
262(1)
11.9 Concluding Remarks
263(1)
Bibliography
264(7)
12 Mixture of Experts Models
271(38)
Isobel Claire Gormley
Sylvia Frühwirth-Schnatter
12.1 Introduction
271(1)
12.2 The Mixture of Experts Framework
272(4)
12.2.1 A mixture of experts model
272(1)
12.2.2 An illustration
273(1)
12.2.3 The suite of mixture of experts models
274(2)
12.3 Statistical Inference for Mixture of Experts Models
276(6)
12.3.1 Maximum likelihood estimation
276(2)
12.3.2 Bayesian estimation
278(2)
12.3.3 Model selection
280(2)
12.4 Illustrative Applications
282(11)
12.4.1 Analysing marijuana use through mixture of experts Markov chain models
282(3)
12.4.2 A mixture of experts model for ranked preference data
285(4)
12.4.3 A mixture of experts latent position cluster model
289(4)
12.4.4 Software
293(1)
12.5 Identifiability of Mixture of Experts Models
293(9)
12.5.1 Identifiability of binomial mixtures
294(2)
12.5.2 Identifiability for mixtures of regression models
296(4)
12.5.3 Identifiability for mixture of experts models
300(2)
12.6 Concluding Remarks
302(1)
Bibliography
303(6)
13 Hidden Markov Models in Time Series, with Applications in Economics
309(34)
Sylvia Kaufmann
13.1 Introduction
309(3)
13.2 Regime Switching: Mixture Modelling over Time
312(9)
13.2.1 Preliminaries and model specification
312(1)
13.2.2 The functional form of state transition
313(1)
13.2.2.1 Time-invariant switching
313(1)
13.2.2.2 Time-varying switching
314(1)
13.2.2.3 Nested alternatives
315(1)
13.2.3 Generalizations
315(1)
13.2.4 Some considerations on parameterization
316(1)
13.2.5 Stability conditions: combining stable and unstable processes
317(4)
13.3 Estimation
321(10)
13.3.1 The complete-data likelihood and the FFBS algorithm
322(1)
13.3.2 Maximum likelihood estimation
322(2)
13.3.3 Bayesian estimation
324(1)
13.3.3.1 Prior specifications for the transition distribution
324(1)
13.3.3.2 Posterior inference
325(2)
13.3.4 Sampler efficiency: logit versus probit
327(2)
13.3.5 Posterior state identification
329(2)
13.4 Informative Regime Switching in Applications
331(4)
13.4.1 Time-invariant switching
331(1)
13.4.1.1 Unconditional switching
331(1)
13.4.1.2 Structured Markov switching
331(2)
13.4.2 Time-varying switching
333(1)
13.4.2.1 Duration dependence and state-identifying restrictions
333(1)
13.4.2.2 Shape restrictions
334(1)
13.5 Concluding Remarks
335(2)
Bibliography
337(6)
14 Mixtures of Nonparametric Components and Hidden Markov Models
343(20)
Elisabeth Gassiat
14.1 Introduction
343(2)
14.2 Mixtures with One Known Component
345(1)
14.2.1 The case where the other component is symmetric
345(1)
14.2.2 Mixture of a uniform and a non-decreasing density
345(1)
14.3 Translation Mixtures
346(3)
14.3.1 Translation of a symmetric density
346(1)
14.3.2 Translation of any distribution and hidden Markov models
347(2)
14.4 Multivariate Mixtures
349(7)
14.4.1 Identifiability
349(2)
14.4.2 Estimation with spectral methods
351(1)
14.4.3 Estimation with nonparametric methods
351(3)
14.4.4 Hidden Markov models
354(2)
14.5 Related Questions
356(1)
14.5.1 Clustering
356(1)
14.5.2 Order estimation
356(1)
14.5.3 Semi-parametric estimation
356(1)
14.5.4 Regressions with random (observed or non-observed) design
356(1)
14.6 Concluding Remarks
357(1)
Bibliography
358(3)
III Selected Applications 361(130)
15 Applications in Industry
363(22)
Kerrie Mengersen
Earl Duncan
Julyan Arbel
Clair Alston-Knox
Nicole White
15.1 Introduction
363(1)
15.2 Mixtures for Monitoring
364(2)
15.3 Health Resource Usage
366(3)
15.3.1 Assessing the effectiveness of a measles vaccination
366(1)
15.3.2 Spatio-temporal disease mapping: identifying unstable trends in congenital malformations
367(2)
15.4 Pest Surveillance
369(5)
15.4.1 Data and models
369(2)
15.4.2 Resulting clusters
371(3)
15.5 Toxic Spills
374(5)
15.5.1 Data and model
375(3)
15.5.2 Posterior sampling and summaries
378(1)
15.6 Concluding Remarks
379(2)
Bibliography
381(4)
16 Mixture Models for Image Analysis
385(22)
Florence Forbes
16.1 Introduction
385(1)
16.2 Hidden Markov Model Based Clustering
386(3)
16.2.1 Mixture models
387(1)
16.2.2 Markov random fields: Potts model and extensions
387(1)
16.2.3 Hidden Markov field with independent noise
388(1)
16.3 Markov Model Based Segmentation via Variational EM
389(4)
16.3.1 Links with the iterated conditional mode and the Gibbs sampler
392(1)
16.4 Illustration: MRI Brain Scan Segmentation
393(7)
16.4.1 Healthy brain tissue and structure segmentation
393(1)
16.4.1.1 A Markov random field approach to segmentation and registration
394(1)
16.4.1.2 Experiments: Joint tissue and structure segmentation
397(1)
16.4.2 Brain tumor detection from multiple MR sequences
398(1)
16.4.2.1 Tissue interaction modelling
398(1)
16.4.2.2 Experiments: Lesion segmentation
400(1)
16.5 Concluding Remarks
400(2)
Bibliography
402(5)
17 Applications in Finance
407(32)
John M. Maheu
Azam Shamsi Zamenjani
17.1 Introduction
407(1)
17.2 Finite Mixture Models
408(6)
17.2.1 i.i.d. mixture models with volatility dynamics
408(1)
17.2.2 Markov switching models
408(3)
17.2.3 Markov switching volatility models
411(2)
17.2.4 Jumps
413(1)
17.3 Infinite Mixture Models
414(15)
17.3.1 Dirichlet process mixture model
414(6)
17.3.2 GARCH-DPM and SV-DPM
420(5)
17.3.3 Infinite hidden Markov model
425(4)
17.4 Concluding Remarks
429(2)
Bibliography
431(8)
18 Applications in Genomics
439(24)
Stéphane Robin
Christophe Ambroise
18.1 Introduction
439(1)
18.2 Mixture Models in Transcriptome and Genome Analysis
440(6)
18.2.1 Analyzing the genetic structure of a population
440(1)
18.2.2 Finding sets of co-transcribed genes
441(2)
18.2.3 Variable selection for clustering with Gaussian mixture models
443(1)
18.2.4 Mixture models in the specific case of multiple testing
444(2)
18.3 Hidden Markov Models in Genomics: Some Specificities
446(7)
18.3.1 A typical case: Copy number variations
446(1)
18.3.2 Complex emission distributions
447(3)
18.3.3 Complex hidden states
450(1)
18.3.4 Non-standard hidden Markov structures
451(2)
18.4 Complex Dependency Structures
453(2)
18.4.1 Markov random fields
453(1)
18.4.2 Stochastic block model
454(1)
18.4.3 Inference issues
454(1)
18.5 Concluding Remarks
455(1)
Bibliography
456(7)
19 Applications in Astronomy
463(28)
Michael A. Kuhn
Eric D. Feigelson
19.1 Introduction
463(1)
19.2 Clusters of Stars and Galaxies
464(8)
19.2.1 Galaxy clusters
464(2)
19.2.2 Young star clusters
466(1)
19.2.2.1 Star-cluster models
467(1)
19.2.2.2 Model fitting and validation
469(1)
19.2.2.3 Results from the mixture model approach
471(1)
19.3 Classification of Astronomical Objects
472(5)
19.3.1 Tests for multiple components
472(2)
19.3.2 Two or three classes of gamma-ray bursts?
474(1)
19.3.3 Removal of contaminants
475(1)
19.3.4 Red and blue galaxies
476(1)
19.4 Advanced Mixture Model Applications
477(4)
19.4.1 Regression with heteroscedastic uncertainties
477(2)
19.4.2 Deconvolution of distributions from data with heteroscedastic errors and missing information
479(2)
19.5 Concluding Remarks
481(3)
Bibliography
484(7)
Index 491
Sylvia Frühwirth-Schnatter is Professor of Applied Statistics and Econometrics at the Department of Finance, Accounting, and Statistics, Vienna University of Economics and Business, Austria. She has contributed to research in Bayesian modelling and MCMC inference for a broad range of models, including finite mixture and Markov switching models as well as state space models. She is particularly interested in applications of Bayesian inference in economics, finance, and business. She started working on finite mixture and Markov switching models 20 years ago and has published more than 20 articles in this area in leading journals such as JASA, JCGS, and the Journal of Applied Econometrics. Her monograph Finite Mixture and Markov Switching Models (2006) was awarded the DeGroot Prize 2007 by ISBA. In 2014, she was elected Member of the Austrian Academy of Sciences.

Gilles Celeux is Director of Research emeritus at INRIA Saclay-Île-de-France, France. He has conducted research in statistical learning, model-based clustering, and model selection for more than 35 years, and he has led two Inria teams. His first paper on mixture modelling was written in 1981, and he has been a co-organiser of the summer working group on model-based clustering since 1994. He has published more than 40 papers in international statistics journals and has written two textbooks in French on classification. He was Editor-in-Chief of Statistics and Computing between 2006 and 2012 and has been Editor-in-Chief of the Journal of the French Statistical Society since 2012.

Christian P. Robert is Professor of Mathematics at CEREMADE, Université Paris-Dauphine, PSL Research University, France, and Professor of Statistics at the Department of Statistics, University of Warwick, UK. He has conducted research in Bayesian inference and computational methods covering Monte Carlo, MCMC, and ABC techniques for more than 30 years, writing The Bayesian Choice (2001) and Monte Carlo Statistical Methods (2004) with George Casella. His first paper on mixture modelling was written in 1989, on radiograph image modelling. His fruitful collaboration with Mike Titterington on this topic spans two enjoyable decades of visits to Glasgow, Scotland. He has organised three conferences on the subject of mixture inference, the last one at ICMS leading to the edited volume Mixtures: Estimation and Applications (2011), co-edited with K. L. Mengersen and D. M. Titterington.