
Entropy Theory and its Application in Environmental and Water Engineering [Hardcover]

Vijay P. Singh (Texas A & M University)
  • Format: Hardback, 662 pages, height x width x thickness: 254x196x34 mm, weight: 1266 g
  • Publication date: 01-Feb-2013
  • Publisher: Wiley-Blackwell
  • ISBN-10: 1119976561
  • ISBN-13: 9781119976561
Entropy Theory and its Application in Environmental and Water Engineering responds to the need for a book that presents the basic concepts of entropy theory from a hydrologic and water engineering perspective and then applies these concepts to a range of water engineering problems. The range of applications of entropy is constantly expanding, and new areas that find a use for the theory continue to emerge. Because the applications of its concepts and techniques vary across subject areas, this book aims to relate them directly to practical problems of environmental and water engineering.

The book presents and explains the Principle of Maximum Entropy (POME) and the Principle of Minimum Cross Entropy (POMCE), and their applications to different types of probability distributions. Spatial and inverse spatial entropy, which are important for urban planning, are presented with clarity. Maximum entropy spectral analysis and minimum cross entropy spectral analysis, powerful techniques for addressing a variety of problems faced by environmental and water scientists and engineers, are described here with illustrative examples.
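To give a flavour of what POME involves, here is a minimal Python sketch (not taken from the book) that derives the maximum-entropy distribution of a discrete variable on the values 1 to 6 when only its mean E[x] is prescribed; the solution takes the exponential (Gibbs) form p_i ∝ exp(-λ x_i), with the Lagrange multiplier λ found here by simple bisection. The function names (gibbs_pmf, solve_lambda, shannon_entropy) are illustrative only.

```python
import math

def gibbs_pmf(values, lam):
    """Maximum-entropy pmf p_i proportional to exp(-lam * x_i) over the given support."""
    weights = [math.exp(-lam * x) for x in values]
    z = sum(weights)                          # partition function (zeroth Lagrange multiplier)
    return [w / z for w in weights]

def mean(values, pmf):
    return sum(x * p for x, p in zip(values, pmf))

def shannon_entropy(pmf):
    """Shannon entropy in nats."""
    return -sum(p * math.log(p) for p in pmf if p > 0.0)

def solve_lambda(values, target_mean, lo=-20.0, hi=20.0):
    """Bisect on lam so that the Gibbs pmf reproduces the prescribed mean E[x]."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        # the mean decreases as lam increases, so move the bracket accordingly
        if mean(values, gibbs_pmf(values, mid)) > target_mean:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

values = [1, 2, 3, 4, 5, 6]
lam = solve_lambda(values, target_mean=4.5)   # constraint: E[x] = 4.5
p = gibbs_pmf(values, lam)
print("lambda  =", round(lam, 4))
print("pmf     =", [round(pi, 4) for pi in p])
print("entropy =", round(shannon_entropy(p), 4), "nats")
```

In the continuous, semi-infinite case the same mean constraint leads analytically to the exponential distribution, which is the route Chapter 4 follows for a whole family of distributions.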

Giving a thorough introduction to the use of entropy to measure the unpredictability in environmental and water systems, this book will add an essential statistical method to the toolkit of postgraduates, researchers and academic hydrologists, water resource managers, environmental scientists and engineers. It will also offer a valuable resource for professionals in the same areas, governmental organizations and private companies, as well as students in earth sciences, civil and agricultural engineering, and agricultural and rangeland sciences.

This book:

  • Provides a thorough introduction to entropy for beginners and more experienced users
  • Uses numerous examples to illustrate the applications of the theoretical principles
  • Allows the reader to apply entropy theory to the solution of practical problems
  • Assumes minimal existing mathematical knowledge
  • Discusses the theory and its various aspects in both univariate and bivariate cases
  • Covers newly expanding areas, including neural networks from an entropy perspective, and future developments
Preface xv
Acknowledgments xix
1 Introduction 1(32)
1.1 Systems and their characteristics 1(6)
1.1.1 Classes of systems 1(1)
1.1.2 System states 1(1)
1.1.3 Change of state 2(1)
1.1.4 Thermodynamic entropy 3(2)
1.1.5 Evolutive connotation of entropy 5(1)
1.1.6 Statistical mechanical entropy 5(2)
1.2 Informational entropies 7(14)
1.2.1 Types of entropies 8(1)
1.2.2 Shannon entropy 9(3)
1.2.3 Information gain function 12(2)
1.2.4 Boltzmann, Gibbs and Shannon entropies 14(1)
1.2.5 Negentropy 15(1)
1.2.6 Exponential entropy 16(2)
1.2.7 Tsallis entropy 18(1)
1.2.8 Renyi entropy 19(2)
1.3 Entropy, information, and uncertainty 21(4)
1.3.1 Information 22(2)
1.3.2 Uncertainty and surprise 24(1)
1.4 Types of uncertainty 25(2)
1.5 Entropy and related concepts 27(2)
1.5.1 Information content of data 27(1)
1.5.2 Criteria for model selection 28(1)
1.5.3 Hypothesis testing 29(1)
1.5.4 Risk assessment 29(1)
Questions 29(2)
References 31(1)
Additional References 32(1)
2 Entropy Theory 33(109)
2.1 Formulation of entropy 33(6)
2.2 Shannon entropy 39(3)
2.3 Connotations of information and entropy 42(4)
2.3.1 Amount of information 42(1)
2.3.2 Measure of information 43(1)
2.3.3 Source of information 43(1)
2.3.4 Removal of uncertainty 44(1)
2.3.5 Equivocation 45(1)
2.3.6 Average amount of information 45(1)
2.3.7 Measurement system 46(1)
2.3.8 Information and organization 46(1)
2.4 Discrete entropy: univariate case and marginal entropy 46(6)
2.5 Discrete entropy: bivariate case 52(27)
2.5.1 Joint entropy 53(1)
2.5.2 Conditional entropy 53(4)
2.5.3 Transinformation 57(22)
2.6 Dimensionless entropies 79(1)
2.7 Bayes theorem 80(8)
2.8 Informational correlation coefficient 88(2)
2.9 Coefficient of nontransferred information 90(2)
2.10 Discrete entropy: multidimensional case 92(1)
2.11 Continuous entropy 93(12)
2.11.1 Univariate case 94(3)
2.11.2 Differential entropy of continuous variables 97(2)
2.11.3 Variable transformation and entropy 99(1)
2.11.4 Bivariate case 100(5)
2.11.5 Multivariate case 105(1)
2.12 Stochastic processes and entropy 105(2)
2.13 Effect of proportional class interval 107(3)
2.14 Effect of the form of probability distribution 110(1)
2.15 Data with zero values 111(2)
2.16 Effect of measurement units 113(2)
2.17 Effect of averaging data 115(1)
2.18 Effect of measurement error 116(2)
2.19 Entropy in frequency domain 118(1)
2.20 Principle of maximum entropy 118(1)
2.21 Concentration theorem 119(3)
2.22 Principle of minimum cross entropy 122(1)
2.23 Relation between entropy and error probability 123(2)
2.24 Various interpretations of entropy 125(8)
2.24.1 Measure of randomness or disorder 125(1)
2.24.2 Measure of unbiasedness or objectivity 125(1)
2.24.3 Measure of equality 125(1)
2.24.4 Measure of diversity 126(1)
2.24.5 Measure of lack of concentration 126(1)
2.24.6 Measure of flexibility 126(1)
2.24.7 Measure of complexity 126(1)
2.24.8 Measure of departure from uniform distribution 127(1)
2.24.9 Measure of interdependence 127(1)
2.24.10 Measure of dependence 128(1)
2.24.11 Measure of interactivity 128(1)
2.24.12 Measure of similarity 129(1)
2.24.13 Measure of redundancy 129(1)
2.24.14 Measure of organization 130(3)
2.25 Relation between entropy and variance 133(2)
2.26 Entropy power 135(1)
2.27 Relative frequency 135(1)
2.28 Application of entropy theory 136(1)
Questions 136(1)
References 137(2)
Additional Reading 139(3)
3 Principle of Maximum Entropy 142(30)
3.1 Formulation 142(3)
3.2 POME formalism for discrete variables 145(7)
3.3 POME formalism for continuous variables 152(6)
3.3.1 Entropy maximization using the method of Lagrange multipliers 152(5)
3.3.2 Direct method for entropy maximization 157(1)
3.4 POME formalism for two variables 158(7)
3.5 Effect of constraints on entropy 165(2)
3.6 Invariance of total entropy 167(1)
Questions 168(2)
References 170(1)
Additional Reading 170(2)
4 Derivation of POME-Based Distributions 172(41)
4.1 Discrete variable and discrete distributions 172(13)
4.1.1 Constraint E[x] and the Maxwell-Boltzmann distribution 172(2)
4.1.2 Two constraints and Bose-Einstein distribution 174(3)
4.1.3 Two constraints and Fermi-Dirac distribution 177(1)
4.1.4 Intermediate statistics distribution 178(1)
4.1.5 Constraint E[N]: Bernoulli distribution for a single trial 179(1)
4.1.6 Binomial distribution for repeated trials 180(1)
4.1.7 Geometric distribution: repeated trials 181(2)
4.1.8 Negative binomial distribution: repeated trials 183(1)
4.1.9 Constraint E[N] = n: Poisson distribution 183(2)
4.2 Continuous variable and continuous distributions 185(18)
4.2.1 Finite interval [a, b], no constraint, and rectangular distribution 185(1)
4.2.2 Finite interval [a, b], one constraint and truncated exponential distribution 186(2)
4.2.3 Finite interval [0, 1], two constraints E[ln x] and E[ln(1 - x)] and beta distribution of first kind 188(3)
4.2.4 Semi-infinite interval (0, ∞), one constraint E[x] and exponential distribution 191(1)
4.2.5 Semi-infinite interval, two constraints E[x] and E[ln x] and gamma distribution 192(2)
4.2.6 Semi-infinite interval, two constraints E[ln x] and E[ln(1 + x)] and beta distribution of second kind 194(1)
4.2.7 Infinite interval, two constraints E[x] and E[x²] and normal distribution 195(2)
4.2.8 Semi-infinite interval, log-transformation Y = ln X, two constraints E[y] and E[y²] and log-normal distribution 197(2)
4.2.9 Infinite and semi-infinite intervals: constraints and distributions 199(4)
Questions 203(5)
References 208(1)
Additional Reading 208(5)
5 Multivariate Probability Distributions 213(57)
5.1 Multivariate normal distributions 213(32)
5.1.1 One time lag serial dependence 213(8)
5.1.2 Two-lag serial dependence 221(8)
5.1.3 Multi-lag serial dependence 229(5)
5.1.4 No serial dependence: bivariate case 234(4)
5.1.5 Cross-correlation and serial dependence: bivariate case 238(6)
5.1.6 Multivariate case: no serial dependence 244(1)
5.1.7 Multi-lag serial dependence 245(1)
5.2 Multivariate exponential distributions 245(13)
5.2.1 Bivariate exponential distribution 245(9)
5.2.2 Trivariate exponential distribution 254(3)
5.2.3 Extension to Weibull distribution 257(1)
5.3 Multivariate distributions using the entropy-copula method 258(7)
5.3.1 Families of copula 259(1)
5.3.2 Application 260(5)
5.4 Copula entropy 265(1)
Questions 266(1)
References 267(1)
Additional Reading 268(2)
6 Principle of Minimum Cross-Entropy 270(20)
6.1 Concept and formulation of POMCE 270(1)
6.2 Properties of POMCE 271(4)
6.3 POMCE formalism for discrete variables 275(4)
6.4 POMCE formulation for continuous variables 279(1)
6.5 Relation to POME 280(1)
6.6 Relation to mutual information 281(1)
6.7 Relation to variational distance 281(1)
6.8 Lin's directed divergence measure 282(4)
6.9 Upper bounds for cross-entropy 286(1)
Questions 287(1)
References 288(1)
Additional Reading 289(1)
7 Derivation of POMCE-Based Distributions 290(20)
7.1 Discrete variable and mean E[x] as a constraint 290(8)
7.1.1 Uniform prior distribution 291(2)
7.1.2 Arithmetic prior distribution 293(1)
7.1.3 Geometric prior distribution 294(1)
7.1.4 Binomial prior distribution 295(2)
7.1.5 General prior distribution 297(1)
7.2 Discrete variable taking on an infinite set of values 298(7)
7.2.1 Improper prior probability distribution 298(3)
7.2.2 A priori Poisson probability distribution 301(3)
7.2.3 A priori negative binomial distribution 304(1)
7.3 Continuous variable: general formulation 305(3)
7.3.1 Uniform prior and mean constraint 307(1)
7.3.2 Exponential prior and mean and mean log constraints 308(1)
Questions 308(1)
References 309(1)
8 Parameter Estimation 310(25)
8.1 Ordinary entropy-based parameter estimation method 310(15)
8.1.1 Specification of constraints 311(1)
8.1.2 Derivation of entropy-based distribution 311(1)
8.1.3 Construction of zeroth Lagrange multiplier 311(1)
8.1.4 Determination of Lagrange multipliers 312(1)
8.1.5 Determination of distribution parameters 313(12)
8.2 Parameter-space expansion method 325(4)
8.3 Contrast with method of maximum likelihood estimation (MLE) 329(2)
8.4 Parameter estimation by numerical methods 331(1)
Questions 332(1)
References 333(1)
Additional Reading 334(1)
9 Spatial Entropy 335(63)
9.1 Organization of spatial data 336(3)
9.1.1 Distribution, density, and aggregation 337(2)
9.2 Spatial entropy statistics 339(14)
9.2.1 Redundancy 343(2)
9.2.2 Information gain 345(7)
9.2.3 Disutility entropy 352(1)
9.3 One-dimensional aggregation 353(7)
9.4 Another approach to spatial representation 360(3)
9.5 Two-dimensional aggregation 363(13)
9.5.1 Probability density function and its resolution 372(3)
9.5.2 Relation between spatial entropy and spatial disutility 375(1)
9.6 Entropy maximization for modeling spatial phenomena 376(4)
9.7 Cluster analysis by entropy maximization 380(4)
9.8 Spatial visualization and mapping 384(2)
9.9 Scale and entropy 386(2)
9.10 Spatial probability distributions 388(3)
9.11 Scaling: rank size rule and Zipf's law 391(2)
9.11.1 Exponential law 391(1)
9.11.2 Log-normal law 391(1)
9.11.3 Power law 392(1)
9.11.4 Law of proportionate effect 392(1)
Questions 393(1)
References 394(1)
Further Reading 395(3)
10 Inverse Spatial Entropy 398(38)
10.1 Definition 398(4)
10.2 Principle of entropy decomposition 402(3)
10.3 Measures of information gain 405(12)
10.3.1 Bivariate measures 405(5)
10.3.2 Map representation 410(2)
10.3.3 Construction of spatial measures 412(5)
10.4 Aggregation properties 417(3)
10.5 Spatial interpretations 420(6)
10.6 Hierarchical decomposition 426(2)
10.7 Comparative measures of spatial decomposition 428(5)
Questions 433(2)
References 435(1)
11 Entropy Spectral Analyses 436(56)
11.1 Characteristics of time series 436(10)
11.1.1 Mean 437(1)
11.1.2 Variance 438(2)
11.1.3 Covariance 440(1)
11.1.4 Correlation 441(2)
11.1.5 Stationarity 443(3)
11.2 Spectral analysis 446(18)
11.2.1 Fourier representation 448(5)
11.2.2 Fourier transform 453(1)
11.2.3 Periodogram 454(3)
11.2.4 Power 457(4)
11.2.5 Power spectrum 461(3)
11.3 Spectral analysis using maximum entropy 464(19)
11.3.1 Burg method 465(8)
11.3.2 Kapur-Kesavan method 473(1)
11.3.3 Maximization of entropy 473(3)
11.3.4 Determination of Lagrange multipliers λk 476(3)
11.3.5 Spectral density 479(3)
11.3.6 Extrapolation of autocovariance functions 482(1)
11.3.7 Entropy of power spectrum 482(1)
11.4 Spectral estimation using configurational entropy 483(3)
11.5 Spectral estimation by mutual information principle 486(4)
References 490(1)
Additional Reading 490(2)
12 Minimum Cross Entropy Spectral Analysis 492(25)
12.1 Cross-entropy 492(1)
12.2 Minimum cross-entropy spectral analysis (MCESA) 493(10)
12.2.1 Power spectrum probability density function 493(5)
12.2.2 Minimum cross-entropy-based probability density functions given total expected spectral powers at each frequency 498(3)
12.2.3 Spectral probability density functions for white noise 501(2)
12.3 Minimum cross-entropy power spectrum given auto-correlation 503(6)
12.3.1 No prior power spectrum estimate is given 504(1)
12.3.2 A prior power spectrum estimate is given 505(1)
12.3.3 Given spectral powers: Tk = Gj, Gj = Pk 506(3)
12.4 Cross-entropy between input and output of linear filter 509(3)
12.4.1 Given input signal PDF 509(1)
12.4.2 Given prior power spectrum 510(2)
12.5 Comparison 512(2)
12.6 Towards efficient algorithms 514(1)
12.7 General method for minimum cross-entropy spectral estimation 515(1)
References 515(1)
Additional References 516(1)
13 Evaluation and Design of Sampling and Measurement Networks 517(42)
13.1 Design considerations 517(1)
13.2 Information-related approaches 518(3)
13.2.1 Information variance 518(2)
13.2.2 Transfer function variance 520(1)
13.2.3 Correlation 521(1)
13.3 Entropy measures 521(9)
13.3.1 Marginal entropy, joint entropy, conditional entropy and transinformation 521(2)
13.3.2 Informational correlation coefficient 523(1)
13.3.3 Isoinformation 524(1)
13.3.4 Information transfer function 524(1)
13.3.5 Information distance 525(1)
13.3.6 Information area 525(1)
13.3.7 Application to rainfall networks 525(5)
13.4 Directional information transfer index 530(7)
13.4.1 Kernel estimation 531(2)
13.4.2 Application to groundwater quality networks 533(4)
13.5 Total correlation 537(2)
13.6 Maximum information minimum redundancy (MIMR) 539(14)
13.6.1 Optimization 541(1)
13.6.2 Selection procedure 542(11)
Questions 553(1)
References 554(2)
Additional Reading 556(3)
14 Selection of Variables and Models 559(22)
14.1 Methods for selection 559(1)
14.2 Kullback-Leibler (KL) distance 560(1)
14.3 Variable selection 560(1)
14.4 Transitivity 561(1)
14.5 Logit model 561(13)
14.6 Risk and vulnerability assessment 574(4)
14.6.1 Hazard assessment 576(1)
14.6.2 Vulnerability assessment 577(1)
14.6.3 Risk assessment and ranking 578(1)
Questions 578(1)
References 579(1)
Additional Reading 580(1)
15 Neural Networks 581(24)
15.1 Single neuron 581(4)
15.2 Neural network training 585(3)
15.3 Principle of maximum information preservation 588(1)
15.4 A single neuron corrupted by processing noise 589(3)
15.5 A single neuron corrupted by additive input noise 592(4)
15.6 Redundancy and diversity 596(2)
15.7 Decision trees and entropy nets 598(4)
Questions 602(1)
References 603(2)
16 System Complexity 605(28)
16.1 Ferdinand's measure of complexity 605(13)
16.1.1 Specification of constraints 606(1)
16.1.2 Maximization of entropy 606(1)
16.1.3 Determination of Lagrange multipliers 606(1)
16.1.4 Partition function 607(3)
16.1.5 Analysis of complexity 610(4)
16.1.6 Maximum entropy 614(2)
16.1.7 Complexity as a function of N 616(2)
16.2 Kapur's complexity analysis 618(2)
16.3 Cornacchio's generalized complexity measures 620(7)
16.3.1 Special case: R = 1 624(1)
16.3.2 Analysis of complexity: non-unique K-transition points and conditional complexity 624(3)
16.4 Kapur's simplification 627(1)
16.5 Kapur's measure 627(1)
16.6 Hypothesis testing 628(1)
16.7 Other complexity measures 628(3)
Questions 631(1)
References 631(1)
Additional References 632(1)
Author Index 633(6)
Subject Index 639
Vijay P. Singh, Texas A & M University, USA