Statistical Methods In Experimental Physics (2nd Edition) / Edition 2
by Frederick James
- ISBN-10: 981256795X
- ISBN-13: 9789812567956
- Pub. Date: 12/04/2006
- Publisher: World Scientific Publishing Company, Incorporated
Overview
The first edition of this classic book has become the authoritative reference for physicists desiring to master the finer points of statistical data analysis. This second edition contains all the important material of the first, much of it unavailable from any other source. In addition, many chapters have been updated with considerable new material, especially in areas concerning the theory and practice of confidence intervals, including the important Feldman-Cousins method. Both frequentist and Bayesian methodologies are presented, with a strong emphasis on techniques useful to physicists and other scientists in the interpretation of experimental data and comparison with scientific theories. This is a valuable textbook for advanced graduate students in the physical sciences as well as a reference for active researchers.
Product Details
- ISBN-13: 9789812567956
- Publisher: World Scientific Publishing Company, Incorporated
- Publication date: 12/04/2006
- Edition description: Second Edition
- Pages: 364
- Product dimensions: 6.20(w) x 9.10(h) x 1.10(d)
Table of Contents
Preface to the Second Edition v
Preface to the First Edition vii
Introduction 1
Outline 1
Language 2
Two Philosophies 3
Notation 4
Basic Concepts in Probability 9
Definitions of Probability 9
Mathematical probability 10
Frequentist probability 10
Bayesian probability 11
Properties of Probability 12
Addition law for sets of elementary events 12
Conditional probability and independence 13
Example of the addition law: scanning efficiency 14
Bayes theorem for discrete events 15
Bayesian use of Bayes theorem 16
Random variable 17
Continuous Random Variables 18
Probability density function 19
Change of variable 20
Cumulative, marginal and conditional distributions 21
Bayes theorem for continuous variables 22
Bayesian use of Bayes theorem for continuous variables 22
Properties of Distributions 24
Expectation, mean and variance 24
Covariance and correlation 26
Linear functions of random variables 28
Ratio of random variables 30
Approximate variance formulae 32
Moments 33
Characteristic Function 34
Definition and properties 34
Cumulants 37
Probability generating function 38
Sums of a random number of random variables 39
Invariant measures 41
Convergence and the Law of Large Numbers 43
The Tchebycheff Theorem and Its Corollary 43
Tchebycheff theorem 43
Bienaymé-Tchebycheff inequality 44
Convergence 45
Convergence in distribution 45
The Paul Lévy theorem 46
Convergence in probability 46
Stronger types of convergence 47
The Law of Large Numbers 47
Monte Carlo integration 48
The Central Limit theorem 49
Example: Gaussian (Normal) random number generator 51
Probability Distributions 53
Discrete Distributions 53
Binomial distribution 53
Multinomial distribution 56
Poisson distribution 57
Compound Poisson distribution 60
Geometric distribution 62
Negative binomial distribution 63
Continuous Distributions 64
Normal one-dimensional (univariate Gaussian) 64
Normal many-dimensional (multivariate Gaussian) 67
Chi-square distribution 70
Student's t-distribution 73
Fisher-Snedecor F and Z distributions 77
Uniform distribution 79
Triangular distribution 79
Beta distribution 80
Exponential distribution 82
Gamma distribution 83
Cauchy, or Breit-Wigner, distribution 84
Log-Normal distribution 85
Extreme value distribution 87
Weibull distribution 89
Double exponential distribution 89
Asymptotic relationships between distributions 90
Handling of Real Life Distributions 91
General applicability of the Normal distribution 91
Johnson empirical distributions 92
Truncation 93
Experimental resolutions 94
Examples of variable experimental resolution 95
Information 99
Basic Concepts 100
Likelihood function 100
Statistic 100
Information of R.A. Fisher 101
Definition of information 101
Properties of information 101
Sufficient Statistics 103
Sufficiency 103
Examples 104
Minimal sufficient statistics 105
Darmois theorem 106
Information and Sufficiency 108
Example of Experimental Design 109
Decision Theory 111
Basic Concepts in Decision Theory 112
Subjective probability, Bayesian approach 112
Definitions and terminology 113
Choice of Decision Rules 114
Classical choice: pre-ordering rules 114
Bayesian choice 115
Minimax decisions 116
Decision-theoretic Approach to Classical Problems 117
Point estimation 117
Interval estimation 118
Tests of hypotheses 118
Examples: Adjustment of an Apparatus 121
Adjustment given an estimate of the apparatus performance 121
Adjustment with estimation of the optimum adjustment 123
Conclusion: Indeterminacy in Classical and Bayesian Decisions 124
Theory of Estimators 127
Basic Concepts in Estimation 127
Consistency and convergence 128
Bias and consistency 129
Usual Methods of Constructing Consistent Estimators 130
The moments method 131
Implicitly defined estimators 132
The maximum likelihood method 135
Least squares methods 137
Asymptotic Distributions of Estimates 139
Asymptotic Normality 139
Asymptotic expansion of moments of estimates 141
Asymptotic bias and variance of the usual estimators 144
Information and the Precision of an Estimator 146
Lower bounds for the variance - Cramér-Rao inequality 147
Efficiency and minimum variance 149
Cramér-Rao inequality for several parameters 151
The Gauss-Markov theorem 152
Asymptotic efficiency 153
Bayesian Inference 154
Choice of prior density 154
Bayesian inference about the Poisson parameter 156
Priors closed under sampling 157
Bayesian inference about the mean, when the variance is known 157
Bayesian inference about the variance, when the mean is known 159
Bayesian inference about the mean and the variance 161
Summary of Bayesian inference for Normal parameters 162
Point Estimation in Practice 163
Choice of Estimator 163
Desirable properties of estimators 164
Compromise between statistical merits 165
Cures to obtain simplicity 166
Economic considerations 168
The Method of Moments 170
Orthogonal functions 170
Comparison of likelihood and moments methods 172
The Maximum Likelihood Method 173
Summary of properties of maximum likelihood 173
Example: determination of the lifetime of a particle in a restricted volume 175
Academic example of a poor maximum likelihood estimate 177
Constrained parameters in maximum likelihood 179
The Least Squares Method (Chi-Square) 182
The linear model 183
The polynomial model 185
Constrained parameters in the linear model 186
Normally distributed data in nonlinear models 190
Estimation from histograms; comparison of likelihood and least squares methods 191
Weights and Detection Efficiency 193
Ideal method: maximum likelihood 194
Approximate method for handling weights 196
Exclusion of events with large weight 199
Least squares method 201
Reduction of Bias 204
Exact distribution of the estimate known 204
Exact distribution of the estimate unknown 206
Robust (Distribution-free) Estimation 207
Robust estimation of the centre of a distribution 208
Trimming and Winsorization 210
Generalized pth-power norms 211
Estimates of location for asymmetric distributions 213
Interval Estimation 215
Normally distributed data 216
Confidence intervals for the mean 216
Confidence intervals for several parameters 218
Interpretation of the covariance matrix 223
The General Case in One Dimension 225
Confidence intervals and belts 225
Upper limits, lower limits and flip-flopping 227
Unphysical values and empty intervals 229
The unified approach 229
Confidence intervals for discrete data 231
Use of the Likelihood Function 233
Parabolic log-likelihood function 233
Non-parabolic log-likelihood functions 234
Profile likelihood regions in many parameters 236
Use of Asymptotic Approximations 238
Asymptotic Normality of the maximum likelihood estimate 238
Asymptotic Normality of ∂ ln L/∂θ 238
∂ ln L/∂θ confidence regions in many parameters 240
Finite sample behaviour of three general methods of interval estimation 240
Summary: Confidence Intervals and the Ensemble 246
The Bayesian Approach 248
Confidence intervals and credible intervals 249
Summary: Bayesian or frequentist intervals? 250
Test of Hypotheses 253
Formulation of a Test 254
Basic concepts in testing 254
Example: Separation of two classes of events 255
Comparison of Tests 257
Power 257
Consistency 259
Bias 260
Choice of tests 261
Test of Simple Hypotheses 263
The Neyman-Pearson test 263
Example: Normal theory test versus sign test 264
Tests of Composite Hypotheses 266
Existence of a uniformly most powerful test for the exponential family 267
One- and two-sided tests 268
Maximizing local power 269
Likelihood Ratio Test 270
Test statistic 270
Asymptotic distribution for continuous families of hypotheses 271
Asymptotic power for continuous families of hypotheses 273
Examples 274
Small sample behaviour 279
Example of separate families of hypotheses 282
General methods for testing separate families 285
Tests and Decision Theory 287
Bayesian choice between families of distributions 287
Sequential tests for optimum number of observations 292
Sequential probability ratio test for a continuous family of hypotheses 297
Summary of Optimal Tests 298
Goodness-of-Fit Tests 299
GOF Testing: From the Test Statistic to the P-value 299
Pearson's Chi-square Test for Histograms 301
Moments of the Pearson statistic 302
Chi-square test with estimation of parameters 303
Choosing optimal bin size 304
Other Tests on Binned Data 308
Runs test 308
Empty cell test, order statistics 309
Neyman-Barton smooth test 311
Tests Free of Binning 313
Smirnov-Cramér-von Mises test 314
Kolmogorov test 316
More refined tests based on the EDF 317
Use of the likelihood function 317
Applications 318
Observation of a fine structure 318
Combining independent estimates 323
Comparing distributions 327
Combining Independent Tests 330
Independence of tests 330
Significance level of the combined test 331
References 335
Subject Index 341