STATISTICAL INFERENCE: THEORY OF ESTIMATION

eBook: $10.53

Overview

Intended for postgraduate students of statistics, this sequel to Statistical Inference: Testing of Hypotheses introduces the problem of estimation in the light of the foundations laid down by Sir R.A. Fisher (1922), and follows both classical and Bayesian approaches to solving these problems.

The book starts by discussing the growing levels of data summarization and connects this with sufficient and minimal sufficient statistics. It provides a complete account of theorems and results on uniformly minimum variance unbiased estimators (UMVUE), including the famous Rao-Blackwell theorem, which suggests an improved estimator based on a sufficient statistic, and the Lehmann-Scheffé theorem, which yields a UMVUE. It discusses the Cramér-Rao and Bhattacharyya variance lower bounds for regular models, introducing Fisher's information, and the Chapman, Robbins and Kiefer variance lower bounds for Pitman models. The book also introduces different methods of estimation, including the method of maximum likelihood, and discusses large-sample properties such as consistency, consistent asymptotic normality (CAN) and best asymptotic normality (BAN) of different estimators. Separate chapters are devoted to finding the Pitman estimator among equivariant estimators for location and scale models, by exploiting the symmetry structure present in the model, and to Bayes, empirical Bayes and hierarchical Bayes estimators in different statistical models. A systematic exposition of the theory and results in different statistical situations and models is included. Each chapter finishes with solved examples in a number of statistical models, augmented by explanations of theorems and results.

Key features:
- Provides clarifications of theorems and related results.
- Includes numerous solved examples to improve analytical insight into the subject.
- Incorporates chapter-end exercises to review students' comprehension of the subject.
- Discusses detailed theory on data summarization, unbiased estimation with large-sample properties, and Bayes and minimax estimation in separate chapters.
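For orientation, the information inequality mentioned above is the classical Cramér-Rao bound. In its standard textbook form (stated here for illustration under the usual regularity conditions, not quoted from the book itself), it reads:

\[
\operatorname{Var}_{\theta}\bigl(T(X)\bigr) \;\ge\; \frac{\bigl(g'(\theta)\bigr)^{2}}{I(\theta)},
\qquad
I(\theta) \;=\; \mathrm{E}_{\theta}\!\left[\left(\frac{\partial}{\partial\theta}\,\log f(X;\theta)\right)^{2}\right],
\]

where T(X) is any unbiased estimator of g(θ) in a model with density f(x; θ), and I(θ) is Fisher's information. An unbiased estimator whose variance attains this bound for every θ is automatically UMVUE, which is how the bound connects to the Rao-Blackwell and Lehmann-Scheffé results described above.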

Product Details

ISBN-13: 9788120349308
Publisher: PHI Learning
Publication date: 04/03/2014
Sold by: Barnes & Noble
Format: eBook
File size: 23 MB

Table of Contents

Preface
1. INTRODUCTION
2. DATA SUMMARIZATION AND PRINCIPLE OF SUFFICIENCY
3. UNBIASED ESTIMATION
4. INFORMATION INEQUALITY
5. ASYMPTOTIC THEORY AND CONSISTENCY
6. METHODS OF ESTIMATION
7. PRINCIPLE OF EQUIVARIANCE
8. BAYES AND MINIMAX ESTIMATION
9. CONFIDENCE INTERVAL ESTIMATION
Index