The Basic Practice of Statistics

Hardcover (Eighth Edition)

$269.75 

Overview

Why are David Moore's statistics books so successful? They have become the books to beat because Moore was the first author to present statistics as a useful tool for practice. Rather than emphasizing formulas, drill-like exercises, and cookbook mathematics, Moore's texts use real data and walk students through the process of analyzing those data. Students thus learn to think like practicing statisticians and to apply what they learn to their own lives.

When Moore and McCabe's Introduction to the Practice of Statistics (IPS) became the #1 book on the market about five years ago, statistics instructors realized that a shorter, lower-level, less detailed book based on IPS would also succeed. David Moore therefore envisioned The Basic Practice of Statistics (BPS), a text that applies the data-analysis approach of IPS but is targeted at a one-term course. BPS has since become the book to beat, an ideal match for more schools than any other text on the market.

The second edition of The Basic Practice of Statistics builds on the strengths of the first: a balanced and modern approach to data analysis, data production, and inference, and an emphasis on clear explanations of ideas rather than formal mathematics or reliance on recipes. Moore's use of real-world data and examples, together with his emphasis on statistical thinking, shows students how statistics can be a powerful tool for understanding the world we live in. Designed for students with a limited background in mathematics, the second edition of The Basic Practice of Statistics is an ideal way to introduce the core concepts of statistics to the students of today and tomorrow.


Product Details

ISBN-13: 9781319042578
Publisher: W. H. Freeman & Company
Publication date: 12/22/2017
Edition description: Eighth Edition
Pages: 654
Product dimensions: 8.60(w) x 10.90(h) x 1.20(d) inches

About the Author

David S. Moore is Shanti S. Gupta Distinguished Professor of Statistics, Emeritus, at Purdue University and was 1998 president of the American Statistical Association. He received his A.B. from Princeton and his Ph.D. from Cornell, both in mathematics. He has written many research papers in statistical theory and served on the editorial boards of several major journals. Professor Moore is an elected fellow of the American Statistical Association and of the Institute of Mathematical Statistics and an elected member of the International Statistical Institute. He has served as program director for statistics and probability at the National Science Foundation.  In recent years, Professor Moore has devoted his attention to the teaching of statistics. He was the content developer for the Annenberg/Corporation for Public Broadcasting college-level telecourse Against All Odds: Inside Statistics and for the series of video modules Statistics: Decisions through Data, intended to aid the teaching of statistics in schools. He is the author of influential articles on statistics education and of several leading texts. Professor Moore has served as president of the International Association for Statistical Education and has received the Mathematical Association of America’s national award for distinguished college or university teaching of mathematics.

William I. Notz is Professor of Statistics at the Ohio State University.  He received his B.S. in physics from the Johns Hopkins University and his Ph.D. in mathematics from Cornell University.  His first academic job was as an assistant professor in the Department of Statistics at Purdue University.  While there, he taught the introductory concepts course with Professor Moore and as a result of this experience he developed an interest in statistical education.  Professor Notz is a co-author of EESEE (the Electronic Encyclopedia of Statistical Examples and Exercises) and co-author of Statistics: Concepts and Controversies.

 
Professor Notz’s research interests have focused on experimental design and computer experiments.  He is the author of several research papers and of a book on the design and analysis of computer experiments.  He is an elected fellow of the American Statistical Association.  He has served as the editor of the journal Technometrics and as editor of the Journal of Statistics Education.  He has served as the Director of the Statistical Consulting Service, as acting chair of the Department of Statistics for a year, and as an Associate Dean in the College of Mathematical and Physical Sciences at the Ohio State University.  He is a winner of the Ohio State University’s Alumni Distinguished Teaching Award. 
 

Michael A. Fligner is an Adjunct Professor at the University of California at Santa Cruz and a non-resident Professor Emeritus at the Ohio State University. He received his B.S. in mathematics from the State University of New York at Stony Brook and his Ph.D. from the University of Connecticut. He spent almost 40 years at the Ohio State University, where he was Vice Chair of the Department for over 10 years and also served as Director of the Statistical Consulting Service. He has done consulting work with several large corporations in Central Ohio. Professor Fligner's research interests are in nonparametric statistical methods, and he received the Statistics in Chemistry Award from the American Statistical Association for work on detecting biologically active compounds. He is co-author of the book Statistical Methods for Behavioral Ecology and received a Fulbright scholarship under the American Republics Research program to work at the Charles Darwin Research Station in the Galapagos Islands. He has been an Associate Editor of the Journal of Statistics Education. Professor Fligner is currently associated with the Center for Statistical Analysis in the Social Sciences at the University of California at Santa Cruz.

Table of Contents

PART I: EXPLORING DATA

Chapter 0 Getting Started

Where data comes from matters

Always look at the data

Variation is everywhere

What lies ahead in this book

Chapter 1 Picturing Distributions with Graphs

1.1 Individuals and variables

1.2 Categorical variables: Pie charts and bar graphs

1.3 Quantitative variables: Histograms

1.4 Interpreting histograms

1.5 Quantitative variables: Stemplots

1.6 Time plots

Chapter 2 Describing Distributions with Numbers

2.1 Measuring center: The mean

2.2 Measuring center: The median

2.3 Comparing the mean and the median

2.4 Measuring spread: The quartiles

2.5 The five-number summary and boxplots

2.6 Spotting suspected outliers*

2.7 Measuring spread: The standard deviation

2.8 Choosing measures of center and spread

2.9 Using technology

2.10 Organizing a statistical problem

Chapter 3 The Normal Distributions

3.1 Density curves

3.2 Describing density curves

3.3 Normal distributions

3.4 The 68-95-99.7 rule

3.5 The standard Normal distribution

3.6 Finding Normal proportions

3.7 Using the standard Normal table

3.8 Finding a value given a proportion

Chapter 4 Scatterplots and Correlation

4.1 Explanatory and response variables

4.2 Displaying relationships: Scatterplots

4.3 Interpreting scatterplots

4.4 Adding categorical variables to scatterplots

4.5 Measuring linear association: Correlation

4.6 Facts about correlation

Chapter 5 Regression

5.1 Regression lines

5.2 The least-squares regression line

5.3 Using technology

5.4 Facts about least-squares regression

5.5 Residuals

5.6 Influential observations

5.7 Cautions about correlation and regression

5.8 Association does not imply causation

5.9 Correlation, prediction, and big data*

Chapter 6 Two-Way Tables*

6.1 Marginal distributions

6.2 Conditional distributions

6.3 Simpson's paradox

Chapter 7 Exploring Data: Part I Review

Part I Summary

Test Yourself

Supplementary Exercises

PART II: PRODUCING DATA

Chapter 8 Producing Data: Sampling

8.1 Population versus sample

8.2 How to sample badly

8.3 Simple random samples

8.4 Inference about the population

8.5 Other sampling designs

8.6 Cautions about sample surveys

8.7 The impact of technology

Chapter 9 Producing Data: Experiments

9.1 Observation versus experiment

9.2 Subjects, factors, treatments

9.3 How to experiment badly

9.4 Randomized comparative experiments

9.5 The logic of randomized comparative experiments

9.6 Cautions about experimentation

9.7 Matched pairs and other block designs

Chapter 10 Data Ethics*

10.1 Institutional review boards

10.2 Informed consent

10.3 Confidentiality

10.4 Clinical trials

10.5 Behavioral and social science experiments

Chapter 11 Producing Data: Part II Review

Part II summary

Test yourself

Supplementary exercises

PART III: FROM DATA PRODUCTION TO INFERENCE

Chapter 12 Introducing Probability

12.1 The idea of probability

12.2 The search for randomness*

12.3 Probability models

12.4 Probability rules

12.5 Discrete probability models

12.6 Continuous probability models

12.7 Random variables

12.8 Personal probability*

Chapter 13 General Rules of Probability*

13.1 The general addition rule

13.2 Independence and the multiplication rule

13.3 Conditional probability

13.4 The general multiplication rule

13.5 Showing events are independent

13.6 Tree diagrams

13.7 Bayes' rule*

Chapter 14 Binomial Distributions*

14.1 The binomial setting and binomial distributions

14.2 Binomial distributions in statistical sampling

14.3 Binomial probabilities

14.4 Using technology

14.5 Binomial mean and standard deviation

14.6 The Normal approximation to binomial distributions

Chapter 15 Sampling Distributions

15.1 Parameters and statistics

15.2 Statistical estimation and the law of large numbers

15.3 Sampling distributions

15.4 The sampling distribution of x̄

15.5 The central limit theorem

15.6 Sampling distributions and statistical significance*

Chapter 16 Confidence Intervals: The Basics

16.1 The reasoning of statistical estimation

16.2 Margin of error and confidence level

16.3 Confidence intervals for a population mean

16.4 How confidence intervals behave

Chapter 17 Tests of Significance: The Basics

17.1 The reasoning of tests of significance

17.2 Stating hypotheses

17.3 P-value and statistical significance

17.4 Tests for a population mean

17.5 Significance from a table*

Chapter 18 Inference in Practice

18.1 Conditions for inference in practice

18.2 Cautions about confidence intervals

18.3 Cautions about significance tests

18.4 Planning studies: Sample size for confidence intervals

18.5 Planning studies: The power of a statistical test*

Chapter 19 From Data Production to Inference: Part III Review

Part III Summary

Review Exercises

Test Yourself

Supplementary Exercises

PART IV: INFERENCE ABOUT VARIABLES

Chapter 20 Inference about a Population Mean

20.1 Conditions for inference about a mean

20.2 The t distributions

20.3 The one-sample t confidence interval

20.4 The one-sample t test

20.5 Using technology

20.6 Matched pairs t procedures

20.7 Robustness of t procedures

Chapter 21 Comparing Two Means

21.1 Two-sample problems

21.2 Comparing two population means

21.3 Two-sample t procedures

21.4 Using technology

21.5 Robustness again

21.6 Details of the t approximation*

21.7 Avoid the pooled two-sample t procedures*

21.8 Avoid inference about standard deviations*

Chapter 22 Inference about a Population Proportion

22.1 The sample proportion

22.2 Large-sample confidence intervals for a proportion

22.3 Choosing the sample size

22.4 Significance tests for a proportion

22.5 Plus four confidence intervals for a proportion*

Chapter 23 Comparing Two Proportions

23.1 Two-sample problems: Proportions

23.2 The sampling distribution of a difference between proportions

23.3 Large-sample confidence intervals for comparing proportions

23.4 Using technology

23.5 Significance tests for comparing proportions

23.6 Plus four confidence intervals for comparing proportions*

Chapter 24 Inference about Variables: Part IV Review

Part IV summary

Test yourself

Supplementary exercises

PART V: INFERENCE ABOUT RELATIONSHIPS

Chapter 25 Two Categorical Variables: The Chi-Square Test

25.1 Two-way tables

25.2 The problem of multiple comparisons

25.3 Expected counts in two-way tables

25.4 The chi-square test statistic

25.5 Using technology

25.6 The chi-square distributions

25.7 Cell counts required for the chi-square test

25.8 Uses of the chi-square test: Independence and homogeneity

25.9 The chi-square test for goodness of fit*

Chapter 26 Inference for Regression

26.1 Conditions for regression inference

26.2 Estimating the parameters

26.3 Using technology

26.4 Testing the hypothesis of no linear relationship

26.5 Testing lack of correlation

26.6 Confidence intervals for the regression slope

26.7 Inference about prediction

26.8 Checking the conditions for inference

Chapter 27 One-Way Analysis of Variance: Comparing Several Means

27.1 Comparing several means

27.2 The analysis of variance F test

27.3 Using technology

27.4 The idea of analysis of variance

27.5 Conditions for ANOVA

27.6 F distributions and degrees of freedom

27.7 Follow-up analysis: Tukey pairwise multiple comparisons

27.8 Some details of ANOVA*

Back Matter (print text)

Notes and Data Sources

Tables

TABLE A Standard Normal probabilities

TABLE B Random digits

TABLE C t distribution critical values

TABLE D Chi-square distribution critical values

TABLE E Critical values of the correlation r

Answers to Selected Exercises

Index

PART VI: OPTIONAL COMPANION CHAPTERS

(available online)

Chapter 28 Nonparametric Tests

28.1 Comparing two samples: The Wilcoxon rank sum test

28.2 The Normal approximation for W

28.3 Using technology

28.4 What hypotheses does Wilcoxon test?

28.5 Dealing with ties in rank tests

28.6 Matched pairs: The Wilcoxon signed rank test

28.7 The Normal approximation for W+

28.8 Dealing with ties in the signed rank test

28.9 Comparing several samples: The Kruskal-Wallis test

28.10 Hypotheses and conditions for the Kruskal-Wallis test

28.11 The Kruskal-Wallis test statistic

Chapter 29 Multiple Regression

29.1 Parallel regression lines

29.2 Estimating parameters

29.3 Using technology

29.4 Inference for multiple regression

29.5 Interaction

29.6 The multiple linear regression model

29.7 The woes of regression coefficients

29.8 A case study for multiple regression

29.9 Inference for regression parameters

29.10 Checking the conditions for inference

Chapter 30 More about Analysis of Variance

30.1 Beyond one-way ANOVA

30.2 Two-way ANOVA: Conditions, main effects, and interaction

30.3 Inference for two-way ANOVA

30.4 Some details of two-way ANOVA*

Chapter 31 Statistical Process Control

31.1 Processes

31.2 Describing processes

31.3 The idea of statistical process control

31.4 x̄ charts for process monitoring

31.5 s charts for process monitoring

31.6 Using control charts

31.7 Setting up control charts

31.8 Comments on statistical control

31.9 Don't confuse control with capability!

31.10 Control charts for sample proportions

31.11 Control limits for p charts

Chapter 32 Resampling: Permutation Tests and the Bootstrap

32.1 Randomization in experiments as a basis for inference

32.2 Permutation tests for comparing two treatments with software

32.3 Generating bootstrap samples

32.4 Bootstrap standard errors and confidence intervals
