Like New — packaging may have been opened. A "Like New" item is suitable to give as a gift.

Very Good — packaging may have minor signs of wear, but the item works perfectly and has no damage.

Good — item is in good condition, but packaging may show shelf wear, aging, or tears. All specific defects should be noted in the Comments section associated with each item.

Acceptable — item is in working order but may show signs of wear such as scratches or torn packaging. All specific defects should be noted in the Comments section associated with each item.

Used — An item that has been opened and may show signs of wear. All specific defects should be noted in the Comments section associated with each item.

Refurbished — A used item that has been renewed or updated and verified to be in proper working condition. Refurbishment is not necessarily performed by the original manufacturer.

Absolutely BRAND NEW ORIGINAL US HARDCOVER STUDENT 5th Edition / Mint condition / Never been read / ISBN-10: 0470134879. Shipped out in one business day with free tracking.

This carefully paced text presents fundamental material for the first course in statistics without sacrificing clarity or depth of understanding. Relates statistical theory to the realities of research and illustrates concepts with specific examples. Appendixes include a math review and numerous worked problems.



## More About This Textbook

## Overview

## Editorial Reviews

## Booknews

A textbook primarily for students of psychology, education, and related fields whose only mathematical background is common arithmetic, an elementary understanding of simple equations, and the ability to plug numbers into an equation and boil them down to a single numerical answer. Emphasizes the development of concepts, but also explains procedures step-by-step with examples from the behavioral sciences. Considerably revised from the 1993 edition to lower the math requirements a half-step and focus more on fundamentals. It is also shorter, though still contains more material than most instructors would try to fit into a single semester. Annotation © Book News, Inc., Portland, OR.

## Product Details

## Table of Contents

CHAPTER 1 Introduction.

1.1 Descriptive Statistics.

1.2 Inferential Statistics.

1.3 Our Concern: Applied Statistics.

1.4 Variables and Constants.

1.5 Scales of Measurement.

1.6 Scales of Measurement and Problems of Statistical Treatment.

1.7 Do Statistics Lie?

Point of Controversy: Are Statistical Procedures Necessary?

1.8 Some Tips on Studying Statistics.

1.9 Statistics and Computers.

1.10 Summary.

CHAPTER 2 Frequency Distributions, Percentiles, and Percentile Ranks.

2.1 Organizing Qualitative Data.

2.2 Grouped Scores.

2.3 How to Construct a Grouped Frequency Distribution.

2.4 Apparent versus Real Limits.

2.5 The Relative Frequency Distribution.

2.6 The Cumulative Frequency Distribution.

2.7 Percentiles and Percentile Ranks.

2.8 Computing Percentiles from Grouped Data.

2.9 Computation of Percentile Rank.

2.10 Summary.
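
The percentile-rank idea of Sections 2.7–2.9 can be sketched for ungrouped scores in a few lines of Python (the book's grouped-data procedure adds interpolation within class intervals; the function name and the half-of-ties convention used here are illustrative, not the book's notation):

```python
def percentile_rank(scores, x):
    """Percentile rank of x in an ungrouped distribution: the percent of
    scores below x, counting half of any scores tied with x."""
    below = sum(s < x for s in scores)   # scores strictly below x
    equal = sum(s == x for s in scores)  # scores tied with x
    return 100 * (below + equal / 2) / len(scores)

scores = [2, 4, 4, 5, 7, 8, 9, 10]
print(percentile_rank(scores, 5))  # 43.75
```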

CHAPTER 3 Graphic Representation of Frequency Distributions.

3.1 Basic Procedures.

3.2 The Histogram.

3.3 The Frequency Polygon.

3.4 Choosing between a Histogram and a Polygon.

3.5 The Bar Diagram and the Pie Chart.

3.6 The Cumulative Percentage Curve.

3.7 Factors Affecting the Shape of Graphs.

3.8 Shape of Frequency Distributions.

3.9 Summary.

CHAPTER 4 Central Tendency.

4.1 The Mode.

4.2 The Median.

4.3 The Mean.

4.4 Properties of the Mode.

4.5 Properties of the Mean.

Point of Controversy: Is It Permissible to Calculate the Mean for Tests in the Behavioral Sciences?

4.6 Properties of the Median.

4.7 Measures of Central Tendency in Symmetrical and Asymmetrical Distributions.

4.8 The Effects of Score Transformations.

4.9 Summary.
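
A quick sketch of the chapter's three measures of central tendency in plain Python (function names are illustrative, not the book's notation):

```python
from collections import Counter

def mode(scores):
    """Most frequent score(s)."""
    counts = Counter(scores)
    top = max(counts.values())
    return [s for s, c in counts.items() if c == top]

def median(scores):
    """Middle score; the mean of the two middle scores when n is even."""
    s = sorted(scores)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

def mean(scores):
    """Arithmetic mean: sum of scores divided by n."""
    return sum(scores) / len(scores)

scores = [2, 3, 3, 5, 7, 10]
print(mode(scores), median(scores), mean(scores))  # [3] 4.0 5.0
```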

CHAPTER 5 Variability and Standard (z) Scores.

5.1 The Range and Semi-Interquartile Range.

5.2 Deviation Scores.

5.3 Deviational Measures: The Variance.

5.4 Deviational Measures: The Standard Deviation.

5.5 Calculation of the Variance and Standard Deviation: Raw-Score Method.

5.6 Calculation of the Standard Deviation with IBM SPSS (formerly SPSS).

Point of Controversy: Calculating the Sample Variance: Should We Divide by n or (n - 1)?

5.7 Properties of the Range and Semi-Interquartile Range.

5.8 Properties of the Standard Deviation.

5.9 How Big Is a Standard Deviation?

5.10 Score Transformations and Measures of Variability.

5.11 Standard Scores (z Scores).

5.12 A Comparison of z Scores and Percentile Ranks.

5.13 Summary.
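
The raw-score procedures of Sections 5.3–5.5 and the z scores of Section 5.11 can be sketched in plain Python rather than IBM SPSS; the function names and the sample/population switch here are illustrative:

```python
import math

def variance(scores, sample=True):
    """Average squared deviation from the mean; divides by n - 1 for a
    sample estimate, by n for a population (the chapter's Point of
    Controversy)."""
    n = len(scores)
    mean = sum(scores) / n
    ss = sum((x - mean) ** 2 for x in scores)  # sum of squared deviations
    return ss / (n - 1) if sample else ss / n

def z_scores(scores):
    """Each raw score re-expressed as standard deviations from the mean."""
    n = len(scores)
    mean = sum(scores) / n
    sd = math.sqrt(variance(scores, sample=False))
    return [(x - mean) / sd for x in scores]

scores = [2, 4, 4, 4, 5, 5, 7, 9]      # mean 5, population SD 2
print(variance(scores, sample=False))  # 4.0
print(z_scores(scores)[0])             # (2 - 5) / 2 = -1.5
```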

CHAPTER 6 Standard Scores and the Normal Curve.

6.1 Historical Aspects of the Normal Curve.

6.2 The Nature of the Normal Curve.

6.3 Standard Scores and the Normal Curve.

6.4 The Standard Normal Curve: Finding Areas When the Score Is Known.

6.5 The Standard Normal Curve: Finding Scores When the Area Is Known.

6.6 The Normal Curve as a Model for Real Variables.

6.7 The Normal Curve as a Model for Sampling Distributions.

6.8 Summary.

Point of Controversy: How Normal Is the Normal Curve?
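
The areas tabulated in Table A (Sections 6.4–6.5) can be approximated in plain Python via the error function; this sketch assumes the standard normal curve, and the function name is illustrative:

```python
import math

def area_below(z):
    """Proportion of area under the standard normal curve below z:
    Phi(z) = (1 + erf(z / sqrt(2))) / 2."""
    return (1 + math.erf(z / math.sqrt(2))) / 2

print(round(area_below(1.0), 4))   # 0.8413, matching Table A
print(round(area_below(-1.0), 4))  # 0.1587, by symmetry
```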

CHAPTER 7 Correlation.

7.1 Some History.

7.2 Graphing Bivariate Distributions: The Scatter Diagram.

7.3 Correlation: A Matter of Direction.

7.4 Correlation: A Matter of Degree.

7.5 Understanding the Meaning of Degree of Correlation.

7.6 Formulas for Pearson's Coefficient of Correlation.

7.7 Calculating r from Raw Scores.

7.8 Calculating r with IBM SPSS.

7.9 Spearman's Rank-Order Correlation Coefficient.

7.10 Correlation Does Not Prove Causation.

7.11 The Effects of Score Transformations.

7.12 Cautions Concerning Correlation Coefficients.

7.13 Summary.
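
A minimal sketch of Pearson's r from the deviation-score formula of Section 7.6, in plain Python instead of the IBM SPSS route of Section 7.8 (names illustrative):

```python
import math

def pearson_r(x, y):
    """Deviation-score formula: r = SP / sqrt(SS_X * SS_Y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sp = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))  # sum of products
    ssx = sum((xi - mx) ** 2 for xi in x)
    ssy = sum((yi - my) ** 2 for yi in y)
    return sp / math.sqrt(ssx * ssy)

print(pearson_r([1, 2, 3, 4, 5], [2, 4, 6, 8, 10]))  # 1.0: perfect positive
```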

CHAPTER 8 Prediction.

8.1 The Problem of Prediction.

8.2 The Criterion of Best Fit.

Point of Controversy: Least-Squares Regression versus the Resistant Line.

8.3 The Regression Equation: Standard-Score Form.

8.4 The Regression Equation: Raw-Score Form.

8.5 Error of Prediction: The Standard Error of Estimate.

8.6 An Alternative (and Preferred) Formula for SYX.

8.7 Calculating the “Raw-Score” Regression Equation and Standard Error of Estimate with IBM SPSS.

8.8 Error in Estimating Y from X.

8.9 Cautions Concerning Estimation of Predictive Error.

8.10 Prediction Does Not Prove Causation.

8.11 Summary.
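
The raw-score regression line of Section 8.4 reduces to two quantities, a slope and an intercept; a plain-Python sketch (illustrative names, not the book's notation):

```python
def regression(x, y):
    """Least-squares line Y' = a + bX in raw-score form:
    b = SP / SS_X, a = mean(Y) - b * mean(X)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sp = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    ssx = sum((xi - mx) ** 2 for xi in x)
    b = sp / ssx
    a = my - b * mx
    return a, b

a, b = regression([1, 2, 3, 4], [2, 3, 5, 6])
print(b)          # slope
print(a + b * 5)  # predicted Y' for X = 5
```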

CHAPTER 9 Interpretive Aspects of Correlation and Regression.

9.1 Factors Influencing r : Degree of Variability in Each Variable.

9.2 Interpretation of r : The Regression Equation I.

9.3 Interpretation of r : The Regression Equation II.

9.4 Interpretation of r : Proportion of Variation in Y Not Associated with Variation in X.

9.5 Interpretation of r : Proportion of Variation in Y Associated with Variation in X.

9.6 Interpretation of r : Proportion of Correct Placements.

9.7 Summary.

CHAPTER 10 Probability.

10.1 Defining Probability.

10.2 A Mathematical Model of Probability.

10.3 Two Theorems in Probability.

10.4 An Example of a Probability Distribution: The Binomial.

10.5 Applying the Binomial.

10.6 Probability and Odds.

10.7 Are Amazing Coincidences Really That Amazing?

10.8 Summary.
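
The binomial of Sections 10.4–10.5 is a one-line formula; a sketch in Python (math.comb requires Python 3.8+; the function name is illustrative):

```python
from math import comb  # binomial coefficient C(n, r), Python 3.8+

def binomial(n, r, p):
    """P(exactly r successes in n trials) = C(n, r) * p**r * (1-p)**(n-r)."""
    return comb(n, r) * p ** r * (1 - p) ** (n - r)

# Probability of exactly 8 heads in 10 tosses of a fair coin:
print(binomial(10, 8, 0.5))  # 45/1024 = 0.0439453125
```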

CHAPTER 11 Random Sampling and Sampling Distributions.

11.1 Random Sampling.

11.2 Using a Table of Random Numbers.

11.3 The Random Sampling Distribution of the Mean: An Introduction.

11.4 Characteristics of the Random Sampling Distribution of the Mean.

11.5 Using the Sampling Distribution of X̄ to Determine the Probability for Different Ranges of Values of X̄.

11.6 Random Sampling Without Replacement.

11.7 Summary.
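
In place of the table of random numbers in Section 11.2, software can draw the sample directly; a sketch using Python's standard library (the seed is only to make the example reproducible):

```python
import random

def draw_sample(population, n, seed=None):
    """Simple random sample without replacement -- the software analogue
    of reading n entries from a table of random numbers."""
    rng = random.Random(seed)        # seeded only for reproducibility
    return rng.sample(population, n)

sample = draw_sample(list(range(1, 101)), 5, seed=1)
print(sample)  # five distinct scores drawn from 1..100
```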

CHAPTER 12 Introduction to Statistical Inference: Testing Hypotheses about Single Means (z and t).

12.1 Testing a Hypothesis about a Single Mean.

12.2 The Null and Alternative Hypotheses.

12.3 When Do We Retain and When Do We Reject the Null Hypothesis?

12.4 Review of the Procedure for Hypothesis Testing.

12.5 Dr. Brown's Problem: Conclusion.

12.6 The Statistical Decision.

12.7 Choice of HA: One-Tailed and Two-Tailed Tests.

12.8 Review of Assumptions in Testing Hypotheses about a Single Mean.

Point of Controversy: The Single-Subject Research Design.

12.9 Estimating the Standard Error of the Mean When σ Is Unknown.

12.10 The t Distribution.

12.11 Characteristics of Student's Distribution of t.

12.12 Degrees of Freedom and Student's Distribution of t.

12.13 An Example: Has the Violent Content of Television Programs Increased?

12.14 Calculating t from Raw Scores.

12.15 Calculating t with IBM SPSS.

12.16 Levels of Significance versus p-Values.

12.17 Summary.
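
The one-sample t of Sections 12.9–12.14 in plain Python (illustrative names; the text computes it with IBM SPSS in Section 12.15):

```python
import math

def one_sample_t(scores, mu0):
    """t = (sample mean - hypothesized mean) / estimated standard error,
    with s the n - 1 sample SD and df = n - 1."""
    n = len(scores)
    mean = sum(scores) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in scores) / (n - 1))
    return (mean - mu0) / (s / math.sqrt(n))

print(one_sample_t([5, 6, 7, 8, 9], mu0=5))  # sample mean 7 vs. hypothesized 5
```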

CHAPTER 13 Interpreting the Results of Hypothesis Testing: Effect Size, Type I and Type II Errors, and Power.

13.1 A Statistically Significant Difference versus a Practically Important Difference.

Point of Controversy: The Failure to Publish “Nonsignificant” Results.

13.2 Effect Size.

13.3 Errors in Hypothesis Testing.

13.4 The Power of a Test.

13.5 Factors Affecting Power: Difference between the True Population Mean and the Hypothesized Mean (Size of Effect).

13.6 Factors Affecting Power: Sample Size.

13.7 Factors Affecting Power: Variability of the Measure.

13.8 Factors Affecting Power: Level of Significance (α).

13.9 Factors Affecting Power: One-Tailed versus Two-Tailed Tests.

13.10 Calculating the Power of a Test.

Point of Controversy: Meta-Analysis.

13.11 Estimating Power and Sample Size for Tests of Hypotheses about Means.

13.12 Problems in Selecting a Random Sample and in Drawing Conclusions.

13.13 Summary.

CHAPTER 14 Testing Hypotheses about the Difference between Two Independent Groups.

14.1 The Null and Alternative Hypotheses.

14.2 The Random Sampling Distribution of the Difference between Two Sample Means.

14.3 Properties of the Sampling Distribution of the Difference between Means.

14.4 Determining a Formula for t.

14.5 Testing the Hypothesis of No Difference between Two Independent Means: The Dyslexic Children Experiment.

14.6 Use of a One-Tailed Test.

14.7 Calculation of t with IBM SPSS.

14.8 Sample Size in Inference about Two Means.

14.9 Effect Size.

14.10 Estimating Power and Sample Size for Tests of Hypotheses about the Difference between Two Independent Means.

14.11 Assumptions Associated with Inference about the Difference between Two Independent Means.

14.12 The Random-Sampling Model versus the Random-Assignment Model.

14.13 Random Sampling and Random Assignment as Experimental Controls.

14.14 Summary.
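
A plain-Python sketch of the pooled-variance t of Section 14.4 (names illustrative; the text's computational route is IBM SPSS, Section 14.7):

```python
import math

def independent_t(x, y):
    """Pooled-variance t for two independent means, df = n1 + n2 - 2."""
    n1, n2 = len(x), len(y)
    mx, my = sum(x) / n1, sum(y) / n2
    ssx = sum((a - mx) ** 2 for a in x)
    ssy = sum((b - my) ** 2 for b in y)
    pooled = (ssx + ssy) / (n1 + n2 - 2)        # pooled variance estimate
    se = math.sqrt(pooled * (1 / n1 + 1 / n2))  # SE of the difference
    return (mx - my) / se

print(independent_t([4, 5, 6], [1, 2, 3]))
```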

CHAPTER 15 Testing for a Difference between Two Dependent (Correlated) Groups.

15.1 Determining a Formula for t.

15.2 Degrees of Freedom for Tests of No Difference between Dependent Means.

15.3 An Alternative Approach to the Problem of Two Dependent Means.

15.4 Testing a Hypothesis about Two Dependent Means: Does Text Messaging Impair Driving?

15.5 Calculating t with IBM SPSS.

15.6 Effect Size.

15.7 Power.

15.8 Assumptions When Testing a Hypothesis about the Difference between Two Dependent Means.

15.9 Problems with Using the Dependent-Samples Design.

15.10 Summary.
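
The difference-score approach of Section 15.1 in a short Python sketch (illustrative names):

```python
import math

def dependent_t(x1, x2):
    """Paired t: form difference scores D = X1 - X2, then test whether
    their mean differs from zero; df = n - 1."""
    d = [a - b for a, b in zip(x1, x2)]
    n = len(d)
    mean_d = sum(d) / n
    s_d = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (n - 1))
    return mean_d / (s_d / math.sqrt(n))

print(dependent_t([5, 6, 7, 8], [3, 5, 5, 6]))  # 7.0
```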

CHAPTER 16 Inference about Correlation Coefficients.

16.1 The Random Sampling Distribution of r.

16.2 Testing the Hypothesis that r = 0.

16.3 Fisher’s z' Transformation.

16.4 Strength of Relationship.

16.5 A Note about Assumptions.

16.6 Inference When Using Spearman's rS.

16.7 Summary.

CHAPTER 17 An Alternative to Hypothesis Testing: Confidence Intervals.

17.1 Examples of Estimation.

17.2 Confidence Intervals for μX.

17.3 The Relation between Confidence Intervals and Hypothesis Testing.

17.4 The Advantages of Confidence Intervals.

17.5 Random Sampling and Generalizing Results.

17.6 Evaluating a Confidence Interval.

Point of Controversy: Objectivity and Subjectivity in Inferential Statistics: Bayesian Statistics.

17.7 Confidence Intervals for μX - μY.

17.8 Sample Size Required for Confidence Intervals of μX and μX - μY.

17.9 Confidence Intervals for ρ.

17.10 Where are We in Statistical Reform?

17.11 Summary.
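
A sketch of the Section 17.2 interval in plain Python; the 1.96 default assumes a large-sample 95% interval, and for small n the critical value should come from the t table with n - 1 degrees of freedom instead:

```python
import math

def ci_for_mean(scores, critical=1.96):
    """Confidence interval for the population mean:
    sample mean +/- critical value * estimated standard error."""
    n = len(scores)
    mean = sum(scores) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in scores) / (n - 1))  # sample SD
    se = s / math.sqrt(n)                                          # std. error
    return mean - critical * se, mean + critical * se

low, high = ci_for_mean([5, 6, 7, 8, 9])
print(low, high)  # interval centered on the sample mean of 7
```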

CHAPTER 18 Testing for Differences among Three or More Groups: One-Way Analysis of Variance (and Some Alternatives).

18.1 The Null Hypothesis.

18.2 The Basis of One-Way Analysis of Variance: Variation within and between Groups.

18.3 Partition of the Sums of Squares.

18.4 Degrees of Freedom.

18.5 Variance Estimates and the F Ratio.

18.6 The Summary Table.

18.7 Example: Does Playing Violent Video Games Desensitize People to Real-Life Aggression?

18.8 Comparison of t and F.

18.9 Raw-Score Formulas for Analysis of Variance.

18.10 Calculation of ANOVA for Independent Measures with IBM SPSS.

18.11 Assumptions Associated with ANOVA.

18.12 Effect Size.

18.13 ANOVA and Power.

18.14 Post Hoc Comparisons.

18.15 Some Concerns about Post Hoc Comparisons.

18.16 An Alternative to the F Test: Planned Comparisons.

18.17 How to Construct Planned Comparisons.

18.18 Analysis of Variance for Repeated Measures.

18.19 Calculation of ANOVA for Repeated Measures with IBM SPSS.

18.20 Summary.
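
The partitioning of Sections 18.2–18.5 in a plain-Python sketch (illustrative names; the text uses IBM SPSS in Section 18.10):

```python
def one_way_anova(*groups):
    """One-way ANOVA: partition total variation into between-group and
    within-group sums of squares, then form F = MS_between / MS_within."""
    all_scores = [x for g in groups for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g)
                    for g, m in zip(groups, means))
    df_between = len(groups) - 1
    df_within = len(all_scores) - len(groups)
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

f, dfb, dfw = one_way_anova([1, 2, 3], [4, 5, 6], [7, 8, 9])
print(f, dfb, dfw)  # 27.0 2 6
```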

CHAPTER 19 Factorial Analysis of Variance: The Two-Factor Design.

19.1 Main Effects.

19.2 Interaction.

19.3 The Importance of Interaction.

19.4 Partition of the Sums of Squares for Two-Way ANOVA.

19.5 Degrees of Freedom.

19.6 Variance Estimates and F Tests.

19.7 Studying the Outcome of Two-Factor Analysis of Variance.

19.8 Effect Size.

19.9 Calculation of Two-Factor ANOVA with IBM SPSS.

19.10 Planned Comparisons.

19.11 Assumptions of the Two-Factor Design and the Problem of Unequal Numbers of Scores.

19.12 Mixed Two-Factor Within-Subjects Design.

19.13 Calculation of the Mixed Two-Factor Within-Subjects Design with IBM SPSS.

19.14 Summary.

CHAPTER 20 Chi-Square and Inference about Frequencies.

20.1 The Chi-Square Test for Goodness of Fit.

20.2 Chi-Square (χ2) as a Measure of the Difference between Observed and Expected Frequencies.

20.3 The Logic of the Chi-Square Test.

20.4 Interpretation of the Outcome of a Chi-Square Test.

20.5 Different Hypothesized Proportions in the Test for Goodness of Fit.

20.6 Effect Size for Goodness-of-Fit Problems.

20.7 Assumptions in the Use of the Theoretical Distribution of Chi-Square.

20.8 Chi-Square as a Test for Independence between Two Variables.

20.9 Finding Expected Frequencies in a Contingency Table.

20.10 Calculation of χ2 and Determination of Significance in a Contingency Table.

20.11 Measures of Effect Size (Strength of Association) for Tests of Independence.

Point of Controversy: Yates' Correction for Continuity.

20.12 Power and the Chi-Square Test of Independence.

20.13 Summary.
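
The goodness-of-fit statistic of Sections 20.1–20.2 in one line of Python (illustrative name):

```python
def chi_square(observed, expected):
    """chi-square = sum of (O - E)**2 / E over all categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# 60 heads and 40 tails against a fair-coin expectation of 50-50:
print(chi_square([60, 40], [50, 50]))  # 4.0, with df = 1
```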

CHAPTER 21 Some (Almost) Assumption-Free Tests.

21.1 The Null Hypothesis in Assumption-Freer Tests.

21.2 Randomization Tests.

21.3 Rank-Order Tests.

21.4 The Bootstrap Method of Statistical Inference.

21.5 An Assumption-Freer Alternative to the t Test of a Difference between Two Independent Groups: The Mann-Whitney U Test.

Point of Controversy: A Comparison of the t Test and Mann-Whitney U Test with Real-World Distributions.

21.6 An Assumption-Freer Alternative to the t Test of a Difference between Two Dependent Groups: The Sign Test.

21.7 Another Assumption-Freer Alternative to the t Test of a Difference between Two Dependent Groups: The Wilcoxon Signed-Ranks Test.

21.8 An Assumption-Freer Alternative to One-Way ANOVA for Independent Groups: The Kruskal–Wallis Test.

21.9 An Assumption-Freer Alternative to ANOVA for Repeated Measures: Friedman's Rank Test for Correlated Samples.

21.10 Summary.

APPENDIX A Review of Basic Mathematics.

APPENDIX B List of Symbols.

APPENDIX C Answers to Problems.

APPENDIX D Statistical Tables.

Table A: Areas under the Normal Curve Corresponding to Given Values of z.

Table B: The Binomial Distribution.