More About This Textbook
Overview
The sixth edition includes new information about the use of computers in statistics and offers screenshots of IBM SPSS (formerly SPSS) menus, dialog boxes, and output in selected chapters without sacrificing any of the conceptual logic or the statistical formulas needed to facilitate understanding. The example problems have been updated to reflect more current topics (e.g., text messaging while driving, violence in the media). The latest research and new photos have been integrated throughout the text to make the material more accessible. With these changes, students and professionals in the behavioral sciences will develop an understanding of statistical logic and procedures, the properties of statistical devices, and the importance of the assumptions underlying statistical tools.
Editorial Reviews
Booknews
A textbook primarily for students of psychology, education, and related fields whose only mathematical background is common arithmetic, an elementary understanding of simple equations, and the ability to plug numbers into an equation and boil them down to a single numerical answer. Emphasizes the development of concepts, but also explains procedures step-by-step with examples from the behavioral sciences. Considerably revised from the 1993 edition to lower the math requirements a half-step and focus more on fundamentals. It is also shorter, though it still contains more material than most instructors would try to fit into a single semester. Annotation c. by Book News, Inc., Portland, Or.
Table of Contents
1.1 Descriptive Statistics.
1.2 Inferential Statistics.
1.3 Our Concern: Applied Statistics.
1.4 Variables and Constants.
1.5 Scales of Measurement.
1.6 Scales of Measurement and Problems of Statistical Treatment.
1.7 Do Statistics Lie?
Point of Controversy: Are Statistical Procedures Necessary?
1.8 Some Tips on Studying Statistics.
1.9 Statistics and Computers.
1.10 Summary.
CHAPTER 2 Frequency Distributions, Percentiles, and Percentile Ranks.
2.1 Organizing Qualitative Data.
2.2 Grouped Scores.
2.3 How to Construct a Grouped Frequency Distribution.
2.4 Apparent versus Real Limits.
2.5 The Relative Frequency Distribution.
2.6 The Cumulative Frequency Distribution.
2.7 Percentiles and Percentile Ranks.
2.8 Computing Percentiles from Grouped Data.
2.9 Computation of Percentile Rank.
2.10 Summary.
CHAPTER 3 Graphic Representation of Frequency Distributions.
3.1 Basic Procedures.
3.2 The Histogram.
3.3 The Frequency Polygon.
3.4 Choosing between a Histogram and a Polygon.
3.5 The Bar Diagram and the Pie Chart.
3.6 The Cumulative Percentage Curve.
3.7 Factors Affecting the Shape of Graphs.
3.8 Shape of Frequency Distributions.
3.9 Summary.
CHAPTER 4 Central Tendency.
4.1 The Mode.
4.2 The Median.
4.3 The Mean.
4.4 Properties of the Mode.
4.5 Properties of the Mean.
Point of Controversy: Is It Permissible to Calculate the Mean for Tests in the Behavioral Sciences?
4.6 Properties of the Median.
4.7 Measures of Central Tendency in Symmetrical and Asymmetrical Distributions.
4.8 The Effects of Score Transformations.
4.9 Summary.
CHAPTER 5 Variability and Standard (z) Scores.
5.1 The Range and Semi-Interquartile Range.
5.2 Deviation Scores.
5.3 Deviational Measures: The Variance.
5.4 Deviational Measures: The Standard Deviation.
5.5 Calculation of the Variance and Standard Deviation: Raw-Score Method.
5.6 Calculation of the Standard Deviation with IBM SPSS (formerly SPSS).
Point of Controversy: Calculating the Sample Variance: Should We Divide by n or (n − 1)?
5.7 Properties of the Range and Semi-Interquartile Range.
5.8 Properties of the Standard Deviation.
5.9 How Big Is a Standard Deviation?
5.10 Score Transformations and Measures of Variability.
5.11 Standard Scores (z Scores).
5.12 A Comparison of z Scores and Percentile Ranks.
5.13 Summary.
CHAPTER 6 Standard Scores and the Normal Curve.
6.1 Historical Aspects of the Normal Curve.
6.2 The Nature of the Normal Curve.
6.3 Standard Scores and the Normal Curve.
6.4 The Standard Normal Curve: Finding Areas When the Score Is Known.
6.5 The Standard Normal Curve: Finding Scores When the Area Is Known.
6.6 The Normal Curve as a Model for Real Variables.
6.7 The Normal Curve as a Model for Sampling Distributions.
6.8 Summary.
Point of Controversy: How Normal Is the Normal Curve?
CHAPTER 7 Correlation.
7.1 Some History.
7.2 Graphing Bivariate Distributions: The Scatter Diagram.
7.3 Correlation: A Matter of Direction.
7.4 Correlation: A Matter of Degree.
7.5 Understanding the Meaning of Degree of Correlation.
7.6 Formulas for Pearson's Coefficient of Correlation.
7.7 Calculating r from Raw Scores.
7.8 Calculating r with IBM SPSS.
7.9 Spearman's Rank-Order Correlation Coefficient.
7.10 Correlation Does Not Prove Causation.
7.11 The Effects of Score Transformations.
7.12 Cautions Concerning Correlation Coefficients.
7.13 Summary.
CHAPTER 8 Prediction.
8.1 The Problem of Prediction.
8.2 The Criterion of Best Fit.
Point of Controversy: Least-Squares Regression versus the Resistant Line.
8.3 The Regression Equation: Standard-Score Form.
8.4 The Regression Equation: Raw-Score Form.
8.5 Error of Prediction: The Standard Error of Estimate.
8.6 An Alternative (and Preferred) Formula for S_{YX}.
8.7 Calculating the “Raw-Score” Regression Equation and Standard Error of Estimate with IBM SPSS.
8.8 Error in Estimating Y from X.
8.9 Cautions Concerning Estimation of Predictive Error.
8.10 Prediction Does Not Prove Causation.
8.11 Summary.
CHAPTER 9 Interpretive Aspects of Correlation and Regression.
9.1 Factors Influencing r: Degree of Variability in Each Variable.
9.2 Interpretation of r: The Regression Equation I.
9.3 Interpretation of r: The Regression Equation II.
9.4 Interpretation of r: Proportion of Variation in Y Not Associated with Variation in X.
9.5 Interpretation of r: Proportion of Variation in Y Associated with Variation in X.
9.6 Interpretation of r: Proportion of Correct Placements.
9.7 Summary.
CHAPTER 10 Probability.
10.1 Defining Probability.
10.2 A Mathematical Model of Probability.
10.3 Two Theorems in Probability.
10.4 An Example of a Probability Distribution: The Binomial.
10.5 Applying the Binomial.
10.6 Probability and Odds.
10.7 Are Amazing Coincidences Really That Amazing?
10.8 Summary.
CHAPTER 11 Random Sampling and Sampling Distributions.
11.1 Random Sampling.
11.2 Using a Table of Random Numbers.
11.3 The Random Sampling Distribution of the Mean: An Introduction.
11.4 Characteristics of the Random Sampling Distribution of the Mean.
11.5 Using the Sampling Distribution of X̄ to Determine the Probability for Different Ranges of Values of X̄.
11.6 Random Sampling Without Replacement.
11.7 Summary.
CHAPTER 12 Introduction to Statistical Inference: Testing Hypotheses about Single Means (z and t).
12.1 Testing a Hypothesis about a Single Mean.
12.2 The Null and Alternative Hypotheses.
12.3 When Do We Retain and When Do We Reject the Null Hypothesis?
12.4 Review of the Procedure for Hypothesis Testing.
12.5 Dr. Brown's Problem: Conclusion.
12.6 The Statistical Decision.
12.7 Choice of H_{A}: One-Tailed and Two-Tailed Tests.
12.8 Review of Assumptions in Testing Hypotheses about a Single Mean.
Point of Controversy: The Single-Subject Research Design.
12.9 Estimating the Standard Error of the Mean When σ Is Unknown.
12.10 The t Distribution.
12.11 Characteristics of Student's Distribution of t.
12.12 Degrees of Freedom and Student's Distribution of t.
12.13 An Example: Has the Violent Content of Television Programs Increased?
12.14 Calculating t from Raw Scores.
12.15 Calculating t with IBM SPSS.
12.16 Levels of Significance versus p-Values.
12.17 Summary.
CHAPTER 13 Interpreting the Results of Hypothesis Testing: Effect Size, Type I and Type II Errors, and Power.
13.1 A Statistically Significant Difference versus a Practically Important Difference.
Point of Controversy: The Failure to Publish “Nonsignificant” Results.
13.2 Effect Size.
13.3 Errors in Hypothesis Testing.
13.4 The Power of a Test.
13.5 Factors Affecting Power: Difference between the True Population Mean and the Hypothesized Mean (Size of Effect).
13.6 Factors Affecting Power: Sample Size.
13.7 Factors Affecting Power: Variability of the Measure.
13.8 Factors Affecting Power: Level of Significance (α).
13.9 Factors Affecting Power: One-Tailed versus Two-Tailed Tests.
13.10 Calculating the Power of a Test.
Point of Controversy: Meta-Analysis.
13.11 Estimating Power and Sample Size for Tests of Hypotheses about Means.
13.12 Problems in Selecting a Random Sample and in Drawing Conclusions.
13.13 Summary.
CHAPTER 14 Testing Hypotheses about the Difference between Two Independent Groups.
14.1 The Null and Alternative Hypotheses.
14.2 The Random Sampling Distribution of the Difference between Two Sample Means.
14.3 Properties of the Sampling Distribution of the Difference between Means.
14.4 Determining a Formula for t.
14.5 Testing the Hypothesis of No Difference between Two Independent Means: The Dyslexic Children Experiment.
14.6 Use of a One-Tailed Test.
14.7 Calculation of t with IBM SPSS.
14.8 Sample Size in Inference about Two Means.
14.9 Effect Size.
14.10 Estimating Power and Sample Size for Tests of Hypotheses about the Difference between Two Independent Means.
14.11 Assumptions Associated with Inference about the Difference between Two Independent Means.
14.12 The Random-Sampling Model versus the Random-Assignment Model.
14.13 Random Sampling and Random Assignment as Experimental Controls.
14.14 Summary.
CHAPTER 15 Testing for a Difference between Two Dependent (Correlated) Groups.
15.1 Determining a Formula for t.
15.2 Degrees of Freedom for Tests of No Difference between Dependent Means.
15.3 An Alternative Approach to the Problem of Two Dependent Means.
15.4 Testing a Hypothesis about Two Dependent Means: Does Text Messaging Impair Driving?
15.5 Calculating t with IBM SPSS.
15.6 Effect Size.
15.7 Power.
15.8 Assumptions When Testing a Hypothesis about the Difference between Two Dependent Means.
15.9 Problems with Using the Dependent-Samples Design.
15.10 Summary.
CHAPTER 16 Inference about Correlation Coefficients.
16.1 The Random Sampling Distribution of r.
16.2 Testing the Hypothesis that r = 0.
16.3 Fisher’s z' Transformation.
16.4 Strength of Relationship.
16.5 A Note about Assumptions.
16.6 Inference When Using Spearman’s r_{S}.
16.7 Summary.
CHAPTER 17 An Alternative to Hypothesis Testing: Confidence Intervals.
17.1 Examples of Estimation.
17.2 Confidence Intervals for μ_{X}.
17.3 The Relation between Confidence Intervals and Hypothesis Testing.
17.4 The Advantages of Confidence Intervals.
17.5 Random Sampling and Generalizing Results.
17.6 Evaluating a Confidence Interval.
Point of Controversy: Objectivity and Subjectivity in Inferential Statistics: Bayesian Statistics.
17.7 Confidence Intervals for μ_{X} − μ_{Y}.
17.8 Sample Size Required for Confidence Intervals of μ_{X} and μ_{X} − μ_{Y}.
17.9 Confidence Intervals for ρ.
17.10 Where Are We in Statistical Reform?
17.11 Summary.
CHAPTER 18 Testing for Differences among Three or More Groups: One-Way Analysis of Variance (and Some Alternatives).
18.1 The Null Hypothesis.
18.2 The Basis of One-Way Analysis of Variance: Variation within and between Groups.
18.3 Partition of the Sums of Squares.
18.4 Degrees of Freedom.
18.5 Variance Estimates and the F Ratio.
18.6 The Summary Table.
18.7 Example: Does Playing Violent Video Games Desensitize People to RealLife Aggression?
18.8 Comparison of t and F.
18.9 Raw-Score Formulas for Analysis of Variance.
18.10 Calculation of ANOVA for Independent Measures with IBM SPSS.
18.11 Assumptions Associated with ANOVA.
18.12 Effect Size.
18.13 ANOVA and Power.
18.14 Post Hoc Comparisons.
18.15 Some Concerns about Post Hoc Comparisons.
18.16 An Alternative to the F Test: Planned Comparisons.
18.17 How to Construct Planned Comparisons.
18.18 Analysis of Variance for Repeated Measures.
18.19 Calculation of ANOVA for Repeated Measures with IBM SPSS.
18.20 Summary.
CHAPTER 19 Factorial Analysis of Variance: The Two-Factor Design.
19.1 Main Effects.
19.2 Interaction.
19.3 The Importance of Interaction.
19.4 Partition of the Sums of Squares for Two-Way ANOVA.
19.5 Degrees of Freedom.
19.6 Variance Estimates and F Tests.
19.7 Studying the Outcome of Two-Factor Analysis of Variance.
19.8 Effect Size.
19.9 Calculation of Two-Factor ANOVA with IBM SPSS.
19.10 Planned Comparisons.
19.11 Assumptions of the Two-Factor Design and the Problem of Unequal Numbers of Scores.
19.12 Mixed Two-Factor Within-Subjects Design.
19.13 Calculation of the Mixed Two-Factor Within-Subjects Design with IBM SPSS.
19.14 Summary.
CHAPTER 20 Chi-Square and Inference about Frequencies.
20.1 The Chi-Square Test for Goodness of Fit.
20.2 Chi-Square (χ^{2}) as a Measure of the Difference between Observed and Expected Frequencies.
20.3 The Logic of the Chi-Square Test.
20.4 Interpretation of the Outcome of a Chi-Square Test.
20.5 Different Hypothesized Proportions in the Test for Goodness of Fit.
20.6 Effect Size for Goodness-of-Fit Problems.
20.7 Assumptions in the Use of the Theoretical Distribution of Chi-Square.
20.8 Chi-Square as a Test for Independence between Two Variables.
20.9 Finding Expected Frequencies in a Contingency Table.
20.10 Calculation of χ^{2} and Determination of Significance in a Contingency Table.
20.11 Measures of Effect Size (Strength of Association) for Tests of Independence.
Point of Controversy: Yates' Correction for Continuity.
20.12 Power and the Chi-Square Test of Independence.
20.13 Summary.
CHAPTER 21 Some (Almost) Assumption-Free Tests.
21.1 The Null Hypothesis in Assumption-Freer Tests.
21.2 Randomization Tests.
21.3 Rank-Order Tests.
21.4 The Bootstrap Method of Statistical Inference.
21.5 An Assumption-Freer Alternative to the t Test of a Difference between Two Independent Groups: The Mann-Whitney U Test.
Point of Controversy: A Comparison of the t Test and Mann-Whitney U Test with Real-World Distributions.
21.6 An Assumption-Freer Alternative to the t Test of a Difference between Two Dependent Groups: The Sign Test.
21.7 Another Assumption-Freer Alternative to the t Test of a Difference between Two Dependent Groups: The Wilcoxon Signed-Ranks Test.
21.8 An Assumption-Freer Alternative to One-Way ANOVA for Independent Groups: The Kruskal–Wallis Test.
21.9 An Assumption-Freer Alternative to ANOVA for Repeated Measures: Friedman's Rank Test for Correlated Samples.
21.10 Summary.
APPENDIX A Review of Basic Mathematics.
APPENDIX B List of Symbols.
APPENDIX C Answers to Problems.
APPENDIX D Statistical Tables.
Table A: Areas under the Normal Curve Corresponding to Given Values of z.
Table B: The Binomial Distribution.
Table C: Random Numbers.
Table D: Student's t Distribution.
Table E: The F Distribution.
Table F: The Studentized Range Statistic.
Table G: Values of the Correlation Coefficient Required for Different Levels of Significance When H_{0}: r = 0.
Table H: Values of Fisher's z' for Values of r.
Table I: The χ^{2} Distribution.
Table J: Critical One-Tail Values of SR_{X} for the Mann-Whitney U Test.
Table K: Critical Values for the Smaller of R_{+} or R_{−} for the Wilcoxon Signed-Ranks Test.
Epilogue: The Realm of Statistics.
REFERENCES.
INDEX.