Quantitative Research Methods for Professionals in Education and Other Fields / Edition 1

by W. Paul Vogt
ISBN-10: 0205359132
ISBN-13: 9780205359134
Pub. Date: 02/10/2006
Publisher: Pearson
Price: $142.20 (new, available online through Marketplace sellers); $54.37 (used, Condition: Good; temporarily out of stock online).
Note: Access code and/or supplemental material are not guaranteed to be included with used textbooks.

Overview

This concise text discusses a wide range of quantitative research methods, including advanced techniques such as logistic regression, multilevel modeling, and structural equation modeling. Because the text emphasizes concepts rather than mathematics and computational formulas, it is accessible to a wide range of users of research. Professional practitioners in areas such as education, business, social work, and psychology can gain an understanding of research methods sufficient to base their work on advanced research in their fields. The text covers the quantitative designs and analytic techniques most needed by students in the social sciences and in applied disciplines such as education, social work, and business. It teaches what the various methods mean, when to use them, and how to interpret their results. Because it emphasizes general understanding rather than mathematical foundations, students are able to review a broad range of methods in a comparatively short space.


Product Details

ISBN-13: 9780205359134
Publisher: Pearson
Publication date: 02/10/2006
Edition description: 1ST
Pages: 352
Product dimensions: 7.50(w) x 9.10(h) x 0.80(d)

Table of Contents

Preface.

Acknowledgements.

I. THE BASICS.

Introduction to Part I.

1. Design, Measurement, and Analysis.

What is the role of research questions in the process of planning research?

How are design, measurement, and analysis defined and related?

What are the main types of research design?

What is measurement and what are its main types?

What different kinds of statistical analysis are there?

What is statistical significance?

How have recent controversies changed statistical practice?

What are Type I and Type II errors, and why should I care?

2. Standard Deviation and Correlation.

What is a standard deviation and what does it tell us?

How do we calculate the standard deviation and the variance?

What are standard scores and how can we use them?

What is the normal distribution and how is it related to standard scores?

What is a correlation coefficient and how do we interpret it?

How is a correlation coefficient calculated?

How do we interpret correlations and their statistical significance?

How can correlations be used to find and interpret relations in real data?

What is a large correlation?

What is linearity, and why is it important for interpreting correlations?

What is the relationship of correlation and cause?

3. Variables and the Relations Among Them.

How are different types of variables related?

How can we depict relations among variables and use the depictions to understand our research questions?

How does the inclusion of effect modifiers make our understanding of our research questions more realistic?

What is causal modeling and how do we move from graphics to equations--and back again?

How can we use causal modeling to think about a research topic? The example of parental involvement.

How can we use causal modeling to think about a research topic? The example of student advisory programs.

What is the nature of causation when studying research problems?

What are the criteria for assessing causation?

4. The Uses of Descriptive Statistics.

How do researchers use the term "descriptive" statistics?

How are descriptive statistics used to depict populations and samples?

What are measures of central tendency and how does one choose among them?

How do we explore the shape of data distributions?

How does the theoretical normal distribution relate to descriptions of actual data distributions?

What do you do if your data are not continuous and not (approximately) normally distributed?

What are non-parametric statistical techniques and how are they used?

How can we use descriptive statistics to check assumptions that have to be true for the proper use of other techniques?

What are some substantive uses of descriptive (non-inferential) statistics?

5. Surveys and Random Sampling.

What criteria define a good sample?

What are the main varieties of probability samples and what are the chief features of each?

What can be learned from non-probability samples?

How important is sample size?

How can surveys be designed to elicit the most valuable responses?

How can questions be written so they will lead to effective measurement?

How can responses to survey questions be analyzed?

When are surveys likely to be a wise design choice?

6. Experiments and Random Assignment.

What is random assignment and why is it so important?

How are experimental results (size of outcomes) reported?

What are control groups and why are they so important?

What advantage does control over the independent variables confer?

What are the basic types of experimental design?

When do ethical issues become important in experimental research?

What analytic tools are used to interpret experimental results?

What are the relations of populations and samples in experimental research?

When are field experiments and quasi-experiments likely to be good design choices?

In general, what are the advantages and disadvantages of experimental methods?

7. Reliability and Validity.

What is reliability, and how does it relate to operational definitions?

What are the main measures of reliability?

What is Cronbach's alpha, and how is it used to assess measurement scales?

Why does the reliability of measurements matter?

What is validity, and how is it related to reliability?

What are the main types of validity, and how are they assessed?

What if you use existing measures for which reliability and validity have already been calculated?

What are threats to validity, and how can they be used to assess designs, measurements, and analyses?

What are the main threats to validity in research not conducted in laboratory settings?

Why do we take a "negative" approach by focusing on threats and problems?

8. Statistical Inference.

What are the controversies about statistical inference?

How is statistical significance related to the importance of research findings?

How does estimation with confidence intervals compare to statistical significance?

How can we interpret computer output for t-tests and confidence intervals?

How can we interpret computer output for ANOVA?

What are standard errors, and how are they used in statistical inference?

What is statistical power and why is it important?

9. Regression Analysis.

What is regression analysis?

What are the basic questions that regression answers?

How do we read the output of a regression analysis?

What is a regression equation, and how do we use it to predict or explain?

How are regression and correlation related?

Are regression and ANOVA antagonistic methods?

How can we use regression to analyze real data on a real problem?

What are the uses and the limitations of regression analysis?

II. ADVANCED METHODS.

Introduction to Part II.

10. Back to Regression.

How do we decide which variables and how many variables to include in the analysis?

How can we handle categorical independent variables when they have more than two categories?

What happens to the analysis when the independent variables are highly correlated with one another?

How do we handle regression analysis when there are missing data?

What do we do when the data violate the assumption of linearity and other assumptions?

How do we test for interaction effects?

How are regression and ANOVA related in their approaches to advanced data analysis?

How do we use regression results to predict the likely results of a change?

11. Methods for Categorical Variables Including Logistic Regression.

How is the chi-squared test used to study categorical variables?

What are the limitations of the chi-squared approach when applied to many variables with multiple categories?

How can odds and odds-ratios be used to study categorical variables?

How can log-linear methods deepen our understanding of contingency table problems?

How do two regression-based methods, discriminant analysis and probit regression, compare in their analysis of categorical dependent variables?

What is logistic regression and how is it used?

Which methods work best for what sorts of problems?

12. Multi-level Modeling.

What kinds of research questions require the use of multi-level analyses?

What technical analysis problems give rise to the need for multi-level models?

How does multi-level analysis work in practice?

How can MLM be applied to concrete research problems (six examples)?

Example 1: Why do high school students flunk courses?

Example 2: Can school-wide curricular reforms promote student achievement?

Example 3: Do schools create the "learning gap"?

Example 4: Do small schools promote learning?

Example 5: Do small classes increase academic achievement?

Example 6: What are the effects of school budgets on student learning?

13. Factor Analysis.

What is factor analysis, conceptually?

How is factor analysis (FA) related to principal components analysis (PCA)?

How do we select the method of analysis--either PCA or one of the methods of factor analysis?

How do we begin the analysis and determine the number of factors to use?

How do we identify (extract) the factors?

How do we fine-tune our factor solution to make it more interpretable?

How do we interpret the output of a factor analysis?

14. Structural Equation Modeling.

What is a model and how do we build one?

How do we incorporate factor analysis into the model?

How do we calculate the results of a full measurement and structural model?

How do we use what we have learned to interpret actual research results?

III. SPECIALIZED APPLICATIONS OF QUANTITATIVE METHODS.

Introduction to Part III.

15. Program Evaluation.

Different goals and audiences for research

Problems and criticisms of evaluation research

Process and outcome: formative and summative evaluations

A program's life cycle

20 concluding tips

16. Test Item Analysis, Individual Assessment, and Accountability.

Test theory, classical and modern

Applications of Rasch and IRT models: examples

How tests of students are used to evaluate schools and teachers

Provisions of the NCLB

How to measure proficiency and adequate yearly progress

Cut scores

Value-added methods

Simpson's paradox

17. Reviewing, Critiquing, and Synthesizing Research.

How to find research to review

What to look for when reviewing research reports: a checklist

What to include (and exclude) and how to write it up

Introduction to meta-analysis or research synthesis

Narratives versus numbers in research reviews

Comparing apples, oranges, and clones in research reviews

Populations and samples in meta-analysis

Quantitative techniques in meta-analysis

References.

Appendix: Answers for the Self-Tests.
