Bibliometrics and Research Evaluation: Uses and Abuses
Why bibliometrics is useful for understanding the global dynamics of science but generates perverse effects when applied inappropriately in research evaluation and university rankings.

The research evaluation market is booming. “Ranking,” “metrics,” “h-index,” and “impact factors” are reigning buzzwords. Government and research administrators want to evaluate everything—teachers, professors, training programs, universities—using quantitative indicators. Among the tools used to measure “research excellence,” bibliometrics—aggregate data on publications and citations—has become dominant. Bibliometrics is hailed as an “objective” measure of research quality, a quantitative measure more useful than “subjective” and intuitive evaluation methods such as peer review that have been used since scientific papers were first published in the seventeenth century. In this book, Yves Gingras offers a spirited argument against an unquestioning reliance on bibliometrics as an indicator of research quality. Gingras shows that bibliometric rankings have no real scientific validity, rarely measuring what they pretend to.

Although the study of publication and citation patterns, at the proper scales, can yield insights on the global dynamics of science over time, ill-defined quantitative indicators often generate perverse and unintended effects on the direction of research. Moreover, abuse of bibliometrics occurs when data is manipulated to boost rankings. Gingras looks at the politics of evaluation and argues that using numbers can be a way to control scientists and diminish their autonomy in the evaluation process. Proposing precise criteria for establishing the validity of indicators at a given scale of analysis, Gingras questions why universities are so eager to let invalid indicators influence their research strategy.


by Yves Gingras
Hardcover

$30.00


Product Details

ISBN-13: 9780262035125
Publisher: MIT Press
Publication date: 10/07/2016
Series: History and Foundations of Information Science
Pages: 136
Product dimensions: 6.10(w) x 9.10(h) x 0.80(d)
Age Range: 18 Years

About the Author

Yves Gingras is Professor and Canada Research Chair in History and Sociology of Science, Department of History, at Université du Québec à Montréal.

What People are Saying About This

Endorsement

From RG Scores, to Altmetric, SciVal, Author Rank, and h-indexes—to be a scholar today is to participate in a wild economy of metrics poised to exert wide-reaching and intimate control over the valuation of knowledge. Gingras's manifesto calls academics to put to the test such data-driven indicators by asking, 'what is the proper measure of a metric?'

Jean-François Blanchette, Associate Professor, Department of Information Studies, UCLA

From the Publisher

Bibliometrics and Research Evaluation takes the reader through the challenges surrounding the use of quantitative indicators in research evaluation, considers the reasons behind their current misuse, and offers proposals and examples of the contributions that indicators can make to the analysis of the dynamics of science and to the evidence base for research policy. It provides a cautionary account that needs to be read by all those involved in the management of research, and who are perhaps being tempted by the promise that easy-to-access and easy-to-interpret indicators will deliver them from the need to make difficult decisions under conditions of uncertainty. It also provides an invaluable primer on the use of indicators in research evaluation for all those considering a research career in these fields. Finally, it will become a work of reference, and a very enjoyable read, for all those who, because of their work, may be better acquainted with the arguments it develops.

Jordi Molas-Gallart, Professor, Spanish National Research Council (CSIC)

