Combining Pattern Classifiers: Methods and Algorithms / Edition 1

by Ludmila I. Kuncheva
ISBN-10: 0471210781

ISBN-13: 9780471210788

Pub. Date: 07/01/2004

Publisher: Wiley

Overview

A unified, coherent, and expansive treatment of current classifier ensemble methods

Mail sorting, medical test reading, military target recognition, signature verification, meteorological forecast, DNA matching, fingerprint recognition. These are just a few of the areas requiring reliable, precise pattern recognition.

Although in the past, pattern recognition has focused on designing single classifiers, recently the focus has been on combining several classifiers and getting a consensus of results for greater accuracy. This interest in combining classifiers has grown astronomically in recent years, evolving into a rich and dynamic, if loosely structured, discipline. Combining Pattern Classifiers: Methods and Algorithms represents the first attempt to provide a comprehensive survey of this fast-growing field. In a clear and straightforward manner, the author provides a much-needed road map through a multifaceted and often controversial subject while effectively organizing and systematizing the current state of the art.

Covering a broad range of methodologies, algorithms, and theories, the text addresses such questions as:

  • Why should we combine classifiers?
  • What are the current approaches for building classifier ensembles?
  • What fusion methods can we use?
  • How do we measure diversity in a classifier ensemble and is diversity really a key factor to its success?
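As a taste of the simplest fusion method the book treats (the majority vote of Chapter 4), here is a minimal sketch; the function name and the toy label outputs are illustrative, not taken from the book.

```python
from collections import Counter

def majority_vote(labels):
    """Return the class label predicted by the most base classifiers.

    Ties are broken in favor of the label encountered first,
    following Counter.most_common ordering.
    """
    return Counter(labels).most_common(1)[0][0]

# Label outputs of three hypothetical base classifiers for one sample:
predictions = ["cat", "dog", "cat"]
print(majority_vote(predictions))  # → cat
```

A weighted variant (Chapter 4.3) would scale each classifier's vote by an estimate of its accuracy instead of counting votes equally.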

Replete with case studies and real-world applications, this groundbreaking text will be of interest to academics and researchers in the field seeking both new classification tools and new uses for the old ones.

Product Details

ISBN-13: 9780471210788
Publisher: Wiley
Publication date: 07/01/2004
Edition description: Older Edition
Pages: 376
Product dimensions: 6.48(w) x 9.47(h) x 1.09(d)

Table of Contents

Preface.

Acknowledgments.

Notations and Acronyms.

1. Fundamentals of Pattern Recognition.

1.1 Basic Concepts: Class, Feature, Data Set.

1.2 Classifier, Discriminant Functions, Classification Regions.

1.3 Classification Error and Classification Accuracy.

1.4 Experimental Comparison of Classifiers.

1.5 Bayes Decision Theory.

1.6 A Taxonomy of Classifier Design Methods.

1.7 Clustering.

Appendix.

2. Base Classifiers.

2.1 Linear and Quadratic Classifiers.

2.2 Nonparametric Classifiers.

2.3 The k-nearest Neighbor Rule.

2.4 Tree Classifiers.

2.5 Neural Networks.

Appendix.

3. Multiple Classifier Systems.

3.1 Philosophy.

3.2 Terminologies and Taxonomies.

3.3 To Train or Not to Train?

3.4 Remarks.

4. Fusion of Label Outputs.

4.1 Types of Classifier Outputs.

4.2 Majority Vote.

4.3 Weighted Majority Vote.

4.4 “Naïve”-Bayes Combination.

4.5 Multinomial Methods.

4.6 Probabilistic Approximation.

4.7 SVD Combination.

4.8 Conclusions.

Appendix.

5. Fusion of Continuous-Valued Outputs.

5.1 How Do We Get Probability Outputs?

5.2 Class-Conscious Combiners.

5.3 Class-Indifferent Combiners.

5.4 Where Do the Simple Combiners Come From?

Appendix.

6. Classifier Selection.

6.1 Preliminaries.

6.2 Why Classifier Selection Works.

6.3 Estimating Local Competence Dynamically.

6.4 Pre-estimation of the Competence Regions.

6.5 Selection or Fusion?

6.6 Base Classifiers and Mixture of Experts.

7. Bagging and Boosting.

7.1 Bagging.

7.2 Boosting.

7.3 Bias-Variance Decomposition.

7.4 Which is Better: Bagging or Boosting?

Appendix.

8. Miscellanea.

8.1 Feature Selection.

8.2 Error Correcting Output Codes (ECOC).

8.3 Combining Clustering Results.

Appendix.

9. Theoretical Views and Results.

9.1 Equivalence of Simple Combination Rules.

9.2 Added Error for the Mean Combination Rule.

9.3 Added Error for the Weighted Mean Combination.

9.4 Ensemble Error for Normal and Uniform Distributions.

10. Diversity in Classifier Ensembles.

10.1 What is Diversity?

10.2 Measuring Diversity in Classifier Ensembles.

10.3 Relationship Between Diversity and Accuracy.

10.4 Using Diversity.

10.5 Conclusions: Diversity of Diversity.

Appendix A: Equivalence Between the Averaged Disagreement Measure Dav and Kohavi—Wolpert KW.

Appendix B: Matlab Code for Some Overproduce and Select Algorithms.

References.

Index.
