
Computer Vision Metrics: Survey, Taxonomy, and Analysis [NOOK Book]





Computer Vision Metrics provides an extensive survey and analysis of over 100 current and
historical feature description and machine vision methods, with a detailed taxonomy
for local, regional, and global features. The book provides the background needed to
develop intuition about why interest point detectors and feature descriptors actually
work and how they are designed, with observations about tuning the methods to achieve
robustness and invariance targets for specific applications. The survey is broader than
it is deep, with over 540 references provided for digging deeper. The taxonomy includes
search methods, spectra components, descriptor representation, shape, distance functions,
accuracy, efficiency, robustness, and invariance attributes, and more. Rather than
providing ‘how-to’ source code examples and shortcuts, this book provides a counterpoint
discussion to the many fine OpenCV community source code resources available for hands-on
projects.
What you’ll learn
  • Interest point and descriptor concepts (interest points, corners, ridges, blobs, contours, edges, maxima), interest point tuning and culling, interest point methods (Laplacian, LoG, Moravec, Harris, Harris-Stephens, Shi-Tomasi, Hessian, difference of Gaussians, salient regions, MSER, SUSAN, FAST, FASTER, AGAST, local curvature, morphological regions, and more), descriptor concepts (shape, sampling pattern, spectra, gradients, binary patterns, basis features), feature descriptor families.
  • Local binary descriptors.
  • Gradient descriptors.
  • Shape descriptors (image moments, area, perimeter, centroid, D-NETS, chain codes, Fourier descriptors, wavelets, and more); texture descriptors, structural and statistical (Haralick, SDM, extended SDM, edge metrics, Laws metrics, RILBP, and more).
  • 3D descriptors for depth-based, volumetric, and activity recognition spatio-temporal data sets (3D HOG, HON4D, 3D SIFT, LBP-TOP, VLBP, and more).
  • Basis space descriptors (Zernike moments, KL, SLANT, steerable filter basis sets, sparse coding, codebooks, descriptor vocabularies, and more), Haar methods (SURF, USURF, MUSURF, GSURF, Viola-Jones, and more), descriptor-based image reconstruction.
  • Distance functions (Euclidean, SAD, SSD, correlation, Hellinger, Manhattan, Chebyshev, EMD, Wasserstein, Mahalanobis, Bray-Curtis, Canberra, L0, Hamming, Jaccard), coordinate spaces, robustness and invariance criteria.
  • Image formation, including CCD and CMOS sensors for 2D and 3D imaging and sensor processing topics, with a survey identifying over fourteen 3D depth sensing methods, with emphasis on stereo, MVS, and structured light.
  • Image pre-processing examples targeting specific feature descriptor families (point, line, and area methods; basis space methods) and colorimetry (CIE, HSV, RGB, CAM02, gamut mapping, and more).
  • Ground truth data: best practices and examples are provided, with a survey of real and synthetic datasets.
  • Vision pipeline optimizations: mapping algorithms to compute resources (CPU, GPU, DSP, and more), hypothetical high-level vision pipeline examples (face recognition, object recognition, image classification, augmented reality), and optimization alternatives weighing performance and power to make effective use of SIMD, VLIW, kernels, threads, parallel languages, memory, and more.
  • Synthetic interest point alphabet analysis against 10 common OpenCV detectors (SIFT, SURF, BRISK, FAST, HARRIS, GFTT, MSER, ORB, STAR, SIMPLEBLOB) to develop intuition about how different classes of detectors actually work. Source code provided online.
  • Visual learning concepts: although not the focus of this book, a light introduction is provided to machine learning and statistical learning topics, such as convolutional networks, neural networks, classification and training, clustering, and error minimization methods (SVMs, kernel machines, KNN, RANSAC, HMM, GMM, LM, and more). Ample references are provided to dig deeper.
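To make two of the topics above concrete — local binary descriptors and the Hamming distance function used to match them — here is a minimal sketch, not drawn from the book, of an 8-neighbor local binary pattern (LBP) code over a 3×3 patch. The clockwise bit ordering and the >= comparison are illustrative conventions, not the book's.

```python
def lbp_code(patch):
    """8-bit local binary pattern for a 3x3 patch: each neighbor
    contributes a 1-bit if it is >= the center pixel, assembled
    clockwise from the top-left into one byte."""
    c = patch[1][1]
    # Clockwise neighbor order starting at the top-left corner.
    neighbors = [patch[0][0], patch[0][1], patch[0][2],
                 patch[1][2], patch[2][2], patch[2][1],
                 patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbors):
        if n >= c:
            code |= 1 << bit
    return code

def hamming(a, b):
    """Hamming distance between two integer bit codes: the number
    of differing bits, i.e. the popcount of their XOR."""
    return bin(a ^ b).count("1")

patch = [[9, 8, 1],
         [7, 5, 2],
         [6, 4, 3]]
code = lbp_code(patch)  # bits set for neighbors 9, 8, 6, 7
```

Note the invariance property the taxonomy classifies descriptors by: any brightness change that preserves each neighbor's ordering relative to the center (for example, adding a constant) leaves the code unchanged. Matching such binary codes with Hamming distance costs just an XOR plus a popcount, which is why binary descriptors are attractive on constrained hardware.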
Who this book is for

Engineers, scientists, and academic researchers in areas including media processing, computational photography, video analytics, scene understanding, machine vision, face recognition, gesture recognition, pattern recognition and general object analysis.

Table of Contents

Chapter 1. Image Capture and Representation

Chapter 2. Image Pre-Processing

Chapter 3. Global and Regional Features

Chapter 4. Local Feature Design Concepts, Classification, and Learning

Chapter 5. Taxonomy of Feature Description Attributes

Chapter 6. Interest Point Detector and Feature Descriptor

Chapter 7. Ground Truth Data, Content, Metrics, and Analysis

Chapter 8. Vision Pipelines and Optimizations

Appendix A. Synthetic Feature Analysis

Appendix B. Survey of Ground Truth Datasets

Appendix C. Imaging and Computer Vision Resources

Appendix D. Extended SDM Metrics


Product Details

  • ISBN-13: 9781430259305
  • Publisher: Apress
  • Publication date: 5/22/2014
  • Sold by: Barnes & Noble
  • Format: eBook
  • Edition number: 1
  • Pages: 508
  • Sales rank: 356,166
  • File size: 12 MB

Meet the Author

Scott Krig is a pioneer in computer imaging, computer vision, and graphics visualization. He founded Krig Research in 1988, providing the world’s first imaging and vision systems based on high-performance engineering workstations, supercomputers, and dedicated imaging hardware, serving customers worldwide in 25 countries. Scott has provided imaging and vision solutions around the globe and has worked closely with many industries, including aerospace, military, intelligence, law enforcement, government research, and academic organizations.

More recently, Scott has worked for major corporations and startups serving commercial markets, solving problems in the areas of computer vision, imaging, graphics, visualization, robotics, process control, industrial automation, computer security, cryptography, and consumer applications of imaging and machine vision to PCs, laptops, mobile phones, and tablets. Most recently, Scott provided direction for Intel Corporation in the area of depth-sensing and computer vision methods for embedded systems and mobile platforms.

Scott is the author of many patent applications worldwide in the areas of embedded systems, imaging, computer vision, DRM, and computer security, and studied at Stanford.

