Test and Measurement: Know It All

by Jon S. Wilson, Stuart Ball, Creed Huddleston, Edward Ramsden


Overview

The Newnes Know It All Series takes the best of what our authors have written to create hard-working desk references that will be an engineer's first port of call for key information, design techniques and rules of thumb. Guaranteed not to gather dust on a shelf!

Field application engineers need to master a wide range of topics to excel. The Test and Measurement Know It All covers every angle, including machine vision and inspection, communications testing, and compliance testing, along with automotive, aerospace, and defense testing.

• A 360-degree view from our best-selling authors
• Topics include the Technology of Test and Measurement, Measurement System Types, and Instrumentation for Test and Measurement.
• The ultimate hard-working desk reference; all the essential information, techniques and tricks of the trade in one volume

Product Details

ISBN-13: 9780080949680
Publisher: Elsevier Science
Publication date: 09/26/2008
Series: Newnes Know It All
Sold by: Barnes & Noble
Format: NOOK Book
Pages: 912
File size: 16 MB
Note: This product may take a few minutes to download.

Read an Excerpt

Test and Measurement


By Jon Wilson, Walt Kester, Stuart Ball, G. M. S. de Silva, Dogan Ibrahim, Kevin James, Tim Williams, Michael Laughton, Douglas Warne, Chris Nadovich, Alex Porter, Ed Ramsden, Tony Fischer-Cripps, and Steve Scheiber

Newnes

Copyright © 2009 Elsevier Inc.
All rights reserved.

ISBN: 978-0-08-094968-0


Chapter One

Fundamentals of Measurement

G. M. S. de Silva

1.1 Introduction

Metrology, or the science of measurement, is a discipline that plays an important role in sustaining modern societies. It deals not only with the measurements that we make in day-to-day living, such as at the shop or the petrol station, but also with those made in industry, science, and technology. The technological advancement of the present-day world would not have been possible if not for the contribution made by metrologists all over the world to maintain accurate measurement systems.

The earliest metrological activity has been traced back to prehistoric times. For example, a beam balance dated to 5000 BC has been found in a tomb in Nagada in Egypt. It is well known that Sumerians and Babylonians had well-developed systems of numbers. The very high level of astronomy and advanced status of time measurement in these early Mesopotamian cultures contributed much to the development of science in later periods in the rest of the world. The colossal stupas (large hemispherical domes) of Anuradhapura and Polonnaruwa and the great tanks and canals of the hydraulic civilization bear ample testimony to the advanced system of linear and volume measurement that existed in ancient Sri Lanka.

There is evidence that well-established measurement systems existed in the Indus Valley and Mohenjo-Daro civilizations. In fact, the number system we use today, known as the Indo-Arabic numbers, with positional notation for the symbols 1–9 and the concept of zero, was introduced into western societies by an English monk who translated the books of the Arab writer al-Khwarizmi into Latin in the 12th century.

In the modern world metrology plays a vital role to protect the consumer and to ensure that manufactured products conform to prescribed dimensional and quality standards. In many countries the implementation of a metrological system is carried out under three distinct headings or services, namely, scientific, industrial, and legal metrology.

Industrial metrology is mainly concerned with the measurement of length, mass, volume, temperature, pressure, voltage, current, and a host of other physical and chemical parameters needed for industrial production and process control. The maintenance of accurate dimensional and other physical parameters of manufactured products to ensure that they conform to prescribed quality standards is another important function carried out by industrial metrology services.

Industrial metrology thus plays a vital role in the economic and industrial development of a country. It is often said that the level of industrial development of a country can be judged by the status of its metrology.

1.2 Fundamental Concepts

The most important fundamental concepts of measurement, with the exception of uncertainty of measurement, are explained in this section.

1.2.1 Measurand and Influence Quantity

The specific quantity determined in a measurement process is known as the measurand. A complete statement of the measurand also requires specification of other quantities, for example, temperature, pressure, humidity, and the like, that may affect the value of the measurand. These quantities are known as influence quantities.

For example, in an experiment performed to determine the density of a sample of water at a specific temperature (say 20°C), the measurand is the "density of water at 20°C." In this instance the only influence quantity specified is the temperature, namely, 20°C.

1.2.2 True Value (of a Quantity)

The true value of a quantity is defined as the value consistent with its definition. This implies that there are no measurement errors in the realization of the definition. For example, the density of a substance is defined as mass per unit volume. If the mass and volume of the substance could be determined without making measurement errors, then the true value of the density could be obtained. Unfortunately, in practice, neither of these quantities can be determined without experimental error. Therefore the true value of a quantity cannot be determined experimentally.

1.2.3 Nominal Value and Conventional True Value

The nominal value is the approximate or rounded-off value of a material measure or characteristic of a measuring instrument. For example, when we refer to a resistor as 100 Ω or to a weight as 1 kg, we are using their nominal values. Their exact values, known as conventional true values, may be 99.98 Ω and 1.0001 kg, respectively. The conventional true value is obtained by comparing the test item with a higher-level measurement standard under defined conditions. If we take the example of the 1-kg weight, the conventional true value is the mass value of the weight as defined in OIML (International Organization for Legal Metrology) International Recommendation R 33, that is, the apparent mass value of the weight, determined using weights of density 8000 kg/m³ in air of density 1.2 kg/m³ at 20°C, with a specified uncertainty figure. The conventional value of a weight is usually expressed in the form 1.001 g ± 0.001 g.

1.2.4 Error and Relative Error of Measurement

The difference between the result of a measurement and its true value is known as the error of the measurement. Since a true value cannot be determined, the error, as defined, cannot be determined either. A conventional true value is therefore used in practice to determine an error.

The relative error is obtained by dividing the error by the average measured value. When it is necessary to distinguish an error from a relative error, the former is sometimes called the absolute error of measurement. As the error could be positive or negative, another term, the absolute value of the error, is used to express the magnitude (or modulus) of the error.

As an example, suppose we want to determine the error of a digital multimeter at a nominal voltage level of 10V DC. The multimeter is connected to a DC voltage standard supplying a voltage of 10V DC and the reading is noted down. The procedure is repeated several times, say five times. The mean of the five readings is calculated and found to be 10.2V.

The error is then calculated as 10.2 – 10.0 = +0.2V. The relative error is obtained by dividing 0.2V by 10.2V, giving 0.02. The relative error as a percentage is obtained by multiplying the relative error (0.02) by 100; that is, the relative error is 2% of the reading.

In this example a conventional true value is used, namely, the voltage of 10V DC supplied by the voltage standard, to determine the error of the instrument.
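This calculation is simple enough to script. The following Python sketch reproduces it; the five readings are illustrative values chosen to give the 10.2V mean quoted above.

```python
# Error and relative error of a digital multimeter at 10 V DC,
# following the worked example above. Readings are illustrative.
readings = [10.1, 10.2, 10.2, 10.2, 10.3]  # five readings, volts
true_value = 10.0  # conventional true value supplied by the DC voltage standard

mean_reading = sum(readings) / len(readings)   # 10.2 V
error = mean_reading - true_value              # +0.2 V (absolute error)
relative_error = error / mean_reading          # ~0.02
percent_error = relative_error * 100           # ~2% of reading

print(f"error = {error:+.1f} V, relative error = {percent_error:.0f}% of reading")
```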

1.2.5 Random Error

The random error of measurement arises from unpredictable variations of one or more influence quantities. The effects of such variations are known as random effects. For example, in determining the length of a bar or gauge block, the variation in temperature of the environment gives rise to an error in the measured value. This error is due to a random effect, namely, the unpredictable variation in the environmental temperature. It is not possible to compensate for random errors. However, the uncertainties arising from random effects can be quantified by repeating the experiment a number of times.
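Since random effects can only be quantified statistically, a short sketch may help. Assuming the experiment is repeated several times, the scatter of the results estimates the uncertainty due to random effects; the gauge-block readings below are invented for illustration.

```python
import statistics

# Repeated measurements of a gauge block length (mm); values are illustrative.
lengths = [25.0012, 25.0008, 25.0011, 25.0014, 25.0009, 25.0012]

mean_length = statistics.mean(lengths)
s = statistics.stdev(lengths)       # sample standard deviation: scatter of readings
u = s / len(lengths) ** 0.5         # standard uncertainty of the mean

print(f"mean = {mean_length:.4f} mm, s = {s * 1000:.2f} µm, u(mean) = {u * 1000:.2f} µm")
```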

1.2.6 Systematic Error

An error that occurs due to a more or less constant effect is a systematic error. If the zero of a measuring instrument has been shifted by a constant amount this would give rise to a systematic error. In measuring the voltage across a resistance using a voltmeter, the finite impedance of the voltmeter often causes a systematic error. A correction can be computed if the impedance of the voltmeter and the value of the resistance are known.
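To make the voltmeter example concrete, here is a hedged Python sketch. It assumes the simplest possible model, a Thevenin source of resistance r_s read by a voltmeter of input impedance r_m; this is an illustration, not the book's own derivation.

```python
# Correcting the systematic loading error of a voltmeter.
# Assumed model (for illustration): a Thevenin source with open-circuit
# voltage V and source resistance r_s, read by a voltmeter whose finite
# input impedance r_m forms a divider:
#     V_meas = V * r_m / (r_s + r_m)
# If r_s and r_m are known, the loading error can be corrected exactly.

def correct_loading(v_meas: float, r_s: float, r_m: float) -> float:
    """Return the corrected (open-circuit) voltage."""
    return v_meas * (r_s + r_m) / r_m

v_meas = 9.90   # volts indicated by the voltmeter
r_s = 10e3      # ohms, source resistance of the circuit under test
r_m = 1e6       # ohms, voltmeter input impedance

print(f"corrected voltage = {correct_loading(v_meas, r_s, r_m):.3f} V")  # ~9.999 V
```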

Often, measuring instruments and systems are adjusted or calibrated using measurement standards and reference materials to eliminate systematic effects. However, the uncertainties associated with the standard or the reference material are incorporated in the uncertainty of the calibration.

1.2.7 Accuracy and Precision

The terms accuracy and precision are often misunderstood or confused. The accuracy of a measurement is the degree of its closeness to the true value. The precision of a measurement is the degree of scatter of the measurement result, when the measurement is repeated a number of times under specified conditions.

In Figure 1.1 the results obtained from a measurement experiment using a measuring instrument are plotted as a frequency distribution. The vertical axis represents the frequency of the measurement result and the horizontal axis represents the values of the results (X). The central vertical line represents the mean value of all the measurement results. The vertical line marked T represents the true value of the measurand. The difference between the mean value and the T line is the accuracy of the measurement.

The standard deviation (marked σx) of all the measurement results about the mean value is a quantitative measure for the precision of the measurement.

Unfortunately the accuracy defined in this manner cannot be determined, as the true value (T) of a measurement cannot be obtained due to errors prevalent in the measurement process. The only way to obtain an estimate of accuracy is to use a higher-level measurement standard in place of the measuring instrument to perform the measurement and use the resulting mean value as the true value. This is what is usually done in practice. The line (S) represents the mean value obtained using a higher-level measurement standard.

Thus, accuracy figures quoted by instrument manufacturers in their technical literature are the difference between the measurement result displayed by the instrument and the value obtained when a higher-level measurement standard is used to perform the measurement. In the case of simple instruments, the accuracy indicated is usually the calibration accuracy; for example, in the calibration of a micrometer, a series of gauge blocks is used. If the values displayed by the micrometer over its usable range fall within ±0.01 mm of the values assigned to the gauge blocks, then the accuracy of the micrometer is reported as ±0.01 mm.

It can be seen that the definition of error given previously (Section 1.2.4) is very similar to the definition of accuracy. In fact error and accuracy are interchangeable terms. Some prefer to use the term error and others prefer accuracy. Generally instrument manufacturers prefer the term accuracy, as they do not wish to highlight the fact that their instruments have errors.

Relative accuracy and percent of relative accuracy are also concepts in use. The definitions of these are similar to those of relative error and percent of relative error; that is, relative accuracy is obtained by dividing accuracy by the average measured result, and percent of relative accuracy is computed by multiplying relative accuracy by 100.
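All of the quantities discussed in this section follow directly from a set of repeated readings and a reference value. A minimal sketch, assuming the reference value comes from a higher-level standard as described above (all numbers illustrative):

```python
import statistics

# Accuracy, precision, and relative accuracy from repeated readings.
readings = [10.18, 10.22, 10.21, 10.19, 10.20, 10.20]  # illustrative, volts
reference = 10.00   # mean value from a higher-level standard (the S line)

mean_x = statistics.mean(readings)        # the central vertical line in Figure 1.1
accuracy = mean_x - reference             # closeness of the mean to the reference
precision = statistics.stdev(readings)    # sigma_x, scatter about the mean
relative_accuracy = accuracy / mean_x
percent_relative_accuracy = relative_accuracy * 100

print(f"accuracy = {accuracy:+.3f} V, precision (sigma) = {precision:.3f} V, "
      f"relative accuracy = {percent_relative_accuracy:.1f}%")
```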

1.2.8 Calibration

Calibration is the process of comparing the indication of an instrument or the value of a material measure (e.g., value of a weight or graduations of a length-measuring ruler) against values indicated by a measurement standard under specified conditions. In the process of calibration of an instrument or material measure, the test item is either adjusted or correction factors are determined.

Not all instruments or material measures are adjustable. If the instrument cannot be adjusted, correction factors can be determined instead, although this method is not always satisfactory for a number of reasons, the primary one being nonlinearity in the response of most instruments.
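One common way to apply correction factors despite a nonlinear response is to tabulate corrections at several calibration points and interpolate between them. The sketch below assumes piecewise-linear interpolation over an invented calibration table; real calibration procedures and tables will differ.

```python
import bisect

# Applying tabulated calibration corrections by linear interpolation.
# Calibration table (illustrative): indicated value -> correction to add.
points      = [0.0,  2.0,  4.0,  6.0,  8.0,  10.0]    # volts indicated
corrections = [0.00, 0.01, 0.03, 0.02, -0.01, -0.02]  # volts

def apply_correction(reading: float) -> float:
    """Correct a reading that lies within the calibrated range."""
    i = bisect.bisect_right(points, reading)
    i = min(max(i, 1), len(points) - 1)                # clamp to a valid segment
    x0, x1 = points[i - 1], points[i]
    c0, c1 = corrections[i - 1], corrections[i]
    c = c0 + (c1 - c0) * (reading - x0) / (x1 - x0)    # interpolated correction
    return reading + c

print(f"{apply_correction(5.0):.3f} V")  # 5.025 V
```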

(Continues...)



Excerpted from Test and Measurement by Jon Wilson, Walt Kester, Stuart Ball, G. M. S. de Silva, Dogan Ibrahim, Kevin James, Tim Williams, Michael Laughton, Douglas Warne, Chris Nadovich, Alex Porter, Ed Ramsden, Tony Fischer-Cripps, and Steve Scheiber. Copyright © 2009 by Elsevier Inc. Excerpted by permission of Newnes. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
