Clyde F. Coombs, Jr. (Los Altos, CA) recently retired from Hewlett-Packard. He is the editor of all five editions of the Printed Circuits Handbook, the first of which was published in 1967. He is also the editor of two editions of the Electronic Instrument Handbook and of the Communications Network Test and Measurement Handbook.
None of these was sufficiently accurate for navigational needs. Latitude (angular distance from the equator) could be measured from the angle of elevation of the North Star above the horizon. Longitude (the east-west angular distance from the Greenwich, England, zero meridian) was determinable only if the navigator knew the time in Greenwich. The sun travels across the sky at a predictable rate of about one degree every four minutes. Thus, for every four minutes that the clock showing Greenwich time differs from the time of local solar noon, the observer is one degree of longitude away from Greenwich. In 1713, the British government offered an award of £20,000 for a clock that could determine longitude to within 1/2 degree. The prize was not claimed until 1761.
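The four-minutes-per-degree relationship can be put in numerical form. The short function below is an illustrative sketch (the function name and sign convention are mine, not the source's):

```python
def longitude_degrees(greenwich_hours_at_local_noon):
    """Longitude from the Greenwich time observed at local solar noon.

    The earth turns 360 degrees in 24 hours: 15 degrees per hour,
    i.e. 1 degree per 4 minutes. A positive result means west of
    Greenwich (local noon occurs after Greenwich noon).
    """
    return 15.0 * (greenwich_hours_at_local_noon - 12.0)
```

For example, a navigator whose chronometer reads 17:00 Greenwich time at local noon is 75 degrees west of Greenwich.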
Today, further precision dictated by space and military requirements and by communication synchronization has required time standards that are even more accurate. Depending on the application, the necessary time accuracy might be picoseconds per day.
A highly precise time standard consists of an ultra-stable oscillator and a form of counter or integrator that tracks the number of elapsed cycles of the oscillator. This is because the indicated time of the time standard is the integral with respect to time, properly scaled, of the frequency of the oscillator. The counter is usually arranged so that it reads out in conventional units of time. A simple example of a clock is the pendulum clock, the very best of which are only moderately good time standards. The swinging pendulum is the oscillator, and the gear train and hands are the counter or integrator. Thus the two requirements for a good time standard are that its oscillator has a known, constant frequency and that the readout is set at some time to agree with the accepted time reference at that time.
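The oscillator-plus-counter structure can be sketched in a few lines of code. This is a hypothetical illustration (class and method names are mine): the counter accumulates oscillator cycles, and the indicated time is the scaled sum of those cycles plus the offset set when the clock was synchronized to a reference.

```python
class CounterClock:
    """Minimal model of a time standard: oscillator + cycle counter."""

    def __init__(self, nominal_hz):
        self.nominal_hz = nominal_hz  # assumed-known, constant oscillator frequency
        self.cycles = 0               # integrator state: elapsed oscillator cycles
        self.offset_s = 0.0           # readout set to agree with a reference at sync

    def tick(self, n=1):
        """Count n elapsed oscillator cycles."""
        self.cycles += n

    def time_s(self):
        """Indicated time: scaled integral (sum) of oscillator cycles."""
        return self.offset_s + self.cycles / self.nominal_hz
```

For a 32 768 Hz watch crystal, 98 304 counted cycles read out as exactly 3 s; any error in the assumed frequency scales directly into the indicated time.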
Accuracy and precision
The accuracy of a clock is the degree of conformity of its time with respect to a standard reference. The precision of a clock is the degree of mutual agreement among a series of individual measurements of essentially identical clocks. For example, a group of clocks might have an average time in perfect agreement with a reference standard, but each member of the group might differ widely. This group would be highly accurate as a group, but very imprecise as individual units. Similarly, a group of clocks might all agree with each other very closely but, as a group, be far from the reference standard. Each clock now exhibits high precision, but poor accuracy.
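The two group examples can be made concrete with a small numerical sketch. The offsets below are invented illustration data; group accuracy is taken as the bias of the group mean against the reference, and group precision as the spread among the clocks:

```python
import statistics

# Hypothetical clock offsets (seconds) relative to a reference standard.
accurate_but_imprecise = [-4.0, 5.0, -1.0, 3.0, -3.0]   # mean = 0, wide spread
precise_but_inaccurate = [10.1, 10.0, 9.9, 10.0, 10.0]  # tight spread, large bias

def group_accuracy(offsets):
    """Bias of the group mean with respect to the reference (0 = perfect)."""
    return statistics.mean(offsets)

def group_precision(offsets):
    """Mutual agreement among the clocks (smaller = more precise)."""
    return statistics.stdev(offsets)
```

The first group averages to zero offset (accurate as a group) but scatters by several seconds; the second agrees internally to 0.1 s but sits 10 s from the reference.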
For the purposes of this chapter, precision implies the ability to achieve an accuracy of 1 part in 10^11 with respect to a defined standard, with an uncertainty or instability in the oscillator of less than 1 part in 10^10.
A further measure of clock performance is stability, or more correctly, instability: the spontaneous and/or environmentally caused frequency change within a given time interval, or within a given range of an environmental variable. All clocks are subject to random fluctuations in oscillator frequency, with consequent effects on the indicated time. Most also exhibit perturbations caused by aging and environmental effects. Generally, one distinguishes between systematic effects, such as frequency drift, and stochastic frequency fluctuations. Radiation, pressure, temperature, humidity, etc. can cause systematic instabilities. Random or stochastic instabilities are typically characterized in either the time domain (for example, Allan variance) or the frequency domain (phase noise). Measurements of stochastic instabilities are dependent on the measurement system bandwidth and on the sample time or integration time.
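The time-domain characterization mentioned above, the Allan variance, is the two-sample variance of averaged fractional-frequency readings. The sketch below is a minimal non-overlapping version, assuming evenly spaced samples; the function name and interface are illustrative, not from the source:

```python
import numpy as np

def allan_variance(y, m=1):
    """Non-overlapping Allan variance of fractional-frequency samples y.

    Samples are averaged in blocks of m (averaging time tau = m * tau0),
    and the variance is half the mean squared difference of adjacent
    block averages: sigma_y^2(tau) = 0.5 * <(ybar[k+1] - ybar[k])^2>.
    """
    y = np.asarray(y, dtype=float)
    n = len(y) // m
    ybar = y[: n * m].reshape(n, m).mean(axis=1)  # block averages
    return 0.5 * np.mean(np.diff(ybar) ** 2)      # two-sample variance
```

A constant frequency offset gives zero Allan variance (it affects accuracy, not stability), while a linear frequency drift contributes a fixed step between adjacent averages.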
Customarily, the unit of time in the MKS system is the second. Universal time (UT) is based on the rotation of the earth about its axis. The time basis was chosen so that, on the average, local noon would occur when the sun was on the local meridian. This assumed that the earth's rotation rate was constant and would, therefore, produce a uniform time scale. It is now known that the earth's rotation is subject to periodic, secular, and irregular variations; time based on the rotation of the earth is subject to these same variations.
Because UT was not a satisfactory time scale, a variant of UT was defined. The UT1 time scale corrects for the position of the earth's pole and thus measures the angular position of the earth; it is useful for navigation. The seasonal and other variations remain.
To compensate for these variations, the UT2 time scale, the second variant of UT, was defined by international agreement. Corrections were applied for the seasonal variations. Annually, the non-periodic variations are examined, and, if necessary, a correction is made in the length of the second on January 31 of each year. The goal was to maintain the UT2 time scale to within 100 ms of the actual universal time. As the length of a second changed yearly, clocks all over the world had to be adjusted each year to run at a different rate.
To solve this problem the "leap second" was invented in 1972. UTC (Universal Time Coordinated) was defined to have seconds based on atomic time, described in the following section, but with the rule that UTC is always to be within 0.9 s of UT1. To keep this relationship, UTC is subject to stepwise corrections (leap seconds), which either add a second to or subtract a second from the last minute of the year in December or in the last minute of June. The minute in which the adjustment is made is thus either 59 or 61 s long....
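The leap-second rule can be sketched schematically. The decision logic and 0.6 s trigger margin below are an illustrative simplification (in practice, leap seconds are announced in advance from astronomical observations), not the operational procedure:

```python
def leap_second(utc_minus_ut1_s, margin=0.6):
    """Schematic leap-second decision keeping |UTC - UT1| under 0.9 s.

    Returns +1 to insert a second (a 61 s minute; UTC - UT1 drops by 1 s),
    -1 to delete a second (a 59 s minute; UTC - UT1 rises by 1 s),
    or 0 for no adjustment. The margin value is an assumed illustration.
    """
    if utc_minus_ut1_s >= margin:
        return +1
    if utc_minus_ut1_s <= -margin:
        return -1
    return 0
```

Because the earth's rotation has historically run slow against atomic time, UT1 falls behind and UTC − UT1 grows positive, so all leap seconds applied so far have been insertions (61 s minutes).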
Pt. 1  Introduction to Electronic Instruments
  Ch. 1  Measurements and Instruments  1.1
  Ch. 2  Calibration, Traceability, and Standards  2.1
  Ch. 3  Basic Electronic Standards  3.1
Pt. 2  Basics of Electronic Instrumentation
  Ch. 4  Introduction to Electronic Instruments  4.1
  Ch. 6  Analog-to-Digital Converters  6.1
  Ch. 7  Signal Sources  7.1
  Ch. 8  Microwave Signal Sources  8.1
  Ch. 9  Signal Processing  9.1
  Ch. 10  Microprocessors in Electronic Instruments  10.1
  Ch. 11  Power Supplies  11.1
  Ch. 12  Instrument-User Interfaces  12.1
Pt. 3  Current and Voltage Measurement Instruments
  Ch. 13  Voltage, Current, and Resistance Measuring Instruments  13.1
  Ch. 15  Power Measurements  15.1
Pt. 4  Signal and Waveform Generation Instruments
  Ch. 16  Oscillators, Function Generators, Frequency Synthesizers, and Waveform Generators  16.1
  Ch. 17  Pulse Generators  17.1
  Ch. 18  Microwave Signal Generators  18.1
Pt. 5  Frequency and Time Measurement Instruments
  Ch. 19  Electronic Counters and Frequency and Time Interval Analyzers  19.1
  Ch. 20  Precision Time and Frequency Sources  20.1
  Ch. 21  Spectrum Analyzers  21.1
  Ch. 22  Phase Noise Instruments  22.1
Pt. 6  Lightwave Test Instruments
  Ch. 23  Lightwave Signal Sources  23.1
  Ch. 24  Lightwave Signal Analysis  24.1
  Ch. 25  Lightwave Component Analyzers  25.1
  Ch. 26  Optical Time Domain Reflectometers  26.1
Pt. 7  Circuit Element Measurement Instruments
  Ch. 27  Impedance Measuring Instruments  27.1
  Ch. 28  Semiconductor Test Instruments  28.1
  Ch. 29  Network Analyzers  29.1
Pt. 8  Digital Domain Instruments
  Ch. 30  Data/Word Generators  30.1
  Ch. 31  Logic Analyzers  31.1
  Ch. 32  Protocol Analyzers  32.1
  Ch. 33  Bit Error Rate Measuring Instruments: Pattern Generators and Error Detectors  33.1
Pt. 9  Waveguide Passive Devices
  Ch. 34  Microwave Passive Devices  34.1
Pt. 10  Using Electronic Instruments
  Ch. 35  Impedance Considerations  35.1
  Ch. 36  Electrical Interference  36.1
  Ch. 37  Electrical Grounding  37.1
  Ch. 38  Distributed Parameters and Component Considerations  38.1
  Ch. 39  Digital Interface Issues  39.1
Pt. 11  Instruments in Systems
  Ch. 40  Introduction to Instrument Systems  40.1
  Ch. 41  Switches in Automated Test Systems  41.1
  Ch. 42  Instrument System Elements  42.1
  Ch. 43  Computer-Controlled Instrument Systems  43.1
Pt. 12  Software in Instruments and Virtual Instruments
  Ch. 44  Virtual Instruments and the Role of Software  44.1
Acronyms and Abbreviations  A.1