Principles of Digital Audio, Sixth Edition

by Ken C. Pohlmann

eBook

$51.99 (original price $69.30; save 25%)

Overview

The definitive guide to digital audio engineering, fully updated

Gain a thorough understanding of digital audio tools, techniques, and practices from this completely revised and expanded resource. Written by industry pioneer and Audio Engineering Society Fellow Ken C. Pohlmann, Principles of Digital Audio, Sixth Edition, describes the technologies behind today's audio equipment in a clear, practical style. Covering everything from basic theory to the latest technological advances, the book explains how to apply digital conversion, processing, compression, storage, streaming, and transmission concepts. New chapters on Blu-ray, speech coding, and low bit-rate coding are also included in this bestselling guide.

  • Learn about discrete time sampling, quantization, and signal processing
  • Examine details of CD, DVD, and Blu-ray players and discs
  • Encode and decode AAC, MP3, MP4, Dolby Digital, and other files
  • Prepare content for distribution via the Internet and digital radio and television
  • Learn the critical differences between music coding and speech coding
  • Design low bit-rate codecs to optimize memory capacity while preserving fidelity
  • Develop methodologies to evaluate the sound quality of music and speech files
  • Study audio transmission via HDMI, VoIP, Wi-Fi, and Bluetooth
  • Handle digital rights management, fingerprinting, and watermarking
  • Understand how one-bit conversion and high-order noise shaping work

Product Details

ISBN-13: 9780071663472
Publisher: McGraw Hill LLC
Publication date: 10/06/2010
Series: Digital Video/Audio
Sold by: Barnes & Noble
Format: eBook
Pages: 816
File size: 23 MB

About the Author

Ken Pohlmann, a professor emeritus and former director of the Music Engineering program at the University of Miami, is a consultant for audio manufacturers. He is the author of several audio engineering books and is a monthly contributor to Sound & Vision magazine. Pohlmann chaired the Audio Engineering Society's International Conference on Digital Audio, and co-chaired the International Conference on Internet Audio.

Read an Excerpt

Chapter 1: Sound and Numbers

Digital audio is a highly sophisticated technology. It pushes the envelope in many diverse engineering and manufacturing disciplines. Although the underlying concepts have been well understood since the 1920s, commercialization of digital audio did not begin until the 1970s, simply because theory had to wait 50 years for technology to catch up. The complexity of digital audio is all the more reason to begin the discussion with the basics. In particular, this chapter begins our exploration of ways to numerically encode the information contained in an audio event.

Physics of Sound

It would be a mistake for a study of digital audio to ignore the acoustic phenomena for which the technology has been designed. Music is an acoustic event. Whether it radiates from musical instruments or is directly created by electrical signals, all music ultimately finds its way into the air, where it becomes a matter of sound and hearing. It is therefore appropriate to briefly review the nature of sound.

Acoustics is the study of sound and is concerned with the generation, transmission, and reception of sound waves. The circumstances for those three phenomena are created when energy causes a disturbance in a medium. For example, when a kettledrum is struck, its drumhead disturbs the surrounding air (the medium). The outcome of that disturbance is the sound of a kettledrum. The mechanism seems fairly simple: the drumhead is activated and it vibrates back and forth. When the drumhead pushes forward, air molecules in front of it are compressed. When it pulls back, that area is rarefied. The disturbance consists of regions of pressure above and below the equilibrium atmospheric pressure. Nodes define areas of minimum displacement, and antinodes are areas of maximum (positive or negative) displacement. The displacement is quite small; in normal conversation, particle displacement is about one-millionth of an inch. A crowd's acoustic outpouring might cause displacement of one-thousandth of an inch.

Sound is propagated by air molecules through successive displacements that correspond to the original disturbance. In other words, air molecules colliding one against the next propagate the energy disturbance away from the source. Sound transmission thus consists of local disturbances propagating from one region to the next. The local displacement of air molecules occurs in the direction in which the disturbance is traveling; thus sound undergoes a longitudinal form of transmission. A receptor (like a microphone diaphragm) placed in the sound field will similarly move according to the pressure acting on it, completing the chain of events. Incidentally, the denser the medium, the more easily sound propagates. For example, sound travels more easily in water than in air.

We can access an acoustical system with transducers, devices able to change energy from one form to another. These serve as sound generators and receivers. For example, a kettledrum converts the mechanical energy contributed by the mallet into acoustical energy. A microphone responds to the acoustical energy by producing electrical energy. A loudspeaker reverses that process, converting electrical energy back into acoustical energy.

The pressure changes of sound vibrations can be produced either periodically or aperiodically. A violin moves the air back and forth periodically at a fixed rate. (In practice, things like vibrato make it a quasi-periodic vibration.) However, a cymbal crash has no fixed period; it is aperiodic. One sequence of a periodic vibration, from pressure rarefaction to compression and back again, determines one cycle. The number of vibration cycles that pass a given point each second is the frequency of the sound wave, measured in Hz (hertz). A violin playing concert A, for example, generates a waveform that repeats about 440 times per second; its frequency is 440 Hz. Alternatively, the reciprocal of frequency, the time it takes for one cycle to occur, is called the period. Frequencies in nature can range from very low, such as changes in barometric pressure around 10⁻⁵ Hz, to very high, such as cosmic rays at 10²² Hz. Sound is loosely described as that narrow, low-frequency band from 20 Hz to 20 kHz, roughly the range of human hearing. Audio devices are designed to respond to frequencies in that general range.
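
Because frequency and period are simple reciprocals, the relationship is easy to demonstrate. Here is a minimal Python sketch (an illustration, not code from the book):

    # Frequency f (in Hz) and period T (in seconds) are reciprocals: T = 1/f.
    def period_of(freq_hz):
        return 1.0 / freq_hz

    def frequency_of(period_s):
        return 1.0 / period_s

    print(period_of(440.0))    # concert A: about 0.00227 s (2.27 ms)
    print(frequency_of(0.05))  # a cycle lasting 50 ms repeats at 20 Hz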

Wavelength is the distance sound travels through one complete cycle of pressure change; it is the physical measurement of the length of one cycle. Because the velocity of sound is relatively constant, about 1130 ft/s (feet per second), we can calculate the wavelength of a sound wave by dividing the velocity of sound by its frequency. Quick calculations demonstrate the enormous range of wavelengths involved. For example, a 20-kHz wavelength is about 0.7 inch long, and a 20-Hz wavelength is about 56 feet long. No transducers (including our ears) are able to linearly receive or produce that range of wavelengths. Their frequency response is not flat, and the frequency range is limited. The range between the lowest and highest frequencies a system can accommodate defines the system's bandwidth. If two waveforms are coincident in time, with their positive and negative variations together, they are in phase. When the variations exactly oppose one another, the waveforms are out of phase. Any relative time difference between waveforms is called a phase shift. If two waveforms are relatively phase shifted and combined, a new waveform results from constructive and destructive interference.
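
The wavelength arithmetic described above is a one-line division. A small Python sketch, assuming the text's nominal 1130 ft/s for the speed of sound:

    SPEED_OF_SOUND_FT_S = 1130.0  # nominal value used in the text

    def wavelength_ft(freq_hz):
        # wavelength = velocity / frequency
        return SPEED_OF_SOUND_FT_S / freq_hz

    print(wavelength_ft(20.0))           # about 56 feet
    print(wavelength_ft(20000.0) * 12)   # about 0.68 inch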

Sound will undergo diffraction, in which it bends through openings or around obstacles. Diffraction is relative to wavelength; longer wavelengths diffract more readily than shorter ones. Thus, high frequencies are considered to be more directional in nature. Try this experiment: hold a magazine in front of a loudspeaker. High frequencies will be blocked by the barrier, while lower frequencies (longer wavelengths) will go around it.

Sound can also refract, bending because its velocity changes. For example, sound can refract because of temperature changes, bending away from warmer air and toward cooler air. Specifically, the velocity of sound in air increases by about 1.1 ft/s with each increase of 1°F. Another effect of temperature on the velocity of sound is well known to every wind player: because the speed of sound changes, the instrument must be warmed up before it plays in tune (the difference is about half a semitone).
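
A quick sketch of that temperature rule of thumb, assuming 1130 ft/s as the reference speed near room temperature (the 70°F reference point is an assumption for illustration, not stated in the text):

    REF_SPEED_FT_S = 1130.0  # nominal speed of sound used in the text
    REF_TEMP_F = 70.0        # assumed reference temperature (illustrative)

    def speed_of_sound_ft_s(temp_f):
        # Rule of thumb from the text: +1.1 ft/s per +1 degree F
        return REF_SPEED_FT_S + 1.1 * (temp_f - REF_TEMP_F)

    print(speed_of_sound_ft_s(32.0))  # about 1088 ft/s at freezing
    print(speed_of_sound_ft_s(90.0))  # about 1152 ft/s on a hot day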

The speed of sound in air is relatively slow, about 770 mph (1130 ft/s). The time it takes for a sound to travel from a source to a receptor can be calculated by dividing the distance by the speed of sound. For example, it would take a sound about one-sixth of a second to travel 200 feet. Sound is absorbed as it travels; the mere passage of sound through air acts to attenuate the sound energy. High frequencies are more prominently attenuated in air; a lightning strike close by is heard as a sharp clap of sound, while one far away is heard as a low rumble because of high-frequency attenuation. Humidity affects air attenuation; specifically, wet air absorbs sound better than dry air. Interestingly, moist air is less dense than dry air (water molecules weigh less than the nitrogen and oxygen molecules they displace), causing the speed of sound to increase.
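
The travel-time calculation is again simple division. A minimal sketch, using the same 1130 ft/s figure:

    SPEED_OF_SOUND_FT_S = 1130.0

    def travel_time_s(distance_ft):
        # time = distance / velocity
        return distance_ft / SPEED_OF_SOUND_FT_S

    print(travel_time_s(200.0))  # about 0.18 s, roughly one-sixth of a second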

Sound Pressure Level

Amplitude describes the sound pressure displacement above and below the equilibrium atmospheric level. In absolute terms, sound pressure is very small; if atmospheric pressure is 15 psi (pounds per square inch), a loud sound might cause a deviation from 14.999 to 15.001 psi. However, the range from the softest to the loudest sound, which determines the dynamic range, is quite large. In fact, human ears (and hence audio systems) have a dynamic range spanning a factor of millions. Because of the large range, a logarithmic ratio is used to measure sound pressure levels. The decibel (dB) uses base 10 logarithmic units to achieve this. A base 10 logarithm is the power to which 10 must be raised to equal the value. For example, an unwieldy number such as 100,000,000 yields a tidy logarithm of 8 because 10⁸ = 100,000,000...
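
The logarithmic compression the excerpt describes is easy to demonstrate. The sketch below uses the standard sound pressure level formula, SPL = 20 log10(p/p0), with reference pressure p0 = 20 µPa; this formula is standard acoustics practice rather than something spelled out in the excerpt above:

    import math

    P_REF_PA = 20e-6  # standard reference pressure: 20 micropascals

    def spl_db(pressure_pa):
        # Sound pressure level in decibels: 20 * log10(p / p_ref)
        return 20.0 * math.log10(pressure_pa / P_REF_PA)

    print(spl_db(20e-6))  # 0 dB SPL, the nominal threshold of hearing
    print(spl_db(0.02))   # 60 dB SPL, a pressure ratio of 1000
    print(spl_db(20.0))   # 120 dB SPL, a pressure ratio of 1,000,000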

Table of Contents

1. Sound and Numbers
2. Fundamentals of Digital Audio
3. Digital Audio Recording
4. Digital Audio Reproduction
5. Error Correction
6. Optical Disc Storage
7. Compact Disc
8. DVD
9. Blu-ray
10. Low Bit-Rate Coding: Theory and Evaluation
11. Low Bit-Rate Coding: Codec Design
12. Speech Coding for Transmission
13. Audio Interconnection
14. Personal Computer Audio
15. Telecommunications and Internet Audio
16. Digital Radio and Television Broadcasting
17. Digital Signal Processing
18. Sigma-Delta Conversion and Noise Shaping
Appendix: The Sampling Theorem
Bibliography
Index