Making decisions about how and when to apply sound processing effects and recording techniques can make or break your song mix. The decisions you make come down to your listening skills--what you hear and how you perceive it. Your ability to properly discern sound, identify a problem, and act accordingly, especially when the decision needs to be made quickly, makes all the difference to the success of the final track.
Audio Production and Critical Listening develops your critical and expert listening skills, enabling you to listen to audio like an award-winning engineer. The interactive "ear training" software practice modules give you experience identifying various types of signal processes and manipulation--EQ, dynamics, reverb, distortion, delay, and more. The software sits alongside the clear and detailed explanations in the book, offering a complete learning package that will help you train your ears to listen and really "hear" your recordings.
As Assistant Professor of audio engineering and performing arts technology at the University of Michigan School of Music, Theatre & Dance since 2003, Jason Corey teaches courses in sound recording, technical ear training, and musical acoustics. He holds Ph.D. and M.Mus. degrees in sound recording from McGill University in Montreal and has diverse experience as a recording engineer. His research is concerned with multichannel audio recording techniques, technical ear training methods, and the development of audio processing algorithms for multichannel audio.
He is a recipient of the Audio Engineering Society Educational Foundation Scholarship, as well as the Paul D. Fleck Fellowship at The Banff Centre in Banff, Canada, where he has worked as an Audio Research Associate.
He has presented his research at conferences in Europe, Canada, and the United States.
Serving as Chair of the AES Education Committee since fall 2004, he is actively involved in the organization of student and education events at AES conventions, as well as the AES student website.
He is the founder of the University of Michigan AES Student Section, serving as its faculty advisor since 2003.
In addition to being an AES member, he is also a member of the Acoustical Society of America and the International Computer Music Association.
We are exposed to sound throughout each moment of every day regardless of whether we pay attention to it or not. The sounds we hear give insight into not only their sources but also the nature of our physical environment—surrounding objects, walls, and structures. Whether we find ourselves in a highly reverberant environment or an anechoic chamber, the quality of reflected sound or the lack of reflections informs us about the physical properties of our location. Our surrounding environment becomes audible, even if it is not creating sound itself, by the way in which it affects sound, through patterns of reflection and absorption. Just as a light source illuminates objects around it, sound sources allow us to hear the general shape and size of our physical environment. Because we are primarily oriented toward visual stimuli, it may take some dedicated and consistent effort to focus our awareness in the aural domain. As anyone who works in the field of audio engineering knows, the effort it takes to focus our aural awareness is well worth it for the satisfaction of acquiring critical listening skills. Although simple in concept, the practice of focusing attention on what is heard in a structured and organized way is challenging to accomplish in a consistent manner.
There are many situations outside of audio production in which listening skills can be developed. For instance, walking by a construction site, impulsive sounds such as hammering may be heard. Echoes—the result of those initial impulses reflecting from nearby building exteriors—can also be heard a short time later. The timing, location, and amplitude of echoes provide us with information about nearby buildings, including approximate distances to them.
Listening in a large concert hall, we notice that sound continues to linger on and slowly fade out after a source has stopped sounding. The gradual decaying of sound in a large acoustic space is referred to as reverberation. Sound in a concert hall can be enveloping because it seems to be coming from all directions, and sound produced on stage combines with reverberant sound arriving from all directions.
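The decay described above is commonly quantified as reverberation time (RT60), the time it takes sound energy to fall by 60 dB after the source stops. As a rough, hypothetical illustration (a generic Python/NumPy sketch with made-up values, not part of the book's software), the code below builds a toy impulse response shaped like a reverberant tail and estimates its RT60 using Schroeder backward integration:

```python
import numpy as np

fs = 48000          # sample rate (Hz)
rt60 = 2.0          # target reverberation time: level falls 60 dB in 2 s

# Toy impulse response: exponentially decaying noise, a crude model
# of a large hall's reverberant tail (hypothetical values)
t = np.arange(3 * fs) / fs
rng = np.random.default_rng(1)
ir = rng.standard_normal(t.size) * 10 ** (-3 * t / rt60)  # -60 dB at t = rt60

# Schroeder backward integration yields a smooth energy-decay curve
edc = np.cumsum((ir ** 2)[::-1])[::-1]
edc_db = 10 * np.log10(edc / edc[0])

# Fit the -5 to -25 dB region and extrapolate to -60 dB (a "T20" estimate)
i5 = np.argmax(edc_db <= -5)
i25 = np.argmax(edc_db <= -25)
slope = (edc_db[i25] - edc_db[i5]) / ((i25 - i5) / fs)  # dB per second
rt60_est = -60 / slope
print(round(rt60_est, 2))  # close to the 2.0 s used to build the tail
```

Real measurements work the same way but start from a recorded impulse response of the hall rather than synthesized noise.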
In a completely different location such as a carpeted living room, a musical instrument will sound noticeably different compared with the same instrument played in a concert hall. Physical characteristics such as the dimensions and surface treatments of a living room make its acoustical characteristics markedly different from those of a concert hall; the reverberation time, for one, will be significantly shorter. The relatively close proximity of walls will reflect sound back toward a listener within milliseconds of the arrival of the direct sound and at nearly the same amplitude. This small difference in time of arrival and the near-equal amplitude of direct and reflected sound at the ears of a listener change the frequency content of what is heard, through a filtering of the sound known as comb filtering. Floor covering also influences spectral balance: a carpeted floor will absorb some high frequencies, whereas a wood floor will reflect them.
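The comb-filtering effect just described can be seen directly in the frequency response of "direct sound plus one strong early reflection." The following is a minimal NumPy sketch (the 1 ms delay and 0.9 reflection amplitude are hypothetical values chosen for illustration):

```python
import numpy as np

fs = 48000                 # sample rate (Hz)
delay = int(fs * 0.001)    # reflection arrives 1 ms after the direct sound

# Impulse response: direct sound plus a reflection at nearly equal amplitude
h = np.zeros(fs)
h[0] = 1.0
h[delay] = 0.9

# Its magnitude response is a comb: with a 1 ms delay, notches fall at
# odd multiples of 1 / (2 * 0.001 s) = 500 Hz, peaks at multiples of 1 kHz
H = np.abs(np.fft.rfft(h))
freqs = np.fft.rfftfreq(len(h), 1 / fs)

def gain_at(f):
    return H[np.argmin(np.abs(freqs - f))]

print(round(gain_at(500), 2), round(gain_at(1000), 2))  # 0.1 1.9
```

The deep notch at 500 Hz and the near-doubling at 1 kHz are what give comb filtering its characteristic "hollow" coloration.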
When observing the surrounding sonic landscape, the listener may want to consider questions such as the following:
What sounds are present at any given moment?
Besides the more obvious sounds, are there any constant, steady-state, sustained sounds, such as air handling noise or lights humming, that are usually ignored?
Where is each sound located? Are the locations clear and distinct, or diffuse and ambiguous?
How far away are the sound sources?
How loud are they?
What is the character of the acoustic space? Are there any echoes? What is the reverberation decay time?
It can be informative to aurally analyze recorded music heard at any time, whether in a store, club, restaurant, or elevator. It is useful to think about additional questions in such situations:
How is the timbre of the sound affected by the system and environment through which it is presented?
Are all of the elements of the sound clearly audible? If they are not, what elements are difficult to hear and which ones are most prominent?
If the music is familiar, does the balance seem the same as what has been heard in other listening situations?
Active listening is critical in audio engineering, and we can take advantage of times when we are not specifically working on an audio project to heighten our awareness of the auditory landscape and practice our critical listening skills. Walking down the street, sitting in a café, and attending a live music concert all offer opportunities for us to hone our listening skills and thus improve our work with audio. For further study of some of these ideas, see Blesser and Salter's 2006 book Spaces Speak, Are You Listening?, where they expand upon listening to acoustic spaces in a detailed exploration of aural architecture.
Audio engineers are concerned with capturing, mixing, and shaping sound. Whether recording acoustic sound, such as from acoustic musical instruments playing in a live acoustic space, or creating electronic sounds in a digital medium, one of an engineer's goals is to shape sound so that it is most appropriate for reproduction over loudspeakers and headphones and best communicates the intentions of a musical artist. An important aspect of sound recording that an engineer seeks to control is the relative balance of instruments or sound sources, whether through manipulation of recorded audio signals or through microphone and ensemble placement. How sound sources are mixed and balanced in a recording can have a tremendous effect on the musical feel of a composition. Musical and spectral balance is critical to the overall impact of a recording.
Through the process of shaping sound, no matter what equipment is being used or what the end goal is, the main focus of the engineer is simply to listen. Engineers need to constantly analyze what they hear to assess a track or a mix and to help make decisions about further adjustments to balance and processing. Listening is an active process, challenging the engineer to remain continuously aware of any subtle and not so subtle perceived characteristics, changes, and defects in an audio signal.
From the producer to the third assistant engineer, active listening is a priority for everyone involved in any audio production process. No matter your role, practice thinking about and listening for the following items on each recording project:
Timbre. Is a particular microphone in the right place for a given application? Does it need to be equalized? Is the overall timbre of a mix appropriate?
Dynamics. Are sound levels varying too much, or not enough? Can each sound source be heard throughout the piece? Are there any moments when a sound source gets lost or covered by other sounds? Is any sound source overpowering the others?
Overall balance. Does the balance of musical instruments and other sound sources make sense for the music? Or is there too much of one component and not enough of another?
Distortion/clipping. Is any signal level too high, causing distortion?
Extraneous noise. Is there a buzz or hum from a bad cable or connection or ground problem?
Space. Is the reverb/delay/echo right?
Panning. How is the left/right balance of the mix coming out of the loudspeakers?
1.1 What Is Technical Ear Training?
Just as musical ear training or solfège is an integral part of musical training, technical ear training is necessary for all who work in audio, whether in a recording studio, in live sound reinforcement, or in audio hardware/software development. Technical ear training is a type of perceptual learning focused on timbral, dynamic, and spatial attributes of sound as they relate to audio recording and production. In other words, heightened listening skills can be developed allowing an engineer to analyze and rely on auditory perceptions in a more concrete and consistent way. As Eleanor Gibson wrote, perceptual learning refers to "an increase in the ability to extract information from the environment, as a result of experience and practice with stimulation coming from it" (Gibson, 1969). This is not a new idea, and through years of working with audio, recording engineers generally develop strong critical listening skills. By increasing attention on specific types of sounds and comparing successively smaller differences between sounds, engineers can learn to differentiate among features of sounds. When two listeners, one expert and one novice, with identical hearing ability are presented with identical audio signals, an expert listener will likely be able to identify specific features of the audio that a novice listener will not. Through focused practice, a novice engineer can eventually learn to identify sounds and sound qualities that were originally indistinguishable.
A subset of technical ear training includes "timbral" ear training that focuses on the timbre of sound. One of the goals of pursuing this type of training is to become more adept at distinguishing and analyzing a variety of timbres. Timbre is typically defined as that characteristic of sound other than pitch or loudness, which allows a listener to distinguish two or more sounds. Timbre is a multidimensional attribute of sound and depends on a number of physical factors such as the following:
Spectral content. All frequencies present in a sound.
Spectral balance. The relative balance of individual frequencies or frequency ranges.
Amplitude envelope. Primarily the attack (or onset) and decay time of the overall sound, but also that of individual overtones.
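These factors can be made concrete with a small, hypothetical NumPy example (not from the book's software): two tones at the same pitch and similar level that differ only in spectral content, distinguished by inspecting their harmonics.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs   # one second of audio
f0 = 440.0               # same pitch for both tones

# Same pitch, similar loudness, different spectral content:
# a pure sine vs. a tone with added odd harmonics at 1/k amplitudes
pure = np.sin(2 * np.pi * f0 * t)
rich = sum((1 / k) * np.sin(2 * np.pi * k * f0 * t) for k in (1, 3, 5, 7))

def harmonic_level(x, k):
    """Amplitude of the k-th harmonic (FFT bins are 1 Hz apart here)."""
    spectrum = np.abs(np.fft.rfft(x)) * 2 / len(x)
    return spectrum[int(k * f0)]

# Identical fundamentals, but only the rich tone has a third harmonic
print(round(harmonic_level(pure, 3), 3), round(harmonic_level(rich, 3), 3))  # 0.0 0.333
```

The ear hears this difference in spectral content as a difference in timbre, even though pitch and overall level match.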
A person without specific training in audio or music can easily distinguish between the sound of a trumpet and a violin even if both are playing the same pitch at the same loudness—the two instruments sound different. In the world of recorded sound, engineers are often working with much more subtle differences in timbre that are not at all obvious to a casual listener. For instance, an engineer may be comparing the sound of two different microphone preamplifiers or two digital audio sampling rates. At this level of subtlety, a novice listener may hear no difference at all, but it is the experienced engineer's responsibility to be able to make decisions based on such subtle details.
Technical ear training focuses on the features, characteristics, and sonic artifacts that are produced by various types of signal processing commonly used in audio engineering, such as the following:
Equalization and filtering
Reverberation and delay
Characteristics of the stereo image
It also focuses on unwanted or unintended features, characteristics, and sonic artifacts that may be produced by faulty equipment, poor connections, or incorrect parameter settings: noise, hum or buzz, and unintentional nonlinear distortion.
Through concentrated and focused listening, an engineer should be able to identify sonic features that can positively or negatively impact a final audio mix and know how subjective impressions of timbre relate to physical control parameters. The ability to quickly focus on subtle details of sound and make decisions about them is the primary goal of an engineer.
The process of sound recording has had a profound effect on the development of music since the middle of the twentieth century. Music has been transformed from an art form that could be heard only through live performance into one where a recorded performance can be heard over and over again via a storage medium and playback system. Sound recordings can simply document a musical performance, or they may play a more active role in applying specific signal processing and timbral sculpting to recorded sounds. With a sound recording we are creating a virtual sound stage between our loudspeakers, in which instrumental and vocal sounds are located. Within this virtual stage recording engineers can place each instrument and sound.
With technical ear training, we are focusing not only on hearing specific features of sound but also on identifying specific sonic characteristics and types of processing that cause a characteristic to be audible. It is one thing to be able to know that a difference exists between an equalized and nonequalized recording, but it is quite another to be able to name the specific alteration in terms of center frequency, Q, and gain. Just as experts in visual art and graphic design can identify subtle shades and hues of color by name, audio professionals should be able to do the same in the auditory domain.
Sound engineers, hardware and software designers, and developers of the latest perceptual encoders all rely on critical listening skills to help make decisions about a variety of characteristics of sound and sound processing. Many characteristics can be measured in objective ways with test equipment and test signals such as pink noise and sine tones. Unfortunately, these objective measures do not always give a complete picture of how equipment will sound to human ears using musical signals. Some researchers such as Geddes and Lee (2003) have pointed out that high levels of measured nonlinear distortion in a device can be less perceptible to listeners than low levels of measured distortion, depending on the nature of the distortion and the testing methods employed. The opposite can also be true, in that low levels of measured distortion can be perceived strongly by listeners.
This type of situation can be true for other audio specifications such as frequency response. Listeners may prefer a loudspeaker that does not have a flat frequency response over one that does because frequency response is only one objective measurement of the total sound produced by a loudspeaker. In other areas of audio product design, the final tuning of software algorithms and hardware designs is often done by ear by expert listeners. Thus, physical measurements cannot be solely relied upon, and often it is auditory perceptions that determine the final verdict on sound quality.
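As an illustration of one such objective measure (a generic total-harmonic-distortion calculation, not a method prescribed by the book, with a hypothetical drive level), the sketch below hard-clips a sine tone and measures how much harmonic energy the nonlinearity adds:

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
f0 = 1000.0

# Drive a 1 kHz sine into hard clipping (1.5x drive, clipped at +/-1.0)
clipped = np.clip(1.5 * np.sin(2 * np.pi * f0 * t), -1.0, 1.0)

# Read the fundamental and harmonics from the spectrum (1 Hz bins)
spectrum = np.abs(np.fft.rfft(clipped)) * 2 / len(clipped)
fundamental = spectrum[int(f0)]
harmonics = [spectrum[int(k * f0)] for k in range(2, 10)]

# THD: rms of the harmonics relative to the fundamental; symmetric
# clipping adds mostly odd harmonics (3rd, 5th, ...)
thd = np.sqrt(sum(h ** 2 for h in harmonics)) / fundamental
print(f"THD: {100 * thd:.1f}%")
```

As the passage notes, a single figure like this says nothing about how audible or objectionable the distortion actually is; that depends on the distortion's spectrum and the program material.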
Professionals who work with recorded sound on a daily basis understand the need to hear subtle changes in sound. It is important to know not only how these changes came about but also ways in which to use the tools available to remedy any problematic characteristics.
1.1.1 Isomorphic Mapping
One of the primary goals of this book is to facilitate isomorphic mapping of technical and engineering parameters to perceptual attributes: to link auditory perceptions with control of the physical properties of audio signals.
With audio recording technology, engineers have control over technical parameters that correspond to physical attributes of an audio signal, but often it is not clear to the novice how to map a perceived sensation to the control of objective parameters of the sound. A parametric equalizer, for instance, usually allows us to control frequency, gain, and Q. These physical attributes as they are labeled on a device have no natural or obvious correlation to perceptual attributes of an audio signal, and yet engineers engage them to affect a listener's perception of a signal. How does an engineer know what a 6-dB boost at 315 Hz with a Q of 2 sounds like? Without extensive experience with equalizers, these numbers will have little meaning in terms of how they affect the perceived timbre of a sound.
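To make those numbers concrete, the sketch below builds that exact setting (+6 dB at 315 Hz, Q = 2) as a peaking biquad using the widely published RBJ Audio EQ Cookbook formulas (a standard filter design, not the book's own code) and verifies the gain it produces:

```python
import numpy as np

def peaking_eq(fs, f0, gain_db, Q):
    """Peaking-EQ biquad coefficients (RBJ Audio EQ Cookbook)."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * Q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 48000
b, a = peaking_eq(fs, f0=315, gain_db=6.0, Q=2)

def gain_db_at(f):
    """Magnitude response of the biquad, in dB, at frequency f."""
    z1 = np.exp(-1j * 2 * np.pi * f / fs)   # z^-1 evaluated on the unit circle
    H = (b[0] + b[1] * z1 + b[2] * z1 ** 2) / (a[0] + a[1] * z1 + a[2] * z1 ** 2)
    return 20 * np.log10(abs(H))

print(round(gain_db_at(315), 1))   # 6.0 dB boost at the center frequency
print(round(gain_db_at(80), 1))    # well under 1 dB far below the boost band
```

Seeing the numeric response is no substitute for hearing it, which is precisely the gap that technical ear training aims to close.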
1.1 Everyday Listening
1.2 What Is Technical Ear Training?
1.3 What Is Timbre?
1.4 Shaping Sounds
1.5 Sound Reproduction System Configurations
1.6 The Listening Environment
2 Spectral Balance and Equalization
2.1 Types of Filters and Equalizers
3 Spatial Attributes
3.1 Time Delay
3.3 Delay and Comb Filtering
3.4 Reverberation in Multichannel Audio
3.5 Description of the Software Training Module
3.6 Mid-Side Matrixing, Stereo Shuffling, and Polarity Reversal
4 Dynamic Range and Levels
4.1 Level Changes
4.2 Compressors and Limiters
4.3 Expanders and Gates
5 Distortion and Noise
5.1 Clipping and Overdrive
5.2 Noise, Hum and Buzz
5.3 Quantization Distortion: Bit Rate Reduction
6 Amplitude Envelope
6.1 Digital Audio Editing: The Source-Destination Technique
6.2 Software Exercise Module
6.3 Focus of the Exercise
7 Analysis of Sound
7.1 Analysis of Sound from Electroacoustic Sources
7.2 Graphical Analysis of Sound
7.3 Multichannel Audio
7.4 Selecting Listening Material
7.5 Analysis Examples
7.6 Sound Enhancers on Media Players
7.7 Analysis of Sound from Acoustic Sources
A About the Software Practice Modules