The Infant Does Not Talk, But ...
Our opinion is that it is proper for man to suppose there is something unknowable, but that he must not set a limit to his research.
--Johann Wolfgang von Goethe
The Newborn Child: A Stranger
Newborns recognize the voice and smell of their mothers. They look at the faces that peer into their cradles. They distinguish smells and listen to words with endless pleasure. Only mothers, with their keen attentiveness and powers of observation, could recognize the extraordinary capacities of the newborn. According to the clichés, happily now outmoded, that for centuries surrounded newborn infants, these creatures—lacking all knowledge and wrapped in their swaddling clothes—have to learn everything from scratch, as though the acoustic and spatial structures of the world had to be transferred into them from outside. Until that happens, the newborn remains empty. In Aristotle's expression, the newborn is a blank slate.
We now know that not only is the brain of the baby not empty, but in a certain sense it is fuller than that of the most brilliant scientist. The latter's brain contains about 10 billion (10¹⁰) neurons; the number in the baby's brain is reckoned to be still greater. In the developing brain, during the course of embryogenesis, neurons are generated at a rate of roughly 250,000 per minute. The creation of all cortical neurons takes place between the sixth and the seventeenth weeks of gestation. Afterward, and for the rest of one's life, not a single new neuron will be created. The loss of neurons and axons begins with the end of gestation (Rakic et al., 1986). After birth, brain development consists in pruning and arrangement, the loss of neurons being offset by an exuberant creation of synaptic junctions among neurons. These come to be linked together in associative networks whose efficiency would be the envy of all the world's media put together. Indeed, a neuron forms about 1,000 connections and receives still more. It can receive 10,000 messages at the same time. Since the human brain contains 10¹⁰ neurons, the number of junctions may be reckoned at 10¹⁵—more than the number of stars in our galaxy.
If synaptic density begins to grow in the last months of gestation, it literally explodes at birth. Thanks to anatomical studies done on the brains of nonhuman primates, we have a reasonably good idea of how synaptic development occurs in human beings (allowing for the need to extrapolate rhythms of development associated with longer periods of growth and life in humans than in the nonhuman primate). In the primate, the peak of synaptic density occurs between two and four months of life, which corresponds to between eight and ten months in human babies. The creation of synapses tends then to become stabilized, reaching the adult level at sexual maturity. In young humans between nine and twenty-four months, the density of short cortical synaptic connections is still some 150 percent greater than that observed in adults. It begins to decrease in the course of the third year.
Although the significance of such exuberant growth and elimination of neurons and synapses is not yet well understood, these events are thought to be related to processes of competition and selective stabilization, so that neuronal and synaptic redundancy supply the baby with the potential for development. After birth, this maximal connectivity of still labile synaptic contacts presents an unequaled opportunity for choice, as selections and reinforcements are made in response to the outside world. The brain thus sculpts itself under the joint influence of both internal and external experiences, which determine its final architecture and its modes of functioning (Changeux and Danchin, 1976).
Thus constituted, the immature brain provides the child with a capacity for evolution and plasticity that makes possible, as Jean-Pierre Changeux has put it, a "fringe of adaptability that introduces a margin for adaptation."
Speech Is Not the Infant's Language
The adaptation that permitted the production of articulate speech is altogether peculiar to the human species. With the exception of some birds—parrots and mynah birds—that are capable of unharmoniously reproducing certain aspects of the sounds that constitute speech, only human beings can articulate the range of sounds employed in spoken languages.
To speak, it is necessary to master a vocal apparatus having quite particular characteristics. It is necessary to control and coordinate the movement of the larynx, glottis, soft palate, jaw, lips, and tongue. Beyond this, the respiratory cycle must be combined and synchronized with the activity of the vocal cords. The coordination of the muscles involved in articulation is extremely complex. When you enter a room and say something as simple as "Hello, nice weather today," your normal rhythm of speech is fifteen sounds a second, and you have brought into play motor capacities involving the coordinated use of more than 100 muscles.
But evolution has not favored rapid motor development in the human species. While studies of infant perception reveal the existence of surprising and rather mysterious gifts, the picture presented by the newborn lying in a cradle is that of a fragile, helpless, and dependent creature whose head is unstable and too heavy for its body and who is incapable of controlling posture and movement.
Thus, though their native capacity for listening is great, human beings at birth control none of the organs that will permit them to speak. So long as these organs are not functional, the vocal tract of the newborn remains physically unfit for speech.
Spoken language is a more subtle, more abstract, more cultural phenomenon than other motor behaviors, but it displays certain patterns that are found in the biology of the development of motor control. The learning of speech depends on a process of maturation and reorganization of the relevant organs. In the first months, physical changes accompany changes in the production of sounds. Even if these physical changes do not suffice by themselves to explain the evolution of vocal productions during the first year—hearing language spoken is also necessary—the reconfiguration of the vocal tract during the first years needs to be examined (see figure 1.1).
The vocal tract of the newborn does not exhibit the famous right-angle curve associated with upright posture that furnished the basis for the development of articulate language during the course of phylogenesis. In the newborn, the shape of the vocal tract resembles that of nonhuman primates. In fact, the infant's vocal tract is not simply a fragile miniature of the adult's. Its shape differs radically from that of the adult. As in primates, the curve of the oropharyngeal canal is gradual. Steven Pinker (1994, p. 265) describes the vocal tract of the newborn thus: "The larynx comes up like a periscope and engages the nasal passage, forcing the infant to breathe through the nose and making it anatomically possible to drink and breathe at the same time." The pharynx is proportionally shorter in the infant than in the adult, while the oral cavity is relatively larger. The mass of the tongue, significantly, is situated more to the front. It fills up the mouth, and so its possibility of movement is limited. The soft palate and epiglottis are relatively near each other. Thus constituted, the vocal tract does not allow the infant to produce articulate sounds (Kent and Murray, 1982). Furthermore, the infant does not control its own breathing, which feeds the production of sounds.
At three months, the palate becomes lowered and moves forward, so that it now can close off the passage of air into the nose. The tongue becomes lengthened, its musculature becomes more developed, and the opening of the pharynx permits it to move from front to back. The first clear effect of these changes is manifested in the control of the respiratory cycle. The control of phonation is thus rather rapidly acquired: at five months, babies are capable of breathing and using their larynxes roughly as adults do (Koopmans van Beinum and Van der Stelt, 1979).
The development of articulatory control takes longer. In this case it is a matter of governing the machine as a whole (tongue, lips, pharynx, larynx). While the vocal tract has been profoundly remodeled between two and six months, its transformation is still not complete at the end of the first year (Kent, 1976). In the course of the second half of the first year, the child's vocal tract begins to resemble that of the adult and allows for the production of more varied sound patterns, increasingly like those produced by the adult. But not until the age of five or six does control of all the articulators become possible. Their maturation begins with the most central organs, extending next to the peripheral organs. Gross movements are mastered before specific movements, and control of the tip of the tongue and of the lips is the last to be acquired, shortly before the age of five or six.
This general remodeling continues to affect the infant's ability to produce certain groups of sounds during the first three or four years. This slow evolution explains why, for example, French children go on until a rather late age saying "Alisk" for "Alix" or "Obélisk" for "Obélix."
A Competent Newborn
When everything is already known, but when nothing is begun.
--Madame de Staël
If the human brain possesses an innate disposition or even, as some would say, an instinct for the acquisition of language, behavioral correlations with this genetic specification must exist from the earliest age, no doubt from birth. But what are they? And how can babies be questioned about them? They cannot be expected to give, with their heads or hands, any signs of either agreement or disagreement or, of course, any vocal responses.
Recent research provides a plausible picture of how infants, making use of a sophisticated biological and cognitive apparatus, perceive the sounds that constitute speech—not only how they hear them but also how they extract, dissect, recognize, and analyze them. A huge problem immediately arises, however, from the fact that no strict correspondence exists between the acoustic signal and phonetic segments. Despite years of research, and despite the study of all the acoustic parameters involved in the realization of a phoneme, phoneticians have always been disappointed in their quest: the occurrences of a single segment are never acoustically identical. They depend on the contexts in which the segment is found. Thus the [p] in the word purchase is acoustically different from the [p] in the word stop. How, then, can we recognize the sounds that permit words to be identified? How can we attribute to segments whose physical manifestation is highly variable a stable value that enables them to be recognized as identical in all contexts? Researchers at Haskins Laboratories in New Haven, Connecticut, devoted themselves to this problem in the 1940s. They found the answer to the enigma not in the characteristics of the acoustic signal but in the human being's biological abilities.
Let us look first at what happens in the adult. The human psychoacoustic system automatically imposes boundaries within which sounds that are heard form stable categories. Thus, despite the physical continuity of the signal, listeners cut up the acoustic space into successive categories. This capacity of the psychoacoustic system to perceive sounds discontinuously and in the form of discrete units is known as categorical perception (Liberman, Harris, Kinney, and Lane, 1961): phonemes are perceived in a discontinuous manner throughout a sound series that physically is continuous. It seems reasonable to suppose that a property of the auditory system in primates was exploited over the course of evolution for the purpose of organizing the sounds that constitute speech. In humans, in any case, categorical perception became one of the fundamental mechanisms serving to distinguish these speech sounds.
If things are thus in the case of the adult, what happens with newborns? Are they born with a fundamental capacity for organizing the speech they hear? And if so, how is this known? How can we confirm the hypothesis that language learning is due to innate biological mechanisms?
In 1969, Einar Siqueland and Clement DeLucia had the remarkable idea of making use of the only behavior mastered by the newborn—sucking. To survive, newborns must know how to suck. For the most part, they do so enthusiastically. Thanks to this enthusiasm, Siqueland and DeLucia were able to employ a method that made it possible for the first time to carry out experimental research on newborns and language learning.
This method is known as high-amplitude sucking (HAS) or, to use Siqueland and DeLucia's (1969) term, nonnutritive sucking. One begins by putting the infant in a baby carrier. A rubber nipple, supported by a rigid rod, is connected to a computer. The amplitude of the infant's sucking activity is measured for two minutes in the case of each child to define a personal baseline measure in the absence of any sound presentation. Once the baseline has been established, the period of familiarization begins. During this period, each sucking action whose amplitude exceeds the baseline activates the sound circuit and gives rise to the presentation of a sound. Thus, the number of sounds presented depends on the baby's attentiveness in sucking on the nipple. After a certain time has elapsed, the rate of sucking decreases.
The procedure passes then to the next phase, which makes up the test proper and begins with a change in the type of stimulus. The sound that the baby receives on the occasion of the next strong sucking action is different from the one heard during the familiarization period. The idea is that, on the one hand, babies like to be stimulated and, on the other, they have a great capacity for relating events to each other. They therefore establish a relation between the appearance of sounds and their sucking. At first, interested by what they hear, they suck vigorously. Then, monotony being mother to boredom (as the French proverb has it), lassitude sets in, and their ardor for sucking diminishes. The opposite is true as well: novelty arouses renewed interest and, once perceived, leads the child to resume sucking to take advantage of the new stimulation. Thus, the resumption of sucking as a result of a change in stimulus indicates that the baby has indeed perceived a difference between the two stimuli. Conversely, the absence of revived vigorous sucking indicates that the difference between the stimuli has gone unnoticed.
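The contingency at the heart of the HAS procedure can be sketched in a few lines of code. The sketch below is a hypothetical illustration, not part of any cited study: the amplitude readings, the baseline value, and the function names are all invented. Sucking events above the baseline trigger a sound; a higher sound rate after the stimulus switch than before it counts as the "renewed sucking" that signals discrimination.

```python
# Hypothetical sketch of the high-amplitude sucking (HAS) contingency.
# Amplitude values, the baseline, and all names are invented for
# illustration; this is not the apparatus used in the cited studies.

def has_session(suck_amplitudes, baseline, switch_at):
    """Run a toy HAS session over a sequence of sucking amplitudes.

    suck_amplitudes: per-interval amplitude readings
    baseline: amplitude measured during the initial silent period
    switch_at: index at which the stimulus changes (test phase begins)
    Returns the number of sounds triggered before and after the switch.
    """
    sounds_before = 0
    sounds_after = 0
    for i, amp in enumerate(suck_amplitudes):
        if amp > baseline:              # a high-amplitude suck...
            if i < switch_at:
                sounds_before += 1      # ...plays the familiar stimulus
            else:
                sounds_after += 1       # ...plays the novel stimulus
    return sounds_before, sounds_after

def discriminated(sounds_before, sounds_after, n_before, n_after):
    """Infer discrimination: the sucking rate rebounds after the switch."""
    return (sounds_after / n_after) > (sounds_before / n_before)

# Toy trial: vigorous sucking, habituation, then a rebound at the switch.
amps = [5, 5, 4, 3, 2, 1, 1, 5, 5, 4]   # baseline amplitude is 2
before, after = has_session(amps, baseline=2, switch_at=7)
print(discriminated(before, after, n_before=7, n_after=3))  # True
```

The design choice mirrored here is the one that makes the method work: the infant, not the experimenter, controls the stimulation, so the sound rate is a direct readout of the infant's interest.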
This ingenious way of measuring nonnutritive sucking made it possible to investigate the ability of infants to distinguish the sounds that form the framework of the languages spoken around them. The first question was whether babies discriminate among speech sounds categorically, as adults do.
Peter Eimas, E. Siqueland, P. Jusczyk, and J. Vigorito (1971) gave a preliminary answer. They found that four-month-old babies distinguish the syllable [ba] from the syllable [pa] not merely by detecting an acoustic difference but by detecting it when that difference places the two syllables on opposite sides of a boundary close to the one adults use to distinguish [ba] and [pa]. Infants do indeed discriminate categorically (see figure 1.2).
Since 1971, dozens of experiments, some of them conducted with babies as young as three and four days old, have shown that infants are capable of distinguishing almost all the phonetic contrasts found across natural languages (Jusczyk, 1985). They discriminate between the contrasts of voicing, place, and manner of articulation that constitute phonetic categories. Babies who are only a few days old show themselves to be small geniuses in this domain.
Questions then began to multiply. Is the predisposition to process language sounds limited to a talent for distinguishing language segments? Or are babies also precociously sensitive to other particularly important aspects of language, such as the prosodic contours of sentences, with their melody and rhythm? Experiments quickly confirmed the importance of prosody. Newborns only a few days old prefer to listen to the voice of their mother when this is presented in competition with that of another mother talking to her baby. But the mother's intonation must be natural; if a tape recording of her voice is played backward, the child's preference no longer holds. This preference is related to the dynamic aspects of maternal speech, such as intonation, rather than to the static aspects of sounds, which are preserved when the tape is played backward. The child's attention therefore is tied not to the static characteristics of the voice but to those characteristics of the voice present under normal conversational circumstances (Mehler, Bertoncini, Barriere, and Jassik-Gershenfeld, 1978).
In addition to a preference for the mother's voice, the child shows a preference for the mother's language (Mehler et al., 1988). When sequences of French speech are presented following sequences of Russian speech, four-day-old French infants show stronger renewed sucking than when these sequences are presented in the reverse order. Since the two samples are recorded by a single bilingual speaker, it is not a matter of different speakers. Furthermore, this preference is maintained when the sequences have been filtered to remove most of the phonetic information while leaving the prosody intact. The prosodic differences between the child's native language and the foreign language are therefore sufficient to arouse a livelier reaction during the presentation of the native language. Is this familiarity with the native language uniquely the product of the baby's contact with the mother during the first few days following birth? Is so short a time really sufficient to orient the infant's attention to certain general properties that characterize the prosody of the language spoken in its environment? Or might this process of familiarization have begun earlier, during the course of prenatal life?
The Infant Is Prepared Before Birth
The embryo of the first months does not seem to have much to tell us about language, but the fetus of the final months does. Traditionally, it was supposed that the future child, comfortably insulated inside its mother, bathed in amniotic fluid, enjoys an agreeable silence that enables it to develop peacefully before having to brave the noisy, air-filled atmosphere in which it will later live. Physicians long dismissed as maternal imagination the observations of mothers who felt the fetus react to sharp noises and jump at the sound of a loud telephone ring. It is now known that the child's senses gradually begin to function before birth. The auditory system of the fetus is functional from the twenty-fifth week of gestation, and its level of hearing toward the thirty-fifth week approaches that of an adult. Auditory sensory data reach the fetus both from the intrauterine area—from the mother's living body—and from the outside world. The first recordings of sounds reaching the fetus gave the impression of a very loud atmosphere within the womb. Internal sounds (respiratory, cardiovascular, and gastrointestinal noises) were therefore thought to partly mask external sounds, already muffled by the uterine membrane and the beating of the mother's heart. More recent recordings have somewhat changed this picture of the fetus's acoustic environment. These recordings, made by slipping a hydrophone inside the uterus of pregnant women at rest, show that intrauterine background noise is located in the low frequencies, which limits its masking effect (Querleu, Renard, and Versyp, 1981). The mother's voice and the voices of others in the environment thus manage to pass through this background noise. The intensity of the mother's voice in utero is not far removed from its intensity ex utero. The high frequencies are attenuated, but the spectral properties of the mother's speech remain the same, and the chief acoustic properties of the signal are preserved. 
Words spoken by the mother are transmitted through the air and also through her own body. They are therefore more perceptible than sounds coming from the outside alone, though these too are perfectly audible to the fetus. The prosody is particularly well preserved: the intonation of speech recorded in utero was faultlessly recognized by adult listeners; the same was true for 30 percent of the phonemes.
But how can we explore prenatal capacities for speech? The technique of nonnutritive sucking involves an interruption, and then a revival, of attention during a change of stimulus. This same type of approach can be used to test perception in the fetus. We possess physiological measures of its behavior in more or less profound states of waking and sleeping. Cardiac and motor responses can give us some idea of what surprises and alerts the fetus when it is in a state of rest. When presented with a repetitive sound, by means of a speaker positioned twenty centimeters (or about eight inches) above the mother's abdomen, the fetus gradually becomes used to it. The beginning of the presentation of the sound provokes an initial reaction of arousal, which is manifested by a reduced heart rate. Cardiac deceleration then subsides and finally disappears, and the heart resumes its normal rhythm with repeated presentations of the sound. This is the period of habituation. If, after habituation, the sound is changed, a new round of cardiac deceleration indicates that the novelty of the sound has been perceived. This habituation-dishabituation paradigm is the basis of methods used to test the capacities of the fetus (Lecanuet et al., 1987).
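The habituation-dishabituation logic just described can be made concrete with a small sketch. Everything in it is invented for illustration: the heart-rate values, the deceleration threshold, and the function names do not come from Lecanuet et al. (1987). The idea is only this: a reading that dips well below the resting rate counts as a deceleration, decelerations dying out marks habituation, and a fresh deceleration after the stimulus change marks dishabituation.

```python
# Hypothetical sketch of the cardiac habituation-dishabituation logic.
# Heart rates (bpm), the threshold, and the window size are invented;
# this is an illustration of the paradigm, not the published protocol.

def decelerations(heart_rates, resting_rate, threshold=5):
    """Mark each reading that dips at least `threshold` bpm below rest."""
    return [hr <= resting_rate - threshold for hr in heart_rates]

def dishabituated(heart_rates, resting_rate, change_at):
    """Habituation: decelerations have died out just before the change.
    Dishabituation: a fresh deceleration appears after the change."""
    marks = decelerations(heart_rates, resting_rate)
    settled = not any(marks[change_at - 3:change_at])  # no recent dips
    renewed = any(marks[change_at:])                   # dip after change
    return settled and renewed

# Toy trial: a dip at sound onset, recovery (habituation), then a new
# dip when the stimulus changes at index 6.
hr = [140, 132, 134, 139, 140, 141, 133, 140]  # resting rate is 140
print(dishabituated(hr, resting_rate=140, change_at=6))  # True
```

Only the conjunction of the two conditions is informative: a deceleration after the change means nothing unless the response to the first stimulus had already faded.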
A number of studies reveal how the fetus reacts to variations of the physical characteristics of the stimulation by examining its behavioral state (see, for example, Lecanuet and Granier-Deferre, 1993; see also Lecanuet, Granier-Deferre, and Schaal, 1993). Differences in both the intensity and frequency of sound stimuli elicit discriminatory reactions in the form of cardiac deceleration. The same is true for variation in the order of speech sounds. Jean-Pierre Lecanuet and C. Granier-Deferre (1993) presented fetuses between thirty-six and forty weeks old with sixteen repetitions of the disyllable [babi]; when the fetus was habituated, this disyllable was changed to [biba]. The change in the order of the syllables provoked a deceleration in the rate of the heartbeat of the fetus, tested in a state of calm sleep. This deceleration indicates that the two sequences were distinguished. Nothing allows us to say that the fetus recognized them. Nonetheless, the fetus did react to a simple change in the order of the two phonetically similar syllables that made up the disyllables. The second disyllable was new to it by comparison with the first.
The question arises, then, whether exposing fetuses to their mother's language before birth favors perceptual adjustment to the phonetic and prosodic parameters that characterize this language and differentiate it from others. We have seen that categorical discrimination is universal in newborns. At the same time, however, they recognize the voice of their mother only when prosody is preserved. Does there exist a prenatal framework that helps regulate certain sophisticated infant perceptual capacities? Does external stimulation leave an imprint on the brain of the fetus? Or are the observed reactions simply signs of intermittent arousal in response to changes in stimulation?
To better determine the source of the discriminations observed in the fetus and their impact on the capacities of newborns, attempts were made to discover whether memories of prenatal experiences persist. Using the well-tested method of nonnutritive sucking, it was first asked simply whether newborns between one and three days old were able, by virtue of the prenatal experience of their mother's voice, to distinguish this voice from that of other speakers (DeCasper and Fifer, 1980). When they had no more than twelve hours of effective contact (ex utero) with her, newborns preferred the voice of their mother to that of another woman. The questions that followed were more specific. Do the effects of the fetus's exposure to acoustically important characteristics of speech carry over to the newborn? To find out, Anthony DeCasper and Melanie Spence (1986) used a more sensitive variant of the procedure of nonnutritive sucking. In their procedure, one of the stimuli is presented while the newborn makes long pauses between suckings; the other is presented during brief pauses. Newborns regulate the rhythm of their sucking according to their preference for the stimulus: slow sucking generates one of the stimuli, and rapid sucking the other.
Using this method, the authors showed that newborns would give one rhythm to their sucking to hear a passage of prose that had been read aloud by the mother during the last six weeks of pregnancy and another rhythm to hear a new prose passage read by the mother but not previously heard during pregnancy. One might suppose that the mother's voice simply enjoys an altogether special status and serves as the model for recognizing the intonation and the regularities of the passage that had been heard in utero. But the newborns preferred the passage read by the mother before their birth even if it was read by another woman during the test. The fetus therefore appeared to be responsive to the general acoustic properties of the speech signal and not simply to the voice and specific intonations of the mother.
This conclusion called for verification: the authors carried out another experiment, now testing recognition not in newborns but in the fetus (DeCasper and Spence, 1986). They asked future mothers to read a poem aloud every day for four weeks. At the end of these four weeks, when the mother was in the thirty-seventh week of gestation, the fetus listened to the poem that the mother had recited in alternation with another poem never heard before. These alternating sequences were recorded by a third person and retransmitted through a speaker positioned at the level of the head of the fetus. Variations in the heart rate served as an index of discrimination. This technique confirmed the role of prenatal exposure. In fact, the heart rate systematically decreased only in response to the poem read by the mother during the preceding four weeks and did not vary during the reading of the other poem. What cues enabled the fetus to react to the familiar poem? These were not characteristics of the mother's voice, since the test poems were recorded by another woman. Nor was it a matter of some distinctive rhythm peculiar to one particular poem, for precautions had been taken not to familiarize all the fetuses with the same poem. It must be concluded, then, that every language event with normal intonation and rhythm alerts the fetus and leads it to regulate its listening to this linguistic model, whose imprint persists at least for a certain time.
Familiarization with the mother's language therefore takes place in the last months of prenatal life. Sound stimulations received during the last months of intrauterine development are likely to contribute to the priming of sensory pathways and to calibrate perception to certain characteristics of speech sounds.
The Talents of Infants
But let us return to infants, who are not so naive as had been thought, since they are prepared for listening during the prenatal period. At birth, they are capable of distinguishing a broad range of consonant and vowel contrasts, whether or not these contrasts belong to the repertoire of the language spoken in their immediate environment. What is more, babies very quickly show evidence of perceptual constancy: they recognize the similarity of sounds belonging to a single phonetic category, despite physical variations. A single sound can be phonetically realized in many different ways, yet each variant must be recognized as the same sound. Let us take an example: the sound [a] spoken by a man with a deep bass voice, by a child with a high-pitched voice, by a person with a southern accent, by a person with a northern accent, with a rising intonation or with a falling tone, in different contexts, must be categorized as the same vowel /a/. Studies have shown that at five months, infants are able to neglect the variations of a vowel due to changes in speaker and intonation (Kuhl, 1983). They arrange the different samples of a single sound into a single category.
Another talent of two-month-old infants is the particular status that they accord the syllable. The syllable is perceived by them as a whole rather than as a combination of distinct elements. This has been demonstrated experimentally. Two-month-old babies were familiarized with a series of syllables sharing a common phoneme—for example, [bi], [si], [li], [mi] (a common vowel with different consonants) or [bo], [ba], [be] (a common consonant with different vowels). It was then observed that the infants were capable of detecting the addition of a new syllable—for example, [di] or [bu]—following a series of syllables that they were familiar with. The babies noticed that [bu] is different from [bo], [ba], and [be], and also from [du]. The fact that they noticed the novelty of [bu], even though it begins with the same consonant as the familiar syllables, shows that the babies did not extract the phoneme /b/ as the property common to the habituation stimuli—that is, they did not decompose the syllables into smaller elements (Jusczyk and Derrah, 1987; see also Bertoncini et al., 1988). A related study showed that babies distinguish between sequences of disyllables and sequences of trisyllables, even when the total duration of the sequences remains the same (Bijeljac-Babic, Bertoncini, and Mehler, 1993). This again indicates that the perception of a sound series is organized by syllables.
These aptitudes of the infant are highlighted by experiments in which the acoustic cues are presented in isolation. However, we may ask whether the same discriminatory performance will be found when other auditory promptings, such as prosody, compete for the baby's attention. Experiments by Denise Mandel, P. Jusczyk, and D. Kemler-Nelson (1994) addressed this question. They formed the hypothesis that the prosodic cues detected by infants in the first weeks after birth are likely to play an important role in helping the infant organize speech information. They therefore tested the discrimination of phonetic contrasts presented in sentences and compared it against the same contrasts presented in word lists. The results of these experiments confirmed their hypothesis: babies of two months detect changes of phonemes better when they occur as part of short sentences than in lists of words. The babies' rate of sucking strongly increased when a series of sentences of the type The (r)at chases the white mouse followed the sentence The (c)at chases the white mouse. The babies reacted less strongly to the change of the phoneme /k/ to /r/ when it appeared in a list of words read in succession than when it appeared in sentences uttered with a natural intonation.
In everyday life, the natural prosody of the language of infants' mothers commands their listening attention. As the authors suggest, prosody serves as a sort of perceptual glue that holds together sequences of speech. Certainly mothers, who amplify the variations of intonation and play with their voices when they talk to their children, feel this to be so. Thanks to such variations, babies not only retain their capacity for discrimination but find their capacity reinforced by the exaggeration of rhythm and prosodic contours. One observes furthermore that babies better distinguish phonetic contrasts when sentences are read by a woman speaking directly to a child than when they are read by an adult addressing another adult.
What's in a Name?
Are newborns sensitive only to the superficial characteristics of speech? Do some patterns in particular come to acquire a meaning?
The infant's name is often spoken when her parents cuddle or play with her. Does this sound form, which is often associated with feelings of personal well-being, take on particular significance? Can the child recognize when her name is pronounced? Denise Mandel, P. Jusczyk, and D. Pisoni (1995) studied infants of four and a half months to determine whether their names had special status for them.
The method of nonnutritive sucking no longer works for infants at this age. Fortunately, it becomes possible to inquire into their preferences more directly. Two speakers are placed on either side of the infant. Above each speaker there is a small light. So long as the child gazes toward one of the lights, a sound stimulus (the child's own name, on the one hand, and, on the other, three other names spoken with the same tone) is broadcast through the corresponding speaker. The cumulative listening time—or, more exactly, the amount of time spent looking at the light sources—indicates the child's preference for one or the other stimulus.
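The measure used in this preferential-looking setup reduces to a simple tally, which can be sketched as follows. The gaze samples, the side labels, and the function name are all invented for illustration; the real procedure times continuous gaze with much finer resolution.

```python
# Hypothetical sketch of the cumulative listening-time measure.
# Gaze samples and the assignment of names to sides are invented;
# this illustrates the logic, not the apparatus of Mandel et al. (1995).

def cumulative_listening(gaze_samples, target_side):
    """Total number of samples the infant looks toward target_side,
    which is also how long the corresponding stimulus keeps playing,
    since the sound stops when the infant looks away."""
    return sum(1 for side in gaze_samples if side == target_side)

# Toy trial: 'L' plays the infant's own name, 'R' plays another name.
gaze = ['L', 'L', 'R', 'L', 'L', 'L', 'R', 'L', 'L', 'R']
own = cumulative_listening(gaze, 'L')    # 7 samples
other = cumulative_listening(gaze, 'R')  # 3 samples
print(own > other)  # longer listening to the own-name side: preference
```

As with high-amplitude sucking, the infant's own behavior gates the stimulation, so looking time doubles as listening time.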
It turns out that babies listen more attentively to their own name than to the names of their friends. One's name is therefore a recognized signal. However, to say that it is a signal does not imply that the baby of four months connects sound patterns with meanings. Dogs recognize their name, which is as much of a signal for them as the sight of their leash or that of their masters putting on their coats. For the dog as for the baby, names are sound signals arousing attention in one or more particular situations. Babies of four months react to their name, without necessarily realizing that the sound forms have a referential function.
The brain of the newborn is therefore far from being empty. But is the newborn's language capacity organized like that of an adult?