apeiron
#441
Jan23-12, 02:25 AM
Quote by bohm2:
There are a lot more than just 2 deaf infants. And many other papers by Petitto. And I side with her. She taught at my university. So I'm biased. What can I say?
So your biases count for more than any arguments?

Deaf kids will spontaneously babble, but hearing kids do not spontaneously sign. Petitto's claim of an equivalent hand-babbling phase in deaf kids learning sign shows at best that the brain is plastic enough to respond to other kinds of speech input. The imitative reflexes are strong.

The debate here is about language evolution and why the vocal tract supplied the critical constraint. MacNeilage suggests a convincing story about one of the reasons vocalisation was special: the dichotomous nature of vowel and consonant production cuts an analog flow of sound into a digital stream, thus creating the computational elements needed to ground symbol-mediated semiosis.

So the adaptability of the brain to other modalities, such as signing or reading, is beside the point for the evolutionary thesis.

Reading is obviously unnatural. No one claims it to be a brain module or an innate ability, although you do see parents working hard to train their kids with flash cards these days.

Signing is interesting because babies can in fact learn it earlier. So an even newer fad is signing to your baby. For a laugh, see http://www.babysignlanguage.com/dictionary/

So presumably signing is less cognitively demanding than vocalisation. But that only deepens the puzzle of why it is not the primary modality of speech in humans.

The answer has to be that signing lacks the "by necessity" digitisation of action that MacNeilage identifies in the opposed opening~closing of the mouth, which paves the way for the sharp division between vowel and consonant production.

As Petitto says, "a well-formed syllable [sign] has a handshape, a location, and a path movement (change of location) or secondary movement (change in handshape, or orientation)." The hands are far less constrained, which makes it much less likely they could ever lead to a symbolic method of communication, since that depends on a digital-level constraint over motor output in a communicative, social setting.

So the general hypothesis here is that brains were already hierarchically structured, but they needed some novel constraint to arrive at symbolic speech. The flow of sound (or hand gestures, or scribblings on paper, or whatever) had to be broken up into a digitised stream, in just the same way DNA is a string of discrete codons. You had to have a sharp epistemic cut between code and metabolism, or in the case of speech, between semantics and phonology.
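To make the digitisation idea concrete, here is a minimal Python sketch of my own (an illustration of the general point, not MacNeilage's actual model); the signal, threshold, and "codon" length are all invented for the example:

```python
import math

# A toy "analog" signal: a slowly modulated oscillation standing in for
# the continuous flow of sound (or of jaw movement).
signal = [math.sin(0.4 * t) * (1.0 + 0.3 * math.sin(0.05 * t)) for t in range(60)]

# First cut: threshold the analog flow into a binary open~close stream,
# the way the jaw cycle splits vocalisation into open (vowel-like) and
# closed (consonant-like) phases.
stream = ['O' if x > 0 else 'C' for x in signal]

# Second cut: read the binary stream in fixed-length chunks, the way DNA
# is read as discrete three-letter codons.
CODON = 3
codons = [''.join(stream[i:i + CODON]) for i in range(0, len(stream) - CODON + 1, CODON)]

print(''.join(stream))   # the digitised stream
print(codons[:8])        # a discrete code laid over an analog flow
```

The point of the sketch is just that once the cut is made, everything downstream operates on discrete tokens; the analog detail below the threshold no longer matters to the code.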

Lieberman makes a case for the vocal tract, which in humans has the critical novel feature of being split in the middle by the hunched ball of the tongue. You then have a whole tree of further vocalisation dichotomies (oral vs nasal vocalisation, tongue blade vs tongue tip, pursed lip vs bitten lip, etc.) to create rich phonological structure (a nested hierarchy constructed out of digital components).
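A rough way to see how a few binary dichotomies generate a rich digital phonology: each opposition contributes one bit, and the combinations multiply. The feature names below are loose stand-ins for the oppositions just mentioned, not Lieberman's actual feature set:

```python
from itertools import product

# Each dichotomy is a binary choice; together they span a combinatorial space.
dichotomies = {
    'airflow': ('oral', 'nasal'),
    'tongue':  ('blade', 'tip'),
    'lips':    ('pursed', 'bitten'),
}

# Every combination of choices picks out one discrete articulatory target.
inventory = list(product(*dichotomies.values()))
print(len(inventory))  # 2**3 = 8 discrete targets from just three binary cuts
for combo in inventory[:3]:
    print(dict(zip(dichotomies.keys(), combo)))
```

Each added dichotomy doubles the inventory, which is how a handful of sharp articulatory cuts can yield a large space of distinct, reproducible phonological components.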

MacNeilage offers another element of the story by pointing out the basic antagonistic nature of vowel and consonant production. The babbling part of the story shows just how simple an alternating motor pattern it is, like the gait reflexes kids have that are the starting template for learning to crawl or walk. MacNeilage may well push the "development recapitulates evolution" angle too hard, but that is not the critical part of the story IMO.

So this is how it goes. You start by zeroing in on the crucial evolutionary novelty when it comes to language, and this is the digitised phonology that allows the construction of recursive, nested hierarchical patterns. These hierarchical patterns in turn open up a new realm of semantic control, because each word is a symbol: a top-down constraint acting on the state of the brain.

Saying "cat" is just a noise, a puff of air. But it puts your brain into a specific anticipatory state. And I can construct a hierarchy of such semantic constraint by stringing words together, like "the pink cat that sat on the blue fluffy rug".

Having focused on the critical advance, the "deep structure" explanation, you can then start to look for the detailed story of how it might have evolved. That is why some stories, such as Lieberman's and MacNeilage's, stand out as immediately plausible, while many others are just lost at sea.

Chomsky got that there is hierarchical structure at the centre of the story. But he does not seem to understand that this is just standard neurology: it is what optimal computation in fact looks like when it comes to forward-modelling the world.

And then he does not get the phonology digitisation angle, which, as with DNA, is the only way to get a sharp epistemic cut between a code and the world it represents/controls. Or at least Newport thinks he is kind of coming around to the linearisation at the motor output interface (or whatever).

And again, this idea of modelling complexity as semiotic mechanism, as the implementation of epistemic cuts, is a general one. It applies to cellular structure like membranes and pores, and to neural structure like spikes and synapses. It is a general theory of systems causality, and so fundamental to the whole mind~body problem.