 Having discussed the principles of articulation in the clip Articulation and Control 1, let us now focus on how the central nervous system controls these activities. This is our program: we will first look at the central nervous system and its control activities, and will finally introduce models of speech production, including evidence for and against them. Well, the central nervous system consists of the brain, which is contained within the cranium, and the medulla spinalis or spinal cord, lodged in the vertebral canal. The brain itself, which you can see here, is a jelly-like substance which in adults weighs about three pounds. It is divided into three parts: the brain stem, which is an extension of the spinal cord; the cerebellum, which is important for coordinating voluntary movements, for example walking, posture and speech, and for learning motor-skilled behaviors; and of course the main part, which is referred to as the cerebrum. Let us look at the cerebrum in more detail. The cerebrum can be subdivided into its outer surface, the so-called cortex, which is divided into a left and a right hemisphere, and various subcortical elements. The motor cortex is the first area we are looking at; this part here is the motor cortex, and it is primarily responsible for the control of movement over the whole body. Right next to it we find the sensory cortex, which processes sensory information from the body. Here in the frontal lobe we have Broca's area, named after the French physician Paul Broca, who lived in the 19th century. Broca's area is involved in the movement of the articulators and appears to control speech aspects of the oral tract. Its counterpart, here in the back, is Wernicke's area, named after the German neurologist Carl Wernicke. 
Now Wernicke's area appears to be centrally involved in the meaning aspects of production and comprehension of spoken and written forms. In between we have this area here, let's use the blue color again, which is the auditory cortex. The auditory cortex has some specialized speech-auditory functions that are not found in the corresponding areas of the right hemisphere. And finally we need a control mechanism for vision, the visual cortex. Within the visual cortex, extraordinarily specialized cells can respond to particular orientations of an edge and to other forms. In speech production, brain activity starts in the motor cortex and is transmitted via several nervous pathways to the articulators. But it has to be controlled. So what sorts of control activities are there? Many body actions are automatic: we do not have to consciously remind our hearts to beat, or our gastric juices to flow after we have eaten something like our birthday cake. But what about the speech signal, that is, the transmission of motor cortex activity to the articulators? How long does it take? Well, the central assumption is that the total time between the initiation of neural action and articulation is about 0.2 seconds. This value has been confirmed in so-called shadowing tasks, where subjects have to repeat verbatim something that is said to them. There it takes a little bit longer, because you have to perceive the input signal as well. So, 0.2 seconds. Does that mean that articulation is a reflex? If so, we first have to find out what a reflex is. So let's look at the reflex loop. A reflex is an automatic and innate response to a particular stimulus. The simplest reflex involves two neurons: a sensory nerve cell with a receptor, and a motor neuron connected to a muscle or gland. The sensory neuron transmits a signal directly to the motor neuron through a connection within the brain or in the spinal cord. The whole arrangement is known as the reflex arc. 
Here is the receptor, and here is the spinal cord and the link into the brain. Thus, a reflex response is extremely rapid. Let's look at it again. Here it goes: a very rapid reaction. Examples are the withdrawal reflex of the hand from a painful stimulus, as shown here, and the stretch reflex. Even though articulatory activities are normally subconscious and thus reflex-like, they can be made conscious. For example, we can control the position of the tongue: we know where the tongue is in a duh, duh, duh. We can feel it; we can make it conscious. Or we can control the state of the glottis, something like that: creaky voice. Or the position of the velum, by producing a constant type of nasalization. So, in other words, speech must involve some sort of non-reflexive feedback, and this is called proprioceptive feedback. Proprioception is the ability to sense where body parts are and what they are doing. For example, even without looking, I can create an angle of 90 degrees between my upper and my lower arm, something like this. How can I do this? Proprioception provides information on where the limbs are in space, even without looking. The receptors for this sense are located in the joints. Proprioception also helps to control the amount of force needed for different tasks, for example petting a dog without hurting it. Part of this process involves oral motor skills, which require proprioceptive awareness, since we do not visually monitor our mouth movements but need input from the muscles and joints in the mouth. So it looks like proprioception is an important part of speech production. But how can we model the entire process? Well, here you see two models. 
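The idea of proprioceptive feedback can be sketched as a simple closed-loop controller: sense the current state, compare it with the target, and issue a corrective command. Here is a minimal sketch in Python, assuming a single articulator position on an arbitrary numeric scale; the function name, gain and tolerance values are invented for illustration and are not a model of actual neural control.

```python
def feedback_control(position, target, gain=0.5, tolerance=0.01, max_steps=100):
    """Repeatedly sense the current position and correct toward the target."""
    trajectory = [position]
    for _ in range(max_steps):
        error = target - position      # proprioceptive "sense" of where we are
        if abs(error) < tolerance:     # close enough: stop correcting
            break
        position += gain * error       # corrective motor command
        trajectory.append(position)
    return position, trajectory

# Move a (hypothetical) articulator from position 0.0 to target 1.0.
final, path = feedback_control(position=0.0, target=1.0)
```

The same loop structure underlies compensation: if something perturbs the position, the sensed error grows again and the corrective commands adjust automatically, without any change to the plan.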
In order for the motor program for the articulators and articulation to run, the abstract phonetic plan, which serves as the input to the articulation and control process, must be translated into neural commands that drive the main articulatory organs. One question in modeling this highly complex process concerns context dependency: how do we take into account what comes next, for example, in the motoric execution of speech? Put more simply: do we produce a sequence of sounds one by one, one step after the next, or not? Two central models have emerged to answer this question. The first one is called the open loop model, also called the comb model; you see, this sort of model here looks like a comb for your hair. It implies that all articulatory planning units, that is, the units A and A′ and so on, are retrieved at the same time. There is no feedback between the units. Let's imagine A, B, C and D and their primed counterparts as words or as phonemes, as units used in speech. Feedback between these units is impossible; the entire articulatory plan must be available instantaneously. The opposite model is the closed loop model, which is also referred to as the associative chain model. Here you see the chain of elements. The central implication is that each articulatory planning unit is fully controlled before the subsequent unit is retrieved. Do we have any evidence for or against these models? Well, we do: assimilation. Let's look at something like this. You may have the name John, and the sentence John played football. What may happen here is that we assimilate the final alveolar nasal to a bilabial one. Instead of saying John played with [n], we could say Johm played with [m]. But how can we account for articulatory postures that lie ahead? 
Well, the general idea is that only the open loop model can account for this effect, because the entire plan, including knowledge about what lies ahead, is available. The same applies to co-articulation. Co-articulation processes are things like this. Look at the following word: here we have the item strupe. Now look at it in terms of a narrow phonetic transcription; the previous one was phonemic, by the way. We know that lip rounding occurs already on the first three consonants. So even during the initial consonants [str], before we reach the rounded vowel, we round our lips in anticipation. Again, we have an anticipatory activity that can only be taken into account by the open loop model. On the other hand, we have compensatory effects, as illustrated here. Very often we have something in our mouths: oral blockages, food in our mouth. Or damage might impair the articulation process. But we can still speak. For example, I can speak with a finger in my mouth, and you can still understand me; it sounds a little bit awkward. How can we account for that? Only if we take into account constant feedback. So we have two competing models and a set of effects. The first two effects, assimilation and co-articulation, clearly argue for an open loop model, and the effect of compensation is clearly an argument for a closed loop model. Open loop models can account well for assimilatory and co-articulatory effects; closed loop models can explain compensatory effects, such as oral blockages, very well, since these disturbances require feedback from the articulatory units to the planning units. So a combination of both seems to be the most suitable model for speech production.
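The contrast between the two models can be sketched in a few lines of Python. This is a toy illustration under invented assumptions: units are plain characters, anticipation is reduced to a single nasal-assimilation rule, and "articulation" is a callback that may fail and be retried; none of this is a real phonetic model.

```python
def open_loop(plan):
    # Comb model: the whole plan is available at once, so each unit can
    # "see" what lies ahead (anticipatory assimilation / co-articulation).
    output = []
    for i, unit in enumerate(plan):
        upcoming = plan[i + 1:]            # lookahead is possible
        if unit == "n" and upcoming and upcoming[0] == "p":
            unit = "m"                     # e.g. "John played" -> "Johm played"
        output.append(unit)
    return output

def closed_loop(plan, articulate):
    # Associative chain model: each unit is fully executed, and its result
    # fed back, before the next one is retrieved. No lookahead, but
    # disturbances can be compensated via feedback.
    output = []
    for unit in plan:
        produced = articulate(unit)        # feedback from articulation
        while produced != unit:            # compensate until it matches
            produced = articulate(unit)
        output.append(produced)
    return output

# Toy "disturbed" articulator: the first attempt at each unit fails,
# so the closed loop has to compensate by retrying.
attempts = {}
def noisy_articulate(unit):
    attempts[unit] = attempts.get(unit, 0) + 1
    return unit if attempts[unit] > 1 else "?"
```

The open loop gets anticipation for free, since the whole plan is visible, but has no way to react to a disturbance; the closed loop is the mirror image, which matches the pattern of evidence described above.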