And the tube was the main electronic building block for executing instructions, the typical instructions. It was so power-consuming that the machine had something like a chimney for blowing out the heat; that is how much electricity it consumed. Thanks to the great experience gained in building this computer, a brand-new computing center for that period was built, using IBM computers. This is the 65. And this institute, which originally was also my institute, was equipped with great computers for that time. The main line of IBM computers was the 370; that was a very powerful computer for that period. It was typical of that period to have a center of computation, and this is a typical center of that era. And this is the TAU2, the audio terminal, the machine for producing music; I am going to explain what it was for. By the way, that's me in the photograph, when I was very young. And this is the group with Maestro Grossi. These are Graziano Bertini and Clementi, who physically built this apparatus under the direction of the researchers of that time. And that's me in '78 with another colleague; we were very young students. And this was very, very spare technology. Maestro Grossi, coming from a background in electronic music, wanted to hear music just as with a normal synthesizer. But as I am going to explain now, it was impossible to get music out of a computer right away at that time. And this was his motto: "I want to listen to music just after pressing the return key." At that time the mouse had not yet been invented. So his philosophy involved the transcription of scores and music, self-generating algorithmic music, and remote music; these are the three topics I am going to tell you about. And so a library of musical scores, Paganini, Stravinsky, and Delbac, Scott Joplin, many different authors of the tradition, was coded and then executed.
But this is "digital versus analog" in the sense of going back through the years, back through history. We are now used to the digital approach, digital electronics, in making music; but the only way to go at that time was analog. You know what this means; I do not have to explain it to you. But if we go back to the '60s and '70s, we have to realize that the power of computers was very, very low, even compared with a small laptop of today. You have to consider that the typical characteristics of that period were a clock of 1 to 5 megahertz, RAM of up to 1 megabyte on a big, large computer, and mass storage of not more than 20 to 60 megabytes on big reels of tape. That was the typical power you had using a big computer. But I don't know if you know that digital audio technology was born not in the audio field, but in telephony. And it is interesting to take a short, rapid look at what happened during the '50s and '60s, because the crucial man in this field, for what concerns computer music, came from telephony. You know that in a small town, for connecting people, there was the switchboard, managed by operators who plugged in the cords, putting one person in contact with the other, and so on; you have seen something like that in many films. But as the telephone network grew and grew, what is called automatic switching was built, using electromechanical devices. And when the population of telephones grew into the millions, even electromechanics was no longer usable. So the way a computer addresses memory was a good suggestion for selecting among many, many different telephone users. This made a conversion from analog to digital mandatory, because it is easy to understand how to select a memory cell in a computer, but first you have to transform the analog signal into a digital one.
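For readers following along, the analog-to-digital step described above can be illustrated with a minimal sketch of pulse-code sampling. This is an illustration of the general principle, not the actual Bell Labs implementation; the function name and parameters are mine.

```python
import math

def sample_and_quantize(signal, duration_s, rate_hz, bits):
    """Sample a continuous-time signal and quantize it to signed integer PCM codes."""
    n = int(duration_s * rate_hz)
    levels = 2 ** (bits - 1) - 1          # max positive code, e.g. 127 for 8-bit
    samples = []
    for i in range(n):
        t = i / rate_hz                    # sampling instant
        x = signal(t)                      # analog value, assumed in [-1.0, 1.0]
        samples.append(round(x * levels))  # quantize to an integer code
    return samples

# A 1 kHz sine sampled at 8 kHz, the classic telephony rate, with 8-bit codes.
pcm = sample_and_quantize(lambda t: math.sin(2 * math.pi * 1000 * t), 0.001, 8000, 8)
```

Once the signal is a stream of integers like `pcm`, it can be stored in, and routed through, computer memory like any other data, which is exactly what made digital switching attractive.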
The man who was busy with this at that moment at Bell Telephone in the US was Max Mathews, a young engineer who had the task of finding a solution for turning analog telephony into digital telephony. But precisely because he was also a musician, he was thinking: why not use a computer to synthesize music? In this chain, we drop the first part and use the computer as a synthesizer. And for the first time, during the '60s, Max Mathews wrote a series of programs called Music I, Music II, Music IV, up to Music V, having in mind that building a software synthesizer had a number of advantages, which I have illustrated here. But at that moment, having a signal of musical relevance meant having a sampling rate of at least about 33 kilohertz. And because the computing power of that moment was around 1 megahertz of clock, for computing one sample before the previous sample was emitted by the digital-to-analog converter, you had only about 32 microseconds. This was not enough for synthesizing a signal of musical meaning, not enough to justify substituting an analog synthesizer with a computer synthesizer. So the strategy of non-real-time music was chosen. Non-real-time music was offline music. And the software he developed, the Music I, Music II, up to Music V programs I mentioned before, used the mass storage of that moment, typically tapes, to accumulate the samples computed by the program, by your composition in some way. This is well described in a classic book from which many people studied computer music during the '60s and '70s; you can download it from the web, it is free now. And in this book Max Mathews explains what I think is the central philosophy of how the Music V program worked: numbers stored in the computer memory are successively transferred to a digital-to-analog converter. The key word is "successively," because each sample was stored on the tape reel, and then the tape was rewound.
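The 32-microsecond budget mentioned above follows directly from the clock speed and the sampling rate. A small back-of-the-envelope calculation (the numbers are the round figures from the talk):

```python
# With a ~1 MHz clock, each machine cycle lasts about one microsecond,
# so the cycles available per sample are clock_hz / sample_rate_hz.
clock_hz = 1_000_000        # ~1 MHz CPU of the late 1960s
sample_rate_hz = 31_250     # one sample every 32 microseconds

seconds_per_sample = 1 / sample_rate_hz
cycles_per_sample = clock_hz // sample_rate_hz   # instructions you can afford

# Roughly 32 cycles per sample is far too few to evaluate oscillators,
# envelopes and filters, which is why Music V rendered offline to tape.
```

By contrast, a modern CPU at a few gigahertz has tens of thousands of cycles per sample at 44.1 kHz, which is why real-time software synthesis is routine today.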
And the tape was read, digitally speaking, and converted into an audio signal. This is non-real-time music, and this is a typical Music V composition. You can recognize that Csound, and I think you know what Csound is, comes from Music V. You can recognize the logic: the typical configuration of a Csound composition is the orchestra and the score parts, where each instrument is defined using lines that describe oscillators, envelopes, shapers, filters, noise generators, and so on. And this is the typical page from that book, describing an instrument graphically. You have used Csound, I suppose, no? So you know what we are speaking about. Music V was developed in the Fortran language. Fortran was the counterpart of the C language of our days; at that moment, Fortran was the only scientific programming language, because it was very simple but very powerful in describing the typical operations of programming. But once again, we have to look at the characteristics of the magnetic tapes of that moment. You have to consider that a magnetic tape drive was a very sophisticated machine at that time, very rapid in rewinding, stopping, and searching for a particular section of the tape. On a tape like that, a reel of that size, no more than about four to twelve minutes of music could be stored as the result of the computation; in any case, about twelve minutes, at a density of 6,000 bytes per inch. So the question was: how long do you have to wait? You wrote the program, the Music V composition, the same as a Csound composition today; how long do you wait? It depended, at that time, on the number of instruments you had defined in your composition, the complexity of each instrument, and the length of the score, of the composition. In simple cases: have a coffee break and come back.
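The orchestra-and-score, render-then-listen model described above can be sketched in a few lines. This is a toy reconstruction of the idea, not Music V's actual code: one "instrument" (a sine oscillator with a linear decay envelope) renders a coded score, sample by sample, into a buffer that stands in for the tape.

```python
import math

def render_score(score, rate=8000):
    """Offline, Music V-style rendering: every note is computed in full and
    appended to a growing buffer (the 'tape'); nothing is heard until the end."""
    tape = []
    for freq, dur, amp in score:               # score: (Hz, seconds, 0..1) triples
        n = int(dur * rate)
        for i in range(n):
            env = 1.0 - i / n                  # simple linear decay envelope
            tape.append(amp * env * math.sin(2 * math.pi * freq * i / rate))
    return tape

# A three-note score, coded as (frequency, duration, amplitude) triples.
tape = render_score([(440.0, 0.01, 0.5), (550.0, 0.01, 0.5), (660.0, 0.02, 0.5)])
```

The waiting times in the talk come precisely from this structure: the cost grows with the number of notes and the complexity of each instrument's per-sample computation.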
But for a more complicated, more sophisticated composition: go to lunch and come back after lunch. And if you had a very big composition, for many instruments, lasting ten or fifteen minutes: go to sleep and come back tomorrow morning. So this was a very big problem. But having a software synthesizer, a software system for composing music, was very fascinating at that moment, because of its advantages. You know, if you are using a classical Moog synthesizer, once you have set up a patch, it is very difficult to reproduce the exact combination of plugs and knobs that define the frequency, the filter cutoff frequency, and so on. It is very difficult to reproduce the same situation you produced one day, or one week, or one month before. With a program, instead, once you have set up the different parameters, tomorrow the computer gives you the same result as yesterday. And it was also very fascinating to have many different synthesis algorithms, because, following the experience of Max Mathews, the really new thing was the possibility to experiment with new ways of producing timbres, different timbres. With the analog synthesizer, you know, beyond the square wave and filtering, filtered noise, you had nothing more. So with Music V a new season started: the search for new ways of producing new timbres. The first experiments with the well-known FM synthesis by Chowning were made using the Music V program. You know that Mr. Chowning was a very young musician in that period. I don't know if you know the short story between him and Max Mathews. Chowning said, "Maybe I have invented a new way of producing sounds with computers." Max Mathews said, "Oh, good. Patent it. At once, patent it." And you know that, after the FM synthesis patent, Yamaha purchased the license for building the first FM synthesizer, the DX7.
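Chowning's FM technique mentioned above fits in one formula, y(t) = sin(2*pi*fc*t + I*sin(2*pi*fm*t)): the modulator sweeps the carrier's phase, and a single oscillator pair already produces rich sideband spectra. A minimal sketch (function name and parameters are mine, not Chowning's):

```python
import math

def fm_tone(carrier_hz, mod_hz, index, dur_s, rate=16000):
    """Simple FM synthesis: the modulation index controls how far the
    modulator deviates the carrier's phase, and hence the brightness."""
    out = []
    for i in range(int(dur_s * rate)):
        t = i / rate
        phase = 2 * math.pi * carrier_hz * t + index * math.sin(2 * math.pi * mod_hz * t)
        out.append(math.sin(phase))
    return out

# Index 0 degenerates to a plain sine; raising it spreads energy into sidebands.
plain = fm_tone(440.0, 110.0, 0.0, 0.01)
bright = fm_tone(440.0, 110.0, 5.0, 0.01)
```

The economy of the method, two oscillators for a complex spectrum, is what made it so attractive for hardware like the DX7.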
And they purchased the license, if I remember well, for about one or two million dollars. And with that money John Chowning built the mythical CCRMA institute at Stanford. So: patent it. And the first experiments were made using offline synthesis. I don't know if you know this story: in the film 2001: A Space Odyssey, in the crucial sequence when HAL, the computer, loses his memory (have you seen that film?), the music was produced using the Music program of Max Mathews. And I think this is a clip you young people have seen many times. So, coming back to Italy: Maestro Grossi, knowing all of those experiences, had in mind a system for producing music in real time, because he was much more interested, not in timbre, but, as we will see very soon, in music in the sense of notes, the composition of notes, of events. He also had great experience with electronic music. So, coming back to our two towns, his town and my town, Pisa: in contrast to the approach of Max Mathews, where the experience was focused on sound processing, Grossi focused his attention on event processing. The timbre does not matter; "I want to process music in the sense of the traditional composer, who composes music by putting notes on the staff." And, given the power of the computers of that moment, he could choose the real-time approach. So he developed, using the Fortran language I mentioned before, a program called DCMP, the Digital Computer Music Program, whose three basic features were: the transcription of current music, with a simple transcription of frequency and duration, frequency and duration, couple after couple; and then, this is the main part, the manipulation and transformation of the coded text.
And then, self-generating algorithms for producing music using many different kinds of approaches: combinatorial calculus, Markov chains, and so on. So the main thing was the manipulation of sequences of couples of information, frequency and duration, nothing more than that. From an original sequence you could have, for example, a backward execution, an inversion of some kind, and you could modify in many different ways the array where the notes were coded: stretching, shrinking, completing it in different ways. Or you could write a program that, with a CREATE command with many different parameters, generated blocks of information at random, between some limits, within a given range. This is part of the original program that performed this; it is the original, written by Maestro Grossi at that moment. And these were the typical commands with which a coded score could be transformed in different ways. So the first approach, at the end of the '60s and in the first years of the '70s, was to use a single voice with a square wave, in real time this time: think of using bit zero of a register of a big computer, a computer that cost millions of dollars at that time, only one bit, switching between 0 and 1 and computing the time between one switch and the other. This was the main routine for producing music. And then, with a delay in a loop, you could get different pitches, just as in a Christmas gadget, something like that, on a big computer worth a million dollars of that time. In any case, using a very sophisticated program, he also produced this disk; it was the end of the '60s, or no, it was '71 or '72, I don't remember. And it is the sound of a computer. This is a two-minute example, where you can realize how the program transforms the music. The results of this experience were very well received in the Institute; there were also many interviews in the specialized magazines, and broadcasts on radio and television.
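The event-processing operations described above, backward execution, inversion, stretching, and the random CREATE command, act on nothing more than a list of (frequency, duration) couples. A hypothetical reconstruction of that style of manipulation (the function names mirror the talk's descriptions, not DCMP's actual command names):

```python
import random

# A coded melody is just a list of (frequency_hz, duration_s) couples.
melody = [(440.0, 0.25), (494.0, 0.25), (523.0, 0.5), (587.0, 0.25)]

def retrograde(seq):
    """Backward execution: play the couples in reverse order."""
    return list(reversed(seq))

def inversion(seq, axis_hz):
    """Mirror every frequency around an axis (a simple linear mirror here)."""
    return [(2 * axis_hz - f, d) for f, d in seq]

def stretch(seq, factor):
    """Stretch or shrink all durations by a common factor."""
    return [(f, d * factor) for f, d in seq]

def create(n, f_lo, f_hi, d_lo, d_hi, seed=0):
    """CREATE-style command: generate n random couples within given ranges."""
    rng = random.Random(seed)
    return [(rng.uniform(f_lo, f_hi), rng.uniform(d_lo, d_hi)) for _ in range(n)]

backward = retrograde(melody)
mirrored = inversion(melody, 494.0)
slower = stretch(melody, 2.0)
generated = create(8, 220.0, 880.0, 0.1, 0.5)
```

Because the data is only pitch and time, never audio samples, these transformations were cheap enough to run in real time even on the machines of that period.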
It was a very big success for that time. But the single-voice square wave was not enough for producing significant electronic music. So the Institute decided to build an audio terminal, in order to reach polyphony, to reach real-time music, a novelty for that moment, as we have seen. What do these two words, "audio terminal," mean? "Terminal" and "time sharing" are very obsolete terminology now. "Terminal" because you have to consider that at that moment computers were typically what was called a mainframe: using communication channels, access to the computer was through terminals, teletypewriters. Everything was a terminal, everything was connected to the main computer: disks, tapes, printers, card readers, terminals; in the sense of the big mother with many children linked to her. So a special device for producing music was also a terminal. "Time sharing" means that the CPU time of the computer was shared among the different users: for example, one hundredth of a second to you, one to you, one to you. But if you think about it, typing one line may take you half a minute, a minute. In that moment you think you are linked to the computer, but the computer is not linked to you: it is working for other people. So the time during which it is not connected to you, while you think you are connected, is much longer. In this way it was possible, at that moment, for up to 100 or 200 people to be connected, each working with a phantom computer, because the time in which the computer was busy for the others, and not for you, was much longer. This was the time-sharing approach, and it was adopted because the cost of the computer was very, very high.
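The time-sharing idea just described, each user getting a small slice of CPU in turn, can be sketched as a round-robin scheduler. This is a conceptual toy, assuming jobs measured in abstract work units, not a model of the actual mainframe's scheduler:

```python
from collections import deque

def time_share(jobs, quantum):
    """Round-robin time sharing: each job gets a fixed CPU quantum in turn,
    so every user 'thinks' the machine is theirs while it serves all of them."""
    queue = deque(jobs)          # (name, remaining_work) pairs
    timeline = []                # which job held the CPU in each slice
    while queue:
        name, remaining = queue.popleft()
        timeline.append(name)
        remaining -= quantum
        if remaining > 0:
            queue.append((name, remaining))   # back to the end of the line
    return timeline

# Three users share one CPU; slices interleave rather than run to completion.
order = time_share([("A", 3), ("B", 1), ("C", 2)], quantum=1)
```

Because human typing is so slow compared with the quantum, each user's idle time is easily filled by the others, which is exactly why hundreds of people could share one expensive machine.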
So the idea of having a special device that produces sound while many other users are using the same computer meant linking this device through special channels, and splitting it into two parts: the digital control for communication with the computer, and the synthesizer itself, which at that moment was an analog synthesizer. Unfortunately, this synthesizer was for a time abandoned in a storeroom; but we are now trying to recover it and put it in a museum, because it is a very important piece of technology on which many people worked at that time. This is the digital part; this is the analog part for synthesizing; these are the power supplies, because it was very power-consuming at that moment. This part is for controlling the different banks of oscillators. Basically, it is additive synthesis, nothing else: additive synthesis. This is the control panel. And this is how it was installed in the part of the institute where we used to go for music. The main characteristic of the TAU2 (TAU2 because before it there was an experiment, a TAU1, not linked to the computer but a standalone synthesizer) is that it is a hybrid device: analog synthesis and digital control. It has 256 analog oscillators, spaced at a distance of one third of a semitone, that is, one sixth of a tempered tone, with a range from 32 up to about 4,000 hertz; there was also an extension, but it is too complicated to describe. In any case, the main thing is the oscillator: each one could be controlled with eight different amplitude levels, from 0 to 7. So for each oscillator it was possible to simulate an envelope, and in some way this was sophisticated enough to produce real additive synthesis, controlling the envelope shape of each oscillator. So it was possible to define an instrument, "instrument" in the classical Csound terminology, but only and exclusively with additive synthesis.
And it was possible to group a configuration of oscillators to define an instrument, with up to 12 instruments at the same time. But the array of 256 oscillators was shared among all 12 voices, so it was also possible for some oscillators to overlap with others; this, however, was not a big problem. The update time was one hundredth of a second for the whole configuration, so the envelope shaping worked well enough. So, from the original DCMP program, which worked with a single square-wave voice on the big computer, the TAUMUS program was developed, with the same philosophy, but now with timbre control and polyphony. This is the typical configuration: from a score, you put the notes into the computer, and they are translated into a stream of coded frequency-duration information, which could then be modified in different ways as coded information. This was a MIDI ante litteram, because the information from the computer was coded in some way and transmitted to the TAU2, where the information for each oscillator was sent to the digital communication part and translated into the actual control of the array of oscillators. This worked at a speed of 100 updates per second, so the control was accurate enough as far as timing was concerned. These are some writings of the Maestro from when there was nothing: he was living in Florence, and every morning he came to Pisa by train, one hour each way, and during the trip he was also correcting programs, inventing programs. Another activity was to invent what we would now call telematic music. He made many demos involving a great many people. And you, Serra: the Serra is a mountain just behind Pisa, I have only just noticed. It is a very big mountain carrying the broadcasting antennas of the television, a whole forest of antennas; and down in Pisa there was the RAI, the national broadcaster of Italy.
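The hybrid scheme just described, digital control updating an additive voice every hundredth of a second through eight discrete amplitude levels, can be sketched as follows. This is my own illustrative model of the TAU2's stepped envelopes, assuming a pure sine per oscillator; the real device's circuitry is of course not reproduced here.

```python
import math

def tau2_voice(freq_hz, levels, rate=8000, update_s=0.01):
    """One additive voice, TAU2-style: the amplitude steps through discrete
    levels 0..7, refreshed by the digital control every 10 ms."""
    samples_per_update = int(update_s * rate)
    out = []
    for step, level in enumerate(levels):      # one level per control update
        amp = level / 7.0                      # 8 levels -> 0.0 .. 1.0
        for i in range(samples_per_update):
            t = (step * samples_per_update + i) / rate
            out.append(amp * math.sin(2 * math.pi * freq_hz * t))
    return out

# A coarse attack-decay envelope: levels 0,3,7,7,5,2,0 over 70 ms.
voice = tau2_voice(440.0, [0, 3, 7, 7, 5, 2, 0])
```

Summing many such voices at different frequencies gives the additive synthesis the talk describes; the 10 ms update rate is coarse for audio, but, as noted, fine enough for envelope shaping.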
At that moment, only the RAI existed. So the RAI technicians came to Pisa with a parabolic antenna, transmitting the audio signal from the institute; and then, I don't know exactly how, RAI technicians in another town, for example Rome, Naples, or Milan, captured the signal, and the audio signal was played, for example in a building like that, to produce the sound. And using a typical modem of that moment, it was possible to communicate with the central computer. A very complicated approach, but in any case it worked. So it was possible to hold demos and concerts completely off-site, in another location that was not Pisa. And every time he gave one of these demos he claimed: "In the near future, music will be transferred over the network." It was a very visionary idea for that moment. Once again, I now give a very short example of music produced by the TAU2; this evening, in the concert, there will be a much longer one, using all the disks, a copy of which you also have. The nineties: this is the conclusion. As computers grew and grew in power, he retired at home. No: in the first period he went back to the conservatory, using a laptop computer. But he also started a painting activity, using small computers like the Commodore 64 and many others, applying the same approach used for composing music to composing drawings: algorithmically generated drawings. So the same program, with some parameters changed, produces different drawings. And that's for you. That's for you: the same, but different. That's for you: a homage from Maestro Grossi. He produced many different kinds of programs using the BASIC language. So I think this concludes his artistic activity: from a musician to, in the last part of his life, a cybernetic painter, something like that. So, once again, the historical photograph. And after that, new topics were pursued as research by myself and the other colleagues who remained in Pisa.
This is the basis. In 1991 this new activity started: man-machine interaction, live performance. Many people from all over the world came to this event, and many ideas started there. And so the new activity of producing gestural interfaces was initiated. But this is another story. This evening, in the concert, I shall use two basic gesture-recognition interfaces that I and my colleagues at the Institute developed. Very briefly, in half a minute: they use infrared beams and real-time image processing. This one is a tablet where the infrared beams (it is much more complicated now than what I shall use this evening) form many beams, making it possible to detect the height and rotation of the hands. And the other one can recognize the position of the hands in space, and also the dimensions of the frame, so it is possible to obtain the angle of the hands. So with your bare hands it is possible to convey, and for the system to detect, a lot of information. It uses the webcam of the computer, so it is also very quick to set up. The program performs the recognition part, so everything is integrated into the program that controls the synthesized music. This is the typical situation I shall use this evening. I think that's all. OK, thank you very much.