Thanks for coming. I'll keep you entertained with some discussion of music technology and some music that's been made with it. And welcome again to c-base. By the way, if you haven't been on the tour, I really recommend going on the tour. It's really something else.

I'm Edgar Berdahl. I'm an assistant professor at Louisiana State University, but I did my postdoctoral studies here at the TU Berlin, and I had some master's students there. So I'll be presenting this now, and they'll be presenting a few related things later; it all ties together around the topic of open source haptics for music. Really, I would just say that I've been inspired by all the open source work that you all have been doing, and so this is part of my and our efforts to try to contribute back what we're able to while we're doing research, and music, of course.

So the title is "Open Source Haptics for Music," and it's basically separated into two sections. Can you hear me okay? Maybe it's very compressed; I guess that's okay. The first part is called haptics and force feedback, and the second part is called physical modeling. I tried to organize things as separately as I could. Along the way, I'll talk a little bit about some music compositions using these technologies, and about some of the things these technologies enable, which will be revisited with each composition: some ways of creating timbres that sound uncannily familiar but are nonetheless new, which is exciting if you're composing music; generating high-fidelity and highly immersive sound, that is, sound for many channels.
I'll show an example of a physical model I used to create a piece for 62 loudspeakers; new touch controllers for enabling fundamentally new interactions; algorithmic ways of generating music; enabling more accurate performance of musical gestures by using haptics to provide touch cues; designing instruments that are fundamentally possible but would be very inconvenient to build without electronics, which is something else this technology enables; and enabling even the performance of musical gestures that would otherwise be impossible, or certainly very difficult, such as very fast drum rolls. I'll revisit those here and there throughout the presentation. And by the way, please feel free to stop me or interrupt if you have a question about something technical, because I didn't plan to go into a lot of technical detail, but if anyone is very interested in something, I can always talk more about it.

So the first section is on haptics and force feedback. Why do we even care, actually, in the audio community? That's a good question. I guess we've known for quite a while that we can create any perceivable sound using digital sound synthesis, or at least that can certainly be argued, and it has been argued since the 60s, if not earlier. But I think there's a good question of how to control sounds, because there are so many of these possible sounds; how do we use them to create music?
I like to draw this diagram of a person interacting with a musical instrument, because this is the way we usually think about it: there is a performer who is providing a mechanical excitation to a musical instrument, which could be a computer, and receiving auditory feedback. That's sort of the minimum of what you need. But it's also really useful, if you're interacting with an instrument or an interface, to have some sort of visual feedback. It's nice to look at the keys on a piano before you put your hands down on them. But I think haptic feedback, the feedback having to do with the sense of touch, is something that's missing in a lot of user interfaces. It's been there in traditional acoustic instruments: you feel the vibrations of the sound, which enables you to control it more precisely and intimately, and you feel that with a very low latency, which also helps you control a musical sound very precisely in live performance.

There are other benefits to haptic feedback that I think we sometimes forget about. One of them is that we have so many different haptic receptors all over our body. We may want to look at the piano keys before we put our hands on them, but we can also feel where the keys are, and that can help us find them if we're looking at a score. We don't always want to use the visual channel while we're performing; we might want to look at an audience instead, for example, or a musical score or a conductor. Also, one other benefit of feedback having to do with the sense of touch when interacting with interfaces is that the human reaction time can be faster for haptic feedback than for any of the other feedback channels, which is to say that you can respond faster to a haptic stimulus than to a visual or auditory stimulus if you're just
sitting there waiting for it; psychologists run tests like that. And of course the brain is also important, so I like to draw the brain too and show where the feedback loop is, because there are more feedback loops inside the brain and the human body. The human motor control system is actually distributed to some extent, so you even use neurons throughout the body a little bit. I didn't put them in here, but it's fun to talk about them. Anyway, I just wanted to show this slide to motivate why this is interesting.

[Audience question about why the haptic reaction time can be faster.] It has to do with human biology; that's part of it. Because we have these neurons distributed through the body, whose parameters can be adjusted by the brain, a response can go up just part of the arm, maybe to the spinal column, and then return before going all the way up to the brain. That's one reason that I know of. Other questions? [Another audience question.] I would imagine so, but I don't know; I'm not a neuroscientist at all, really. I'm just interested in some of these things. Yes, I can put this on and carry the microphone around. Great, thank you.

All right, so there are lots of devices out there that have haptic response of all sorts of different kinds. There are game controllers that have motors in them that provide vibrational responses. There are mice and joysticks. Just about all phones have vibrotactile feedback in them. I listed the Moog Guitar here.
Maybe that's a bit of a stretch, but it has audio feedback into the strings. That's a hard disk that's been repurposed to provide haptic feedback. The remote has vibrational feedback. Then there are robotic arms that sense the position of an end effector in real time and provide feedback very precisely. Some of these are open and some of these aren't, but there are nice people out there who have helped open them, so thank you for that.

If you're interested in this stuff, you might want to look at this git repo. It's sort of where I keep everything that I can. It's not the easiest place to find things, because there are so many various things in it; the other ones will be more clear. But anyway, if you're looking for a driver for some of these devices, you might find it in there. For instance, the Novint Falcon, a game controller with three degrees of freedom, was hacked by a colleague of mine in Berkeley, which eventually caused the designer to release a driver for it that they otherwise wouldn't have released. So now there are drivers for this, and we used this device after getting access to it through the source.
We used it. Is there a microphone going in and out, or is that just me hearing that? It's only the loudspeaker; I'm just wondering if this is getting streamed out on the internet sounding weird.

Anyway, using this, I wrote a piece called "When the Robots Get Loose," because I wanted to experiment with the idea of a human user remotely controlling some robots playing percussion instruments. A lot of the musical robotics going on these days is in one axis, because that's a lot easier, but with this system it was easier to do things in three dimensions, and so that's what we experimented with here. Later I'll come back to physical modeling in more depth, but the way this piece was organized was that the computer simulated virtual springs: there was a virtual spring between this device and whichever one of these instruments was enabled. Live, the performer records loops, but where a lot of loop-based music records the sound, in this piece we instead record the trajectory of the motion of the gesture, and use that to reproduce it, because it sounds interesting if you speed up or slow down the recorded gestures when they're played into these musical instruments. This one is a tambourine, that one is a snare drum, and that one is a shaker, and there's another shaker that you can't see.

Let me try to play that. JACK and ALSA are fighting a little bit, or maybe it's PulseAudio; I should have removed PulseAudio from this machine, and I don't recall if I did or not. Anyway, I'm just going to play some excerpts from it. It starts out kind of simply, as you can imagine, and by the end it builds up more. I think you get the idea, basically: when you speed up the gestures, you get this kind of interesting sound of someone playing an instrument really fast.
That's very hard or impossible to do otherwise.

[Audience question, which I'll repeat: Per instrument, per gesture, how many samples do you need? I would believe that you control the excitation, the velocity of the instrument, somehow, and then for different velocities you need different samples, recorded like a tambourine hit softly and a tambourine hit hard. Or do you take one sample and just tune the volume according to the velocity?]

Well, I admit that when I wrote this piece I was thinking mainly about the shakers, so that I would create a pattern or a loop for one of the shakers and get it to play that single loop. Actually, I only recorded one loop for each of the instruments. That's part of why the piece starts out simply: the performer has to record each of the loops live as the piece is starting. Then, as the piece progresses, you play the loops back faster or more slowly, or with a different spring stiffness inside the virtual model, to transform the sound. Each of those samples is a three-channel wave file, because it has the x, y, and z position of the instrument.

[Audience: I see, that's what I got wrong. You're not recording the audio, only the gestures.] Right. Okay, thank you.
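Since the loops store trajectories rather than audio, playing one back faster or slower amounts to resampling a three-channel position signal. Here is a minimal sketch of that idea; the array layout and control rate are my own illustrative assumptions, not the actual patch used in the piece.

```python
import numpy as np

def resample_gesture(gesture, speed):
    """Play back a recorded gesture loop at a different speed by resampling
    its trajectory. `gesture` is an (N, 3) array of x, y, z positions;
    speed > 1 shortens the loop (faster), speed < 1 stretches it (slower)."""
    n = gesture.shape[0]
    m = max(2, int(round(n / speed)))  # new loop length in control samples
    t_old = np.linspace(0.0, 1.0, n)
    t_new = np.linspace(0.0, 1.0, m)
    # Interpolate each spatial axis independently.
    return np.stack([np.interp(t_new, t_old, gesture[:, k]) for k in range(3)],
                    axis=1)

# A toy circular shaking gesture, recorded as 100 control samples.
t = np.linspace(0.0, 2.0 * np.pi, 100)
loop = np.stack([np.sin(t), np.cos(t), np.zeros_like(t)], axis=1)
fast = resample_gesture(loop, 2.0)  # same path, half the samples: twice as fast
```

The path through space is unchanged; only the rate at which the robot traverses it changes, which is why the sped-up gestures still sound like a person playing, just impossibly fast.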
Thanks. There would be lots of different ways to do that, but this was an exploration of a way to use the driver for the Falcon.

The next step was to make a device that was easier to reconfigure. It was not easy to take the Falcon apart and put it inside other projects, so even though the source was opened for it, it wasn't really possible to make anything new out of it. So we decided to go back to first principles and try to create a completely open-source-hardware force-feedback haptic device to use in these projects. That was the FireFader project, and you can see a picture of it right there: those are two faders from a mixing console that have motors attached to them, and that's this device here. The workshop tomorrow, at I think 10 a.m., will be using these, so if you want to try one out, you can try one tomorrow morning.

Other devices that are supported in this GitHub repo include Bill Verplank's device called the Plank, which is a repurposed hard disk drive. If you're very hardcore, like Bill Verplank or his colleagues in Copenhagen, you can take a hard drive, cut it open very carefully to expose the DC motor, which is a nice motor, and add a position sensor to it, and use this to build a device with very low friction and low mass, which is a nice device. So that's also supported, if you're interested. But we've been using the faders because they're the least expensive, so it's easy to incorporate them into lots of things, and also because they're hard to break, which is one reason we use them in a lot of things.

Here's a device that has eight of them inside of it; they're oriented kind of like a mixing console with force feedback for music. This one is called the Haptic Hand, and you can use four of them vertically.
Dennis Huber will be telling you about this this afternoon. This one is an embedded instrument with a Raspberry Pi inside and a MIDI keyboard that are used to control the sound, along with two force-feedback faders. And this is a student project that also has one of the faders inside of it; it was one of these early prosthetic arm devices that was low-cost and open source. That's the benefit of having an open design: it's easy for it to be incorporated into lots of things. I go around letting people take them for a test drive at conferences, and in various labs people have rebuilt the device too, because we tried to make it an easy device to reproduce.

So I'll tell you about a piece that I wrote for these faders. It's called "Transmogrified Strings," and the idea was to start the piece with some traditional plucked string sounds, so you pluck a string and you can feel it, and then you can interact with it in various ways. As the piece progresses, the strings become more and more strange, so the timbre still resembles a string, but it transforms more and more until it becomes very different. Let me play an excerpt of that.

That at the end is what a string sound would sound like if you could tune it to half a Hertz; actually, there were four of those. It would be a very strange string, because it would actually feel more like friction when you're interacting with it, because you have to wait so long for the wave to come back and hit you again. That's why the force-feedback interaction with it is quite strange, and the sounds that you get are very interesting.
You get these transients right when you're touching it and releasing it. Before that, you were hearing sounds of the strings as their pitch was being automated rapidly. There was a point before this where I was using sawtooth waves to vary the pitch of the strings, but at that point it was more randomly adjusted: band-limited random noise adjusting the length of a string, which you could build in real life, but it would be very difficult. [Audience: What was the haptic feedback here? You were feeling the strings vibrating?] Yes. And that's the score.

So here's the virtual model that was calculating the sound and the force feedback. You have a very long string that just goes off as long as you need, and you have a slide that's moving back and forth, controlled by a computer program, and you use a plectrum to pluck it right here. You're just hearing this half of the string; you're not hearing the part of the string on the other side of the slide that's changing its length. But anyway, that's the physics controlling that model.

You'll have to try it to maybe agree with this, but I think the haptics in this case enable the performance of more accurate musical gestures, because as you're moving the slider, you can actually feel the strings. There's a point where there's a zither with 20 strings that you're playing, and it's much easier to accurately find those strings if you can feel them. I'll give you a chance to try that out later. I thought the timbres it created were interesting, because they sounded kind of like strings, but not entirely. I was surprised at how much like explosions they sounded, actually.
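The string-with-a-moving-slide model can be hinted at with a much simpler stand-in: a Karplus-Strong-style delay-line string, where shortening the delay line is analogous to the slide shortening the sounding half of the string. This is an illustrative sketch, not the actual model from the piece (which also computes force feedback).

```python
import numpy as np

def plucked_string(delay_len, n_samples, damping=0.996, seed=0):
    """Minimal Karplus-Strong-style string: a delay line (the string) with a
    lossy averaging filter at one end. A shorter delay line means a shorter
    string and a higher pitch, like moving the slide toward the plectrum."""
    rng = np.random.default_rng(seed)
    line = list(rng.uniform(-1.0, 1.0, delay_len))  # pluck: fill with noise
    out = np.empty(n_samples)
    for i in range(n_samples):
        out[i] = line[0]
        # Reflect with loss: average two samples and damp slightly.
        line.append(damping * 0.5 * (line[0] + line[1]))
        line.pop(0)
    return out

sound = plucked_string(delay_len=50, n_samples=2000)  # pitch near fs/50
```

Tuning such a string to half a Hertz would mean a delay line tens of seconds long, which is why the wave takes so long to come back and the interaction starts to feel like friction rather than a vibrating string.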
I didn't anticipate that; I guess that's why we call it experimental music. One thing you can do with this technology is design instruments that are fundamentally possible without electronics but would be very inconvenient to build. You could, of course, build eight of these and have these slides moving back and forth, although I think you would have a very hard time making a slide move back and forth a thousand times a second, as was evident in some of the examples. And maybe it's a bit of a stretch, but maybe you could say you're algorithmically generating music at the end, when the pitches of the strings become so low that the strings are rhythmic. So that was one example of something you can make for the FireFader.

I thought I would tell you a little bit about the open-source hardware, in case you're interested in making your own. You can get an Arduino and plug it into a motor controller like this; this is the Gravitech two-motor nano motor controller. The nice thing about that one is that it fits into a breadboard, so you can literally plug one of these into a breadboard, hook things up on the breadboard however you want, and make something interesting, and the Arduino fits nicely on top of it. In Europe, it might be a little bit easier to use the Arduino-flavored motor controller; there is one that the Arduino Leonardo plugs into, so you could alternatively get an Arduino Leonardo and plug it into that. In either case, the schematic is the same. It's quite easy to hook up, because each of the faders just uses a potentiometer to sense the position: you connect one wire of the potentiometer to ground, you hook one to the analog reference voltage, and the other one goes to an analog input. It also has a capacitive sensing feature, which turned out to be very useful. That's the one other nice thing about these faders: internally,
there's a wire that goes to the knob, so you can hook that up to a digital pin along with a one-megaohm resistor, and then the firmware knows whether you're touching it or not, which is also useful. You can get the firmware from the GitHub repo; you can modify it if you want to change something, or you can just use it as it is. It takes care of everything and sends the data out over a USB serial connection to the computer.

This is the feedback loop of what that's actually doing. Here's the Arduino. It senses the position of the fader, which is an analog voltage that it measures, and converts that to a digital stream that it sends over the USB serial into your laptop. That goes into whatever software you're using, which uses floating-point computations to calculate the sound and the force-feedback signal. That signal goes back over the USB serial connection, through the Arduino, to the motor controller, and becomes a force on the motor via the pulse-width-modulation output of the motor driver. So it's easily adaptable to a lot of different cases.

Inside the repo, if you want to use it inside Pd, there's an object I made that lets you access it, which just uses Pd's comport object to access the serial connection. You can use it within Max if you want, because there's also one for Max, or there's one that uses a Qt-generated application. I won't go into that detail, but basically, this is what the Qt-based driver does; I might come back to that later.

Did anyone have questions about the FireFader? [Audience: What's the sampling frequency of the slider position? Is 50 Hz enough, or does it have to be faster than that to reduce the lag?] It's about one kilohertz. [Audience: And for the motors as well?] I guess it's the same.
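The host side of this loop, read a position, compute a force, send it back, can be sketched with the simplest possible haptic "program": a virtual spring centering the fader. The normalization and constants below are illustrative assumptions, not the actual FireFader driver code.

```python
def spring_force(position, center=0.5, stiffness=2.0, max_force=1.0):
    """Virtual spring pulling the fader toward `center`.
    `position` is the fader position normalized to 0..1; the returned force
    is normalized to -1..1 and clipped to what the motor can actually do.
    The firmware would map this value to a signed PWM duty cycle."""
    f = stiffness * (center - position)
    return max(-max_force, min(max_force, f))  # clip to the motor's range

# One tick of the feedback loop: position in, force out.
force_at_bottom = spring_force(0.0)  # fader at the bottom: pushed up, hard
force_at_rest = spring_force(0.5)    # fader at the center: no force
```

In the real system, this computation runs once per serial packet, at roughly the 1 kHz rate mentioned above, alongside the audio synthesis.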
So this diagram here shows the timing, which is initiated by the software in the computer. Whenever the application in the computer sends a new force over the USB, as soon as that's received, the firmware immediately sends it to the motor, then samples the analog-to-digital converter and immediately sends the position back over the serial connection. That turned out to be an easy way to synchronize the driver, because I found it troublesome if it was sending and receiving at two different rates; by doing it that way, I kept them relatively synchronized. The Arduino was strong enough to do this. I won't say that it's perfect, but I've put a low-pass filter in the firmware. Actually, the bigger issue was that the analog input pins are not super accurate, but if you have a digital low-pass filter, you can compensate for that to some extent. [Cool, thanks.] And if you're interested, and if you have a motor with a position sensor, bring it tomorrow and we can try to hook it up.

[Audience comment: Not so much a question, but are you aware that with the Arduino you can change the resolution of the analog input? You can then actually get a faster response time out of it, and a bit less jitter, because you're using a lower resolution.]
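The digital low-pass filter mentioned for cleaning up the noisy analog reads is presumably something like a one-pole smoother; here is an illustrative sketch, with a coefficient I chose myself (the actual firmware coefficient isn't given in the talk):

```python
def smooth(samples, alpha=0.1):
    """One-pole low-pass filter: y[n] = y[n-1] + alpha * (x[n] - y[n-1]).
    Smaller alpha means heavier smoothing; this is cheap enough to run
    at the firmware's ~1 kHz position update rate."""
    y = samples[0]
    out = []
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

# Noisy readings of a steady fader position: the output hugs the true value.
filtered = smooth([10.0, 10.4, 9.6, 10.2, 9.8, 10.0])
```

This kind of filter works better the faster fresh samples arrive, which connects to the next point about raising the sampling rate.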
Regarding the analog resolution: I did change that. The firmware does change the timer that adjusts the maximum sampling rate, and that's one of the reasons why the low-pass filter works better: it receives data at a high rate to work with. So that is how I addressed that issue. And if you're curious, come try it yourself and see what you think about how it feels.

Another piece written for the FireFader was for the Laptop Orchestra of Louisiana, a piece called "Throw," which is a network piece for eight performers. Each performer has a FireFader, and this was the first piece for more than two performers where the performers interact with the same objects, because basically, when you throw an object off the screen, it lands, depending on the settings, on someone else's fader. That was part of the concept of the composition: when you throw something away, it actually goes somewhere else, which is easy to forget, I guess, in our Western society. I apologize for showing a Max patch at this conference; this piece is written in Max. I should have written it in Pd, but I didn't.

[Host: Edgar, there's a question from IRC that I've only just picked up on. Robin Gareus asks: what is the round-trip latency for the FireFader?] A few milliseconds. It also depends what software it's communicating with on the computer, but that's what it usually is; definitely less than 10. Ten would not feel very good. I only measured it once with a scope, but I actually measured the analog signals to check for sure, using Max, and I was getting about three milliseconds at that time. You can't get much lower than that anyway with a normal operating system, much less over an easy-to-use serial interface like USB.

Here I've plotted the way the sound synthesis works in the piece, but maybe I should show the video first; it will be clearer, I think.

So you may have noticed I was cheating a little bit by changing gravity.
I guess the most rigorous way of composing music for physical modeling, you might say, is that you're not allowed to break the rules of physics, but sometimes I do. You heard a sound every time one of the masses was sitting on top of one of the devices; actually, I guess you couldn't see that. It's probably fairly obvious that when the performers move the faders, the faders on the screen move up and down. This shows how that works: if you imagine that the blue line is the position of the device, and the dotted black line shows a person throwing a mass up and down, the difference between those curves is proportional to the force, and that curve is then used as an amplitude envelope for synthesizing the sound. So that's where the sound came from in that piece. It's an interesting way of thinking about physical modeling, where you combine a physical model and some more traditional sound synthesis to make musical sound.

[Audience question about how the pitches were chosen.] That was set by a score, so those were predetermined. That was a good question.
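The mapping just described, from the overlap between the device and the virtual mass to an amplitude envelope, can be sketched as a penalty-style contact force. The names and stiffness value here are my own illustrative choices, not taken from the piece.

```python
import numpy as np

def contact_envelope(device_pos, mass_pos, stiffness=1.0):
    """Amplitude envelope proportional to a contact force: it is nonzero
    only while the device presses into the mass (positive overlap),
    so the synthesized sound is silent whenever they are apart."""
    penetration = np.maximum(0.0, device_pos - mass_pos)  # overlap depth
    return stiffness * penetration

device = np.array([0.0, 0.1, 0.2, 0.2, 0.1])  # fader position over time
mass = np.array([0.5, 0.15, 0.1, 0.0, 0.3])   # virtual mass position
envelope = contact_envelope(device, mass)      # silent until they touch
```

Driving a traditional oscillator with this envelope is one concrete way to combine a physical model with conventional sound synthesis, as described above.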
The frequencies were not algorithmically generated, but in some sense the rhythmic pattern was, to some extent, algorithmically generated, and it really depends a lot on the mood of the performers what happens with the masses and where they end up. It turned out a bit different than I had imagined, because I thought it was going to be the same or similar every time, but they really do something different every time. So it's kind of interesting, because it's an instrument that's fundamentally different, or certainly different, from traditional acoustic instruments. You could build it without electronics, but it would be hard, or very inconvenient.

So that was the part of the discussion on haptics and force feedback, and now I'll talk about physical modeling, which is a way of programming force feedback using physics, though you already saw that I was doing a lot of physics. This is the physics framework that we'll be presenting tomorrow at 10 o'clock, if you want to install it on your computer and take it for a spin. This is using Faust, so thank you very much, Faust developers, for creating the Faust compiler and tool chain and philosophy; it's an amazing system. If you have a Faust DSP file, you can compile it into all these different targets, and I'm sure I'm missing a lot of targets that aren't on there. And of course there are the targets that you will create when you take one of the Faust architecture files, modify it for the target you want to compile your audio for, and submit it back to the community.

Maybe I should back up a moment and say why. Maybe it's not obvious to everyone why this is so useful, but it just seems like people always write external objects for the environment
they're programming audio in. But then later, someday, you want to have your external object running in another audio program, so you have to go move your audio code into another place, which takes a lot of time. It's better to write a Faust DSP file, and then you can compile it into all these future targets across a wide array of programming languages. Just to mention, there is a new target for Bela, and tomorrow, I guess, people from Bela will be there to demonstrate it. Very cool; you can run this on Bela tomorrow if you want.

What the physical modeling software I will show you does is let you specify a physical model in an MDL file, and then you can compile it with the Synth-A-Modeler compiler into a DSP file, and go from there wherever you want. I call it Synth-A-Modeler because, instead of synthesizing sound, you're synthesizing models in this environment, and then you can use them later to synthesize sound.

My own personal goal was this: with all these different haptic devices (and I didn't show them all, because I've programmed a lot of other ones), I found myself re-implementing the same thing over and over again in all sorts of places, and it was too much work. So I said, okay, for the next 40 years I want to create everything I need in here, and then every time I want to use it, I just compile it into whatever target I need, and I can keep reusing the models and have a resource of things. That's my goal.

You can create model files in many different ways. You can do it using a text editor; you can do it using scripts; you can do it using the Designer GUI. Peter Vasil will be here tomorrow, I believe, so you can meet with him and learn more about the GUI
he's created for it. System identification is an interesting way of creating them, or analytical solutions; there'll be another presentation about analytically solving acoustics problems and using that to create models. Or even statistical methods, which sounds kind of crazy, but it's kind of fun to randomly generate physical models until you find one that you like. I encourage that approach too; I found some percussion models that way that I liked.

So this is the tool chain. Tomorrow, if you come at 10 o'clock to, what is it, the seminar room? I've forgotten where it is; I think it's back that way. If you come at that time, we can help you install the tool chain on your computer, because it does have a few components to it. This is just zooming in on that.

So why not write the physical models directly in Faust? Actually, that's great, and many physical models are written directly in Faust. I was just trying to do it on a more abstract level, and so I ended up doing it this way. If you have five virtual masses that are interconnected by virtual springs, this is how you could specify that in Synth-A-Modeler. You list the masses, which all have a mass of 0.001 kilograms; I gave them all initial positions and velocities of zero. Then you specify springs in between them: each spring needs a name, then you tell it what it connects, and then you define its parameters. In fact, in a model file, if you have a line that begins with the Faust-code keyword and a colon, it creates Faust code that is translated directly into the output, so a lot of what you see in the model files is just literally Faust code. And that generates, in this case, this is what the Faust code would look like: it creates a feedback block structure that simulates the masses, simulates the links, and figures out how to interconnect them all. And why is this
challenging? The forces from the links have to get exerted on both of the masses they connect to, and if you forget one of those in here, then you've accidentally created something unphysical, and it will probably blow up when you simulate it. Who's ever had a filter blow up before? I'm surprised it's not more of you; we should be programming more filters. Anyway, it happens. So you have to balance the forces: this L2 shows up here, and it also shows up, where is it, here. So you have to make sure you don't forget any of those. Oh, no, sorry, here; that's the one. Yes, those are the two, that makes sense, and those are right there, and then L0 is there and L3 is there. If you forget one of those, oops, it blows up. If you use Synth-A-Modeler, that won't happen.

We also created a physical modeling .lib file, which is used by Synth-A-Modeler but is pure Faust code, and it has the primitives that are used inside here. In deciding how to do this, we also wanted to enable the creation of more different kinds of models than we had before. Most of the larger physical modeling movements for simulating structural physics in music happened either at CCRMA at Stanford University, at IRCAM in Paris, or at ACROE in Grenoble, so there are these three paradigms: mass-interaction modeling, modal synthesis, and digital waveguide synthesis. I just wanted to be able to combine them to discover things that are different. The space of modal models and digital waveguide models is in here, and it's interesting to think about what we can get there, or what if we combine all three here: what model structures might we find that we haven't found so far in these other projects? So that's one of the other benefits of using Synth-A-Modeler. So what are the elements?
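To make the force-balancing point concrete: in a direct simulation of the five-mass example, every spring must push on both masses it connects. Here is a minimal explicit-Euler sketch; the integration scheme and values are my own illustration (Synth-A-Modeler actually compiles such models to Faust).

```python
import numpy as np

def step_chain(x, v, m=0.001, k=100.0, dt=1.0 / 44100.0):
    """One explicit-Euler step for point masses joined in a chain by springs.
    Spring i connects mass i and mass i+1 and must exert equal and opposite
    forces on both; omitting one side makes the model unphysical."""
    f = np.zeros_like(x)
    for i in range(len(x) - 1):
        fs = k * (x[i + 1] - x[i])  # force of spring i
        f[i] += fs                  # action on mass i ...
        f[i + 1] -= fs              # ... and reaction on mass i+1
    v = v + (f / m) * dt
    x = x + v * dt
    return x, v

x = np.array([0.0, 0.001, 0.0, 0.0, 0.0])  # displace the second mass
v = np.zeros(5)
for _ in range(1000):
    x, v = step_chain(x, v)
```

Because every action has its reaction, the net force on the chain is zero and total momentum is conserved; dropping one of the two `f[...]` updates breaks that balance, which is exactly the kind of bookkeeping mistake the compiler prevents.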
Well, these are actually the most basic elements; I keep adding elements. The simplest ones are mass-like objects. A point mass, which you could think of as moving along only one axis, is basically an object that has inertia and moves up and down only. And if you take a point mass and give it an infinite number of kilograms, then it becomes a mass that doesn't move, which in Synth-A-Modeler is called a ground object. Then you have to connect the mass-like objects together, which you can do using link-like objects. A linear link simulates a spring, and this drawing is like the coils of a spring going into the page. Already with these objects you can create a damped harmonic oscillator, by connecting a ground through a link to a mass, which is maybe the most fundamental physical model. It only resonates at one frequency, though, and it's very hard to play using a device like this if there's no nonlinear interaction. So it helps to have other links which are nonlinear. A touch link looks like this, and you can imagine it as a spring that disengages: if you press into a table you feel a stiffness, and when you pull back up you're free again. That's what I consider a touch link to be. And then if you have a port object that represents a haptic device, you can use that to connect to a mass and actually play it. So I should demonstrate this model.
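These two link types can be sketched in code. Below is a minimal illustration with invented parameters, again using plain semi-implicit Euler rather than whatever discretization the compiler uses: a ground-link-mass damped harmonic oscillator, plus a touch link written as a one-sided spring:

```python
import math

FS = 44100.0
DT = 1.0 / FS
M, K, R = 0.001, 1000.0, 0.05    # mass, link stiffness, link damping (invented)

def touch_force(x, k_touch=5000.0):
    # touch link as a one-sided spring: like pressing into a table, it
    # pushes back only while you penetrate the surface (x < 0), and it
    # disengages when you pull back up (shown here for reference)
    return -k_touch * x if x < 0.0 else 0.0

# damped harmonic oscillator: ground -- linear link -- mass
x, v = 0.001, 0.0          # pluck: displace the mass 1 mm and release
crossings = 0
prev = x
for _ in range(int(FS)):   # simulate one second
    f = -K * x - R * v     # linear link to ground (spring + damper)
    v += DT * f / M        # semi-implicit Euler, one step per sample
    x += DT * v
    if prev <= 0.0 < x:    # count upward zero crossings to estimate pitch
        crossings += 1
    prev = x

f0 = math.sqrt(K / M) / (2.0 * math.pi)   # theory: about 159 Hz here
```

With these numbers the counted oscillation rate lands on the textbook resonance sqrt(K/M)/2pi, which is the single frequency the talk mentions.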
I think now I start JACK. And so here are the Synth-A-Modeler models; you can see the models that are in there right now. I already compiled them, which you can do by typing "make jack-qt"; they go inside the jack-qt directory, and you can see the compiled ones inside there. If I start that one, then you see this nice application show up here, which is made using Qt. The distortion actually made that sound better, because it added some overtones; just a mass and one link doesn't sound that musical. But the nice thing about physical models is that you then have all these parameters you can adjust. So we could increase the damping, which changes the sound a lot, and maybe it changes the way it feels too if you change the mass. If you make the mass lower, then it sounds like that, and it's kind of interesting when you're touching it... that's so low we can't hear it. Do you hear how the pitch goes up when I touch it? That's because of the additional stiffness inside the touch link: when the touch link is touching the mass, there are two stiffnesses acting on it, this stiffness and that one, and that causes the resonance frequency to go up. Just kind of interesting.

So that's a really elementary model, but if you start adding more objects you can get more models. I was showing you a drum before, so maybe I should finish the drum. What's wrong with this drum? What happens if you hit this drum? Maybe it's not a problem, depending on your point of view, but... yes, that's right. I didn't include gravity in Synth-A-Modeler, so if you hit this it will just fly away forever, which makes it harder to play. Although if you made it bounce back and forth off of the moon or something... well, it would have to be moving pretty fast to hear it.
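That pitch shift is easy to check on paper: while the touch link is engaged, its stiffness acts in parallel with the link to ground, so the stiffnesses simply add. A quick numeric illustration with invented values:

```python
import math

# Two springs acting on one mass add their stiffnesses, so the resonance
# frequency rises while the touch link is engaged. Invented values.

m = 0.001            # kg
k_ground = 1000.0    # link to ground
k_touch = 1000.0     # touch-link stiffness while in contact

f_free    = math.sqrt(k_ground / m) / (2 * math.pi)               # untouched
f_touched = math.sqrt((k_ground + k_touch) / m) / (2 * math.pi)   # in contact
# with equal stiffnesses the pitch rises by a factor of sqrt(2)
```

So touching the mass here raises the pitch by about 41 percent; with a stiffer or softer touch link the shift scales accordingly.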
I guess, but anyway, that would be interesting. Yeah, you usually want to have some links connecting it to ground around the edges to keep it from flying away. And that gives you a sound that's very linear if you're plucking it. So how do you hit that? You have to have a port, and in this case I'm using a technique suggested by Claude Cadoz, which is to connect this spring to a mass and then use that to hit the membrane; it changes the timbre if you do it that way. But the sound of this is still going to be linear when you're not touching it, so it's kind of interesting to add some snares. I added some: if you take another mass and put a touch link between it and that one, and do the same thing here and here, then you get little balls sort of bouncing on top of this membrane. And then I did this trick, also recommended by Claude Cadoz, of using springs to simulate gravity, so that the snare keeps landing right on top of the membrane. So I made this model; actually I made four of them. Let's try this one.

So it doesn't sound that much like a drum that you're familiar with, because the parameters aren't tuned that way; we kind of hear the snares bouncing off of it. But because I put in four of them, you can play two of them on each fader. So that was just changing the parameters. That's something people learned in nonlinear science: sometimes if you change a parameter a little bit, the dynamics can suddenly change a lot. And that's sort of... this one. So it's kind of interesting: with physical modeling you can find timbres that are reminiscent of ones you've heard before but that are different, and just by tuning the physical model values you can change them. I guess, what is this one?
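The two tricks just described, a touch link for the collision and a soft spring standing in for gravity, can be sketched together. All numbers here are invented, and a real snare model would couple the ball to a moving membrane rather than to a fixed surface:

```python
# A snare "ball" bouncing on the membrane: a touch link against the
# surface at y = 0, plus a soft spring pulling toward a point below the
# surface as a stand-in for gravity (Synth-A-Modeler has no gravity).

FS = 44100.0
DT = 1.0 / FS
M = 0.0005            # ball mass (kg), invented
K_TOUCH = 50000.0     # touch-link stiffness against the membrane
K_GRAV = 20.0         # soft "gravity" spring toward y = -0.01

y, v = 0.005, 0.0     # drop the ball from 5 mm above the membrane
contacts = 0
in_contact = False
for _ in range(int(FS)):          # simulate one second
    f = -K_GRAV * (y + 0.01)      # spring toward a point below the surface
    if y < 0.0:
        f += -K_TOUCH * y         # touch link engages: membrane pushes back
        if not in_contact:
            contacts += 1         # count each new bounce
        in_contact = True
    else:
        in_contact = False
    v += DT * f / M               # semi-implicit Euler
    y += DT * v
```

With no damping the ball keeps bouncing, which is the steady snare rattle; adding damping to either spring makes the bounces die away.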
I could change the masses, I guess. So here I labeled the items, and then I added one more thing to try to make the dynamics a little more interesting if you want to play drum rolls using your fingers. I'm not going to really go into that, because it's based on another project that's kind of unrelated to this presentation, but basically you use a pulse touch link instead of a normal touch link, with just one finger, which is kind of interesting. I think later today you'll see a video demo of that in a different context on a slightly different device, where it makes more sense; I just thought I'd mention it now because I had this slide open.

Are there questions about this drum model? Yes: how expensive is it? That's a good question. I made this model with so few masses because I wanted it to be cheap, and I don't recall exactly, but if I just guessed I would say you could run 40 of them on a standard laptop at the same time; certainly more than 10 or 20, maybe even more than 40. What actually makes a big difference is zipper noise. With the physical models, you may have noticed that when I was adjusting that mass parameter there weren't any glitches or zipper noise. But as soon as you put ": smooth" after the hslider in the code for specifying the slider, it recalculates the physical modeling parameters more frequently, and so it uses more CPU. Maybe not so much in this model, but in some of the other models it matters. You have to be careful; sometimes you have to accept zipper noise if you want it to be cheap. That's something I found. But yeah, if you want it to sound more like a normal drum, use more masses, and more snares; I guess I only had three, but it sounded pretty dense already with just three snares.

And another object, added to make it possible to do modal synthesis within Synth-A-Modeler, is called the resonators object here, and Pascal Kopp
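For anyone unfamiliar with de-zippering: the Faust smoothing mentioned here is essentially a one-pole lowpass on the control value, so a parameter glides toward its new target over a few thousand samples instead of jumping. A sketch of the idea, with an invented coefficient:

```python
# One-pole parameter smoother, the idea behind putting ": smooth" after
# an hslider in Faust: each sample, the running value moves a small
# fraction (1 - coeff) of the way toward the slider's target. This
# removes zipper noise at the cost of recomputing downstream
# coefficients every sample.

def smoothed(target, state, coeff=0.999):
    """Move state toward target; coeff near 1.0 means a slower glide."""
    return target + (state - target) * coeff

mass = 0.001          # current (smoothed) parameter value
target = 0.002        # value the slider just jumped to
for _ in range(10000):
    mass = smoothed(target, mass)
# after about 10000 samples (a quarter second at 44.1 kHz) the parameter
# has glided essentially all the way to the new value
```

The CPU cost the talk mentions comes from what happens downstream: with smoothing, the physical-model coefficients that depend on this parameter have to be recomputed every sample instead of only when the slider moves.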
has now arrived, and he has worked some with this object, finding ways to specify it. That's nice, because with physical modeling it's sometimes a good exercise to be adjusting all the physics parameters, but sometimes you're in a hurry and you just want to know what frequency something is going to oscillate at; then you can use the resonators object to do it. And there's also another link-like object called a pluck link, which is kind of like a plectrum. So I'll show a model where you're plucking a modal synthesis object, a resonators object. This model is just analytically calculating the resonant frequencies for a rectangular membrane, and you can set the cutoff frequency; there are various things you can change. You can change its length also, which is kind of interesting. So that's convenient. If instead you wanted to do it a different way, if you wanted to actually specify all the frequencies for the modal synthesis object yourself, you could instead do this. Right, so we could do this model; I'll look at that later.
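For reference, the analytical modal frequencies of an ideal rectangular membrane, the sort of list a model like this computes before handing it to the resonators object, follow a simple closed form. The wave speed and dimensions below are invented for illustration:

```python
import math

# Ideal rectangular membrane with wave speed c and sides Lx, Ly:
#   f(m, n) = (c / 2) * sqrt((m / Lx)**2 + (n / Ly)**2),  m, n = 1, 2, ...

def membrane_mode(m, n, c=100.0, Lx=0.5, Ly=0.4):
    return (c / 2.0) * math.sqrt((m / Lx) ** 2 + (n / Ly) ** 2)

def modes_below(f_cut, c=100.0, Lx=0.5, Ly=0.4, max_index=50):
    """Collect all modal frequencies under a cutoff, sorted ascending."""
    freqs = [membrane_mode(m, n, c, Lx, Ly)
             for m in range(1, max_index) for n in range(1, max_index)]
    return sorted(f for f in freqs if f <= f_cut)

modes = modes_below(1000.0)   # e.g. every mode under a 1 kHz cutoff
```

Changing the membrane's length just rescales Lx or Ly in the formula, which is why that parameter can be exposed as a single slider.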
Anyway, there's another model where you can set the frequencies directly. So I decided to write a piece for 62 loudspeakers using physical modeling, because that was the number we had available at our laboratory at the time, and I wanted to have a different sound signal coming out of each loudspeaker. I was inspired by some prior works: Steve Reich's piece with microphones swinging over loudspeakers and feeding back, and Ligeti's piece with a hundred metronomes. In this case the balls, the masses bouncing on top of the resonators, are kind of like metronomes, and each of the modal synthesis resonators is tuned to a different note; each one has four resonant frequencies. And then there are two ports of a haptic device; they're so small you can't really see them, but there's one here, and here's the other one. What one of them does is raise all of these balls, so you can get them all bouncing at the same time; the other one picks them up gradually, starting with these, and as you push it further up it lifts all the other ones. So that's kind of an interesting piece to listen to, so I'll show that.

So that's how that one ends, and that was made with just masses bouncing on top of those resonators. Actually there are a lot of other interesting things you could do with this model; that's just what I did with this one. I like the fact that it sounded so much not like a bunch of masses bouncing on resonators. There's another part in the piece where they're all bouncing in phase and then slowly go out of phase, and it's nice to listen to on 62 loudspeakers. I guess if there's an 8-channel system tonight, I might play it over 8 channels, which would be nice, but with 62 loudspeakers it makes a highly immersive sound. And again, it's an instrument with a fundamentally new interaction, in terms of how you pick up all of these masses
that are bouncing. It would be really hard to pick up 62 different balls bouncing on top of resonators using a physical object in the real world; you could do it, but it's a lot easier with electronics. And it has this nice way of algorithmically generating music if you're just dropping a few of them.

Other elements in Synth-A-Modeler are the waveguide elements, which model strings. You can make a plucked string in Synth-A-Modeler, for example, with two terminations, a waveguide spanning in between them, and a junction that connects it to something that can excite it; that's a plucked string model. Or you can make a waveguide drum like this. I've never seen anyone make a drum by connecting a bunch of strings together, but actually you could do it. It might be hard, though, and it would feel kind of weird to be plucking something like this, but you can make it in Synth-A-Modeler. So let me demonstrate those models. Maybe I'll just go ahead and demonstrate the harp, which is 20 copies of the string model.

So that's a harp. It has a lot of parameters you can adjust, like each of its frequencies individually; that's just the way they're initialized. Let's see, we can make the sound less bright or more bright. Of course the damping is a nice thing to change. You can change how much damping there is in the plectrum, you can change the width of the plectrum, you can change its stiffness. The wave impedance of the strings is kind of interesting to adjust. You can also make them all longer or shorter, or detune them; I think you can imagine what that sounds like. I'll show the waveguide drum made with just four strings. It's kind of interesting to adjust the delays of the different strings, and if you're careful how you adjust them, it sounds kind of like a drum. Let's see. Basically, it's kind of like tuning four delay lines in a reverb. Who's ever made a reverb using four delay lines? Or maybe more?
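As a rough sketch of that comparison, here are four feedback delay lines in the spirit of a Karplus-Strong plucked-string loop, mixed like the waveguide drum (or a tiny reverb). The lengths, loop loss, and lowpass are all invented for illustration, and they are chosen pairwise relatively prime so the resonances don't pile up on shared frequencies:

```python
import math, random

# Waveguide-drum sketch: four noise-excited feedback delay lines.
# Each loop applies a small loss plus a two-point average (a crude
# lowpass), so the sound decays and darkens, Karplus-Strong style.

LENGTHS = [211, 223, 227, 229]   # pairwise relatively prime delays (samples)
assert all(math.gcd(a, b) == 1
           for i, a in enumerate(LENGTHS) for b in LENGTHS[i + 1:])

random.seed(0)
lines = [[random.uniform(-1, 1) for _ in range(n)] for n in LENGTHS]

def tick():
    """Advance every delay loop by one sample; return the mixed output."""
    out = 0.0
    for buf in lines:
        new = 0.96 * 0.5 * (buf[0] + buf[1])   # loss + simple lowpass
        out += buf.pop(0)                      # oldest sample leaves the loop
        buf.append(new)                        # filtered sample recirculates
    return out

early = [tick() for _ in range(4096)]
late = [tick() for _ in range(40000)][-4096:]
rms = lambda xs: (sum(x * x for x in xs) / len(xs)) ** 0.5
```

Because the loop gain stays below one, the output decays like a struck drum; retuning the four lengths toward small common ratios instead would produce the beating mentioned next.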
Maybe it sounds better with more, but it's the same problem: if you just have four, you want to tune them to relatively prime lengths, to get a sound with more different resonant frequencies inside it. Although beating is kind of interesting with this drum, if you get it beating, which you can do. I don't recall what a reverb sounds like if it's beating; that's probably a strange kind of sound. Anyway, that's the drum. So we're collecting a library of models for various uses; come try them out, see what you think.

Let's see, questions about physical modeling? Yes: yeah, there were some clicks, was that because of the lack of de-zippering? I think in one case there was some zipper noise in that piece; I didn't adjust those parameters in real time, so I didn't have a smoothing function on them, but it would have been nicer with that. I also think somewhere up here there's some noise getting in sometimes, which was part of it, but yeah, I think it was mainly the zipper noise. There are some new elements now, like stiffening springs and softening springs and things like that, which make for some pretty interesting sounds.

Yes, I have a question: since now you can combine all of this in one model, do you find yourself using one approach more than the others, or do you have a favorite approach? That's a political question. I love them all, they're all great, and I love them even more together.

Let's see. So here, you could build a string with springs that get softer when they get extended; there are actually some opera gongs like this that have a sort of softening-spring characteristic. Then the pitch goes down as you hit it harder, so that model is kind of interesting. I suppose I should show the stiffening variant of that. Steve Beck used this to write a piece for laptop orchestra called Quartet for Strings. Let's see, so this is the lowest one of those strings.
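The softening and stiffening behavior can be sketched with a cubic (Duffing-style) term in the restoring force. With invented parameters, counting zero crossings shows the pitch falling with amplitude for a softening spring and rising for a stiffening one, matching the opera-gong and stiffening-string descriptions above:

```python
# Nonlinear spring sketch: restoring force f = -k*x*(1 + alpha*x^2).
# alpha > 0 stiffens (pitch rises when hit harder); alpha < 0 softens
# (pitch falls, like the opera gongs). All parameters invented.

FS = 44100.0
DT = 1.0 / FS
M, K = 0.001, 1000.0

def pitch(amplitude, alpha):
    """Estimate oscillation frequency by counting upward zero crossings."""
    x, v = amplitude, 0.0            # release from the given displacement
    crossings, prev = 0, x
    for _ in range(int(FS)):                    # one simulated second
        f = -K * x * (1.0 + alpha * x * x)      # nonlinear spring force
        v += DT * f / M                         # semi-implicit Euler
        x += DT * v
        if prev <= 0.0 < x:
            crossings += 1
        prev = x
    return crossings

soft_quiet, soft_loud = pitch(0.01, -100.0), pitch(0.05, -100.0)
stiff_quiet, stiff_loud = pitch(0.01, 100.0), pitch(0.05, 100.0)
```

Playing quietly, both variants sit near the linear resonance; hitting harder pushes the softening spring flat and the stiffening spring sharp, which is the glide heard in these demos.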
You'll also find in the library a viola and a violin. And again, it depends a lot on how you have the parameters set, but let's see. With the stiffening string, the harder you hit it, the more the pitch goes up, with all the damping taken out. But anyway, that's a pretty interesting model. I'll show a model that combines all three paradigms at the same time; I need to remember the name, though. Okay, let's try this one: you're plucking a mass that bounces back and forth between a resonators object and a waveguide string. Actually, we could look at the... we went to the GUI, and now that's the post window, isn't it? You'll have to forgive me, I don't use my Linux computer that much, but before coming here I wanted to make sure everything worked on Linux, because the POSIX serial drivers should be the same for Mac and Linux. However, I had some discoveries in that area. I think maybe I had it in here.

If we look inside here, we can take a look at the ratchet model. Here it is. The model didn't say where the objects were, so it had to figure out where to put them. So here's the string, with a junction connected by a touch link to this mass that's bouncing between the string and the resonators object up here, and here's the device that's used to pluck it. I'm going to stop it from doing that because it's confusing, but anyway, that's the model you're hearing there. And that should give you a good idea of a lot of the stuff that's in there.

So, thanks a lot for having me. New technology and new music are sort of intertwined this way, and so we're working on writing new music, then seeing what ideas that gives us for new technology, and then iterating like that. What will you make with these technologies?
I hope... I don't even know, but I have lots of ideas, and I'm sure you do too. I wanted to thank the organizers for having me here, and my master's students Pascal Kopp, Denys Huber, and Peter Vasil for their help with the project. I also just wanted to thank you all for helping make Linux audio a possibility, and I'm very happy to be here among you all, where there's such strong support for open source software. So thank you, thank you.

One more question, please. I was wondering whether the special effects people shouldn't be interested in these physical models. Have you thought about that? Are you in touch with them? Which special effects... the people working on Star Wars movies or something like that? They usually do things in non-real-time, which is a very different way of simulating than real-time simulation. Actually, it's an interesting question. I encourage you all, when you meet people from the graphics community, to convince them that they should learn more about audio, because I find a lot of the graphics people think they know about audio, and often, in my opinion, they don't, or at least not about real-time audio. They think it's all a very arbitrary thing, that we just get together and put together a bunch of weird code, but there's method and reason behind every step that I know of, anyway, and I think sometimes that's underappreciated. I apologize if that wasn't your question.

But have you tried some of your models on smartphones? We haven't published that yet, but yes, we have. Actually, Pascal Kopp,
I don't know if you were going to say something about this, but he has these modal models for the iPhone, written partly in assembly code, and it's working now. But I have a good idea of the cost, I think, because I compile them for the Raspberry Pi all the time; all these models run on the Raspberry Pi 2. I think every one of them runs on it. The Raspberry Pi 1 is not nearly as powerful as the Raspberry Pi 2 for DSP, for some complicated reason, maybe having to do with the new compiler. But if you come to NIME you can see, or I'll tell you later, what we are doing. Yes, great.

I've forgotten what's next on the program. Oh, perfect, you're so prepared. Okay: OpenAV, on... and in the seminar room also, in five minutes.