Our next speaker is Jean-Michaël Celerier, whose passions include art, code, and computer music. He combined these in his PhD thesis on authoring interactive media, and he is now an independent researcher with the ossia.io non-profit, where he is a lead developer on ossia score, the open software system for interactive applications, and where he tries to bring some of the results of his research to the broad masses of interactive artists out there. In his talk, he will show us that interactive media can be so much more than just a movie with three different pre-recorded endings you can choose from. So, a warm welcome to Jean-Michaël Celerier.

Hey, thanks — this was, I think, the best introduction I ever got, so I'll try to remember it for next time. Hello everyone. I'll mainly be presenting, and doing a small demonstration of, ossia score, a free software sequencer that we've been developing with the ossia.io non-profit for — well, we are not far from ten years now. It combines various kinds of media in an intermedia fashion: video and sound, for instance, but also controls from hardware devices and inputs that people may not expect at first to be part of interactive artworks.

Is the screen share okay? So, for the first few minutes I'll quickly explain how the software looks, and for something like twenty minutes I'll try to build some scenes from scratch to show the capabilities of the software. Then we'll be able to discuss, and if you have any questions, I'd be super happy to answer them.

The software is a sequencer. If any of you make music, you may know the simple paradigm of putting a sound on a timeline and hitting play — that is the most simple thing one can do. What is less common in this kind of software is the ability to put not only sound but also video content on the timeline. Like this, for instance.
I can also take some video effects and say: okay, you will play at the same time as my sound — with various levels of control over these systems, of course. That is the very basic gist of it, and now I'll try to build something that, well, brings some sound.

The first thing one has to do when using score — and if anyone wants to follow the same steps during the presentation, the software is freely downloadable from the ossia.io website, so feel free to follow along and try things at the same time; it's always good to have people trying it — is to define which peripherals we are going to use for our interactive artwork. For instance, here on my desk I have a joystick, a keyboard, and this little thing called a Leap Motion, which is a kind of camera for hand gestures. All these things have parameters that we are going to be able to use as part of the performance.

So, for instance, if I add my Leap Motion here, you can see some tables of numbers moving — that is the position of my hand, things like that. I also want to use my trusty MIDI keyboard, and I will turn some knobs: I'm selecting the parameters that I want to use for the controls. Okay, this gives me an array of things that I can map to sound effects and so on. What else do I want to use? Let's see — oh, I want to use a joystick. Okay. This is basically the raw material for the interactivity of the work. Oh — and I'm getting a fantastic demo effect, that is great. Where are we?
I'm mostly using Linux, and the software is mainly developed on Linux, but it of course also works on Mac and Windows if you want to try it on other systems. We are especially in need of people who try it on embedded hardware like the Raspberry Pi — if you try it on various embedded boards, please report back; it's super useful to know.

To get started, I'll just put in a very simple drum loop that I will be modifying over time. Here we have all the things we can use — we call them processes — and those will basically trigger things at various times. For instance, if I want some sound, okay, I'll just do this. This is a bit too fast. So this is a very, very simple sound setup; now, what if I want to process it with some effect? For instance, some reverberation, or maybe some bit crushing — much more fun, I think. Now there is something running. And what if I want to control this part with, say, my joystick? I can just go inside here, take my joystick's axis — for instance, when I rotate it like that — and just drag it here. I could also want to apply some control to the reverb: let's map this one to the wetness of my reverberation. Okay, and this gives us a very simple basis to work from.

Now what I want to do is, of course, build something a little more elaborate on top of this. I'll start by adding a drum loop, because this is going to get very boring very quickly. At any point one can just press play here; if I do this, it will play only one time, so I tell it: okay, I want you to keep running forever. All right. Another thing I may want to do is play with live instruments — for instance, I want to add some bass guitar. For this I need to add an audio device, and on this audio device I know that my bass guitar is on inputs one and two.
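As an aside, the joystick-to-wetness mapping set up above boils down to rescaling one value range into another, often with a response curve. A minimal stdlib-only Python sketch of the idea (the function name, ranges, and curve parameter are illustrative, not score's actual API):

```python
def map_range(value, in_min, in_max, out_min, out_max, curve=1.0):
    """Rescale a raw controller value into a parameter range.

    `curve` applies power-law shaping, common for audio parameters
    where a linear response feels unnatural.
    """
    # Normalize to 0..1, clamping out-of-range input
    t = (value - in_min) / (in_max - in_min)
    t = max(0.0, min(1.0, t))
    # Optional curve shaping, then rescale to the output range
    t **= curve
    return out_min + t * (out_max - out_min)

# A joystick axis reporting -32768..32767 driving a reverb wetness of 0..1:
wet = map_range(0, -32768, 32767, 0.0, 1.0)
```

A mapping layer like this sits between every device parameter and every effect parameter in such a setup; the clamp matters because real controllers routinely report values slightly outside their nominal range.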
Let's do it like that. Okay, and I will add a looper here, so I can do some level of live looping, and I start my looping thing only when I'm ready: hold the bar and play forever, please, and start at a very low volume, like that. Okay, so now it's waiting for me to start recording things. Oh — I didn't record anything because, silly me, I forgot to set my bass as the input here.

So, for instance, I have a set of patterns, and I want to do another pattern; there is a very small synthesizer for it. I want to focus on editing this sound. It seems I have not set the right pitch, but that's okay. Now I may want to control this — for instance, add some effects after that sound with the movement of my hand. I hope that will work. For this, I can use score's support for these kinds of effects — yes, reverb-like effects, things like that. Yeah, like this. And I can just take it off again. So one can leverage various kinds of hardware and build a kind of live media system. That was the first example.

Let's do a second example quickly. Maybe some of you have recognized this musical theme; if not, I will just say that it is by Robert Prince, from the '90s. All right, we'll just leave that — doing this means that this part will never play, and it won't even use CPU resources. Now I want to use not only my bass as an input, but also something else. Like this — we'll name it "game" — and it will use inputs three and four. Okay, never mind you. And we want to add a new scenario: we are going to do our second work in this blank slate here. And here what we want is some video input, so I have some scripts to launch. Okay, you, for instance. Yes, all right — you go here, and you go here. Perfect. And now we want to use not only audio content but also video content, so I'm going to add a video input to my system.
Okay, right here. And I can say: okay, as soon as this score runs, I want to play my audio input. I'll just put a gain effect, and as input I'll say "game". Let's see — you may recognize the sound. I will also add a video filter here; you can use shaders and things like that. And right here, I can start. Now, if I do that, you may start to recognize some stuff from older days.

So what you can do is stage other things as video inputs and process them. For instance, I also want to use my joystick input to control things here — now I can play the game while applying filters on the video. I can do the same with audio, for instance if I want to add some effects on the sound output. And if I want, I can even use things like audio level meters and say: okay, I want my audio input to go and control, for instance, some visual parameters. Which means that if I send some sound in — here, as you can see — the audio is analyzed and drives the video controls.

So basically, score is a score which helps combine all these kinds of media systems, put them on a timeline, and sequence them over time: a kind of live creation with various kinds of media that you can then make scores of. That's the gist of it, I'd say. Well, if there are questions, I think I can take them now.

Yes, thank you for this interesting short introduction to some of the things that ossia — ossia score — can do. A reminder for the viewers out there: you can ask questions in the IRC or on Twitter and Mastodon. One thing I noticed: this presentation was mostly audio and video, because that is what works at an online conference. But if I understand correctly, the intermedia aspect of ossia encompasses far more than that.

Yes. Basically, we have this abstraction over protocols, and in the system you have this whole set of protocols.
For this particular demo I didn't use any network protocols, for instance, but it's super common in intermedia systems to use the OSC protocol, a standard protocol for media apps to send messages to each other. For instance, one can have a Pure Data program or a Processing program communicate with ossia score, be scored by it, or send control messages to it. There is also built-in support for various other devices — joysticks, remotes, Leap Motions like this one. And we are working on more, so you can also directly use Arduinos and the like: if you have an Arduino, you can just plug it in and start exchanging messages over the serial port without having to go through some sort of bridge, which is useful when you do things like kinetic installations.

We are also looking at adding more protocols. For instance, there has been more and more demand for IoT protocols like MQTT, because people don't only want to do art — apparently they also want to control their house and turn on the lights at the right moment, and for that, MQTT and similar protocols are super useful to have. It's an open system, and as soon as we hear of something useful, it goes on the roadmap, because, yeah, it could be cool to have that new way to control things. There is also support for light — you know, DMX for stage lights — so you can directly automate, let's say, a projector: control it and send the DMX data through Art-Net and related protocols. We try to make it usable for any kind of art show, museum installation, and so on.

Regarding all those features, could you say that ossia score is something like OBS Studio on acid? Good question. Well, for me, it's more complementary.
I use OBS quite a bit for streaming and things like that. And OBS has this OSC-over-WebSocket add-on: you can enable it and control various things in OBS from ossia. So the idea is more to integrate with that kind of software. ossia isn't going to grow streaming capabilities to Twitch and the like, as OBS has; it makes more sense, for me, to score OBS when that's needed, but to leave to OBS what OBS does best. But if someone wants to come with nice pull requests, I will be happy to merge them. So it's more like a Stream Deck on acid, where you control OBS if that's what you want to do in your environment. Well, that's today, at least — I don't know what the future has in store.

Another question, more related to the music part: can you read classical sheet music, and do you think classical music education is useful or necessary to produce music? In general, or in this context? In general, I'd say yes. Personally, I didn't have a classical music education, and I actually regret it a bit — some things would have been much easier if I had taken a few classes on harmony and the like. But for using score it's not necessary. In score you can import MIDI files, for instance. Let me see if I have some very simple MIDI files, like single chords... do we have some MIDI here? Okay, well, you can add MIDI.

Can you view it as notation once you play it? Oh yes, that's a very good point. There isn't direct sheet-music notation. However, there is this project, which I think was developed at GRAME — Guido; I'm not sure it was developed at GRAME — an open-source system for rendering notation. It's used, for instance, in the INScore software. It's been requested for a long time to add the Guido library, to have some level of sheet-music rendering. But personally, I am barely able to read sheet music, so it's not a super high priority for me.
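Since score works from MIDI files rather than engraved notation, an import only has to read the file's binary structure. As an illustration (not score's code), a stdlib-only parse of the `MThd` header chunk that opens every Standard MIDI File:

```python
import struct

def parse_midi_header(data: bytes):
    """Parse the 14-byte MThd chunk at the start of a Standard MIDI File."""
    chunk_id, length = struct.unpack(">4sI", data[:8])
    if chunk_id != b"MThd" or length != 6:
        raise ValueError("not a Standard MIDI File")
    # Three big-endian 16-bit fields: format, track count, time division
    fmt, ntracks, division = struct.unpack(">HHH", data[8:14])
    return {"format": fmt, "tracks": ntracks, "division": division}

# A minimal header: format 1, two tracks, 480 ticks per quarter note
header = bytes.fromhex("4d546864000000060001000201e0")
info = parse_midi_header(header)
```

The `MTrk` chunks that follow hold the actual note events; a notation renderer such as Guido would reconstruct pitches and durations from those same events.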
My needs are covered with simple MIDI files and things like that. But we are working a lot with composers from non-classical traditions — contemporary music, acousmatic music, musique concrète, things like that — and we've been working on a score-writing system for contemporary music called Acousmoscribe. I don't have the demonstration here, but it's a separate way of rendering scores, adapted to contemporary music, and it was relatively easy to integrate; it was done by a few students during a student project this year. Something similar for sheet music would be meaningful, I guess, if someone wants to do it. So again, I'd be super happy to accept any pull request that adds some sheet-music rendering of MIDI to score.

Talking about scores for songs: can you arrange a complete song in ossia, with some interactive elements and then, for example, some actions at specific points? Yeah — for me, that's my goal. Back in my proprietary lifetime, a decade ago, I was using Ableton Live a lot to make music, and I really want to reach the point where what you can do in Ableton, you can also do in score. Maybe not as easily, because not everything is the same, of course, and the two do things differently in some ways. But at some point, I want to be able to write a whole song in it.

What's missing today is, say, a good mixing user interface. In traditional DAWs there is the idea of tracks, which is always present, and in score there are things similar to tracks — you can say, okay, you, for instance, will be mixed into this, and then you can adjust the volume here — but that's not enough, I think. We also lack all the recording features, and that's a focus for us in the next version: right now, you can't record audio or MIDI during playback.
And I want this to be doable, because for me this is the final frontier before starting to actually make songs. So it's on the roadmap. I'd say that today you can do it if you are really dedicated and if you, say, use Audacity or something like that next to it to record things. At some point, I want score to be a full digital audio workstation, but it's not there yet — it's in the works.

Just to give the viewers an idea of what can be done with score, I can quickly show the gallery on the website, which shows various pieces that were made with it. There are modular pieces, live performances over the internet, museum installations, stage plays, things like that — various kinds of works. But there isn't a single song in all of those; there is often some musical element, but I don't think anyone wrote an entire... no, there isn't. Well — there is Pianotronics 3 by Alain Bonardi, which is actually a piano piece that was composed in score. So there was one.

Score uses a graph-editing paradigm, which is probably very easy to get started with. Do you ever get lost in complicated graphs and hope for S-expressions? Very good question. If you are used to, say, Pure Data or Max/MSP — for me, as you can see, these are graphs. But what you'll notice is that the objects in score's graphs are much bigger than the objects in, say, Pure Data or Max/MSP. The idea is to be closer to, you know, devices in other software: you don't have something that just does a multiplication, you really have high-level components which do a lot of things and which are, I think, more usable by composers and musicians. And if you want to get down to the... so, let's make a new document.
See, if you want to get down to a more precise level, you can just drop in a Pure Data patch, and you can actually edit Pure Data directly in score. This one is a white-noise patch, for instance. At the other end of the spectrum, if you really like coding, you can use JavaScript right now: you can have some JS script which will, well, generate things. And if you are really motivated — though I'm not sure it will work on this computer... not confident... no, it didn't work — you can also use C++. And over the last two months I've had three or four requests to introduce some Scheme dialect, so it's been on my roadmap to have some level of embedded Lisp. I'm not a big Lisp person — I really like the language, and I always think of the xkcd comic about being in bliss over the universe being made of S-expressions, but I'm more of an imperative and object-oriented person. Still, I recognize the need for it, and I think at some point there will be a way to also put Lisp in here, like you can put in JS and Pure Data, and it will be another language you can use.

Another question: can you use ossia just for the video, for example, and then link it somehow to Ableton or Bitwig for the music performance? Absolutely. For instance, if you want synchronization — on Linux, and for now it's only on Linux — we support JACK Transport, so you can say, okay, score will be either a client or the master; and you do want something to be the master in that case, because you want a single timeline for your whole system. Then there are a few things that help: for instance, if you need to play back video, well, you just do it. A case we had recently was wanting to use score with TouchDesigner, and we support protocols for sending and receiving video.
For instance, for the Doom screen, I was doing it with Shmdata, which is developed at the SAT in Montreal. There is also experimental NDI support, to be able to send video over a network. Also, in France we've been working on a publicly funded research project mainly aimed at making networked artworks — basically artworks where you have three people in three different cities, because, well, COVID and all that led to the need for such things. As part of it, there is of course research on all that we need in terms of protocols for exchanging various kinds of media between software. At some point it should be relatively easy to say: okay, I want to generate one sound at one point in score, while most of the sound is done in Live or Bitwig and things like that. It's theoretically possible — I haven't done it much, but I know there are people, in this gallery page, with some works that use both score and Live in the same performance, for instance, so it's not unheard of.

There's another remark from the chat in the same direction — not quite a question, but he says that, yeah, integration is the answer: you've shown all those protocols that ossia supports, and MIDI alone already subsumes many of these things. Yeah, well, MIDI just has this small resolution problem. But at first, the first versions of the software didn't even handle audio or video — it was only a controller for other software. Then at some point we had a lot of people who were like: okay, but I just want to play a sound at five seconds, and I don't want to open, say, Pure Data and make a player — so we added sound. Then people said the same about video, so we added video too. And now, for simple things, you can already do a lot in score.
But if you have a super complex video or audio workflow in Live, Bitwig, TouchDesigner, or whatever software you are using, then by all means use it, because at its origin, the software is a controller for other software. That's where it comes from. Yes, okay.
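The OSC protocol that underpins this "controller for other software" role is a simple binary format: a null-padded address pattern, a type-tag string, then the arguments. A stdlib-only sketch encoding one float message and sending it over UDP (the address, value, and port here are placeholders, not anything score-specific):

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a multiple of 4 bytes, per the OSC spec."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_float_message(address: str, value: float) -> bytes:
    """Encode an OSC message carrying a single float32 argument."""
    return (osc_pad(address.encode("ascii"))
            + osc_pad(b",f")                 # type-tag string: one float
            + struct.pack(">f", value))      # big-endian float32 payload

# Send e.g. /reverb/wetness 0.7 to a program listening on UDP port 9001:
msg = osc_float_message("/reverb/wetness", 0.7)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 9001))
```

Any OSC-aware program — Pure Data, Processing, OBS with the OSC-over-WebSocket add-on, or score itself — can receive and route a datagram like this, which is what makes the protocol the lingua franca of intermedia setups.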