I'm Anfa. Welcome to another monthly livestream. This month is very special: my co-host John de Bard and I have a very special guest. So I'm not going to draw out the introduction. Ladies and gentlemen, Paul Davis, creator of Ardour. Paul, can you hear us? Can we hear you? I can hear you. I am here. I'm glad to be here, Anfa and John. Yes. How are you doing, everyone? I'm here as well. Hopefully you can hear me. Hopefully I'm coming through. Looks like it. So John and I are going to be looking at the chat for questions. If you have any questions for Paul, throw them in and we'll try to filter them, and Paul is going to answer the ones he doesn't feel particularly assaulted by. The rest is going to be pure improvisation, because I think that's going quite well. Robin says that there's a huge AV offset. Maybe it's just there; let's see if anyone else says so. Hello, chat. On my side, it seems that it's fine. The question is, can you hear everyone balanced? Paul, can you say something? Yes. Yes, this is Paul talking so we can check the balance. Yeah. You see, we've done hours of preparation, but we always need some time on air to actually fix things or figure out if the levels are fine. All right, Paul, would you like to tell us something about yourself, specifically in the context of open source audio and Linux? How did you come around to making Ardour? Sure. So I guess the story really starts in the late 90s, when I was looking around for hobbies to go along with raising my young child at the time. And I'd always had a long interest in, I guess, electronic music. In particular, I thought maybe I would try to make some of that stuff myself, and bought myself a synthesizer and a hardware sequencer and started fooling around with things, and realized after a relatively short period of time that it would be good to be able to record what I was doing.
I think in retrospect, that's probably not true. It was very good that I didn't record what I was doing, but at the time it seemed as though I should record what I was doing. And I had been reading that all of the cool kids in those days were starting to record things on computers, and maybe I should do that. And since I had taken a vow in the 80s to never use Windows, I couldn't do that on Windows; Macintosh seemed to be too expensive and I didn't really want to deal with them. But I had been using Linux, I had used Linux for a little while at my prior job, which may come up during the talk, and thought, oh, well, let's have a look at this operating system and see if there's anything for it. And there actually was a program back at that time called Multitrack, which promised to be, you know, an in-the-box recording solution. And I thought, oh, this is great. I should just get, you know, some machine that I'll run it on, and I can multitrack and record the weird things that I'm doing with my Oberheim. And it turned out, of course, that Multitrack was actually terrible and you really couldn't do any of the things I needed to do with it. So I started out down the track of writing some little bits of software, initially just doing MIDI stuff, sequencing and control surface type things. And then at some point in 1999, RME brought out, I think it was the first Hammerfall PCI audio interface. It had 24 channels. And I remember thinking, oh, if I had 24 channels, I could obviously make a masterpiece, because all musicians know that you just need more channels. Of course. If you have more channels, it's going to be great. So the next task was to write a driver for the RME, which was much easier than it sounds, since there already was a driver for it that had been written by a guy called Winfried Ritsch in Austria, but it wasn't part of the kernel and it hadn't been done in quite the right way. So I got the RME working. Now we had 24 channels in and out.
And the next task was to, well, sorry, I had 24 channels in and out, but I couldn't really do anything with them. So the next task was to write something that I could record with. And that actually turned out to be much easier than you might think. I think I had 24-track recording working in less than a month. But the part that we immediately ran into, sorry, not we, at that point it was still just myself, the thing that immediately became clear was that it wasn't actually very much use. Just being able to record stuff and play it back, the way you would on a tape machine that you didn't want to cut the tape with. I mean, it's not useless, but all you could do is record stuff and play it back. And I was more interested in the possibilities of editing and transforming things. So after a few months of working on this multitrack recorder, I was happy to be joined by some other people. I had made the project open source from the beginning. There were a couple of other people around, and we sort of had a conversation and said, well, I guess we should just add an editor. How hard can it be? That was in early 2001, and we're now in 2022, and I don't think the editor is finished. But I do know how hard it can be. And so, you know, for the last 22-plus years, I've been working on Ardour, which is an open source digital audio workstation that runs on Linux, macOS and Windows, as well as a few other operating systems or variants. And we are just about to release version 7 of the software. Yeah, we've got some questions about Ardour 7. And obviously part of why we are here is to talk about Ardour 7, because this is the long-awaited major version that hopefully will bring some long-awaited improvements. But, well, yeah, maybe we can look at the chat questions still referring to what you've said before. Okay, let's see.
Max asks: hey, about Ardour's donation model, where one has to give money to get the ready-to-run program from the website, but the source code is free. Has this been an effective way of getting funding? Yeah, it has. It's been a long, slow build. I was in a very fortunate position when I started working on Ardour in that I didn't need to make a living from doing so. And so for the first eight or so years of the program, there was really no financial model for it. It was just a regular open source project. But at some point, I realized that my life situation was switching around and I needed to make an income again. And I actually got this idea from Radiohead, which is a band I'm not a huge fan of, but my son was at the time, or still is. And they had just released, I think, In Rainbows, using what they called a pay-what-you-want model, where you could download it, but you got to choose the price that you paid. And I think they were the first people to do this. They were certainly the first people to, you know, really publicize doing this. And I remember thinking, wow, what an amazing idea. You could just get the album and you could name the price for it. I wonder what would happen if we tried doing that with software. But I didn't really want to, I mean, it's still open source, it's still GPL. Anyone can get the source code. Anyone can copy it, modify it and distribute it and so on. And so it didn't really seem terribly likely that the model was going to be: you have to pay in order to get the source code. Because that just wasn't part of the idea of this open source project. But the idea was that, you know, there were a lot of people, even at that point, we started doing this in about 2009, I think, and there were a lot of people for whom just building the program was really difficult. It was a big challenge and they didn't really want to do that.
And so one of the ideas that I had was, well, you know, perhaps we could save people the hassle of building it; we could just offer a prebuilt version, and they could pay for that. And if they still want the source code and want to build it themselves, or they'd like to get it from their Linux distribution, that's all totally fine. But if they'd like a version that we build for them, and that we sort of bless and provide some kind of a support commitment for, perhaps that would work. And, you know, it was slow in the beginning. But it's built up over the last 13 years or so. To the point now where the program, I mean, we're probably one of the most financially successful independent open source projects. You know, we bring in enough to support myself at a, you know, reasonably comfortable, middle-class kind of life. We provide support for another person who works full time on the software, Robin Gareus. And over the last few years, we actually had a surplus, which we've slowly started using to support other aspects of the project, including documentation and stuff like that. So although it was a slow start, and it wouldn't be a process I would really recommend to somebody who is starting out with this kind of project today, it's actually worked quite well on the whole. Sweet. So I was trying to ask this in the first place, but I guess I was muted. Lurkingislife asked: how long did it take you to code Ardour to be in a working state? Which I guess should be interpreted as: how long did the editor take to get to a working state? Does it work today? I've made songs with it. Well, then I guess about 22 years. Let's see. So, I mean, for the initial attempt that we made at the beginning, there was an existing open source audio file editor called SND, which is an amazing audio file editor, particularly if you're a hacker. I mean, it has Lisp as the extension language, and you can do all kinds of really insane stuff with it.
And we somehow managed to convince its incredible author to port the whole thing to GTK so that we could use it, which he did in one weekend. And then we started using it and realized that it was not suitable. SND did all of its file I/O in a way that's not compatible with real-time work. It's totally based around the idea that you load the whole file into memory and you start working with it. So we did an initial version of the editor, which maybe took a month or so of sort of trying to integrate SND, and realized that that just wasn't going to work. It just didn't do things in the right way for something that was supposed to be doing real-time audio. And then after that, I think it was just mostly an incremental process. I mean, I would think probably within a month or two of us deciding to drop the SND integration effort, we probably had an editor. But pretty much every single day since then, the editor has changed, and mostly improved, and occasionally gone backwards. So it's really been a 20-year effort also, and I'm sure that that will just carry on out into the future. Because what people think the editor should do, what we want the editor to do, that just keeps expanding and growing. There is no fixed state where, when we reach it, it's done. Yeah, I guess that's the same with art or any kind of creative work, where it's done when you call it done. Yeah, or in this case, it's actually done when the users call it done, and the users will never call it done. I can guarantee that myself. I think I've reported more than a hundred tickets into Ardour's development tracker. I remember that moment where I hit a hundred and I was like, what? Whoa. You must hate me by now. No, we love good bug reports, and most of yours are pretty great. Oh, that's a relief. There is a question in the chat which is, I think, a really good one. Skyhawk77 asks: why would I use Ardour when one has a recent version of PreSonus Studio One?
That's a tricky and deep question, or we could be dismissive of it as well. But I think, more seriously, there are a lot of DAWs floating around. It always struck me, I guess in the middle 2000s, like 2005 to 2015, and I would frequently make this comparison: if you look at image editing software, that was a time when there was really only one image editing program that the vast majority of the world knew about. It was called Photoshop, and it actually became a verb in our culture, and in fact in cultures around the world. And that situation has changed somewhat today. There's now a multiplicity of them. But in the DAW world, there have been loads of DAWs right from the beginning. There are at least a dozen right now that anybody who's thinking of heading into digital audio workstations should consider: is this the right DAW for me to use? They may have particular qualities or preferences that they would impose on that. They might be concerned about using open source, for example. And if that was a concern of yours, then Ardour would be a fairly natural choice. You might be concerned about the type of workflow that you're involved in. Do you do completely in-the-box production? Or are you doing more recording of people performing, which ends up on a timeline that you then edit and move around? Your choice of DAW for those two workflows is probably going to be a little bit different. Studio One is a great DAW. They've done a lot of really great design work. There are some really great workflows in there. And there are some really crappy workflows as well. We were watching a video last week of how you should do tempo mapping in Studio One. And it looks cool in the video. And then you sit back and think about what they actually did and what they didn't do, and it's pretty terrible. So which DAW you choose to use will depend on what type of workflow you think you're going to be using, or will actually use.
It depends on your level of commitment to things like free and open source software. It may depend a little bit on your platform. But the reality is that most of the DAWs out there at this point, that more than 10 people have heard of, are pretty great pieces of software. And you'll be able to do a lot with them. I would recommend that you sit down and try out Ardour and Studio One and decide which one you prefer. It might be that you prefer Studio One, in which case you should use it. It might be that you prefer Ardour, and then you should use that. Or if you have a commitment to free and open source software, then Ardour is definitely going to be preferable for you over Studio One. This reminds me a little of when I learned to produce at school. And that was after I had had half a decade of experience working with Ardour, mostly Ardour 2 in that era. And I was like, man, I mean, Pro Tools has some nice ideas, but it feels so weird and clunky and needlessly limited, and pointlessly skeuomorphic in some ways, like how the mixer is made, how you have a limited number of slots for every track or mixer strip. And Ardour doesn't have these limitations. It's like, I felt like I wouldn't want to use Pro Tools even if it was also open source like Ardour. So that's also always preference. People just get used to stuff. I got used to Ardour first. Yeah, I mean, there are workflows that work the best in each of the DAWs. I mean, most DAWs support more than one workflow. Well, and you know, there are all kinds of little details that can really affect people. I mean, I remember hearing from Cubase users very early on, and in versions of Cubase, at least at that time, I'm trying to remember the detail, it was to do with what happens when you solo a track. And I think that by default, they only had exclusive solo.
So the default behavior was that you click solo on a track and you hear that track, and then you click solo on another track, the first track gets muted and you hear this track instead. And that was the default behavior. And they had very strong opinions that that must be the default. That's the only proper default for solo. And at that point, we didn't support that model of solo at all. We only had, I'm not even sure there's a name for it, I guess, non-exclusive solo. So we added that feature, and there is a preference item to control which one it uses. But that seems like a tiny thing to somebody who doesn't use a DAW or is not super into using them. But to a person who's already got very used to the idea that when I click on this button that says solo, just like it does in the other program, this is what it should do, if it doesn't do that, it's crap. We have to work with that kind of experience and feeling all of the time. But as I said, my position on DAWs these days is that if more than 10 people have heard of it, it's probably a pretty good piece of software. And if you think there's a reason why it might be the right tool for you, then you should try it out. But you should probably try more than one at the same time and try to form an opinion about which one is closer to the way that you like to think about things or like to go about doing things. Yeah, the exclusive solo thing is kind of funny, because I'm pretty sure that's the opposite of how hardware mixers work. Yeah. But let's see here. So Phoenix is wondering how Ardour 7's development has been compared to 6. Oh, well, therein lies a long story. So we brought out, we released 6.9, I think, in August of 2021. And in the background, while the build-up to the release of 6.9 was happening, I had been spending a long time doing some fundamental architecture changes.
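As an aside, the difference between the two solo models just described comes down to a few lines of state logic. Here is a hypothetical sketch (the function name and structure are illustrative, not Ardour's actual implementation):

```python
def toggle_solo(soloed, track, exclusive):
    """Return the new set of soloed tracks after clicking `track`'s solo button.

    exclusive=True models the Cubase-style default described above:
    soloing one track un-solos every other. exclusive=False models
    additive (non-exclusive) solo, where soloed tracks accumulate.
    """
    soloed = set(soloed)
    if track in soloed:
        soloed.discard(track)   # clicking an already-soloed track un-solos it
    elif exclusive:
        soloed = {track}        # exclusive: this track replaces all others
    else:
        soloed.add(track)       # non-exclusive: add to the soloed set
    return soloed
```

In a real DAW this state would also drive muting of all non-soloed tracks, and a preference item, like the one Paul mentions, would select which branch is taken.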
So, the way that we represent time inside of the program: things like where is this region, or where is this MIDI note, where is this control point for a plug-in parameter, and so on. And the plan was that after 6.9 was released, and that was definitely going to be the last dot release for the 6 series, the plan was I would merge all of that work into the main version of the program, and we would bring out a release as soon as possible after that. And it wouldn't be a very interesting release at all. Because in theory, almost nothing was going to change. It hopefully would solve a few bugs with MIDI notes. But the theory was that almost nothing should happen as far as users were concerned. We would just have changed the internal engineering of the software. And so that merge took place; it was kind of a traumatic process, because I'd been working on that other code for, I don't know, it was the second attempt at doing it, and I think I'd already spent many months working on it. And there was a previous attempt at working on that back in 2017. So the merge was tricky. But in the end, things seemed to be working okay, and most stuff seemed to be functioning once it was done. And it just didn't really seem worth bringing that out as a release. It was really just not going to have anything for users that was new. We would just be waving our hands saying, oh, we're so much better inside. And users who ran into these weird cases where a MIDI note wouldn't play, they would have been happy. But everybody else would just have been: I don't get it. Why did they release 7.0? It seems exactly like 6.9. But we could have done that. That was the original plan. So instead, for some reason, well actually, I do fully understand it: I was really tired and bored with integrating all this work on time representation. It was just really tedious work.
It was kind of fun to do the initial part of the architectural changes in the beginning, because it was quite difficult to come up with the designs that we needed or wanted, or thought we needed or wanted. But it had gotten really boring after a lot of months. And after the merge appeared to be done, I was really looking around to find something fun to do that would be more interesting. And initially, I went back to working on essentially a built-in plugin called the Beatbox, which is a step sequencer that you could use for programming grooves. Speaking of all the DAWs that are out there, the Beatbox was actually modeled a lot on the same feature in another DAW called MuLab, which I think more than 10 people have heard of, but it might only be 12. But you should check out MuLab's step sequencer. It is a really amazing step sequencer, a really great thing. So I worked on the Beatbox again for a little bit, and then it didn't really feel like that was going anywhere either. And I suddenly thought, eh, I've been wanting to do clip launching the way that Ableton Live does for quite some time now. Let's just see how hard that could be. And so I just started hacking up some code to try out the basic idea: you press a button somewhere, or you press a key on a keyboard, and the program starts playing some piece of audio or some piece of MIDI that gets stretched to match the current tempo. And as usually happens with software development, within three weeks I had 80% of the functionality. It was awesome. It's like, oh my God, we've just rewritten Ableton Live and here we go. But of course, the regular truth of software development is this law that 80% of the development takes 20% of the time and the last 20% takes 80% of the time. So starting back in, I guess, somewhere in December or January, that's when the 20% that was left started to become the focus of my own development. And it was a lot of fun.
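The tempo-matching step described here, a clip stretched to follow the session tempo, reduces to a simple ratio that a time-stretching engine then applies. A hypothetical sketch (the function name and error handling are mine, not Ardour's):

```python
def stretch_ratio(clip_bpm, session_bpm):
    """Ratio by which a clip recorded at clip_bpm must be sped up (or
    slowed down) to play in time at session_bpm. A value above 1.0
    means play faster/shorter; below 1.0 means slower/longer. A real
    engine applies this ratio while preserving pitch."""
    if clip_bpm <= 0 or session_bpm <= 0:
        raise ValueError("tempos must be positive")
    return session_bpm / clip_bpm
```

For example, a 100 BPM loop launched into a 120 BPM session must play back 1.2x faster to stay on the grid.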
This whole clip launching stuff, which we can talk about more or not as we go along, it's a huge amount of fun to play with. I'm not as convinced as I was when I started that it's actually an incredibly great way to make music, but it's certainly a great way to play around with sounds and beats and grooves and stuff like that. That feature bubbling up is what has really pushed 7.0 out to the present day: trying to get all of that stuff into good shape, fixing all the bugs that we discovered along the way with this time representation stuff, and then all the work that other people, which mostly means Robin, have been doing in the meantime on a whole bunch of other features that have crept in over the last year. It's really added up; it's going to be a pretty massive release in terms of new features. And in that sense, pretty different from 6.0. Way more work, way more new features. 6.0, as I recall, was a lot more about bug fixes and some new features, but nothing as big as what we're doing in 7.0. So that was a bit of a long-winded way of answering that question, but it hopefully gives some idea of what went down. Let me read another question. I lost it. Oh, I got it. Zelf asks Paul: what motivated you to commit to an open source model? So my journey with open source began back in the 80s. I was a PhD student in Germany, working at an international research institute there. And one of the guys that helped to run the computing systems was a man called Neil Mansfield. I don't remember exactly how Neil got into it, but Neil had already discovered and knew about the GNU movement and knew about open source. And he taught me about that while I was there. I quit my PhD and moved back to England, coincidentally, to work at another job with Neil. He became my actual boss and spent a year and a half mentoring me. And I feel eternally grateful to him for that. But along the way, he did a lot to impress upon me the ideas and ideals that were behind the open source movement.
And he also tortured me by not answering questions as well, when I would ask him things about TTY drivers. He would just say: you'll figure it out. Have fun. Which eventually I did. But it was like, God damn it, man, you know the answer. Could you just tell me the answer? Yeah. Incredibly valuable. If you're mentoring people, don't answer their questions. So Neil was the person who introduced me to the GNU project, taught me to use Emacs, and gave me some sense of the broader moral perspective on open source. And I then ended up working for probably about another 10 years or so, in and out of academia and business. Sometimes, when I was in an academic context, I'd be working on open source stuff. In the business world, not so much. But we would be using open source tools, because that's just what programmers do, at least programmers who are using Unix systems. And so once I stopped being a professional programmer to raise my daughter, I don't know, I mean, to me, open source just seemed, it was the obvious choice. It was the obvious choice from a moral and philosophical perspective. But it also seemed to me it was the correct choice from a pragmatic perspective too. I didn't know this back in the late 90s or the early 2000s, but hiring people to work on this kind of software is really, really difficult. I mean, there are very few people out there who are able to do this kind of work, and as for hiring them, well, why would they come and work for you in particular? It's just crazy hard to do. If you go talk to the proprietary commercial companies, they have exactly the same story. It's very, very hard. And by creating an open source program, for 22 years I, and the rest of the community, have had contributions from a bunch of absolutely brilliant programmers, brilliant developers who've been willing to participate in this project.
Maybe not as a job, maybe not for years and years or even decades and decades, but they've been willing to be a part of the project for a few months, a year, several years. And we've all benefited from their insight and brilliance and intelligence in ways that, if this had been a closed source project, just would never have happened. And so even if you didn't believe, you know, that the software ought to be free, that users should have the freedom to control their own software, to develop it, to redistribute it, even if you don't believe any of that stuff, if you just want to actually productively write audio software, making it open source means you're going to get way more contributions from people than you will by having a closed source project. Now, I do happen to believe in the philosophical side of it, so I'm very happy with how this has turned out. You know, my moral feelings about software have aligned with the pragmatics of: how are we going to write a major project like this without huge amounts of money and, you know, hiring a bunch of people? And it's been really good to experience that kind of alignment of philosophy and pragmatic outcome. Yeah, so that kind of actually leads into, although I'm going to quickly interrupt with a random question from Lurking, wondering what's Paul's favorite Linux distro? Oh, God. That's going to be a long answer. It doesn't have to be. I don't have a favorite. When I started using Linux, I used Red Hat but was not attached to it. I moved to Fedora, wasn't attached to it. And then, I don't know, about eight years ago, I can't remember the exact reasons why, but I was looking around to maybe move to something else, and Robin Gareus convinced me that I should use Debian. I've been using Debian for the last, I don't know, eight years. Do I really care? No, not really. That's fair. But I just had to ask when I saw it.
So then, uh, Destin, I can't read that, is wondering if you have any tips for programmers who are starting to get into audio programming? Well, one of the very first things that you must do is read a paper by Ross Bencina, B-E-N-C-I-N-A, uh, called something like Time Waits for Nothing; a Google search on Ross Bencina and real time will find it for you. This is absolutely mandatory reading for anyone who is doing this. It introduces a bunch of ideas that, if you've written other kinds of software, possibly you hacked up an app on a phone or you hacked a little spreadsheet thing to do a useful thing for you, or did some Python, Ross's paper will introduce you to the critical ideas that are at the heart of writing audio software. And if you don't know these, you'll go ahead and write audio software, but it won't work very well. And then secondly, don't write a DAW. You know, you've seen how that turns out. I just spent the last 22 years of my life doing it. We don't need another one. I'm sort of serious here, and I'm sort of joking, obviously. But, you know, we live in a world now where the capabilities of plugins are expanding slowly but progressively all the time. And it's very compelling to sit down and think, oh, I'm going to start my own DAW from scratch, because it's so cool. And you know, I'm going to be able to make it do this and do that. And it's true. And one of the reasons why there are lots of DAWs in the world is because it's a hell of a lot of fun. I mean, the reason why I wake up each morning and I look forward to, you know, coming in here and sitting at the computer all day is that writing DAWs is freaking fantastic. Paul, I think your cable came loose. Oh, really? Hold on. Which one? Oh, you mean the ethernet? Do you mean my network connection? No, no, the mic one. I think it's better now. It was crackly. It's good. Okay, sorry. Carry on.
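Coming back to the Bencina paper for a moment: its central rule is that the real-time audio callback must never block, so data is handed to it through pre-allocated, lock-free structures rather than locks or on-the-fly allocation. A minimal, hypothetical sketch of the classic single-producer/single-consumer ring buffer that this implies (all names are mine):

```python
class RingBuffer:
    """Single-producer/single-consumer ring buffer: all memory is
    allocated up front, and neither side ever blocks or allocates."""

    def __init__(self, capacity):
        self.buf = [0.0] * capacity      # allocated once, before real-time work
        self.capacity = capacity
        self.write_idx = 0
        self.read_idx = 0

    def push(self, sample):
        """Called by the non-real-time thread. Returns False if full."""
        nxt = (self.write_idx + 1) % self.capacity
        if nxt == self.read_idx:
            return False                 # full: report it rather than block
        self.buf[self.write_idx] = sample
        self.write_idx = nxt
        return True

    def pop(self):
        """Called by the audio callback. Returns None on underrun."""
        if self.read_idx == self.write_idx:
            return None                  # empty: caller outputs silence
        sample = self.buf[self.read_idx]
        self.read_idx = (self.read_idx + 1) % self.capacity
        return sample
```

One slot is sacrificed to distinguish full from empty, so a buffer of capacity N holds N-1 samples; real implementations also need the index updates to be atomic, which Python's single-threaded sketch glosses over.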
Yeah, it's really fun to work on a DAW, but if you set out to write your own one, your own complete standalone application that will do all the stuff that you want from audio, it's only my opinion, but I think you're wasting your time and you're wasting the time of people that your software might be useful for. You know, there is a halfway house. There was a pretty cool non-open-source project that popped up last month called Beat Scholar, which is an amazing plugin for doing drum programming. It works as a plugin and a standalone application, but those guys still wisely resisted the temptation to actually write a DAW. They said, okay, we're going to write something that just does this, and we're going to keep it fairly contained. And I'm on their Discord channel, and they're getting hammered all the time by people saying, could you add this? Could you add that? And you can see the writing on the wall, which is that a lot of their users want the standalone version to turn into a DAW. And I know why the users want that, but as a programmer, if you go down that path, you're just going to be reinventing so much stuff that has been invented previously. So if there's any way for you to find a place for your vision as an audio programmer by staying in the sort of plug-in world, everyone benefits from that. You won't need to reinvent the wheel that I reinvented, and the guy that wrote Studio One reinvented, and the guy that wrote Bitwig reinvented, and so on, and you'll be able to produce something that's useful to way more people. Okay, we had a question earlier: why use Ardour versus Studio One? Well, how about the users of your software don't have to make that choice? How about you write a plug-in that they can use in Ardour or Studio One? So, all of those things. And get more sleep than I did when I started working on Ardour. That's the other piece of advice I give. I guess people are now going to ask about Zrythm. Yeah, they are typing Zrythm.
Have you heard about Zrythm? Yes, I do know about Zrythm, and Alex and I have chatted on IRC over the years that he's worked on it. I'm happy to see that he's getting close to a release. I still think that the effort and ingenuity and invention that he's put into it would have been, as far as users are concerned, it would have been better for people if he'd put all that into Ardour. But then I would say that, wouldn't I? Yeah, I also think that this is kind of the nature of the open source community or mindset: people like to have options, and lots of people like to just make things on their own, even if they already exist. And maybe, yeah, maybe that is a waste of time if you look at it objectively. But for them, subjectively, it's a journey, and possibly a very important one. Absolutely. No, I absolutely agree with that. And that's why I said that in some ways I was joking when I say things like, don't write a DAW. I just think people sometimes need to be reminded of the fact that if you start down that kind of path, you're going down a path that many other people have gone down. And, you know, it all depends on what your reasons for going down that path are, right? I mean, if you want to just learn how to be a better audio programmer, then you should probably just try to do whatever you're thinking about and see if you can make it work. If you're thinking about it with more of a perspective on, you know, what could I do that would be a really great thing for other people to use when they're doing a certain kind of work, then maybe don't go down that path. Maybe think a little bit more about a framework and a context in which to do some really cool and useful stuff for people that does not involve you reinventing the wheel again.
And I would stress that, you know, one of the reasons that I wrote Ardour in the first place, I mean, this is a sort of secondary answer to the question of why it was open source in the first place. You know, when I started working on Ardour, if you wanted to write a web browser or a text editor, or, I think at that point, we even had GIMP for image editing, there was plenty of other photo editing software, there was some drawing software, you could go find an example of how to do that. If you didn't know how text editors work, you could go read the source code of a whole bunch of text editors and you could learn how they do their stuff. And if you wanted to then go on and do your own, great, you could do that. But when you came to the DAW world, there was nothing to learn from, right? I mentioned earlier the snd program, an amazing audio file editor, but not a DAW, not a real-time audio tool. There weren't any examples of anything floating around that an audio programmer could look at and say, well, I wonder how you do this in a DAW? You know, I wonder what types of data structures you use for that? I wonder what approach you use for that? And I really, I had the goal then, and I still have the goal now, that, you know, even if Ardour is a bunch of crap, which some people think it is, even if there's no reason to use it in preference to any other DAW, it still stands right now as a big blob of code that other developers, including Alex and some others, can go look at and say, well, how did they do that in Ardour? And they might turn around having seen that and say, oh my God, these people are morons. I mean, what are they doing? This is terrible code. It's a stupid design. Why did you do that? And that's fine. My goal wasn't, you know, I didn't think I would be able to get everything right. I just thought it would be a great idea for there to be an example of how you do this stuff.
And there were no examples. And now there is one. And so if that leads people to go off on their own garden path towards this thing that they want to do, if Ardour can help them in that, then that's great. It really is. So John, you've got questions? Oh, yeah, sure. Sorry, I thought it was your turn. So a quick question, it should be pretty quick: what engine components of Ardour are written in assembly? I know you answered that for me the other day. There it's just really a few very small pieces of code. We discovered back in the late 2000s that metering, that is to say, looking at every sample that goes through the program and seeing if it's the largest or smallest, or maybe just if it's the largest value that's gone down a particular signal pathway, was consuming by far the most CPU power of anything that Ardour does. You have to remember that Ardour doesn't have any built-in processing of its own. It certainly didn't at that point. It maybe has a couple of things that could be counted as that now, but it had none at that point. And the metering was really burning the CPU for us. And so a wonderful Finnish programmer, Sampo Savolainen, dedicated himself to the task of cooking up handwritten assembler to optimize all that code. And at that point, if you were running with a fairly large buffer size, like 1024 samples, his code cut 30% of the DSP load of running Ardour. Since that time, we've slightly expanded the set of things that we do with assembler. There are a couple of other specialized functions that apply a gain level to a buffer, and I think a couple of other functions too. But also, in the meantime, we discovered that compilers have gotten much better. And with the last round of improvements that we made to that code, which was, I think, earlier this year, possibly last year, we found that we actually can't do that much better anymore than the compilers can.
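The hot loop being described here is easy to sketch. Assuming a block of float samples, peak metering is just a running maximum of absolute values (the function and names below are illustrative, not Ardour's actual code):

```python
def track_peak(samples, current_peak=0.0):
    """Scan one block of samples and return the largest absolute value
    seen so far. A DAW runs this over every buffer of every metered
    signal path, which is why it can dominate the DSP load."""
    for s in samples:
        a = s if s >= 0.0 else -s   # absolute value of the sample
        if a > current_peak:
            current_peak = a
    return current_peak
```

The loop is trivially data-parallel, which is why hand-written SSE assembler comparing four floats at a time could cut the cost so dramatically, and also why modern auto-vectorizing compilers now get close to the same result.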
So although we still have assembler in place for doing metering, some gain applications, stuff like that, the actual benefits from it have really gone down from where they would have been 10 years ago. All right. Let me read a question from chat. Edwardusj asks, what MIDI workflow changes would you definitely like to implement in the future? Well, let's see what's the best way to go about answering this question. For the people that don't know Ardour, and I know that there are a few people watching today who are not that familiar with it, one of the relatively unique characteristics of the program is the fact that we do not have a separate piano roll window for working with MIDI. We're not totally unique in that. Pro Tools also offers the same kind of workflow that we do. One of my favorite musicians is a huge fan of that workflow. But there's an awful lot of other people who really have a strong attachment to the idea that when you're doing MIDI editing, somewhere in the window there should be this thing that pops up or emerges that is a dedicated panel for doing MIDI editing. This is one of the two big things that people complain about with MIDI editing: where is the piano roll? And when we tell them, oh, it's right in the track, you don't need a separate window, a lot of people are not convinced by this, including our host today, I believe, I'm not totally certain. And then the second thing that really bugs people is that we do not have a bunch of velocity lollipops, which are these little vertical sticks with balls on top of them that show you what the velocities of the notes are, laid out along some horizontal line, and let you move them up and down to control velocity. This is another feature that lots of people basically say, well, I just can't use this program for MIDI because you don't have velocity lollipops.
As we said earlier, these are the kinds of things that are very much affected by what software you've learned to use and precisely what your workflow is. There are people who work with MIDI who hardly ever change the velocity of anything. There are people for whom it's very important to be able to control velocity, actually even more than you can with lollipops: people who, for example, want modulation sources from somewhere else, or nice curves. So the question is, for these two features in particular, do we have any plans for the future? My own personal take is: you don't need a goddamn MIDI piano roll and I'm not going to work to get it in there for you. However, it does turn out that for a completely different reason, which is to do with the clip stuff that we've done, there actually is a use case for a piano-roll-type editor. Anybody who knows Ableton Live, or has messed around with the pre-release version of Ardour 7, will know that when you're on our Cue page, or Live's session view page, and you have all the clips in there, you can click on one and there's a bunch of boxes down at the bottom that let you do stuff to the clips, and one of them is a little editor that will let you change what's in a MIDI clip or change the length of it and so on. And so we now have a new place in the program where you kind of need a window that would let you edit MIDI, or at the very least change the boundaries of it, but probably edit it too. Paul, as a result, yes, would you like to show this around, demonstrate it in Ardour 7 RC1? Not really, because the thing I'm talking about doesn't actually exist. All right. So you'd be pointing at a blank space and saying, this is what should go here. To do. Yeah, to do. But anyway, that clip launching stuff has generated a new and independent reason why we should be able to pop up some sort of area that you can do this kind of stuff in. And so for those fans of a separate piano roll window, there is hope.
I'm not telling you when it's going to happen. I'm not even telling you that it is going to happen, but there will probably be work that will lead to this happening. On the velocity lollipops: everybody who wants these either writes the worst chords or doesn't use chords, since velocity lollipops are horrible for manipulating the velocities of chords, because you typically need to edit all of them at once, or it's tricky anyway. I'm not sure what we're going to do about velocity lollipops. There is a bug report, at least one of them, about this in the bug tracker. And somebody has made a really good suggestion that actually convinced me. I don't remember what it is right now, but at some point after this release is done, I'll go back and read the bug report again, and there is an idea in there for how to do this. I would stress that the main reason we haven't done it is not, I mean, doing stuff like this in the user interface is always complicated, but it really has been to do with the fact that I had this standing objection, which is: how are you going to deal with chords? And until I got a viable answer to that question, I was not willing to even think about doing it. We do have a somewhat viable suggestion now, and I suspect that at some point in the future we will implement something along those lines. In the meantime, I'd like to point out that there is a little velocity bar in every single note drawn in Ardour now. You can just go there and click on it, and you can change the velocity of that note right where the note is, not down at the bottom, trying to line up exactly which lollipop is which note and so on. Or you shouldn't have to do that, because it would be selected and you could see it. But anyway, it's not hard to change velocities now in the program. It's just that you don't do it in the way that people think works better for their workflow.
And I suspect that at some point in the future, now that we have a good suggestion for chords, we will do something about that. In the meantime, in version seven, there's been a bunch of other almost trivial changes that I personally think add up to a somewhat enhanced workflow for MIDI. As I said, a lot of them are really trivial stuff to do with how note selection works, providing the ability to scroll around in the note ranges, providing ways to explicitly control what note lengths, channels and velocities you're using. There's also been a recent fairly major change by Robin that allows MIDI regions to be opaque. That has not been true in our software previously. In Ardour, you're allowed to layer regions on top of each other, which is something that most other DAWs do not do. And with audio, you've always had the choice of, when you put one region on top of the other one, can you hear the one underneath it? And most people, I shouldn't say obviously, but most people want the top one to be opaque, which is: if I put this region over this region, I don't want to hear the one underneath it. That has not been true for MIDI. MIDI regions were always transparent, and so if you layered one MIDI region over another, you would hear both layers. It's now possible to make that choice, and you can have an opaque MIDI region. We've also added a combine operation for MIDI regions, which a lot of people have been wanting. It's not something you couldn't do before, but it's a much simpler way: if you've got three different MIDI regions with a bunch of stuff you like in them and you want to combine them into one, that's now a trivial operation. So there's a bunch of small changes that we've made. Another example of what I mean: if you transpose notes, now the track will shift the note range that it's looking at so that you don't lose the notes. It's a small thing, but if you transpose stuff all the time, it's super annoying that you transpose the notes and, where have the notes gone?
They've just vanished. They went off the bottom or they went off the top. So I'm hoping that people who work with MIDI a lot will find themselves appreciating a lot of these little tweaks. And for those people who are real separate-piano-roll and velocity-lollipop freaks, we'll have something for you coming down the pipeline later. Yeah, excellent. I mean, I've come to like having the inline editor in certain ways, because it helps you see where you're editing relative to what's going on in the track instead of just being at the bottom. So it can help, like, writing a bassline to line up with chords. So yeah, let's see here. So that's a good, where's the question I was looking for? It is right here. Daniel Murray asks, any possibility of a mix template for Ardour that is like Mixbus 32, without doing it yourself? I don't know. I think that's the thing that you should probably turn to our community on our forums and ask about. There's certainly no reason why you couldn't do something quite close to that in Ardour. You'd have to pick the plugins carefully. You won't be able to get exactly the same interconnection routing that Mixbus has, since that routing is internal. The other big feature of Mixbus is not just the signal processing, but the fact that you've actually got knobs for all of the built-in processing right there on the screen. But on the other hand, now that we have inline plug-in controls that you can choose to display, you could do some version of that too. It's not something that I think those of us who are involved in Ardour development would likely do, but it's possible that if you ask on the forums, you could generate some interest and some insight and expertise, and actually come up with some kind of a template for that. I wonder. There is this extension to the LV2 plug-in format that I believe you and Robin developed, the inline displays. Just Robin. Yeah.
So these are non-interactive, but maybe it would make sense to allow interaction for them, and then one could implement, like, a mixer strip using the inline display. So we'd have an embedded little GUI for a plug-in right in the mixer strip. Yeah. I mean, right now you can combine the displays with the inline bar controls to control a specific parameter. So you could have a display of, say, a parametric EQ, and then you could have knobs for each band. And that would eliminate the need to actually interact directly with the parametric display. But I also appreciate that quite a lot of people like that, like having a little parametric curve that you could tweak. I would question the utility of it when the amount of screen real estate is so small. I think probably just doing the thing you can already do, which is to say, I want to bring up the knee control for my compressor and put it directly inline in the mixer strip. That's something you can do right now. And as long as you've got plugins that don't have too many parameters that you really need to control, it will be quite easy to put those all inline in the mixer strip and have instant access to all of them. I guess the issue then would be that to expose these, you would need to do it individually for every single instance of the plugin. There is no way to just copy and paste the plugin with the settings for which parameters are displayed inline. I don't know. Robin would know the answer to that for sure. I don't know about copy and paste. Copy and paste might carry that over, but I'm not certain. But I would certainly say that it sounds like that's the thing you'd want to be able to do with that feature. Let me bring another question from the chat. Spiros Theodorides, I hope I'm pronouncing that right, asks: do all popular DAWs, including Ardour, provide the same audio recording slash playback quality, given all else equal?
This is a little bit of a contentious topic, because there are people out there who make claims about what DAWs do internally and whether they have better sound and so on. At some fundamental level, all DAWs that I know of on the market today have a state they can be in where almost every bit in corresponds to the bit that you get out, and they're the same bit, or the same value if you like. Essentially, they're capable of completely transparent audio playback. Whatever files you tell them to use, or whatever audio they have passing through them directly from the audio interface, is going to be pretty much exactly what you hear on the other side. That being said, there are some DAWs where there is some evidence that, unless you disable it explicitly, they do actually do a little bit of processing on their own by default. But yes, I suppose that "by default" is a reasonable way to put it. Some of them do have things in the signal processing chain that mean that they are actually changing the signal a little bit. Obviously, they do that because they think it's for the best. Some users will like that they do that, and some don't. Now, an extreme-case version of this would be something like Mixbus, Harrison Mixbus, which is based on Ardour. Mixbus makes a very particular point of saying the sound you get out is not exactly what it pulls off of the disk or what you get in from your audio interface. And that's a feature rather than a bug, because the point of it is to emulate what would happen when you put it through a traditional console, where there would be some compression and some limiting and various other things going on. But I believe that with all the DAWs that are around, there are ways to turn off any of that kind of hidden implicit processing. The only exception to that is that some DAWs, including Ardour, do at least apply the thing I'm about to describe, although you can turn this off.
The one thing Ardour does that prevents it from being bit-for-bit perfect playback by default is that we apply tiny little fades at the beginnings and ends of regions. So if you were literally doing a null comparison, if you took an audio file, put it into a region in Ardour, and then started the transport rolling and played it, if you null it, you'll notice that at the beginning and end of the region it doesn't quite null. But once again, that's a feature that you can turn off if you don't want it there. So yeah, I would say in general all of them really provide the same quality playback, because I would consider the same quality playback to be: we don't touch the audio at all. That, to me, is the definition of what DAWs should be doing. If you want it to sound different, if you want an EQ or you want a compressor or you want some tube warmth or something else in the sound, then go ahead and add a plug-in. That's what the plug-ins are for, and that's where you should get that stuff from. The DAW should be capable of giving you exactly the output that it had as its input, and I believe all the DAWs on the market right now have a way to do that, although you may have to turn off a couple of settings to actually get it. So, we've actually had two people ask, I believe: what is your opinion on PipeWire? And I'm going to extend that with: is Ardour going to properly support a PipeWire backend, or just continue to use its JACK implementation? And I know what you'll say, but I know you also suggest ALSA as the Linux backend. Yeah, so first of all, I'm very happy that Wim Taymans started the PipeWire project. I think it's a very exciting development for Linux audio, and I think it's really just great that it's gotten as far as it has in a relatively short period of time.
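The null comparison described here is easy to reproduce. A minimal sketch, assuming short linear fades (the 64-sample length and linear shape are illustrative, not Ardour's actual values):

```python
def apply_declick_fades(region, fade_len=64):
    """Return a copy of the region with short linear fade-in/fade-out
    applied at its boundaries, in the spirit of the declick fades a
    DAW adds when playing a region."""
    out = list(region)
    n = min(fade_len, len(out) // 2)
    for i in range(n):
        gain = i / fade_len          # ramps from 0.0 up towards 1.0
        out[i] *= gain               # fade in at the start
        out[-1 - i] *= gain          # fade out at the end
    return out

def null_test(a, b):
    """Subtract one signal from the other; a perfect null is all zeros."""
    return [x - y for x, y in zip(a, b)]

source = [1.0] * 1000
played = apply_declick_fades(source)
residue = null_test(source, played)
# The middle of the region nulls perfectly; only the faded edges differ.
```

Disable the fades and `residue` is all zeros, which is exactly the "we don't touch the audio at all" behavior being described.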
For those who don't know what PipeWire is, just to clarify: if you use audio on Linux right now, you're confronted essentially with the lowest level of audio I/O on Linux, which is ALSA, the kernel-provided drivers. And then you've got a choice after that of whether the software uses PulseAudio, which is a sort of consumer-desktop-oriented audio system (it lets you do a bunch of cool things that you probably don't want to do, but they're cool anyway), or uses JACK, which is targeting more pro-audio and music-creation workflows. And of these two things, people complain about PulseAudio, but I think PulseAudio is actually pretty great at what it does, and JACK is pretty great at what it does. The difficulty is that lots of people run into collisions between them. They, for example, want to work in Ardour on Linux, and they want to watch a YouTube video at the same time that they're working on their session, because the YouTube video has something that they want to replicate in the session, and they find that they can't do it, because their browser is probably using PulseAudio while Ardour is using JACK, for example, or the ALSA backend. And so it's quite frustrating for a lot of people, and unfortunately, as always happens with this, it's most frustrating for the beginner users, who really deserve a much smoother early experience on Linux. But even for the experienced users, you know, it can be a pain. And PipeWire is here to really fix all that by integrating what PulseAudio does and what JACK does into a single API and audio system for Linux. And I think this is going to be just great. It's really a fantastic idea, and it will put us on Linux in an even better situation than either Windows or macOS. I mean, we will have features that people can just use out of the box that are better than anything you can get on those other operating systems.
Yes, that said, yeah, absolutely, that said: PipeWire is not quite ready for use on the pro audio side of things yet. They're making rapid progress, and they respond rapidly and excellently to useful bug reports, but most of the people I'm watching who are using it as a substitute for JACK are running into little details of parts of the JACK API that they haven't implemented yet. I have absolutely no doubt that they will implement them in time, and when they do, and when enough of you poor people who volunteered to test it out early say that it's working just fine, then our public position on PipeWire will just be: yeah, of course you should use it, it's what's there on Linux. We're not quite there yet, and I don't know how long it will be from now until PipeWire does all the things it needs to do on the pro audio side. Certainly for the consumer and desktop side of things, I think it's already better than PulseAudio, so it's a good system. Will we use it directly in Ardour? It's hard to answer that. I always said we would never provide a PulseAudio backend, and then Robin, I think he was on a train going somewhere, decided to implement a playback-only PulseAudio backend. And it is kind of useful in some scenarios, especially if you just want to do playback, if you're just editing, for example. So I'd never say never. I don't think we have any plans for it right now. I personally feel, well, I don't know, I'm not sure what our position is on this. I don't really want to see another API like the PipeWire API start becoming one of the standard ways that applications write their audio I/O stuff. I feel that that would be a mistake. I think we've got ALSA in place already; if that doesn't suit your purpose, then it's a bit more tricky to decide what you should do, what library you should use to
make it easy. If you're doing pro audio stuff, you should almost certainly be using JACK, just because it's the right API and it'll make sure that you design things in the right way. But there is that gray zone, you know, for somebody who is doing something and they really don't want to use ALSA because it's too low-level and they can't grapple with it: what are they going to use? I mean, realistically, in a few years they're probably going to end up using PipeWire, and so we will end up in a scenario where it is a new standard API, at which point, would we add support for it in Ardour? I don't know. I mean, there don't seem to be very many benefits of doing so, but we might do it anyway. Have you had any particular issues with PipeWire as of late, or have you heard any user stories? Because I've been using it for a quarter, maybe half a year now, for normal desktop stuff and for audio production with Ardour and other tools, and all the issues I had have been fixed. So, I don't use PipeWire at all. I mostly just sit and watch other people use it and smile quietly to myself. But I believe there are still issues with its latency reporting; it does not get that correct at this point, which means that if you're doing things with Ardour in which the latency of the signal paths matters, you will get things that are either misaligned or otherwise incorrectly aligned if you're using PipeWire. However, for a lot of scenarios this doesn't matter. I mean, if you're literally just doing in-the-box production with Ardour, it's going to just work. You know, these sorts of things only start showing up when you start doing more complicated routing. Yes, and then you'll see that blinking red "not aligned" button, just because the information could be incorrect, for example. So, you know, I think most of the issues with it are in the little corners of the JACK API at this point. I saw Robin noted in the chat that overdubs recorded with PipeWire will not
align right now. So, you know, stuff to do with the latency reporting needs to be tidied up, and there are a couple of other little corners of the whole JACK API that they need to improve a little bit. It'll certainly work for you if you're doing stuff almost entirely in the box and don't have crazy routing stuff happening; then it'll just work. And as I said earlier, I want to stress the positive side rather than the negative side. I mean, PipeWire really does represent a much brighter future for audio on Linux. I've been doing things on Linux for long enough now that my system is set up so that I can have my browser running and Ardour running and other JACK applications and other PulseAudio applications, and everything just works. But I'm really aware of the fact that when I try to explain to anybody how this works, it's really hard to explain how to get this set up and make it work reliably. And, you know, even doing this livestream today, to actually get my nice microphone to be feeding into the web-based video system that you're all looking at, I mean, I didn't have to hand over any of my children or anything, but it wasn't a trivial task to get that working. So I think the maturation of PipeWire and its adoption everywhere is going to be really fantastic for audio on Linux. Yeah, I would agree. The only thing I've seen is, I think it was with making routing, what are those called, inserts. I had some inserts that wouldn't connect to other inserts using the JACK API, and I had to switch back to ALSA for a crazy project I was working on. But the routing on that project is insane, so, yeah, makes sense. I think it works in normal JACK, I don't remember if I tested, but for the most part I've been using PipeWire full time on all my systems and it's been great. So let's see here. I think, Unfa, you have a question? Yep, I can read a question. Borheim... Borheim...
Yusef asks: Hello, I have always found that it's so hard to work with my local sounds. Could we get something like a media manager as a side panel instead of a standalone window? Thank you. We have made steps in this direction, which you'll see in Ardour 7. Probably not enough steps for the questioner or anybody else, but some steps in that direction. I want to outline first of all why we're not that convinced we want to go too far down this path. If you look at a really good sample library application, on Linux the obvious tool for that is Samplecat. Samplecat has a lot of features in it; it's a full-blown application for managing your sample libraries. It's not just a list or a tree view or some other relatively simple presentation of "here's a bunch of files". It lets you do things like renaming files, it lets you move them around, it lets you copy them, it lets you copy parts of them, it lets you change the metadata associated with files, and I think it lets you recode them. It's a really fully-blown, extremely useful tool, assuming that it works, which apparently some people think that it doesn't. But there were versions of it that worked; maybe it's bit-rotted in just a few years. But even if it doesn't work today, it represented what a fully-blown sample library manager should look like. And, frankly, the idea of building all of that into Ardour really goes against a little bit of the philosophy that we tend to have with the program. For a long time we did not bother to, well, actually we never have done, for example, CD burning. I know nobody burns CDs anymore, but back in the 2000s, when people wanted to burn CDs from their DAW, and some of the proprietary DAWs would let you do this, we just said: look, burning CDs is an operation with loads of different parameters and options and choices that you might want to make, and we don't really want to re-implement a whole well-designed user interface for doing this
when there are dedicated programs to do that. So we'll help you try to integrate with those other programs, but we're not going to re-implement that ourselves. And sample library management is sort of an example of something, oh dear, sorry, I just heard a noise outside of my house that's very bad news for me. Okay. Gotta grab your shotgun? No, just a piece of building material that has just been broken by the wind. Anyway, let me get back to the point. Yeah, so we have a little bit of reluctance against trying to build all the sophistication of something like Samplecat into Ardour. And to be honest, I think if I was going to do this, that's what I would want to do. I don't want, like, a totally half-baked thing for this. So, having said all that about why we won't do it: in version seven there is a new panel that's part of, it's not a standalone window, it's part of the editor list area, which is over on the right-hand side if you're showing it, and if you're not showing it, it's up in View > Show Editor List. It's called Clips, and this is our first step towards allowing you to structure your own sample libraries and go and find things in there. It doesn't do a whole lot right now, but it does provide some limited sense of organizing things by directories. So we leave that for you to do in your system file manager, we leave the renaming in your system file manager, but you can at least see some kind of a structure inside of Ardour, such that if you know how to use your file manager, you'll be able to actually get a very useful and coherent presentation in that list. So there are steps in that direction. I would imagine that over the next year or more you'll see additional improvements being made to that particular list panel, with new features, new capabilities, stuff like that. I do not think it is going to evolve all the way into Samplecat. I doubt if it will even evolve into the capabilities that some of the DAWs offer, because they have
got; some of them have gone a long way down that path. But for working with the clip manager, sorry, for working with clip launching, you know, my experience has been it gives you 70% of what you want right now. Yeah, this, sorry, oh god, this reminds me a little bit of the situation with Blender, the open source 3D creation package. They had their own game engine, and back then there wasn't something like the Godot game engine, which is coming up now and is actually starting to become competitive with the other, proprietary game engines out there. I always thought that it was a great idea that they axed the game engine in Blender, because it was a wasted effort. There are better game engines that are separate, and are also open source, and are great, so it would be better to work on integrating with them rather than waste your time reinventing the wheel and just spending your resources on something that isn't really needed and is never going to be as good as a dedicated tool. Yeah, that's a direction we generally like to go in. I have to say that there have been times in the past where that's been the philosophical position, and then we go look at the standalone tools and we're like, yeah, you know what, we could do this better, and then we end up redoing it ourselves. But, you know, if Samplecat really is broken at this point, it would be a really awesome thing for an audio developer, or not necessarily an audio developer, but somebody, to come along and sort of work on fixing the features of that, just because that was a program that had a lot of promise for that kind of work. Yeah, there's another one called, oh, I muted my mic, there's another one called SampleHive too, that I think Unfa probably knows a bit more about. I forget who's developing it, but yeah, I think that's a fair point that you don't need to do it if someone else can provide it; then it's
probably better to let them handle it and keep working on what you're focused on. Yeah. So let's see, where was I? I had one more question. Oh yeah, Ronnie asks: are there any new plugins added by the Ardour team in Ardour 7? No. That's a quick answer, so moving on. Right, here we go. Pyrox asks: is elastic audio for timing corrections on the horizon for Ardour? It is definitely on the horizon now. Internally, one of the big aspects of doing clip launching is the idea that all the clips you play are going to play at the tempo of the session at that point. And although we've had a time-stretching tool in Ardour for many, many years, so that you could stretch or shrink a region and it would maintain the correct pitch while changing tempo, with the clip launching stuff we're now actually doing that in real time. If you launch an audio clip that's at 134 bpm but the session is at 72 bpm, we're time-stretching it to make it fit the tempo. As a result, I've had substantially more experience than I have ever had in the past with using Rubber Band, which is the library we use for doing this stuff, and it's relatively clear to me how we would do this for material on a timeline in the future. I think it will also allow us to start doing things like what some people call warp editing, where you can go in, take a region, and say: okay, I want to stretch this, but this beat and this beat and this point here all need to stay at the same times, or the same relative distances from each other, and so on. So you get a sort of generalized stretch, not just a uniform stretch. The way to do stuff like that is now relatively obvious as well, so I think we will probably be adding a lot more things in that area. However, if you're literally just doing timing corrections, then it depends a lot on exactly what you mean.
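The tempo-matching Paul describes here comes down to simple arithmetic. The sketch below is an illustration of that ratio only, not Ardour's actual code (Ardour does the stretching itself with the Rubber Band C++ library); the function name is made up for this example.

```python
# Hedged sketch: the stretch ratio needed to fit a clip to the session
# tempo. This is NOT Ardour's code; it only shows the arithmetic that
# any time-stretcher (e.g. Rubber Band) must be driven with.

def stretch_ratio(clip_bpm: float, session_bpm: float) -> float:
    """Factor by which the clip's duration must be multiplied so that
    its beats land on the session's beat grid. Pitch preservation is
    the time-stretcher's job and is independent of this ratio."""
    return clip_bpm / session_bpm

# Paul's example: a 134 bpm clip launched into a 72 bpm session.
ratio = stretch_ratio(134, 72)
print(round(ratio, 3))  # 1.861 -- the clip must play back ~86% longer
```

A ratio above 1.0 means the clip is slowed down (stretched); below 1.0, sped up (compressed), in both cases without changing pitch.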
so for example if you're trying to uh you know some if you have a drum track and the drummer is you know he he hit the hi-hat too early um you can use warp editing to adjust that so even when we get there that would be a tool um I'm not sure that our workflow for that is is going to be as friendly as some of the other proprietary doors partly because I have a philosophical attachment to to saying you should tell the drummer to play it again um and not quite my tempo yeah or well it was the right tempo but you hit the hi-hat too early or you know can you bring it back uh you know I have very mixed feelings about some of the stuff that the technology we've seen emerging in the last 15 years in this niche that many of us work and live in um to take a tool like melodyne I mean you know it's not open source it's an incredibly powerful tool for manipulating audio you can take polyphonic parts and separate them by note and restructure chords with the sound of somebody singing and make them sing a different chord and it actually sounds really quite good um and I think with all of these tools there are creative purposes that they can be put to but there's also the thing they tend to get touted for more than anything which is fixing stuff that somebody didn't get right and I know that my own personal philosophy tends to be yeah the the creative side this seems interesting but the bit you didn't get right would I understand why time is money and you just want the what are you engineer to fix it in in the studio but I would personally prefer that you just go and improve what you did in the first place I don't really want to spend my time uh I think most of the other people have worked on order over the years I'm not super interested in spending their time working on things to to fix the mistakes that people are making when they try to perform um there's a lot of money in doing that obviously because you know commercial restore commercial recording studios and you know 
commercial music production benefit an awful lot from not having to have people come back into the studio and do another take. It's just not an area that I'm super interested in providing incredibly good workflows for. But I think over the next year or two you will see the emergence of some new features that use elastic time in ways that will make some of that kind of thing a little easier to do. I would like to add to that, because my experience with making music is that I had the same idea: if I recorded a bad part, I should just record it again and again until I got it right. So for the longest time, on all of my existing albums, which is four, no, five LPs, and some of them have vocal songs, there is no pitch correction at all, because I just rerecorded as long as I had to to get a usable take, and some of them aren't perfect anyway. But my life changed a lot, and I have much less time to actually do this. I just have no way of rerecording my takes over and over, or spending three days just rehearsing my vocals to record them perfectly. So tools like that could actually help amateurs get usable results from the very limited resources they have. Yeah, well, this goes back, if I can just speak now, to an even bigger philosophical point, and it's one that I struggle with. Struggle is too strong a word, but it's a thing I've thought about a lot over 20-plus years of writing audio software. The way I like to frame it is this: one of my favorite composers is the American minimalist Steve Reich, and I think about Reich being a young man in New York City in the late 1960s and early 1970s, creating essentially a new style of music. Because Reich had connections with some music academies in New York City, and because he lived in New York City, he had lots of connections to people, and he could form ensembles
that could come and play his music, even if that meant playing a repeating sixteenth-note riff for an hour and gradually shifting by a sixteenth note every eight bars. He could find people to come and do that, and he could therefore just hear what his musical ideas sounded like. If Reich had grown up in a small town in Kansas, or in eastern Poland, or in many parts of the African continent, in most parts of the world outside of New York City in fact, this option is not available, right? You don't have your network of experimental music friends, and people with basement studios and lofts to try all this stuff out in, or friendships with the local piano store where you can go in and actually use six pianos at once to rehearse your piece. All the technology connected with the DAWs and the plugins and everything that we've seen emerge over the last 25 to 30 years now means that that person, who is much more isolated and not part of a large thriving live musical culture, is now able to realize a bunch of their musical ideas. In many ways that just seems awesome. It's fabulous. It's expanding human creativity, expanding the reach of a person who, through no fault of their own, or maybe by their own preferences, just cannot utilize other people in that way, and who is instead now able to use computers. So on the one hand, I think it's pretty awesome that the software and the technology are allowing the potential for another Steve Reich, whatever your particular preference might be, to come along and do their own experiments with music and composition and that whole thing without needing to be in a particular kind of place. But at the same time, I also have a fairly strong conviction that most of the music from many different traditions and cultures around the world that I consider to be really great seems to support the idea
that music is really a social activity, and that humans have been involved in it as a social activity for centuries or millennia. I am a little troubled by the way in which the technology that I help to create is making it less of a social activity and more of an individual activity. I think that can be great, because I've sat here in this very room making my own stuff with VCV Rack, and it's a very enjoyable experience, and I don't want to have to go out and find other people who can make a noise like the Basal module in Rack; I want to just use the Basal module in Rack, and it's going to do exactly what I tell it to. But I do have some doubts and unease about the way in which we continually push the technology towards: oh well, Anfa doesn't need to actually either improve his singing or find a friend or somebody in his neighborhood who can sing, he can just do pitch correction and everything sounds great. Or I can shut the fuck up. Yeah, or you can shut the fuck up. On the one hand, being able to do that technological stuff seems awesome, in this case for Anfa's creativity and by extension for many other people's creativity. I just don't know what that does for the social aspect of music more broadly, and I worry that that's a very important component of most musical cultures that we don't really want to lose. I'm done. Yeah, I kind of want to add to that, but I think we'd just get way too deep into a philosophical discussion on music and completely lose sight of Ardour. So instead I'll just say, as someone deep into sound design and electronic music: being able to warp samples in a creative way is helpful beyond just pitch correction, since I can, for example, take a drum loop that's traditionally straight and swing it, because, you know, I'm not a drummer, I don't own a drum kit, I can't just go and... oh, I guess I have a gating issue. But as you were
saying, the social aspect... hopefully that's better... the social aspect of it is definitely lost in not finding a drummer, but it also means that if I just want a drum beat, I can take any one and change it. Maybe I can offset the kick using the editor and get a different feel to it instantly; maybe I can use the same loop for the entire song and shift it around five times. So I think there is merit to having this in there in a creative sense, but I do see what you're saying, in that it encourages you to not be good at your instrument. I absolutely agree with all the possibilities that it opens up, and I think sometimes even literally just straight musical ones, as opposed to interesting sound design things. One thing that I remember was a really profound moment, and this is not really to do with time-stretching in the sense of elastic time and Rubber Band and stuff like that. There's a VCV Rack, what do you call him, YouTuber, instructor person, wonderful guy, Omri Cohen. He has a great YouTube channel; if you want to learn how to use VCV Rack, Omri is the guy. He was doing a livestream and had cooked up this pretty great piece that was running at, knowing Omri, probably around 110 bpm. It was quite rhythmic and structured, I wouldn't exactly say funky, but really pushing the beat and the groove. He had sort of finished working on it, and at some point the question came up: what happens if we change the tempo? He put out a call to say, okay, what tempo should we try, and people were throwing out things like 120 or 85, and somebody at some point just said 33, which seemed kind of absurd, because you're slowing it down so much. Now of course there's no audio artifacts problem; we're doing it in Rack, so you're just changing the clock speed. He slowed it down to 33, and everyone who was
watching that livestream, including me and including Omri, everybody just sat back, and we were just like: wow, what the hell? It was a completely new piece of music, with new timbres and new nuances, even though everything that was happening was all just Rack, not completely deterministic but mostly, and it was astounding. And that was just from a simple tempo change. I know that when I've been playing with clips while working on Ardour 7, the difference when you load in a fairly upbeat, funky African drum beat and slow it down significantly is that it has a totally different kind of feel to it. So I definitely feel that this ability to manipulate things and mess around with them and transform them, yeah, maybe it is undermining the fact that you didn't bring the drummer back in and say: hey, could you try playing it much slower? It's undermining that, but it's also opening up so many possibilities for people that otherwise just wouldn't be there, because they wouldn't ask the drummer back; they don't even know a drummer, you know. And not to mention, you can then try it at all the different speeds, determine which speed you like best, and call your drummer back in afterwards to come and record, if you wanted to go that route. I do know a few musicians who do that, so it's not an uncommon workflow. But yeah, it's funny, the whole slowing-it-down thing is pretty much what the vaporwave producers quote-unquote discovered: you just take a popular song and slow it down 20%, and they do it with pitch shifting, so the whole thing is pitched down and slowed down, and it's a whole different experience. So yeah, it's pretty crazy. It's just a speed adjustment; there's no pitch correction, right, no time-stretching, it's just slowing it down. Yeah, exactly, the notes and tuning, everything just shifts.
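The "just slow it down" effect John describes can be quantified: with plain speed change (no time-stretching), pitch and duration move together, so slowing a track by 20% lowers every note by the same interval. A rough sketch of the arithmetic; the helper name is illustrative, not from any real tool.

```python
# Hedged illustration: how much the pitch drops when audio is simply
# played back slower, vaporwave-style, with no time-stretching applied.
import math

def semitone_shift(speed_factor: float) -> float:
    """Pitch change in semitones when audio plays at `speed_factor`
    times its original speed (1.0 = unchanged, 0.8 = 20% slower)."""
    return 12 * math.log2(speed_factor)

print(round(semitone_shift(0.8), 2))  # -3.86 -- almost a major third down
```

Doubling the speed (`speed_factor=2.0`) gives exactly +12 semitones, one octave up, which is why old samplers pitched everything up when you sped loops up.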
There are also other examples I could give. One that I like, again not accessible to most people, and depending on the style of music you're making perhaps not relevant at all: there's a progressive rock band, Porcupine Tree, and they did an album where all the drumming was done on drum machines, with relatively sophisticated drum programming. I mean, it was prog rock, it wasn't four-to-the-floor house or something. But the leader of that band was never really super satisfied with the way the drums sounded on the album, and in the end he brought in a live drummer and let him listen to it for a long time, and they re-recorded the whole album with the live drummer essentially playing the same stuff, but free to do his own fills and bring his own feel to it, and everyone thought the result was way better. Now, I'm sure there are a few examples in musical history where it's gone the other direction too, so I don't want to suggest there are hard lines here, like all music needs to happen by people getting together and playing instruments in the same room. I just also don't want us to lose that element of what making music has always been about in our species. At this point we should probably move on to another question. I've got one I'd like to answer, actually, if I can just jump in here. Go ahead. Skyhawk77 asks: has Paul Davis considered working for another DAW developer, like PreSonus, who can use his programming skills in DAW development? So, some more personal background history here. Well, yes, I have considered working not so much for PreSonus but with them, because before they developed Studio One, they were trying to decide what to do, and I did actually meet with the head of PreSonus, because they were considering adopting Ardour as the DAW they would go with. Obviously, as time has told us, they did not decide to do that, and in fact they
hired the guy who had already written Cubase and Nuendo to write Studio One. I have worked quite extensively with Waves, who are most well known as a plugin company based in Israel. We did a project several years ago called Waves Tracks Live, which was a DAW designed specifically for live front-of-house audio work. It strips a lot of features out: there's no editing, there are no plugins, but it had some features targeting front-of-house engineers. And of course I work very closely all the time with Harrison, who produce Mixbus, a commercial open source project that is based on Ardour but adds in a bunch of Harrison's ideas about workflow and signal processing. More broadly, frankly, it's inconceivable to me that I would ever go work for a company. The lifestyle I get out of running and working on my own project is unbeatable, and I don't think there are any companies out there that could really offer me anything more attractive than this. I'm also old, and the tech industry suffers from ageism, and the audio product industry probably does too, so as for the chances of me jumping ship to another company right now, I'm sure they'd much rather hire somebody who can talk Rust or other modern programming languages. So bring out your Rust! Now, it's quite fascinating to hear about the connections in the music software industry like that, that you've been working with different companies and different people that everybody recognizes, even though most of them will probably never have heard about Ardour yet. Yeah, I'm trying to rectify that. And you're doing a good job. Thanks, I'm hoping. I mean, the original story of this... well, actually they weren't really the first company, but at around about the same time that Harrison and I began talking about collaborations on what we could do together, I had a phone call one
day with a British voice on the other end. My wife had answered it, and so she gave the phone to me, and I said: hi, this is Paul. And the voice on the other end said: oh hi, this is Peter Gabriel. And I said: oh hi Peter, how are you, trying not to be too starstruck. So, Gabriel had just bought Solid State Logic, which is another very long-lived mixing console company, and they were probably the first company to literally just say: we'll pay you to work on Ardour. It didn't last for very long, for various reasons that were internal to Solid State Logic, but it was very nice, what's the right way to put this, it was very nice to see that the work I was doing was being seen, and seen by people who at least ought to know what they're talking about, and that they thought there was some merit to it. I suspect at this point in time I probably don't need that feeling as much, but when that happened, it was a very good feeling and really made me feel like I was probably doing something good here. Yeah, I mean, it's definitely helpful to me, so I'm glad to have Ardour around. So then I guess, do you want to talk a little bit more about your collaborations with Harrison? Yeah. The collaboration with Harrison has really been a fantastic thing for the program, and it's also illustrative of another, broader point. One of the issues that happened with Solid State Logic, and has happened with other companies I've spoken with too, is that they just don't understand how they could take a free, libre, open source piece of software and ever make any money with it. It just doesn't seem to make any sense: the software is free, right? How do we ever make money from doing this? And Ben Loftis at Harrison approached me, sort of in the, I think it was 2007-2008-ish, kind of time frame, because he had some ideas for some products that Harrison was interested in creating that could be based
on Ardour, and to Ben's enormous credit, he never really had a problem with this question. To him it was just kind of obvious: well, we could produce something really great for our customers. The early things we were doing were very specialized, but it would be really great for our customers, and our customers don't really care whether it's open source or not; they need a tool that gets the job done, and if we can build it for them with open source, everyone is going to be happy. As time went on, Ben had the idea: what happens if we took Ardour and changed it so that the workflow was more similar to working on a traditional console? To talk a little bit about what that really means: one of the things that's happened with DAWs, if you go back and look at the first DAW, which almost certainly was not Pro Tools, but Pro Tools was the first one that became big and widespread, is that the workflow they tend to promote is one where, to use a well-worn industry phrase, you fix it in the mix. Which is to say that you capture a lot of stuff into the DAW and then you start messing around with it. If you go back and look at what used to happen in analog recording studios in the '70s and '80s and into the early '90s, that's not really how you would work. There was a lot of focus on trying to make sure that what you were capturing was going to be as good as possible, and what you did afterwards was really just going to be icing on the cake; you couldn't possibly go in and fix everything. With most other proprietary DAWs, at least in 2007-2008, the whole manner in which you used the program did not really encourage setting things up to capture the best possible sound from the outset. They also didn't really have the same concept of channel strips that you would find in a traditional analog console. The big analog consoles would
have EQs and compressors built into each channel, and all the audio engineers knew how to use them. They knew they would be recording to tape, so you would get possible tape saturation effects that you could use at the end of it. They would have limiters and other things all built in, and those DAWs didn't. I mean, you could add those plugins to each individual track if you wanted to, but they weren't there by default, and a lot of people didn't use the DAWs in that way. Ben and the rest of the guys at Harrison wanted to build something more like the way you work with an analog console, which is to say: all these basic plugins are already there, there's a knob for every single one of them, and we've already set up the signal flow for you, so you've already got your mix busses, which is what you would have had on an analog console, and you've already got your master bus, which has its own separate processing section. That would encourage a different way of working. So they changed the whole GUI design of the mixer to look the way they wanted it to look, they didn't really touch the editor, they added their own DSP processing capabilities that are modeled on their consoles, and they came up with a product that is entirely open source except for the DSP part. It has a different sort of selling point from Ardour, a different target for why you should use this tool. In fact, when a lot of people started using Mixbus, the stories we read all the time were: oh yeah, I use this JACK thing, or I use Audio Hijack, to route from my Pro Tools session into Mixbus. At the time we were like: oh god, we wanted you to actually use it as a DAW and record into it. Initially a lot of people were using it just as a post-processing thing, but Mixbus has gone on to have a life of its own. We came to a very
fair financial arrangement that was designed to make sure that even if Mixbus stole every possible Ardour user in the entire world, I would make about the same amount of money from Ardour as I would if Mixbus didn't exist. Robin, who's in the chat, and whom people who see our commit list or hang around Ardour a lot will recognize, is probably the most productive developer that's ever worked on the program, in some technical sense. He actually works for Harrison, but almost all the work he does goes into Ardour first and then into Harrison's Mixbus later, although he does some other work that's specifically focused on the Harrison side of things. So our collaboration has been great. We interact with the Harrison folks on a regular, daily basis; I attend their weekly technical meetings. It's just all around been a really great thing for the program, and it's been a really great example of what I said: Ben very early on just realized that we can do a product with this open source thing, it's not an issue for us. Every other company I've interacted with loves the idea of the program, but when they start to get down to actual product stuff, they're like: yeah, but how does this open source thing work, how do we make money? I think Harrison proved that even in this tiny niche area of audio technology you can make it work. So I'm very glad to be in a partnership with them, and I hope and believe that it'll continue to be productive for both the Ardour project and Harrison Mixbus. Maybe we can look at the Ardour 7 changelog and go through that, what do you think, Paul? I think that'd be a great idea, let's do it. Or the not-yet-properly-written-up but sort-of-finished version. All right, so I see the big point on top is cues, or clip launching. Do you think we can take a look at how this actually works inside of Ardour? Yeah, we could. It'll probably
crash. But no, that's great, that's what I want to see; why else would I work with software? We can have a look at it. Just before you do that, I want to give two little bits of background here as to why we did this, or what the goal was. The first one is that for years, lots of people have wanted to do something with Ardour where their need was basically: I've got this drum beat and I just want to use it in my session, or I just want to pull up a drum beat so I can start playing over it. Although you could imagine lots of different ways of doing that, clip launching is one way of doing it. The other background to why we're doing this is that for many, many years I have long admired what Ableton has done with Live. They fundamentally changed the way that people make music with computers. They changed the whole workflow away from this tape model, where you have a timeline and you record things along the timeline, into something totally different. Before they created Live there were plenty of experimental people doing the sorts of things that you do in Live, but Ableton made it possible for everyone and their brother and sister in their bedrooms to do this stuff really easily. I've really admired that, and clip launching was a very fundamental part of it, and I wanted to offer as much of that kind of capability as we could to Ardour users as well. So, that said, show us how to use it, Anfa. Yeah, I have no idea, you tell me. So I suggest you switch to the page where we can see Ardour instead of my... So now everyone sees Ardour. I'm not... oh, there it is. Yeah, now we do. Okay, now everyone sees Ardour. So the first thing that you're going to want to do... again, let's just talk about the general idea. If you've used Ardour in the past, or you've used any of the traditional DAWs, Pro Tools, Logic, Cubase, blah blah blah, the rest of them, you'll know that most of the workflow for editing and composing and creating
is in this window, which we call the editor window. You've got things laid out along a timeline, and so you might import material, you might record material, you might draw in MIDI notes, and so on, but you've somehow created it lined up along this timeline, with the implication: well, I want to put this region here, this is where I played the bass and it sounded like this. The clip launcher is a completely different model. Instead of things being laid out on a timeline (I'll get back to a big caveat to this, which is one of our extensions, but let's just talk about the Live/Bitwig model first), it's a way of actually playing material that doesn't have a particular position on the timeline. It's material that you launch just because you want to hear it right now. Obviously, because it's so different, we're not going to do very much with it in this editor window. So if you go over to the upper right: there used to be three buttons up there, labeled Rec, Edit, and Mix, and there's now an extra one that says Cue. If you click on Cue, you get into a completely new tab that didn't exist before. You'll notice it's mostly blank right now. Just for the simplest possible demonstration, because you may not have any other material right now, if you click on the FLAC clip there, if you double-click on it, you'll hear it. Yeah, it played because I have auto-play enabled. That's right, there's a checkbox. So if you turn auto-play off, you have to double-click it; if you have auto-play on, it'll just play. I'll check if it plays on stream. It does. Okay, everybody hears it except you. Except me, great. The magic of live audio routing. That's fine, I think I can fix that. So, to do a completely trivial example, which is almost useless: if you simply click and drag that over into the open space... yep, so that created a new track, and the track here looks a
little bit like the mixer strip, but it's missing things, and all the ratios have changed a little. At the very top of it, rather than the buttons for input and output and phase control and all that kind of stuff, we have a set of eight slots that you can put bits of audio or bits of MIDI into. Obviously, if it's an audio track... we just created an audio track, because this is an audio clip. We've got eight slots in here, and if you click on the little white arrow next to the slot that has something in it, it plays. It plays. Now, this happens to be Ardour's own click at 120 bpm, which is the session default. How do I stop it? Stop it, yeah, this is confusing in Live too. You see that little rotating pie thing down the bottom? Oh, that's the... somehow I did it. You can also click on any empty slot, on the square button there. That's exactly how Live does it, because the bottom one is basically your stop button, which can be represented on your controller. So that's not very interesting, because everybody's heard the click before. But if you now go to your tempo button, which is just below the clock, click on the tempo and change it to, whatever, 85 or something. Let's do 85, as you say. And now go back and click on this again. Okay, it now fits the tempo. Now, the other thing you didn't notice... can you stop it again for me, and stop the transport? You don't need to stop the transport, but that's fine. So if you look up at the clock... oh, I was gonna... I'm not sure... can you hear it? I tried to route it so I can hear Ardour. Does it sound right? Because I feel like it didn't change the tempo of the click. Yeah, I'm not sure it did either. It seems like it's delayed. Go press the real click button, in the top corner where the metronome is, Ardour's regular metronome, so we can hear it. There you go. Yeah,
so the metronome plays 85, but the clip here does play 120. That was a really successful demonstration, wasn't it? I have no idea why that's happening. I guess that means I'm going to have to go back into testing again. That's why it's not final yet. Yeah, that's really weird, because I've been testing this day in, day out for days, and that should have just changed the tempo of that. There's an option for stretching; I see we have a panel and it's turned on, right? Let me just briefly try something here and see if it's just been broken in some recent build, or if there's something else going on. Live Ardour debugging! Yeah, watch Paul work on Ardour live, the excitement. I suggest we move on, because I understand what was supposed to happen, and I believe I saw a very early version of this, where the clip launcher was a plugin, and that did work; I remember that working. So yeah, something doesn't click here, pun intended, but I guess we all know what it's supposed to do, and there are lots of interesting things here. Oh, I think we can't hear you, Paul. Could it be that Ardour might have stolen his audio? Yeah, look at it. So then I guess we'll just talk for one second while he sorts that out. Yeah, I'm pretty excited for clip launching. I used to write music like that for a little while. What I would do actually is write all the sections of my song, then split them up on the clip launcher, and then I would just play through it, and when it felt like it was time for, you know, the next section or the breakdown or whatever, I would just launch the clips then. This works great for progressive house and similarly straightforward genres. I'm guessing you can hear me again? Yes, you're back, we can hear you. All right, good. So what should have happened was that it would stretch and fit the current tempo. The other important
part of this is that when you launched it, if you look down at the bottom, in the second box, the launch options, the launch quantize setting was one bar. So if you actually click on it right now, you will notice that the clip doesn't actually start playing until the transport rolls through the next bar boundary. One bar makes it a little more obvious what's happening; if you set it to a small value you may not be able to tell the difference. So leave it at one bar, go up and start the clip... I don't know what's happening here. I really wish this was not happening, but you'll have to listen to my voiceover rather than the demonstration. Highly awkward. All right, so that should have started exactly on the bar, and when you stop it, it will actually stop on a quantization smaller than the bar, but again it will stop in time with the music. So the real feature of this clip launching stuff, and this is what Live really pioneered, is that when you introduce new musical material, it is automatically stretched to fit the tempo, and it automatically starts and stops on your musical time grid, so nothing ever really goes in and out of time. Everything is aligned. Obviously, if you're doing a more experimental kind of music, that may not be what you want, but for a lot of the kind of music that many people make, that's exactly what you want. We have not replicated everything that is in Live. So let's talk a little bit about the properties down at the bottom, which John is currently sitting directly on top of. I've got to move you out of the way, John, sorry. Can you move the window over towards the right-hand side? There we go, into the blank space. So we have four boxes down here that show various things to do with the clip. These boxes show up whenever you select a clip.
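The launch quantize behavior described above can be sketched in a few lines. This is purely illustrative (not Ardour's code): a launched clip waits until the next boundary of the chosen musical grid.

```python
import math

def next_launch_time(now_beats: float, quantize_beats: float) -> float:
    """Earliest grid point at or after `now_beats`.

    `quantize_beats` is the launch-quantize setting expressed in beats,
    e.g. 4.0 for one bar of 4/4, or 1.0 for a quarter note.
    """
    if quantize_beats <= 0:           # quantization off: start immediately
        return now_beats
    return math.ceil(now_beats / quantize_beats) * quantize_beats

# A clip launched at beat 5.3 with one-bar (4-beat) quantization waits
# until beat 8; with quarter-note quantization it starts at beat 6.
```

The same rule applies symmetrically to stopping, which is why material never falls out of time with the grid.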
You'll see the selected clip surrounded by red in the strip up there. The clip properties box is the first one, and those are really not to do with the audio itself; they're metadata kinds of things. The second one, launch options, is where you really get into the Live-style stuff. The most significant ones here are probably the launch type, and some of these do not function correctly yet, but if you click on "trigger" you'll see there are several different launch styles. "Trigger" just means it starts playing, and there are others down here that give you different behaviors when the clip starts up, which are kind of useful. Launch quantize controls which musical grid point the clip will start to play on, and for a lot of things one bar is about right, but if you're working with one-off sounds you might want them on a quarter note, for example, so that they play pretty much exactly when you hit the controller. I actually can't even read them on your screen; can you zoom in a little, because I'm actually reading it off the video. Yep, there we go. There's legato mode, which again I think is not quite working and needs to be handled before the release. Legato mode is a kind of sophisticated thing where you switch between two clips but maintain relative position within them, so if you are halfway through the first one, you start halfway through the second one. Cue isolate we can talk about later; it's a bit complicated and related to something else. Velocity sense: if you are controlling this from a device that can send MIDI velocity, you can turn up how much the MIDI velocity affects the volume of the clip. By default it has no effect at all, and you just hear the audio as is; if you turn it all the way up, then your MIDI keyboard or pad controller will control the volume.
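The velocity sense control described above reads like a blend between "ignore velocity" and "scale fully by velocity". Here is a minimal sketch of that idea; the function name and the linear blend are my own assumptions, not Ardour's implementation.

```python
def velocity_gain(velocity: int, sense: float) -> float:
    """Gain multiplier for a clip launched with a given MIDI velocity.

    sense = 0.0 -> velocity ignored, gain is always 1.0 (audio as-is)
    sense = 1.0 -> gain fully proportional to velocity / 127
    """
    full = velocity / 127.0                    # gain if velocity fully applied
    return (1.0 - sense) * 1.0 + sense * full  # linear blend of the two
```

In between the extremes you get a partial response, so a soft pad hit lowers the clip's volume somewhat without silencing it.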
Now, the next thing that Live did, which was really the basis of the revolution they created with this, was the observation that it's not enough to just play this particular sound; the question is, what happens after the sound is finished? Obviously, if you are playing drum pads, for example, the answer is easy: I'm going to make it start, and when it's finished it should just stop, because I'm trying to play a kick drum, and that will be fine if you are manually playing drums. But if you are playing loops, you might want one loop to play and then switch to another loop that is somehow related to it, and then maybe to another, and then maybe come back to the first one. So if you click on the follow options dropdowns, there are two possible follow actions for each clip, each cue. "Stop" is the trivial one, which means after you play, just stop, don't do anything else. You can also pick "again", which just keeps on looping forever; that would be useful for some fairly basic things. You can also choose "forward" or "reverse", which means it will go to the next occupied slot or the previous occupied slot, and these will loop around. Or you can make it jump, and the jump one brings up a dialog which gives you the slots, or you can have a multi-jump thing where you say where you want it to jump. So if you close that window for a moment, or maybe you can imagine it: if the other slots were filled, you could actually say that after A has played, I want you to jump to either D, E or F, but I'm not going to pick which one. If you engage them now, we don't have anything in those slots, so it won't do anything, but that would introduce some controlled chaos. That's right, and obviously how much chaos will depend a lot on what you actually had in slots D, E and F. If you had a loop that was extremely similar to the loop in A, it's going to be just a subtle variation, you know, a drum pattern with a couple of extra hits or a couple fewer
hits. On the other hand, if you change from one type of sound to a completely different type of sound, then you're talking about real chaos. There are two follow actions, and there's a slider below them which controls the probability of picking one or the other, and this introduces yet another layer of chaos, in the sense that most of the time it might pick "again", but every once in a while it might go on to the next clip. Wow. Yeah, Live does all this stuff. This tickles my imagination; I have not worked with Ableton Live, so this is new to me. Then below that we have a follow count. What the follow count says is, if you're using something like forward or reverse, for example, and you bump the follow count up to, say, two... the way you have it set up right now, let's move the probability slider all the way... I'm sorry, move the probability slider all the way to the right. Okay, so this means every single time, we're going to go forward to the next slot when this clip finishes. Oh, but wait, the follow count is only one, so we'll do it right after it finishes. If you bump it up to two, what this actually means is that it's going to play twice before it picks the follow action, and you could have three or four or however many times you want. So this is sort of a built-in repeat count before we decide what to do next. There's also follow length, and this is a bit sophisticated for me to try to explain in this context. Let's just say that it lets you control how much of the clip will actually play before you consider the follow action, and you can set it to less than the clip length or more than the clip length to get some very different kinds of effects. I'll be doing a demo video of this once we get the release out.
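The follow-action logic just described (repeat count, then a probabilistic choice between two actions) can be sketched as a small decision function. This is my own simplification for illustration, not Ardour's implementation.

```python
import random

def pick_follow_action(plays_done, follow_count, action_a, action_b,
                       prob_a, rng=random):
    """Decide what happens when one iteration of a clip ends.

    plays_done   -- how many times the clip has played so far
    follow_count -- built-in repeats before a follow action is considered
    prob_a       -- probability (0..1) of picking action_a over action_b
    """
    if plays_done < follow_count:
        return "again"                    # keep repeating the same clip
    return action_a if rng.random() < prob_a else action_b

# With the slider hard right (prob_a = 1.0) the choice is deterministic;
# values in between give the "controlled chaos" described above.
```

A usage example: with `follow_count=2`, `action_a="forward"`, `action_b="stop"` and `prob_a=0.9`, the clip plays twice, then usually advances to the next occupied slot but occasionally stops.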
Or you can watch the many videos there are about Ableton Live. And then finally, in the last box, we have the stretch options. One of the most important things here is the BPM: Ardour will try to detect the tempo of the clip, and it does a relatively good job. It's not perfect, so we give you divide-by-two and multiply-by-two buttons to help you fix the most obvious errors. If it detects a BPM and thinks it has a good answer, the stretch button will be turned on, and that means that when we play the clip, we will try to stretch it to fit the tempo. The button on the other side of that is the stretch options, and anybody who's seen the Time FX tool in Ardour will have some idea of what these settings mean. What just happened? I pressed Tab; I was being smart. You missed Escape, so we lost the button. If you click on it again... there we go. And then finally there's a clip length setting, which again can be used to determine how long Ardour actually thinks the clip is, and this interacts with the BPM counter. So Ardour analyzed this clip and decided it was four beats long at 120 BPM. If you change the clip length, I think down to two, for example, you'll see the BPM change. There we go: now it thinks the BPM is only 60, and that will affect what the stretch does. Now, the thing that's missing here, as I mentioned earlier, is that in Live or Bitwig, the area your mouse pointer is in right now, to the right of all those boxes, would contain a clip editor, and it would let you actually control the starts and ends of the clips, and if it was MIDI data, it would allow you to edit the MIDI contents of the clip as well. That will be coming in some future version; I don't want to commit to any particular time frame.
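The interaction between clip length, BPM and stretch seen in the demo follows from simple arithmetic, sketched below. The numbers match the on-screen example (halving the declared beat count halves the BPM); the function names are mine.

```python
def clip_bpm(length_beats: float, length_seconds: float) -> float:
    """Tempo implied by declaring the clip to span `length_beats` beats."""
    return length_beats / (length_seconds / 60.0)

def stretch_ratio(clip_tempo: float, session_tempo: float) -> float:
    """How much faster (>1) or slower (<1) the clip must play back
    to fit the session tempo."""
    return session_tempo / clip_tempo

# The demo clip: 4 beats over 2 seconds -> 120 BPM.
# Re-declare it as 2 beats long -> 60 BPM, as seen on screen.
# Playing a 120 BPM clip in an 85 BPM session slows it to about 0.71x.
```

This is why a wrong clip length produces a wrong BPM and therefore a wrong stretch, and why the divide/multiply-by-two buttons are usually enough to fix detection errors.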
Now, it's unfortunate that there's some bug in this session that is not stretching things properly. Because of that, and because we have a lot of other stuff to get to, we should perhaps move on, but let's just do one little experiment here. Stop the transport, just because the clock is distracting me. If you go up to the Window main menu and go down to the library downloader: for the first time, Ardour is going to be providing some content, thanks to some very fine beat and loop makers at looperman.com, who provide totally free samples for you to use, and I have their explicit agreement to redistribute them, under a CC0 license in almost every case. We've now got some loops in here that you can download and start using. The smallest one, I think... there's a dancehall one in there somewhere; I think it's only like 37 megabytes or something. It's seven megabytes, even, and that's the dancehall one. Yeah. So if you just click on the install button... it shows "installed". Now you can close the window, and if you click on Ardour bundled content... right, now we can see the Nika's Beats dancehall loops, and you can drag one of those into a different track; just drag it into the open area. It plays immediately, okay. So now click on this little start button, and now try to change the tempo on this one: change the main tempo under the transport clock, let's go back up to 120 or something. This works! So now we could, if we wanted, mess around with putting other clips in and looking at the follow actions and stuff, but since this is not actually the clip launching tutorial, we can move on. But this is very, very interesting. I haven't seen any of that. I looked at Ardour 7 pre-release builds, then kind of stopped looking at them to let myself be surprised, and I'm happy I did, because this has changed and developed immensely. It's a completely different
thing, and it seems to be a completely different workflow. I did not imagine Ardour could go in this direction and allow such things, and this is really cool, because I have been looking for a live sequencer, and there are many open source live sequencers, but all of them have their good and bad sides, and unfortunately more bad sides than good. Plus, I really like to keep my things in Ardour, so it's easy to pick up a project I've been working on, easy to archive it or send it to someone. I'm not very keen on using session managers that open up multiple programs and route them together; I prefer not to need that. So I'm interested in seeing how Ardour can make that happen, especially with MIDI, because I synthesize everything. Now, the other thing I would like to mention briefly about the MIDI stuff that is kind of cool: one of the big limitations in Live, and I still to this day don't understand why they did this, is that Live has zero concept of MIDI channels, in the sense that they force everything to channel one. Because we have MIDI channels everywhere in Ardour, and we've never tried to hide them or do anything to them, you can actually load multi-channel MIDI clips, and you can do things in clips that you can't do properly with only one channel. For example, drum flams, where you actually need one hit on channel one and the next hit on channel two, so you get two hits even though the first one is supposed to keep ringing while the second one happens. It's impossible to do this in Live (actually, I don't think Bitwig forces it, but in Live you certainly can't, because they force everything to one channel), whereas here you can have multi-channel, multi-timbral MIDI loops, and this allows for a lot of possibilities. They can also include program changes to make sure you always get the right sound.
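The flam example above can be made concrete with a few MIDI-like events, here written as plain `(time_beats, type, channel, note)` tuples for illustration. The point: two overlapping hits of the same note can only coexist on different channels; a channel-agnostic DAW that forces everything to one channel collapses them onto each other.

```python
# A snare flam: the grace hit on channel 1 lands while the main hit
# on channel 0 is still ringing. (Channels here are 0-based.)
flam = [
    (0.00, "note_on",  0, 38),   # main snare hit
    (0.03, "note_on",  1, 38),   # grace hit; first hit keeps ringing
    (0.50, "note_off", 0, 38),
    (0.53, "note_off", 1, 38),
]

def channels_used(events):
    return sorted({ch for _, _, ch, _ in events})

def forced_to_channel_one(events):
    """What a channel-agnostic DAW effectively sees."""
    return [(t, kind, 0, note) for t, kind, _, note in events]

# After collapsing, both note-38 hits collide on one channel, so the
# first hit can no longer sustain under the second.
```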
There is another button, which you would see over in the clip properties if we had a MIDI clip here, to turn off the patch changes, because sometimes that's not actually what you want. It's like: I want to use this MIDI pattern, but I don't want the patch it came with; I'm just going to leave it and do my own sound. So the MIDI stuff is really cool. With some of the MIDI loops I was playing with in the last couple of weeks, multi-timbral drum stuff, just listening to what you can do by changing out the voice for one of the hits is fantastic. Obviously you can do that in Live too; it's just that here, splitting them across channels allows you to do timing stuff that would not be possible otherwise. Before we get away from clips, let's talk about the one other big thing we've done here, partly because we didn't implement the editor. Our guilty conscience said: oh, we have to come up with something else that's really cool. If we're not going to have a proper editor down there to let you determine where your sample begins and ends, or whatever, let's do something else that's really cool. If you go back to the editor window, or tab, or whatever we're going to call it, you'll notice that among all the rulers up top there's a ruler at the bottom; they're called cue markers, I think. Cue markers, yes. If you right-click anywhere on there... Cue A, and let's have Cue B. And let's mute... well, you've only got one slot filled, so it's not really worth it, but let's mute the first track, because it's got the horrible clip sample that doesn't do anything useful. Yep. Now just move the playhead all the way back to the beginning... oh, I'm sorry, not back to the beginning; I didn't realize we were so far in. Yeah, me either. Just drop another cue marker somewhere... oh, there it is. Put the playhead there, and just roll. So now you can sequence these cues
from the timeline. Now just jump to a little bit after that green cue marker. Oh, it picks up, sample-accurate; it knows that there should be a clip here, and it just picks up. Just... wow. This is crazy, and it's crazy awesome. I have so many questions and ideas. So this is a workflow that you cannot do in any other DAW. Live and Bitwig have obviously had clip launching for a long time; DP10 has it, Logic has it now, and I think there might be another one or two that have it, but none of them have this notion of being able to put what these essentially are, to use their terminology... if you go back to the Cue page for a moment, those green buttons on the very far side are scene launch buttons. When you click one of these, it activates the whole scene, which means every slot in every track will start: everything in row A, for example. Right. Now, Live has gone a bit overboard, and we did not do this: they actually have follow actions for scenes, which maybe we'll implement at some point, but it just seemed like... I guess it's kind of handy, but it gets a bit crazy when you start doing that stuff. Let's not overcomplicate it at first, yeah. What we're essentially allowing you to do is to trigger those scene buttons from the timeline, so that you can play around with a bunch of stuff in this Cue page, find some arrangements that you like, find some combinations of different sounds and different grooves that go together, and go back to the timeline. Oh, by the way, I should say that you can record the cue markers as well... I can't quite remember how... yes, "record cues", there we go. So now, if you start playing and you launch slots, or scenes for that matter, they'll be recorded into the timeline. Yeah, that'll be good. Now stop the transport and go back to the edit window: there are my cues.
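A way to picture the cue-marker mechanism described above: markers on the timeline map positions to scenes, and the scene in effect at any playhead position is determined by the most recent marker at or before it. This sketch is my own guess at the model (consistent with the sample-accurate pickup seen in the demo), not Ardour's internals.

```python
# (beat, scene) pairs on the cue ruler, in timeline order.
cue_markers = [(0.0, "A"), (16.0, "B"), (32.0, "A")]

def active_scene(playhead_beats: float):
    """Scene implied by the most recent marker at or before the playhead.

    Because the scene is derived from position rather than from having
    rolled through a marker, locating to a point after a marker still
    picks the clips up correctly.
    """
    current = None
    for beat, scene in cue_markers:
        if beat <= playhead_beats:
            current = scene
    return current
```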
They are somewhat snapped to the grid, but if you drop one manually, you can put it anywhere you want. The clip still will not start until its quantization tells it to start, though: if it has bar quantization and you drop the marker a little bit before the bar, it doesn't mean it's going to start where you put the marker; it means it will start at the quantized point, because the whole point of this is to get everything lined up. By the way, you can turn off quantization, and that means that when you launch that slot, it just starts right now. You would use that, for example, if you had samples that were just, I don't know, some crazy boom, or some other percussive sound, and you just want the boom. If you want to play drum accents, just add a little percussion that is improvised, exactly. So, that timeline thing: the idea is, mess around in the Cue page, come up with some things that you like, maybe record yourself triggering the scenes, record all that, and now it's on the timeline. Now go back to the timeline; maybe you want to play live over it, maybe you want to play your guitar, or sing out of tune over it, or something else. Basically, everybody I've spoken to who uses Live, the one thing they've always said about it that has impressed me, is that it may not be the DAW you want to finish a track in, but as a way of playing around and experimenting with what the track might be, it was sort of unsurpassed until Bitwig came along. What we're trying to do here is integrate that idea with the notion that you might actually go back to the timeline in Ardour and really do a lot more work there. Having the cues there means that you can still use these loops that you've liked and enjoyed, but then go back to the editor and do the kind of work that you were already used to doing in a
linear editor. I have a question and an idea. One thing that I guess people might want to do is compose a track on the timeline, in the editor view, and then pick loops and put them into the cues, so they can perform their composition live. Totally trivial. So you can record in the timeline; in fact, why don't you do this right now: on that first track, just record an empty region for, I don't know, a few seconds. Okay. And now... yeah, I actually don't know how to do this. Robin, tell me how to do this. Try right-clicking on it. Right-click... because I actually don't know. Maybe it's "ranges"? Oh wait, no, no, it's nothing to do with that. It's in the region itself; there's an operation to put it directly into the slot. I wonder if Robin is even still watching. There is an operation, very easily accessible, where you can take that region and just put it into one of your slots. I imagine I could, you know, sequence a drum beat or a bassline, slice the different versions of it into clips, and then be able to trigger my synthesizers or samplers live. Yes. And in fact, another workflow that we were trying to support here is this: if you watch videos of people using Live, one of the things you'll notice, particularly in the ones Ableton produces, is that a lot of the time they get everything right the first time. It's really remarkable: they record a little four-bar thing on their guitar and it's exactly correct. As we were thinking about how people might actually come up with material... if you play an instrument, this is not really the in-the-box thing so much, but you're probably going to be grooving around and doing a bunch of stuff, and maybe you've got a beat going, but you're playing your guitar, playing bass, or playing something else,
and you're just noodling around, and eventually something comes up that seems to be about right. And Robin in chat said how to do this: it's an option in "bounce". What about MIDI, though? Can we bounce MIDI? And... there it is now. It's muted, so... you can move it to a different track. Oh, that's awesome. All right, well, maybe I didn't understand what he meant, but it does work. All right, so this is for audio. Oh, I see, it's "bounce". So you could have been jamming along for a while, and somewhere in the middle there was this point where you suddenly thought: that was awesome. You can go back, trim the region down to that little thing, and just bounce it straight into a slot, and then use that. So we were trying to support both the totally in-the-box thing, where you've got loops and clips and samples that you know you want to use, or think you might want to use, and the person who is just messing around with musical ideas and, after half an hour or two hours of playing, finds there was some stuff in there they'd like to make a track with. So I wonder if I can make a clip from a MIDI track. I made a little drum beat... it's hardly a beat, but it is a drum loop, say a kick drum. I wonder if I can bounce this... bounce... okay, but it's going to bounce into audio, I guess. "Bounce to clip library"... no, it just went to the clip library, not the slot. Now, part of the reason for this, I think, is how you created the track. Can you hit cancel for a minute and click on the Cue button up in the top right again? Yep, so you can see the Cue tab. Oh, what is that? That is the MIDI track you created. Uh-huh. So actually, I would have expected the bounce here to offer the choice to go directly into a slot, because it would bounce to MIDI, not to audio. Yes, it will bounce to MIDI; it
won't bounce to audio. Robin writes: the bounce dialog has a "bounce to slot" option; MIDI works too, and "without processing" you can add it to the slot. There we go. So "with processing" is going to generate audio, and that goes into the clip library, whereas "without processing" leaves you the MIDI, which you can use directly in the track. So we have two options for MIDI clips, bounce without processing and with processing, and I guess "with processing" produces an audio clip. Audio, yeah, and that goes to the library, and then you can use it in an audio track, because we can't put an audio clip as a cue into our MIDI track. That makes perfect sense. Maybe it would be good to add a little "into audio" label or something; that would make it a little easier for people to understand. Yeah, we are overloading that menu item a little bit there. Whoa, okay: "bounce to trigger slot". Let's make it D, because I didn't use that one. Bounce... oh, it's there. It doesn't seem to play... it does play. So I have managed to synthesize a kick drum, even, in this livestream. Are you happy, guys? And now I'll go back up to the tempo button and decide: do you want all of this at 92, or maybe 91, because that's the year I was born. Yeah, it just works. I don't know what happens to my timeline, though; does my timeline fall apart? Do all the cues stay where they are? So they fell out of alignment with the... That's an interesting observation; I think I have not thought of that before, so that's a good thing for me to scribble down. I always like to provide feedback and bug reports with these livestreams. There was more stuff I could show, probably needing work. All right, Robin Gareus says "with processing" could also mean MIDI filters, no synth. Yeah, so I think if you have MIDI
processing on a MIDI track, you probably don't want to bake it in and then run it through the same processing again. For example, if I have a MIDI plugin that randomly plucks my chords, I probably don't want to pre-pluck my chords and then pluck them again. Yeah, or, as we were discussing earlier, maybe you do, because that's part of your experimental sound-design workflow, right? Yep, so you can do that here. Wow, crazy stuff. Do you think we should maybe move on through the change list? Because we have 30 minutes left, and this is amazing; I'm really looking forward to this. I would like to ask a question about MIDI and how MIDI time is handled, but maybe it's going to come up in the changelog, so what do you think? I'm just going to very quickly answer John and say that in this release there is only going to be support for using the Push 2 surface from Ableton to launch clips. The internal architecture is already there to launch them from MIDI notes, but coming up with a design that would let you use, let's say, a standard MIDI keyboard and do something sensible with it is a little bit challenging. So this release will see the Push 2 support; another dot release, maybe 7.1, maybe 7.2, will have support for the Novation Launchpad to do this stuff, and along the way I may come up with ideas for how to provide a totally generic mapping. The issue is that you could have more tracks than fit on the screen, and you could actually have more slots than you can see, and deciding what a given note is actually supposed to trigger, to match up with what a person is really looking at on the screen, becomes quite complex. Even in the case of the pads, like the Push 2 and the Launchpad, this notion that you need to scroll around, up and down, makes things somewhat complicated as well, and for that reason we held back on it, even though
the mechanism is already in place internally. All right, so moving on. Internal time representation: I think it's not worth saying a lot about this, because for most users it's purely technical internal stuff that they don't really need to know about. What I will say is that the main thing the design is intended to do is to limit, or hopefully eliminate, bugs that happened in Ardour 6 and earlier, particularly with MIDI, where the question is: how do we represent the time of a MIDI note? Is it measured in samples, or is it measured in some sort of musical duration? In Ardour 7 we have a completely clear separation between these two time domains. Anything that's supposed to be in audio time is in audio time, everything that's supposed to be in musical time is in musical time, and we try to limit the conversions we do between them. It basically means that hopefully you will no longer run into bugs such as "I have a MIDI note at the very beginning of this region, and it sometimes doesn't play". I don't think users need to know much more about it than that, unless they're into some really hairy code that I'm sort of proud of but wish we didn't need, and if they're into that, they should go read the source code. So let's talk about ripple modes, because this is cool. Let's go. I think we didn't do a very good job with ripple editing in previous versions, and there are still some things left to be done, particularly if you like to do classical four-point editing, as used in classical music production, but we have changed, and I think dramatically improved and made more understandable, the way ripple editing works. Ripple editing, for those who don't know what it means: if you select a range on a track, or across the whole session, and you cut it out, the default behavior in Ardour is that all the material that
came after just stays in the same place; it does not move, which Anfa is about to demonstrate. So he chopped out part of that region, and the bit that came later stayed exactly where it was. Now, if he undoes that operation and then goes up to the editing mode button and selects ripple... actually, I don't want you to do this with a region selection; can you make that a time selection instead? There we go. Okay, so now delete. This time we deleted it, and the later material moved back to fill the gap. So that's the basics of ripple editing: you cut out a chunk of time, and the material afterwards ripples backwards in time to fill in what was removed. This is not necessarily of much use for music editing, although it can be used there, but it's really valuable for things like podcasts, any kind of vocal production, anything where you've literally got words that you do not want in the finished version and you want to get rid of them, which means all the later words should move up. Yes, recordings like voice acting; cutting all the lip smacks, this speeds up that work a lot. So the next thing we've done is to split ripple into three related and hopefully logically understandable variants. Ripple selected is the one where, when you make a time selection, as Anfa did a minute ago, that selection shows up in a single track, and when we delete, just that track ripples. Now, if he had another track in the session, which he's about to get... I was hearing a different beat, so that's going to be bad, but there we go, it's fine. You can mute it if you're going to play, but we're not going to play it back anyway; it doesn't really matter.
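The basic ripple delete just demonstrated can be sketched in a few lines: regions after the cut range slide earlier by the removed duration, while earlier material stays put. Illustrative only; regions are `(start, length)` pairs, and to keep the sketch short the cut range is assumed not to intersect any region.

```python
def ripple_delete(regions, cut_start, cut_len):
    """Remove `cut_len` units of time starting at `cut_start`,
    sliding all later regions back to close the gap."""
    out = []
    for start, length in regions:
        if start >= cut_start:
            out.append((start - cut_len, length))   # ripples back in time
        else:
            out.append((start, length))             # earlier material stays
    return out

# A plain (non-ripple) delete would leave `regions` untouched, which is
# Ardour's default behavior shown first in the demo.
```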
So now do a time selection on one of those tracks: R for range, yep, time select, and now we are in ripple selected mode, so hit delete. Okay, so we only rippled the selected track: that was the only track selected, and that's what happens. Now, there are some scenarios where this is precisely what you want, and other scenarios where it's not, where you want to operate across the whole session at once. So we can undo. Go to ripple all mode; okay, now select a different time range. It would have been nice if it had propagated, but yeah. So the key thing here, which John helped me understand the other day when I was trying to describe this, is that the real way to understand the relationship between these modes is how the selection behaves. When you're in ripple selected, you select something in one track, or in however many tracks you want, and those are the tracks that will be affected. When you're in ripple all mode, the selection propagates across every track, and when you do the deletion, it affects every track. The difference is that you don't have to think "did I select all the tracks?" If I want to work across all the tracks all the time, just put me in ripple all mode: I can now drag in a single track, but the selection propagates across all the tracks, and therefore when I delete, the ripple happens across all of them. So if you were working with material where everything needs to stay aligned, ripple all is what you would probably be using. However, as some of you know, at the beginning of this year I sat down and did a very long chat with Justin Frankel, the lead developer of REAPER. We recorded it in REAPER, and then I edited it in Ardour, and it was the first experience I had of editing a podcasty, two-people-talking kind of thing, and it was awful, because you don't actually want either of those two modes. What you
actually want is what we call interview mode and so if you can go back up to there and go to interview mode and what happens in interview mode and my my mind is now going blank and i can't let me just go back and read what's new to myself how this works um that's right so so if you do exactly what john just said imagining that the upper track in our session here is paul speaking and he has just said um and you want to get rid of it but in the meantime justin is speaking in the other track so that so now uh hold on uh can you press escape to cancel the track selection first hmm there we go okay i had to click on that so you do the selection yeah so you do the selection of paul saying um in that top track but but unfair or the other person is speaking the other track and we definitely don't want to cut that out okay um so hold on we i didn't didn't want you to propagate the selection oh yeah sorry okay so now if you delete that it doesn't ripple okay yep so you just cut out the arm or the r or the tongue smack or the whatever it was but everything has stayed in alignment what the upper person was saying the lower person is saying is still properly aligned okay and the same there if you delete that okay i thought it's gonna ripple i thought i was gonna ripple now but now now let's pretend that the end of that lower region we've got that big block away form let's suppose that both people speaking in in that are both i don't know they both made a horrible noise that shouldn't be part of the recording so you do a time range selection yep yep and then you probably you'd propagate across both tracks and you delete it all right so okay so this the interview mode only ripples when we have more than one track selected correct our range tool correct that's interesting then i have a quick question from ov what happens to automation lens uh so that's a good question uh i don't know whether i want to answer it so in general well i know i know what happens to all automation lanes 
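The three ripple variants described above boil down to a small decision rule: which tracks are affected, and whether later material shifts back. Here is a toy sketch in Python, where a "track" is just a list of `(start, length)` regions — every name and data shape here is invented for illustration, none of it is Ardour's actual code or API.

```python
# Toy model of Ardour 7's three ripple modes, as described above.
# A "track" is a list of (start, length) regions on a shared timeline.

def cut_range(track, s, e, ripple):
    """Remove the time range [s, e) from one track.
    If ripple is True, material after the range shifts back to fill the gap."""
    gap = e - s
    out = []
    for start, length in track:
        end = start + length
        if end <= s:                       # entirely before the range: untouched
            out.append((start, length))
        elif start >= e:                   # entirely after: ripples back or stays
            out.append((start - gap, length) if ripple else (start, length))
        else:                              # overlaps the range: keep head/tail
            if start < s:
                out.append((start, s - start))
            if end > e:
                out.append((s if ripple else e, end - e))
    return out

def delete_range(tracks, selected, s, e, mode):
    """mode is 'selected', 'all' or 'interview' (see the discussion above)."""
    if mode == "all":                      # selection propagates to every track
        targets, ripple = set(range(len(tracks))), True
    elif mode == "selected":               # only the selected tracks ripple
        targets, ripple = set(selected), True
    else:                                  # 'interview': ripple only when the
        targets = set(selected)            # range spans more than one track
        ripple = len(targets) > 1
    return [cut_range(t, s, e, ripple) if i in targets else t
            for i, t in enumerate(tracks)]
```

With this rule, deleting an "um" in a single track in interview mode just leaves a gap and the other speaker stays aligned, while the same deletion made across both tracks ripples everything — matching what the demo shows.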
It's not necessarily good — it won't be the answer you're looking for, I think — but we should try it and see. Right, so you'll notice that doing a time range selection does not propagate down into the automation track, and if you make it propagate down into it... it doesn't ripple. Yeah, it doesn't ripple. So maybe in All mode it does ripple? Let's see. I think it does not, because we don't consider the automation tracks to be completely distinct. The workflow that's imagined here is that you would do your editing first, before you did any automation — for example noise suppression or sibilance suppression, stuff like that — you would edit first and then add automation. But that's something we probably could, and shall, improve in the future. Yes, please — as a person who works with a lot of media and automation, being able to edit automation alongside audio regions would be very helpful, because sometimes you just spend a lot of time Ctrl-clicking automation lanes, trying to make sure you didn't miss any, just to copy and paste a song section.
Great. And to go back to a question that was asked earlier, about why you should use Ardour instead of Studio One: this is something we have been looking at Studio One very carefully about, because they have a very nice concept in there, where essentially you can define time ranges in a way that literally means everything — I don't care whether it's on the screen or off the screen, everything from here to here is what this section is — and when I cut and paste it, or cut it and ripple, or whatever it is I'm doing, it's everything, the whole session. Myself and Robin and Ben and Harrison have talked on and off over several years about the need to have that kind of editing workflow, because one of the things that can trivially happen right now is that maybe you actually did have a track selected, but it's off screen. What does that mean when you cut? You couldn't see it — so did you mean to cut it, or did you not? We need better ways of answering that question for users.
Sorry, go ahead. I was going to say, I see one real quick question on automation before we get back to the changes: are there any plans for curved automation, or do I just not know it exists? No, it does not exist. Adding that will probably be part of another major effort — it could be part of Ardour 8, I'm not sure. Most likely it goes with modulation in general: if we start doing modulation in general, it will probably start to include curves. Sweet, I'm excited for that.
So then I guess next we're on to mixer scenes, right? Which is another great feature. Yeah, I'd say so. Mixer scenes is all Robin's work, to give credit where it's due, and it's pretty simple to explain. They show up in the lower left of the mixer view, as long as you have the mixer list shown. You're in the editor view, aren't you? Yep — there they are, and we're in the way of them, I think. You are — they're coming down on my head; I need to get rid of myself and John for a while. Right-click, store scene, name it. Hey, does it also store plugin layouts and presets? Oh — so I can do this, and even maybe make my kick really bad... okay, store that as a second scene, then recall mixer scene A — "this will overwrite your mix settings; this operation cannot be undone". Yeah, it got the mixer settings back, but it didn't change the plugin ordering and didn't roll back the plugin state. Gotcha. Oh — that's on my end. Paul, just keep talking... you're back. There we go. Mixer scenes are awesome, and they do not restore plugins: they restore the parameters of plugins, but not the presence of plugins. Robin says a scene restores all automatable parameters — so is it possible that we just can't automate that part of Geonkick? Geonkick isn't quite automatable — it doesn't have any automatable parameters. That makes sense. All right, and the ordering of the rest of the processing stack — I don't know if that's expected to be stored? It was not. Yeah, Robin says plugin positions are not saved and restored. Okay.
Now, the one other cool trick with this: if you put your middle mouse button over one of those scene buttons down there and click — yep — it previews it, so you can momentarily listen to how it sounds. Oh, John, I think this is perfect for you — a million mix versions. I think you may just finish that mix with this feature by the end of the year. We can only pray — I'll at least have another 700 scenes for it. Yeah, John is exactly the right tester for mixer scenes; in about two days I expect a DDoS — well, actually not a distributed one, just a denial-of-service attack from John. Right — keeping my eye on the ticking clock. So that's mixer scenes: very powerful, very easy to use, could maybe get extended in a few subtle directions in the future, but it essentially does what we think it should do right now.
MIDI editing — I don't know if we really have time at this point to go through and demonstrate all the tweaks, but there are a good 10 to 20 tweaks that happened to MIDI editing that will hopefully make working with MIDI that little bit easier and simpler for most people. The thing that I am most interested in is whether there are still timing errors with MIDI — like missing first notes in a region — because those are the most troublesome things, to me at least. We do believe that has been fixed; several people had sessions that they've redone in 7 and they don't have the problem now. The moment I hear from anyone that it's happening again, I will first vomit and then immediately jump on the problem and try to fix it.
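The mixer-scene behaviour just demonstrated — parameters are snapshotted, but plugin presence and ordering are not — can be sketched in a few lines. This is an illustrative toy only, under the assumption that a mixer strip maps plugin ids to parameter dictionaries; none of these names come from Ardour's real code.

```python
# Toy sketch of mixer-scene semantics as demonstrated above: a scene stores
# plugin *parameter values* only. Recalling it never adds, removes or
# reorders plugins. Data shapes here are invented for the example.

def store_scene(strips):
    """Snapshot: strip name -> plugin id -> {parameter: value}, deep-copied."""
    return {name: {pid: dict(params) for pid, params in plugins.items()}
            for name, plugins in strips.items()}

def recall_scene(strips, scene):
    """Overwrite current parameter values in place. Plugins added after the
    scene was stored (or with no parameters in it, like a plugin with no
    automatable parameters) are left untouched."""
    for name, plugins in strips.items():
        saved = scene.get(name, {})
        for pid, params in plugins.items():
            for key, value in saved.get(pid, {}).items():
                if key in params:          # only restore parameters that still exist
                    params[key] = value
```

This also explains the Geonkick case above: a plugin with no automatable parameters contributes nothing to the snapshot, so recalling a scene cannot roll it back.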
Then we should do it on the livestream. No, you definitely shouldn't see that. One other thing that actually came from a suggestion from John last week — one of those great examples of how Ardour development happens, which is that sometimes it just takes somebody asking about a problem or a feature in just the right way to make it suddenly, totally obvious that it's the right thing to do, even when it wasn't previously. Anybody who's used to using Ardour MIDI knows that we have linked MIDI region copies and unlinked MIDI region copies, and until last week, when you made an unlinked MIDI region copy, it was only as long as the visible region you copied it from. So if you had recorded, say, 12 bars of MIDI, trimmed it down to four bars, and then copied it unlinked, the other bars didn't exist — you couldn't trim them back out. That's not true anymore: when you make an unlinked MIDI region copy, it has all the bars accessible, so you can just trim it back out to include them. It's a tiny little detail, but as I said, sometimes it just takes the right explanation. Awesome — and luckily that only took you a day. I don't even think you worked on it a whole day. A few minutes, yeah — it was about 20 minutes' worth of work, it was very easy. And it's saved me 20 minutes of work already. There we go — the power of programming: I worked for 20 minutes and it saves a thousand people 20 minutes each.
MIDI stem export has been added. In Ardour, forever — well, not forever, but for quite a few versions — you could do stem exports of your audio tracks, which gives you one file per track instead of it all being mixed down. This is typically what gets used when moving sessions from DAW to DAW, or making archival copies, or making stems that can be used by Rick Beato on his YouTube channel to take apart your music. We have now added this for MIDI tracks too; you couldn't do that before.
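The unlinked-copy change described above amounts to one decision: copy the whole recorded source, not just the visible slice. Here is a minimal sketch of that idea — the `Region` class and its methods are invented for illustration and are not Ardour's internals.

```python
# Sketch of the fix described above: an *unlinked* MIDI region copy now keeps
# the entire recorded source, so the copy can later be trimmed back out to
# bars that were not visible when it was made. All names here are invented.

class Region:
    def __init__(self, source, start, length):
        self.source = source    # everything that was recorded, e.g. 12 bars
        self.start = start      # where the visible part begins in the source
        self.length = length    # how much of the source is currently visible

    def copy_unlinked(self):
        # The change: copy the whole source, not just the visible slice.
        # The old behaviour was roughly Region(self.visible(), 0, self.length),
        # which threw the hidden bars away.
        return Region(list(self.source), self.start, self.length)

    def trim(self, new_length):
        # Trimming can now grow the copy up to the full source length.
        self.length = min(new_length, len(self.source) - self.start)

    def visible(self):
        return self.source[self.start:self.start + self.length]
```

So a 12-bar recording trimmed to four bars and then copied unlinked can be trimmed back out to all 12 bars, while the original region is unaffected.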
So if you have MIDI tracks in the session, you can do a stem export of them and get one MIDI file per track. It's quite useful if you're in that kind of workflow.
Let's see — another sort of big deal, for people who've been around Ardour for a long time, is the return of an old feature. For people who have not been around Ardour: there is a magnificent website called Freesound, run out of a university in Barcelona, that has hundreds of thousands of free — meaning Creative Commons licensed — samples, oodles of them, covering almost everything you can think of. Not so much musical material in the sense of loops, like the stuff we're providing in the library downloader, but just a bunch of cool stuff that you can use for doing soundtracks, or even music. For the album that I made last year, I got some samples of kids playing in a playground in South Africa. It's an incredible resource. We used to let you browse it from inside Ardour's import dialog, but then Freesound changed their authentication process and it became impossible to do that anymore. Now you can do it again. The difference, because of what Freesound did, is that you have to have your own Freesound account, which you set up on their website. You provide Ardour with your credentials one time, and after that you can browse their collection from inside the import dialog, the way Unfa is showing here — you can preview and then download stuff. As I said, if you're doing work that needs the kinds of samples they have, it's very powerful, and we're really glad to have it back. Colin Fletcher in the UK was responsible for the changes to bring this back. It was unfortunate that they changed their authentication, but we understand why they did it — it wasn't anything weird or strange, they just switched to something a bit more conventional. I think — did you actually bring it up? Oh yeah, okay, now it's actually asking you to log in.
Yeah, so that makes perfect sense. That is amazing — I've been using Freesound, especially Creative Commons Zero samples, sounds, and recordings, a lot in professional work, for video game sound effects. I also have lots of my own stuff on Freesound, and I sometimes just fetch my own recordings off it, so this will make things a lot easier and faster for me on that front — and I'm sure for a lot of other people working with sound effects or film audio, like Foley and such. Yeah, that's really cool. As I said, we had it many versions ago — I think it's been gone for at least two major versions, possibly more — but it's back.
Lastly on our major items list is a new feature that I don't actually think is going to be that useful to most people, and I don't think we need to spend much time on it, but it's now possible to have what we call I/O plugins. At the moment in Ardour, if you add a plugin, you're adding it to a track or to a bus — possibly to the monitor section — but it's running within the context of the session itself: it's connected to a track or a bus. I/O plugins, instead, are plugins that run essentially outside of the session. They handle audio coming in from your audio interface, or going out to your audio interface. They're not connected to any tracks — although they are saved, I believe, as part of the session — and in most senses they're not really a part of the session at all. They do processing on the audio before the session sees it, and processing on the audio after the session has done its job of creating sound. These were created by Harrison for a specialized version of Mixbus that they make, which uses a new sort of broadcast network audio and video protocol that's bubbling up in the industry. So if you have a plugin, for example, that does network audio I/O, you can stick it in here and it just shows up as a new track input. Similarly, if you have one that does network audio output, you could
stick it in as an I/O plugin, and it would be a new output for tracks. There is an open-source plugin called SonoBus — well, SonoBus is actually a standalone application — that's what comes to my mind. Does it... yeah, in the open-source space, SonoBus would have to get reworked a little bit to be usable in this context, but that would be exactly the kind of thing. I should also mention that SonoBus was written by Jesse Chappell, who was the second person to help develop Ardour and was responsible for a lot of its early features. So yeah, there are some new possibilities on the I/O plugin side, but it probably won't have much impact for most users — it's kind of a specialized thing to want to do.
We got a bunch of improvements for control surfaces, mostly bug fixes and support for new devices. There's been one great improvement for anybody who uses a Push 2: our pad tuning is now correct, because Paul doesn't know music and got it all wrong the first time. When it says it goes up by fourths, it actually goes up by fourths now. And then we have a whole bunch of bug fixes, which I'm not going to go through, but just to say there's a whole bunch of things in here, many of which are very small, most of which will not affect most people — but for the people they help, they'll be critical, or will make them extremely happy, I think. And all of that stuff will be out just as soon as we can get it packaged up and we feel comfortable that the bugs that remain are not total show-stoppers. So as soon as I stop breaking it, they'll send it out. That would still leave Unfa, too, right? Yeah, you've got to get on that. Oh — I didn't hear... I don't hear you. Yeah, I've got to debug this. Yeah, exactly.
Wow, Paul, it's been amazing to have you here. I'm really grateful that you wanted to do this. For the audience: we tried to do this before, but we had to postpone the interview, and I'm happy that we did, because we are much closer to the Ardour
7 release. Yep. Well, it's been a great pleasure — I've enjoyed being here and talking through things. Sorry that we didn't get to answer everyone's questions. Yeah, that's unfortunately just physically impossible, but you can always catch Paul in my community chat — I'll plug that really soon. I'm over there, and I'm on IRC, and you can reach us on the Ardour forums as well. Yep. Yeah, thank you so much. Thanks to everyone who has joined this stream, thank you John for co-hosting once again, and thank you Paul for coming and doing this interview — it's a real pleasure to be able to talk to you and geek out over all this stuff. I also want to thank you for making Ardour, and for putting up with my bug reports and complaints, and me sometimes writing "I can't see shit" — that wasn't a particularly nice email from me, and I apologize if I was ever unpleasant. Well, if all of our bug reporters did as good a job as you do, then we'd be a lot further along. And thank you for all that you do for free and open-source music and other creative software, because that's a very valuable part of this whole effort too. Thanks. All right, special thanks to Robin for hanging out in the chat and answering some extra questions. Yeah, thanks, Robin. Yes, it's also great that Robin made it to the chat. We've had about 90 people watching throughout the whole thing, which is quite a lot, and I'm really happy with the audience that we've had. Yep, it's time to say goodbye. So yeah, that's all — thank you for joining this livestream. If you would like to catch Paul in the community chat, you can go to the community chat at chat.unfa.xyz. There is a Discord connected to it, but by default you'll go to Rocket.Chat, which is open source — because open source. And also a huge thanks to everyone who is supporting my work. If you are enjoying
this and would like to help me keep doing stuff like this, you can go to patreon.com/unfa or liberapay.com/unfa — wrong side, sorry for the mic slap — and you can donate so I can keep making stuff, promoting open-source software, and having awesome guests. That's all for today. Now you'd better go and make some music!