Welcome. I'm Chris Wilson. I'm a developer advocate at Google, and one of my passions is web audio and music in general. So I'm here to chat with Chris Lowis, who, among other things, is the editor of the Web Audio Weekly newsletter. So welcome, Chris. Nice to have you here. It's good to be in the same place at the same time for once, which is rather nice. I have to say, first of all, thanks for doing Web Audio Weekly. I think it's a tremendous resource for anyone who's interested in audio on the web, and music in general, for that matter. What I'd start off by asking you is: from what you see coming through Web Audio, what's most exciting today? What do you find most compelling?

So I think we could say the technology is still in its infancy, but it's starting to become something that people are relying on and can use across lots of different devices and platforms. There's been a huge early adopter crowd, people who want to build digital audio workstations and modular synthesizers and that kind of thing, really pushing the boundaries early on, which has been great for the development of the specification and its capabilities. But we're starting to see people using the technology to do really interesting things that are not specifically for audio enthusiasts. I'm enjoying the work people are doing on collaborative editing of sheet music, being able to share pieces with your bandmates, things where, because it's on the web, everyone can just use it, listen to a part, and print out their piece for their orchestra or their band. It takes all of that technology and software that previously would have been very difficult to install and very expensive, puts it immediately on the web, and makes it collaborative.
Yeah, the exciting part for me too is, as you said, seeing all these things that used to be science experiments now be deliverable on the web with Web Audio. Several years ago, I wrote a pitch detection demo. I wondered how hard it would be to use Web Audio to tune my guitar. It turns out it's a bit hard, but I figured it out eventually and put the demo together. I didn't make a huge deal about it because it wasn't particularly well built, but I learned a lot about how tuning works. And then last week, one of my coworkers, Paul Lewis, went and built a guitar tuner, which I now have installed on my phone. In fact, last week I deleted every other guitar tuner off my phone, and this is what I do my guitar tuning with now, because it just works. It works like an app, feels like an app, works offline, et cetera. He followed all the best practices, built a beautiful interface, and it uses autocorrelation to figure out the tuning. And I think the same thing is going to become true of, as you said, sheet music, and of all these other little pieces of music software where collaboration is a key feature. The web makes that so much easier.

Absolutely, yeah. I've seen some interesting things that the BBC and Al Jazeera have done with tools for journalists in the field, where you don't have the time or the space, or you can't carry all the equipment you need, to set something up. Being able to just open a tool, record, do a simple edit, and upload it back home straight from your tablet or laptop means you don't need all of the capabilities, perhaps, of a full-blown copy of Pro Tools somewhere.

I think for both of us, when we started in audio, the equipment was a huge amount of stuff that you had to carry around with you. If you wanted to record in a portable setup, you had to carry a backpack full of gear.
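The autocorrelation approach mentioned here can be sketched in a few lines of JavaScript. This is a naive, illustrative version, not the code from Paul Lewis's tuner: it correlates the signal with a delayed copy of itself and picks the lag with the strongest match. In a real tuner the samples would come from an AnalyserNode via getFloatTimeDomainData; here the function just takes any array of samples, and its name is made up for this sketch.

```javascript
// Naive autocorrelation pitch estimator (illustrative, not production code).
// buf: array of audio samples; sampleRate: samples per second.
function estimatePitch(buf, sampleRate) {
  const n = buf.length;
  let bestLag = -1;
  let bestCorr = 0;
  // Only search lags corresponding to roughly 60 Hz .. 1000 Hz,
  // which covers guitar fundamentals.
  const minLag = Math.floor(sampleRate / 1000);
  const maxLag = Math.floor(sampleRate / 60);
  for (let lag = minLag; lag <= maxLag && lag < n; lag++) {
    // Correlate the signal with itself shifted by `lag` samples.
    let corr = 0;
    for (let i = 0; i < n - lag; i++) corr += buf[i] * buf[i + lag];
    if (corr > bestCorr) { bestCorr = corr; bestLag = lag; }
  }
  // The best lag is one period of the fundamental.
  return bestLag > 0 ? sampleRate / bestLag : -1;
}
```

Feeding it a pure 440 Hz sine sampled at 44.1 kHz gives an estimate within a few hertz; a robust tuner would add windowing, a correlation threshold, and interpolation between lags.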
And today, most of what you need is in your phone, or your tablet, or whatever device you happen to have in front of you; it's probably reasonable to do recording and editing and everything with.

Yeah. What's exciting as well is that when I was learning how to program for the web, you would open a page, view source, see the HTML (probably no CSS at that time), see all the table layouts, those kinds of things, and learn as you went along by reading other people's code. The idea that you're just a JavaScript console away from a fully programmable audio environment, just as an educational tool, I think, is really powerful. It's a way of getting children into programming, or people who have an interest in audio and would like to do programming, or vice versa. It's immediately accessible: it's there, and it's a full-featured programming language and audio environment. There's not a lot you couldn't do given enough time.

Although, I mean, we were talking earlier, I think one of the challenges today is that building user experiences is hard, whether it's native or web, for that matter. The user experience is still sometimes a significant piece. Like Paul's guitar tuner: he has a much more intuitive feel for user experience than I do. So it looks beautiful, it actually works, and it's something I can replace my guitar tuner with because it works just as well for me as a user. Whether it's technically better or not isn't really the issue.

But by putting it there in the web environment, you're opening the field to such a huge number of people, coming from lots of different backgrounds, to do audio interfaces.
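The "JavaScript console away" point is easy to demonstrate. The snippet below is a minimal sketch you could paste into a browser console to hear one second of A4; AudioContext, createOscillator, and connect are the standard Web Audio API, while the noteToFreq helper and the typeof guard (which lets the helper run outside a browser) are illustrative additions.

```javascript
// Equal-temperament conversion: MIDI note 69 is A4 at 440 Hz.
function noteToFreq(midiNote) {
  return 440 * Math.pow(2, (midiNote - 69) / 12);
}

// Browser-only part: the simplest possible audio graph,
// oscillator -> speakers, playing for one second.
if (typeof AudioContext !== 'undefined') {
  const ctx = new AudioContext();
  const osc = ctx.createOscillator();
  osc.frequency.value = noteToFreq(69); // A4
  osc.connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + 1);
}
```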
I think there's quite an interesting movement at the moment of using Web Audio and people's smartphones to do distributed performances, either having audience participation or using the phones as a diffuse array of loudspeakers all around the audience and distributing the sound. No one needs any special equipment to do that. It's also opening up the possibility of interfaces for making music and sound to a huge number of people.

Absolutely. That will push the boundaries, I think. It'll be interesting to see the first concert where that becomes a thing. Instead of holding your phone up just to show the screen, like the old lighter thing, you'll actually be participating in creating the music. That's actually totally possible today, given all of the features that we have. I think the fact that we've pushed to get all of Web Audio onto mobile and have it work well there is a huge piece of this. Not only does my phone work well, but my tablet, my laptop, my desktop computer, my TV, for that matter: whatever I need as a surface to interact with can be there. I think that's going to be a huge advantage for Web Audio and the web as a music production environment. Getting a big screen up with a big display is actually quite easy, but connecting five of those together without the web as a collaborative environment can be quite hard. I can easily stack a couple of iPads next to each other with my Android phone and a TV, and have every member of the band have their own set list and music chart.

We're surrounded by bits of props here, a few keyboards and things. I mean, that's another really interesting development. This is a hard-working keyboard right here; this is my workhorse, and I travel with it all the time. Now we can connect these.
I mean, you were saying that's one of the things you've seen that's really interesting: how traditional instrument manufacturers are embracing the web, using Web MIDI and Web Audio.

I think it starts with having devices like this that are an easy, obvious, much more intuitive controller. This actually was where Web MIDI came from. When I first saw Web Audio, the first question I asked myself, as a synth geek from the 80s, was: can I build a synthesizer with this? And it turns out the answer is yes, fairly simply. So I built one, and then immediately I was like, wow, I can't stand using my computer keyboard to play music. That's not a musical instrument to me. The latency on touch input was problematic at the time, and as for using a mouse to tap on an on-screen keyboard, I don't even want to go there. So I want to be able to plug a device like this in. I have piles of synthesizer keyboards and controllers at home; I want to be able to plug those in as well. Why can't we do that from the web? So I went and made a proposal about Web MIDI, got some people excited about building it internally in Chrome, and now it's a thing. They're building it in Firefox today, and hopefully we'll see it in the other browsers as well. The idea is that I can use whatever device is attached to my local machine. This isn't going to be something where every web browser in the world has a MIDI device plugged into it, but if I want to be working on music, the device I'm using likely has MIDI support. Or audio, if not MIDI, in which case we need audio I/O to work really well.

The interesting thing about MIDI as well is that we're not just taking the web platform back to the 80s. It's a really simple protocol. You can get a library for an Arduino that will do MIDI, and suddenly you can connect a button into your web applications.
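The entry point being described is navigator.requestMIDIAccess. A minimal sketch of wiring a plugged-in controller up to a page looks like this; requestMIDIAccess and the midimessage event are the real Web MIDI API, while the listInputNames helper is illustrative and deliberately written against anything with a Map-like inputs property so it can be exercised without hardware.

```javascript
// Illustrative helper: collect the names of all connected MIDI inputs.
// Accepts anything with a Map-like `.inputs`, so it works without a browser.
function listInputNames(midiAccess) {
  return [...midiAccess.inputs.values()].map((input) => input.name);
}

// Browser-only part: request MIDI access and log every incoming message.
if (typeof navigator !== 'undefined' && navigator.requestMIDIAccess) {
  navigator.requestMIDIAccess().then((midi) => {
    console.log('MIDI inputs:', listInputNames(midi));
    for (const input of midi.inputs.values()) {
      // e.data is a byte array: status byte plus data bytes.
      input.onmidimessage = (e) => console.log(e.data);
    }
  });
}
```

Each connected keyboard or pad controller shows up as an input, and the midimessage event then delivers the raw MIDI bytes to your JavaScript.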
It doesn't necessarily even need to be an audio application; you can bring in hardware devices and connect them to the front end. I mean, MIDI has a very simple protocol for the basics: the controllers and the note-on, note-off messages. But then it has this massive open-ended expansion that lets manufacturers build their own custom things, and they have been going nuts with that for 30 years now. So there are plenty of extended uses of MIDI beyond the basics. But the basics are still there, and being able to plug a controller in and play notes on a piano keyboard, that's quite a powerful thing. Or I want to plug in my drum pads and play in a beat very easily, or use the pads on that keyboard over there to play in a beat. That's quite a powerful thing to be able to do easily.

And for educational purposes as well, I think this is immense. I've seen a number of companies, like Piano Marvel, I think, and others, and notation software like Noteflight. You get these experiences where you can do training so much better when you have a computer connection. Any time you do that but require a certain operating system and configuration, and you have to put your laptop right next to your piano and hope it recognizes the notes, that's not a great experience for students. Having been a piano student myself, I know the hardest thing is that you have a lesson maybe once a week, and in the meantime you just practice by yourself and hope you're getting it right. But now software can actually watch that. It can guide you even when you're not with your instructor, and it can say, hey, your timing is a little rough on this part, and here's how you need to correct it. These are things you would otherwise have been practicing wrong for a week, only to hear your instructor say, nope, go back and do it again.
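Those basics, note-on, note-off, and controllers, are just short byte messages, which is a big part of why the protocol is so approachable. A minimal decoder might look like the following; the function name and the returned object shape are illustrative, but the byte layout (status nibble, channel nibble, two data bytes) is standard MIDI 1.0, including the convention that a note-on with velocity 0 means note-off.

```javascript
// Decode the basic MIDI 1.0 channel messages from a 3-byte array.
function decodeMIDI(bytes) {
  const status = bytes[0] & 0xf0;  // high nibble: message type
  const channel = bytes[0] & 0x0f; // low nibble: channel 0-15
  if (status === 0x90 && bytes[2] > 0)
    return { type: 'note-on', channel, note: bytes[1], velocity: bytes[2] };
  if (status === 0x80 || (status === 0x90 && bytes[2] === 0))
    // Note-on with velocity 0 is note-off by convention.
    return { type: 'note-off', channel, note: bytes[1] };
  if (status === 0xb0)
    return { type: 'control-change', channel, controller: bytes[1], value: bytes[2] };
  return { type: 'other', channel };
}
```

Hook this up to a midimessage handler and a few lines of JavaScript give you a playable piano keyboard or drum pad in the page.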
And I think that kind of education is massively helpful. At the same time, screens and web browsing capabilities are essentially universally available at this point. I have multiple iPads and tablets and phones in my house. I can put any one of them up on my piano and use it for sheet music notation, or even a backing score, and have it play that for me. So I think that's pretty compelling.

One of the things that I'm not so familiar with, and that you've talked to a lot of people about, I'm sure, is games. Games is an interesting one. Where are we at with audio and games?

Yeah, games is a really interesting one. I remember back in the late 80s, or the early 90s more, there was a push to start using MIDI in games: this idea that you could provide really good quality music at a low bandwidth by using MIDI. It turns out that wasn't a great idea, because people don't have the same MIDI synthesizer; even General MIDI isn't identical across devices. And as a musician, you really want to make sure that the bass drum sounds exactly like the bass drum you programmed, not some other rough approximation. But the audio needs of gaming are something Web Audio was very closely designed for, as well as pro audio. Specifically, when you have a game and you fire a laser in the game, you want to hear that pew, pew, pew sound right when the user hits the button, right when the gamer expects it to happen. This is something the web platform did not do well prior to this, because the audio element and previous ways of playing audio wrapped up the stages of loading, decoding, and playing all together, and they weren't very precise. There was no sample-accurate way to say, OK, start playing the sound right now, and, oh, by the way, I may have five copies of this sound, or five sounds overlapping each other, playing at the same time.
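In Web Audio terms, that sample-accurate, overlapping playback looks roughly like this. AudioBufferSourceNode is the real API and is one-shot by design, so each trigger gets a fresh node and any number of copies can overlap; the helper names and the burst pattern are illustrative, and the decoded laser AudioBuffer is assumed to exist already (it would come from decodeAudioData).

```javascript
// Play a decoded AudioBuffer at an exact time on the audio clock.
// Each call makes a fresh one-shot source node, so sounds can overlap freely.
function playAt(ctx, buffer, when) {
  const src = ctx.createBufferSource();
  src.buffer = buffer;
  src.connect(ctx.destination);
  src.start(when); // scheduled on the audio clock, not setTimeout
  return src;
}

// Pure helper: start times for n evenly spaced shots beginning at t0.
function burstTimes(t0, n, interval) {
  return Array.from({ length: n }, (_, i) => t0 + i * interval);
}

// Browser-only usage sketch (laserBuffer assumed decoded elsewhere):
if (typeof AudioContext !== 'undefined') {
  const ctx = new AudioContext();
  // burstTimes(ctx.currentTime, 5, 0.1).forEach((t) => playAt(ctx, laserBuffer, t));
}
```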
And Web Audio gives you that really low-level, schedulable control, which is pretty powerful. I think there are a lot of capabilities there that are not yet fully utilized, but even the basics are so much easier with Web Audio, and I think we're already starting to reap the benefits of that in some places. And now that Microsoft is shipping Web Audio support in their Edge browser, with a release coming soon, I think, it's even better, because now we have it across all of the major browsers. That compatibility is a tremendous benefit. So what else? What else is exciting? What else do you think is upcoming that's going to be an interesting direction?

I think what's perhaps slowing down adoption by people who currently work in computer music and audio is the fact that the Web Audio API isn't the same as what they're currently using. We've had for quite a while a way of running arbitrary audio processing code in what's called the script processor, but we're trying to move away from that because the model is not fit for purpose. The editors of the spec and those working around it are looking for a better way of doing that, and I think that's very interesting. I've been speaking to a few developers who are looking to see if they can effectively compile Pure Data or Csound patches, which perhaps they're using to write audio engines for iOS or Android applications, and reuse that code within the platform. For me personally, it's a little bit sad, because it breaks that view-source thing. If I view source and see just a load of asm.js or WebAssembly or something generating arrays of audio data, it's a bit harder to learn from. But in terms of providing a platform that people will be able to start moving to, it matters.

This is something I've been pushing the Web Audio Working Group on quite a bit recently: Web Audio is actually one specific way of doing audio processing.
And we do have this way of inserting your own really customized JavaScript processing into it: the script processor, which is getting replaced by something currently called Audio Worker. I'm not sure if it's going to keep that name or not, but it will be much more efficient at enabling you to do that. And there's this whole spectrum of applications and experiences. Some people are surprised, particularly since I edited the Web Audio spec for quite a while, that I'm not a DSP guy. I don't intuitively understand how the FFT algorithm works, and I haven't spent years educating myself about digital signal processing. At the same time, Web Audio hits about the right level for me: I do intuitively understand how to plug these pieces together to get a synthesizer or a vocoder or whatever. But there are people who want something completely different. One of the scenarios, for example, is arcade game emulation. If you want to emulate a system that used to do 8-kilohertz, 8-bit audio, grafting that into Web Audio is actually quite hard; it's not something that's easy to do today at all. I think we need to start looking at those sorts of scenarios, at how you get lower-level hooks into the audio system if that's what you really want. Certainly, the script processor was a first kind of escape valve for that, but I think we need to do more in that direction too. So it'll be interesting to see where we can go.

Yeah. I mean, for me and the people I talk to and the projects I see working, there's a Venn diagram of people who can do JavaScript, people who can do computer music, and people who can do the Web Audio API, and that intersection is really small at the moment.

I think it is. The exciting part to me, too, is seeing those people who can do JavaScript, or who are interested in learning JavaScript, grow into the music and audio space.
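For reference, the script-processor escape valve discussed here hands each block of output samples to a JavaScript callback on the main thread, which is the model the Audio Worker proposal aims to replace. A sketch, with an illustrative sine generator standing in for custom DSP; createScriptProcessor and onaudioprocess are the real (if deprecated-bound) API:

```javascript
// Fill an output buffer with a sine wave, tracking phase via sample index.
// Returns the next sample index so consecutive blocks stay continuous.
function fillSine(out, sampleRate, freq, startSample) {
  for (let i = 0; i < out.length; i++) {
    out[i] = Math.sin(2 * Math.PI * freq * (startSample + i) / sampleRate);
  }
  return startSample + out.length;
}

// Browser-only part: arbitrary JavaScript DSP via ScriptProcessorNode.
if (typeof AudioContext !== 'undefined') {
  const ctx = new AudioContext();
  const node = ctx.createScriptProcessor(1024, 0, 1); // 1024-frame blocks, mono out
  let sample = 0;
  node.onaudioprocess = (e) => {
    sample = fillSine(e.outputBuffer.getChannelData(0), ctx.sampleRate, 220, sample);
  };
  node.connect(ctx.destination);
}
```

The callback runs on the main thread, so any jank in the page glitches the audio, which is exactly the unfitness for purpose being described.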
I still get a tremendous number of people who contact me personally, because of the demos I've written or the spec connection or whatever, who say, hey, I saw your audio recorder demo; is it even possible to do X? And I can reply and say, yeah, that's totally possible, you probably want to look at this bit here, go play around with it for a while, because I can't do everything myself, of course. Seeing people get excited and start exploring those bits, even when they don't deeply understand them to start with, is pretty exciting too. And I think that's going to grow the overall market and what we see come out of it. So what else? Anything particularly on your mind as areas to watch in the future?

I think an interesting area that hasn't fully developed yet is collaborative tools for musicians.

Absolutely.

I use the analogy of when Google Docs first appeared and you could use a word processor in the browser. People said, well, it will never replace Microsoft Word, and maybe that's right; it doesn't have all of the things that I need. But the fact that you could collaborate and chat alongside the document and do those sorts of things meant that a huge number of use cases moved there. And I think that's possibly what we'll start to see happening with Web Audio. Maybe it's going to be a long time until I can do all of my music production in a single application in a browser. But if I just want to collaborate on a drum loop with some friends, try out a few patterns, and maybe say, what do you think about this, I think you should swap the kick drum out for a different sound, or how about pushing the timing here, it's exactly the same kind of thing. And then maybe we'll take that out and load it back into our own environments.

Yeah.
I get so many people who ask, I want to build this real-time collaboration system where I just pop up on the screen, and my buddy across the country and I can jam together. Well, that's actually quite hard. The network simply doesn't support it: the latency inherent in TCP/IP networking from one end of the country to the other, particularly if you've got a Wi-Fi link at each end, is probably going to make that untenable for many, even most, types of music. But there are so many other types of collaboration you can do that go back and forth and hook together. And if you look at the real-time differences between that sort of playing and doing music production, or even something like playing Ableton Live, that level of, I'm going to hook things in and play just a bit before they need to be played, but the system keeps me on beat, that actually is possible today with the tools that we have. And the collaboration story there is just immensely compelling. Particularly when you add some element of offline, where I can work on something and maybe my buddy, a musical collaborator, picks it up half an hour later. That's perfect. I think that's a really exciting place to go.

So I think with that we're going to close up. It's been really exciting to talk to you. I'm glad we could sit down and have this chat.

You too, Chris. It's good to see you in London. Thanks.