Thanks, everyone. Thanks for coming to the first Music Tech Meetup, and thanks for making this happen. I've been wanting to do this for the longest time, and this is really fun because it's something that I really enjoy doing — it's been a passion and a hobby — and I'm even more excited that Jerry is here, as someone who sort of taught me all those DSP basics a few years ago; I learned a lot of this stuff along the way. So today I want to talk about something that I've been playing with for the last couple of years, something called Web Audio. A little follow-up on the questions that Jerry asked earlier: how many here have worked or played with web technologies? JavaScript, HTML? All right, cool. So this is very easy to sell to you, because we just saw how things are done in the C++ world, and I'm going to give you a sneak peek at how things can work in the web world, which, by its nature, is slightly simpler. So this is my talk, Web Audio. I should change that subtitle — it's not "emerging" anymore; it's kind of out there already, it's kind of standard. But really, everybody knows this is all about making random noises, and I'm just having fun. A quick thing about me: I work with audio technologies. I used to work a lot more on the media side of things, but I don't anymore. I also host a podcast about the local dev community — engineers — interviewing really interesting people in Singapore, and I make a ton of trouble at local meetups. I host a couple, one called Papers We Love, where we take academic papers and share and explain them to each other so that we all get to learn. But that's enough about me. What I want to talk about is Web Audio, and what is Web Audio?
Why Web Audio matters, why it's something that people who are interested in audio technologies should at least know about or play with a little bit, and a very quick overview of how you can use it. So, starting with the "what". This is the problem with — you know, I thought that was mine. Really good ringtone design by Google there. So what is Web Audio? Web Audio is a browser API. This stuff works in your web browser — your Firefox, your Chrome, whatever browser you use — it works inside that. It's all done inside a browser. If you've heard of the term HTML5: HTML5 is more like a family of APIs, a bunch of things you can do — Canvas, VR, all these things. One of the things you can do with HTML5 is Web Audio. It lets you generate, synthesize, manipulate, and create sounds in a browser. So, a little bit of history of how this came about. If you've been playing with web technologies for many years, you might remember some of these things. A long time ago, there was something called BGSound in Internet Explorer. This was "background sound": this was the era when you'd load a website and it would have this really crappy MIDI audio playing. That's what they used — a tag in HTML called bgsound where you could put in a MIDI file and it would just play it. This was created by Microsoft, and the other browsers probably copied it, as always. Then there was the time of Flash. I'm sure you remember Flash audio. This was really where things were, and you put it in using these object and embed tags, right? I was going to show you a Flash example; unfortunately, my browser blocks it nowadays because, you know, Flash is not cool anymore. But yeah, this is a Flash audio file, and now you know why they block it, right?
Anyway, it's not really something that a lot of browsers like to support these days. Then something came along which was more of a standard, something that was accepted by the W3C consortium, and all the browsers were behind it: the audio tag. This works across all browsers, it's been working for many, many years, and it gives you a small little player — you put an audio file inside the source attribute and it just plays. And it works, and lets you change things quite easily. Then Mozilla came up with something called the Audio Data API. They tried this because they felt the audio tag wasn't good enough for manipulating things: you could change the volume, you could change the speed of how fast or slow you play the sound, but you couldn't do things like filtering — you couldn't really do much. So Firefox came up with the Audio Data API, which gave you kind of what Jerry was showing: very low-level, signal-processing access to the audio. You could examine the samples and change things around. That was really fun, except no other browser actually liked it. They said, this is too low level — you'd have to make everybody a low-level programmer to be able to use it. And it was against the ethos of the web, which prefers to give people higher-level APIs so they can build things. So finally everybody got together, the W3C blessed it, and the Web Audio API was born. It was finally certified mid-last year — that was when the API was considered acceptable, and most of the major browsers supported it.
So the main philosophy behind the API is that it allows you to do mixing, processing, and filtering, very similar to what we saw Jerry show earlier, within your browser. So you can think of it as doing something like this and this and this — you just need the sound, right? You can use a boing. All of these things can be done just within your browser. And, surprisingly, all browsers today support it. I should change the IE logo — it's not IE anymore, it's Edge. IE still doesn't support it, but the new Microsoft browser, Edge, actually supports it pretty well. A quick aside: all of these web standards are published as a spec, and you can go read them. It's on GitHub — it's an open website. You can go read it and figure out what the standard spec for all these things is and how you write the code. All this stuff is documented and explained really, really well. This is what the Microsoft, Firefox, or Chrome teams use to implement how their browsers work internally. And the cool thing is that the entire process of creating this is also completely open, so if you're ever curious about how that works, you can actually go follow it. It's really fun to see people arguing about different algorithms, how things should work, and what kinds of features they should expose in the API. And for those of you who play with the web and are not sure how well it's supported: except for IE — we don't talk about IE anyway — it's pretty well supported. It's supported in quite a few versions of Chrome and Firefox; Edge has supported it since the first beta, I think. It's also supported in mobile browsers, so you can actually do all this stuff I'm showing on a mobile browser — no change of code, it just works. Node.js? So Node.js is not technically JavaScript, in the sense that it's not in the browser. Or, okay, I should put it this way: Web Audio is a browser API, not a JavaScript API.
So it's supported in the browser, not in the language. There are some modules where people are trying to implement it in Node.js, but it's not the same, because — as you'll see — there's a significant amount of really high-level API that this provides, which would be sizable work to implement in Node.js. So what can you do with Web Audio? You can make musical instruments, you can make games, you can make immersive interactive experiences, you can do communication and recording, and you can make digital audio workstation-like production software. I'll give examples of some of these as we go along, but first I want to talk about why you'd want to do this on the web. I think there are three really cool things that the web as a platform gives you for writing your audio applications. The first is distribution, and the fact that it's cross-platform. You can write it once and literally run it everywhere. Also, the super cool ability to share your application by just giving out a URL, right? This is an app — I hope it works — that was made by one of my mentors, a professor at NUS, and it's just a synthesis engine that makes all sorts of weird sounds. But the cool thing I want to show, and it's probably a bit hard to show, is that I can make sounds with this. So let's play this. It's going to work. No, it's on. It might be broken. Let me try again. No. Okay, it doesn't matter — the other demo should work. The point I wanted to show you was: look at this URL, right? It actually has things like — this is a model called Rexter — the red value is this, the badness value is this, and the gain is this. I could send this to someone else, and they'd hear the exact same sound that I was hearing.
The ability to share these really specific things just by using a URL — and everybody knows how to click a URL — is something that's kind of hard to do on other platforms. It comes from the entire URL-based nature of the web. Collaboration: the other cool thing about being online is, well, you're online. You're connected to other people, so you can work with other people. There's a bunch of things you can do to collaborate. BandLab, which is what Jerry was talking about, has a feature where you can collaborate with other musicians: you play your instrument, they play their instrument, and you can jam together. There is a little bit of lag, but it's nothing you can't work around by coming up with different flows — you record first and post it to someone else, and they come and record on top. There are different ways around it, but the point is that it's so simple, because you are automatically connected to everybody else, and with stuff like WebSockets it's super simple to collaborate and work with each other. I also feel that the web is a platform that's inherently visual — there's the DOM, there's Canvas, there's WebGL — and all these things make it easy to add some kind of interaction or visuals to your audio without spending a lot of work or learning a whole new graphics framework. The stuff exists; there's a bunch of libraries — just put one in and it works. As a demo of that, this is a super old thing we made at one of the companies I used to work for. You can see that just being able to make interactions and sounds and visuals all work together is really simple. You don't have to do much work, because all these pieces already exist on the web. So that's why I think it's a really cool platform to experiment with, to play with, to make your apps on, and to do something super fun. So let's get on to the slightly more technical side.
Let's look at how you can actually write some code in Web Audio and how it works. The entire way you think about Web Audio is what they call a graph routing paradigm. You have some source — some kind of audio file, or something that generates audio — that's connected to something else. And the last thing in the chain is usually called the destination; that's the loudspeaker where the audio comes out. So you can have a sound file connected to some kind of filter, connected to the output destination, which goes out the loudspeaker. That's how you build up the entire chain, as it's called, or routing graph. Unfortunately, you don't do this using any kind of visual programming language. This is JavaScript: you write code and say, take this, connect it to this; take that, connect it to that; and connect that to the output. I'll go through the code a little bit, but this is a simple example. You could have a much more complex graph and start getting more interesting sounds out of it — all sorts of weird configurations with different kinds of filters and effects. The API is pure JavaScript, nothing special, so if you're used to writing JavaScript, or any C-style language, it's very, very simple — one of the simplest languages to pick up, I think. And the cool thing is that everybody has a development platform built in: every browser can run this. You just open up your browser, open the developer tools, copy-paste this, and you should be making some sounds. "Can you show me?" Sure, okay, let's try. So here I'm making an audio context, and I'm making an oscillator and a filter. Sorry, for those who can't see this, the code display is broken — can you see it now? All right, so I made an oscillator, I made a filter, and I'm going to connect the two together.
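A minimal sketch of the kind of snippet being typed into the console here — an oscillator through a low-pass filter to the speakers. The graph-building is pulled into a function so the wiring reads clearly; the waveform and frequency values are arbitrary choices of mine, not anything from the live demo:

```javascript
// Build a simple Web Audio graph: oscillator -> low-pass filter -> speakers.
// `ctx` is an AudioContext (anything with the same factory methods works).
function buildToneGraph(ctx) {
  const osc = ctx.createOscillator();
  osc.type = 'sawtooth';        // richer in harmonics than the default sine
  osc.frequency.value = 220;    // Hz

  const filter = ctx.createBiquadFilter();
  filter.type = 'lowpass';
  filter.frequency.value = 400; // roll off everything above ~400 Hz

  // Wire the chain: source -> effect -> destination (the loudspeaker).
  osc.connect(filter);
  filter.connect(ctx.destination);
  return { osc, filter };
}
```

In a browser console this would be followed by `const ctx = new AudioContext(); buildToneGraph(ctx).osc.start();`, at which point the tone plays until you call `osc.stop()`.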
It's probably going to sound really bad, because it's, you know, a few lines of code. But again, the interesting thing to realize is that while you get this super nice, super clean, very simple JavaScript API, the actual implementation — it's kind of hard to see — this is from the source code of the Chrome browser. Chrome is open source, so you can actually go look inside, and it's actually C++. It's very similar to what Jerry did; I'm pretty sure if I dig in, I'll find a place that's exactly like what he was doing. There's not much difference in how the sound is actually generated — it's just that you have this high-level API that makes life simpler for you. So, like I was saying earlier, it's all about routing and connecting these nodes together. So what kinds of nodes does Web Audio provide by default? It provides a bunch of types. The first type I like to call sources. These are things like the oscillator — they generate audio of some kind, right? An oscillator generates a sine tone, like what we just heard; I was using an oscillator. There's the audio buffer source, as it's called — a very heavy name, but basically it means an audio file. I can load an audio file into it and it just plays the file back. It can be anything: a Rick Astley song, like we had earlier, or just a flute playing a small piece, or even a tiny snippet of sound — it doesn't matter. Take an audio file, put it in, and it plays back. These next two are a little more advanced. This one is used to take audio from, say, a video tag. Let's say you have a video tag and you want to take its output and change the voice of the person to Darth Vader or something — you could do all that using this. So you can get audio in from something else and process it.
And this one is for sending audio out over something like WebRTC to another person, if you're doing voice conferencing — sorry, I mean for getting audio in from some third party, not sending it out. So those are all inputs; that's how you get sound, how you create sound. Then you effect it. You have a gain node, which changes the volume of the sound. A biquad filter, which is what Jerry was showing earlier — it's a filter: you can do low-pass, high-pass, all that stuff. There's a delay node if you want to add some sort of echo to your audio, to your song. You can do some analysis. You can pan in stereo, so left-right panning, and you can do 3D panning — that's what we were talking about earlier, the spatial stuff with VR. There's already a pre-implemented node that does all the head-related transfer function stuff. Although it's kind of weak, because it assumes a single head-related transfer function — normally HRTFs are personalized, but this assumes a single one — it still does quite a decent job of 3D audio. A convolver node lets you do all sorts of really interesting reverberation processing. There's a wave shaper for doing distortion, and a dynamics compressor. So you really get a lot of free, pre-built effects, and all of this is optimized and pre-implemented in super fast C++ code for you in your browser — you don't have to reinvent any of these things. And of course, finally, once you've got the audio and processed it, you've got to push it out somewhere. The most common output is the destination, which is your loudspeaker. There's also something called an offline destination that lets you run a graph, record it into a buffer, and then either save it as a file or use it as something else — it's called pre-rendering.
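Pulling a few of the stock effect nodes just listed into one chain is just more `.connect()` wiring. A sketch, assuming any source node (oscillator, buffer source, media element source); the factory names are the standard Web Audio ones, the parameter values are arbitrary:

```javascript
// source -> gain -> delay -> stereo panner -> speakers
// `ctx` is an AudioContext; `source` is any Web Audio source node.
function buildEffectChain(ctx, source) {
  const gain = ctx.createGain();
  gain.gain.value = 0.5;          // halve the volume

  const delay = ctx.createDelay();
  delay.delayTime.value = 0.25;   // a quarter-second echo tap

  const panner = ctx.createStereoPanner();
  panner.pan.value = -0.5;        // lean toward the left channel

  source.connect(gain);
  gain.connect(delay);
  delay.connect(panner);
  panner.connect(ctx.destination);
  return { gain, delay, panner };
}
```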
And then there's the media stream audio destination — that's for when you're doing peer-to-peer communication or similar: you want to send audio to someone else on another computer or machine, and you can use this to do that. The way you connect nodes is the .connect() method. Here you see: make a new context, take a buffer, and connect the buffer to the destination. So you have a buffer that goes to the destination. If I play this, it's basically going to take whatever audio file I passed to this buffer and play it out. That's the big idea of how connections work: you take something, connect it to the output, play it, and the sound just comes out. Of course, that was a very simple graph; you can have much more complicated graphs. If you look at this specific application, which is a vocoder, this is like one-tenth of all the node connections there are — there are hundreds of nodes being made, connected, and removed at the same time in a more complex application. "Does Chrome have some kind of automatic graph visualization?" I'll get to that — no, it doesn't; Firefox does. Let me talk about a couple of other things I think we missed. There's something called parameters. You saw that we had a biquad filter, and filters and effects and gain and all these things — but how do you change them, right? How do you change the gain? You want to make the audio softer in this part, louder in that part; you want to change the filter's cutoff frequency. Every one of these nodes exposes things called parameters that you can change. You can see that the filter has filter.frequency, and it has a property called .value that you can set. So now the filter is going to start cutting off at 400 hertz. You can set the Q value of the filter. The oscillator also has a frequency.
The gain node has a gain. The buffer has a playback rate. So every node exposes a bunch of properties and parameters that you can change and tweak, in real time. You can basically use that to control and change your audio as you go. Things like making envelopes and doing panning become quite straightforward — you're just changing these values. I'll show you a couple of examples. One is something called parameter automation, which lets you change these parameters automatically — you don't want to keep changing them manually. Here is something very commonly used in low-level music production, called amplitude modulation. You'll hear it — it's probably going to be really loud, so I apologize if it is. Actually, you know what, I'm going to switch my audio to the internal speakers so it's not so loud — but if you can't hear it at the back, please come and sit up front. So what this is doing is: it has two oscillators, and one oscillator is modulating the other oscillator, and you can change its values. "How much longer is it going to be? I have some..." No, it's fine, it's all right. They're also not very complicated sounds — it's just going to be this kind of stuff. So you can do these kinds of things, have graphs where one node modulates another node, and do all sorts of automation. All of this is pre-built, again, so you don't have to sit and tweak a lot of these small things by hand. You can even connect nodes to parameters. This is slightly more advanced: you can have an oscillator changing the frequency of another oscillator, which is going to sound more like frequency modulation, as it's called. This sounds really weird, and you're like, so what's the point of all these things? But, you know...
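The two tricks described here — scheduling a parameter over time, and connecting one node's output to another node's parameter — can be sketched together as a simple tremolo (amplitude modulation). The `setValueAtTime` / `linearRampToValueAtTime` calls and the node-to-parameter `connect` are the real API; the rates and values are arbitrary:

```javascript
// A carrier tone whose gain is both automated (fade-in) and modulated by an LFO.
function buildTremolo(ctx) {
  const carrier = ctx.createOscillator();
  carrier.frequency.value = 440;       // the tone you actually hear

  const gain = ctx.createGain();

  // Parameter automation: fade in from silence over two seconds.
  const now = ctx.currentTime;
  gain.gain.setValueAtTime(0, now);
  gain.gain.linearRampToValueAtTime(0.5, now + 2);

  // Node-to-parameter connection: a slow oscillator wiggles the gain value,
  // which is amplitude modulation. For frequency modulation you would
  // instead connect the LFO to carrier.frequency.
  const lfo = ctx.createOscillator();
  lfo.frequency.value = 4;             // a 4 Hz wobble

  lfo.connect(gain.gain);              // note: connected to a parameter, not a node
  carrier.connect(gain);
  gain.connect(ctx.destination);
  return { carrier, gain, lfo };
}
```

In a browser you'd then call `carrier.start()` and `lfo.start()` on a real `AudioContext` to hear it.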
When I was making this presentation, I kept thinking: why is everything so low level? Where do I get to make my Rick Astley songs, right? Why is everything so... So, you know, Jerry talked about stuff in C++ where you're changing individual samples and multiplying them, and I'm talking about connecting nodes. But where, you know — how do I make music? Where is my songwriting? So, I think, if anybody has ever done any coding, you remember that the first thing you did was this, right? You printed out Hello World, and then you saw Hello World. And that's very, very different from what full applications do — stuff like, you know, what your DAWs do — but all applications started off at this point at some time. Similarly, if you ever wrote any visual code, you started off by drawing a black square. That's the standard thing: you draw a white square, a black square, a red square, whatever, and then you build up from there. So similarly, with Web Audio, you get all these really low-level, basic handles on things. They're much higher level than, I would say, the C++ kind of stuff, but still low enough that you don't directly see the application to, you know, your next music project. And that's where libraries come in, right? People have taken these things, bundled them up, and made some really, really cool things that you can just drop in place and start using. So I'm going to talk about a few libraries that are super cool. There's something called Tone.js — a very widely used Web Audio library for doing basic musical stuff. If you want your scales, if you want a small synthesizer where you sequence some notes and it plays through, or you give it a bunch of samples and it sequences them — all that.
This library does it for you, and you don't have to deal with any of this node connection stuff. It's a very simple API and it's very well maintained, so you can look at that. This next one is something that I made at one of the previous places I worked. It's more for interactive things: if you want something where, as an alien moves around on screen, the sound changes with it, this has a lot of functionality to let you do that very quickly. There's Babylon.js if you want to make games — Babylon.js is great for adding sounds to games. It does things like: whenever there's a collision, play this audio file. So for all these cases where you need things in a specific way, frameworks and libraries already exist to help you do it in that specific way. And this is kind of a plug: there's a list that I maintain of super cool libraries, frameworks, and just awesome apps built on Web Audio. You can go look, and if you find something new, you can add it — it's open source, so you can just play with it. There's also a bunch of tools — and Cedric, you asked about this. Firefox has this thing called the Web Audio Inspector. If you use Firefox, download the Firefox Developer Edition and it comes with it. You take any Web Audio webpage, open your Inspector, and there's a tab called Web Audio that shows you the graph of what you have coded. So if you've written some code and you want to check whether it works, this thing helps you do that quite easily. There's another cool thing, a website called Canopy, and I think I have it running, so I'll see if it works. This is some code that somebody wrote that does something similar to what I did earlier — just makes some sound. Now let's see — this is probably not going to kill everybody's ears.
Yeah, it's just a simple wave. But it also has this thing where it shows you the graph: you write your code at the bottom, and it shows you the graph that you generated based on that code. It also shows you the audio that's coming out of it. So it's super handy for debugging and playing with this stuff. There's something called Recorder.js, which lets you plug Web Audio into another browser feature that lets you record from a microphone. This lets you do things like speech recording, or changing somebody's voice — Recorder.js is a really nice library for that. And for the last bit, I'm going to show you a few wacky, cool, crazy things that people have built with Web Audio. The first, of course, has to be the Acid Machine — just because. Okay, I'm going to turn off the big audio for this, because otherwise it's just going to annoy everyone. All right, let's play. Can you guys hear it? Yeah, this would have been nicer with the loudspeaker, but it's fine. The links are there — I'll share the link to the talk, and you can go listen to it. And this is all done for you, completely synthesized in the browser. The next one is very different. This was an art project by some people who had access to a large repository of bird sounds from all over the world, so they built this app. Unfortunately, part of it is broken: it's supposed to find my location — so it shouldn't say Paris, it should say Singapore — but that bit is broken. Still, it makes a soundscape from these recordings of all these different birds. And I think this might be fine... yes. This next one is another favorite. It takes a bunch of software code and tries to see what it sounds like.
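As an aside on the Recorder.js point above: the microphone side rests on `navigator.mediaDevices.getUserMedia`, whose stream becomes an ordinary Web Audio source node you can route like anything else. A sketch with the stream passed in as a parameter — the gain stage is just an illustrative placeholder for whatever processing you'd actually do:

```javascript
// Route a microphone MediaStream through a Web Audio graph.
// In a browser, `stream` comes from:
//   const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
function wireMicrophone(ctx, stream) {
  const mic = ctx.createMediaStreamSource(stream);
  const gain = ctx.createGain();
  gain.gain.value = 0.8;          // trim the level a little

  mic.connect(gain);
  gain.connect(ctx.destination);  // careful: this can feed back into the mic!
  return { mic, gain };
}
```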
So this is some HTML that I found somewhere, and it's trying to synthesize it. Actually, this is the HTML of my slideshow — so this is very meta: my slideshow is playing itself. It's weird. Oh, and I can change all these things. It's kind of cheap, because they're using some kind of scale and mapping onto it, so it's always going to sound musical regardless of what you do, because you're mapping onto a musical scale. But yeah, I think it's just based on text — you're taking some text and hashing it into something. "What about the HTML tags?" No, I think it's just text-based. You can give it any code — it doesn't have to be HTML. You can give it C code, Swift code, whatever, just a bunch of text, and it'll generate something. It is. It's awesome. It's fun. This is the best thing about it: you get all these super crazy, wacky things. That one is, of course, Game of Life. This probably might sound better with the loudspeakers — let's try that. This last one is something I built as a demo for showing off some stuff, and it uses Karplus-Strong synthesis, a famous algorithm for synthesizing string instruments. There's a JavaScript version of it that somebody wrote for Web Audio, so I used that. It's supposed to work with another version of this controller — sorry, this is a Korg nanoPAD — it's supposed to use the pots, which is why you see these circles: they're supposed to represent pots. Unfortunately, I don't have that version, I have the pad version, so I can't map it straight on. But if you have the one with the pots, you just plug it in and it should detect it and work automatically. You can definitely also talk OSC over WebSockets or anything, because it doesn't matter at that level.
But with MIDI, there's this new related standard to Web Audio called Web MIDI that lets you connect physical hardware MIDI devices to a browser and communicate with them. So that's how this should work. I'll try to find another application later, maybe, and we can see if it works, but I'll try this one first. So this is, again, completely synthesized — a bunch of mathematical equations generate this sound — and you get to change some things. So I change this... at some of these parameter settings it gets a bit wacky, but it works out. The cool thing here is that you can totally see that, leaving aside my really crappy playing skills, this could be an instrument. You could totally play this, and you could even perform with it. From a performance perspective, from an ability-to-create-music perspective, it works. Of course, the musician, in this case, sucks. But these are the kinds of things you can do with Web Audio. So that's all I have, and I would now say: go make some noise. "Can I suggest another thing? With Web Audio, there's this Doppler effect demo someone put online. Do you remember the name of that? Daniel Rapp, was it?" Yes, Daniel Rapp — the Doppler one. It's a super cool application. And this is really weird — it's totally not meant to be a musical application, but it's super cool. It's supposed to let you detect hand movement by emitting some audio from your laptop and then seeing the reflection of that in the microphone: you send some sound out, you listen to it on the microphone, and you hear it change because of a hand in front of it. Let's see if it works. I don't know — I remember he had like a... yeah, you have to click that. Oh — I need to use my loudspeaker, right?
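For a flavour of what Web MIDI hands you: each message is a few raw bytes, and turning a note number into an oscillator frequency is one line of equal-temperament math. The byte layout (status `0x90` plus channel for note-on) and `navigator.requestMIDIAccess` come from the Web MIDI spec; the helper names here are my own:

```javascript
// Equal temperament: MIDI note 69 is A4 = 440 Hz; each semitone is a 2^(1/12) step.
function midiNoteToFrequency(note) {
  return 440 * Math.pow(2, (note - 69) / 12);
}

// A MIDI message arrives as bytes [status, note, velocity]. Status 0x9n with
// velocity > 0 is "note on" (a note-on with velocity 0 counts as note-off).
function handleMidiMessage(event, onNoteOn) {
  const [status, note, velocity] = event.data;
  if ((status & 0xf0) === 0x90 && velocity > 0) {
    onNoteOn(midiNoteToFrequency(note), velocity / 127);
  }
}
```

In a browser you'd hook this up with `const access = await navigator.requestMIDIAccess();`, set `input.onmidimessage` for each entry of `access.inputs.values()`, and point the resulting frequency at an oscillator's `frequency` parameter.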
It dips a little because the machine is busy, but we're still all right. So you can see the 20 kilohertz tone, which no one can hear, on the right, and you can see the Doppler effect to the right and left of it in the zoomed view. "So could you try just moving your hand forward?" Forward... that's it? "Yeah, just go forward over it." I don't know where my loudspeaker is — it's probably here. Yeah, there. Go, go, go. "Okay, so when you do it nicely, you can see the gap. You get the effect on the left and the right, and you can play with it." Is there one here? Yeah. Okay, I'd better stop this. Can I stop? Maybe this is better with... Total virtuoso. Okay. There's some weird shit. Anyway. If you want to play with any of the stuff I mentioned, or want the links to any of that, you can go to my website — these slides should already be there — and you can go through them and play with any of the things. If you have any more questions about Web Audio, you can ask me now, or if we don't have time, you can ask me anytime later, or tweet at me or whatever. I really love Web Audio, and in general all things audio, so I'm always willing to have a chat about anything cool that you want to try to do or are playing with. Thanks. Any questions? Just want to add that the download link for all the stuff I showed is in the comments section. Oh yeah, there you go, I've got it. Yeah, it was fun. All right — questions? Otherwise, if anybody wants to make any announcements or talk about anything they're doing, go ahead.
So all the stuff that you and Jerry talked about, how does it tie into something like what Pandora does: trying to identify genres in music and figure out what someone's going to like?

It's similar in certain scenarios. One of the things I very briefly covered was the analysis part of Web Audio. The whole point of Web Audio is real time, right? The entire idea is that this is for making sound. But a lot of the analysis stuff that you can do in Web Audio, or with just simple code, you can expand to do much higher-level analysis: what kind of music it is, what genre it is. You can run algorithms that look at entire songs and say, okay, this song is in this scale, or look at a hundred songs and get a vague idea of what genre they're in, or you can do stuff like machine learning. It's a similar concept: you open up an audio file, look at individual samples, process them together, do some mathematics on them, and find some descriptors of them. So you can do things like Fourier transforms, find spectrums, find cepstrums, all these mathematical operations to get analytic metrics, and then feed those into machine learning, which is probably what I'm guessing Pandora does to guess the genre of a song. So this is like a small baby step in that direction, but there are a bunch of libraries that work with Web Audio if you want to play with this stuff, and also a bunch of offline libraries in Python and C++ that do this, where you give them a bunch of songs and say, you know, figure out what the genres are.

So, roughly speaking, in a machine learning context, you'd use these kinds of algorithms to generate the independent variables, and you'd obtain the dependent variable from somewhere else and train on it.
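To make the "find descriptors and feed them to machine learning" idea concrete, here is a sketch of one classic descriptor, the spectral centroid, the "centre of mass" of a magnitude spectrum. The function name and the bin-to-frequency mapping are illustrative assumptions on my part, not any particular library's API.

```javascript
// Spectral centroid of a magnitude spectrum.
// `mags` is an array of magnitudes (e.g. from an AnalyserNode or an
// offline FFT of length 2 * mags.length); `sampleRate` is in Hz.
// Bin k corresponds to k * sampleRate / (2 * mags.length) Hz.
function spectralCentroid(mags, sampleRate) {
  let weighted = 0;
  let total = 0;
  for (let k = 0; k < mags.length; k++) {
    const freq = (k * sampleRate) / (2 * mags.length);
    weighted += freq * mags[k]; // frequency weighted by its energy
    total += mags[k];
  }
  return total > 0 ? weighted / total : 0; // silence maps to 0 by convention
}
```

Descriptors like this (centroid, rolloff, MFCCs and other cepstral features) computed over many frames of many songs are the kind of feature vectors a genre classifier would be trained on.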
Yeah, there's already tons of research being done in this area. They generally call it data mining for audio: they try to find useful data in a bunch of audio files, either by using tags, meta tags, or by doing audio analysis, and then feed that into machine learning kinds of things.

There was a graph at the beginning where you showed the browsers. Yes. So the implementations are different? There were boxes that were green and boxes that were red, but are there still differences between the browsers?

Yes. So the way the web stuff works is that the W3C, which is the governing body that creates the specs for the web in general, creates a spec. It says you should have... if you want to be compliant with the Web Audio API, you should have audio buffer source nodes, and you should be able to do this and that. Exactly how they do it differs per browser. There are some really, really tiny differences in how you code things, but most of the time it shouldn't bother you. This is the only spec process I've actually followed; apparently spec work in browsers is usually super aggressive, with flame wars and tons of fights everywhere: "I'm not going to do this," "Internet Explorer sucks," "I'm not going to do what they did," and all that kind of stuff. Thankfully, Web Audio is one of the nicest ones: everybody sort of works together and agrees. Most browsers are not trying to fight each other; they're trying to work together. In fact, many times they share code. So it works really well across browsers, and I've rarely had problems with browsers not doing what they're supposed to.

Magenta, was it? Magenta? Yeah, Project Magenta, I think. Yeah, they're trying to get their machine learning stuff to analyze and learn about audio and music.
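One concrete example of the "really tiny differences" between browsers mentioned above: for a while, Safari shipped `AudioContext` only under a `webkit` prefix, so feature detection looked roughly like this. The helper name is my own, and `globalObj` stands in for `window` so the logic can be exercised anywhere.

```javascript
// Return the AudioContext constructor, falling back to the old
// webkit-prefixed name, or null where Web Audio is unavailable.
function getAudioContextClass(globalObj) {
  return globalObj.AudioContext || globalObj.webkitAudioContext || null;
}

// In a browser you would then write something like:
//   const Ctor = getAudioContextClass(window);
//   const ctx = Ctor ? new Ctor() : null;
```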
Analyze, say, 150 popular songs and create some new, interesting music? I think it can work. The interesting thing, though, would be to define what "good" is. I'm guessing if you use popular songs, it probably could work. But music is such a subjective thing that it's going to be really hard to get something that many people like. So I'm not sure if it could work. I'm pretty sure somebody has tried this. They must have, because Google released the machine-learned images, right? Yeah.

Have you heard about it? Yeah, they've tried it a couple of times. But in this particular area, it's marketing more than anything else... I don't know. Once you get into this world, you start seeing how varied the term "music" can be, and it's amazing. I would be very surprised if you could predict a hit that way. The reason being that if you listen to all the songs from a given band, many of them sound exactly the same, right? Listen to The Cars: almost all their songs sound the same. But only a couple were hits. Why those? It's marketing and so many other things that are involved. So if you're saying you analyze all this and then make a song, and the question is whether it will be a hit, I think it's probably not going to be a hit. If you create a song that way, the most you can say is that it will be likable. Yeah, possibly, maybe with some reasonable degree of accuracy. But that's probably the best you could aim for, I'm guessing, I'm not sure. Maybe you can invite a hit marketer next and get him to share the formula. Yeah, that could be. Maybe try to encode his formula in a library and open-source it. Then all music will be open source. I know.

Does anybody have any announcements, any projects, anything that you want to share? It's not something I do, but...
She has something to show. Do you need the website? So, this is not something I made, but I think it's pretty cool, and it might be fun for some of you. Yeah, can you open one of the spectrum images we see there? This one? Yeah, maybe the second one, it's simpler. So that's it: you can visualize the spectrum with your smartphone. It's free. You just point it at the image. Nice. And the idea is super simple. It's just a kind of piano score, if you think about it. Each of the lines is like a note you'd play on a piano. So you could recreate it with Web Audio: you would just need a few oscillators. You take a slice of the image and you say, I need to generate a tone at this frequency, and this one, and this one, and so on. Maybe some of you can do it. I would love to hear the music.

How do you create this, though? Is it a normal spectrum? No, but... So basically, I looked at the specifications: there are eight octaves, so at 12 semitones per octave, that's 96 oscillators. So you can't generate it from an audio file, right? You just generate it from oscillators. No, no, but how do you create the image? Oh yeah, you can create that too. So everything is on the website? Yeah, exactly.

That sounds like a very weird website. I'm going to try this. If something weird happens, it's all Cedric's fault. Yeah, no, I played with it on Linux. I think it also works on Mac. Cool. It's probably not that hard to do. And look, there are iOS and Android versions too. Yeah, I think it's easy to do with Web Audio, you know. Yeah, it should be quite straightforward: you take the picture, you take a slice, and you generate sound at each of the frequencies you have. That's pretty cool, so you can even play it backwards. And if you tilt it, you can also distort the thing. Oh, nice. You can see the weird chirps.
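The slice-to-oscillators idea described above can be sketched in a few lines of JavaScript. The base frequency (A1 = 55 Hz for the lowest row) and the brightness threshold are assumptions on my part, since the app's real tuning wasn't specified; only the "8 octaves, 12 semitones, 96 oscillators" layout comes from the conversation.

```javascript
const BASE_FREQ = 55; // Hz, assumed lowest row (A1)
const ROWS = 96;      // 8 octaves * 12 semitones, as described

// Map an image row (0 = bottom) to a frequency, one semitone per row.
function rowToFreq(row) {
  return BASE_FREQ * Math.pow(2, row / 12);
}

// Turn one vertical slice of the image into oscillator voices.
// `slice` is an array of 96 brightness values in [0, 1], bottom row first.
function sliceToVoices(slice) {
  return slice
    .map((brightness, row) => ({ freq: rowToFreq(row), gain: brightness }))
    .filter((v) => v.gain > 0.1); // assumed threshold: dark rows stay silent
}
```

In a browser, each returned voice would map to an `OscillatorNode` feeding a `GainNode`; scanning slices left to right plays the picture.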
So it doesn't care about that; it just takes a slice. Okay, so this is what it is. Yeah. Oh. So many. Do you want to do it? Yeah. Let's do it. All right. Well, it involves some... yeah, we need some image recognition. We need some image people. We can find them. Let's do it.

Anybody else? Any projects, any ideas, anything you want to share? Also, any feedback for SUP on the meetup. I'm just saying this on his behalf, but please let him know, so we can change the format: too deep, too shallow, what do you want to talk about, what do you want to hear about? I think it's nice to have an open community where we can all figure out together what we're interested in and have people share about it. And if anybody wants to share something at the next meetup, let SUP know as well. Yeah. We have it every month.

Anyway, if you want, we're going to try the thing that Navi was talking about. One of the things that is really difficult for meetup organizers is finding speakers every month. For those of you who don't know me, I also run the iOS meetup in Singapore, and I've been running it for the last four or five years, I think. So if no speaker came up, I was the one speaking all the time. That's not really scalable, but I learned a lot of things while speaking, and I love that it gives me visibility in the community as well. But I can't really do that for music tech, because I don't have the right kind of skill set for music tech in general. So those of you who are interested in sharing at the next meetup, feel free to come forward. It's a really nice way to get known among the music tech people in Singapore and to show what you're all doing.
And I think there is another event happening this week, at Google on Thursday, if I'm not wrong. It's by BandLab or something like that. Do you know the URL? I've shared it on the Facebook page. So yeah, just go and check it out if you're free. Did you share it on the Facebook page? I sent you a message. Oh, did you? Okay. But yeah, the way this is going to work is I'm planning to have it on the third Monday of every month, unless it's a holiday; if it's a holiday, we'll have it the week after. Oh, yeah, this one. Yeah, so it's Sound Lab, not really BandLab. It's more of an album launch. I'll put this on the meetup.com page as well, so you guys can check it out. Okay.

Yeah, I'll show some things. This is short. Okay, I have one last thing; maybe you'll like it. This one is shameless, like... You okay with your mic? Yeah, I should be able to... So basically, I used to be a DJ, so I like scratching vinyl. And I'm also an electronic engineer, so I've been trying to put the two together. I made a little Arduino system. Arduino is a tiny microcontroller, a tiny electronic circuit that you can program in C or C++. And I made something that allows you to scratch using just a microcontroller and a hacked mouse, basically. That was during a residency in a fab lab. Sorry, I should have an account... I do have an account, I'm just not logged in. Basically, we were challenged to recycle most of what we used. The artist plays with these kinds of boxes that you can see all over, and he makes big sculptures; in some of them you don't even see that he's using this kind of wood. Like this one, whatever. But with this one, I didn't prepare anything, I just play whatever. So this one is a rotating object. It's kind of an ancestor of cinema: you have mirrors and little sculptures.
And when you make it rotate fast, it looks like it's animated. It's called a praxinoscope. And we sonified it, so we called it the praxinosonoscope, or whatever, I don't even remember. Praxinosonoscope. Yeah, this one. So basically, we used the inside of a mouse: the old-school kind of mouse with the ball. The ball is connected to a little system that you can see here, a disc rotating with little teeth, and there's an optical sensor that counts the teeth as they rotate. You know which direction you're going because there are actually two sensors; it's quadrature encoding. So we reconstructed this little disc with teeth: we laser-cut a much bigger one in paper, mounted it on wood to make it a bit more solid, and placed it underneath a bicycle wheel. This is the sensor that we extracted from the mouse; it's just hot-glued on, so you can see it from the inside. So it looks like that, and the teeth are counted like that. And that's how we did the mirror thing. And this is the first kind of working test.

There is this library called Mozzi, like a mosquito, M-O-Z-Z-I, for Arduino, that allows you to generate sound. It's quite simple if you're familiar with C and audio processing. All the code is also available here, if you're interested. Basically, we get the data from the sensor in an Arduino, and the measured data, I graph it here. I just apply a very simple smoothing filter: the dirty data here is the original measurement from the sensor, and then I smooth it. And I use this sensor measurement, which is basically the rotation speed, to control the speed and direction of playback of a sound. When the speed is constant, you get something flat, and the sign gives you the direction. That's it. So if you're interested in hacking things like that, everything is on GitHub, it's documented here, or just come and talk to me. Very interesting.
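The two ingredients described above, quadrature decoding and the "very simple smooth filter", can be sketched as follows. This is plain JavaScript for consistency with the rest of these notes, whereas the actual project uses Arduino C++ with Mozzi; the step-direction sign is just a convention, and the filter coefficient is an assumed value.

```javascript
// Quadrature decoding: two optical sensors A and B, 90 degrees out of
// phase, each reading 0 or 1. Given the previous and current (A, B)
// states, return +1, -1, or 0 (one step in either direction, or none).
function quadratureStep(prevA, prevB, a, b) {
  const prev = (prevA << 1) | prevB;
  const curr = (a << 1) | b;
  // Standard 16-entry transition table, indexed by (prev << 2) | curr;
  // invalid double-transitions decode to 0.
  const table = [0, -1, 1, 0, 1, 0, 0, -1, -1, 0, 0, 1, 0, 1, -1, 0];
  return table[(prev << 2) | curr];
}

// One-pole smoothing filter, "I just apply a very simple smooth filter":
// y[n] = y[n-1] + alpha * (x[n] - y[n-1]); smaller alpha smooths harder.
// The smoothed step rate is then the rotation speed (and direction)
// that drives the playback rate of the sample.
function makeSmoother(alpha) {
  let y = 0;
  return (x) => (y += alpha * (x - y));
}
```

Counting steps per unit time through the smoother gives a signed rotation speed; feeding that to the sample playback rate is what turns the bicycle wheel into a scratchable record.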
It was very fun. And what would you say about your experience with the Raspberry Pi? It's a Linux machine, so it's as good as any other Linux machine. But is it slow? No, it's fine. This is something I remember from a long time ago, when I first started doing audio stuff: people would ask me, do you work with hardware, do you need crazy sound processing cards? These days, with the things that we have in our pockets, you can do really, really fast audio processing without using much processor power. You don't really need much power for most audio processing. If you're doing some really crazy stuff, yes, you'll need a somewhat beefier computer, but most of the time it's fine.

Any other announcements, guys? I started a company last year designing noise-cancelling headphones. I would love some input from audiophiles or people like that. So you'll give a talk? Yeah, a talk on how cancellation works. Okay, not today, a longer one. Like the basics of cancellation? Yeah. That's fine, you have a talk.

Yeah, so the other thing about talks that I've learned while organizing meetups is: if you want to learn about something, volunteer to give a talk about that topic, and, guaranteed, within a month you'll have forced yourself to learn it. If you're interested in something and want to try it out, it can be really simple; it doesn't have to be complicated. Don't worry, just come up, volunteer to give a talk, and by the end of the month you'll be pretty good at it. Thanks, guys. Thanks for coming. This is the end of the meetup, but feel free to hang out and talk to each other. Thank you.