This was a random collaborative project between Adnan, Rahim, and a few other people. Actually, a lot of people helped out with this one. We originally targeted Maker Faire, but we couldn't make it because we ran into lots of issues. But let me go through it. It's an experimental physical interface for the NSynth algorithm.

It all started with Magenta. Magenta is a Google project about using the power of machine learning to make art. About a year ago, they posted NSynth, which is neural audio synthesis: a neural network that makes audio. I was like, cool. Sound plus neural networks, what could possibly go wrong? This is the way you actually run the thing: a Jupyter notebook, and you need GPUs and stuff. The basic idea is that you can take a sound and extract it into a model, and when you run that model, it generates the sound again. That's the very basic idea of what this does. And I was like, wow, this is super cool. And there's no internet here. Yeah.

And there are some really cool things. They made a website that lets you play with this algorithm online. The basic idea is that they have a trombone that they recorded and made a model out of, and if the internet works, I should be able to play it. Let me try it again. So this is machine learning making sounds.

This is all cool, but one quick thing about the algorithm. Normally, when you mix two sounds in audio, you basically hear both of them together. If you mix a clarinet sound and a trombone sound, it sounds as if both of them are playing at the same time. But what this does, and especially what you heard in the demo earlier, is give you a combined sound of the two. You can sort of add the two together and come up with a new instrument, almost as if a trombone and a clarinet had a baby. That was the thing that totally blew my mind. I was like, that's super cool. You can do so many things with it.
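To make that difference concrete, here's a minimal numpy sketch. It's not the actual Magenta code, and the "encodings" are made-up placeholder arrays, but it shows the idea: ordinary mixing adds waveforms, while NSynth averages the learned embeddings and then decodes the average back into audio (the decode step needs the trained WaveNet model, so it's only described in a comment here):

```python
import numpy as np

SR = 16000  # NSynth works on 16 kHz audio
t = np.linspace(0, 1, SR, endpoint=False)

# Stand-ins for two instrument recordings (pure tones for illustration).
clarinet = np.sin(2 * np.pi * 440 * t)
trombone = np.sin(2 * np.pi * 220 * t)

# Ordinary audio mixing: you simply hear both instruments at once.
naive_mix = 0.5 * (clarinet + trombone)

# What NSynth does instead (conceptually): encode each sound into a
# time series of embedding vectors, average *those*, then decode the
# averaged embedding back into audio with the WaveNet decoder.
# These embeddings are random placeholders, shape (time_steps, dims).
enc_clarinet = np.random.randn(125, 16)
enc_trombone = np.random.randn(125, 16)
enc_hybrid = 0.5 * (enc_clarinet + enc_trombone)  # the "baby" instrument

print(naive_mix.shape, enc_hybrid.shape)
```

The point is that the averaging happens in the model's embedding space, not in the waveform, which is why the result sounds like a single new instrument rather than two instruments playing together.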
But nobody's going to make music on a laptop. That's just boring. Nobody's going to be typing on a keyboard making music. (Although there are laptop orchestras.) So another group at Google, called Creative Lab, took the sounds from NSynth and created a musical instrument called NSynth Super. Wow, that's great. When this came out, I was like, wow, this is super cool, I want one of these. And then on the website, you scroll down and it says all the technology and design used to create NSynth Super is available as an open source project. When you go and look at it, it's a proper GitHub page. It's got all the documentation. I'll go through some of the stuff they actually have, so let me just open the page up.

This is the PCB part, which I'm guessing most people here will be excited about. They have the entire documentation on how to build the PCB. The first thing you'll notice is that they removed the touch screen, because I'm guessing it's a bit too expensive to get that big a screen. They went for this form factor instead, so they have a PCB that's this big with a touch pad. There's no screen, but you can still touch it and it does the whole morphing bit. There are a bunch of rotary encoders. The actual music synthesis happens on a Raspberry Pi behind it. And they have really nice instructions telling you how to put it together, how to solder it, all the things involved. There's the touch interface. This is the audio bit: the audio is generated with an external DAC because they don't trust the DAC onboard the Raspberry Pi. And they shrunk the display down to something really tiny. But it's super cool. All the documentation is there. They have the documentation for the software too. It uses something called openFrameworks, which is a C++ framework for doing creative and audio stuff.
They have the firmware for the microcontroller on there. And they have all the audio samples so you can do a quick demo. Unfortunately, it's 64 GB of sample audio. It's huge. It's a lot of files. I started downloading it on the first day with an estimated time of six hours, left it overnight, and had the 64 GB the next day. It was painful to get down, but at the end of the day, I got it.

Just a quick overview of the whole setup for anybody who's interested. There's a Raspberry Pi that runs the openFrameworks bit. They use off-the-shelf touch sensors, two of them, for the touch panel; you just get the touch position from those. Note data comes in as MIDI, which is taken in by the STM32 and transferred to the Raspberry Pi. The LEDs are driven by the Raspberry Pi, and the DAC is also driven by the Raspberry Pi. Very simple, straightforward PCB; very simple, straightforward circuit. Not very hard to make.

Of course, the schematic is also online, and so are the Gerbers. Everything is there, literally. All you need to do is take them and send them to your favorite PCB manufacturing hub, and you get PCBs like this. I got it in green because that was the colour chosen by the people I asked. The BOM is also online, thankfully, so I could go and find all the components. Unfortunately, they're all UK RS Electronics links, so half of them work and the other half aren't even available in Singapore. We had to do a lot of swaps and a lot of hacks, but we managed to get it all soldered up and working.

On the back side, you'll see the mesh for the touch panel. This is the STM32, these are the touch sensors, this is the MIDI input, the display, the audio output bits, and the Raspberry Pi itself. Lots of funny things happened, though. We got a random display from AliExpress and realised the pinouts don't match, so of course you've got to bodge all the cables there. Usual stuff. So we had this working; this is the first trial of it working.
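To give a feel for what travels over that MIDI link, here's a small sketch of decoding a raw three-byte MIDI channel message in Python. The actual firmware does this in C on the STM32; the byte layout (status byte, note number, velocity) comes from the MIDI 1.0 specification, and the function name here is just mine:

```python
def parse_midi_message(data: bytes):
    """Parse one 3-byte MIDI channel message into a dict, or None.

    A note-on is 0x9n (n = channel), followed by note and velocity bytes;
    a note-off is 0x8n, or a note-on with velocity 0 (running convention).
    """
    if len(data) != 3:
        return None
    status, note, velocity = data
    kind = status & 0xF0       # high nibble: message type
    channel = status & 0x0F    # low nibble: MIDI channel 0-15
    if kind == 0x90 and velocity > 0:
        return {"type": "note_on", "channel": channel,
                "note": note, "velocity": velocity}
    if kind == 0x80 or (kind == 0x90 and velocity == 0):
        return {"type": "note_off", "channel": channel, "note": note}
    return None  # anything else (CC, pitch bend, ...) is ignored here

# Middle C (note 60) pressed on channel 0 at velocity 100:
msg = parse_midi_message(bytes([0x90, 60, 100]))
print(msg)  # {'type': 'note_on', 'channel': 0, 'note': 60, 'velocity': 100}
```

The firmware's job on this path is essentially that: turn incoming MIDI bytes into note events and hand them to the synthesis engine on the Pi.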
The video is kind of horrible, but when I show the demo later, you can check out the display. The small OLED is beautiful, and so is the interface they have set up to make it work. You place your finger on the pad, and the four corners give you different instruments, so you can morph between them across the pad. It's super cool. Thanks to Terrence from UWC Tampines, I got a laser-cut box made; they have the box designs online too. So literally, they've done everything for you. All you need to do is compile the software and make it run. It's a nice box. It's got the MIDI input, the audio output, and the power.

Of course, none of my MIDI keyboards actually output DIN MIDI anymore, because these days everything does USB MIDI. So, well, time to bodge again. I had to cut a hole in the nice case to expose a Raspberry Pi USB port so I could plug in my MIDI device, plus a couple of openFrameworks hacks to get the MIDI from USB instead of from the STM32.

Otherwise, you ask, where do you do the cool thing, the machine learning, right? I had my whole synthesizer set up and I was like, I have my 64 GB of samples; now I want to make my own samples. How do I do that? So I read about the audio pipeline, which is how you create new samples: how you record sounds, make models out of them, and morph between them. And then I saw this bit: they set it up with 8 NVIDIA K80 GPUs. How? I have no idea, and I don't want to know. So I'm stuck at this part now. I have the whole thing set up, and I'm trying to figure out how to make new models for it. Any ideas or help would be appreciated. Or if I could get access to 8 NVIDIA K80 GPUs or something similar, we could talk and see what we can do with it. Or I could spend a lot of money, and that's probably not a good idea. I should have gone to the TensorFlow meetup instead of coming here. It's at Google right now, right?
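The four-corner morphing can be pictured as a bilinear blend: the touch position weights the four corner instruments by how close you are to each corner. (On the real device the blends are precomputed by the audio pipeline and the touch position selects among them; the corner names and 16-dimensional placeholder embeddings below are made up for illustration.)

```python
import numpy as np

# Placeholder embeddings for four corner instruments (made-up names/values).
rng = np.random.default_rng(0)
corners = {name: rng.standard_normal(16)
           for name in ["flute", "organ", "clarinet", "trombone"]}

def morph(x: float, y: float) -> np.ndarray:
    """Bilinearly blend the corner embeddings for a touch at (x, y) in [0,1]^2.

    (0,0) is the flute corner, (1,0) organ, (0,1) clarinet, (1,1) trombone.
    The four weights always sum to 1, so the blend stays a convex mix.
    """
    return ((1 - x) * (1 - y) * corners["flute"]
            + x * (1 - y) * corners["organ"]
            + (1 - x) * y * corners["clarinet"]
            + x * y * corners["trombone"])

center = morph(0.5, 0.5)  # equal 25% blend of all four instruments
```

Touching exactly on a corner gives you that instrument unchanged, and the middle of the pad is an even mix of all four, which matches how the instrument feels to play.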
But that's the quick adventure I had with the whole thing. I have it all set up. I probably don't have enough time for a demo, but what I'll do is set it up afterwards, and you guys can come over and play with it. It's super fun. I have a keyboard and a small loudspeaker, or we can plug it into the AV system later. We'll have some fun with it. The question is, are we jamming or not? That's all.