Towards the end of last year I was suffering some form of malaise, and I eventually self-diagnosed: my problem was that I'd been thinking too much. My day job requires a lot of thinking. OpenStack tends to do that to people. And my downtime had thinking of its own: a backlog of home projects, sewing and 3D printing and renovations and whatnot. I just needed to dial it back a bit. Even when I was on Twitter, I was contemplating the consequences of a Trump presidency or whatever else horrible was lurking in my feed. So I decided I needed a new hobby, and I had some criteria for this hobby: it had to be something which requires less problem-solving, no coding, no soldering, no making. Just learn a new skill, a nice hobby thing. What I eventually came up with is that I really should get back into learning a musical instrument, and I decided on an electric guitar. Back in the day at school I was self-taught and hacked away with precisely zero technique or precision and no teaching. I did that for a few years, moved on to electronic music at university, and had a good old time making jungle and drum and bass. Then I finally got a real job where I spent all day sitting in a chair in front of a computer, and suddenly found that going home to sit in a chair in front of a computer to make music was not even remotely enticing. So I dropped the whole lot and haven't done anything musical for the last 20 years. But it seemed like a good thing to get back into, and this time I was going to do it right. I was going to get an instrument of sufficient quality to actually be enjoyable to play and make a nice sound, and I'd really apply myself to actually learning it. The resources these days for learning music are just phenomenal: even YouTube aside, the books and whatnot make it a great time to pick up an instrument. It's something I would thoroughly recommend.
We all tend to be quite techie people, and we assume the key to being better at techie things is to do more techie things. But sometimes it's not; sometimes it's just to have a more balanced life, and having a creative outlet is often very good for that sort of thing. So if anyone else is teetering on the edge, I'd thoroughly recommend giving something a go. Now, one aspect of having an electric guitar that sounds good is that you need things further down the chain to actually make the sound, and that tends to be a collection of pedals. Building up a collection of pedals is meant to be this process where every single pedal is carefully considered and really earns its way into the rack, and once it's in, it's your dedicated thing for doing that particular task. So it can take a while to get your perfect collection, and once you have it, you get this logistical challenge of how it's going to be arranged and how you're going to wire it up. This looks lovely, but more often than not you end up with something looking like this. So we have a layout problem, and we have routing problems: power and signal lines, real-world wires which limit the flexibility of how things can be arranged. There are solutions to all of these, and you can end up with something like this. But again, that's very expensive and a lot of effort, and if I did this, it would violate the no soldering, no making, no problem-solving criteria of my hobby, so I didn't really want to go down this path. What I ended up with is this thing: a Boss GT-100 effects processor. It has a bunch of effects. Here's a representation of the pipeline that's inside it. It models a series of discrete effects boxes, and it's not a completely generic signal-processing unit; it's very much focused on guitar-specific requirements. I don't know if you can see those codes up there, but you've got the usual: a compressor, overdrive.
Then it splits into two paths, A and B, each with its own dedicated preamp, preamp A and preamp B, and then it gets mixed together again. We've got graphic EQ, delay, chorus, reverb, just the usual stuff. But see those on the left there, FX1, and towards the right, FX2: they can each be set to any one of a number of effects. So what you end up with is basically every Boss effects pedal ever, in a pipeline where you can reorder things at your whim and set any of the options at the push of a button. But that's not all. What I really wanted relates to some limitations in playing at home: I've got a family, I've got a small house, I can't be cranking through a big amp. And the amplifier is actually a really important part of the guitar sound. There are things that happen when you overdrive a tube amp to the point where it distorts: tube distortion is very pleasant and part of the characteristic sound of certain styles of music. This unit has the capability to model a bunch of classic amps, a Fender Bassman, a Marshall Plexi. Incidentally, the Bassman is terrible at playing bass, but to be fair, every single amp in the '50s was terrible at playing bass, and it wasn't their fault. But that's not all. Another really important aspect of the guitar sound is what happens when it comes out of the speaker, into either the ambient space or a microphone, to be picked up and sent to the mixing desk. Microphone placement is a whole art form in itself in a big studio; the placement and brand of microphone have a huge impact on the tone. And the GT-100 can simulate how many speakers you have in your cabinet, what size the speakers are, whether it's an open- or closed-back cabinet, what brand of microphone, how far the microphone is from the speaker, and how offset it is from the centre of the speaker. All of this is really rather compelling for me, especially when I'm generally playing with headphones on.
So there are a lot of parameters there, and you can control them all through the unit, but it's quite hard to get a big picture of what's going on in a patch. There's proprietary software called Boss Tone Studio that works on Mac and PC. I run Linux, and it does actually work under Wine, except for the bit that actually sends the data to the unit. Almost works. So my current only option is to spin up a VM and run it through that with some USB pass-through, and it works fine. But it's not ideal; I would have quite liked some native tools to manage it myself. But here we can see the pipeline. I can drag and drop those modules to change the order, and I can tweak the parameters. It also has a native patch exchange file format called TSL, Tone Studio Library, which is essentially a JSON file with a collection of patches. A patch itself is pretty much just a memory dump of this GUI from what I can tell, which maps almost exactly to values in the unit with a bit of faffing around. So how does it talk to the unit? We have this standard called MIDI. It's an old standard, but it's hanging in there. Published in 1983, it's serial communication between musical instruments, and it really changed the landscape at the time. That's a MIDI cable there, but these days it's all happening over USB; it's still 31 kilobits per second, though. It's a byte-based protocol, but unfortunately the data bytes only have access to seven bits of information, which, when you're trying to transmit arbitrary binary data, can be a limitation. We'll talk about that a little later. So let's have some really simple examples of MIDI messages. If my finger, that's the pointing-finger emoji there, pushed the middle C key, three bytes would be generated. In the note-on message's status byte, the 9 means it's note on and the 0 means it's MIDI channel one. Then we have the code for middle C, note 60 on the keyboard.
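As a minimal sketch of how those three bytes fit together (plain Python, no MIDI library; the helper function is my own illustration, not from the talk's code), the status byte packs the message type in the high nibble and the channel in the low nibble:

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI Note On message.

    Status byte: high nibble 0x9 means Note On, low nibble is the
    channel (0-15 on the wire, shown as 1-16 to humans).
    Note and velocity are 7-bit data bytes (0-127)."""
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

# Middle C (note 60) on channel 1 (wire value 0), key pushed all the way.
msg = note_on(0, 60, 127)
```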
Even a full-size piano only has 88 keys, so the fact that we're limited to seven bits, meaning 128 possible values, is no problem in this case. The third byte is the velocity of the key press, and we're pushing it all the way down, so the value is 127 there, the maximum you can fit into seven bits. Another example: there's a control slider for various things; setting the tremolo would be the standard use. Again we have a message: B is the command, 0 is the channel, 01 says which controller it is, and then some arbitrary value. We also have pitch bend. Pitch bend is interesting because you can't really get away with 128 possible values in a decent-sounding pitch bend. It would sound terrible, especially over a large range like an octave or two. So they use two data bytes for this, which gives a maximum of around 16,000 values, much more appropriate for this particular control. So that's standard MIDI for actually playing music. Transmitting patch data is a little more tricky: it's arbitrary data, very specific to this device, so it really needs a custom protocol. Now, MIDI has this other part of the spec called system exclusive messages, and all of the transmission and receiving of patches is done through SysEx messages that essentially say either "please give me the value of this memory range" or "here is some data, please plonk it in this memory location". That's what the Boss Tone Studio software does. So let's take the example of setting the gain to 11 on the amp and follow it all the way through. Boss publishes a spec, which is reasonably helpful, saying how the system exclusive messages are managed. A very important part is this memory map, which tells you the memory range of all the different values.
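The two-byte pitch bend trick can be sketched like this (my own illustration: the 14-bit value is split across two 7-bit data bytes, least significant byte first, giving 16,384 steps):

```python
def pitch_bend(channel, value):
    """Build a MIDI Pitch Bend message.

    value is 0..16383 (14 bits); 8192 is centre, i.e. no bend.
    The 14 bits are split across two 7-bit data bytes, LSB first."""
    assert 0 <= value <= 0x3FFF
    lsb = value & 0x7F          # low 7 bits
    msb = (value >> 7) & 0x7F   # high 7 bits
    return bytes([0xE0 | channel, lsb, msb])

def bend_value(lsb, msb):
    """Reassemble the 14-bit value from the two data bytes."""
    return lsb | (msb << 7)

msg = pitch_bend(0, 16383)  # maximum upward bend on channel 1
```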
The interesting bit starts from the user patches onwards, and if we carry on down to 60, that's the location of the temporary patch, which is where I can write things without overwriting any of the user patches. So the device has 200 built-in patches. A disproportionate number seem to be heavy metal patches, but that's just my personal bias; I don't have anything against metal. And there are another 200 user patches, which will persist if you write to those locations. But the temporary patch is where you can just twiddle the knobs and it won't be written to anything, unless you explicitly do a write to some other location. Further on in the spec we have this very large patch table, where this address is the offset within your main patch, so it's the temporary patch location plus that. And it specifies what every single value maps to. So here we have a single byte for the gain of preamp A, on a range of 0 to 120, so setting it to 11 is not going to be a problem. I generated a JSON spec file which essentially has an entry for each one of those lines, giving its memory address. That parameter key is what the Boss Tone Studio file format uses to identify it, which is subtly different from anything in the spec. Then it's got some other things, like the lookup table at the top there. All of these values have a corresponding lookup table, even if it's a straight one-to-one mapping between value and number, just because it's convenient, and it's still fast enough to load at startup time, so it's not a big deal. And finally, the value size: it's a single byte, which keeps things simple. So let's build an actual system exclusive message. This time it's a much bigger message than those little messages we saw before. Because there might be a lot of devices in the chain, we have to say what brand we are and what model we are. So we're a Boss GT-100, device zero, just in case I have a whole chain of them.
Then we say what command this is, in this case a send command, and then the address, and then the value. And yes, I know that's 11 hex, so it's not actually 11, but... yes, it's still funny. Please laugh. Finally there's a checksum, which is just a modulo-128 sum over the important bits of the packet. So I wrote some software in Python to do basic patch management: send a patch, receive a patch, manage TSL files. For sending, I tried a few approaches, but I found the best way was sending one SysEx message per parameter, and there are about 900 parameters. It goes fast enough for my needs. But receiving that way wouldn't work, because each request is a round trip: give me this, thank you, give me this. Much too slow. So I ended up batching it into chunks of 128 bytes, and then unpacking them on the client. It couldn't be more than 128 bytes, because then we'd cross over into the discontinuous space caused by the seven bits we're limited to for addressing. The address space is full of large gaps, because it's impossible to represent those memory locations when you're missing that extra bit, so there's a bit of translation to go to and from the seven-bit number and a real number. Because this is a Python talk, let's have some Python. It turns out that when I was pushing one parameter per request, not all the parameters were sticking. The problem went away when I did a one-millisecond sleep between each push. So okay, on some parameters the unit gets busy and doesn't respond, so let's just hack in our magic sleep. So here I am talking arcane serial protocols and magic sleeps and reading and writing from memory locations. It feels like real programming. But no, that's my old way of thinking. All programming is real programming, and I'm learning to unlearn my absorption of contempt culture, so I retract that statement. Anyway, on to the subject of the actual talk.
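The whole message can be sketched in a few lines, assuming the Roland-style SysEx conventions the talk describes (seven-bit address bytes, a "set data" command, and a checksum that makes the address, data, and checksum bytes sum to zero modulo 128). The model-ID bytes below are placeholders for illustration, not copied from the Boss spec:

```python
ROLAND_ID = 0x41                 # Roland/Boss manufacturer ID
DEVICE_ID = 0x00                 # "device zero", in case of a chain of units
MODEL_ID = [0x00, 0x00, 0x60]    # placeholder model ID, not from the spec
CMD_SET = 0x12                   # "here is some data, please plonk it here"

def to_7bit_address(addr, width=4):
    """Split a linear address into 7-bit bytes, most significant first.

    Because each byte carries only 7 bits, the visible address space
    has gaps: 0x7F is followed by 0x0100, not 0x80."""
    return [(addr >> (7 * i)) & 0x7F for i in reversed(range(width))]

def checksum(payload):
    """Checksum over address + data: 128 minus their sum mod 128."""
    return (128 - sum(payload) % 128) % 128

def set_message(addr, data):
    """Assemble a full SysEx 'set' packet: F0 ... body, checksum, F7."""
    body = to_7bit_address(addr) + list(data)
    return bytes([0xF0, ROLAND_ID, DEVICE_ID] + MODEL_ID
                 + [CMD_SET] + body + [checksum(body), 0xF7])

msg = set_message(0x1000, [0x11])  # set one byte; 0x11 is 17, not 11!
```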
Now that I had this general tool, I had something which could do what the GUI does but without the convenience of an actual GUI. So it's not that profound. But I started thinking: okay, there are 900 parameters, so there's an essentially unlimited number of combinations which I'm never going to fully explore just by having sessions tweaking the knobs. Is there a more efficient way I could explore the capabilities of this device? Maybe I could just generate random patches, listen to them all, and see if something interesting comes out. At the very worst, in theory, nothing comes out, and that's okay. So I decided that if I was going to do this, I needed to work out how it would work, and I eventually came up with some different kinds of mutations that were possible. The simplest one is whether a particular module is turned off or on. I'm starting out with a completely plain patch, nothing applied, a very plain sound with sensible defaults, then applying a series of mutations on top of that and seeing what the result is. Once we've done an on-off mutation, we can, for any module which is enabled, reorder it in the pipeline. I'm being careful about only making changes to things that are enabled, because what if I come up with the perfect patch and all it needs is a little bit of reverb, and I turn the reverb on and, oh my god, it's been destroyed by some previous mutation? So keep it limited to things that are enabled. There are limitations on where things can go in the pipeline, so there's a bunch of validation that needs to be applied. For example, preamp A must stay in the A path and preamp B in the B path. And there are some other constraints I want to apply, like keeping the NS1 next to preamp A. That's just a noise suppressor; it's got to go somewhere, and next to the preamp is a good place, because that's often where buzz is generated.
I don't really care about send/return, because I haven't plugged anything into it. Other than that, it's just: is it a valid pipeline? And then finally there's actually changing values in whatever's enabled. For example, FX1 and FX2 have subtypes, and each of those subtypes has its own controls. Again, I'm only twiddling the knobs of effects that are actually enabled. Okay, shall we try to make some noise? Generating the random patches is fine: I've got a tool that takes a plain patch from a TSL file and generates however many patches you tell it to. It's completely offline and doesn't need to be plugged into the unit, and at the end I have a TSL file full of patches with a specified number of mutations in them. Then how do I evaluate them? That becomes a major problem. I've gone with a couple of approaches. One I've called an audition, where I take a test file of guitar playing, load a patch, play the file, record the result, save the result to its own file, and then go through the entire list. That means I don't have to be there listening to every single thing, and I can come back later and go through the results. The other approach is an interactive session where you apply small mutations and hear the result. If it's good, you keep it. If it's really good, you save it to a TSL file. But if it's not good, you can back it out, all the way back if need be, and then reapply. So it's fewer mutations, more chance of getting something reasonable, possibly less chance of getting something amazing. I forgot to mention: while I'm learning the guitar, I'm still actually terrible. But I needed to come up with the perfect test sample. It needed to have everything: single notes, intervals, triplets, arpeggios, chords, power chords, open chords. And it needed to be short. So this is what I came up with.
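The mutation kinds described above can be sketched as follows. This is a toy illustration: the patch model (module names, parameter ranges) is invented here, whereas the real tool reads them from the generated JSON spec file, and the reorder mutation with its pipeline validation is omitted for brevity:

```python
import random

def mutate_enable(patch, rng):
    """Flip one module on or off."""
    rng.choice(list(patch.values()))["on"] ^= True

def mutate_value(patch, rng):
    """Twiddle one knob, but only on modules that are enabled."""
    enabled = [mod for mod in patch.values() if mod["on"]]
    if enabled:
        mod = rng.choice(enabled)
        knobs = [k for k in mod if k != "on"]
        mod[rng.choice(knobs)] = rng.randint(0, 120)

def mutate(patch, n, weights=(1, 3), seed=None):
    """Apply n random mutations, with a relative weighting of kinds."""
    rng = random.Random(seed)
    ops = [mutate_enable, mutate_value]
    for _ in range(n):
        rng.choices(ops, weights=weights)[0](patch, rng)

# Start from a plain patch with sensible defaults and pile mutations on.
patch = {
    "compressor": {"on": False, "sustain": 50},
    "preamp_a":   {"on": True,  "gain": 0},
    "reverb":     {"on": False, "time": 30},
}
mutate(patch, 10, seed=1)
```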
It's still 17 seconds long, which I think is too long. That was with an actual pleasant effect on it. And this is what it sounds like dry, straight off the guitar into the mixing desk, no effects whatsoever. It's a different take as well, so it sounds quite different; the distortion is on the actual recording. That's going to keep playing; let's shut that up. So I generated a bunch of patches and loaded them into Tone Studio, just to show what it looks like. Down the left here are the 100 patches. If I hover over one, I've populated the note field with a summary of what the mutations are, so you can see what's happened: FX2 has been turned on and set to a rotary effect, the order's been mucked about with, the compressor's on as well, FX1 is a phaser. Let's just see what happens. Interesting. Let's see what's actually happening in that patch. Here we've got not a lot enabled: a compressor, EQ, and FX2 as a rotary effect. Now, this is interesting. The rotary is a classic old-school effect: a speaker mounted on a motor that spins around with a microphone pointing at it. When the speaker goes past the microphone, it's louder, so there's a tremolo effect as the speaker goes around, and probably a Doppler shift as it sweeps past. So you get this nice tremolo-vibrato plus the acoustic characteristics of the speaker and the mic and whatever box it's in. So it's cool to have a simulation of that. What else is going on here? When I disabled the EQ, the sound was a lot fuller, so the EQ is actually taking away some of the bottom end and some of the top end. It's actually quite nice, kind of a tinny retro sound. It's a good example of how often it's the things you take away from the sound that make it interesting, rather than the things you add. Okay, I'll skip 4A because the screenshot's not that great. Oh, and it sounds terrible. It sounds bad; that's why I called it bad. I'll play it. It's just too much.
It's like eighties metal, plus extra stuff. I don't know what's going on. Okay, let's try this one. What's going on here? Does it sound like a pipe organ? So it's all happening at the end of the chain here: FX2 is a pitch shifter, and it's shifting by 24 semitones, so that's two octaves higher. That's why it's so squeaky. Then there's a delay, and then there's a chorus, which is just going to muddy things up a bit, and then a reverb. The thing about the pitch shifter is that it models a note, processes it, and does what you tell it to. The key words there being "a note": as soon as you start playing two notes, or chords, things start getting weird, so you just don't do that. It sort of holds together for the first bit, but as the chords come in, it just falls apart. The settings on the chorus are probably giving it that sort of tinny tone as well. And weirdly, it's put something in the accel FX: there's an extra switch which you can toggle to do things when you push it, and in this patch it's assigned to a ring modulator. The ring modulator has a carrier tone, in this case 70 hertz, which it multiplies with the original sound, and the result is completely atonal, basically never musical. You don't actually hear it in that clip, but if I had it playing and pushed the switch, things would get very interesting, as if they weren't interesting enough already. So anyway, that's a second example: one reasonably good patch and one just horrendous patch. With a high mutation rate, and that file was generated with 40 mutations per patch, a good proportion of the patches are silent. But I could weed a lot of those out just by checking how loud the result is during processing, before saving it out; if it's not playing anything, I can just skip it. Oh no, we're not done yet. Here's an interactive session with the mutate command.
I wish that would go away. Here is the tool itself. You've got the send and receive commands; the rand command, which generated that file with 100 patches; and the audition command, which we've just been listening to the results of. There's a sort command, which takes a TSL file as input and a bunch of TSL files as output, loads each patch, and then prompts you to say whether it's a reject, an interesting one, or a listen-to-later one. So now we're going to look at the mutate command. It started out with some random patch, so we'll use the send command to set it to the neutral one. Okay, I'll just quickly... See here, I can choose how many mutations it will apply per step. This one's a lot lower than the other one, because we're doing small mutations which we can back out if they don't go anywhere. And there are some ways of saying what kind of mutations you would rather have, so we have a relative weighting between enable, reorder, and value mutations. And finally, we can limit it to specific effects. What I'll probably end up doing is having a mutate session where I'm only changing FX1 and FX2, or only changing the preamp and the speaker and the mic setup, so I can explore just those areas. But here we're just doing the whole lot. Dude. So you can see the patch name; I'm generating a random patch name. The challenge was that I'd like it to be readable, it has to fit in two blocks of eight characters, and ideally it doesn't contain any accidental swears or insult any deities. So I came up with this consonant-vowel, consonant-vowel, consonant-vowel setup, and it seems to work pretty well. Okay, this goes on for a while; let's just skip ahead. We're scrolling up here to see that stuff at the top, which shows what the mutations actually were. And here we've got an ASCII-art representation of the pipeline, just to give you an idea of what the reorder mutations have done.
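The name generator can be sketched in a few lines. This is my own illustration of the consonant-vowel scheme; the letter sets are assumptions, and note that alternating pairs makes names pronounceable but does not strictly guarantee no rude words, hence the speaker's "ideally":

```python
import random

CONSONANTS = "bdfgklmnprstvz"
VOWELS = "aeiou"

def patch_name(rng=random):
    """Generate a readable name like 'bakodu': three consonant-vowel
    pairs, so it fits comfortably in the unit's 8-character blocks."""
    return "".join(rng.choice(CONSONANTS) + rng.choice(VOWELS)
                   for _ in range(3))

name = patch_name()
```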
You can see that a lot of the time the mutations actually result in the volume going down. And you can do this indefinitely. Next steps: you might have seen there were some TODO messages, like a TODO for mutate assign. There's actually another thing this unit can do. There's an expression pedal, which is generally used for volume when an effect is disabled, or for wah, the classic guitar wah effect. But you can actually assign that pedal to any number of parameters in a patch, up to eight. What if the choice of those assignments was actually part of the mutations? This is where things get really interesting, because a good proportion of the parameters really were not designed to be modulated with a pedal, but maybe some of them will still make an interesting sound. This is a little more effort, because with mutate, fine, I can just sit there and push the pedal as I'm playing. But with audition, I want to be sending some MIDI controller values to simulate, say, a triangle wave of the pedal going up and down. I think there's potential for creating a whole different element of new sounds. The other thing I'm trying to work out is this: I could easily generate a thousand patches, and maybe there's a fantastic one in there, but I really don't want to sit through every one to find it. How can I do a better job of categorising all these patches? It gets really fatiguing; it's quite awful. I would pay someone to listen to these and tell me what was good. So hang on: what if I actually paid someone to listen to these? One option is to go to a Mechanical Turk-style platform, upload all these samples, and prompt people with some questions: do you find this interesting? Do you think this sounds good? There are some really interesting possibilities there. Sorry, I'm jumping ahead: do people know what Mechanical Turking is? Yeah, I'll tell you anyway.
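Simulating the pedal sweep could look something like this: a triangle wave of 7-bit values wrapped in Control Change messages. This is a hypothetical sketch, not the talk's code; controller number 7 (volume) is just a stand-in for whatever the pedal is assigned to:

```python
def triangle(steps):
    """One up-down pedal sweep: 7-bit values 0 up to 127 and back."""
    half = steps // 2
    up = [round(127 * i / half) for i in range(half + 1)]
    return up + up[-2::-1]

def cc_messages(channel, controller, values):
    """Wrap each value in a MIDI Control Change message (status 0xBn)."""
    return [bytes([0xB0 | channel, controller, v]) for v in values]

# A 10-step sweep on channel 1, sent to controller 7 as an example.
sweep = cc_messages(0, 7, triangle(10))
```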
It's using an online platform to set up micro-tasks that people actually get paid for. Cynical me says these are generally either spam filtering or spam generation, but I'm sure there's real work going on there somewhere. Now, there are going to be cultural biases about what people think sounds good or not, and wouldn't it be interesting to capture that and go, oh, okay, there's a group here with some very interesting ideas I wouldn't hear otherwise. And then, once I've got a decent corpus of responses, maybe that can be used as a training data set for some machine learning. Maybe the machine learning couldn't tell you what's good, but it could tell you what's boring, and that would filter out a whole bunch of stuff you wouldn't have to listen to yourself. So, the Python libraries used to implement this. python-rtmidi does the low-level communication with MIDI devices. Sitting on top of that is a library called Mido, which is really nice if you're going to do any MIDI programming in Python: it has a really nice object model of MIDI messages, and also a model for ports to read and write from, blocking or async. It's quite lovely. For the audition command, I needed to load a WAV file, play it while recording simultaneously, and then save the result. The sounddevice library had a nice, simple API for playing and recording, sitting on top of PortAudio. I say it's nice; it does tend to crash after about 50 patches, so I lied earlier. There weren't 100 patches in the result, there were only about 54, because it core dumped. Let's say it's not sounddevice's fault; let's say it's PortAudio's fault. sounddevice and the WAV-file reader both produce NumPy arrays, but in slightly different layouts, so I just used NumPy to transpose as I switched from one to the other. cliff is a nice wee tool that came out of one of the very many libraries Doug Hellmann has written.
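The transpose step is trivial but easy to get wrong, so here is an illustrative sketch. The assumption here, which side uses which layout is mine, is that one library hands audio back as (channels, frames) while the other expects (frames, channels):

```python
import numpy as np

def to_frames_first(audio):
    """Convert a (channels, frames) array to (frames, channels).

    ascontiguousarray forces a real copy in memory order, since a bare
    .T is only a view and some audio APIs want contiguous buffers."""
    return np.ascontiguousarray(audio.T)

# One second of silent 16-bit stereo at 48 kHz, channels-first.
stereo = np.zeros((2, 48000), dtype=np.int16)
playable = to_frames_first(stereo)
```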
It's a way of quickly putting together command-line interfaces, and it gives you an entry-point mechanism where you can register a sub-command just by setting an entry point in any arbitrary Python project. This means I can have my main patch management tool, called Bogd, but I don't have to put the random-generation commands in the same tool, because that's a general-purpose tool and the random commands are a relatively specialised thing. I have those in a separate repo, but when I run it, all the commands appear under a single main command, which is quite nice. Finally, I wanted a prompt-driven user interface that asks a question and takes an answer, which might be multiple choice or might be text, and of the many options I found that Enquirer met my needs. So it's quite nice. The code is there, so really not that much Python at all to get all that done, and I was quite happy with how it ended up. The spec generation code you're never going to see; don't ask about it, it's horrible. But the rest is all up on GitHub. If we go back to my original criteria for my new hobby, though, it's just a complete fail. Things kind of escalated after I put in a submission for this talk, so all I can say is: you ruined my hobby, Tommy Richards, you ruined my... Anyway, that's it. Thank you. So yeah, any questions? Uh-oh. "What did you need to solder?" Ah, just cables, because actually getting a loop from the laptop, playing sound through the device, and, with my very limited mixing desk, back into the laptop was slightly convoluted. So, cables. That's okay; it only feels like cheating. "I'm wondering if you've played with any, like, genetic algorithms or anything like that to try to..." Yeah, I thought about it, because I have done a little bit with genetic algorithms. Actually, about ten years ago at LCA I did a lightning talk on label placement with genetic algorithms, and my conclusion was that it's a terrible solution to basically anything.
But I do have hope for machine learning and a lot of the new frameworks. If someone comes up to me and says, use this framework, do this stuff; if you know how to take some training data and a bunch of samples you want to categorise, please hit me up after the talk in the hallway, because some guidance would be useful. All right, thank you very much. Steve, are you ready? Thank you.