Sure thing, I'll just start my own timer here. Cool, thank you for the introduction. My name is Anthony Joseph, and today I'm hoping to share some of my insights and what I've learned so far about machine learning, particularly on IoT devices. I'm presenting to you, as you can imagine, remotely from Sydney, Australia, on the ancestral lands of the Gadigal people of the Eora Nation. I'd like to share a little piece of contemporary art. This is Lionel Rose, Australia's first Indigenous boxer to win a world title, the bantamweight title I believe, and I thought it was a pretty cool thing to share to open up my talk. So let me set the scene a little about what motivated me to study this particular area. It's mainly because exercise is hard, and people can get injured if they don't follow the correct technique when they're exercising. So my question, as any geek would ask, was: can we use wearable tech and machine learning to give feedback to athletes? Because let's face it, during the COVID-19 pandemic and the lockdowns, the only real exercise I was doing was playing Final Fantasy VII. More specifically: what is the minimum set of devices I can use to detect a boxer successfully blocking with one hand and punching with the other? That was the goal I set for myself. Just to open with a bit of a disclaimer: I am far from a lawyer, I am definitely not a health professional, and my colleagues would say I'm barely a developer. So talk to the people who know what they're doing and are experts in their relevant fields, because they have a world of advice that can help you on your journey. So I've talked a bit about what inspired me. What could a potential solution look like? In this case here, we have a boxer who is punching across.
There's a sensor on the boxing glove that sends a signal to a central processing unit saying, hey, this right hand is punching straight. The left hand is guarding, and that hand's sensor sends a signal to the central device saying this left hand is guarding. The central device says, well, one hand is guarding while the other hand is punching, so we've got good technique, and we'd like to give some feedback back to the boxer to say, yes, you successfully guarded. To take a step back: why did I pick machine learning? Let's take the example of measuring typical activities. Say we want to measure walking: if the speed is less than four kilometres an hour, we'll say it's walking. Now add running: if the speed is less than four kilometres an hour we're walking, otherwise we're running. It gets a bit more complex when we want cycling as well: if the speed is less than four we're walking, if it's less than eight kilometres an hour we're running, otherwise we're cycling. But the wheels fall off the cart when we introduce some other sport like golf. Traditional programming, as most of the audience here will appreciate, takes a set of input data and a set of rules, processes them, and gets some output, like an answer. Machine learning turns this on its head: we say, here's the input data and here are the results we're after; find a set of rules that produces this expected result. So instead of writing custom if-then-else rules, we're just saying this stream of data corresponds to walking, this stream represents running, this one cycling, this one golfing. We get a lot more flexibility that way. So machine learning manages some of that complexity rather well for us.
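To make that concrete, here's a minimal sketch of the rule-based approach just described; the speed thresholds are the ones from the talk, and the labelled-examples list is purely illustrative of how machine learning reframes the problem:

```python
def classify_activity(speed_kmh: float) -> str:
    """Hard-coded rules: fine for walking/running/cycling, hopeless for golf."""
    if speed_kmh < 4:
        return "walking"
    elif speed_kmh < 8:
        return "running"
    return "cycling"

# Machine learning flips this around: instead of writing the rules, we hand
# over (input data, expected label) pairs and let training find the rules.
labelled_examples = [
    ([0.1, 0.3, 9.8], "walking"),   # a stream of sensor data -> label
    ([1.2, 2.5, 9.6], "running"),
    ([0.4, 0.2, 9.9], "golfing"),   # no simple speed threshold exists here
]
```

The point is that golf simply has no sensible slot in the if-else chain, whereas in the data-plus-labels framing it's just one more label.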
Next, we've got to find a way to measure the environment we're working in. Fundamentally, all these electronic sensors are converting some form of energy into data. The one I'll be using most today is an accelerometer, which converts kinetic energy into data. Heart rate monitors and cameras convert light energy into data, and conductivity is converted into data by a moisture sensor. One of my favourite examples of this is Nintendo Switch Sports. I love my video games. As you can imagine, this person is using their controller and its motion is being converted into moves that the characters make in the game. My personal favourite is the bowling one, where you do a particular manoeuvre and it simulates a bowling movement. This is actually me playing, and I got a pretty awesome strike, so I wanted an excuse to share it in a public forum. So what hardware did I use? I looked through a selection of hardware devices and settled on these two. First, the Seeed Studio Wio Terminal. This one has Bluetooth Low Energy 5 and Wi-Fi, a lot of onboard sensors, and input/output devices like an SD card slot and buttons, so we can record and display data. You can also extend it via peripherals; I've got my little one here. However, it is large. That's my hand and that's the device, so there is not much physical space left to even wear my glove, and because it's got an LCD display it has a higher relative power consumption. On the other hand, we have the Arduino Nano 33 BLE Sense. That one only has Bluetooth Low Energy 5, and it doesn't have any user interface on board outside of a reset button, but it does have a lot of onboard sensors, it's very small, and it has very low power consumption. I've actually got one on my other hand here; it's got a certain Iron Man-esque vibe to it.
Because we only had the bare sensor board, I needed to do a little bit of hardware development. In this case I had the Nano 33 BLE Sense with a regular power supply, powered by a battery. But I ran into the problem of finding a way to securely attach the Nano 33 BLE Sense to my boxing wraps, and I tried a few different iterations. This is paper tape, the kind medical professionals typically use to attach things like needles to your body; I found it would come off with sweat. I tried other iterations like bobby pins, but metal bobby pins conduct electricity, which is a bad fit near electronic devices, and while the tape worked okay, it became a little expensive to keep tearing it off after each session. In the end I settled on these tubular support bandages, which are normally used for mild sprains. They kept the sensors in place relatively well, and they can just be taken off and washed in a regular wash cycle. So it worked out to be a pretty effective, low-cost solution for keeping the sensor in a consistent position every time I went to train. This was the experimental setup I ended up taking to my boxing class to try to keep track of my workouts. So what did the software system look like? We start with the Nano 33 BLE Sense, which has an onboard motion sensor with nine degrees of freedom. It sends motion data through to TensorFlow Lite, which is running on the device. Bluetooth Low Energy is then used to transfer data between the board and the Wio Terminal, and it can also provide other outputs through the I/O pins, such as vibrational haptic feedback and a NeoPixel ring. This whole notion of providing feedback became a point of controversy, and I've set aside some time to talk about that, so watch this space.
In terms of data transfer, we ended up just sending simple data types over Bluetooth Low Energy, through a simple little interface. It worked out to be pretty effective for transmitting a relatively small amount of data across a little virtual network. Now, this is a Python conference, so I should probably spend a bit more time talking about the modelling. I took two separate approaches to the machine learning models. One was using a relatively new startup, Edge Impulse, which offers machine learning as a service. They have some pretty cool toys to play with in terms of machine learning, so if you're interested in this, I highly recommend checking them out. For this I used their continuous motion analysis tooling to do spectral analysis feeding a neural network classifier across the datasets I was collecting. And because I had circumstances where I'd have data that I might not be able to identify, I used their K-means anomaly detection to pick that up; I'll explain some of that later on. Here are the main movement types I had to consider. We have, obviously, the block, which is defending yourself against an opponent's attacks. We have the body shot, like the liver shot, very, very devastating, plus the hook, the jab, and the uppercut. And if you're not throwing any of those punches, you should be blocking at all times. The resulting model was pretty darn sophisticated, and we got some pretty cool results from it. As you can see from this confusion matrix, we were able to correlate what we expected with what the model was returning. But we hit one particular problem: I could have my hands hanging like this, or I could be doing a proper guard, and my model could not tell the difference. That caused problems, because I could not differentiate between those two circumstances. So I needed to add an extra model.
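As a rough illustration of sending simple data types over Bluetooth Low Energy, here's a sketch of packing a classification result into a few bytes; the payload layout and the class numbering here are hypothetical, not the actual characteristic format used in the project:

```python
import struct

# Hypothetical payload layout for a BLE characteristic: one byte for the
# hand (0 = left, 1 = right), one byte for the detected class, and a
# 32-bit float for the model's confidence -- little-endian, 6 bytes total.
PUNCH_CLASSES = {0: "block", 1: "body shot", 2: "hook", 3: "jab", 4: "uppercut"}

def encode_result(hand: int, punch_class: int, confidence: float) -> bytes:
    """Pack a classification result into a compact BLE payload."""
    return struct.pack("<BBf", hand, punch_class, confidence)

def decode_result(payload: bytes):
    """Unpack the payload back into (hand, class name, confidence)."""
    hand, punch_class, confidence = struct.unpack("<BBf", payload)
    return hand, PUNCH_CLASSES[punch_class], confidence
```

Keeping the payload to a handful of fixed-width fields like this is what makes BLE transfer of classification results cheap.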
So what happens is we pass our data through the neural network. If we detected a block, we put it through the K-means anomaly detection model; if not, we put it through the TensorFlow Lite classifier, and then we show the result. In other words, if we didn't detect a block, we pass the data to the TensorFlow Lite model, which gives us the body shot, hook, jab and so on classification. And if we did detect a block, we ask whether it's a good block or a bad block, and then show the result. That method worked pretty well. There is a really cool book called TinyML, and I highly recommend it if you have some experience in machine learning, because it takes you from that base knowledge through to the embedded world. It describes the alternate way of doing this. You have the accelerometer handler, which basically sanitises the data it gets from the raw sensor; it then rasterises the stroke, and I'll show you some examples of what that looks like. It does two things: it sends the rasterised data into the TensorFlow Lite model, and it can also print the gesture you performed. For the machine learning experts, I've included the model and what it looks like here; I'll find a way to upload my slide deck for future reference. Similarly, this is the converted code of the Keras model. In practice, we would wear the Nano 33 BLE Sense on the hand, connect it via Bluetooth to a laptop, and use the Magic Wand PC data collector app, which takes all that processed information and collects it for us to feed into our training algorithms. And here is what the actual rasterised versions of the different punch styles look like.
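The two-stage flow just described could be sketched like this; the three model arguments are hypothetical stand-ins for the real TensorFlow Lite classifier and the K-means anomaly detection, included only to show the branching logic:

```python
def classify_window(window, nn_classifier, punch_model, block_checker):
    """Two-stage cascade: a first-stage classifier decides block vs punch,
    then a second model refines the answer."""
    label = nn_classifier(window)        # first-stage neural network
    if label == "block":
        # Second stage: anomaly detection decides whether this is a proper
        # guard or just hands held in a block-like position.
        return "good block" if block_checker(window) else "bad block"
    # Not a block: the punch classifier names it (body shot, hook, jab, ...)
    return punch_model(window)
```

With stub models plugged in, `classify_window(window, nn, punch, check)` returns either a punch label or a good/bad block verdict, which is exactly the output shown to the boxer.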
As you can see, these actually look like proper representations of the punches we would expect, and they look rather different from each other, so we can expect the classifier to tell them apart. Having the laptop in the loop worked, but it would be even better to send the data straight to the Wio Terminal, which is what I ended up doing later on. Then you just take the SD card out of the Wio Terminal, put it in your laptop, and do any further model training from there. I thought I'd do a side-by-side comparison of the Edge Impulse and Magic Wand approaches. Edge Impulse only uses three degrees of freedom, the X, Y and Z acceleration, whilst Magic Wand uses six degrees of freedom and creates a 2D raster. Because Edge Impulse does continuous motion analysis, if you're recording a five-second window you have to keep punching for five seconds, and it will manage to sample the individual punches out of that window. With Magic Wand, you just record one particular punch. The Magic Wand algorithm also does some processing on the raw accelerometer data from the sensor, so it has certain advantages, like compensation for sensor drift that the raw sensor data doesn't give you. But I think the most interesting finding was that, from a practical perspective, there was very little difference between the two models. It ultimately comes down to which workflow you prefer. If you want a fully open source, fully self-managed pipeline, you can go down the Magic Wand path; if you want the simplicity and higher-level tooling of Edge Impulse, that works too. They are almost interchangeable, really. Now, I've actually done a little more study beyond the technical bits.
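To give a feel for the rasterisation step mentioned above, here's a simplified sketch of projecting a stroke onto a 2D grid; it's a toy version in the spirit of the Magic Wand example, not its actual code:

```python
def rasterize_stroke(points, grid_size=32):
    """Project a list of (x, y) stroke points onto a grid_size x grid_size
    bitmap -- a simplified take on the rasterisation idea."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    min_x, min_y = min(xs), min(ys)
    span_x = (max(xs) - min_x) or 1.0   # avoid division by zero
    span_y = (max(ys) - min_y) or 1.0
    grid = [[0] * grid_size for _ in range(grid_size)]
    for x, y in points:
        # Normalise each point into grid coordinates and mark the cell.
        col = int((x - min_x) / span_x * (grid_size - 1))
        row = int((y - min_y) / span_y * (grid_size - 1))
        grid[row][col] = 1
    return grid
```

Once the stroke is a fixed-size bitmap, different punch styles become visually distinct images, which is what makes them easy to classify with a small model.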
Now I want to discuss a little more of what I've learned so far. I did all the training on myself, and I'm a pretty bad boxer, as you can probably imagine. There are boxers who are much more skilled than I am. One of my trainers, Gabby, has incredibly fast hands, and I would wager that if I put the sensors on her, the model probably wouldn't detect her punches, because I punch a lot slower than she does. So as part of considering the diversity of your models, you might need a more diverse range of users providing data. I also use what's called the orthodox stance, which is your left hand forward and your right hand, the dominant hand, held back. A southpaw stance, for left-handers, is essentially the reverse. I haven't really tried to see whether the sensors would work for that, and I'd bet money they probably wouldn't, so you would need to train the models with both right-handed and left-handed boxers. And I don't know if there are any boxing fans in the audience, but I use a very basic guard. Anyone who's seen the likes of Floyd Mayweather box will know they tend to guard like this, the Philly shell, with a lower guard. Those are just as valid as the basic guard, but my models obviously only consider the basic guard. So my model would tell you that you're not guarding correctly when you're simply using a different style of guard. It's yet another example of how much my models haven't considered; something to keep in mind for future training. Now, this next part is something I learned relatively recently: medical devices. I promise there's a good point behind this. Medical devices in Australia are regulated by the Therapeutic Goods Administration.
In the US it's the FDA, the Food and Drug Administration, and I'm pretty sure the EU and the UK have their own agencies responsible for medical devices; please don't quote me on that, someone a lot smarter than me can tell you which agency is responsible where. Essentially, if you are creating a piece of technology that makes a particular therapeutic claim, for instance if I claim that my device is going to prevent you from getting injured because it makes you follow the right technique, then your device is considered a medical device. And this is what happened during the COVID-19 pandemic: this BioCharger was claimed to help improve your performance and optimise your health. The person who made those claims did not have the medical evidence to back them up and was subsequently fined $25,000 Australian dollars over alleged breaches of the Therapeutic Goods Act. So if you do want to go down the entrepreneurial path with these kinds of things, keep in mind that if you are making a claim, you have to ensure you have sufficient evidence to back that claim up. Another case study I'd like to mention is the Owlet Smart Sock baby monitor. If you want to go down the entrepreneurial path, Google the Owlet BMC case study; my mentor Eddie showed me this and it changed my life, so thank you, Eddie. Basically, this device was a little monitor in a sock, which you'd put on your baby's foot, and it would measure heart rate and oxygen levels to ostensibly give new parents peace of mind. At least under Australian medical device legislation, if your device creates some sort of model of the human body and is intended to monitor or diagnose disease or analyse some physiological process, it will be considered a medical device in the eyes of the regulator.
And in particular, these kinds of devices are considered Class IIa medical devices, so you will need to be put on a register and have certification to prove you meet that particular threshold. Unfortunately, what happened pretty recently was that Owlet was stopped from selling their product by the FDA because of the claims they made; the FDA stepped in and said, no, you need to have evidence to back this up. So once again, if you do decide to go down the entrepreneurial path for this, keep in the back of your mind that if you start to measure aspects of the human body, you may fall under medical device legislation. I hope I haven't turned anyone off too much from pursuing this; it's something I found out relatively recently and wanted to share with an audience. Ultimately, I had a problem, which was: can I measure myself doing a workout? And yes, I ended up doing it. I found that this simple combination of two models worked pretty well. Pete Warden, one of the authors of the TinyML book, has a concept I really loved: we have our motion sensors and our heart rate sensors, but when we feed the results of those sensors into a TensorFlow model, we can abstract that whole model away and treat it as yet another sensor in our system architecture. I thought that was a pretty cool statement, a nice way to think about this in terms of your overall architecture. Ultimately, I would love to keep expanding this in my spare time. It would be great to keep improving the model with feedback from a trainer, so that if you're in a training session punching away, your trainer can say, yep, that looks good, that doesn't look good.
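Pete Warden's model-as-a-sensor idea could be sketched like this; the class and both of its hooks are hypothetical, included only to show the abstraction:

```python
class GestureSensor:
    """Wraps a raw motion source plus a trained model so the rest of the
    system can 'read' gestures the same way it reads any other sensor.
    Both constructor arguments are hypothetical hooks for illustration."""

    def __init__(self, read_motion_window, model):
        self._read = read_motion_window   # returns one window of samples
        self._model = model               # maps a window to a gesture label

    def read(self) -> str:
        """Read like a sensor: one call, one gesture label out."""
        return self._model(self._read())
```

For example, `GestureSensor(lambda: [0.0] * 128, lambda w: "jab").read()` behaves exactly like polling a sensor that happens to emit gesture names instead of raw numbers, which is the architectural point being made.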
Then you could go away and keep practising by yourself and still get that feedback from the pre-trained model. We've done simple things like the block; I'd like to do other more complex manoeuvres like a slip, which is where you avoid a jab (I've got to stop punching the screen), and even combos, like a jab, cross, body shot. I'm not going to show a weave because I'd probably go off the screen. And while it's good that the system can say, hey, you've thrown the punch, it would be nice to see, and we could probably get this from the accelerometer data, whether you connected with the intended target or not. That could theoretically lead to scoring matches. And what would a boxing talk be without a reference to Terry Crews? I've only really used an accelerometer and a gyroscope for this exercise. There are also muscle sensors, which detect the electrical activity of certain muscles, and we could incorporate those into our models as well, and keep extending this to other sports and sectors. I think I've pretty much covered my time, so a huge thank you. I'm standing on the shoulders of giants, as the saying goes. I've learned quite a bit from Dan and Pete and Professor Reddi from Harvard, and the Edge Impulse, TensorFlow, Arduino, Seeed and HarvardX communities have all helped me along my journey, not to mention my trainers at Virgin Active in Sydney who have led me along the boxing journey. So thank you everyone for attending this talk, and thanks again for the opportunity to share what's been a pretty fun journey. I'm happy to answer any questions anyone might have, but I might need a microphone or something like that. Leslie, can you hear me? Yes, I can hear you just fine. Thanks for the talk.
I was just wondering, is the code that you used open source, the Arduino code and your machine learning code? Yes, it should be on my GitHub, and I'll take an action to put links on the talk page, if not actually in the slide deck, and I'll also put them on the conference page. So I'll make sure I do that. Great, thank you. And the other question: did you consider using a second Arduino for the other hand? Do you think that would improve your model at all, in terms of if you have a left-handed person, perhaps, or something like that? Oh yeah, I would normally have two sensors here, but I only had one sock for my hand today, so I didn't get a chance to put the second one on. Typically I would have a sensor on both hands plus an extra one nearby; it could be on my back or just on the ground nearby. Bluetooth Low Energy has a range of metres, really, so it doesn't have to be too close to the body. But yes, the idea is you'd have two separate sensors; sorry, I just didn't get a chance to put the second one on this afternoon. That's great, thank you very much. Hello, thanks for the talk. I have just one question: what is the size of those programs in the end? I mean, you have pretty big models, and you have to squeeze them into a couple of kilobytes, I guess. That's an excellent question, and that's the magic of TensorFlow Lite Micro; let me just find that slide here. TensorFlow Lite Micro is, I guess, the superhero in all of this. Whilst I have this nice model here, on the device it's literally just taking eight-bit integers in and outputting eight-bit integers. TensorFlow Lite Micro does a lot of heavy lifting in terms of minimising the model down to the absolute bare necessities that will run on these devices. And consider as well that these devices do not have the same computational power as my iPhone, let alone a full desktop. So we are only running models here.
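Since the answer touched on squeezing models down to eight-bit integers, here's a minimal sketch of the affine quantisation arithmetic that int8 models rely on; the scale and zero-point values are illustrative, and this is arithmetic only, not the TensorFlow Lite converter itself:

```python
def quantize(x: float, scale: float, zero_point: int) -> int:
    """Affine int8 quantisation: real_value = scale * (q - zero_point)."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))   # clamp into the int8 range

def dequantize(q: int, scale: float, zero_point: int) -> float:
    """Recover an approximate real value from its int8 representation."""
    return scale * (q - zero_point)
```

Storing each weight as one byte instead of a four-byte float (plus one scale and zero-point per tensor) is a large part of how a model shrinks to fit a microcontroller.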
We're not doing any model training on these devices, and yet we can still get pretty good latency by stripping things down to the absolute bare necessities and only using what we need to process the data. TensorFlow Lite Micro is responsible for all of that. I can't remember the exact numbers for the size, but it's what compresses the model down quite a bit. Do you have any numbers for latency? Latency was down to around one millisecond. All right, that's great. Practically speaking, you would hardly see any lag; you'd be running your models on your device, and you would probably have worse latency with network comms than with running the models locally. Sure, I wouldn't notice anything up to a hundred milliseconds. Thank you. Yeah, and one of the other applications of TinyML and TensorFlow Lite Micro is wake words, you know, your hey Siri kind of thing. I wonder how many devices I just woke up around the world by saying that. That's obviously running locally on the device, which then wakes the device up to listen for whatever input you want to give your smart device. So that's one of the things to keep in mind. Okay, I think that's it for in-person questions. Oh, I'm sorry, I don't know if there's a way I can easily find the virtual questions, but I will leave my slide deck up here; it has my LinkedIn and my email address, so feel free to send me any questions you may have, and if I can't answer them, I'll try to make an introduction to someone who knows what they're talking about. Well, thank you so much for your talk. Thank you very much for having me, and enjoy the rest of the conference, everyone.