Eye on NPI! Eye on NPI, brought to you by Digi-Key. Thanks, Digi-Key. Analog Devices is what the Eye on NPI is this week, and NPI stands for new product introductions. Ladyada, what is the Eye on NPI this week? I'm glad you asked. It's a two-parter. It's the MAXREFDES178# camera, which is a long part number, but really it's a dev board for the MAX78000, a Cortex-M4 processor with a machine learning convolutional neural network built in. And this is a cool all-in-one camera and audio dev board kit. It's got an image sensor camera, an OV series part; it's got flash LEDs, a bunch of buttons, a headset audio connector for input and output, a microSD card slot, and a built-in microphone. On the other side, it's got a capacitive touch pad, a power button, and a USB Type-C connector that can be used just as USB or for programming it with the SWD dongle, which comes with it. And what this is is a development board for video-on-the-edge processing. Often, when you're doing video or image processing, you're using something like a Raspberry Pi or another single-board computer if you're not using a desktop. But that uses a lot of power. What if there was a processor that had a built-in AI core that made processing audio and video really, really fast? That's what this is. It's a chip, the MAX78000, a special microcontroller from Maxim, now Analog Devices. It's a Cortex-M4, but it's also got a RISC-V co-processor, tons of GPIO, tons of memory, and a neural network accelerator. So what would normally be a separate desktop GPU (they call them CUDA graphics cards) or a Coral stick or something is built right into the processor. It can run inference as fast as a single-board Linux computer, but without the power draw, and of course with instant startup: it can run off a battery and go into a low-power sleep mode.
Also, as you'd expect from a microcontroller: lower cost, smaller size, and instant boot-up time without the complexity of a full Linux setup. This is the description of the chip. On the left, it's a powerful chip: a Cortex-M4 with an FPU, running at 100 megahertz, with half a megabyte of flash and 128K of RAM. There's also a ton of RAM in the CNN accelerator, and that's because you can actually load the model and all of your data into RAM. So it loads instantaneously, and you don't have to worry about caching in flash or reading from external memory, like a QSPI flash, because it's all in RAM; you can execute from RAM or update it from RAM within one instruction cycle. You get really, really fast updates. You can see almost a megabyte of data memory just for your image and model. There are also all the peripherals you expect: I2S, PWM, I2C, SPI, UART. It's a full Cortex-M4; it just has this cool accelerator bolted onto the side. So here's more detail about the convolutional neural network. Basically, when you're dealing with machine learning models, there's a lot of mathematics you have to do. You have to multiply these tensors together and convolve them to get data out, which then goes through multiple layers. Every layer adds complexity and adds more time, but allows you to do more complicated training and inference. And the inferences can take a really long time. Folks who have done AI know: if you run it on a desktop without GPU acceleration, it can be 10, 20, even 100 times slower than if you have a specialized piece of hardware that's really, really good at multiplying these big collections of numbers together. And again, one of the nice things about this chip is that there's about a megabyte of RAM on it, specifically so that you can have everything in memory. When I did TensorFlow Lite for Microcontrollers, we tried to do this hack where we would burn the model to flash and then execute in place.
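To get a feel for why that tensor math is so expensive without an accelerator, here's a rough back-of-envelope sketch (plain Python, nothing to do with the MAX78000 toolchain; the layer dimensions are made up for illustration) counting the multiply-accumulates in just one small convolutional layer:

```python
# Counting multiply-accumulate (MAC) operations for a single conv layer.
# This is the work a CNN accelerator does in hardware instead of having
# the CPU grind through it one multiply at a time.

def conv2d_macs(in_ch, out_ch, kernel, out_h, out_w):
    """MAC count for one 2D convolution layer (stride 1, square kernel)."""
    # Each output pixel in each output channel needs a full
    # kernel x kernel x in_ch dot product.
    return in_ch * out_ch * kernel * kernel * out_h * out_w

# Hypothetical first layer of a small image model: 3-channel 64x64 input,
# 16 output channels, 3x3 kernels, 62x62 output (no padding).
macs = conv2d_macs(in_ch=3, out_ch=16, kernel=3, out_h=62, out_w=62)
print(macs)  # 1660608 -- about 1.7 million MACs for one small layer
```

Stack a dozen layers like that per frame, at several frames per second, and it's easy to see why a plain Cortex-M4 needs hardware help.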
It's very complicated and kind of nasty. It meant there was this high startup time, because we had to burn the model, which is quite large, into flash memory, since we couldn't hold it in RAM. But that's one of the nice things about this chip: it's designed specifically for the kind of high RAM usage you need, where you have a full image in memory and you're manipulating it with models that are also in memory, so you can load it very quickly. The only tradeoff is that, like any AI project, you do have to train the model. Now, it does come with a couple of models, and I'll show you when I try to do a demo of one, but depending on what you want to detect visually or audibly, you're going to have to train it. You have to collect a lot of data, do the training in PyTorch, and then you can create the model, export it, and program it into the microcontroller, the MAX78000. The training is not extremely complicated, but it's not trivial, right? You need someone who kind of knows what they're doing, you have to have a lot of really good data, you have to have a lot of patience, and you have to have a fast processor, because creating the model is actually quite difficult: not so much code-wise, but computationally. So you have to do it on a desktop computer; you can't do it on the chip. You train separately and then load the model in. PyTorch is a very popular tool for taking all this data, kind of smushing it around, and figuring out what collection of layers you need to apply to the data you've got to be able to train and identify what you want to identify. Thankfully, ADI has a nice video series, which I watched a bunch of, taking you step by step: how do you do training, what kind of data is good, and then how to apply it to the MAX78000, which is their core AI-at-the-edge chip.
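The "train on the desktop, then export to the chip" flow above can be sketched in miniature. This is a toy stand-in using plain NumPy instead of PyTorch and ADI's real training tools; the data, model, and loop here are all illustrative, but the shape of the process (fit weights on a big machine, then carry the frozen weights over to the device) is the same:

```python
# Toy illustration of the desktop training step: fit a tiny linear
# classifier by gradient descent. The real MAX78000 flow trains a CNN in
# PyTorch and then quantizes/exports the weights for the chip; everything
# below is a simplified stand-in, not ADI's actual tooling.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))             # pretend feature vectors
true_w = rng.normal(size=8)
y = (X @ true_w > 0).astype(float)        # pretend labels

w = np.zeros(8)
for _ in range(500):                      # gradient descent on logistic loss
    p = 1.0 / (1.0 + np.exp(-(X @ w)))    # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(y)     # average gradient step

acc = np.mean(((X @ w) > 0) == y)
# The trained weights `w` are what would then be quantized and flashed
# to the microcontroller alongside the inference code.
```

The expensive part is that loop: a real CNN has millions of weights and runs many passes over a large dataset, which is why you want a fast desktop (ideally with a GPU) rather than the microcontroller itself.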
And then when you program it, you're going to use the DAP programmer that comes with the kit. You plug it into the USB-C port; it's a cute little dongle. Then you select which processor you want to program, because there are actually two MAX78000 processors inside: one for video and one for audio. You could have one processor do both, but in this case I think they wanted to demonstrate the speed you can get from splitting the two tasks between them, and each one is pre-wired to either the camera or the I2S input. There's lots of example code. And here are some models that they have: portrait cat-or-dog detection, the face ID I showed you before, wildlife detection for things like deer or cars or people outdoors, and digit detection, which is kind of the standard handwriting detection. And there are tutorials on how to collect enough data that you can then train it yourself. Training models, to be honest, is not trivial, but there are tutorials on how to do it. One nice thing is you don't have to use TensorFlow Lite for Microcontrollers; because of the accelerated processor, you can actually use full TensorFlow Lite. But do be aware that the model has to fit within the amount of RAM that's available on the MAX78000. There's also a Feather board that they made for this chip. So once you've got the camera and your prototype developed, and you maybe want some other Feather-compatible hardware, you can design your own prototypes and products before going to final manufacture. I thought this is a beautiful light-blue board, and the dev kit is in stock. It says one in stock, but there are 417 in the factory, which means they can get it to you within a day or two. So you can pick one up if you would like to experiment with video or audio recognition. So I thought we could do a quick demo on the overhead. Now, because it's a live demo, I'm going to say: I hope it works.
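That "model has to fit" constraint is easy to sanity-check before you get deep into training. A quick sketch (the 442 KB weight-memory figure is from the MAX78000 datasheet, but treat the example model sizes below as made up):

```python
# Back-of-envelope check that a quantized model's weights fit in the
# MAX78000's dedicated CNN weight memory (442 KB per the datasheet).
WEIGHT_MEMORY_BYTES = 442 * 1024

def fits(num_weights, bits_per_weight=8):
    """True if the model's weights fit in CNN weight memory."""
    return num_weights * bits_per_weight // 8 <= WEIGHT_MEMORY_BYTES

# A hypothetical 400k-parameter model:
print(fits(400_000))       # True  -- fits when quantized to 8 bits
print(fits(400_000, 32))   # False -- the same model at float32 does not
```

This is also why the toolchain quantizes the trained weights down from 32-bit floats: the same network can shrink to a quarter of the size and suddenly fit on-chip.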
So I've got it here, and it's not focused on purpose, because it's going to be focused on the screen. So this is Brad Pitt. Just believe me, I am Brad Pitt. And the demo they've got is, oops, sorry, I have to remember to tilt up. There you go. So you can see... You can see Brad Pitt. You can see Brad Pitt! Yeah, you can see that it recognized Brad Pitt. Yeah, there you go. You can see it. So it detects famous people; it's trained on models of celebrities. So, I'm sorry, it's a camera of a camera. A camera of a camera of a camera of a camera. I know, but... Oh, and also it can detect words. So if I say up, down... oh no, there you go. Let's get this closer. Show it. Up, down, left, right. One, two, three. It's pretty good. It's actually pretty good, you know, compared to... On the edge. On the edge! So yeah, check out this dev kit; here it is taken apart. It's got a battery, it's got these beautiful boards, the video stuff and the audio stuff. A very cute, compact little camera kit. Nice work. Yeah. And that is this week's Eye on NPI. Eye on NPI!