Good afternoon. I'll be talking about something a little special: using embedded systems to do computer vision and machine learning at the edge. The reason I'm going to talk about that is that we've been supporting a container solution for computer vision and machine learning for a while, but we have started moving it to the edge, or at least onto edge devices.

So what's a Jetson Nano? The Jetson Nano is a small device; think of it as a little bigger than a Raspberry Pi. But instead of having just a CPU, it has a four-core ARM64 (ARMv8) processor and a 128-core Maxwell GPU, with 4 GB of memory. For those interested, the CUDA compute capability is 5.3, and current hardware is at 8.6, so it's not the most recent hardware. What that means is that you get CUDA 10.2 and cuDNN 8.0 if you're interested in playing with those. The SD card image that you use to install the software provides you with Ubuntu 18.04 and some pre-installed software. These devices are fairly cheap, about 100 bucks. They need a 5 V / 4 A power supply, so about 20 W, if you want to put them in MAXN mode. And they'll do fairly efficient computer vision and a tiny bit of machine learning; for example, if you wanted to run YOLO with tiny weights, it would work.

So why am I going there? In the past, a few of you have heard me talk about the CUDA / TensorFlow / OpenCV containers that we made available to the world a few years back. Those containers provide you with both a CPU-optimized and a GPU-optimized version of OpenCV and TensorFlow. Everything is compiled from source; on a 32-core piece of hardware it takes over an hour to compile any one of them, so it's a monster to compile. We build it with a ton of extra goodies, and by extra goodies I mean a ton of toolkits: you have PyTorch, you have other things built into the base. If you want to know more, you can go to the GitHub repository, look into the build info, and you'll find a ton of information about the OpenCV and TensorFlow builds.

The images are available on Docker Hub, and we have two flavors: the CPU-only version and the GPU version. The GPU version, the one with CUDA and cuDNN, has been downloaded over 10,000 times by now. I've heard a lot of feedback from researchers and students who use them for their projects, because it's a container: you start it, you can use it, you can deploy in it, and you don't have to install anything specific on your system. The cuDNN version will work as long as you have the required NVIDIA drivers installed.

For example, I personally use it a lot when I do computer vision, because I don't want to keep a local install of OpenCV around and have to do all the Python bindings myself. I just start the container, mount my working directory (PWD) into the container's default mount point, and whatever I'm working on I can run with Python directly from a shell inside the container. Very useful for that. More interestingly, you develop on CPU (I work on my Mac), and when I'm done I can port it to my server or desktop to test the GPU side of things. You test, and then you can deploy.
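To make that workflow concrete, here is a minimal sketch of the two commands I have in mind. The image names are how I remember them from Docker Hub, but the `:latest` tag, the `/dmc` mount point, and `my_script.py` are placeholders or assumptions on my part, so adjust them to whatever the current documentation says.

```bash
# Develop on a CPU-only machine (e.g. a laptop): mount the current directory
# into the container and run your code with the container's Python.
docker run --rm -it -v "$(pwd)":/dmc \
  datamachines/tensorflow_opencv:latest \
  python3 /dmc/my_script.py

# Test on a GPU machine with the NVIDIA drivers installed: same command,
# but swap in the CUDA/cuDNN flavor of the image and expose the GPUs.
docker run --rm -it --gpus all -v "$(pwd)":/dmc \
  datamachines/cudnn_tensorflow_opencv:latest \
  python3 /dmc/my_script.py
```

The point of mounting the working directory is that nothing gets installed on the host; the container carries the full OpenCV and TensorFlow stack.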
All you realistically have to do is tell the OpenCV and TensorFlow backends to use the GPU, and change the image from the tensorflow_opencv version to the cudnn_tensorflow_opencv version.

So that's great, but why the Jetson Nano? Well, we have a Jetson Nano version. We designed that one about a year ago, and the reasoning behind it was that we were working on projects where we were trying to do computer vision at the edge. And by the edge I mean you can put out a camera, or a collection device, and push as much information as possible without having to push the data itself. Most of the time you don't care about the video; you care about the analytics that come with the video and the frames in which something was detected. The Jetson Nano was available, and NVIDIA was providing a base container we could work from, so we started doing exactly that. It's the same idea as the non-Jetson version: you can work directly out of the box, it's quick for prototyping, but the Nanos are small enough to be used headless for specific computer vision applications. For public safety, for example, we want to do distance tracking; COVID is a good example lately, where you're able to follow groups of people and see if somebody splits off. As another example, you can also run minimalistic object detection and object recognition. Once again, you only have four gigabytes of memory on this device, but if you look at it with an htop-like monitor, you'll see four CPUs and one GPU sharing those four gigabytes. So you can really do quite a bit more with it than with just a Raspberry Pi, for example.

For people who are Raspberry Pi friendly: this, if you can see it, is my old Docker Swarm cluster. It's been repurposed as a Kubernetes cluster, and I have something I want to show you in a minute.

An example of use is very simple. I start FROM the Data Machines Jetson Nano image, pull Darknet from source, and build it, telling it I want GPU, OpenCV, OpenMP and LIBSO support, and specifying that I want my compute architecture to be 5.3 (there's a sketch of that Dockerfile a little further down). Then I have the Darknet client available on my Jetson Nano directly out of the box. Now, as I said earlier, I actually want to use tiny weights because of the memory available; with 4 GB you're not going to run a full 150- or 160-layer object recognition network. But you can build it, you can run it, and you can do object detection on that super tiny device at about 10 to 12 frames per second, headless. So for the purpose I was mentioning earlier, it's functional.

As for where we're going with that: this little stack here is a research stack that we have in the office using Jetson Nanos. The idea is very simple. You have a router; we want network separation, we can do address reservation, and we can use the router as an SSH ProxyJump host. We have a Raspberry Pi, and the Raspberry Pi serves K3s. You've heard Ryan talk about Kubernetes; something like MicroK8s is available for ARM64, and I run it on my desktop, but I'm not going to run that on an ARM64 board with 4 GB of memory. K3s, though, is a very, very lightweight distribution of Kubernetes that still gives you access to kubectl. So we have a Raspberry Pi whose entire purpose is to run the K3s server and the Docker registry.
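Going back to the Darknet example a couple of paragraphs up, here is roughly what that build looks like as a minimal sketch. The base image name and tag, the Darknet repository URL, and the exact make flags are assumptions on my part (LIBSO builds the libdarknet.so shared library), and it assumes CUDA is visible inside the container at build time, for example because the NVIDIA runtime is Docker's default on the Nano.

```bash
# Build a Darknet image on the Jetson Nano, starting from the Data Machines
# Jetson Nano CUDA/TensorFlow/OpenCV image (name and tag are placeholders).
docker build -t darknet-nano - <<'EOF'
FROM datamachines/jetsonnano-cuda_tensorflow_opencv:latest

# Make sure nvcc is on the PATH inside the image.
ENV PATH=/usr/local/cuda/bin:${PATH}

# Pull Darknet from source and build it with GPU, OpenCV, OpenMP and the
# shared library enabled, targeting the Nano's compute capability (5.3).
RUN git clone --depth 1 https://github.com/AlexeyAB/darknet.git /darknet \
 && cd /darknet \
 && make GPU=1 OPENCV=1 OPENMP=1 LIBSO=1 \
         ARCH="-gencode arch=compute_53,code=[sm_53,compute_53]"
EOF

# The darknet binary then lives in the image at /darknet/darknet; run it with
# tiny weights (downloaded separately), since full weights won't fit in 4 GB.
```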
And then you have four Jetson Nanos, which you can see in that picture, that are headless and run the NVIDIA runtime for Docker. We use the Docker backend, by the way, because making that runtime the default pushes every Kubernetes workload to always have the GPU available, whatever it wants to run; there's a sketch of that configuration at the end. So we can run workloads on the Jetson Nanos using the CUDA containers I just mentioned.

And I know, Blair, you were mentioning that you were looking for a conversation about the edge; that's my presentation. We are going to have a blog post as well as a full set of guides for all of those components available in the near future; they're just not currently available. Thank you very much, and I'll take questions.
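For reference, the Docker-side change mentioned above, making the NVIDIA runtime the default on each Nano so that every workload K3s schedules can see the GPU, usually comes down to a small daemon.json edit. This is a sketch under that assumption; the nvidia-container-runtime path is the one JetPack normally installs, but check it against your own setup before copying.

```bash
# On each Jetson Nano: make the NVIDIA runtime Docker's default, so that
# containers started by K3s (via the Docker backend) always get the GPU.
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia"
}
EOF
sudo systemctl restart docker

# Quick check: the default runtime reported by Docker should now be "nvidia".
docker info 2>/dev/null | grep -i "default runtime"
```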