Bonsoir Montréal-Python. Good evening Montreal-Python, and welcome to Montreal-Python 99. Our program for tonight is coming in just a moment, but first, while I manage to advance my slides, I need to tell you that you should join us on Slack. The link is right here on the screen, and it's also in the description on YouTube. And if you join us on Slack, or if you decide to comment on YouTube, know that we have a code of conduct at Montreal-Python. It's very simple: it basically says that we should all be excellent to one another. If you see something that doesn't feel excellent to you, find me, Yannick, or Duc, who is called Duke on Slack, and tell us, and we'll do our best to solve the problem.

So, our program for tonight: Duc is first going to give us a little summary of our first in-person event since the pandemic, the programming day we had last Saturday. And then Nicolas Nadeau is going to tell us about harnessing Python for solving hard technical problems. After that, we're going to have our traditional virtual happy hour on Jitsi. You can come with whatever drink you feel like drinking. It doesn't have to be a beer, but if you do drink a beer, you might be in good company, because there's a fair chance that I will have one. Other than that, keep an eye on our Meetup page, Montréal-Python, where we list all of our events. We also post them in a couple of other places, like our LinkedIn page, Facebook, and the mailing list, but we're less regular in posting there, so Meetup is definitely the place to watch. I'd also like to thank FGNR, who will host our virtual 5 à 7 tonight. And Eddie took this beautiful picture of the blue snake, so thank you, Eddie, for sharing it. On that, I'm going to invite Duc, who's going to tell us about the programming day. Hello, Duc. How are you?

Hey, hi Yannick. I'm fine, how are you? Okay, well, I'll start. As Yannick said, we had our first in-person event since, I don't even know how long, it's been so long. So that's the context. It was our programming day, and we had three groups: an introduction to programming in Python, data analysis with artificial intelligence,
and the third group, web development with Django. Unfortunately, well, the more people there are, the more fun it is, and we didn't have as many people come as we wanted. So next time, you'll have to bring your friends, your family, everyone, to come encourage us and learn to develop with us. We had four or five people in the introduction to Python programming group, five people in artificial intelligence, and one person in web development with Django. So that's what we have to announce; if we ever do it again, and we will, we'll announce it. We're also looking for venues, because this time the Érudit team hosted us and lent us the space, so thank you, Érudit. But if your company or your organization wants to host us, if you have spaces where we can run small workshops like that, well, we're always encouraged by that kind of offer. The feedback I received was very positive, so for sure we'll do this again. But stay in contact through all the communication channels Yannick mentioned earlier; if we do other events, we'll relay all the information. I think that's about it on my side. Yannick, do you want to add anything?

Well, thank you for that summary, Duc. I have a question: I wasn't there, but did you and the presenters discuss when our next programming-day event will be? To be clear, we're not dropping the thread: we'll continue to do virtual programming events like we did last year, but we want to start doing more events in person. So, according to you, Duc, when are we due for our next in-person event?

Well, personally, I'd say in two months. But you know, when we do an event like that, it takes the whole day, a whole Saturday, and according to our colleagues that's maybe a little too much for them; we hear that. So we'll surely meet up and discuss it. But yes, it's true that if we're not able to do it in person very often, our online programming days, which are a little easier for us to run, will be a good alternative too.

That's great, thank you very much for this summary. And I invite everyone to follow what's going on on Meetup, because we'll announce our next programming event there as soon as possible, whether it's an evening or a full day in person. And on that, I'm going to invite Nicolas, Dr. Nadeau in fact, founder of Nadeau, I forget it, Nadeau Innovations, it's on the screen. He's going to talk to us about his experience using Python to solve hard technical problems such as the design of biomedical equipment, high-performance 3D printing (I'm not sure what "high performance" means here, but you'll tell us), robotics, and artificial intelligence. So our next speaker is Nicolas Nadeau, Dr.
Nadeau. Nicolas, over to you.

All right, thank you very much, Yannick; thank you very much, Duc, for having me here today. So I'm Nicolas, and thank you, Montreal-Python, for hosting the event. Today we're going to be talking about harnessing Python for hard-tech applications. For me, hard tech is that interface between hardware and software, really where robotics, AI, edge AI, and IoT all come together, and that's essentially where I've spent most of my career. I don't do much on the cloud-SaaS side of software development; I'm really on the robotics side of things. The theme of my talk today is to see where Python fits into these hardware-intelligence systems, and I'm going to use my career as examples and stepping stones, because I find it follows the progress of robotics and AI over the past decade or so and how it's evolved over that time.

One thing I found missing early in my career was simple examples of where Python gets used. As a young engineer, I would always wonder where I could go to use these skills, these tools I enjoy a lot. Do I have to go to a Python-only shop, or can I explore other applications and domains and still bring in the tools I want to use? If we just go by the internet, a lot of people might think Python is only used for AI or web development, but there are so many more applications that we don't traditionally think of. And from a diversity, equity, and inclusion point of view, I want the world of robotics, AI, IoT, and edge applications to bring in the skills of non-hardware, non-hard-tech people, and really bring those super useful perspectives and people into these applications, making them accessible to everyone of all backgrounds and skill sets. So that's essentially what this presentation is about.

A little bit about me. Currently I'm helping companies build next-generation technologies and empowering high-performance teams through fractional CTO services; that's Nadeau Innovations. Most recently before that, I was CTO of Halodi Robotics, a global humanoid-robotics company. Before that, I was head of engineering at AON3D, doing high-performance 3D printing out of Y Combinator. I've spent the last decade or so creating unique technologies for biomedical applications, food robotics, humanoid robotics, Industry 4.0, and automated manufacturing. I'm a P.Eng. and a member of the Order. I did my PhD at ÉTS, and that was really where I first started mixing machine learning and AI with robotics, to do medical ultrasound. Once upon a time I was a mechanical engineer out of McGill, and that's really where my love of hardware comes from. And most of the time you can find me as a mentor at Techstars, NextAI, Creative Destruction Lab, and FounderFuel.

So, first application; let's get to the fun stuff. Over a decade ago, I had the opportunity to work with a small biomedical engineering company here in Montreal called Rogue Research. I had a lot of fun there, great team. They're really focused on high-risk, high-reward projects, those moonshots, and we developed devices and tools for brain imaging, cognitive neuroscience, and veterinary research.
One of the devices we created was one of the first semi-portable near-infrared spectroscopy (NIRS) systems that we pushed to market. NIRS is a non-invasive technique used to measure brain activity. It's often used in brain-imaging studies to help researchers understand how the brain functions: what parts of the brain light up when you do different activities or different cognitive-neuroscience tasks. One of the advantages of NIRS is that it's non-invasive, meaning it doesn't need any surgery or injection of anything into the body; we really just put a cap on a person's head. It's also less sensitive to motion compared to other techniques like MRI: we can use it on young children or patients with movement disorders, because you don't have to stay perfectly still, which is a lot of fun when you're doing cognitive neuroscience in sports, for instance. Now, one of the most critical parts of a NIRS system is the optode design.

Pardon me for interrupting; I think we might be able to play a little with the layout of your screen to better use the screen real estate, since you're getting into the busier slides here. Oh, that's so much better. There we go, let's zoom in. Perfect. You know, you try to do the multi-monitor view.

All right, so, optode design. NIRS essentially works by shooting lasers into people's heads. They're low-powered lasers in the infrared spectrum, and the light actually passes through the scalp and skull into the brain. When the light enters the brain, it interacts with the brain tissue and with the hemoglobin in the bloodstream, which carries the oxygen in the blood. By measuring the way the light interacts with this hemoglobin, we can learn how different parts of the brain are activated or deactivated and where oxygen is being delivered, and this gives us insight into which areas of the brain are more active. From a design engineer's perspective, an optode is a fiber-optic cable, glued to a mirror prism, glued to a lens. This let us get a nice low-profile 90-degree optode, but there are a lot of mechanical optimizations that need to be done, and they all have to do with how the light actually passes through.

This project was actually my start into Python, over a decade ago, when I was just a young mechanical design engineer. Coming out of university, I thought MATLAB was the greatest thing ever, but I soon realized that without all the expensive toolboxes that universities give you for free, MATLAB is essentially useless. And so that's where I started with Python. We needed to optimize the optode's fiber-optic and lens design to know how the assembly and the mechanical tolerances affected the light emission and detection performance of the source and detector optodes. So I designed a simple 2D ray-tracing optimization tool that simulated the laser rays and the attenuation of the laser light using Monte Carlo methods: essentially using randomized statistical methods to shoot out a whole bunch of laser photons and estimate which ones make it to the end, or get detected on the way back. They would all get different starting parameters; we'd essentially build a table, a mini-database of laser rays that went out and how many were detected, and then randomize around how the different mechanical assembly tolerances would affect that performance.
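As a minimal toy sketch of that kind of Monte Carlo ray sampling; all the geometry and tolerance values here are made up for illustration, not the actual optode parameters:

```python
import numpy as np

rng = np.random.default_rng(42)
n_rays = 100_000

# Hypothetical launch conditions: position across the fiber core and an
# angular spread; both distributions are illustrative assumptions.
y0 = rng.uniform(-0.2, 0.2, n_rays)           # launch position (mm)
theta = rng.normal(0.0, 0.08, n_rays)         # launch angle (rad)

# Assumed assembly tolerance: a small random lateral offset of the lens.
lens_offset = rng.normal(0.0, 0.05, n_rays)   # mm

# Propagate each ray a fixed distance and test whether it hits the detector.
z = 5.0                                        # propagation distance (mm)
y_final = y0 + z * np.tan(theta) - lens_offset
detected = np.abs(y_final) < 0.5               # detector half-aperture (mm)

print(f"Detection efficiency: {detected.mean():.1%}")
```

Re-running a loop like this while sweeping the tolerance parameters gives you a table of how each assembly variation degrades detection, which is the kind of mini-database described above.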
We were then able to visualize the rays with matplotlib, so I got to share the results with my stakeholders, including the non-technical people, for design reviews. From there, what's a lot of fun is that we could export everything to SOLIDWORKS as 2D layouts in sketches, as Excel-driven design parameters. That let me basically auto-generate a lot of the mechanical assemblies and mechanical parts that I then got to build and test, and compare those values and that performance back to the simulation. It was really this Python-in-the-loop with hardware design that started my love for this easy-to-use but super powerful language.

After Rogue, and out of Y Combinator, I had the opportunity to be head of engineering at AON3D, Montreal's super awesome 3D-printing company. We did a lot of aerospace and biomedical applications. Essentially, a 3D printer is just a robot in an oven, and from the systems perspective we treated it as an IoT device. That let us bring in a lot of tools and techniques from the IoT world to make it a much smarter system, if you will. One fun application: we'll have the first 3D-printed parts landing on the moon quite soon, with the Astrobotic Peregrine moon lander; I believe it's landing at the end of March. It all comes down to being able to build these high-performance systems and serve these high-performance clients, with top-end materials that are very hard to control and design around. We needed a system that's easy to use, agile to develop on, and that outputs reliable parts. The developer and researcher experience, really the UI/UX and workflow of the system, is key to making something like this successful.

So let's look at the actual tech stack that went into designing a 3D-printing IoT system like this. The HMI is essentially a web app: anyone familiar with full-stack web development would instantly be able to develop on our system. Even more fun, the infrastructure took advantage of Docker containers, so we could do over-the-air (OTA) updates on the fly, do A/B deployments, roll out our different firmware and software to our internal machines, to external clients, to special clients, and do a lot of remote diagnostics and logging. All of this is just standard in the SaaS world; we brought it over to the hardware, sort of hardware-as-a-service. The UI/UX was just a React front end; you can see an example there on the left, a nice touchscreen with big buttons. The back end was a Python Flask container, which gave us all the endpoints we needed (a minimal sketch follows below). And down at the firmware level, C/C++ firmware on an ATmega-based microcontroller controlled the motion system and the thermal system of the 3D printer. I had a lot of fun using balena as our fleet-management infrastructure-as-a-service. It allowed us to do remote management of the machines, update machines, and do A/B deployments; anybody who's ever done any form of Docker container deployment in the SaaS world, whether through GitLab CI/CD or whatnot, would be very familiar with this workflow. And this really let us do a separation of concerns between the hardware engineers, the people who needed that hardware expertise, and the other types of developers that we were able to onboard and bring onto the team. It opened the doors for non-hardware people to contribute to our system.
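As a rough idea of what such a Flask back end might expose; the endpoint name and fields here are hypothetical, not AON3D's actual API:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical in-memory state; a real printer back end would read this
# from the firmware over a serial or internal bus connection.
STATE = {"job": "bracket_v2.gcode", "progress": 0.42, "chamber_temp_c": 135.0}

@app.route("/api/status")
def status():
    # The React front end (and any fleet tooling) can poll endpoints like this.
    return jsonify(STATE)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```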
Honestly, this setup made hiring a lot easier, too. Like with all robots, and especially robots inside 200-degree ovens, debugging and customer success were super important. In robotics, things will break, and you have to be able to respond quickly to the event and fix it. We had a lot of clients at secure aerospace facilities where the word "cloud" essentially scared them and everything had to be on-prem. And the alternative solution is what? To dump all the logs into a zip file onto a USB key, which is a real pain, and USB keys in secure facilities are a security issue in and of themselves. So we assumed that everyone had access to a phone camera, and we had the cool idea of using our nice big touchscreen display to show QR codes with all the basic log data embedded inside as JSON. This really helped customer support: it took the process from something going wrong in the field, to the customer interacting with our system, to connecting with our customer-service team and providing all the information they would need to resolve that ticket or issue, and made it much quicker; it brought agile to the customer-success process, if you will. Instead of your service-desk ticket being filled with "I think this, I think that" and a couple of dropdowns, we could attach a picture of the QR code with all the basic logs we needed embedded in it, to debug the issue faster. This led to a better customer experience and a better developer experience, because developers get frustrated chasing after information from the end user. The QR codes also allowed our users to audit what data was being sent, so we were very transparent about our data-collection relationship with our end users. And all of this was just a few lines of Python, as you can see here on the left, within our Flask app: the touchscreen app would just dump everything into a dictionary, and using the qrcode Python library, a couple of lines basically transform it into a QR code that we could pop up on the screen when you need to debug something (a rough sketch follows at the end of this passage). A lot of fun; this was a great little quick thing that made the end-to-end experience a lot nicer.

On the data-science and AI side of things: whoever's done hobby 3D printing at home, with a Prusa, say, knows the struggle of figuring out your print parameters. There are nozzle speeds, nozzle temperatures, chamber temperatures, extrusion multipliers; all of these have an effect on your final print outcome, on whether the print even succeeds or crashes your tool head, but also on the mechanical part properties, such as tensile strength. These properties are super important to design engineers in aerospace and biomedical, where they need really consistent and robust results. The printer itself also has an effect on the part properties: so now you have your parameters going in, plus which printer you printed on, which might have small effects on the final outcome. So we wanted to start collecting data, build models, and have a better predictive understanding of what affected the part outcome. We collected basic logs using balena and standard SaaS-type tooling to feed our logging infrastructure.
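Coming back to the QR-code trick: a minimal sketch using the qrcode library, with a hypothetical log payload standing in for the real machine state:

```python
import json
import qrcode

# Hypothetical snapshot of machine state; the real app collected this
# from the printer's internal state and logs.
log_data = {
    "serial": "printer-042",
    "firmware": "1.8.3",
    "error": "E-THERMAL-04",
    "chamber_temp_c": 198.7,
}

# A couple of lines turn the JSON payload into a QR image that the
# touchscreen can display and a phone camera can capture.
img = qrcode.make(json.dumps(log_data))
img.save("debug_qr.png")
```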
Those basic logs were fine for operations, but from an R&D perspective, we needed higher-fidelity, higher-resolution logging, really at the scientific level. Fortunately, we had dozens of printers internally as part of our internal print farm, where we did a lot of prototyping, a lot of testing, a lot of customer-success work. Since the machines were basically IoT devices with REST endpoints served by that Python Flask application layer, I simply set up a Python scraper cron job that collected this high-resolution data over time. Now, a lot of people might think, well, you could just create a push service, a pub/sub type thing, to push to some external data store. But this simple infrastructure didn't need any external services; we were essentially cron-jobbing and scraping these API endpoints. And that meant any engineer, even hardware engineers with basically little to no software experience, was able to maintain and interact with this data-collection system. This is typically a lot heavier data than you'd send to the cloud or your standard metrics-observability stack. These endpoints were also public to our research users, who needed this data for their studies, for research papers, and things like that, so we were essentially dogfooding our client tools at the same time. And now stakeholders were able to check out daily and weekly data summaries that were dumped as CSVs into our internal shared drive. The Python cron job just hit the REST API with requests; pandas and SQLAlchemy were used for the ETL; it dumped CSV and Excel files into a shared drive (we were using Teams back in the day, so probably SharePoint) and also sent the data to an internal Postgres (a sketch of this collection loop follows below). Then we displayed it in pretty dashboards: Grafana showed all our uptime across the different printers, temperatures, things like that, and a Metabase platform let less technical users ask more business-intelligence, insight-type questions of the data. Having this data-collection system, with its core being Python, really owned and operated by the R&D team, who weren't necessarily the top software people, empowered them as a team and enabled a better development experience: the ability to test new features and prototypes without having to go over to the software team and get them to maintain a separate set of infrastructure.

On to some of the work I did during my PhD. This is the world of pure academics and research, and it's where I started to directly combine the robotics and AI sides of things. This is really where I got into that deep hard-tech world where the software controlled the hardware systems and made decisions on the fly. The applications ranged across robotics: I made a robotics framework for Python called PyBotics, which I presented at PyCon a number of years ago; I presented my real-time motion optimization here at MTL Python, I think a few years ago; and we did some calibration studies as well. I'm not going to dive into those too much since I've presented them already, but I want to give a high-level overview of where Python comes into this robotics world. Offline programming is probably the number-one application I see.
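A sketch of that scraper-and-ETL loop; the printer hostnames, endpoint, and schema here are all hypothetical:

```python
import pandas as pd
import requests
from sqlalchemy import create_engine

PRINTERS = ["http://printer-01.local", "http://printer-02.local"]
engine = create_engine("postgresql://user:pass@internal-db/printfarm")

def scrape() -> None:
    rows = []
    for base in PRINTERS:
        # Each printer's Flask layer serves telemetry over plain REST.
        resp = requests.get(f"{base}/api/telemetry", timeout=5)
        resp.raise_for_status()
        rows.append(resp.json())
    df = pd.DataFrame(rows)
    df["collected_at"] = pd.Timestamp.utcnow()
    df.to_csv("daily_summary.csv", index=False)  # summary for the shared drive
    # Append to Postgres for Grafana/Metabase to query.
    df.to_sql("telemetry", engine, if_exists="append", index=False)

if __name__ == "__main__":
    scrape()  # run from cron, e.g. */15 * * * *
```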
Offline programming is super important when developing robotics applications, because it allows engineers to simulate and test robot movements and tasks in a virtual environment that doesn't break, and doesn't cost anything if you do break it. All of this happens before you implement it in the physical world, where breaking things is very expensive. You essentially create a digital twin of the robot and test its performance, check for collisions, and optimize the movements and trajectories without needing the physical hardware right in front of you. So it reduces damage, and it also makes you more efficient, because you can do all of this on your couch at home instead of needing access to the equipment and, say, turning off the production line just to run your tests, which reduces costs quite a bit.

From an architectural point of view, you essentially have Python at the center of development these days. You have a core Python app that's agnostic about whether it's controlling a simulated robot or a real robot: you just have a pointer that pushes your application's motions either to a robot simulator that controls a simulated robot, or to a real robot controller that controls a real robot. And even further, sometimes some of these simulators can also directly control the real robot controller. This lets us do things like CI/CD with real robot applications, because each code change can be tested in the simulated environment before being deployed onto the real one and pushed to production, where things can go really wrong if they're not tested. This was something I presented previously at Montreal-Python, so I'll just reiterate that the Python tech stack goes beyond controlling the robots: it extends into the autonomy, the training and data collection around robotics, the calibration, all that AI stuff we're starting to see these days.

The biggest unfortunate thing is that every robot has its own software SDK or interface, and you basically have to write wrappers upon wrappers upon wrappers to control all these robots. And that's where I love gRPC. It provides a language-agnostic approach to wrapping all these systems and communication infrastructures, so my Python client at the center, where I do my business logic, can interact with everything agnostically. It's scalable, it enables hardware-in-the-loop, and it makes everything testable. Mocking becomes really easy, because all these hardware devices essentially become abstract servers that provide a set of services, callable functions, to my Python client where I do that business logic. The client doesn't need to know how anything works, because everything is a black box defined through the protobuf files: what's needed as an input and what's expected as an output. And everything stays versioned; all these interfaces are well-defined and shared. It's an infrastructure-as-code approach through the protobuf files. Now I can auto-generate boilerplate communication APIs, clients, and servers essentially on the fly as needed: you need a Java endpoint, you spin up a Java endpoint; you need Python, everything is given to you essentially for free (a sketch of the pattern follows below). And from a hardware engineer's perspective, the experiment design and setup time is greatly reduced through this automation of well-defined interfaces.
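To illustrate the pattern (not any specific robot's actual interface): imagine a hypothetical robot.proto defining a service with a MoveTo RPC; grpcio-tools generates the stubs, and the Python client only ever sees the contract:

```python
import grpc

# Hypothetical modules generated from robot.proto via:
#   python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. robot.proto
import robot_pb2
import robot_pb2_grpc

# The business-logic client only knows the service contract, not whether a
# simulator or a real controller is answering on the other end.
channel = grpc.insecure_channel("robot-controller:50051")
stub = robot_pb2_grpc.RobotStub(channel)

reply = stub.MoveTo(robot_pb2.Pose(x=0.5, y=0.0, z=0.3))
print("Move succeeded:", reply.success)
```

Pointing the channel at a simulator server instead of the real controller is what makes the CI/CD and mocking story work.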
On to my most recent work. Several years ago, I met a bunch of Norwegians on the internet, and that's how Halodi Robotics really got started. In the span of a couple of years, we grew to four countries, five offices, and a new humanoid robot designed for security guarding. We grew to 50-plus employees, and we sold the biggest contract ever, 140 robots, to ADT Commercial in the U.S. So that was a lot of fun, a lot of hypergrowth, dealing with six-foot-tall robots like this. We were big fans of autonomy; we really embraced the large language models, the LLMs you're hearing about now, and techniques from cutting-edge AI research, bringing them into the robotics space to train our robots end-to-end using just the vision data, the image data from the camera system on the robot's head. If you think about it, this is very similar to the famous publication from years ago about AI playing Atari games using just the image input and controller outputs. But with robotics in the human environment, the real world, all unstructured, data collection is really hard: there are lots of corner cases, and human environments are complex and messy, with lots of soft, deformable objects. So we really took advantage, and you can see this on the left here, of human VR teleoperation to take control of the robot in low-confidence situations. This let us capture the human motions as expert data for behavior-cloning and imitation-learning AI model training. On the left, we're just using an Oculus Quest 2 and Unity to drive the robot through some endpoints.

From an architectural perspective, robots are really complex, and humanoid robots are even more complex. As CTO there, my goal was really to figure out how to make it as simple as possible to onboard new engineers, and to make a maintainable structure that could grow with the growth of the company. We wanted onboarding to be as straightforward as possible, and we wanted the developer experience of getting new ideas and new features deployed to be as easy as possible. So we split the architecture into three distinct layers: core, interface, and application. This allowed us to create a separation of concerns, not only in the modules and the code base, but also in the skills, the development workflows, and the types of engineers we'd be able to bring on board to contribute to our platform. The core layer is honestly the only place where you need deep robotics knowledge; as you move up the chain through the interface to the application layer, pure ML people, full-stack developers, non-roboticists can really be successful in contributing to the overall application. Python was hands-down the primary language at the top layer. At the core layer, C/C++ is the real-time code; we RT-patched the kernel to really control our own scheduling, because a self-balancing robot needs to stay up, and you need to maintain minimum latency to keep that running. But as you move up the layers, you don't need that real-time guarantee anymore, so you're able to use tools like Docker and containerize everything. OpenCV, Grafana, PyTorch: all of that lived at the top layer, to quickly spin up demos and to build end features and applications for our clients as quickly as possible. And containerization really helps the whole dependency-management side of things: engineers are able to spin up ideas and just deploy them in a container without worrying about how it affects other people's containers and applications.
A great example of this was at an Industry 4.0 conference in Milan, IPACK-IMA, that we did with one of our partners, AltaPack. AltaPack makes machines in Bologna, Italy, that package dried food goods such as pasta and chickpeas; Barilla and De Cecco products are key examples of what might be packed on one of these machines. They wanted a demo of a potential factory of the future, with robots working alongside humans, and we didn't have too much time to spin it up. Honestly, in the span of a few weeks, we put it together, nice and containerized at that application layer: the OpenCV Python module, QR codes, as you can see here, and ArUco markers for doing some of the triangulation on the manipulation side of things, using ROS as the communication structure. ROS is really popular in the robotics world as a pub/sub framework.

Nowadays, I've gone private, and I lend my expertise through fractional CTO services, helping corporations and startups make the right decisions early on. This lets me play with a lot of great tech and work with a lot of great people, more so than at just one company at a time. One of my favorite clients recently, and you might have seen this on MTL Blog, the Gazette, or Radio-Canada, is Osedea, and they've been deploying the Spot robot for dynamic sensing applications. One of the big clients was the STM recently, featured in all our local media. The goal of this is: can we make the human world closed-loop for sensing applications? Can the Spot robot live its best life patrolling locations and collecting data, so that the back-end systems, our stakeholders, our decision makers, can make data-driven decisions? Predictive maintenance using basic computer vision loaded into a Docker container is all that's really needed to work with this robot. And it's a beautiful system; it's been one of my favorite platforms to develop on lately. The robot just works: if it falls over, it gets back up; it can go up and down stairs, go through doorways, and navigate on its own. You don't have to focus on debugging and fixing robots, like I've had to do in a lot of my other companies' robotics applications, where you build the robot yourself. So instead of debugging robots all day, I can just focus on the end application and develop using standard developer tools and workflows. This makes it really easy to spin up new tests and demos, and for non-roboticists to succeed. In this GIF here, a quick example from Boston Dynamics, we're just using OpenCV Python again, NumPy, and TensorFlow, loaded into Docker containers, to quickly teach the robot to pick up objects it detects through its camera system. And we're going to get into how all of this works.

At the core, the Spot SDK is just a nice Python SDK: pip install bosdyn-client, and you have everything you need to get started with Spot. It's wonderful. Anyone who's worked with Python before will quickly understand what's going on here. This is the hello-spot application from Boston Dynamics: you start with some basic authentication and configuration of the robot, and then you connect. It might be a little hard to see on the big screen here, so I'm going to try to zoom in a little more, but essentially we get into just moving the robot right away.
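A compressed sketch in the spirit of the SDK's hello_spot example (written from memory, so check the Spot SDK docs for exact names; e-stop setup is omitted for brevity):

```python
import bosdyn.client
from bosdyn.client.lease import LeaseClient, LeaseKeepAlive
from bosdyn.client.robot_command import (RobotCommandBuilder,
                                         RobotCommandClient, blocking_stand)
from bosdyn.geometry import EulerZXY

# Basic authentication and configuration, then connect.
sdk = bosdyn.client.create_standard_sdk("HelloSpotSketch")
robot = sdk.create_robot("192.168.80.3")   # robot hostname/IP
robot.authenticate("user", "password")
robot.time_sync.wait_for_sync()

lease_client = robot.ensure_client(LeaseClient.default_service_name)
with LeaseKeepAlive(lease_client, must_acquire=True, return_at_exit=True):
    robot.power_on(timeout_sec=20)
    command_client = robot.ensure_client(RobotCommandClient.default_service_name)
    blocking_stand(command_client, timeout_sec=10)   # stand the robot up
    # ...and right away you're moving the robot: twist the body a little.
    cmd = RobotCommandBuilder.synchro_stand_command(
        footprint_R_body=EulerZXY(yaw=0.4, roll=0.0, pitch=0.0))
    command_client.robot_command(cmd)
    robot.power_off(cut_immediately=False)
```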
And all of this is loaded into containers so that we can control versions and packages, use CI/CD to deploy things, do A/B testing, and bring in our favorite packages like PyTorch, pandas, NumPy, SciPy, OpenCV, and Flask, all onto our robot, without any complications, because everything is nicely containerized. From an architectural perspective, Boston Dynamics apparently shares the same vision that I had at Halodi: there's a separation into three distinct layers of the robot. The core layer is the one that only Boston Dynamics touches, if you will, with the payloads and the base authentication and things like that. There's a robot layer, which is sort of the interface layer, that really defines the state, the control, and the data: these are the objects you pass up and down between the core and autonomy layers. And then the autonomy layer is really like my application layer, where you get the missions, the docking commands, a lot of the services the end user would want to interact with. So it's similar to the MVC (model-view-controller) architecture we see in web apps, where the core model can't be directly touched by the front end or the user; you have to pass through an interface-type layer.

Architecturally, this is how that fetch application you saw before in the GIF actually runs. The onboard Spot computer runs a TensorFlow model, alongside a person's laptop that acts as sort of the client controller. The main fetch.py script runs off-board from the robot, but talks over the network, using everyone's favorite gRPC, with the robot to communicate: go here, go there, pick up this, pick up that, is this an object that should be picked up, yes or no? All these Spot extensions are just wrapped in Docker images, so we can deploy new ones ad hoc as needed with the latest and greatest code, making the whole developer experience as seamless and efficient as possible, and also allowing non-roboticists to be successful. Inside our Docker container on the robot's computer, we just have our TensorFlow model running, and anybody who's familiar with Docker looks at this and says, yeah, we can get up and running with robotics, especially if you've worked with Docker containers in the SaaS world and web development before. What's really nice is that NVIDIA provides optimized Docker containers from their image repository, optimized for TensorFlow, PyTorch, and those kinds of workloads, which makes AI/ML application development a lot easier; we just pull and use them. These standard containers can be loaded directly onto Spot, and that makes it easy for anyone to be a robotics superstar, honestly, without necessarily having that hardware or robotics background (a minimal sketch follows at the end of this section).

So what's next for the world of hard tech, robotics, and edge AI? I honestly see a super bright future for Pythonistas of all backgrounds getting into these robotics and hard-tech applications. The core layers of hardware devices, robotics, and IoT are becoming more and more of a commodity, and they're stable; Boston Dynamics' Spot robot is a perfect example, where we can buy a platform that just works. And now the differentiating factors are the application and the outcomes we get to focus on.
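Inside the container, the inference side can be as plain as loading a SavedModel and feeding it camera frames; the model path and shapes below are hypothetical stand-ins:

```python
import numpy as np
import tensorflow as tf

# Load a detection model baked into the container image (hypothetical path).
model = tf.saved_model.load("/models/fetch_detector")
infer = model.signatures["serving_default"]

# Stand-in for a frame grabbed from the robot's camera service.
frame = np.zeros((1, 480, 640, 3), dtype=np.uint8)
outputs = infer(tf.constant(frame))

# Object-detection SavedModels typically return boxes, classes, and scores.
print({name: tensor.shape for name, tensor in outputs.items()})
```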
That application focus is really going to be driven by people with a business mindset, a user-interface and user-experience mindset, which traditionally comes more from a web-development or traditional-developer background, not necessarily from roboticists or hardware engineers. So honestly, if you know Python, APIs, Docker, CI/CD, databases, and have some basic hardware understanding, you're set up for a super successful career in hard tech right now, right at the point where things are starting to really kick off. Thank you very much.

Thank you, Nicolas. Let me bring in Duc, who's going to lead our question period.

All right. Thank you, Yannick. Hey, thank you, Nicolas; looks like you have a lot of fun at your job. Yeah, I get to play with a lot of robots. Cool. Okay, let me see. I'd like to invite everybody to ask your questions for Nicolas in the Slack channel, or you can write directly on YouTube. Okay, I already have a first question: what are the most common reasons why a simulated robot will not behave the same as a real robot?

So, yeah, it's all about calibration. A simulated robot out of the box gives you what's called your nominal model. This is your kinematic model: using the right-hand rule, if anyone's done physics or mechanics before, you have each joint of the robot connected to the next joint, so essentially you have different lengths, attachment points, and angles between these joints. And how the real robot is assembled, how the screws are tightened, gravity, temperature, friction: all of that affects these little micro-tolerances, and the robot is affected by them. So while your nominal robot on paper is maybe 600 millimeters joint to joint, your real robot is going to be 600.5 millimeters to the next joint, and all those errors stack up. Your simulated robot will behave essentially perfectly, and your real robot will be off by a few millimeters in the real world. This is where calibration comes in. What you're really trying to do is close that error gap between your nominal model and your real robot, such that your simulation knows what the real robot actually is in real life. And when you then do offline programming, you basically have a one-for-one match. Out of the box, all the big robot companies, KUKA, ABB, FANUC, will talk about precision. I didn't put the slide up here, but there's a slide I like to show with the bull's-eyes: precision versus accuracy. Industrial robots are great at precision: they will go to the same place over and over and over again and always be at the same spot. Out of the box, they're actually pretty terrible at accuracy, which is going to a random location accurately and hitting that button. So industrial robots can be precise to microns, but accurate only to millimeters, if not centimeters. If you calibrate, you'll really improve that accuracy in real life.

All right, thank you, and thank you for your question, Jean-Philippe. Let me see if I have another question. Okay, not for now, so I have some questions of my own, then. What is the use case of the QR code that you were talking about? I didn't get it.

Yeah, so if anybody scanned it, you'd find that I actually embedded that log data you saw on the left. It's just a QR code, but it depends: if you scan it with your iPhone directly with the camera app, it's going to try and call a phone number; don't do that.
I don't know whose phone number that is. But what it actually is underneath is just JSON-structured data. That means that if I send that image in a Jira support ticket, or to a client or whatever, they can just take a picture of it, scan it, and get the whole JSON string out of the QR code. It's just embedded, encoded data in a more visual format that transports better as an image. If I were to just display the JSON string on our touchscreen, it doesn't photograph well and you can't easily copy and paste it; whereas with a QR code, you can easily just snap a picture and copy and paste your data where you need it to go. So it's just a way of encoding our log data. And while you can only store so much in a QR code, it was honestly a lot better than before for getting as much data as quickly as possible from a machine into the hands of our customer-success people and our application engineers.

Okay, okay. But the QR code pops up on the screen of the robot? It would pop up on the screen of the robot, and that way you just take a picture of it and you have your log data instantly. Okay, okay. Yeah, talking about logs: how much data do you get each day for one robot, for example the robot that you showed us, the one that looks like a dog?

So, if you were to record continuously: you have five high-definition cameras on the robot itself, you have a camera on the hand of the robot, you have battery data, position data, and you can attach extra sensors. We've tested humidity, temperature, luminosity, all these extra sensors on top. And you can query those at hundreds of hertz if you want, so you could create gigabytes of data per minute if you wanted to. You can create a lot of data really quickly. Another example, never mind the Spot robot: on the Halodi robot, we had three 4K cameras running at essentially 60 frames a second, and that's gigabytes of data per 30 seconds or so that you could collect if you really wanted the raw data. So this is often the question when you're doing a lot of these edge applications: where do you optimize your data? Do you take the raw data and transport it right away, or do you use some compute resources to downsample it at the edge and then only transport your downsampled, smaller data, if you will? It depends; there are pros and cons to everything. I'll say right away that transporting three 4K streams is not a good solution, and doing some downsampling as quickly as possible, in memory, is a great starting point. But for other applications, sometimes it's best to just ping it out there and let the subscribers take care of the transformations.

Okay, thank you. Okay, let me take another look. Okay, I will continue with my questions, then. You mentioned Metabase; what is Metabase?

Metabase, if I remember their tagline, is essentially a business-intelligence dashboard and data-visualization tool that makes it really easy, I find, for non-expert users to interact with data. The first application I usually deploy it on is when somebody has an Excel file or a CSV file that they want to visualize as if it were in a database; Metabase is able to bring those in and just display them nicely.
They also take the approach, and it's actually a very good approach from a data-science point of view, of always going into your data with a question. It's question-driven research, if you will; it's your hypothesis. So Metabase lets you build up, and sort of automatically put together, SQL queries based on natural-language questions about your data, which is a lot of fun. It's just a really nice open-source business-intelligence dashboard and visualizer, compared to Grafana, which is more your monitoring and observability stack. Grafana is really for the real-time questions: how many printers do I have running right now, where are they in the world, what's the max temperature, the min temperature, all those real-time metrics, uptime, those types of things. Whereas Metabase is: how many prints did I do over the last month, what was the highest temperature over the past year; more of the business-intelligence-side stuff.

Okay, wow. Yeah, I'll have to take a look at that. It's really the pretty stuff that helps our stakeholders. Okay. Let me see. I don't think we have another question, but I have a last one for you before we go to the end. You talked about CI/CD and how you deploy Docker images to robots or some kind of desktop. How often do you deploy a new image or new code to production?

Yeah, production is an interesting one. I'm going to use AON3D as the example for this one. With balena it's really easy: we get to spin up our container builds, and through the fleet management we were able to select a certain number of printers internally as our alpha, canary-type printers; they always got nightlies, the latest master branch. Then we defined a certain set of printers as our beta printers, and those got maybe our latest stable release. And only then would we start rolling out to clients. We did a rolling deployment, where the clients with the least critical applications maybe got the first wave of updates, and then we slowly deployed to the more conservative clients at the end. I would say that for these hardware-type applications, with running systems like this, we were deploying to clients at most once a month. Telling clients "hey, by the way, you have a new system" might change things too quickly; maybe there's downtime, and a lot of times, for clients in the hardware space, if it's working, they just don't want it touched. So only during their next downtime would you maybe roll out an update. Whereas internally, we were deploying software every day; we were pushing out the latest releases constantly. What that led to, unfortunately, is bigger releases, since you wait a month, maybe a quarter, for a big client release. But at least internally, we were able to test, test, test as quickly as possible and get feature branches merged quickly. Same thing at Halodi: probably only once a month or once a quarter would we do big updates to clients. Patch fixes happened more regularly, but actual big feature updates might change the behavior of a system, and that was a big one.
If the behavior of a system changes, you've got to really take a step back and say: okay, how do I educate my client that there's a different experience they're about to get, and how should they approach that experience?

Okay, yeah; I mean, it's not always easy to communicate. No, and there are safety issues too. If something goes wrong with 3D printers and ovens, they have heated parts; but with a self-balancing humanoid robot, if something goes wrong and it falls over, it might hurt somebody. You have to take a much more conservative approach when deploying to the real world.

Okay, cool. Yeah, I think I'm done with the questions. I don't know if you have some for Nicolas. Excellent. I have lots of questions that sprang to mind while I was watching this presentation, but I will save those for the happy hour; that seems adequate at this time of day. Nicolas, thank you again; hopefully you'll be able to join us for the happy hour. And talking about the happy hour, let me remind everyone that it is right here at this URL; it's also a clickable link in the description of this video on YouTube. And in four weeks, what day would that be? Our next event is going to be on March 20th; that's the beauty of having a month that ends on a multiple-of-seven boundary. So, March 20th, our next event. We don't have a number for this one yet; there might be a little trickiness going on with the number. You'll see, we'll keep that as a surprise, but stay tuned on Meetup and you'll know everything about it. Any closing remarks, Duc?

Yes. In four weeks, we already have our next event, and as always, we are looking for presenters. So write to us if you have been working with cool things like Nicolas has, or even if you just think it's very cool and want to share it with us; it always makes us happy to hear what you have been working on. And I repeat: keep listening to us from time to time on all our communication channels, and we'll send you everything we're going to do.

Well, that's great. Goodbye, everyone, and we'll see you next month. Bye-bye, everyone.