Hello, everyone. Hope you're all doing all right. I'm Luca, I work at Open Robotics here in Singapore, and I'm going to present a hardware project we've been working on called the Open Vision Computer, a fully open source, ROS-based vision system.

First, the usual round of introductions. That's me in some old pictures from a while ago. I do embedded systems work at Open Robotics. I've been doing embedded systems for about six years now, four of those in robotics, mostly on drones in the US before, and more recently on cameras and robotic arms here at Open Robotics. As for who we are: if you've worked with robotics before, you've probably heard of us; if you haven't, we build open source software and hardware for robotics. Our motto is that we use them to solve important problems and help other people do the same. Our HQ is in Mountain View, California, but we're spread out a bit: our second-largest office is here in Singapore, which we opened about two and a half years ago, and we have people scattered across America and some in Spain as well.

Now, down to the actual presentation: the Open Vision Computer. Here's the outline. First, the motivation, why we built this project; then the path that brought us to where we are today, the third iteration of this hardware; then the architecture and how you can customize it and build on it for your own development; and finally not quite demos, but videos of some simple applications you can do with it.

So first, why? If you've worked with robotics, you know there's a general need for smart cameras for robotic applications; popular examples are the Intel RealSense or the ZED. They're useful because they let roboticists offload some of the computation to the camera. There are a lot of solutions out there, but none of the existing ones are open source, so they cannot be customized: you buy a black box, plug it into your computer, and hope it does what you wanted it to do. And some of them, the ZED for example, actually use the host machine to do all the processing, so they require a powerful GPU on the host.

So we decided to build the OVC, which is fully open source from the hardware, software, and firmware points of view. It also includes an FPGA, so you can offload the most computationally intensive tasks from your machine to the FPGA, saving computational power, which is very important for complex robotics applications where your computational power is limited. And beyond all the customization you can do in the FPGA, it offers what people need most commonly just out of the box.
What people in robotics most commonly need is images synchronized with IMU data, for sensor fusion; stereo images, also synchronized with each other, for stereo matching to estimate the distance and depth of objects; and features, for localization. All of that is offered out of the box, without any development on your side.

A brief history of the journey. It started in 2017 with the Open Vision Computer 1. The module you see in the middle (I'll stop moving the mouse now) is an NVIDIA TX2, so it was fully embedded, quite large, and tailored to the TX2, paired with an Altera FPGA. It was actually used in the DARPA Fast Lightweight Autonomy (FLA) program, on the drone you see in the picture.

The second iteration was similar but more modular: it became a two-part system, with an imager module and a computing module, though it was still tailored to the NVIDIA TX2. It was designed about a year later. What we learned quite quickly is that the TX2 was amazing a few years ago, but NVIDIA has since released a bunch of new modules that are much better, with the latest systems-on-chip, and they are completely incompatible with the TX2 and therefore with basically everything we had developed before, making it obsolete. So we took a completely different approach: now our camera is just a USB device that can be plugged into whatever computer you choose. There was also the problem that the sensors we were using were very expensive, which made the previous OVCs very expensive too. So we simplified things into a USB camera, and that brought us to the OVC3, the module you see here: cheaper sensors, and much more flexible.

A brief overview of the hardware. There are two monochrome sensors, which are great for stereo matching because of their spatial resolution, plus an RGB sensor that can be used for object recognition, for example if you want to track or follow objects with your robot. There's a system-on-chip from Xilinx called Zynq, which has both a quad-core ARM processor and FPGA fabric, plenty of memory, both RAM and non-volatile storage for your data, and a cell-phone-grade IMU, in case a simple IMU is good enough for your application, which it is in many robotics applications. The main interface is USB 3 over Type-C, so we have about five gigabits per second of bandwidth to the host machine, used for both power and data.
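To make that "synchronized images plus IMU out of the box" concrete: all of this arrives on the host as ROS topics, so consuming it is a few lines. Here's a minimal sketch of a host-side subscriber; the topic names are placeholders I've made up for illustration, not the OVC's actual topic names.

```python
#!/usr/bin/env python
# Minimal host-side consumer of synchronized stereo + IMU streams (ROS 1).
# Topic names here are hypothetical placeholders, not the OVC's actual topics.
import rospy
import message_filters
from sensor_msgs.msg import Image, Imu

def callback(left, right, imu):
    # All three messages carry (nearly) the same stamp, so they can be fed
    # straight into stereo matching or visual-inertial fusion.
    rospy.loginfo("stereo+imu bundle at t=%.6f", left.header.stamp.to_sec())

rospy.init_node("ovc_sync_listener")
left_sub = message_filters.Subscriber("/ovc/left/image_raw", Image)
right_sub = message_filters.Subscriber("/ovc/right/image_raw", Image)
imu_sub = message_filters.Subscriber("/ovc/imu", Imu)

# 5 ms slop is generous for hardware-synchronized streams.
sync = message_filters.ApproximateTimeSynchronizer(
    [left_sub, right_sub, imu_sub], queue_size=10, slop=0.005)
sync.registerCallback(callback)
rospy.spin()
```

With hardware timestamping, the synchronizer mostly just confirms what the device already guarantees; with unsynchronized cameras you would instead be fighting timing jitter in software.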
Then there are a bunch of extra interfaces. If you want additional storage, you can use an SD card interface, and if you want to connect external peripherals, there's also an Ethernet interface, which I think is great for LiDARs, because a lot of LiDARs run over Ethernet.

We also designed it to be quite modular, so it can be expanded quite dramatically: you can add up to four additional stereo pairs, for up to 11 cameras in parallel. The version I have here, which we also have at the booth, has seven cameras in parallel, so it's a bit downsized for simplicity. There's also a GPIO header where you can add whatever peripheral you need, which we use, for example, for additional serial interfaces or for additional IMUs and sensors.

This is an example of a board you can build for your own custom needs. The idea is that the main module is quite complex, so you wouldn't really want to change it; but because we offer this GPIO you can connect to, and because it's an FPGA, everything can be reconfigured, you can build a much simpler board with whatever peripherals you need and just plug it in. In this case it has a VectorNav IMU, a bunch of serial interfaces, and a bunch of debugging ports, which makes customizing the project much, much easier. The same goes for the external cameras: again a much simpler board, compared to the main module, that you connect through FFC connectors, and you can have up to four of those.

That was the hardware point of view. On the software side, you can push algorithms to the FPGA, as I mentioned; we do corner detection there. And because there's also a quad-core ARM processor, a quite powerful 1.5 GHz quad-core running a full Ubuntu distribution, whatever software you have that can be compiled for embedded ARM can run on the processor itself, which again has quite a lot of power to it.

A bit of info on the architecture. This camera was designed to be ROS compatible. On connection, it runs Ethernet over USB and tries to communicate with the ROS master running on your machine. If you have a ROS master running, it's literally plug-and-play: you connect it and you get all your data streams. And because well-synchronized data is so important for robotics applications, we synchronize the IMU and the images together and stamp them with the system time. We also have fully integrated, fully reconfigurable corner detection, which is another great feature. And of course, we wouldn't be here if the whole thing weren't open source: everything about it is open source, in our repository called OVC, over there on GitHub.
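To give a feel for the kind of operation that corner detection block performs, here's the equivalent idea in software using OpenCV's FAST detector. This is purely illustrative; it is not the OVC's FPGA implementation, just the same concept run on the host.

```python
#!/usr/bin/env python
# Software illustration of corner detection (the OVC does this in the FPGA).
# Uses OpenCV's FAST detector; NOT the OVC's actual FPGA algorithm.
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # e.g. one monochrome frame
fast = cv2.FastFeatureDetector_create(threshold=25, nonmaxSuppression=True)
keypoints = fast.detect(img, None)

# Draw the detected corners, like the circles overlaid on the demo stream.
out = cv2.drawKeypoints(img, keypoints, None, color=(0, 255, 0))
cv2.imwrite("corners.png", out)
print("detected %d corners" % len(keypoints))
```

Doing this in the FPGA means the features arrive alongside the images with no extra load on the host, which is exactly the point of the offloading.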
Starting from the hardware files: they're designed with KiCad, which is a great piece of software, by the way (I'm sad the KiCad speaker couldn't make it). The firmware is done with the Xilinx tools, but with the free version, so there's no need to pay for any license, and we released that project too. The software itself is also included in the repository.

To make things simpler for people, we don't only provide the source, we also provide binary files: if you don't want to build the whole thing from scratch and go through the process of learning how the whole pipeline works, because it's a very complex system, we provide binary images that you can just upload to your module, and it just works. There are automatic update features too, which are also great.

This is an example of what the project looks like in KiCad. You also need some custom symbols, for which we have a separate open source repository. The firmware itself was designed in a very modular, block-design way, so you can integrate your own pieces of code by designing a block and integrating it with the rest of the block design; and again, as I mentioned just now, all the project files are available open source, and everything is done with the free version of the tools. And because at the end of the day what we're running is just Ubuntu on the OVC itself, you can literally SSH into it and treat it as your normal Ubuntu machine: you can run OpenCV, you can run ROS, you can run whatever piece of software you would run on your normal Ubuntu, and access it through the network interface, which is also very convenient.

That was the OVC3. We now have a second revision, which fixes a bunch of the issues and is currently under fabrication; the design files for it are already online. What we found during development is that many people, especially for the robotics and drone applications this project was mainly aimed at, don't need a very complex board with a lot of additional features; they just want, for example, a good IMU and access to a few debugging ports. So we designed a much smaller, much simpler board, we call it the fanny pack board, a front-mounted expansion board, which you can see here. In this case it has a VectorNav IMU and some debugging interfaces, and it also fixes some minor issues; here it is in another rendering. One of the lessons learned is that this is the much more likely use case: wanting to add a few extra features, not a huge set of new ones.

And then the most interesting part: we're looking into, and probably designing and releasing, the next version, the OVC4. It will be a similar concept, but even more modular: there won't be any sensors on the computing module itself, just a module with a lot of connectors, and then you connect whichever cameras or additional sensors and hardware you want. So it can be fully customized, basically.
The idea is not to restrict anyone to a specific sensor or hardware configuration. For that one, stay tuned in the coming months to see how it's going.

Okay, on to the demos. What you see here is an example of a stream from the camera with the corner detection happening in the FPGA: the circles are the corner features detected by the FPGA algorithm, which, as we've learned, are very useful for localization and mapping applications.

From here there will just be a bunch of videos, which show both the OVC and, in a sense, the ROS ecosystem. The simplest application you can do with a stereo camera is to get the depth of objects. This was our intern Brandon a few months ago, using a stereo disparity package, again a fully open source ROS-based package that is compatible out of the box with the camera. You plug in our camera, download the package, run it, and in about ten minutes you have disparity images and object depth, which is very convenient, because if you had to implement it yourself it would be a fair bit of work. (There's a rough sketch of what such a pipeline computes just after these demo descriptions.)

Mapping is also a very common application, if you want your robot to move around. In this example he used a library called RTAB-Map, moved the camera around, and built a map of our previous office in one-north. Again, because RTAB-Map is an open source package and our camera is compatible with it out of the box, it's very simple and fast to get this up and running.

Another interesting application was detecting objects. In this case we wanted to detect a water bottle, and there are point cloud processing libraries which do cylinder segmentation: they try to detect a cylinder and estimate the pose of the cylinder. Here he's moving the cylinder around, and again, the usual story: compatible out of the box.

And then the final demo, which is actually the most interesting one. We also had a project with a robotic arm, so we decided: why don't we put together the cylinder segmentation, which you can see happening in the top left where the camera is detecting the cylinder, into some sort of eye-arm system? The camera detects the cylinder and tells the robot its coordinates, and the robotic arm plans a path to pick up the cylinder, picks it up, and puts it somewhere else. This was a bit more custom development; but the cylinder detection is open source, and even the whole library used for the planning and motion control of the robotic arm is fully open source. So in a sense it's not only about the camera, it's about the whole ecosystem, and how it makes it easier to develop robotics applications by leveraging work done by other people instead of reinventing the wheel all the time.
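As promised above, here's a rough sketch of the kind of computation a stereo disparity pipeline performs, using OpenCV's semi-global block matcher on a rectified monochrome pair. This is an illustration of the concept, not the specific ROS package used in the demo, and the parameters are just plausible defaults.

```python
#!/usr/bin/env python
# Disparity-map sketch with OpenCV's SGBM on a rectified stereo pair.
# Illustrative only; the demo used an off-the-shelf ROS stereo package.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,      # must be a multiple of 16
    blockSize=9,
    P1=8 * 9 * 9,           # smoothness penalty for small disparity jumps
    P2=32 * 9 * 9,          # smoothness penalty for large disparity jumps
)
# SGBM returns fixed-point disparities scaled by 16.
disparity = stereo.compute(left, right).astype(float) / 16.0

# Depth then follows from: depth = focal_length * baseline / disparity,
# with focal length and baseline taken from the stereo calibration.
print("disparity range: %.1f .. %.1f px" % (disparity.min(), disparity.max()))
```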
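And for the arm side of that pick-and-place demo, this is roughly what commanding the planner looks like through moveit_commander, the Python interface to MoveIt. The group name and pose values below are made-up placeholders; in the real demo the target pose comes from the cylinder segmentation.

```python
#!/usr/bin/env python
# Sketch of the arm side: plan and execute a motion to a detected pose
# using moveit_commander. Group name and pose values are placeholders.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import Pose

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("pick_cylinder_sketch")

group = moveit_commander.MoveGroupCommander("manipulator")  # hypothetical group

target = Pose()
# In the demo, this pose would come from the cylinder segmentation node.
target.position.x, target.position.y, target.position.z = 0.4, 0.0, 0.25
target.orientation.w = 1.0

group.set_pose_target(target)
success = group.go(wait=True)   # plan and execute in one call
group.stop()
group.clear_pose_targets()
rospy.loginfo("reached target: %s", success)
```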
In this run he's trying something harder: keeping the bottle not in a naive pose, not straight up, but tilted. You should see the robot plan accordingly and try to pick up the tilted bottle it just detected. It's a very shy arm, it takes its own time to do things. Yeah, almost. Yeah, pretty close. So this is MoveIt, the ROS library, doing the planning. And some more.

Finally, we also looked at different algorithms: in ROS there are a lot of different algorithms implemented for stereo matching and disparity maps, so we can compare them. For example, you can see that the one on top does more global optimization, so it produces denser disparity maps, which is also great.

And of course, it wouldn't be a good conference talk without the usual slide at the end from every company saying we're hiring; and of course, we are hiring. If anyone is interested in doing cool robotics stuff, that's us. Morgan is not there. Aaron is there. Hi, Aaron.

That's all from my side. If you have any questions, feel free to ping me by email, or find me at the booth later on. Thanks. Thanks, guys.