OK, everybody, we're about to start the next presentation, so if you could all take your seats. I hand over to Christian and Simon. Hi, everyone. Welcome to our talk, Introduction to Eclipse iceoryx. This is Christian, I'm Simon. We are from Bosch. About two years ago we started the internal development of Eclipse iceoryx, and at the end of last year we open sourced it. Let's have a brief look at the agenda. Our talk is split into two parts. First, we talk about Eclipse iceoryx. The second part is the demo of the robot Larry, where you get the chance to see iceoryx in the wild. Larry is a spare-time project; it's not associated with Bosch. For Eclipse iceoryx, I'm going to briefly talk about the motivation behind it, then show you how a typical middleware works today, then discuss how Eclipse iceoryx does things differently, and finally go through some lessons learned. So first off, why does IPC matter? Well, this is why: autonomous driving is very complex, and with autonomous driving you're dealing with a data-driven system. Different sensors, like a lidar and a video camera, produce data and send this data to algorithms, and further on to the actuators, for example the steering. Typically on these systems you have an operating system, so each application runs in its own virtual address space, and to overcome this separation you need inter-process communication. The two main requirements when it comes to autonomous driving are that we are transmitting large data sets, up to 10 gigabytes per second, and additionally we need low latency, because we're dealing with a real-time system. So how does a typical middleware, like the one used today in ROS, work? I'll briefly walk you through an example.
We have a publisher here, a radar app, sending data to two subscribers, two algorithm apps. After the data package is written, a typical middleware copies it into internal buffers and then makes further copies, one for every subscriber. If you plot this, you get something like this: as the message size increases, the latency increases too. Now imagine you're sending a 4K video stream; you really have a problem. This is where Eclipse iceoryx comes in. I'll run the same example now with Eclipse iceoryx. First off, what's different? All the apps have to register with our daemon, called RouDi. After they have registered, the communication runs completely independently of RouDi. In the second part, the apps map a shared memory segment; this is the red box you can see here. Inside this shared memory we have so-called memory pools. You can think of them as wagons. At the beginning, the publisher requests ownership of such a wagon from the middleware. In the second step, the data package is written into this wagon. And in the third step, we deliver a pointer to the subscribers. This is what we call true zero-copy, because the data is written only once and never copied. Now let's press the fast-forward button a bit. Two more data packages have been produced, and the data package in one of the wagons has already been consumed. Here we have a mechanism using reference counting to automatically recycle the unused memory chunks. If you plot this again, you get something like this: the latency is now independent of the message size. You get virtually limitless data transmission at constant time. Now I'm going to dig a bit more into the details of iceoryx. It comes under an Apache 2.0 license. It's written in C++11; however, we're planning to move to C++14 soon. It runs both on Linux and QNX.
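The wagon recycling described above can be sketched in a few lines. This is a hypothetical, simplified model (all names are made up, and iceoryx's real pool is lock-free shared memory, not a plain array): each chunk carries an atomic reference count, the publisher loans a free chunk and writes the payload once, every subscriber that receives the pointer adds a reference, and the chunk returns to the pool when the last reference is released.

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <cstdint>

// Sketch of the "wagon" recycling: a fixed pool of chunks, each with an
// atomic reference count. refCount == 0 means the chunk (wagon) is free.
struct Chunk {
    std::atomic<int> refCount{0};
    std::array<std::uint8_t, 1024> payload{};
};

template <std::size_t N>
class ChunkPool {
public:
    // Publisher side: claim a free wagon (nullptr if the pool is exhausted).
    Chunk* loan() {
        for (auto& c : chunks_) {
            int expected = 0;
            if (c.refCount.compare_exchange_strong(expected, 1)) {
                return &c;
            }
        }
        return nullptr;  // misconfigured pool: no free wagon left
    }

    // Delivery: each subscriber that receives the pointer adds a reference.
    static void addReader(Chunk* c) { c->refCount.fetch_add(1); }

    // Release: when the count drops back to zero, the wagon is recyclable.
    static void release(Chunk* c) { c->refCount.fetch_sub(1); }

private:
    std::array<Chunk, N> chunks_{};
};
```

The data itself is never copied; only the pointer (and the reference count) moves between publisher and subscribers.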
To establish this communication over shared memory, we're using state-of-the-art lock-free algorithms. We would advise anyone not to use this API directly, but through a higher-level API. For example, we offer higher-level bindings, and Continental integrated iceoryx into their eCAL middleware to make it faster. It can also be used as an implementation of the Adaptive AUTOSAR communication API, which is widely used in the automotive industry. Next I'm going to talk a bit about the lessons learned. Up front: we strive for the highest automotive safety standard. Just to be clear, we are not there yet; things are a work in progress. But one of the main lessons we learned over the years is that determinism is key. This is why we're not using the heap, but static memory pools. If you configure your memory correctly up front, you will never run into out-of-memory errors. Also, we are only using a subset of the STL, the C++ standard template library, so you don't allow undefined behavior, and you also get no exceptions. Another very important lesson is that lock-free programming is really hard. It took us about two years to get our main safely overflowing FIFO right. If we asked ourselves what we would do differently if we started from scratch, we would definitely start with the basic building blocks, because nowadays many components are still not optimized and not feature complete. We also believe that transparency builds trust, and this is especially true in the safety field. This is why we want to continue the development together with the community, in the open. Briefly, some words about the upcoming features. We are planning n-to-one communication. This is necessary, for example, for the ROS 2 logger, because many different applications, many different publishers, send data to one topic.
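The "no heap, static memory pools" rule above can be illustrated with a minimal allocator sketch (not iceoryx code; names and the index-based interface are invented for the example, kept C++11-friendly): all storage is reserved statically, so acquiring a slot either succeeds in bounded time or reports failure explicitly, without heap allocation and without exceptions.

```cpp
#include <array>
#include <cstddef>

// Sketch of a deterministic, heap-free pool: storage is a static array,
// acquire() either hands out a slot index or returns -1. There is no
// hidden allocation path and no exception, matching the lesson that you
// configure your memory up front and handle exhaustion explicitly.
template <typename T, std::size_t Capacity>
class StaticPool {
public:
    // Returns a slot index, or -1 when the pool is exhausted.
    std::ptrdiff_t acquire() {
        for (std::size_t i = 0; i < Capacity; ++i) {
            if (!used_[i]) {
                used_[i] = true;
                return static_cast<std::ptrdiff_t>(i);
            }
        }
        return -1;  // no heap fallback, no throw: the caller must react
    }

    void release(std::ptrdiff_t i) { used_[static_cast<std::size_t>(i)] = false; }

    T& operator[](std::ptrdiff_t i) { return storage_[static_cast<std::size_t>(i)]; }

private:
    std::array<T, Capacity> storage_{};
    std::array<bool, Capacity> used_{};
};
```

If the pool is sized correctly for the worst case, acquire() can never fail at runtime, which is exactly the determinism argument made above.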
We're also planning request-response communication and various language bindings; a binding is already available, and you can check it out via the link at the end. Also, we're partnering with Eclipse Cyclone DDS. This is a middleware, like Jose showed earlier, currently used inside ROS 2. And for full ROS 2 compatibility, we are also planning Windows and macOS support. If you have further questions, you can write to our mailing list or just check out the code. And without further ado, I hand over to Christian. Thank you, Simon. Okay, whenever I start a presentation like this one, I try to figure out what the main message is that I want to transmit to the audience, and I came up with this: is it powered by iceoryx? The reason is, if you look at a typical middleware, it does a lot of copies. You have a sender which produces data and wants to transmit this data to a multitude of receivers, and there are a lot of copies, and more. Now think of a 4K camera which runs at 60 frames per second. You have a lot of data here, and then performance really becomes a bottleneck. But in iceoryx we do it a little bit differently. You could say you're not sending the data, you just point: there's the data, take a look at it. And this is much faster. The second thing is more personal. When I started working on this project, I thought it would be pretty cool to have something which will soon be running in a car also working in a robot, for instance, as a spare-time project. And that's how the Larry project started. So, who is Larry? This is Larry. Larry is our blue friend here, a robot which has a camera up front that can be rotated vertically and horizontally. It has an ultrasonic sensor. It also has a tracking sensor in front, with which you can win robot races, for instance, but it's currently broken. Then we have some LEDs and a beeper on it.
In the future, I plan to put a microphone array on it and some speakers, so that I have speech recognition, voice control, and so on, and maybe a 3D stereo camera, so you can put on VR goggles, become Larry, and drive through a room. I think this is pretty crazy cool. And everything is, again, under the Apache 2.0 license. Now the question comes: can you build your own Larry? Yes, you can. You can buy the whole kit; I think it's 130 euros. The link is in the GitLab repository. It's not from Bosch, it's from a robotics company. Where everything comes together is the Larry Robotics repository, where you can find tutorials on how to set up your Raspberry Pi with an Arch Linux image (we decided on Arch Linux, not Raspbian), and how to get your Larry services running. The Larry services are the applications running on Larry which control the ultrasonic sensor, the camera, the driving, and so on. Then you maybe want to remote-control your Larry, and for that we have a Larry user interface, called Larry UI. There you can log in to Larry from your laptop and control him, make him blink, and so on. And we have some basic applications, like an emergency brake. The other thing is one of my other projects, a spare-time 3D engine. You should never build your own, but it's fun, so I built my own. If you want to help, you can also get involved here. So now let's take a deep look into iceoryx and how it really works. I think the best way to understand how iceoryx works is by example; like with a new language, you have to speak it. Here we have Larry: he's inside a labyrinth, and we have to help Larry find his way out. It's a very easy labyrinth, and these rules apply only to this labyrinth, but for the sake of argument, let's stick to them. The first rule is: if there's no obstacle ahead, drive forward.
The second rule: if there is an obstacle ahead, turn right. With this algorithm, he can find his way out of this very simplistic labyrinth. Now let's take a look at how the code would look in C++. Before we start with that, let's see how the code is organized. I mentioned how we interact with sensors, algorithms, and actuators. In this case, we have an ultrasonic sensor which detects the obstacles, so we can tell we are in front of a wall. Then we need an algorithm; in this case, it consists of the two rules to get out of the labyrinth. And then we have our actuator. This is another application: it just drives, receiving commands like drive forward, turn right, and so on. Now let's take a look at the algorithm. In C++, this is the whole algorithm; you do not need anything more. With this, you have a very basic application with which Larry can find his way out of a labyrinth. But now let's go through the code step by step. The first thing is, as Simon mentioned, we have to register with RouDi. This is the first line: we just say, hello RouDi, here I am, my name is Explorer. The second thing is that we are interested in the data from the ultrasonic sensor, and therefore we have to subscribe to it. Every sensor and every sender has a unique ID, and in this case it's "Larry, ultrasonic sensor, internal". Once we've done this, we create a driver class. It's a very simplistic class provided with the Larry services, which performs actions for us like driving forward, turning right, turning left, stopping, and so on. And we have to define when we are in front of a wall, the minimum distance: in this case, 0.3 meters. Now we come to the event loop. This event loop runs as long as Larry is stuck in the labyrinth. First, we have to get our sensor data from the ultrasonic sensor; we do that in line six. Now let's put on our x-ray goggles and take a look at what really happens inside. Here we are asking: hey, is there some kind of package we are interested in?
Then RouDi says: yes, there's data, look over there. We do not do any copy; we just take a look over there and say, oh, OK, there's some sensor data. If, for instance, the measured distance is smaller than the minimum distance, we have to turn right. Or we do the other thing: either there is no sensor data, because the ultrasonic sensor did not detect anything, or the sensor detects something but it's further away than our threshold for turning right, and therefore we drive forward. If we take a look at driving forward, and put on our x-ray goggles again, we're just asking RouDi: hey, I want to send a data package, give me some memory. RouDi gives us some memory, we write our message into it, and then we just say: RouDi, this is the stuff I want to publish, and hand it to RouDi. Then the application which is interested in it, the drive application, tells RouDi: hey, I'm interested in the driving information, is there something new? And RouDi again tells it: yes, there is something new, just take a look over there. No copy involved. Then it interprets the data, and this means, for instance, driving forward. Now let's take a look at the high-level architecture. It's very simplistic. You have Larry here, our blue friend, and he essentially runs on a Raspberry Pi 4, which does the basics: it collects sensor data and does some preprocessing. For instance, with an ultrasonic sensor, you send a ping out and receive the echo some microseconds later, and from that you calculate your distance. This is done on the Raspberry Pi and not transmitted to the high-performance computer here. Also some low-level control, like stopping Larry when he is driving into a wall, is done locally on the Raspberry Pi. But the high-performance computer should perform the high-level strategy.
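The explorer logic walked through above can be distilled into a single decision function. This is a hypothetical reduction, not the actual Larry services code: the real application registers with RouDi and subscribes to the ultrasonic topic, while here only the two labyrinth rules are modeled so they run stand-alone; the command strings and the function name are invented, and only the 0.3-meter threshold comes from the talk.

```cpp
#include <string>

// The two labyrinth rules as a pure function. hasSample is false when the
// ultrasonic sensor received no echo (nothing detected at all).
constexpr double MIN_WALL_DISTANCE = 0.3;  // meters, the threshold from the talk

std::string decide(bool hasSample, double distanceToObstacle) {
    if (hasSample && distanceToObstacle < MIN_WALL_DISTANCE) {
        return "turn_right";    // rule 2: obstacle ahead, so turn right
    }
    return "drive_forward";     // rule 1: path is clear, so drive forward
}
```

In the real event loop, this decision runs on every received sample, and the resulting command is published (again zero-copy) to the drive application.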
Think of a swarm of Larrys, or a swarm of TurtleBots, like in the previous presentation. You could have a strategy like this: Simon is hiding and you want to find Simon. Then all the Larrys swarm out, run some object detection, and try to find Simon, and if they find him, they scream or blink or whatever. Here are some ideas I came up with that we could realize together. If you like them or want to get involved, just write me an email. The first thing I spoke of was a microphone array with speech recognition and voice output. I'm also programming a virtual Larry environment where you can train him with neural networks. Or a stereo camera with which you can do 3D reconstruction: for instance, Larry is now a drone and flies through a valley, and then you can map the valley, put on your VR goggles, and take a look at it. Or object detection, which is needed to find Simon when you have multiple Larrys in the swarm. Also, I think in China they use drones in agriculture, and maybe this is also an idea we can follow up on. Or win a robot race: we have a tracking sensor for robot races, so why not use it? Now a question could come up: if you're familiar with ROS, will iceoryx become the next ROS? And here the answer is definitely not. As you have seen in the previous presentation, ROS has some awesome tools and we do not need to reinvent the wheel, but we want to help the ROS 2 community become even faster. Here we have the problem with the copies and the performance bottleneck, and another problem: let's say you are a developer developing your cool robot, and you want to drive it through a warehouse, and you have to fulfill some safety restrictions; you have to stop when a person approaches, and so on. Normally, in the old days, you had to rewrite everything from scratch.
You have proven that it works, but then you have to redo all the stuff. Here the idea is that you do not have to redo it all: we are becoming ASIL compliant, and with this we can help you fulfill your security and safety requirements. So, to come back to the opening question: is it powered by iceoryx? If you have some kind of rover, another ROS 2 project, or your own spare-time project, where you have connected a multitude of sensors and you have a real problem with performance, just consider switching to iceoryx. We are pretty fast, and we strive to be the fastest in the open source world; at the moment we are at a pretty good pace to realize our dream. So, are there any questions, or would you like to see a demonstration of Larry? One question. Microphone? Hang on a second. No worries, there you go. So if you go back to slide 22, it looked like Larry has to poll a shared memory region to get notified that there's data available. Is that right? Slide 22... ah, there we go. Yeah, so Larry has to inquire of the shared memory whether there's a packet ready for him, right? Like the receiver has to poll, basically. Okay, let's dig deep into the details. We have semaphores, for instance, to do a push notification. When someone, an ultrasonic sensor for instance, is writing new data, you can either try-receive, where you just ask whether there is any data or not, or you can do a blocking receive, where you wait until you receive data. How is that implemented? With semaphores. Okay. Are there any further questions? You can prepare the demo in the meantime, maybe. Could you pass the microphone? Can you explain again what the relationship is between Cyclone and iceoryx? Is iceoryx a DDS implementation?
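The two receive modes from that answer can be sketched stand-alone. This is not iceoryx code: the answer says the wake-up is implemented with semaphores, while this simplified, single-queue version (with invented names) uses a mutex and condition variable to show the same contract, a non-blocking try-receive next to a blocking receive that waits for a push notification from the sender.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// Minimal mailbox showing try-receive vs. blocking receive.
template <typename T>
class Mailbox {
public:
    void send(const T& value) {
        {
            std::lock_guard<std::mutex> lock(mtx_);
            queue_.push(value);
        }
        cv_.notify_one();  // push notification to a blocked receiver
    }

    // Try-receive: returns immediately; true only if data was available.
    bool tryReceive(T& out) {
        std::lock_guard<std::mutex> lock(mtx_);
        if (queue_.empty()) return false;
        out = queue_.front();
        queue_.pop();
        return true;
    }

    // Blocking receive: waits until the sender delivers something.
    T receive() {
        std::unique_lock<std::mutex> lock(mtx_);
        cv_.wait(lock, [this] { return !queue_.empty(); });
        T out = queue_.front();
        queue_.pop();
        return out;
    }

private:
    std::mutex mtx_;
    std::condition_variable cv_;
    std::queue<T> queue_;
};
```

With a semaphore instead of the condition variable, send() would post the semaphore and receive() would wait on it, which is closer to what the answer describes.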
Basically, no, iceoryx is not a DDS implementation, but we're planning a partnership. We've just had the very first thoughts about how to combine the two and came up with a basic architecture, but that's currently under discussion. So iceoryx is definitely not DDS compliant; it's not a DDS middleware. Next question: do you support zero-copy, or almost zero-copy, communication between computers, between different computational units, like with RDMA? Okay, we support zero-copy only within one ECU, one computer. If you have communication between a multitude of computers connected, for instance, via TCP/IP, I think there are some concepts for doing a kind of zero-copy, but at the moment we don't support it. If such concepts turn out to be feasible, then we'll really try to do it. Okay, the problem is the demo isn't starting, so we'll have to postpone that, sorry. If you want to see it live, you can talk to us afterwards, just here. A further question: how does this mechanism work exactly to do this atomic communication, how do you seal the data? Basically, you should have a look at the SoFi implementation, our safely overflowing FIFO. The data cannot be changed; you can configure it so that it can, but that makes no sense in our case. We do this with ACLs, for instance: we have a shared memory separation with a multitude of segments, and for the sender port there is a segment where only it can write data, while the receiver port can map this memory read-only. So we have separation of concerns and can meet the safety standard in this case. And you push it off to the receiver, so the receivers have a guarantee that the writer won't change the memory.
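The access separation described in that answer can be illustrated with plain POSIX shared memory (this is a stand-alone sketch, not iceoryx code; the segment name and both function names are invented, and it is Linux/POSIX specific): the sender maps the segment writable, the receiver maps the same segment read-only, so the operating system itself enforces that a receiver cannot modify the data.

```cpp
#include <cstddef>
#include <cstring>
#include <fcntl.h>
#include <string>
#include <sys/mman.h>
#include <unistd.h>

const char* kShmName = "/demo_segment";  // invented name for the example
const std::size_t kShmSize = 4096;

// Sender port: creates the segment and maps it writable.
int writeMessage(const char* msg) {
    int fd = shm_open(kShmName, O_CREAT | O_RDWR, 0600);
    if (fd < 0) return -1;
    if (ftruncate(fd, kShmSize) != 0) { close(fd); return -1; }
    void* w = mmap(nullptr, kShmSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (w == MAP_FAILED) { close(fd); return -1; }
    std::strcpy(static_cast<char*>(w), msg);  // the single write of the data
    munmap(w, kShmSize);
    close(fd);
    return 0;
}

// Receiver port: maps the same segment read-only; a write here would fault.
std::string readMessage() {
    int fd = shm_open(kShmName, O_RDONLY, 0600);
    if (fd < 0) return "";
    void* r = mmap(nullptr, kShmSize, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);
    if (r == MAP_FAILED) return "";
    std::string out(static_cast<const char*>(r));
    munmap(r, kShmSize);
    return out;
}
```

The read-only mapping is what gives the receiver-side guarantee mentioned above: the protection is enforced by the MMU, not by convention.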
Yeah, exactly, but the problem is we have to support QNX, and QNX doesn't support this. So we want to be POSIX compliant, or at least we're trying. Thanks. Next question: do you support overwriting data in case a consumer is not present? Imagine a producer needing to overwrite data repeatedly, and then the consumer connecting and consuming whatever data is available. Okay, your question is whether we support overwriting, meaning you are only interested in the newest data and not the old data from five years ago. And yes, we do this: it's our safely overflowing FIFO, which took a lot of time to implement. Next question: you mentioned that you are aiming at a safety quality which will satisfy ASIL D, the highest safety level in ISO 26262. My question is: are you going to provide some kind of safety argument about your code, about the implementation, so that users of your library could just plug it in and rely on what you have written? Yeah, this is currently under discussion. We really don't know yet how these things will work out, but if you are interested, you can talk to us afterwards. I think we're also newcomers here: we are, I think, one of the first projects going open source and developing ASIL software in the open. We have processes, but we have to gain experience in how to handle this. For instance, you are a developer, you have a crazy cool feature, you want to have it, so you just make a pull request and say, yeah, it's cool. That doesn't work for software in a car; you have to go through the whole process to make it ASIL compliant, and this involves a lot of reviews, requirements writing, and so on. We have to figure out how to realize this. So if you have suggestions, or a similar project that was planning something like this, come talk to us. I'll be wandering around here; just talk to me.
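The overwriting behavior from that answer can be sketched as a fixed-capacity FIFO whose push never fails: when the queue is full, the oldest entry is dropped, so a late consumer always sees the most recent data. This is a deliberately non-concurrent illustration with invented names; iceoryx's real safely overflowing FIFO (SoFi) is lock-free, and that thread safety, the hard part mentioned in the talk, is omitted here to keep the idea visible.

```cpp
#include <array>
#include <cstddef>

// Single-threaded sketch of an overflowing FIFO: push always succeeds,
// discarding the oldest element when capacity is reached.
template <typename T, std::size_t Capacity>
class OverflowingFifo {
public:
    void push(const T& value) {
        if (size_ == Capacity) {
            head_ = (head_ + 1) % Capacity;  // overwrite: drop the oldest
            --size_;
        }
        buf_[(head_ + size_) % Capacity] = value;
        ++size_;
    }

    // Pop the oldest surviving element; false when empty.
    bool pop(T& out) {
        if (size_ == 0) return false;
        out = buf_[head_];
        head_ = (head_ + 1) % Capacity;
        --size_;
        return true;
    }

    std::size_t size() const { return size_; }

private:
    std::array<T, Capacity> buf_{};
    std::size_t head_ = 0;
    std::size_t size_ = 0;
};
```

The key property is that the producer is never blocked and never fails: it is always the oldest data, not the newest, that is sacrificed.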