Hello everyone, my name is Piotr Wieric. Thank you for coming. Today I'm going to talk about the synergy between Blender, a very versatile piece of 3D software, and real hardware: robotics, in my case, especially in the context of 3D scanning.

A few words about me. I've been using Blender for about 10 years now, so quite long; I started with 2.4-something, and I still remember the pink outline in Blender. I've worked a little in game dev and in e-commerce, and I've prepared assets for VFX, so I've worked across many industries. More recently I've moved toward programming and technical art, and I currently run my own company, Involved. On the internet you may know me as MacGavish.

As we've seen throughout the conference, Blender is very versatile: it has been used in many incredible ways for many different purposes. But can we actually use it for robotics? Can we use it as a viewer to control something in the real world? The answer is of course yes, because it's Blender; it can be used wherever and however you want.

Before we dive into the details, let me tell you why I even started building this hardware. Everything started around 2018, when I joined Forte Digital. I got interested in 3D scanning, which is a very broad term, and specifically in photogrammetry, which is one small part of 3D scanning. Photogrammetry is a very versatile and flexible technique, but alongside its many advantages it has some drawbacks.

So what is photogrammetry? As you may know, it's a technique where you take many photos of a given object from many angles and then, using photogrammetry software like RealityCapture or Metashape, combine them into a 3D model. You can also capture very sharp textures, which is not common with industrial-grade scanners such as structured-light or laser scanners, where textures are mostly very poor. Photogrammetry produces very nice textures, so for the creative industries it's a great tool.

As you can see, it takes a lot of camera positions to properly reconstruct even simple objects, especially if you want sharp textures. That's one of the problems: you need many, many images, and it takes time to capture them, to avoid errors, and to keep the proper distance to the object. So it's a much more labor-intensive method compared to other scanners, like structured light.

To sum up the features of photogrammetry: it's very flexible, because with the proper hardware you can scan very small figurines, but you can also scan huge things like buildings or construction sites, and that's a huge advantage. It's much cheaper, because nowadays almost everyone has a phone with a camera, so it's no problem to take some pictures of an object and process them. And the resulting textures are very high resolution, which is another huge advantage, especially for the creative industries, where textures matter. But there are drawbacks. For example, it fails on featureless surfaces, because it's a passive method, unlike structured light, which is an active method; it relies on the features, on the texture of the object.
If there are no features and no textures, there will probably be no mesh in the end. That's a problem. And as I said, it requires many images, which makes it tedious and monotonous; I really hate the method because of that. So what can we do about it? We can automate it with hardware, to make the work less boring, especially for small objects, because that's what I focus on. I'm not going to scan buildings, more like objects up to, let's say, one meter by one meter by one meter at most.

Of course, we could use an automatic turntable with a tripod and a camera on top. That's very cheap, simple to use, and in most cases it works fine. But as always, there are drawbacks. If the object is relatively symmetrical, without very weird shapes or many occlusions, it will probably work fine. But in more complex scenarios, say when Suzanne is lying rotated 90 degrees, this part here might be in focus while the rest, like this eye, is out of focus. When we rotate 90 degrees, one side comes into focus and the other goes out. That's a huge problem. We could improve it by moving the tripod with the camera, but that's not very effective because it's manual work, and we want to avoid manual work entirely.

The very simple approach is an automatic turntable driven by an Arduino. I personally really dislike off-the-shelf products; I want to design everything myself, try different hardware and automation possibilities, and write software where I know exactly what it does. In this simple setup there is a relay module for the camera trigger, so we can fire the camera in sync with the rotations of the turntable; simple stepper motors, like the ones in 3D printers; and an Arduino, for example an Uno with a CNC shield. Very cheap and very simple (I'll show a small sketch of this loop in a moment).

Here is another example of the focus problem. Assume this shoe is lying on the turntable, perfectly in focus. When it rotates 90 degrees, this part is still in focus, but this part is out of focus. That's the main problem with a turntable. In a perfect world the camera would follow the shape of the geometry, but that's often not possible, not trivial, and definitely much harder with a simple tripod. So how can we automate this and make the process better? We build a robotic arm. That might be over-complicating, maybe over-engineering the problem, but why not? Let's see how to approach it.

The general plan was quite simple. I decided to keep things very simple at the beginning, which is why I used an Arduino: it's very cheap, and there's no problem finding tutorials on the internet on how to program it properly. The whole hardware had to be easily modifiable, so I went with laser-cut plywood for the main construction. Another main criterion was the ability to trigger the camera automatically, the same approach as before.
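As a concrete illustration of that turntable loop, here is a minimal sketch, assuming the CNC shield runs GRBL, the turntable stepper is mapped to the X axis, and the relay is wired to the spindle-enable pin; the port name and timings are assumptions too, not necessarily how my build was wired:

```python
import time
import serial  # pyserial

ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=2)  # port and baud are assumptions
time.sleep(2)  # GRBL resets when the port opens; give it a moment

def send(line):
    """Send one G-code line and wait for GRBL's "ok"/"error" reply."""
    ser.write((line + "\n").encode())
    return ser.readline().decode().strip()

STEPS = 36                 # one photo every 10 degrees of the turntable
send("G91")                # relative positioning
for _ in range(STEPS):
    send("M3 S1000")       # energize the relay wired to the camera trigger
    time.sleep(0.5)        # hold the shutter signal briefly
    send("M5")             # release the relay
    send("G1 X10 F200")    # advance the table; X is mapped to the rotation axis
    time.sleep(2.0)        # let the move finish and vibrations settle
```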
I'm using the same simple relay module. The arm must also be strong enough, because the camera mounted on the robot is often a hundred times more expensive than the robot itself. It must be lightweight and small enough, and easily controllable from software, because I can't imagine controlling a complex machine by typing coordinates by hand into some program. We need a visual representation of the robot in reality.

So what can we use for that? Having used Blender for many years, I instantly thought of it, and Blender has many great features. There is the Python API, which is awesome; as you know, it makes Blender an incredibly powerful tool. There is inverse kinematics, so we can assume that someone much smarter than us at math implemented it correctly and use those values directly in our software. That avoids many of the bugs I would have introduced programming the hardware myself, and it reduces the time needed. There are many helpful constraints, like Track To and Limit Distance. We can create a simple UI; it's multi-platform, so it runs on Linux and Windows; and it's easy to distribute.

For the robotic arm we use PySerial, which is quite helpful for talking directly to the Arduino, NumPy for computation, and threading for asynchronous work. Another strength of Blender is that it follows Python updates, so we almost always have the most modern Python included. And as I said, the integration and automation possibilities are massive.

This is the CAD model converted into Blender, where we can see how the inverse kinematics works. I put some bones into the model to rig it properly; then, using Python, I track how those bones rotate relative to their parents and transfer that data directly to the motherboard of the hardware (I'll sketch this below).

We have many useful constraints, as I said. The Track To constraint, for example, constantly tracks our target position, whether it's a focus point or something else. And using Limit Location, Limit Distance, and angular limits on the bones, we can add safety measures to make sure the robot won't wreck the environment around it.

This is how the whole setup looks in Blender. It's quite simple: the model is converted directly from the CAD into a mesh, I added the bones, rigged it, and added the turntable. Everything is controlled with a single empty, which makes it very convenient to implement the inverse kinematics. And this is how it looks in reality; here you can see the preview from Blender and how it works. There is only a very small lag between the input and the actual movement. The movements are coordinated: if one joint has a longer angular distance to travel than another, it simply moves faster, so all the stepper motors stop at the same time. There are also endstops for calibration, because I wanted to avoid encoders for tracking the rotations; I wanted to keep it as simple as possible. And it's made out of plywood.
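Here is a minimal sketch of that bone-tracking idea: reading each pose bone's rotation relative to its parent after the IK solve and streaming it over PySerial. The armature name, bone names, port, rotation axis, and the MOVE command format are all hypothetical, just for illustration:

```python
import math
import bpy
import serial  # pyserial

ser = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)  # port is an assumption
arm = bpy.data.objects["RobotArmature"]                  # hypothetical object name

JOINTS = ("base", "shoulder", "elbow", "wrist")          # hypothetical bone names

def joint_angles():
    """Read each pose bone's rotation relative to its parent after IK solves."""
    angles = []
    for name in JOINTS:
        pb = arm.pose.bones[name]
        # pose-space matrices reflect the IK result; rotation_euler would not
        mat = pb.parent.matrix.inverted() @ pb.matrix if pb.parent else pb.matrix
        angles.append(math.degrees(mat.to_euler().x))    # assume each joint spins about local X
    return angles

def send_pose():
    cmd = "MOVE " + " ".join(f"{a:.2f}" for a in joint_angles()) + "\n"
    ser.write(cmd.encode())
    return ser.readline().decode().strip()               # wait for the board's ack
```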
The plywood made it very cheap and very easily modifiable, because you can just cut it, glue it, whatever you want. The architecture was quite simple. I implemented everything directly in Blender, so Blender was the main controlling software; everything happened in Blender. It communicated directly with the motherboard, in this case the Arduino, over a bi-directional link, and the Arduino drove the stepper motors over a unidirectional one, because there was no feedback from the robot itself: no encoders or anything more advanced.

Let's focus on this specific part. Here is what the communication looks like for, say, the move command. The user moves the robot in Blender; a thread in Blender tracks the movement; if the position differs from the previous one, Blender sends the command to the motherboard; once the movement is done, the motherboard sends back a response; Blender gets unlocked, reports back to the user, and so on. Very simple. Instead of threading, we could also use a modal operator to track these asynchronous operations (I'll sketch that below).

Let's get back to the scans, because it's all about scans here. I wanted to scan something more interesting than another rock or another brick, so I asked the Polish sculptor Tomasz Diewicz whether I could scan some of his sculptures. All of them are handmade, so there is no digital copy, which also opens up opportunities to 3D print smaller or bigger versions. I also wanted to push photogrammetry to its limits; for example, those sculptures were quite small, around 20 centimeters high, so I had to improvise with techniques like focus stacking. Here is another example, and here is the 3D-printed version. The result is basically good enough: almost all the details were reconstructed, and still for a very low price compared to professional scanners, because a structured-light or laser scanner capable of this level of detail would cost maybe 30,000 euros.

Of course this architecture has problems, since it was just a simple proof of concept, and there are things to improve. The main problem is that Blender communicates directly with the motherboard. That's bad, because if Blender fails or crashes, we lose all the state of the robot and can no longer track it. And if we run heavy computations directly in Blender, like computing proper camera positions, we can crash it too, or starve its own threads, which also causes an instant crash. On top of that, the robot itself was a bit too small and too weak: it couldn't handle objects a meter tall or heavier than, say, 10 kilograms. So we needed to improve all of that.

To do so, I founded a company called Involved, together with my friend Zmuda, who is an electrical engineer. With that team we could build much more advanced hardware, because the whole task was no longer on me alone. I don't have a degree in electrical engineering; he specializes in that, so he helped a lot with the problem solving.
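As promised, here is a minimal sketch of the modal-operator alternative: a timer polls the IK target and pushes a move command only when the position has actually changed. The "Target" empty and the send_pose() helper (from the earlier sketch) are assumptions:

```python
import bpy

class ROBOT_OT_sync(bpy.types.Operator):
    """Poll the IK target on a timer and push changed positions to the robot."""
    bl_idname = "robot.sync"
    bl_label = "Sync Robot"

    _timer = None
    _last = None

    def modal(self, context, event):
        if event.type == 'ESC':
            context.window_manager.event_timer_remove(self._timer)
            return {'CANCELLED'}
        if event.type == 'TIMER':
            target = context.scene.objects["Target"].location.copy()  # hypothetical empty
            if self._last is None or (target - self._last).length > 1e-4:
                self._last = target
                send_pose()  # e.g. the serial helper sketched earlier
        return {'PASS_THROUGH'}

    def execute(self, context):
        wm = context.window_manager
        self._timer = wm.event_timer_add(0.05, window=context.window)  # poll at ~20 Hz
        wm.modal_handler_add(self)
        return {'RUNNING_MODAL'}
```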
And this is the new version of the robot; it looks like this. As you can see, it's much bigger. This is of course a time-lapse, so it's not that fast, but in a time-lapse it looks cooler. It can scan much bigger objects (this elephant is quite small), and the hardware is much stronger: it can hold around 20 kilograms without a problem. We can use many lights, so we can implement, for example, photometric stereo for 3D objects, not only flat ones. Of course there are drawbacks: it's bigger and much more complicated to transport, so there are costs involved.

The improved architecture is a bit more complicated now, because I moved all the computation outside of Blender. Blender is now just a simple UI, and all computation happens in the backend. What is the backend? Simply a separate piece of software, also written in Python, that runs completely disconnected from Blender. If Blender crashes, nothing happens to the backend; it keeps running correctly, and we just restart Blender, connect again, and everything is fine. We also designed a custom protocol, which lets us encode all the messages to make communication safer. And we built a custom PCB, so we're no longer using the Arduino; this PCB is built exactly for our needs.

Let's look closer at the architecture components. As I said, Blender is mainly a UI now; all computation lives outside it. We just use threads or modal operators to track changes in Blender and send them through a socket to the backend (a small sketch of that link follows below). If Blender crashes, the backend still works correctly, and we can now run much more advanced computation, because we can utilize many threads or even CUDA GPU processing. Many benefits. And we can still use all the goodness of Blender: bones and constraints, geometry nodes, and so on.

The backend now computes all the robot paths, so that's no longer on the Blender side. It handles the heavy computation, manages the communication in a much better way, and includes many security and safety improvements; for example, it can detect that something is going wrong and inform the user or even block the machine. It supervises all the modules, so we constantly monitor everything and check that it's running correctly. The communication pattern is a bit more complicated, since Blender no longer talks directly to the motherboard; the backend sits in between as a mediator. But the whole idea is quite similar; there is just one more communication step and one more module to handle properly.

The custom motherboard is perfectly tailored to our needs. It can drive 16 stepper motors at once, all of them coordinated, with smooth movements. It's much better and much more safety-proof than the Arduino, much more professional, and still fully compatible with Blender, because Blender just uses a custom Python library to communicate with anything that has, for example, a USB port.
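A minimal sketch of that Blender-to-backend link, assuming a simple newline-delimited JSON protocol over a local TCP socket; our actual custom protocol is different, this just shows the shape of the mediator pattern:

```python
import json
import socket

class BackendLink:
    """Blender-side client; the backend is a separate process that survives
    a Blender crash and can be reconnected to at any time."""

    def __init__(self, host="127.0.0.1", port=5555):  # port is an assumption
        self.sock = socket.create_connection((host, port), timeout=1.0)
        self.stream = self.sock.makefile("rw")

    def request(self, cmd, **params):
        """One newline-delimited JSON message per request, one per reply."""
        self.stream.write(json.dumps({"cmd": cmd, **params}) + "\n")
        self.stream.flush()
        return json.loads(self.stream.readline())

# usage: link = BackendLink(); link.request("move", target=[0.4, 0.1, 0.3])
```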
Let's dive into the new possibilities, what you can now implement in Blender. A very nice example is the real-time depth preview. The robot uses an Intel RealSense depth camera, so we can stream the depth from the camera directly into Blender and visualize it using OpenGL via Blender's gpu module. It's quite nice to see what the robot sees right now, and it can also be used for safety: we can find the minimum distance to the closest point in the depth map and, for example, stop the robot or show a message to the user. Previously it wasn't possible to use the RealSense inside Blender at all, because Blender crashed immediately once the camera started streaming. I have no idea why; I posted a GitHub ticket about it, but it couldn't be solved. By moving that computation outside of Blender, it works again, and there is no problem streaming the data. It is of course decimated, downscaled, so we're not streaming every pixel into Blender, which would be too slow; we can decimate it by a factor of 10, for example, and stream it over sockets as before.

The other example, which is the core of the whole automation, is the prescan. Using the RealSense, we capture simplified geometry before we start scanning, so the robot knows the shape of the object. If the object is fairly symmetrical and cylindrical, it's no problem to scan with a simple approach; but if it's shaped like this and rotated like this, it's much harder. The prescan is what lets us distribute the cameras properly. Here the prescan is happening; you can see the turntable rotating. (I didn't implement rotation feedback from the motherboard to Blender, so we can't see it rotating in real time here.) Then, using the Open3D library, we put all those partial scans together and import the result back into Blender (a sketch of that merge step follows below). It's a very simplified shape, but for the robot it's totally good enough to distribute the cameras properly.

You can also distribute the cameras on a specific shape, for example an icosphere, say because you want to experiment with machine learning and need perfectly distributed cameras; we can use other geometries too, even distribute cameras based on a Suzanne. There are many possibilities; it all depends on what exactly you need. Then there are view-based positions, which is quite similar to what we do when scanning by hand: using the viewport in Blender, we can spawn cameras directly with a shortcut, so it's very fast to add custom robot positions. If we're scanning something flat, we can use a flat plane to distribute them automatically. And if we see that the automatic algorithm fails to cover some angles, we can add custom positions to improve the scan. Finally, the scanning sequence itself is implemented with animation keyframes: once we have the robot positions, the algorithm iterates through all the cameras, transforms them into the proper space, and sets keyframes for the robot.
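Going back to the prescan merge mentioned above, here is a minimal Open3D sketch, assuming one point cloud per turntable stop and known commanded angles; function and variable names are hypothetical:

```python
import numpy as np
import open3d as o3d

def merge_prescan(clouds, angles_deg, voxel=0.005):
    """Fuse per-stop depth captures into one object-space point cloud.

    clouds: one o3d.geometry.PointCloud per turntable stop;
    angles_deg: the commanded table angle at each stop (known from the
    commands we sent, since there is no encoder feedback)."""
    merged = o3d.geometry.PointCloud()
    for pcd, angle in zip(clouds, angles_deg):
        # undo the physical rotation: rotate the capture back about the table's Z axis
        R = o3d.geometry.get_rotation_matrix_from_axis_angle(
            np.array([0.0, 0.0, -np.radians(angle)]))
        merged += pcd.rotate(R, center=(0.0, 0.0, 0.0))
    return merged.voxel_down_sample(voxel_size=voxel)  # simplified is good enough here
```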
Before starting the sequence, we can step through all the keyframes and check every position before any scanning happens. Another great feature, not very visible here, is that based on the prescan the robot keeps a constant distance to the object. That matters because we shouldn't use autofocus: it changes the camera parameters, which makes it much harder to align the images or to build a noise-free mesh in photogrammetry. With a constant stand-off we can rely entirely on manual focus; once it's set properly, we can scan for hours with the same parameters, and the photogrammetry model ends up much better (I'll sketch that below). As I said, we can use the keyframes and preview everything, so let's go further.

Scan examples. Here we're using photometric stereo, which lets us capture much more detailed geometry and, for example, PBR materials, details that can't be captured with standard photogrammetry, because we observe how each pixel in the image behaves under different lighting. From that we can compute the material parameters: displacement maps, normal maps, specularity, roughness, basically whatever we want. In this case we have a photogrammetry model with displacement from the photometric stereo and all the PBR parameters applied. The metallic channel really shows the metalness, and the leather looks like leather. What you do with the scan later is up to you; for example, some companies want displacement instead of normal maps, so the model can be modified in ZBrush or another sculpting tool afterwards.

The next object was a bit harder because it had a larger metallic part, and without photometric stereo the mesh was quite poor, not very detailed. Photometric stereo also lets us get away with far fewer positions than pure photogrammetry: to capture a very highly detailed object you might need a thousand camera positions with a macro lens, whereas here I captured about 200 camera positions from a larger distance and then improved the final result using displacement. That wouldn't have been possible without photometric stereo and proper automation.

And the summary. As you can see, Blender saved us a lot of time, especially at the early stage, where it's very easy to fall into rabbit holes and spend months there. By using software I already knew very well, I saved a lot of time and leveraged properly written tools like the inverse kinematics, so I didn't have to study the math for weeks to understand it; I could just focus on making the different pieces talk to each other. Blender is also constantly improving without us spending any time on the software itself, because that's handled by other people, which is another very nice thing about building on Blender. I had also considered implementing the UI with raw OpenGL, but when I started studying it, I realized it would take far too much time; with Blender you don't need to know how to build responsive UIs or viewports. And of course there is the Blender community, which is awesome.
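To make the constant-distance idea from earlier concrete, here is a minimal sketch, assuming the prescan is a Blender mesh object; names and the sampling step are hypothetical, and real code would also reject unreachable or colliding poses:

```python
import bpy

def standoff_poses(prescan, distance=0.5, step=50):
    """(camera_location, focus_target) pairs at a fixed stand-off along each
    sampled vertex normal of the prescan mesh."""
    mw = prescan.matrix_world
    rot = mw.to_3x3()  # note: ignores non-uniform scale, fine for a sketch
    poses = []
    for v in prescan.data.vertices[::step]:  # crude even subsampling
        point = mw @ v.co
        normal = (rot @ v.normal).normalized()
        poses.append((point + normal * distance, point))  # camera position, focus point
    return poses

# usage: poses = standoff_poses(bpy.data.objects["Prescan"], distance=0.5)
```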
The community offers a lot of help here and there, and you can find examples of basically whatever you need. And geometry nodes: I haven't said anything about geometry nodes here, because they're still quite new and I'm not using them in the robot yet. But I'm sure it's possible to use geometry nodes to distribute cameras, maybe in real time, without processing and complex algorithms in the backend component; that's certainly a big area for improvement. One drawback I do see: if you want to sell the hardware commercially and the customer doesn't know Blender, it might be hard for them to get used to it. There are a lot of buttons, sliders, and windows, so it's definitely not for everyone. I'm really looking forward to a nice feature called Blender Applications, which was introduced last year, though I don't know if it's still being developed; with it we could strip out everything we don't need and keep only the features we actually use. Maybe it will happen, maybe not. Thank you very much for coming; it's a slightly shorter presentation. If you have any questions, just let me know. We can do some Q&A, yes?

Yes, the question is whether I generate G-code for the arm. In the first version, which ran on the CNC shield, I did use G-code, that's true. And it's actually very easy, because a G-code command is just "G<number> X... Y... Z...", so you pick those X, Y, Z values and map them onto your axes, for example turning the linear moves into rotational movements (a tiny sketch of that mapping follows below). So the robotic arm used G-code. In the next version, the proper robotic device, we have custom commands, so we no longer rely on G-code. That has drawbacks and advantages: something custom isn't as universal as G-code, which almost every CNC machine speaks, but in our case a custom way of sending commands was simpler. In the end it's mostly just formatting; it's a different format for the same information, and you can encode it however you want.

Yes, and in this case the output from the robot is just images, so you can use whatever you want: RealityCapture, Metashape, or some open-source software. The only custom piece is the PBR processing tool, because there isn't much software for that on the market, so we handle it our own way. But as long as you have images, you can use anything, even NeRFs or Gaussian Splatting, which is a very popular method right now. Basically, the robot automates the boring part, which is taking the photos.

Is the whole robot project open source? For now it will definitely stay closed source, because we're still investigating the market, whether there is enough need for this hardware. Right now we're focusing mostly on services, for example scanning for machine learning, for museums, for game dev, and in the meantime we keep improving the robot by using it. Hopefully next year we'll have something more ready to show publicly.
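A tiny sketch of that mapping, purely illustrative; GRBL happily accepts angles in the linear axis words as long as the steps-per-unit are calibrated per joint:

```python
def angles_to_gcode(joint_deg, feed=600):
    """Pack joint angles into G-code axis words. The controller does not care
    that 'X' is really a rotation; the firmware's steps-per-unit does the mapping."""
    x, y, z = joint_deg  # three joints, purely for illustration
    return f"G1 X{x:.2f} Y{y:.2f} Z{z:.2f} F{feed}"

# angles_to_gcode([90.0, -15.5, 30.0]) -> 'G1 X90.00 Y-15.50 Z30.00 F600'
```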
Yes, how did I compute that? It was absolutely necessary, because without a proper algorithm for computing optimal camera and robot positions, all you really have is an automatic robot that moves a camera around. To get better quality in the end, we need to keep the same distance during the whole capture, so we're not too close or too far, and we need to compute the coverage: before we feed the images into the photogrammetry software, we need to know, more or less, whether the object is properly covered from the outside (a simplified coverage sketch follows below). I think the automatic camera positions are the core of the whole automation; without them there is basically no automation at all.

For the first question, about a physically correct camera: we basically need to know the lens, the focal length, say 35 mm or so, and the sensor size. We can actually skip the distortion, because its effect is too small to matter when computing camera positions. Blender's camera works quite nicely for this. In my algorithm I used OpenGL to compute the positions, so we could also simulate a proper physical camera in OpenGL, even with distortion parameters, but then we would need those distortion parameters from somewhere, so we would have to calibrate the camera and feed that into the software.

And regarding the second question: no, I'm not using ROS at all. I wanted to build something from scratch that I know exactly how it works. If I had studied ROS a bit more it might have been convenient; I see that many people in academia use it a lot, so it's very popular. But in this case I wanted to go fully custom.

And regarding machine learning for the robot positions: I'm not using any machine learning, because to train networks you need data. For a given camera position, you would need to decide and label whether that camera is correctly positioned. It's definitely possible, but we're not there yet; we're mostly focused on the hardware and on making everything work. We are thinking about it, though, and this project is actually already being used for training some networks in other ways, so I definitely want to use that in future versions; it's not a problem at all.

Yes, the coverage is computed after the prescan. For a specific case we might request, say, 200 cameras at 50 centimeters from the surface and run the algorithm; it computes the most optimal camera positions, and after that we compute the coverage, based on the simplified prescan data. Everything is a simplification, of course, with trade-offs here and there; using the simplified prescan data is much faster, and we don't need photogrammetry at all at this stage, we just assemble point clouds using known transformations. Computing the coverage is also crucial, because if you specify too few camera positions as the maximum, it won't be possible to scan the object properly; you would end up with half the object scanned, without full coverage.
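Here is a simplified coverage sketch in NumPy, assuming the prescan gives points and normals and each camera is a position plus a view direction; it ignores occlusion, which a real implementation (mine uses OpenGL) has to handle:

```python
import numpy as np

def coverage(points, normals, cam_pos, cam_dirs, fov_deg=40.0, max_angle=70.0):
    """Fraction of prescan points seen by at least one camera.

    A point counts as covered when it lies inside a camera's view cone and
    its normal faces that camera; occlusion is ignored in this simplification."""
    covered = np.zeros(len(points), dtype=bool)
    cos_fov = np.cos(np.radians(fov_deg / 2.0))
    cos_max = np.cos(np.radians(max_angle))
    for c, d in zip(cam_pos, cam_dirs):
        to_pt = points - c
        to_pt /= np.linalg.norm(to_pt, axis=1, keepdims=True)
        in_cone = to_pt @ d > cos_fov                              # within the lens FOV cone
        facing = np.einsum("ij,ij->i", normals, -to_pt) > cos_max  # surface turned toward camera
        covered |= in_cone & facing
    return covered.mean()
```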
It's quite an important parameter. And everything also depends on the algorithm itself, which can be implemented better or worse, so there is of course plenty of room to improve with more advanced algorithms. Yes? How do I calibrate the camera? We can use OpenCV for camera calibration; it's very popular and it works (a short sketch follows below). In some cases, for the photometric part, we can instead use the parameters computed by the photogrammetry, because they can be more robust. Then we use the simple Brown-Conrady model, which is very, very popular, so it's also supported, or can be implemented, in OpenGL shaders. But in most cases I use the simple OpenCV method to calibrate cameras.

Any more questions? All right, yes. This is quite complex, because we use differentiable rendering here to fit those material parameters together. In this model, metallic is simply a blending factor between two materials, dielectric and metal. We can rely on the specular value: if the specularity is higher than a certain level, we can assume the surface might be metallic, because it reflects light much more strongly than a simple dielectric, which reflects only about 4%. To go a little deeper: you can capture with cross-polarization and parallel polarization, and by subtracting the two you can separate, more or less, the specularity of the object from its albedo. From there, even simple thresholding of the values above a certain level works. In most cases, as long as you don't compare the render side by side with the real object, it looks much better than plain photogrammetry. But it's a really deep topic, with many methods, and they're computationally very heavy; it's not simple to implement, but it's possible.

Oh, sorry. By the actual model, do you mean the final photogrammetry model? In my case, I first move the robot in Blender, and that moves it physically. Do you mean the other way round, moving it physically and seeing what happens in Blender? That would be possible, but we would need encoders on every axis. Without them it's not possible, because simple stepper motors have no logic inside; they can't report their current position back. With encoders it's definitely possible: you would just need some kind of thread running in Blender that receives those values and updates the model. Of course, there might be small differences between reality and Blender, because encoders have a finite resolution; Blender is definitely more precise than physical encoders. So yes, that's possible, just harder.

Yes, yes. You simply rotate the object physically and make sure it doesn't deform in any way. If it's made of cloth, you have a big problem, because you can't simply flip it. Otherwise you just create two separate scans and then combine them in the photogrammetry software. And the robotic arm itself is actually paused right now; it's not being developed anymore. I just took it out and tried to use it again.
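Going back to the calibration question above: a minimal sketch of the standard OpenCV chessboard flow, where the pattern size, square size, and image paths are assumptions; OpenCV's distortion model is the Brown-Conrady one I mentioned:

```python
import glob
import cv2
import numpy as np

def calibrate(pattern=(9, 6), square=0.025, images="calib/*.jpg"):
    """Chessboard calibration returning intrinsics K and Brown-Conrady
    distortion coefficients (k1, k2, p1, p2, k3)."""
    # 3D coordinates of the chessboard corners in the board's own plane
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts, size = [], [], None
    for path in sorted(glob.glob(images)):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
            size = gray.shape[::-1]
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist
```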
I actually forgot how to use it, and I'm learning it again. It was very complex, and the UX was very bad at the time. Building it was a great experience, but a robotic arm is not a good idea for this specific use case; it's total overkill. There are simpler mechanical architectures, similar to 3D printers or CNC machines, and robotic arms are very hard to build properly; physics is not our friend there. All right, we're at 50 minutes, so thank you again for coming.