Our second speaker for today is Giacomo Spiegler. Giacomo is also an assistant professor in the Department of Cognitive Science and Artificial Intelligence, where he works on a range of topics concerning primarily deep learning and deep reinforcement learning. Giacomo received his PhD in computational neuroscience at the University of Sheffield and, incidentally, obtained his bachelor's degree in computer engineering from the same institution where Murat was a PhD fellow, namely the Scuola Superiore Sant'Anna. In a recent study co-authored with one of our students, Giacomo explored the important issue of the ethical use of machine learning algorithms by comparing the privacy-utility-fairness trade-off in different neural networks. But what he's going to talk to us about today is a project he was working on during the COVID-19 pandemic. During the pandemic and the ensuing lockdown, Giacomo invested time and resources in acquiring hardware to create a low-cost robotic hand with sensors that can be built locally and used for scientific research. You can see the hand already in the picture, and in just a few seconds Giacomo will tell us a little bit more about it. Giacomo, the floor is yours.

Thanks. Okay, so in this talk today I will talk about the robot hand, the hand next to me. But I also want to talk about some recent advancements in the field that are changing the way we think about robotics. So we're going to go on a journey through the developments of the past few years in the field of robotics.

First, let's talk about end-to-end learning. In order to understand what end-to-end learning is, we need to go back and see how we were doing things before the deep learning revolution of around 2012. If you think of traditional ways of doing artificial intelligence tasks, like speech recognition, computer vision, or any other task, the typical approach was to build a pipeline with a lot of interacting components and try to encode what we believed was the way to solve the task. So, for example, if we wanted to transcribe text from audio, we would make an acoustic model that, like our ears, perceives sound on a logarithmic mel scale, so higher frequencies are spaced farther apart than lower frequencies (see the sketch below). We would first extract features like this, then make a phonetic model to recognize phonemes, connect them together into words, then build a grammar to detect which sequences of words are plausible, and only then actually get text. Now, it turns out that if you do the same type of task with a single deep neural network, and you let the computer do the optimization for you — provided you have the data and compute power — you generally get much, much better performance than with the manually designed system. This has been true in a lot of fields. We got superhuman performance on speech recognition, translation, text generation, computer vision, and a lot of similar tasks, especially in the past few years. But all of this happened within the last 10 years, and some of these fields, like machine translation or robotics, are even more recent than that. So the question is the following. We saw significant advances in all of these — we call them supervised learning tasks: we already have a label, we have a picture, we know what's in the picture, say the category "dog", so we can attach a label and ask the system to predict it. And that works really well.
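As a concrete illustration of that mel-scale feature step, here is a minimal sketch; the Hz-to-mel formula is the standard one, and the frequency range is just an example I chose, not from the talk:

```python
import numpy as np

def hz_to_mel(f_hz):
    # Standard mel-scale conversion: roughly linear below ~1 kHz,
    # logarithmic above, mimicking human pitch perception.
    return 2595.0 * np.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    # Inverse of the conversion above.
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# Points equally spaced on the mel scale correspond to wider and wider
# gaps in Hz: exactly the "higher frequencies are spaced farther apart"
# property mentioned above.
mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(8000.0), 11)
print(np.diff(mel_to_hz(mel_points)).round(1))  # gaps grow with frequency
```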
The question is: can we do the same to learn behavior completely autonomously? We say it's end-to-end because we start from raw data — for example, an image as pixels, or audio as a waveform, not even a spectrum — and we want the neural network to directly predict an output in the proper output space, which could be a label for what's in the picture, or actual generated, transcribed text. So it's end-to-end because we start from the raw data and we get the output directly. No intermediate steps. And the model is asked to do this by itself. Again, computers are really good at this; we already have systems that can do this better than humans on a wide range of tasks. So the question is: can we do the same with robotics? And this is not my idea — this slide is from a couple of years ago. People at the main research institutes in the world are actually going in this direction, at least in some types of research activities. So we did not come up with this; quite a few people are moving in this direction in the field, and it seems very promising.

When we talk about end-to-end learning in robotics, we mean starting from raw sensors — could be the positions of the joints of the robot, could be accelerometers, or in general could be pixels from cameras; the robot could have cameras in its eyes, or outside, looking at the task the robot is doing. And the output in this case is directly motor commands, which could be the torques to apply to the motors, or could be the angle of each joint — what angle to put each joint of the robot's body at. And we don't tell the robot how to do it. We simply say: okay, you have this situation, you need to solve a task, and here is your body. Just do your thing. And we want it to be able to do it better than we could by designing the system ourselves. This is similar to what has already been successful in related tasks that do not involve physical robots. It turns out that if you want to learn a behavior, rather than a supervised learning task like outputting labels or transcribed text, you have to use a slightly different kind of system — for example, as we'll see in a couple of slides, deep reinforcement learning. But it's still deep learning, applied to learning a whole behavior instead of just predicting a single label. And it has been tremendously successful in the last few years. In 2016, AlphaGo by DeepMind won against the world champion at Go. The game of Go is actually very challenging because it has a branching factor of about 200, which means that at each decision, at each turn, a player can choose among up to 200 valid moves. And that means that traditional search methods cannot succeed at this game (see the back-of-the-envelope sketch below). But deep learning was used to learn, basically, a kind of intuition to predict what could be good moves. And it turned out to be extremely successful, because now no human player can compete with computers at Go. Similarly, a couple of years later, in 2018-2019, both DeepMind and OpenAI extended similar systems to play challenging games like StarCraft II and Dota 2, which are easy for humans to play casually but very, very difficult to play at the professional level. There actually are international tournaments, and the human world experts at these games are tremendously better than the average player. But now computers can defeat them at those games. And these games are very complex: a game of Dota 2 can last for an hour.
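To make the branching-factor argument concrete, here is a hedged back-of-the-envelope calculation (the depths are illustrative choices, not from the talk):

```python
# With ~200 legal moves per turn, the game tree explodes far too fast
# for brute-force search, even at shallow look-ahead depths.
branching_factor = 200
for depth in (5, 10, 15):
    positions = branching_factor ** depth
    print(f"depth {depth:2d}: ~{positions:.1e} positions to examine")
# depth  5: ~3.2e+11,  depth 10: ~1.0e+23,  depth 15: ~3.3e+34
```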
In these games, computers have to make a lot of decisions whose impact on the outcome only shows up an hour later. That's very challenging for a computer to do, and yet our current systems can already defeat the best human players at these games. Here we still don't have a body, but we are still dealing with decision-making problems and learning a behavior: the computer is learning to play the game better than humans. And we can do the same in simulation using complex bodies, like in the top right corner and the bottom right corner. In this case, the robot can learn to do very challenging tasks, controlling artificial bodies in simulation. And that also works pretty easily with current state-of-the-art techniques: you can do it with a single GPU in a single computer and obtain fantastic performance. So the field is progressing in a promising direction.

So the question is: we want to do the same in robotics. Robotics, as we've seen a bit, is much more challenging than many other AI tasks, because robotics is incredibly expensive and has a lot of hard engineering problems you need to solve before even applying your algorithm, your "brain", to control the robot. A lot of these baseline difficulties are separate from the research work, but you still have to solve them and deal with them if you want to use a robot. That leads to much slower progress than in other fields of AI, unfortunately. But we're going to work on that and make it a bit more accessible.

Okay, first, let's talk about end-to-end learning in robotics. This is a broad keyword; it can mean a lot of different things, so there is not one single way to do it. The idea of end-to-end learning is that you have a robot that is controlled by taking raw sensor data and cameras as input, and that then outputs motor commands directly to control the body of the robot. No intermediate steps, no modules designed by humans to tell the robot how to do it — the robot finds out how to do it by itself. So how can it do that? Well, the typical way is called deep reinforcement learning. That means that we have a robot that learns to do a task by trial and error. And while doing the task, at every time step, every decision the robot makes comes with a reward: the robot will receive a reward or a punishment. That's just a number, like a plus one or a minus one, depending on whether the robot is doing well or badly. We design a reward function that tries to encode how well the robot is doing on a certain task. The robot then adds these rewards together — just sums them up — and we use reinforcement learning algorithms that allow us to find the parameters of the neural network such that the robot will collect as much cumulative reward as possible during the interaction (a minimal sketch of this loop follows below). That's what was done by OpenAI in the top right corner. In this case, the robot had to do a cube reorientation task: it had to move the little colored cube in its hand into a target 3D orientation. Now, this task is extremely challenging, even though it looks simple, because that hand has 20 degrees of freedom, so it's a very high-dimensional action space, and it's a very challenging motor task. If we were to do this with traditional control-based methods, we would barely be able to do something with maybe a fraction of the performance of this system. But with end-to-end learning, the robot learns to do it very well. Now, that costs a lot of money and also requires a lot of compute power.
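Here is a minimal sketch of the reward-collection loop just described, assuming a hypothetical environment and policy with a simplified Gym-style interface — the names and signatures are illustrative, not the actual OpenAI setup:

```python
def run_episode(env, policy, gamma=0.99):
    """One episode of the trial-and-error loop described above."""
    obs = env.reset()                 # raw sensors: joint angles, pixels, ...
    total_return, discount, done = 0.0, 1.0, False
    while not done:
        action = policy(obs)          # motor commands: torques or joint targets
        obs, reward, done = env.step(action)  # reward: e.g. a +1 / -1 signal
        total_return += discount * reward
        discount *= gamma             # optionally discount future rewards
    return total_return

# A deep RL algorithm (e.g. a policy-gradient method) then tunes the policy
# network's parameters to make this cumulative reward as large as possible.
```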
But the field is improving and the algorithms are improving, so it's becoming cheaper and cheaper to do this kind of work. One problem of reinforcement learning, however, is that we need to encode a reward function: we need to write a mathematical function that, given a certain behavior of the robot, tells how well the robot is performing. It turns out that this is actually very difficult to do, because we can create a function that we believe encodes the structure of the task, and then find out that the robot learns to do a different task — one that still collects a very high reward but does not do what we actually wanted. That's called the problem of reward specification: it's very difficult to encode a task in a unique way using a reward function.

One way around it is, for example, imitation learning. Imitation learning is similar to the way humans teach other humans: by showing. If I want to show you how to make coffee, I just make coffee in front of you, and then you can do it right after me, right? Now, that is still not quite how it works in the field — that's very difficult to do — but we are moving in that direction. In practice, we can treat learning a behavior as supervised learning by collecting trajectories of motion. For example, in this case in the bottom left, we have a motion-tracking system that captures the different movements of a dog. These movements are then retargeted onto the quadruped robot by mapping the positions in space of the paws and the hips and the shoulders. The robot is then trained in a supervised learning way to choose the angles of its joints such that the movement it produces is as similar as possible to the movement of the actual dog (a sketch of this idea follows below). And as you can see in the gif at the bottom, it actually works very nicely. The robot of course has many fewer degrees of freedom than the dog, but you can see it turning on itself and walking backward and forward. It's quite realistic if you consider that this body has relatively few degrees of freedom, while the one in the top row has got, I think, 72 degrees of freedom, which hardware-wise is insane for current robots. But again, we are getting there. So that's an alternative, because we don't have to specify a reward function; we only have to collect demonstration data and use it to train a model. And we know that if you can collect data, deep learning works fantastically well. So that's a promising approach.

Lastly, we know that collecting data is cheap, but labeling data is expensive. So there is a trend in all fields of deep learning, not only robotics, of going towards unsupervised or self-supervised learning. Self-supervised learning works by giving the system a training signal that is independent of the final task. For example — you can see it in the picture, it's a bit small — in that case the robot arm is interacting with objects in the environment. It's not receiving a task reward or being told what to do; it's simply moving at random in the environment. But if it moves objects, and the configuration as seen by the camera changes, then the robot receives a reward. So the robot basically is rewarded for exploratory behavior. By doing so, the robot ends up learning useful behaviors that don't have a purpose per se.
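Before moving on, here is the imitation-learning recipe from the dog example as a minimal, hedged behavior-cloning sketch — the network sizes and the tensors `demo_obs` and `demo_joint_targets` are illustrative stand-ins, not the actual setup used in that work:

```python
import torch
import torch.nn as nn

# Stand-in demonstration data: observations and retargeted joint angles.
demo_obs = torch.randn(1024, 64)            # e.g. robot state features
demo_joint_targets = torch.randn(1024, 12)  # e.g. 12 joint angles per time step

policy = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 12))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(100):
    pred = policy(demo_obs)                           # predicted joint angles
    loss = ((pred - demo_joint_targets) ** 2).mean()  # match the demonstration
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# Plain supervised learning: no reward function needs to be specified.
```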
But — coming back to the self-supervised case — once you take those learned behaviors and fine-tune them on a new task, learning becomes much easier, because the robot has already acquired some useful motor commands and a useful visual system. So again, you can do end-to-end learning in a wide range of ways; there is not a single answer, and usually you can combine many of these approaches. But the idea is that you want as little supervision as possible. You might want to collect data or create a reward function, but you don't want to tell the robot how to do a task. You want the robot to find a good way to do the task by itself. Because, as we saw with traditional machine learning, computers usually find better behaviors, better-performing systems, if they are left to themselves, without us enforcing too much structure on the solution.

Now, there are more potential benefits to end-to-end learning compared with typical traditional robotics. Here I have a set of hypotheses. I'm working on testing them, so they're still untested — take them with a very big pinch of salt, okay? But I think they are reasonable, so I hope you will agree with me on them. The first one is more methodological. If you have a simple task — imagine a robot that has to follow people — you can do it pretty successfully with traditional robotics methods, meaning a system based on components. One component will use a deep neural network to recognize where a person is in the image and to track them. Another component will implement the control of the robot to move it around: if the person is in the left part of the visual field, the robot will move left; if to the right, move right; otherwise, just move forward. And you can engineer this system very easily. The problem is that, because it is based on if-then rules, it can break easily (a concrete sketch of such a controller follows below). If you increase the light in the room by a lot, the camera gets flooded by light, and your system, which was trained on a certain type of images, will not recognize the person anymore. Even if the person is right in front of the robot, the robot will stay still and not follow them anymore. So it can break in a kind of dumb way. That's one limitation. On the other hand, if you were to do the same with end-to-end learning, you would have a significant overhead in engineering complexity. You might have to build a simulation to train your system in; you might have to collect a lot of data for imitation learning; you might have to spend more compute power on training. So for simple tasks you have an overhead when doing end-to-end learning, which may make it less appealing than traditional methods. However, if we start improving the ecosystem by making software libraries more readily available and improving access to this type of research, we could actually lower this bar. That's one thing we want to do in the future.

From this, the first hypothesis follows. If you take traditional methods and apply them to a very complex task, like making coffee, your system breaks down: there is just no way current methods can do that right now. In the future they might be able to, but it adds enormous engineering complexity to the system. The issue is that you have too many interacting components, and all of them have to work with basically perfect performance, because if a single module breaks, the whole behavior collapses.
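To make the person-follower example concrete, here is a hedged sketch of such a modular if-then controller; `detect_person` stands in for the deep-network component and is entirely hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    x: float  # horizontal position of the person in the image, in [0, 1]

def detect_person(image) -> Optional[Detection]:
    # Stand-in for the deep-network detector module; returns None whenever
    # it fails, e.g. when the camera is flooded with light.
    return None

def follow_person_step(image) -> str:
    person = detect_person(image)
    if person is None:
        return "stop"        # single point of failure: the robot freezes
    if person.x < 0.4:
        return "turn_left"   # person in the left part of the visual field
    if person.x > 0.6:
        return "turn_right"  # person in the right part
    return "forward"
```

If the detector fails, every rule downstream of it is dead weight — exactly the brittleness described above.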
To return to the coffee example: if the robot cannot detect where the mug is that it needs to pick up to make coffee, it cannot do the whole behavior, only because that single component has broken. And that means the chance of the whole system failing goes up. But if you have the whole engineering setup to train the system end-to-end, changing tasks can be as simple as changing the simulation to model the new task, or changing the reward function, and then adding more compute power. Without changing anything else in your system, you might get a robot that learns the complex task in the same way it learned the simple one. So the important point here is that with end-to-end learning you pay quite an overhead of engineering complexity up front, but it scales much more gracefully with task complexity than traditional methods, which at some point blow up and become unusable when the task gets too complex.

The second hypothesis is that if you have an end-to-end monolithic system — for example, one single neural network that takes raw sensors as input and outputs an action — instead of having these interacting components, you might produce behavior that is much smoother. What I mean is the following. If you have multiple interacting components and one of them breaks, the whole behavior collapses, and each of them is kind of on-off: if one module is not working, all the other modules cannot work, because they rely on the information coming from that module. An example could be a vacuum-cleaning robot that does not detect any obstacles, so it moves forward; then it reaches a wall and bumps into it, and only then detects that there is a wall. So it changes behavior and goes backward for a fixed distance, say one meter, and then repeats the behavior. But because there is only one way to go forward — because it's stuck in that little corner — it keeps going backward and forward, backward and forward, in a loop. You look at that and you think: okay, that's a dumb robot. It's literally an if-then rule that broke, because the sensor did not detect the obstacle, so the component that passes obstacle information to the planner has failed. Even if you don't work in robotics, you look at the behavior of this robot and you can tell that some if-then rule has broken in a binary way: it is either working or not working. The point is that a monolithic system does not have this on-off, binary behavior, because it is not relying on a single module to perform; it may find better ways to do a task even with incomplete information. Another example is a robot dog that wags its tail when you smile. It has to detect that there is a person in the camera image and that the person is smiling; if it detects a smiling person, it starts wagging its tail, and it's happy. But if the light in the image increases, the robot doesn't detect the person or their facial expression anymore, so it just stares at you doing nothing, and you say: okay, that's a dumb robot. But if you train it end to end, the robot might not actually need to know that there is a face in the picture, as long as there are features in the image that correlate with the relevant behavior — with you smiling and the robot having to respond in a certain way.
So sometimes we assume the system requires much more information to perform its behavior than it actually does, when it can get by with much less. If you train the system end to end, you can get the behavior in a much simpler way than we expected, or than we thought was needed, and that can lead to more robust systems. We are now running experiments to test these hypotheses — we're starting to train some robots and collect some data — so hopefully we will soon have data to actually test whether they hold. But this gives an idea of why end-to-end learning could be interesting compared to traditional robotics, even setting aside the fact that when you have a computer optimizing a behavior, it usually learns behaviors that are much more powerful, much more performant, than manually designed ones.

Okay, this was background to give an idea of this exciting new area of robotics that has been developing over the last few years. But let me briefly talk about what we are starting to do in this direction at the university. First, I need to define a couple of problems. If you do robotics, you are probably aware of them; if not, you may not be. There are a couple of main problems in robotics. One big barrier that prevents most research groups from doing research in robotics is cost. Robot platforms, even very simple ones, cost no less than $10,000 and can easily reach half a million dollars for a single robot. This means that unless you are in a very well-funded company or you have a big grant to pay for your robot, it's very difficult to do research on the topic, simply because you cannot get access to the platform. And remember: if fewer people can do research on a topic, the field progresses more slowly; if more people can work on it, we get engineering and scientific advances faster. So we get robots in every house sooner if we can get cheaper robots. The other problem, as we were saying, is that if you do end-to-end learning in robotics, there is significant overhead, because there are engineering complexities in designing the system from the software point of view. Hard engineering — not even the algorithms themselves, just the basic engineering of how to make these things work and how to write the code. There are no standards yet in the field. Think of libraries like Keras, PyTorch, or TensorFlow, which make development in those fields very easy: you can now do image classification in literally three lines of code with these libraries (see the sketch below). There is nothing like that yet for robotics. There are no fixed standards yet, which means that people who want to do research on the topic face a much higher barrier to entry. And last but not least, of course, the algorithms are still improving; they're still not super effective. They are very powerful — they can learn very complex behaviors — but there are still behaviors we cannot learn with current methods. So this is an active avenue of research, and people are improving the methods to make these systems capable of learning more difficult tasks, so they can be more useful.
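As an aside, the "three lines of code" remark refers to the kind of thing modern libraries allow. Here is a hedged sketch using Keras' built-in pretrained models — the model choice is illustrative, not something named in the talk:

```python
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input, decode_predictions

model = MobileNetV2(weights="imagenet")   # downloads ImageNet-pretrained weights
# Given `images`, a NumPy batch of shape (N, 224, 224, 3):
# preds = model.predict(preprocess_input(images))
# print(decode_predictions(preds, top=1))  # human-readable class labels
```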
But we started from the first problem: the cost of the robot platform. Consider commercial robot hands, in this case the ones used for research on dexterous manipulation. If you have an object you want to move around in your hand, or you want to do complex manipulation of objects in your environment, you usually need hands with very many degrees of freedom, and the existing commercial hands cost a lot of money. The one from OpenAI, the one used for cube reorientation, costs $300,000, and another one that is similar to this one costs $15,000 plus service and taxes — a lot of money. And even though that is pretty cheap for a robot, it is still out of budget for most research labs, and in particular, you can imagine, for labs in third-world countries that have even less funding. That prevents many people from doing research on the topic.

So the first thing we did is design and build this hand. It has 16 degrees of freedom and is capable of doing most of the dexterous manipulation tasks we care about: mechanically, this hand can do most of the benchmarks in the field. And because it's really cheap — it's a very low-cost research platform — we actually plan to sell it to other research labs at a very low cost, a mere fraction of any existing alternative, so that we can improve access to research and more labs can work on dexterous manipulation. That means we'll get robots that can do complex manipulation in unstructured environments sooner, and then we get robots in every house in maybe 10 years instead of 50.

But the main reason we developed this hand is to be able to do research with it. We're not interested in the engineering of the hand or the mechanics per se; we're interested in what we can do with it. The first objective is to replicate the OpenAI work on cube reorientation. This already works very well in simulation, and we are currently, in these very days, in the process of transferring this learned behavior from simulation to the real robot. Hopefully within a couple of months we might have the cube reorientation demo running on the robot hand. After that, we can start running many different baselines and different tasks, and work on sim-to-real transfer, that is, transferring behavior learned in simulation to the physical robot. So now we have the platform, and we can use it to do this exciting research — and in the process develop new methods that make training cheaper, requiring less compute power and thus less money, and that also make the behavior smoother and better performing.

So, to wrap up: end-to-end robot learning is an exciting new development in the field, and I strongly believe it's going to be a major component of any robot that will exit the factory. If you want robots in day-to-day life, in your house or in the street, they will almost certainly have to use some of this technology. That's my opinion, of course, but I'm pretty confident in it, and I think we will talk about it in the discussion afterwards. I also believe—

If we want to have some time for that, you should start rounding it off.

Yeah, yeah, this is the last slide. The next thing we want to do — and this, again, is not the primary objective of our research, but simply because we now have this hand and can manufacture it at a very low cost — is a side benefit of this work: improving access to research.
So we can actually help other research teams all over the world to enter this area of research and to work on dexterous manipulation using a single standardized platform that people can share code for in an open-source way. The hand design will be made open source, and pre-trained neural networks could also be exchanged between different research labs. That's a by-product of our research, but one we hope can have a major impact on the field.

Lastly, we are also planning other exciting research on end-to-end robot learning, even without the hand. One part is about the hand and all the exciting things we can do with it, but other things I am starting to work on include reinforcement learning from human preferences — learning a reward function from human feedback rather than hard-coding it. I am also currently testing the hypotheses I presented in the previous slides about the advantages of end-to-end versus traditional robotics methods. And finally, we also plan to do some research on smooth teleoperation, using this robot hand attached to a robot arm. If any student, or anyone at our university or other universities, is interested in these topics, please contact me, because we have a lot of open projects and we are seeking collaborations.

I think we should bring Murat in as well and go for a little discussion on these topics, because I could ask many questions about this, but let's see if we can make it a bit broader. Let's leave the robot hand sitting nicely in the middle here. What I wanted to know, actually, from both of you — you brought in different perspectives on robotics, where Giacomo went into a lot of the technical aspects and the cost of robots, and Murat talked about the social and ethical aspects of robots. When I look at robots in the world: robots have been used for a long time already, but mostly limited to factories, and people stay away from these robots because it's too dangerous to get close. And for decades already we've had these talks that robots are going to be more present in society — that we're going to see robots helping the elderly, robots doing traffic control, and things like that. And that still isn't happening. I can assume — Giacomo brought up cost — that that might be part of it, but probably there are many reasons why we don't yet see robots that often in society. So can you give me a perspective on what the reasons are that we don't see them right now, at least not much? When will this happen, and what problems do we need to solve before it does? Maybe Murat, could you start, because we haven't heard from you for a while.

Yeah, sure, I could discuss this one quite a lot, but to be super brief about why we don't have robots in our real life: one problem is that engineers and machine learning modelers do not design their algorithms or models to be deployed in society. For instance, their models are not explainable enough: if the system fails, the users cannot trace why that decision was made by the robot. This is especially true for end-to-end systems like the ones Giacomo introduced. Of course, this kind of system can provide some emergent properties for the robot, but it will not increase, let's say, the quality of life of humans from that perspective. If it fails, it has to be able to explain why it failed. That's why we still don't trust robots enough to put them in our houses, or to educate our children, or to be part of our real life.
And that's something we might come back to in the discussion.

Okay, I have a somewhat different perspective: I think the main problems are actually more practical. There are so many immense economic and practical applications for robots that everybody — maybe not every person in daily life, but certainly any company — would want to put robots everywhere. I think the problems are technical. One is the cost, and there I think we are seeing exciting developments: the cost of robots is still high, but it's falling really rapidly, so the field is going in the right direction. In particular, advances in 3D printing — this hand is 3D printed — are making it much cheaper to build these robots. But that's just one thing: if a robot costs 50,000 euros, of course it cannot be in every house; it may cost more than a luxury car. The other problem — the reason robots work in factories but not in daily life — is that day-to-day life is unconstrained; we call it an unstructured environment. Every house is different: different light conditions, different objects, different people. We need robots that can interact with an environment they have never seen before, and I believe this is where traditional robotics methods fail. If you engineer a system to work in a certain setting, you cannot predict every setting it will have to work in. But with end-to-end learning, you have systems that can be trained to be much more generalizable — out-of-training-distribution generalization is a major feature of deep learning and a major area of research. And when you have systems that learn end to end, you can also design them to continue learning after deployment. So you have systems that not only are more generalizable and can work in more environments, but can also learn and adapt to new environments. You cannot do this with traditional methods. Of course, you can pair a deep-learning-based component, like perception, with traditional methods, but then you still have the points of failure later in the pipeline. So I think going in this direction, with heavy end-to-end learning, will improve behavior in novel environments and make robots actually practical to use. And once you have something that works, lowering the cost is mostly an engineering matter; there are many ways to work on that, and it is usually feasible.

But if we look at the future — I know futurology is a very difficult topic — do you have expectations here? What kinds of robots would be introduced, and at what point? I know it's pure speculation, but do you have any idea on that?

What kinds of robots we will see in the future?

Yeah, in the near future. What would be the first thing we would see change in this respect? Any idea?

First, we should change our culture. That would be fun. We should consider robots as part of our daily life.

Like in Japan, right?

Yeah, like in Japan. Just as we use smartphones now, we will have personal robots in daily life. So that culture change has to happen. Another thing: we need a big research budget for this, to be sure. And lastly — of course, I'm a humanoid robotics researcher, so I would be happy to see a lot of humanoid robots around — but it will not start in the humanoid robotics area. It will start from the Roomba that cleans the whole house: whenever you disassemble the Roomba and you see the dirt inside, it shows that it has done its job.
Then it will gradually increase until we reach humanoid robots. And we definitely need humanoid robots in our daily life, because every system surrounding us is designed for humans: the height of this table, the structure of this cup — everything is designed for human affordances. So my prediction is that it will not start with humanoid robots, but it will converge to humanoid robots in our daily life.

Yeah. Anything to add?

Yeah, I agree. Though I also don't see many ways in which humanoid robots can ever become truly cheap. They can of course get much cheaper than they currently are, but they will still be very expensive — 5,000 to 10,000 euros minimum. So they are not going to be like a smartphone that everyone can afford. But I personally have a vision about this — so I will digress in my answer now. It's something I would want to work on, and I think it will happen: robot pets. Not meaning a robot dog specifically, but a general companion robot. The idea is simple. If you want to make a domestic helper robot, you need a robot that can solve very, very difficult tasks, like making coffee or cleaning, and those are difficult tasks for a robot. Current end-to-end methods, with the proper development and improvement, will be able to do them in the future, but not now — it's too big of a leap. But robot pets don't need to do a task. They have to be entertaining and interact with you socially, but they don't need to clean your house or do anything difficult. So I think that's something we can do with current technology, or we are very close to being able to. And it would have a tremendous social impact if we manage to get them to the point I imagine — kind of like a natural life form that can really interact in a complex way. Then the impact would be massive. So I think the first robots we see in every house will probably be robot pets, rather than other domestic helpers — aside from the Roomba, or practical single-task robots like it.

Thank you.