Human autonomy is the result of the seamless integration of our perception, our cognition, and our action. It is this integrated ability that enables humans to act by themselves, without needing to be teleoperated. Autonomous robots aim at this same capability: acting without being teleoperated. So they have sensors, such as distance sensors and cameras; they use their computers to plan their actions; and they move. CoBot, a mobile robot at Carnegie Mellon, performs tasks for users in our building, picking up and delivering objects from location to location.

Sensors provide a very large amount of complex numerical data, from temperature levels to distances to obstacles, walls, people, and furniture, from which robots need to interpret features about the scene so they can plan their tasks. For example, a robot needs to know its position. For that, CoBot uses its 3D scene images to extract planar surfaces, which are matched to walls and then to maps. The more accurate this matching is, the more confident the robot is about where it is. This is a robot that actually moves in our environment performing tasks, going from location to location through environments that look very different, from corridors to halls and even glass bridges, with success.

However, in spite of the significant advances in having robots move, robots still have many limitations at the perception, cognition, and action levels. They definitely cannot understand all natural language or perform all actions, such as going upstairs or opening doors. So at Carnegie Mellon, we introduced a new concept of autonomy for robots, symbiotic autonomy, which enables robots to ask for help to overcome their limitations. "Please press the elevator button," says an armless CoBot. "Please put the envelope in my basket." And humans can help with these very simple requests.
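The ask-for-help idea can be sketched as a simple execution loop: when the plan reaches an action the robot cannot perform itself, it asks a nearby human instead. This is a minimal illustration; the capability set and action names here are hypothetical assumptions, not CoBot's actual interface.

```python
# Hypothetical sketch of symbiotic autonomy: actions the robot cannot
# execute itself are delegated to a human helper.

CAPABLE_ACTIONS = {"navigate", "speak", "wait"}  # assumed capability set

def execute_plan(plan):
    """Run a plan of (action, argument) steps, asking humans for help
    whenever an action is outside the robot's own capabilities."""
    transcript = []
    for action, arg in plan:
        if action in CAPABLE_ACTIONS:
            transcript.append(f"robot: {action}({arg})")
        else:
            # The armless robot requests human help for this step.
            transcript.append(f"robot asks: please {action} {arg}")
    return transcript

plan = [("navigate", "elevator"),
        ("press", "the elevator button"),
        ("navigate", "office 7002"),
        ("place", "the envelope in my basket")]
```

Running `execute_plan(plan)` interleaves the robot's own navigation steps with spoken help requests for the steps it cannot do, such as pressing the button.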
CoBot can also go to the web if a request involves something for which it lacks knowledge. For example, "Bring me coffee." If CoBot does not know where coffee is in the building, it queries the web and then goes to the most probable place, as returned by the web, where the object can be: in this case, the kitchen for coffee. It also interacts with humans through dialogue and can ask for clarification. When it does not understand what a human says, it may pop up some other representation, such as a map, on which the human can specify what they mean by "conference room" or "printer room." And all these interactions with humans and the web are saved for future use by the robot. Finally, if the robot does not get help (it waited and nobody pressed the button), it can automatically fill email templates with the situation and send them to the developers, asking for remote help to come to the place where it didn't get help.

We have been talking about a single robot, but our research involves multiple robots, multiple CoBots, that coordinate to optimize their tasks and share knowledge. These CoBots have moved close to 1,000 kilometers at Carnegie Mellon in the last two years. Our multi-robot coordination is motivated by, and builds upon, a lot of the work we do in robot soccer, in which robots beautifully plan joint motion and teamwork, leading to these beautiful four-way passes that end with a successful goal. Our multiple robots at Carnegie Mellon, in addition to their pickup and delivery tasks, have introduced and performed a very novel task: data collection. They move through the buildings with their sensors, knowing their location, and they can collect temperature, humidity, and Wi-Fi signal strength data, providing these maps for policy-making based on the data. So symbiotic autonomous robots are a reality.
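The web-lookup fallback for missing knowledge can be sketched as follows. This is a minimal sketch under stated assumptions: the scores, location names, and the idea of a precomputed web-score table are illustrative stand-ins for the actual web query.

```python
def most_probable_location(obj, known_locations, web_scores):
    """Return where to search for obj: a known location if the robot
    already has one, otherwise the highest-scoring location suggested
    by a (hypothetical) web query, or None if nothing is known."""
    if obj in known_locations:
        return known_locations[obj]
    scores = web_scores.get(obj, {})
    return max(scores, key=scores.get) if scores else None

# Assumed web co-occurrence scores between "coffee" and building locations.
web_scores = {"coffee": {"kitchen": 0.9, "office": 0.4, "lab": 0.1}}
```

With these assumed scores, a request for coffee sends the robot to the kitchen, the most probable place returned by the lookup; once a human confirms the location, it would be saved into `known_locations` for future use.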
Multiple robots, multiple humans, the web, the physical space: we can see these robots as cyber-physical-social systems, a kind of new species that coexists with humans. So the ultimate question we face, as we move these robots from our labs and our buildings to the real world, is how we actually envision interacting with, and getting help from, these fascinating new artificially intelligent creatures.