An essential capability of an autonomous system is the ability to perceive the environment, that is, to build an internal representation of that environment. Where did we start in this area ten years ago? We were able to detect go and no-go zones, essentially doing obstacle detection. We were able to identify a few classes of terrain. We were able to track objects in the environment of the robot.

What did we need to do? We needed to do two big things that essentially differentiate the RCT application from other applications. First, we needed a much richer representation of the environment, one that would enable the kind of complex reasoning required to carry out complex missions. We needed to do that in unstructured environments, and we needed to do it online, as the mission is being executed. Second, we needed to do all this with limited resources: limited knowledge, limited data (in particular, limited data annotation), and limited computation.

So what did we do about this? The first challenge we addressed is detailed online semantic description of unstructured environments. This example, from early in the program, shows one of the first demonstrations of online semantic labeling of video that can be used for navigation and reasoning. Moving forward, deep learning came in, and we now have much more elaborate techniques that allow us to do this online semantic labeling and to integrate it into the entire system so that we can reason on the fly; this is something you will see demonstrated this afternoon. Second, we need to be able to generate descriptions of the environment at a level of fidelity and resolution that can support the kinds of missions mentioned in the previous presentation.
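To make the idea of online semantic labeling feeding a persistent representation concrete, here is a minimal sketch: per-frame class probabilities from a labeler are fused, frame by frame, into a running map that downstream reasoning can query. The class set, grid resolution, and the simple exponential fusion rule are illustrative assumptions, not the program's actual pipeline.

```python
import numpy as np

CLASSES = ["ground", "vegetation", "obstacle"]  # illustrative class set

def fuse_frame(score_map, frame_probs, alpha=0.7):
    """Fuse one frame's per-cell class probabilities into a running map.

    score_map   : (H, W, C) accumulated per-class scores
    frame_probs : (H, W, C) softmax output of a per-frame labeler
    alpha       : weight on the accumulated map (simple exponential fusion)
    """
    eps = 1e-6
    return alpha * score_map + (1 - alpha) * np.log(frame_probs + eps)

def map_labels(score_map):
    """Hard labels for planning: argmax class per cell."""
    return score_map.argmax(axis=-1)

# Toy online loop: two noisy frames agreeing that one cell is an obstacle.
H, W, C = 4, 4, len(CLASSES)
m = np.zeros((H, W, C))
for _ in range(2):
    probs = np.full((H, W, C), 1.0 / C)     # labeler is unsure almost everywhere
    probs[2, 2] = [0.1, 0.1, 0.8]           # ... but confident about one cell
    m = fuse_frame(m, probs)
print(CLASSES[map_labels(m)[2, 2]])         # prints "obstacle"
```

The fused map, rather than any single frame, is what a planner would consult, which is one simple way per-frame labels become a representation that can be "reasoned about on the fly."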
In particular, we needed to develop a representation that can be integrated into the world model and reasoned about to carry out complex missions.

Third, as was mentioned earlier, the world is not static. Not only is it not static, it is populated by agents that act in ways that are difficult to predict and difficult to model. So we developed techniques to track those agents, to predict their behaviors, and to use those predictions to make decisions on the fly as the robot executes a complex mission. More importantly, we did not just develop those fundamental techniques; we integrated them into complete systems, and again, you will see some of this during the demonstration this afternoon.

A major challenge in all of these elements is what data is used to construct the models. The traditional approach behind the success of machine learning involves large volumes of supervised data that is annotated, prepared, and curated. That is unsustainable, especially for the kinds of applications we are considering here. Instead, we developed techniques to learn those models from limited amounts of data in an unsupervised or semi-supervised way.

Finally, we need to do all this with limited on-board computing resources. We developed techniques to perform the kind of semantic labeling you saw, and other perception operations, with limited computing, in an anytime fashion, meaning that the algorithm itself adapts to the amount of computing available to it during a particular mission execution.

Those are some of the key challenges we addressed to meet the two big challenges that are specific to the RCT system. Thank you.
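The anytime property described above can be sketched in a few lines: label the scene coarsely first, then keep refining while the compute budget lasts, so that more computing buys finer resolution but any interruption still yields a complete answer. The quadtree-style refinement and the stand-in intensity-threshold classifier below are illustrative assumptions, not the program's actual algorithm.

```python
import time
import numpy as np

def label_block(image, y0, y1, x0, x1):
    """Stand-in per-region classifier (illustrative): threshold on mean intensity."""
    return int(image[y0:y1, x0:x1].mean() > 0.5)

def anytime_label(image, budget_s):
    """Quadtree-style coarse-to-fine labeling that stops when the budget runs out.

    Always returns a complete (if coarse) label map; more compute buys finer
    resolution, which is the essence of an anytime algorithm.
    """
    h, w = image.shape
    deadline = time.monotonic() + budget_s
    labels = np.full((h, w), label_block(image, 0, h, 0, w))  # one coarse label
    queue = [(0, h, 0, w)]                                    # regions to refine
    while queue and time.monotonic() < deadline:
        y0, y1, x0, x1 = queue.pop(0)
        if y1 - y0 <= 1 or x1 - x0 <= 1:
            continue
        ym, xm = (y0 + y1) // 2, (x0 + x1) // 2
        for q in [(y0, ym, x0, xm), (y0, ym, xm, x1),
                  (ym, y1, x0, xm), (ym, y1, xm, x1)]:
            labels[q[0]:q[1], q[2]:q[3]] = label_block(image, *q)
            queue.append(q)
    return labels

img = np.zeros((8, 8)); img[:, 4:] = 1.0   # left half dark, right half bright
coarse = anytime_label(img, budget_s=0.0)  # no time left: one global label
fine = anytime_label(img, budget_s=0.5)    # ample time: both halves resolved
```

The same labeler thus degrades gracefully: the planner always gets a usable map, and extra cycles, when available, only sharpen it.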