So, thanks for organizing it. Okay, let's start. So, today I'd like to chat about the data processing project in OpenStack called Sahara. My name is Sergey Lukyanov, I'm the PTL of the Sahara project. So, let's start with a small overview of the things done in Kilo. Next slide, please.

Okay, so first of all, let's define what Sahara is and what its goal is. Sahara is a project to provide a scalable data processing service for OpenStack, together with a management interface. There are two main directions. The first one is to provision and then operate data processing clusters. The second one is to schedule and operate data processing jobs and workloads. So, the first one is about deploying a Hadoop cluster, for example, or any other data processing cluster. Sahara now supports Spark and Storm as well, in addition to Hadoop. And the second direction is about running big data processing jobs on top of the pre-deployed clusters. Next slide, please.

The EDP functionality, Elastic Data Processing, is Sahara's data processing workflow management. It's the code name of the second direction of Sahara's goals. Next slide, please.

So, there are some numbers for the Kilo release, and let's not spend time on them. They just keep increasing from release to release, and the numbers have pretty much doubled compared to the overview that we did a year ago. Next slide, please.

So, what was done during Kilo? There were a few new plugins introduced and new versions supported in the existing plugins. The newcomers to the Sahara plugins for Kilo are the MapR plugin, which is one of the distributions of Hadoop with a custom, very high-performance distributed file system, and Apache Storm, a real-time data analysis tool. And two existing plugins, Vanilla Hadoop and the Cloudera Distribution Including Apache Hadoop, were updated to the latest versions, and some additional services are now supported by the Cloudera plugin as well. Next slide, please.
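The two directions described above can be sketched with a toy model. This is purely illustrative, assuming plain Python objects rather than the real python-saharaclient API; every name here is invented for the sketch:

```python
# Illustrative sketch only: hypothetical names, NOT the real
# python-saharaclient API.
from dataclasses import dataclass


@dataclass
class Cluster:
    name: str
    plugin: str        # e.g. "vanilla", "spark", "storm", "mapr"
    node_count: int
    status: str = "Spawning"


@dataclass
class Job:
    name: str
    cluster: Cluster
    status: str = "PENDING"


def provision_cluster(name, plugin, node_count):
    # Direction 1: deploy and operate a data processing cluster.
    cluster = Cluster(name, plugin, node_count)
    cluster.status = "Active"
    return cluster


def run_job(name, cluster):
    # Direction 2 (EDP): schedule a workload on a pre-deployed cluster.
    job = Job(name, cluster)
    job.status = "SUCCEEDED" if cluster.status == "Active" else "FAILED"
    return job


cluster = provision_cluster("demo", "vanilla", node_count=3)
job = run_job("wordcount", cluster)
print(job.status)  # → SUCCEEDED
```

The point is only the split of responsibilities: cluster lifecycle on one side, job scheduling against an already-active cluster on the other.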
On the Sahara UI side, there were a bunch of improvements and bug fixes in Horizon, and we added the guided cluster creation and job execution workflows. So, now users can go through a guide in the UI to create the things needed for provisioning a cluster, including creation of the configuration templates for Sahara, like node group templates and cluster templates. The same can be done in a guided way with job executions. All of our object pages in Horizon now contain filtering for search, so it's easy to filter objects — by the cluster name or the type of the job, et cetera. So, let's move on to the next slide, please.

In Kilo, indirect virtual machine access was implemented. So, now there is a way to set some node groups to be gateways for accessing other nodes. The Sahara controllers will use the gateway nodes as proxies to access the other nodes. It's needed to better conserve floating IP addresses, so you don't need to have public IP addresses assigned to all the nodes. And, obviously, Sahara will still be able to access all the nodes. Next slide, please.

So, the next big feature that was implemented in Kilo is the event log. This feature is about exposing additional information about the provisioning status and progress. Now, during cluster provisioning or scaling, you can go to the event log tab in Horizon, or go through the API, and get information about the more granular statuses — for example, how many machines are provisioned and what they are waiting for. It's very helpful, not only for tracking the progress of cluster provisioning, but for debugging issues — issues with the cloud, for example. If you have network issues, with the event log you can easily see that some of the instances are not accessible via the network. Next slide, please.
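The indirect-access idea can be sketched as follows. The `is_proxy_gateway` flag follows the naming used by Sahara node group templates, but the data layout and the routing helper below are simplified assumptions, not Sahara's code:

```python
# Sketch of indirect VM access: nodes without a floating IP are reached
# through a node from a proxy-gateway node group. All helpers here are
# illustrative, not Sahara internals.
def access_route(node, gateways):
    # A node with a floating (public) IP is reached directly; otherwise
    # the controller hops through the first gateway node.
    if node.get("floating_ip"):
        return [node["name"]]
    return [gateways[0]["name"], node["name"]]


node_groups = [
    {"name": "master", "is_proxy_gateway": True,
     "nodes": [{"name": "master-1", "floating_ip": "203.0.113.10"}]},
    {"name": "workers", "is_proxy_gateway": False,
     "nodes": [{"name": "worker-1", "floating_ip": None},
               {"name": "worker-2", "floating_ip": None}]},
]
gateways = [n for ng in node_groups if ng["is_proxy_gateway"]
            for n in ng["nodes"]]

print(access_route(node_groups[1]["nodes"][0], gateways))
# → ['master-1', 'worker-1']
```

This is why only the gateway node group needs floating IPs: every other node stays on the private network and is still reachable by the controller.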
The default templates CLI tool was added in Kilo to give operators the ability to provide default templates for different tenants and projects in OpenStack. It means that there is a bunch of predefined templates for the different plugins in Sahara as a base, and you can use custom ones to generate the default node group and cluster templates for any tenant in your cloud. The users will then be able to use some predefined, ready cluster templates, and after that they will be able to create clusters with just a few clicks — compared to the way without default templates, where you need to create the configuration templates first. So, it was a user experience feature. Next slide, please.

So, let's move on to the Liberty plans. Next slide, please. First of all, let's talk about the new plugins and the updates. The two main distributions of Hadoop, Cloudera and HDP, will be updated in the Liberty cycle. The Cloudera distribution of Hadoop gets the latest version supported, 5.4, and we are going to provide HDFS and YARN high availability mode support in Liberty for both of these plugins. We are currently working on a reworked version of the HDP plugin that is now based on Ambari Blueprints instead of direct API calls. So, the HDP plugin will support all of the services supported by its manager, named Ambari, and the whole big Hadoop world stack could be deployed by the HDP plugin, including Storm, Spark, et cetera, embedded into these clusters, and HA is supported as well, for both YARN and HDFS. And the separate Spark plugin, which could be called a vanilla Spark plugin because it installs Spark from upstream without using the distributions' deployment engines, is getting support for the latest version, 1.3. Next slide, please.

In Liberty, one of the big UX functionalities is data source placeholders, or templates.
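Conceptually, the default-templates tool starts from predefined base templates shipped per plugin and registers tenant-specific copies, optionally with operator overrides. The following is a minimal sketch of that idea only; the function, the data, and the merge logic are all invented for illustration and are not the real tool:

```python
# Hypothetical sketch of the default-templates concept: per-plugin base
# templates, copied and customized per tenant. Not the actual CLI.
BASE_TEMPLATES = {
    "vanilla": {"node_groups": [{"name": "master", "count": 1},
                                {"name": "worker", "count": 3}]},
}


def seed_defaults(plugin, tenant, overrides=None):
    # Start from the predefined base template for the plugin, then apply
    # any operator-supplied overrides for this tenant.
    template = {"plugin": plugin, "tenant": tenant, **BASE_TEMPLATES[plugin]}
    template.update(overrides or {})
    return template


t = seed_defaults("vanilla", "demo-tenant", {"auto_security_group": True})
print(t["tenant"], len(t["node_groups"]))  # → demo-tenant 2
```

The end-user effect is the one described above: the tenant already has ready cluster templates, so creating a cluster takes a few clicks instead of template authoring.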
This feature provides end users the ability to create data sources with some variables inside the URLs — it could be a random value or the job execution ID embedded into the data source URL. It means that if you execute a single job a few times, each job execution will write its outputs, if they are specified with the data source templates, to different places — for example, with a randomly generated suffix. Next slide, please.

So, another big feature is to provide object update support on the API and UI side. Our goal in Liberty is to enable support for updating all of the objects, with strict validation that they are not in use, including validation of the dependencies. For example, you are unable to update a cluster template if there are some clusters created from this cluster template. In addition to it — in fact, as part of it — we're going to support extended ACLs for our objects, and we agreed to provide two types of visibility. The default one is tenant-scoped, like the default objects for OpenStack projects, and the public one will be shared between all the tenants — it's pretty much the same as in Glance. And we're going to add a protected field, like Glance images have, to be able to protect your objects, like clusters and templates, from accidental removal, for example. Next slide, please.

In Liberty, we're planning to separate our integration scenario test tool out of the Sahara tree into a separate repository as a standalone test tool. The spec is currently not created — it's in progress — but at the summit we agreed on doing it and shipping it separately, to support the current master and the few previous releases of OpenStack, so we can use the latest testing tool with the latest scenarios to test all of the currently supported versions of OpenStack. Next slide, please.
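A simplified sketch of how such URL templating can work: `%JOB_EXEC_ID%` and `%RANDSTR(len)%` follow the placeholder style of this feature, but the expansion code below is my own illustration, not Sahara's implementation:

```python
# Sketch of data source URL placeholder expansion. Placeholder names
# follow the feature's style; the expansion logic itself is illustrative.
import random
import re
import string


def render_data_source_url(url, job_exec_id):
    # Substitute the job execution ID directly.
    url = url.replace("%JOB_EXEC_ID%", job_exec_id)

    # Replace each %RANDSTR(n)% with n random lowercase letters, so
    # repeated runs of the same job get distinct output paths.
    def rand(match):
        n = int(match.group(1))
        return "".join(random.choice(string.ascii_lowercase)
                       for _ in range(n))

    return re.sub(r"%RANDSTR\((\d+)\)%", rand, url)


out = render_data_source_url(
    "swift://bucket/output-%JOB_EXEC_ID%-%RANDSTR(6)%", "42")
print(out)
```

Running the same job twice would yield two different output URLs, which is exactly the collision-avoidance behavior described above.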
So, in the Liberty cycle, we have been actively working on overall provisioning reliability, and we'll continue doing it to the end of the cycle, and in the next cycles as well, for sure. In Liberty, the main goals were to support Keystone sessions and to retry OpenStack client calls, to ensure that if there are some networking issues, or some API rate limit issues happening, we will always retry and not fail the cluster provisioning. It's extremely important for huge big data clusters, like a large Hadoop cluster — you don't want the whole cluster creation to fail just because a few instance creations failed. And in Liberty, we have already deprecated the direct provisioning engine, and the Heat engine is now the only provisioning engine supported by Sahara. It means that all changes to the existing engine — I mean, to the previous direct engine — will be blocked, and we're going to remove the source code of the direct provisioning engine in the next cycle. So, I think that's pretty much all about the Sahara Kilo news and the Liberty plans. Thank you very much for your attention.
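The retry idea can be sketched as a small wrapper around a client call. This is a generic exponential-backoff pattern, not Sahara's actual code, and the exception types stand in for the transient errors a real OpenStack client would raise:

```python
# Sketch of retrying OpenStack client calls on transient failures.
# TRANSIENT stands in for real client exceptions (connection resets,
# rate-limit responses, etc.); the wrapper itself is illustrative.
import time

TRANSIENT = (ConnectionError, TimeoutError)


def retry_call(func, attempts=5, base_delay=0.01):
    # Retry with exponential backoff; re-raise after the final attempt
    # so a genuinely broken call still fails loudly.
    for attempt in range(attempts):
        try:
            return func()
        except TRANSIENT:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))


calls = {"n": 0}


def flaky_boot():
    # Simulates an instance-boot API call that fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network issue")
    return "ACTIVE"


result = retry_call(flaky_boot)
print(result)  # → ACTIVE, after two retried failures
```

The point made in the talk is exactly this: a couple of transient instance failures should cost a retry, not the whole cluster provisioning.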