Okay, thank you. So my name is Sergey Lukjanov. I'm the Project Technical Lead for the OpenStack Data Processing program, code-named Sahara. I'd like to give a short overview of the project, some highlights of the things done during the Juno release, and some plans for Kilo. So Sahara provides a scalable data processing stack and management interfaces, and it includes two main directions. The first one is provisioning and operations for data processing clusters like Hadoop, Spark, and Storm clusters. And the second direction is about running and operating data processing job workloads on top of clusters provisioned by the first part of the project. EDP itself is Sahara's take on data processing workflow management, and right now it's a very pluggable mechanism that makes it possible to implement your own workload managers for different processing frameworks, or to use some existing ones like Oozie for Hadoop. Next slide, please. For now we're using Apache Oozie for managing workloads on top of Hadoop clusters, and we're using the Spark manager for Spark clusters. There is actually no workload manager for Storm right now, but probably we'll make one. On the next slide, we can see some stats for the Juno release. The main difference between this and previous releases is that Sahara was officially included in the integrated OpenStack release in Juno, and so we already see very good growth in the number of contributors and in the contributions themselves. You can find more numbers on the Launchpad page for Sahara; let's move on to the next slide. And let's talk a bit about the main changes made in the Sahara project during the Juno cycle. Firstly, we moved to the specs process for new features, not just instead of, but in addition to, filing blueprints. So right now we're using specifications for most of the old and new features that will be added to Sahara, and it works well enough for now.
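The pluggable workload-manager idea can be illustrated with a small sketch. This is not Sahara's actual API; the class names, method names, and registry keys below are hypothetical, just to show how one engine per framework (an Oozie-style engine for Hadoop, a Spark one for Spark) might be selected at run time.

```python
from abc import ABC, abstractmethod


class JobEngine(ABC):
    """Hypothetical EDP workload-manager interface (names are illustrative)."""

    @abstractmethod
    def run_job(self, job):
        """Submit a job to the cluster and return an execution id."""


class OozieJobEngine(JobEngine):
    # In a real system this would talk to the Oozie REST API on a
    # Hadoop cluster.
    def run_job(self, job):
        return "oozie-%s" % job


class SparkJobEngine(JobEngine):
    # In a real system this would invoke spark-submit on a Spark cluster.
    def run_job(self, job):
        return "spark-%s" % job


# Engine registry keyed by the cluster's plugin, so the EDP layer stays
# framework-agnostic: supporting Storm would be just one more entry here.
ENGINES = {"vanilla": OozieJobEngine, "spark": SparkJobEngine}


def run_on_cluster(plugin_name, job):
    engine = ENGINES[plugin_name]()
    return engine.run_job(job)
```

The point of the registry is that nothing above it needs to know which framework a cluster runs; that matches the "no hard-coded approach" described in the talk.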
So the next thing done during the Juno cycle is that the Sahara dashboard, which was previously maintained and developed in a separate Git repository, has been completely merged into Horizon. It's now available out of the box in Horizon installations, and it will be enabled automatically if there's a data processing endpoint available in the Keystone service catalog. The next thing, as I already said before, is the pluggable, framework-agnostic EDP mechanism. So data processing jobs are now run without any hard-coded approach. In fact, a new plugin could be written for Sahara to implement data processing cluster provisioning, and starting from the Juno release, one more plugin could be written to support running workloads on top of such clusters. So right now we could, in theory, support any data processing cluster in Sahara, and we're going to implement some new plugins for new and popular data processing frameworks. Talking about the changes done in Juno to the supported distributions and data processing frameworks, the next slide is about that. We started supporting the 2.4 branch of vanilla Apache Hadoop in the Juno release, and a brand new plugin has been added to support the Cloudera distribution of Apache Hadoop for the whole 5.x branch. And we started supporting the Spark data processing framework too, in addition to Hadoop, and it was the first non-Hadoop plugin done for Sahara. The whole plugin mechanism has been very well tested and validated by adding this new plugin, because Spark is an absolutely different thing. The next change made during the Juno cycle was the addition of Ceilometer notifications. So now we're reporting changes to data processing cluster statuses to Ceilometer, and we can fetch some statistics from Ceilometer about the cluster lifecycle.
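As a rough illustration of the status notifications, here is a minimal sketch. It is not Sahara's real code: the event name, payload fields, and the local list standing in for the message bus are all assumptions, shown only to make the "report status changes, then query statistics" flow concrete.

```python
import time

# Collected notifications; in the real service these would go to the
# message bus and be consumed by Ceilometer, not appended to a list.
SENT = []


def notify_cluster_status(cluster_id, old_status, new_status):
    """Emit a (hypothetical) status-change notification payload."""
    payload = {
        "event_type": "sahara.cluster.update",  # assumed event name
        "cluster_id": cluster_id,
        "old_status": old_status,
        "new_status": new_status,
        "timestamp": time.time(),
    }
    SENT.append(payload)
    return payload


def set_status(cluster, new_status):
    # Every transition is reported, so a consumer can reconstruct the
    # cluster lifecycle from the stream of notifications.
    old = cluster["status"]
    cluster["status"] = new_status
    notify_cluster_status(cluster["id"], old, new_status)


cluster = {"id": "c-1", "status": "Spawning"}
for status in ("Configuring", "Starting", "Active"):
    set_status(cluster, status)
```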
Okay, during the Juno cycle, we implemented a bunch of resources for Heat, and that includes the ability to create node group and cluster templates for Sahara using Heat resources, and to create the Sahara cluster itself. So the first direction of Sahara, which is about provisioning data processing clusters, is now fully available from the Heat side. You can write a Heat stack that includes your own resources, and you can add a few resources to deploy a Hadoop cluster alongside the rest, for example. Okay, and the last big change done in Juno is the addition of security group support, and especially automatic security group creation. So for now, Sahara is able to automatically create security groups for data processing clusters that open ports between nodes that need to communicate, and open some ports to the public network only on the nodes that need to be accessed from the internet. So let's take a look at the Kilo plans. We're going to support new versions for all of the different plugins, including support for the new Hadoop 2.6 that was released about a week ago. It will be added both to the HDP plugin, which provisions the Hortonworks Data Platform distribution of Hadoop, and to our vanilla plugin, which is an implementation of provisioning upstream Hadoop rather than one of the distributions. In addition, we're going to have Apache Storm plugin support. It already landed in Sahara yesterday, so we'll be supporting one more data processing framework. Apache Storm is a real-time message processing system, so it makes users able to process streams of messages, like Twitter feeds and so on. The next one is dashboard UX improvements, and it includes things like adding filtering to the different pages and adding some wizards to make the process of creating clusters and jobs a bit easier than it is now. The next point is about the Heat integration.
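To make the Heat side concrete, here is a minimal stack sketch built as JSON (Heat accepts both YAML and JSON templates). The resource types OS::Sahara::ClusterTemplate and OS::Sahara::Cluster are the ones the talk refers to, but the exact property names below are approximate and should be checked against the resource reference of your Heat release.

```python
import json

# A minimal Heat stack combining a user's own resources with Sahara ones:
# a cluster template plus a cluster created from it. Property names are
# assumptions for illustration, not a verified schema.
template = {
    "heat_template_version": "2013-05-23",
    "resources": {
        "cluster_template": {
            "type": "OS::Sahara::ClusterTemplate",
            "properties": {
                "plugin_name": "vanilla",
                "hadoop_version": "2.4.1",
            },
        },
        "cluster": {
            "type": "OS::Sahara::Cluster",
            "properties": {
                "plugin_name": "vanilla",
                "hadoop_version": "2.4.1",
                # get_resource wires the cluster to the template above.
                "cluster_template_id": {"get_resource": "cluster_template"},
            },
        },
    },
}

rendered = json.dumps(template, indent=2)
print(rendered)
```

A stack like this would be launched with the usual Heat client workflow, side by side with whatever other resources (networks, volumes, and so on) the stack defines.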
It's mostly related to upgrading our internal mechanism for working with Heat to the latest version of Heat templates, including HOT. And the last point is about Ironic support. It's mostly about checking that everything works okay with Ironic, and about supporting the building of pre-installed images for Ironic with the data processing frameworks already installed. And we are going to support the very important case of provisioning hybrid clusters, with part of the cluster on bare metal machines and part of the cluster on virtual machines. For example, to give users the ability to deploy some permanent parts of the cluster on hardware, and to provision, say, compute capacity on demand on virtual machines. Okay, so I think that's all from me for the Sahara update. If you'd like to contact us, or if you have some questions, you can always find us on the #openstack-sahara channel on Freenode, or on the openstack-dev mailing list. Some more contact points will be added in the YouTube video description. Thank you for your attention.