We have Flavio and Sergey on the line, who will be going over their project updates from Juno to Kilo and anything else they think would be relevant for our users and operators out there. Today we're going to kick it off with Flavio, who will be providing the updates for OpenStack's messaging. So Flavio, whenever you're ready. Okay, thank you, Allison. So, yeah, I'm Flavio Percoco, the current PTL for the OpenStack messaging program. The project's code name is Zaqar; pronounce it however you like, honestly. And, as has already been said, we're just going to go through the Kilo plans that we discussed back in Paris at the summit. So, if we get to the next slide. Before we even get started with the Kilo plans, I'd like to take a few minutes to explain what Zaqar actually is. And before you even ask what Zaqar means or where the name comes from: in Mesopotamian mythology, Zaqar was the messenger of the god Sin, and he delivered messages to people through dreams and nightmares. Obviously the relation with the name is about the nightmares, and not exactly about the messaging part. So that's where the name comes from. We had to change the name: the project was previously called Marconi, but there were some issues related to that name and we were basically forced to change it, and this is what we came up with. So, now that I've said this, let's get to something more technical on the next slide. So, Zaqar 101. Zaqar is a messaging service for OpenStack. It's not the only one; there are other solutions for different areas. The key thing about Zaqar is that it is a data API, that's what it provides; it doesn't provide any provisioning API. And it provides messaging features and solutions for different messaging patterns.
And obviously it wants to be easily scalable and easy to maintain, and to provide all the Python libraries that are needed to interact with the service. If you are familiar with other vendors, Zaqar would be something similar to a hosted queuing service, or to AWS SQS and SNS put together. I say "put together" because when you read "messaging" in Zaqar's description (by the way, what I put in that slide is actually the mission statement that we have in the governance repository), we are actually referring not just to the ability to send and receive messages, but also to the ability to have different kinds of notifications. That's actually something we will be working on during Kilo, and I think it's the next point, on the next slide. So, yeah, we can move to the next slide now.

So, notifications. There are a few things we actually want to work on in Kilo, and we tried hard to keep the list small, because at previous summits we came up with many ideas and many things we wanted to work on, and obviously time is limited. So we went through all the feedback we have gotten so far from the community and from different discussions on the mailing list, and we decided to pick very few things and work on those. There are two main brand-new features that we want to have in the service during Kilo. One of those is notifications. Like I said, Zaqar aims to provide notifications as well, so we're going to implement these as part of the new version of the API. We will add notifications so you will be able to subscribe to the service in many different ways, and we're not only talking about notifications in the sense where you connect to the server and messages are basically pushed back to the client library; we would also like to have different kinds of notifications, like pushing messages through mobile push services such as APNs, emails, or even SMS text messages. But that's part of the future. The two publishers we want to focus on for the Kilo release are, first, webhooks: you subscribe a URL to a specific queue (or topic, if you will), and when messages get there, you receive them at the URL you subscribed. And second, you can get those messages back on the client if you have a persistent connection to the server, which actually takes us to the next thing we want to implement, and I think it is on the next slide.

So, yeah, persistent transport. This is something that we currently don't have. Zaqar is pretty much (I hate this word, but I'm going to use it anyway) pluggable, so you can create different plugins for different parts of the service, and we have this pluggability on the storage side and the transport side of the service. On the transport side, though, we don't have any plugins besides the one we support right now, which is HTTP. That basically means that if you want to talk to the service, you have to use an HTTP client and send an HTTP request to the server, and it will process it for you. Something we definitely want to have is the ability to connect to the server and keep that connection alive, so that you can basically work around the burden and overhead of the HTTP protocol. So we want to have the ability to connect to the server and keep a persistent connection there.
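The webhook idea described above can be sketched very roughly in code. This is a minimal illustration only: the endpoint path, field names (`subscriber`, `ttl`, `options`), and API version are assumptions I'm making for the example, not the actual Kilo API, which hadn't been finalized at the time of this talk.

```python
import json

def build_webhook_subscription(queue, url, ttl=3600):
    """Build a hypothetical JSON body for subscribing a webhook URL to a queue.

    The endpoint path and field names here are illustrative assumptions,
    not Zaqar's final API.
    """
    path = "/v2/queues/{}/subscriptions".format(queue)
    body = {
        "subscriber": url,  # messages posted to the queue get POSTed here
        "ttl": ttl,         # how long the subscription stays active, in seconds
        "options": {},      # publisher-specific settings (e.g. extra headers)
    }
    return path, json.dumps(body)

path, body = build_webhook_subscription("alerts", "http://example.com/hook")
```

The point is simply that a subscription binds a queue name to a delivery target; everything else about the request shape is a guess.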
Something that we also want to have is better support for browsers, and therefore we have chosen WebSocket as the protocol for the first implementation of a persistent transport. WebSocket has been around for quite a few years already; I'm not going to say a long time, but quite a few years. There have been many iterations of the protocol's specification, and it is supported by most mainstream browsers nowadays. So we wanted something that is cross-browser and can be used by many people from the browser. In addition, WebSocket can also be used outside the browser: if you have a WebSocket library for your preferred language, you can use that library and talk to Zaqar through the WebSocket transport. That's something we will actually do in the Zaqar client; we will have support for WebSocket there as well, and hopefully at some point it will do something fancy like falling back to different protocols based on what's available on the server. Implementing this persistent transport will also require a somewhat different implementation of our current protocol, or current API. The API won't change at all in terms of actions, but it will change in terms of form: for the persistent transport, for the WebSocket transport, we will convert it into something serializable that can be sent through a WebSocket. So we will translate what we have in the HTTP transport into some kind of dictionary, or something like that, that we will be able to send to the server through WebSockets. That's probably the gist of the work here, and I'm very keen on what we're doing here, so I'm looking forward to having it ready. And so we can now move to the next slide. Can we move to the next slide? The other thing we want to work on is storage capabilities.
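The "translate the HTTP API into a serializable dictionary" idea just described can be sketched like this. The action name, envelope keys, and header names are hypothetical; the point is only that an HTTP verb-plus-path call becomes a self-describing JSON document that can travel over a single long-lived WebSocket connection.

```python
import json

def to_ws_frame(action, headers=None, body=None):
    """Translate an HTTP-style API call into a serializable envelope that can
    be sent over a WebSocket. Action names and envelope keys are assumptions
    for illustration, not Zaqar's actual wire format."""
    return json.dumps({
        "action": action,          # e.g. "message_post" instead of POST /v2/queues/.../messages
        "headers": headers or {},  # auth token, client id, etc.
        "body": body or {},
    })

frame = to_ws_frame(
    "message_post",
    headers={"Client-ID": "abc123"},
    body={"queue_name": "alerts",
          "messages": [{"ttl": 300, "body": {"event": "created"}}]},
)
```

Because the envelope carries the action name explicitly, the same set of API actions can be exposed over HTTP and over WebSocket without changing their semantics.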
So, like I said, we have this pluggability on the storage side as well. You can write your own storage driver and use it from Zaqar, but a storage driver currently has to support every single feature supported by the built-in storage drivers that we have. We want to make this layer more flexible, so that other people can implement storage drivers that don't necessarily have to live in our code base, and those drivers can support what those people need, depending on what they want to do. In order to do that, we need to convert all the features that we currently have in our storage drivers into something called capabilities, and we will expose those in the API through flavors. These capabilities are basically a way for the driver to declare the things it supports. For example, a driver may opt out from supporting claims, or it may opt out from supporting FIFO, depending on what technology it is sitting on, and it may also opt out from supporting durability in favor of a higher throughput, for example. This is actually the base feature that we need in order to implement the next one on the list, and if you want, you can move to the next slide now. So the next one on the list is actually optional FIFO, and the previous one was basically the basis for implementing it. At the very beginning, when we started working on Zaqar (like I said, at the time it was called Marconi), we got some feedback from the community, and something the community told us is that not having FIFO in services like SQS was actually very painful. So we heard their feedback, and we wanted to have full support, like a 100% guarantee, of FIFO in the service.
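The capabilities idea above can be sketched as drivers declaring what they support and flavors being matched against those declarations. The capability names and driver classes here are purely illustrative, not Zaqar's actual code.

```python
# Sketch: each storage driver declares a set of capabilities, and a flavor's
# requirements are matched against that set. All names are illustrative.

class StorageDriver:
    capabilities = frozenset()

class MongoDriver(StorageDriver):
    # a durable, ordered driver
    capabilities = frozenset({"fifo", "claims", "durability"})

class FastCacheDriver(StorageDriver):
    # hypothetical driver trading durability and ordering for throughput
    capabilities = frozenset({"claims", "high_throughput"})

def satisfies(driver_cls, required):
    """Return True if the driver supports every capability the flavor asks for."""
    return set(required) <= driver_cls.capabilities

print(satisfies(MongoDriver, {"fifo", "claims"}))  # True
print(satisfies(FastCacheDriver, {"fifo"}))        # False
```

A driver that "opts out" of FIFO or durability, as described above, simply leaves that name out of its capability set, and any flavor requiring it won't be scheduled onto that driver.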
And we did that; the currently released version has full support for FIFO. But it turns out there are basically two issues related to FIFO. The first one is not bad in itself, but there are some costs: there is some overhead related to FIFO, depending on the storage driver. You have to do some magic to actually guarantee 100% ordering; some technologies may have it built in, others don't, so you may have to work around it and hack it somehow in the driver. Along those lines, there are some technologies that won't support it at all, which is the second issue, and since those are very valuable, good technologies that may be a good fit for a Zaqar storage driver, we don't want them to have to pay the price of something that we chose as a need for the service. So after hearing the latest feedback we got from our community, we decided to make FIFO optional. It will depend on the driver itself and how it is configured, and you can basically opt in or opt out from having FIFO on a per-deployment basis. So this is something that's definitely coming in Kilo as well. We can now move to the next slide. We're almost at the end of my presentation; I don't think I'll use the whole 15 minutes, or I've probably already used them. Can we move to the next slide? Okay, perfect. So, queues to topics. This is something that we haven't decided yet. Time permitting, we would like to rename what we have right now, called queues, to something called topics, and stop having a first-class resource that we need to create in the database. This is just a pure internal optimization for Zaqar, so that we can save space in the storage. Storage technologies that need to have the queue resource created can still create it.
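The trade-off behind optional FIFO can be shown with a toy queue: with `fifo=True` the driver pays for bookkeeping and ordered retrieval; with `fifo=False` it can hand back whatever is cheapest. This is purely illustrative and not Zaqar's storage API.

```python
import itertools

class ToyQueue:
    """Toy queue illustrating opt-in FIFO. With fifo=True, strict ordering is
    guaranteed at some cost; with fifo=False, messages may come back in any
    order. Not Zaqar's actual storage interface."""

    def __init__(self, fifo=True):
        self.fifo = fifo
        self._counter = itertools.count()
        self._items = []

    def post(self, message):
        # An ordered driver must persist a sequence marker; an unordered one
        # could skip this bookkeeping entirely.
        self._items.append((next(self._counter), message))

    def pop(self):
        if self.fifo:
            # strict FIFO: always return the oldest message
            self._items.sort(key=lambda pair: pair[0])
            return self._items.pop(0)[1]
        # no ordering guarantee: return whatever is cheapest to grab
        return self._items.pop()[1]

q = ToyQueue(fifo=True)
for m in ("a", "b", "c"):
    q.post(m)
order = [q.pop() for _ in range(3)]  # ["a", "b", "c"] when fifo=True
```

On a backend with no native ordering, the `fifo=False` path avoids the sequence bookkeeping entirely, which is exactly the per-deployment choice described above.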
But if there's no need to do that, like in the MongoDB driver that we have, or even the Redis one, you can just skip it and not create it at all. We would like to do this switch from queues to topics, which would be a more lightweight resource to have in the service, but this hasn't been decided yet, so it may or may not happen before the next release. We can now move to the next one. So I think that's pretty much it. If we jump forward to April 2015, we will have a release that has notifications, persistent transport, storage capabilities, and optional FIFO. And well, the dots down there are because prioritizing things is actually harder than time traveling, so things might be moved to the next release, or some other things may come into this one. At a high level, this is what we would like to do. No promises made; we will hopefully have all this and more, but we will see. And we will move to the next slide. This is a story yet to be continued and yet to be told; we will see what happens in the next release. But if you have any other questions, or you would like to join in and help with anything and you are interested in the project, we are all at #openstack-zaqar on Freenode. I'm Flavio Percoco, as already said; my email is flavio@redhat.com, and I'm flaper87 on IRC and Twitter if you have any more questions. Thank you very much. Awesome, thank you, Flavio. And again, we will have his contact information and a link to the IRC channel in the description for the YouTube video, so if you do want to get involved or have any questions at all for Flavio, please feel free to reach out and get involved in any way that you can. Also today we have Sergey, who will be going over the updates for OpenStack data processing. Sergey, whenever you are ready, go ahead and start. Okay, thank you. So my name is Sergey Lukjanov. I'm the Project Technical Lead for the OpenStack Data Processing program.
The project's code name is Sahara, and I'd like to make a short overview of the project, some highlights of things done during the Juno release, and some plans for Kilo. So, Sahara provides a scalable data processing stack and management interface, and it includes two main directions. The first one is the provisioning and operation of data processing clusters, like Hadoop, Spark, and Storm clusters. The second direction is about scheduling and operating data processing jobs and workloads on top of the clusters provisioned by the first part of the project. EDP itself is Sahara's take on data processing workflow management, and right now it's a very pluggable mechanism that makes it possible to implement your own workload managers for different data processing clusters, or to use some existing ones, like Oozie for Hadoop. Next slide, please. For now we are using Apache Oozie for managing workloads on top of Hadoop clusters, and we are using the Spark job manager for Spark clusters; there is no workload manager for Storm yet, but we will probably make one. So on the next slide we can see some stats for the Juno release. The main difference between this and previous releases is that Sahara was officially included in the integrated OpenStack release in Juno, and so we already see very good growth in the number of contributors and in the contributions themselves. More info can be found on the Launchpad page for Sahara. Let's move on to the next slide and talk a bit about the main changes that happened in the Sahara project during the Juno cycle. Firstly, we moved to the specs process for new features, not just instead of, but in addition to, following blueprints. So right now we are using specifications for most of the old and new features that will be added to Sahara, and it works well enough for now.
The next thing done during the Juno cycle is that the Sahara dashboard, which was previously maintained and developed in a separate Git repository, has been completely merged into Horizon, and now it's available out of the box in Horizon installations; it will be enabled automatically if a data processing endpoint is available in the Keystone services catalog. The next thing, as I already said before, is the pluggable EDP mechanism. Data processing is now done without any hard-coded approach: a new plugin can be written for Sahara to implement data processing cluster provisioning, and starting from the Juno release, one more plugin can be written to support a new workflow engine on the data processing cluster. So right now, in theory, Sahara could support any data processing cluster, and we're going to implement some new plugins for new and popular data processing frameworks. Talking about the changes done in Juno around the supported distributions and data processing frameworks: we started supporting the 2.4 branch of upstream Apache Hadoop in the Juno release, a brand new plugin has been added to support the Cloudera distribution of Apache Hadoop for the whole 5.x branch, and we started supporting the Spark data processing framework in addition to Hadoop. It was the first non-Hadoop plugin that was done for Sahara, and the plugin mechanism approach was very well tested and validated by adding this new plugin, because Spark is an absolutely different thing. The next feature and change done during the Juno cycle is the addition of Ceilometer notifications: we now report the changes of data processing cluster statuses to Ceilometer, and we can fetch some statistics from Ceilometer about the clusters' life cycle.
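The pluggable mechanism just described, where one plugin provisions a cluster type and another runs jobs on it, can be sketched as a pair of interfaces. The method names and classes here are my own illustration, not Sahara's actual plugin SPI.

```python
# Sketch of the pluggable idea: a provisioning plugin knows how to stand up a
# specific cluster type, and an EDP engine knows how to run jobs on it.
# Method and class names are illustrative, not Sahara's real interfaces.

import abc

class ProvisioningPlugin(abc.ABC):
    @abc.abstractmethod
    def start_cluster(self, cluster):
        """Deploy and start the framework processes on the cluster's nodes."""

    @abc.abstractmethod
    def get_edp_engine(self, cluster):
        """Return the workload manager used to run jobs on this cluster."""

class SparkPlugin(ProvisioningPlugin):
    """A non-Hadoop framework plugged in through the same interface."""

    def start_cluster(self, cluster):
        return "starting Spark on {} nodes".format(len(cluster["nodes"]))

    def get_edp_engine(self, cluster):
        return "spark-engine"

plugin = SparkPlugin()
status = plugin.start_cluster({"nodes": ["n1", "n2", "n3"]})
```

The value of the split is that adding an absolutely different framework, as Spark was, means writing one new class against a stable interface rather than touching hard-coded provisioning logic.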
Okay, during the Juno cycle we also implemented a bunch of Heat resources for Sahara, which includes the ability to create node group templates and cluster templates for Sahara using Heat resources, and to create the Sahara cluster itself. So the first part of Sahara, provisioning data processing clusters, is now fully available from the Heat side: a Heat stack can include Sahara resources, and you can add a few resources to deploy a Hadoop cluster with hundreds of nodes, for example. Okay, and the last big change done in Juno is the addition of security groups support, and especially of automatic security group creation. Sahara is now able to automatically create security groups for data processing clusters that open ports between the nodes that need to communicate, and open ports to the public network only on the nodes that need to be accessed from the Internet. So let's take a look at the Kilo plans. We are going to support new versions for all of the different plugins, including support for the new Hadoop 2.6 that was released about a week ago; that covers the Cloudera distribution plugin, the Hortonworks Data Platform plugin, and our vanilla plugin, which provisions upstream Hadoop rather than one of the vendor distributions. In addition, we are going to have Apache Storm plugin support; it's not merged into Sahara yet. So we are going to support one more data processing framework: Apache Storm is a real-time message processing service, so it lets users process streams of messages, like Twitter queries, etc.
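The automatic security-group behavior mentioned a moment ago, internal ports open only between cluster nodes, public ports open only on Internet-facing nodes, can be sketched as a small rule builder. Port numbers and the rule shape are illustrative assumptions, not Sahara's actual implementation.

```python
# Sketch of auto-security-group logic: internal ports are restricted to the
# cluster network, and public ports are added only for nodes that must be
# reachable from the Internet. Port numbers and rule shapes are illustrative.

def build_rules(node, cluster_cidr, internal_ports, public_ports):
    rules = [
        {"port": p, "cidr": cluster_cidr}  # node-to-node traffic only
        for p in internal_ports
    ]
    if node.get("public"):
        rules += [
            {"port": p, "cidr": "0.0.0.0/0"}  # reachable from the Internet
            for p in public_ports
        ]
    return rules

master = {"name": "master", "public": True}
worker = {"name": "worker-1", "public": False}

master_rules = build_rules(master, "10.0.0.0/24", [50010], [8088, 50070])
worker_rules = build_rules(worker, "10.0.0.0/24", [50010], [8088, 50070])
```

Only the master ends up with Internet-facing rules; the worker's single rule is scoped to the cluster's own network, which is the behavior the talk describes.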
Next is dashboard UX improvements, which include things like adding filtering to the different pages and adding some wizards to make the process of creating clusters a bit easier than it is now. The next point is about Heat integration: it's mostly related to upgrading our internal mechanism of working with Heat to the latest version of Heat templates, including HOT. And the last point is about Ironic support. It's mostly about checking that everything works okay with Ironic and supporting the building of pre-installed images for Ironic with the data processing frameworks installed. We are also going to support a very important use case of provisioning hybrid clusters, with part of the cluster on hardware machines and part of the cluster on virtual machines; for example, to give users the ability to deploy some permanent parts of the cluster on hardware and to provision extra compute capacity on demand on virtual machines. Okay, so I think that's all from me for the Sahara update. If you'd like to contact us, or if you have some questions, you can always find us in the #openstack-sahara channel on Freenode, or on the openstack-dev mailing list, and some more contact points will be in the YouTube video description. Thank you for your attention. Awesome, thank you, Sergey, for your time. And like he said, those links will be available in the YouTube description, so please feel free to reach out if you have any questions or you would like to learn more about either of these projects. Thank you.