webinar series. These series evolved from the sessions the PTLs held at each summit to give updates on their projects, and we've converted them into these quick webinars to extend the reach of those events beyond the summit. Today our PTLs are going to update you on what may be new for Juno, as well as detail any items of note that are new for our users or operators. Today I have Sergey Lukjanov, whose name I probably said incorrectly, sorry Sergey, with Data Processing; Doug Hellmann with Common Libraries; and Thierry Carrez with Release Management. Each PTL is going to take about 10 to 20 minutes. So we will start with Sergey, and I will move over to your presentation, Sergey. Feel free to take it from here when you're ready.

Okay, thank you. My name is Sergey Lukjanov. I'm the Project Technical Lead for the Data Processing program. It consists of the Sahara project, which was named Savanna at the beginning of the Icehouse release but was renamed due to some legal issues. So today is about our current state and an overview of the Icehouse results. Let's start with a few bits about our mission statement: it's to provide a scalable data processing stack and associated management interfaces. So there are two main directions for the Sahara project. The first one is to provision and operate Hadoop clusters with different topologies and configurations, hiding the details of provisioning and configuration, adapting them to the particular environment, and so on. The second direction is to schedule and operate Hadoop jobs and workloads. This second direction is named Elastic Data Processing, or EDP for short. Okay, so EDP takes care of data processing workflow management, and the main goal of EDP is to let end users get answers about data without having to know a single thing about cluster management. Of course, for the users, those are the high-value questions.
Currently we support only Hadoop as the data processing tool, but there's a very large world of different tools, a whole ecosystem of tools for data processing. We have the intention to implement support for other data processing tools soon, and we'll have one in Juno. Okay, let's take a look at the Icehouse release overview. These are boring numbers, but I'd like to note that we doubled the number of contributors and reviews compared with the Havana cycle. And you can find more details on the Sahara Icehouse release page on Launchpad. About the project status: Sahara was incubated in the Icehouse cycle, and it successfully graduated at the end of Icehouse to be part of the integrated release in Juno. And we're doing the work now to make that happen in Juno. So let's talk a bit about the most interesting features that were introduced during the Icehouse release. The first one is Heat-based cluster provisioning. The feature is about using Heat by generating Heat templates and sending them to Heat to provision all of the resources that we need to set up a Hadoop cluster, and all the links that we need to make it work. The next feature is Hadoop 2 support, and here's why it's important: Hadoop 2 was released in early Icehouse, and there are a lot of people in the big data area starting to use Hadoop 2, and most of them would like to try it, to try different topologies and configurations, and that's why it's important to support it in Sahara. And of course we're building Sahara to make these things much, much easier than you can do by hand with Hadoop 2. There are two important things here: the first one is that we support Hadoop 2 in both the vanilla and the HDP plugins. HDP is the Hortonworks Data Platform plugin; it's one of the biggest Hadoop distributions. And of course, our Elastic Data Processing functionality supports Hadoop 2. Let's move on.
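The Heat-based provisioning described above boils down to generating a template of resources and handing it to Heat. As a rough sketch — illustrative only, not the actual template Sahara generates; the resource names, image, and flavor below are placeholder values — a minimal Heat-style template for a small cluster could be assembled like this:

```python
import json

def build_cluster_template(node_count, image, flavor):
    """Build a minimal Heat-style template describing a cluster.

    Illustrative only: the resource layout is a simplification of what
    a provisioning engine like Sahara's Heat engine might generate.
    """
    resources = {}
    for i in range(node_count):
        resources["node-%d" % i] = {
            "type": "OS::Nova::Server",  # one VM per cluster node
            "properties": {"image": image, "flavor": flavor},
        }
    return {"heat_template_version": "2013-05-23", "resources": resources}

# Placeholder image/flavor names for illustration.
template = build_cluster_template(2, "fedora-hadoop", "m1.large")
print(json.dumps(template, indent=2, sort_keys=True))
```

The real engine also wires up networks, security groups, and the links between nodes; the point is just that the whole cluster becomes one declarative document that Heat can create, update, or delete as a unit.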
There was a bunch of EDP improvements, including HBase support using the HDP plugin. It's really a NoSQL database, but it's not exposed from Sahara as a standalone database; it's used for data processing, because a lot of data processing work is based on it, and it can be used as a data source for EDP jobs internally in the cluster. And we are using Sqoop to be able to use external SQL storage; Sqoop is a Hadoop tool to communicate with SQL databases. Additionally, we implemented support for streaming and plain Java MapReduce jobs and workloads. And external Hadoop Distributed File System is now supported as a data source for EDP workloads. Okay. One more important thing is that during Icehouse we implemented a bunch of API tests in Tempest and we stabilized them. And right after the Icehouse release, in early Juno, we enabled Sahara in the integrated gate. So now we are able to say that Sahara is tested in the gate, and we intend to increase the number of tests run for each patch to OpenStack to be sure that it's still working. Okay. One more feature that was added during Icehouse is CLI support in the Python Sahara client. In Icehouse it was implemented for basically all of the main functionality, so you can now use not only the dashboard for Sahara, but also the CLI to provision clusters, run jobs, and all of that. Okay. Let's take a look at our Juno plans a bit. The main plan for Juno, as I said, is to support a non-Hadoop plugin, a non-Hadoop data processing tool in the same way. It will be Spark. It's a batch data processing tool with support for streaming jobs. And of course, this goal covers not only support for provisioning Spark clusters, but supporting Spark in EDP too, to cover both directions of the Sahara mission. The second goal is to merge our Sahara dashboard plugin into Horizon, to make our Sahara pages available by default in the OpenStack dashboard.
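Sqoop, mentioned above as the bridge to external SQL storage, is a command-line tool. As a hedged illustration, a typical `sqoop import` invocation could be assembled like this — the JDBC URL, table name, and HDFS path are placeholder values:

```python
def sqoop_import_cmd(jdbc_url, table, target_dir):
    """Assemble a typical 'sqoop import' command line.

    The options shown are standard Sqoop import flags; the connection
    details passed in below are placeholder values for illustration.
    """
    return [
        "sqoop", "import",
        "--connect", jdbc_url,       # JDBC URL of the SQL database
        "--table", table,            # source SQL table to import
        "--target-dir", target_dir,  # HDFS directory for the imported data
    ]

cmd = sqoop_import_cmd("jdbc:mysql://db.example.com/sales", "orders", "/data/orders")
print(" ".join(cmd))
```

Sahara's EDP layer drives this kind of tooling for you, which is exactly the "no cluster knowledge required" goal mentioned earlier.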
And there are two Heat-related things in our plans for Juno. The first one is to implement Sahara support as a resource in Heat. It means that we'd like to support all of our provisioning functionality using Sahara resources in Heat. And the second one is to make our Heat engine, the Heat provisioning engine in Sahara, the default engine. And of course, we have plans for testing improvements. That includes at least an additional fake-plugin-based test suite that will provision some very simple cluster topologies using the fake plugin. It will guarantee for us that our implementation of the Heat-based engine works well and that we can use it by default. So I think that's all from my side. Thank you all for your attention, and I think questions will be at the end of the webinar.

Great. Thanks, Sergey. Let me just see if we have any questions now, or we can do them at the end. Okay, it looks like we don't yet. So thank you very much. Okay, I'm going to move over to Doug. So Doug, when you're ready.

All right, thanks, Margie. So I'm Doug Hellmann. I'm the project lead for the Oslo program, and I'm going to start out today by giving a little bit of background on the project, because it is not necessarily one that is as public-facing as some of the other application-based programs. Our mission statement is to create a set of libraries that the other projects will use as a foundation for the work that they're doing, and to build those with stable APIs, with reliable testing and high quality and that sort of thing, so that we have good documentation and consistency and reliable code that we can share among the different projects. Before Oslo started, the way we shared code between projects in OpenStack was literally copying modules between the different repositories, between the different applications, which meant that something that started in Nova might make its way into another program, into one of their applications.
But then the two different versions would drift apart as different bugs were fixed in different places and features were added in different places. So Oslo was started as a way to unify all of that and stop doing so much copying. What Oslo provides — you can move on to the next slide — is management for the code copying that we do still do. We're still copying some code, because we want to provide time for APIs to evolve before we lock them down and make a library out of them, and the easiest way to do that is still to copy the code between projects, so that they can adopt those changes at their own pace, when they're ready to make an API change internally. But we're managing the copying using some tools that we've built, so that we actually centralize all of the changes: we try to have a single copy where new features are added or bug fixes are made, and then that copy is replicated into the projects that are using it. And then we also have a process for stabilizing that code and turning it into a library once the APIs are stable. The goal of making those stable libraries is to provide consistent behavior for deployers. So we want things like consistent configuration options across the different projects; better third-party dependency management, so that we don't have two different applications within OpenStack that depend on two different versions of some third-party library, making it impossible to install them together; and better bug fix and security fix rollout — when we release a change to a library, you don't have to update every application, you just have to update that library. Those are pretty standard sorts of things that you do in software development projects, and we're evolving those procedures and tools within OpenStack in the Oslo program. Okay, so to talk a little bit about what's going to happen in Juno, I have to catch you up on what we did in Icehouse.
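The managed code copying Doug describes can be pictured with a small sketch. This is not the actual oslo-incubator sync tooling or its real config format — just a hypothetical illustration of the idea that each project declares which shared modules it consumes, and a sync step replicates the central master copies into the project's own tree:

```python
import shutil
from pathlib import Path

def sync_modules(incubator_root, project_root, module_list_file):
    """Replicate shared modules from a central incubator checkout.

    Sketch of the incubator-sync idea: the consuming project lists the
    shared modules it uses (one per line here; the real config format
    differed), and the sync tool copies the central master copy of each
    module into the project's own openstack/common/ package, so every
    project carries the same version of the shared code.
    """
    modules = [
        line.strip()
        for line in Path(module_list_file).read_text().splitlines()
        if line.strip()
    ]
    dest = Path(project_root) / "openstack" / "common"
    dest.mkdir(parents=True, exist_ok=True)
    for mod in modules:
        src = Path(incubator_root) / "openstack" / "common" / (mod + ".py")
        shutil.copy(src, dest / src.name)  # overwrite with the master copy
    return modules
```

The key property is that fixes land once, in the central copy, and the sync step fans them out — which is exactly the drift problem the old ad-hoc copying had.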
So far we have a couple of libraries that are released. There's oslo.config, which manages configuration files — it doesn't actually define the configuration options themselves, the applications do that. And oslo.messaging is our latest library; that is the RPC layer library, the library that all the different applications use to talk to the message bus, either RabbitMQ or Qpid. oslo.messaging was released at the end of Havana and adoption started in Icehouse, so we had a few applications that were ready and were able to get the work done in Icehouse. That work is ongoing. We also spent a lot of time in the last release cycle building tools — now that we've done a couple of libraries, we sort of know what the process needs to be — and we built a bunch of tools to automate it, and we have some processes so that we don't miss any steps, so that we can churn out more of these libraries. We also adopted four libraries that were developed outside of the Oslo program. We have the pycadf library, which is related to auditing. The TaskFlow library, which is being used for managing multi-step processes within an application in a way that allows the process to stop and start and pick up and resume and roll back and that sort of thing. The stevedore library is used to manage plugins and load drivers. And then the cliff library is a command-line interface framework, which we're using for the unified OpenStack client as well as some of the individual projects — I know Neutron has adopted it for their command-line client. And adopting those libraries that someone else developed has allowed us to test changes to those libraries in the same way that we test everything else in OpenStack. So we've got those libraries in the integrated gate now, and that prevents us from checking something into those libraries that's going to break an application or a release of OpenStack. So for Juno, our plans are to work mostly on library graduation, but we do have a few features that I want to talk about.
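To illustrate the kind of consistency oslo.config gives deployers — options declared once with a type and a default, values read from standard INI-style files — here is a toy, stdlib-only sketch. This is not oslo.config's actual API; every name here is hypothetical:

```python
import configparser

class Options:
    """Toy option registry illustrating the oslo.config idea.

    Each application declares its options (name, type, default) once,
    and values come from a standard INI-style config file, so every
    project's configuration looks and behaves the same way.
    Stdlib sketch only -- not oslo.config's real interface.
    """

    def __init__(self):
        self._opts = {}    # name -> (type, default)
        self._values = {}  # name -> current value

    def register(self, name, typ=str, default=None):
        self._opts[name] = (typ, default)
        self._values[name] = default

    def load(self, path):
        # Read values from the [DEFAULT] section, converting each one
        # to the declared type; unset options keep their defaults.
        cp = configparser.ConfigParser()
        cp.read(path)
        for name, (typ, _default) in self._opts.items():
            if cp.has_option("DEFAULT", name):
                self._values[name] = typ(cp.get("DEFAULT", name))

    def __getattr__(self, name):
        try:
            return self._values[name]
        except KeyError:
            raise AttributeError(name)

conf = Options()
conf.register("host", str, "localhost")
conf.register("port", int, 8080)
print(conf.host, conf.port)
```

The real library adds option groups, command-line integration, and much more, but the deployer-facing payoff is the same: the same option names and file format across all the services.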
The rootwrap team is working on some performance improvements to that project. That should help with performance within Neutron especially, and with some of the other projects that use rootwrap as well. Then there's the oslo.messaging adoption: it was scheduled to be completed, and I believe all of the integrated projects have now adopted it. I think there's a little bit of cleanup work to be done in one or two projects, but that should not have any problem being completed by the end of the release. And then, all four of the libraries that we adopted had version numbers that were less than 1.0, sort of implying that their APIs are not necessarily stable. So we're working to stabilize each of them over the cycle, so that we can have a 1.0 release for each of them by the end of the cycle. And then the big work is graduating the new libraries from our incubator, which is the common source of the code that we're still copying into the projects. We have identified seven new libraries that we can graduate, and that number may vary because we're still talking about some of them merging together or splitting apart. So it may not be exactly seven, but the same amount of code will be released; it just depends on how many libraries we end up making. And if you take a look at the dependency graph between those different libraries, you can see that it took us a while to work out how to release all of the code. We could have just released it as one big pile of code in a single library, but that would make releasing updates not very easy, and it would make managing changes to that library significantly harder. So we did a lot of work to analyze the dependencies between the different modules and figure out how to group them in logical ways so that we could break it up. The libraries listed here in blue are already released and being managed, and that includes the ones that we've adopted as well as some of the ones that we've released from the incubator.
And then the libraries in yellow are slated to be released during Juno. The ones in gray are intended to be released starting in the next release cycle — we haven't planned beyond understanding what their dependencies are. So, the seven libraries in yellow: the oslo.db library is the database layer; similar to the way messaging works, it will provide a driver layer for talking to different kinds of databases — mostly relational databases like MySQL or Postgres. Oslo logging is the log configuration library for managing and setting up all the log files and log translations and that sort of thing. oslo.i18n is the internationalization library, and that includes the code for translation, including the lazy translation which allows API users to see messages in their own language without having to translate them themselves — so you'll actually get error messages in your own language. And then we've got a couple of others. The serialization library is sort of a low-level thing for converting objects to XML or to JSON and returning them through the API or writing them to a file. And the concurrency library is used for doing things like locking and communicating between two different running processes. So some of those may merge together, depending on how big the libraries actually end up being once we look at the code. But based on the dependencies, and sort of isolating the feature sets, that's the set of libraries we came up with. If you want to follow along and track our progress this cycle, we're using Launchpad blueprints just like the other projects are, so you can follow our progress there. And we've also adopted a new process using a spec repository for submitting blueprints with much more written detail, so that we can do more reviews of those and make sure that we catch issues before we start doing work. So you can follow along on that repository and the reviews against that repository as well.
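The lazy translation Doug mentions works by deferring the rendering of a message until it is actually displayed, so the locale can be chosen per recipient. Here is a minimal sketch of the idea — not oslo.i18n's real API; the catalogs below are hypothetical stand-ins for real gettext catalogs:

```python
class Message:
    """A translatable message whose rendering is deferred.

    Sketch of the lazy-translation idea: we keep the message id and its
    parameters, and only pick a catalog and interpolate when the
    message is finally turned into text for a specific locale.
    """

    def __init__(self, msgid, params=None):
        self.msgid = msgid
        self.params = params or {}

    def translate(self, catalog):
        # catalog maps msgid -> translated format string; fall back to
        # the original msgid when no translation exists.
        return catalog.get(self.msgid, self.msgid) % self.params

# Hypothetical per-locale catalogs (real ones come from gettext files).
CATALOGS = {
    "en": {},
    "fr": {"Instance %(id)s not found": "Instance %(id)s introuvable"},
}

msg = Message("Instance %(id)s not found", {"id": "42"})
print(msg.translate(CATALOGS["en"]))  # Instance 42 not found
print(msg.translate(CATALOGS["fr"]))  # Instance 42 introuvable
```

Because the object carries the msgid and parameters rather than a finished string, the same error raised once in a service can be rendered in English in the logs and in French in an API response.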
And that's pretty much all I have, unless there are any questions.

Let's see — I've blocked everyone from speaking, so I'll open up the mic at the end too. Thank you very much, I appreciate that. That chart was pretty interesting. I'm going to pull up Thierry next. One second. There he is. Thank you very much. Okay.

Okay, thanks, Margie. So this is Thierry Carrez. I am the PTL for the Release Cycle Management program, and I'm going to talk about what we have in store for release management in Juno. But first of all, since this is the first webinar we have on this program, I need to give you a quick introduction to what the Release Cycle Management program is about. It's a rather small team, and we have three sub-teams. The first one is focused on integrated release management. It's mostly a project-management-type role where we make sure that we keep track of what will end up in the integrated release, that all the dependencies between the projects get solved, and that we respect the release schedule, with the various freezes and the various steps that we have put in place to ensure that the quality of the end result is satisfactory. We have a separate team that works on stable branch maintenance. They take the released bits and apply critical bug fixes in a specific branch, and from time to time they will issue a point release to collect all those fixes and all those vulnerability fixes into a tarball that you can potentially package afterwards. The last sub-team is the vulnerability management sub-team. That's the team that receives vulnerability reports about flaws in OpenStack software and investigates those reports, and if they're substantiated, works with the developers to produce fixes and get them into OpenStack, and gets them covered by an announcement — the OpenStack Security Advisory — to communicate to our stakeholders that we have identified the vulnerabilities.
The difficulty with that team is that they are working mostly in private, because we are receiving the vulnerability reports in private, so there is a high level of secrecy that we need around the actions of the vulnerability management team. Now, for Icehouse we did implement a number of changes in our processes. On the release management side, we changed the weekly meeting format. We have a cross-project meeting for all the projects that participate in the integrated release, and we used to have a section during that meeting for each project, where we would get updates from that specific PTL and we would update status. The problem is that with the growth of OpenStack we cannot really have that anymore: with 10 projects in a one-hour meeting, that would be only five minutes per project. So we changed it so that we have one-to-one synchronization points between the PTL of each project and release management, and we collect them all into a summary and present that summary during the cross-project meeting. That frees up most of the time of the cross-project weekly meeting for more interesting discussions between projects, so that we make sure we communicate the priorities correctly across project boundaries, rather than just talking about separate projects during the whole meeting. That was really well received. We also introduced a new freeze in our development process: by the end of the cycle, we no longer introduce new dependencies into OpenStack, so that the work of the packagers who take our pre-releases is made easier. They don't get a new dependency to package two weeks before the final release, which is a bit of a pain for our downstream packagers. So we introduced that new dependency freeze; it was also well received, and we'll do it again in Juno. The stable branch team released three Havana stable point releases and the final Grizzly point release as well.
And now the Grizzly stable branch is at its end of life; we no longer backport bug fixes or vulnerability fixes to it. The vulnerability management team ended up handling 85 different vulnerability reports, which resulted in 25 OpenStack Security Advisories. We also improved our reporting by providing a clearer analysis of the affected versions: for a given vulnerability, we used to only provide information for affected supported versions, but that was not really useful for people who were running unsupported versions. So we now provide a more accurate account of which versions are actually affected by a given vulnerability, rather than just talking about the versions that we continue to support. Looking forward to Juno now, on the release management side we decided on the release schedule: the feature freeze will be on September 4, and that means after September 4 we stop adding new features, we stop changing things, and we concentrate on fixing things, so that we can end up with a release that actually works. The dependency freeze will be on the same day as the feature freeze, so we will stop adding new dependencies at around the same time. There are always exceptions to those rules, but the general rule is that we won't add new features, except a few that are granted exceptions through that process. The final Juno release date is set for October 16, so you can already mark your calendars, because we usually hit those targets pretty well.
Other changes on the release management side, on the next slide: we have already started simplifying development milestone publication. During the release cycle we publish development milestones, and those used to have a rather complex process where we would create a branch and have a two-day period on that branch before we actually tagged the development milestone. But since development milestones are not really more meaningful or more used than the master branch, it wasn't really useful to have that more complex process. So we simplified it, and now it's just a tag on the master branch, which is probably a good thing to limit the load on the release management team. We also adopted named pre-release branches: rather than a milestone-proposed branch at the end of the cycle, we will have a proposed/juno branch. That will let us have more accurate testing, because we can test upgrades from Icehouse to the proposed Juno branch, or from the proposed Juno branch to the master branch for the "K" cycle that will come afterwards. So it's mostly a change to close a hole in our testing at release candidate time. We also extended the one-to-one sync points to the incubated projects: since it was very successful for integrated projects, we decided to do the same thing for the incubated projects. On the stable branch side, support will be extended to 15 months for the previous releases. During the last cycle we limited the support to 9 months, but looking at the results from the user survey, we can see that OpenStack releases are used for longer than just 9 months. So we have to strike a balance between the pain of keeping those branches alive in our testing and the needs of our users, and we struck that balance for the Icehouse cycle at 15 months of support. We're also loosening the acceptance rules for the patches that are accepted in stable branches.
We used to have pretty strict rules about what is acceptable and what is not. We will now have an exception process, so you can always ask for a specific patch to be accepted in a stable branch, and the stable branch team will look into it, and if the benefits outweigh the risks that the patch brings, then we will just accept the patch. We'll also have a new person joining the stable branch team, handling one of the point releases for the Icehouse cycle: Chuck Short will join Adam Gandelman and Alan Pevec in the stable branch team. On the stable branch release schedule, we have plans for point releases for Icehouse on August 7 and October 2, and also one last Icehouse release at some date to be determined. The last Havana release, which is number 2013.2.4, will happen sometime during September. Those are the dates when you can expect those point releases to be tagged on the stable branches for Havana and Icehouse. On the vulnerability management side, we are expecting to change a few things as well. We'll clarify which code repositories are actually supported by the vulnerability management team, because we have 130 different code repositories in OpenStack, and some of them are not really security-supported at the moment, and it's not really clear which ones are supported and which ones aren't. All the integrated release projects are obviously supported, and all the client libraries are also supported, but everything in between is not necessarily very clear, so we'll clarify exactly what the set of supported repositories is. We are also working on incident types, so that we can have a clear process depending on how an incident turns out. And we will work on a vulnerability scoring matrix, so that we can assign a score to a given vulnerability and you can look at it and say whether it's really critical for you, or not that critical in your deployment.
We'll try to get the security advisories published on the website in a reference section. Today they are only pushed to the announce mailing list and the general mailing list, and we need to have some reference section on the website where we publish them. And finally, we hope to have improved tooling all around, because the difficulty is that the OpenStack infrastructure is extremely open, so we cannot really use it for embargoed vulnerabilities. The goal is to improve that tooling so that we have tools to test embargoed patches efficiently, rather than doing it manually within the vulnerability management team. That concludes this presentation on the Release Cycle Management program, and if you have any questions I'll take them now.

Great, thanks, Thierry. I don't think we have any questions at this time, but let me unmute people. You are now unmuted. The call is unmuted, so if anyone would like to ask questions, now's the time. While we're waiting: these webinars will also be on the Foundation YouTube channel in the next few days, or by Monday, so you can catch them there as well. All in all there's a series of seven. We have a question in the chat now. The question is: is the 15-month support starting with Icehouse? Yes, it is starting with Icehouse; we plan to support it for 15 months. Great, thank you for the question. Anyone else while we have them? Okay. Well, after the webinar concludes you'll get an email saying thank you; if you have any additional questions, feel free to send them back to the Foundation, and I can divvy them out to Sergey, Doug, or Thierry from there. And like I was saying, these webinars will also be on the Foundation YouTube channel as well. So if there are no more questions, I think I will conclude here. Thank you for your time — I know you're all very busy — and have a great day. Thank you.