This is Heidi Joy Tretheway with the OpenStack Foundation, and Jonathan Bryce, our Executive Director. This is our preliminary marketing preview for the Mitaka release. Without any further ado, Jonathan, would you please take it away for us? As Heidi Joy said, this is our high-level overview of what Mitaka represents. We're still a few weeks out from the release, so this is information that we have gathered with the help of the development leaders and the product working group, who did a lot of extensive work going through every project, gathering detailed information, and bubbling it up into themes. But if there are other pieces of feedback or other things that are interesting to you, we definitely want to hear that input as we get close to the release. Mitaka is the 13th release of OpenStack. OpenStack is a framework for building cloud services and integrating all kinds of different technologies together. On the compute side, for instance, it brings together virtual machines, bare metal, and container frameworks, all under a single set of APIs, so that organizations can really take advantage of the diversity of technologies within their data center without having to maintain a lot of different islands of technology. This is a concept we started talking about in depth in Vancouver last year, and as we have seen adoption ramp up with new and bigger deployments, it's been really cool to see how this message has resonated with users and potential users. This is a situation almost every organization is facing. They have a variety of technologies and a variety of needs. They have existing applications that they have to run and continue to maintain, but they want to be able to develop new software and deploy new applications faster and faster.
And OpenStack is really the only set of technologies out there that enables this diverse set of use cases and the ability to integrate everything from traditional enterprise storage systems to distributed open source storage systems. So it's a powerful set of tools, and I think a powerful message, that has really been resonating with a lot of the organizations we talk to. We've talked about this for the last couple of releases, but we've gotten past the core set of base-level functionality, the table stakes for cloud technology, and the community has really focused on making OpenStack easier to deploy, manage, and scale. Within the Mitaka release cycle, another important theme that emerged was a focus on the experience of the cloud user: not just the cloud operator, but the end user of the cloud, the developer or the application deployer. There was a lot of work across projects on things like the APIs and different elements of the user interface, to provide more consistency and a better, more unified platform experience for end users. A lot of work also continues to go into manageability and scalability. We continue to see really interesting new deployments, and those operators are getting involved and bringing feedback about what they need, whether it's finer-grained security controls, more scalable orchestration, or an easier way to deploy networking. It's really awesome to see how that feeds back into the development cycle, and we're seeing specific work items come out in response to those pieces of feedback. One of the things that was pretty cool to see come to fruition was the work that's been done on the OpenStack client. There's a web interface that provides a graphical way to manage OpenStack resources.
But there are also a lot of times when an end user wants to script specific actions against an OpenStack cloud: they want to automate it, tie it into configuration management, or just work on a command line to perform bulk actions. There's a tool that's been in the works called the OpenStack client, and it unifies the interface and the access across multiple OpenStack services. It's pretty cool, because the approach it takes is very much about how you use OpenStack holistically as a platform, providing a consistent set of calls for creating resources, whether you're creating a network, a block storage device, or a virtual machine. As an end user, I don't have to learn the intricacies of each particular service API if I'm trying to get started quickly and script against them. So this is something that, I think, provides a greatly improved end-user experience. There's also been improved support for software development kits across a number of different languages, again so that developers can work with OpenStack environments programmatically and more effectively. One of the API-level improvements implemented in the Mitaka timeframe was a function in Neutron informally called "get me a network." It takes all of the steps necessary to create a network, attach a server to it, give that server an IP, and really get it on the network and accessible, down to a single function. This was one of those specific pain points for people trying to move from nova-network to Neutron, because this was simple in the nova-network model but more complicated when you moved over to the more powerful Neutron system for managing networks.
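To make the "collapse it to one call" idea concrete, here is a toy sketch of the kind of multi-step workflow that gets wrapped into a single function. The class and method names below are illustrative stand-ins, not the real Neutron or Nova APIs:

```python
# Toy sketch of the manual network workflow that a "get me a network"
# style call collapses. Names are illustrative, not real OpenStack APIs.

class ToyCloud:
    def __init__(self):
        self.log = []        # records each underlying API step
        self._next_ip = 10

    def create_network(self, name):
        self.log.append(f"create network {name}")
        return name

    def create_subnet(self, net, cidr):
        self.log.append(f"create subnet {cidr} on {net}")
        return cidr

    def attach_router(self, net):
        self.log.append(f"attach router to {net}")

    def boot_server(self, name, net):
        ip = f"192.168.0.{self._next_ip}"
        self._next_ip += 1
        self.log.append(f"boot {name} on {net} with {ip}")
        return ip

def get_me_a_network(cloud, server_name):
    """One call that wraps every step the end user used to script."""
    net = cloud.create_network("auto-net")
    cloud.create_subnet(net, "192.168.0.0/24")
    cloud.attach_router(net)
    return cloud.boot_server(server_name, net)

cloud = ToyCloud()
ip = get_me_a_network(cloud, "web-1")
print(ip)              # the server's address from the toy allocator
print(len(cloud.log))  # 4 underlying steps hidden behind one call
```

The point of the sketch is just the shape of the API: four separate create/attach calls become one function the user can rely on for the common case.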
So the work in Neutron was completed for this, and we've actually talked to some operators and asked them what advancements they're really excited about, and this was one they called out specifically. The next step is to integrate this fully into the compute side of things, so that it really becomes a single call to get a server and get it on the network. It'll be even simpler than the old model. Manageability is really about improving the life of the person operating the OpenStack cloud, the cloud administrator. The projects have been working on simplifying configuration. In some cases that means reducing the number of options; in some cases it means putting in more defaults so that you don't have to go through and set every single configuration option. We definitely saw a lot of work on this in Nova specifically. Keystone is the identity management service, and there's been work to improve its setup. If you look at all of the steps necessary to get from "I've installed Keystone" to it running, authenticating against the backend service, handing out the tokens that users need, authenticating requests, and connecting services, they've now set this up so that, in a default scenario, it's basically one step to get through all of that. There have also been continued improvements in Neutron for layer 3 networking, including DVR, the distributed virtual router, which is basically an improvement to the availability and scalability of the routers that Neutron creates on the network. You might remember that last time we talked about a concept called the convergence engine, which first appeared in Liberty and continued to be developed in Mitaka.
And the idea there is that as you start to use Heat, you end up with actions that can be split and distributed across multiple nodes, but Heat has to be aware of what those actions are and whether they can run in parallel or have to run in sequence. That's pretty complicated logic to figure out on the orchestration engine side. So the Heat team has been working over the last few releases to build that in, so that Heat can act in a more distributed way and properly handle those orchestration actions across a horizontally scaled deployment. That includes the stateless node mentioned here: the ability to distinguish between actions that need to maintain state and actions that are stateless. As you build all of that into the Heat engine, you get to where you can handle more complex actions, a higher number of actions, and a greater load inside your Heat system as you scale it out horizontally. Designate is the DNS service. DNS zones are basically the set of records that exist under a domain name like www.openstack.org. When you are managing a DNS zone, you want it distributed across a lot of servers so you have high availability, but if you end up with a lot of DNS records, then distributing those around can start to impact performance and scale. Incremental zone transfers give you the ability to do partial replication of DNS updates, which makes things much more scalable as you create more and more DNS records. And if you think about the point of OpenStack, it's to automate all of those different resources. IPs and DNS records are among the things that, in a cloud world, get set up much more frequently, because they're part of automated processes rather than, as in the pre-cloud world, being set up manually by a network administrator.
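The incremental-transfer idea above can be sketched in a few lines: a secondary server that already knows the zone up to some serial number only needs the changes made after that serial, not the whole zone. This is a conceptual illustration, not Designate's implementation, and the record data is made up:

```python
# Conceptual sketch of an incremental zone transfer (IXFR-style):
# send only the records changed since the secondary's last-known
# serial, instead of replicating the entire zone every time.

def incremental_transfer(zone_history, have_serial):
    """Return the newest serial and only the changes after have_serial.

    zone_history maps serial -> dict of records added/modified at
    that serial (toy data model for illustration).
    """
    newest = max(zone_history)
    changes = {}
    for serial in sorted(s for s in zone_history if s > have_serial):
        changes.update(zone_history[serial])
    return newest, changes

# A zone that has gone through three serials.
history = {
    1: {"www": "203.0.113.10", "mail": "203.0.113.20"},
    2: {"api": "203.0.113.30"},
    3: {"www": "203.0.113.11"},   # www was later updated
}

serial, delta = incremental_transfer(history, have_serial=1)
print(serial)   # 3
print(delta)    # only api and the updated www, not the whole zone
```

A secondary at serial 1 receives two records instead of the full zone, which is the scaling win when automation is creating records constantly.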
So you have to adjust the way DNS systems are managed, so that if you create 100 or 1,000 of these in a day, it doesn't impact the performance of your DNS servers. Fernet tokens are the new model for how authentication tokens get passed around. The Fernet token model doesn't require constant lookups back to a central Keystone backend, so it allows a Keystone service to handle more authentication requests, and an OpenStack cloud overall to perform more actions, without Keystone becoming a bottleneck. The last one mentioned here is continued work on Cells v2 in the Nova project, as well as more updates on the scheduler and the efforts that were started in the Liberty cycle around Cells v2 and pluggable schedulers. As a review, Cells is a concept that was brought into Nova a couple of years ago as a way to horizontally scale not just the nodes inside a compute cluster, but to horizontally scale out multiple compute clusters. The initial version was really useful for the specific set of use cases it was developed for. But as the years have gone by, we have more and more cloud environments that are past hundreds of nodes, into the thousands and even tens of thousands of physical nodes. So having a horizontally scalable way to manage OpenStack environments across different data center locations, and different availability zones within a data center, is really becoming a key point of scale inside OpenStack. Cells v2 is really the architecture of that concept and the culmination of a lot of work with different operators by the Nova development team. There's been more good progress on that in Mitaka, and it's really going to be the long-term model for scaling compute in OpenStack. Here we have a quadrant with a few different categories of usage that we have started to see really taking off.
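Before moving on to the usage categories: the stateless-validation idea behind the token model described above can be illustrated with a small stdlib sketch. This is not Keystone's actual Fernet format (which uses encrypted, versioned tokens); it just shows why a signed, self-contained token removes the round-trip to a central service:

```python
# Conceptual sketch of stateless token validation: the token carries
# its own signed payload, so any service holding the signing key can
# validate it locally, with no lookup against a central backend.
# NOT Keystone's real Fernet format -- an HMAC-based illustration.
import base64
import hashlib
import hmac
import json

SECRET = b"key-shared-by-all-keystone-nodes"   # illustrative key

def issue_token(payload: dict) -> str:
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(SECRET, body, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(body + sig).decode()

def validate_token(token: str) -> dict:
    raw = base64.urlsafe_b64decode(token.encode())
    body, sig = raw[:-32], raw[-32:]   # sha256 digest is 32 bytes
    expected = hmac.new(SECRET, body, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad token")
    return json.loads(body)

token = issue_token({"user": "demo", "project": "alpha"})
claims = validate_token(token)   # no database or central round-trip
print(claims["user"])            # demo
```

Because validation is just a local signature check, adding more API nodes scales token verification horizontally instead of funneling every request back through one identity service.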
Enterprise private cloud: these are organizations that you probably recognize, organizations that have spoken at different OpenStack Summits, like Walmart and eBay and TD Bank. But there's a new one in here, which is really one of those companies that's synonymous with enterprise technology, and that's SAP. We've talked to SAP recently, and they're going to be speaking at the Summit in Austin, which is really exciting. They obviously create some of the most well-known enterprise software out there, but as a company they also run a lot of different environments and have a pretty complex enterprise IT environment, which they are transitioning to run on top of OpenStack. So they have a great news story, and they're going to come talk about it in Austin. Public cloud service providers: that's one of the original use cases for OpenStack, and sometimes we don't talk about it as much, but we've really started to see some cool success with different service providers building out public cloud offerings on top of OpenStack technology. In some cases they are vertically focused on a specific industry, or perhaps regionally focused. A couple of weeks ago there were several announcements from some of these companies. City Network is a company in Europe that operates a public cloud, with a specific offering for financial services organizations in the European Union. They announced a big user win with a large insurance company in Europe. DataCentred is a public cloud provider in the UK, and HMRC, the tax service in the UK, announced that they were deploying their front ends for business and individual taxes on top of DataCentred's cloud. So it's cool to see some of these end users building on top of OpenStack clouds. And then Deutsche Telekom just announced a major public cloud initiative using OpenStack.
So it's cool to see that traction as well. We've talked a lot about the telecom industry and network functions virtualization. We're going to have AT&T speaking at the Austin Summit. They made some amazing progress in 2015: they talked about how they've now deployed 74 data centers with OpenStack and are running production network workloads on top of those environments, in addition to internal enterprise workloads. They've already been deploying it at big scale, but they have really large plans over the next few years as they continue to roll this out across their whole network. And we see other telecoms all over the globe, like SK Telecom in South Korea, Swisscom, Verizon, and others, who are building similar services. The final category is research, academia, and big data. NeCTAR has been a long-time OpenStack deployment in Australia; it's a combination of several OpenStack clouds that different universities down there run, providing a big pool of resources for researchers. Chameleon is a similar effort being deployed here in the U.S. One of its big locations is here in Austin at the Texas Advanced Computing Center at the University of Texas, coordinated with the University of Chicago and funded with big grants from the National Science Foundation. It's already running workloads for several hundred researchers across the U.S., doing some interesting things from biomedical research to machine learning. CERN, for instance, has been a marquee user and has driven a lot of strong leadership and involvement in OpenStack for years, and it's cool to see this continuing to develop as an area of usage. I just wanted to highlight a couple of things that have been going on in the community over the last release cycle, the past six months or so, in case you haven't been following along.
One really cool thing happened this past weekend: we hosted our first OpenStack app hackathon. It took place in Taiwan, in collaboration with the government, local universities, and a couple of different companies and organizations in the OpenStack ecosystem. There were 36 teams who competed, building apps and projects focused on smart cities and the future of smart cities. It was a really cool competition. The prime minister of Taiwan actually attended to support it and help give out some of the prizes. And we're flying the winning team to the Austin Summit to show off what they built. Most importantly, we've been working with the community and the local organizers to make this the pilot app hackathon, with a template that we're putting together to roll out in many different communities. We're hoping to run two or three more of these this year as we really focus on this app developer audience and generally educate more of that ecosystem. Jonathan was talking about some of the new research users that we have; there's a new scientific working group that's come up, and we've had a lot of interest there. They're going to have their first meeting at the Austin Summit, so if it's something you're interested in, it's definitely something you should participate in and follow along with. I think there's a wiki page with some of the group members and how to get involved. We've talked for quite a while about NFV. We've continued to build a strong relationship with OPNFV as well as quite a few telecom operators, and we're seeing a lot of momentum and traction there. I believe next week we're going to be having a board meeting, and OPNFV is also going to be having a board meeting in the same location, around the Linux Foundation Collaboration Summit.
There are going to be some joint meetings happening there, just trying to build relationships between the communities and define more of that workflow so that OpenStack can be the platform for NFV. Finally, I just wanted to call out the ops mid-cycle that happened in February in Manchester. It seemed like an extremely successful event. It was the first one held in Europe, so we weren't sure how many people were going to attend, and it ended up being at capacity, I believe between 150 and 200 people. It seemed like a very productive event. Overall, getting the operators together and building out that community has really made an impact on the development process, and it also helps them feel closer to the community and heard. I know the product working group and enterprise working group were also out there, meeting and sprinting on some content, and generally getting really ingrained with the operators and users, which was great to see. The app hackathon was really exciting. As Heidi Joy mentioned, there were 36 teams, a couple hundred developers. There were actually a lot of female engineers who came to participate, and they built and deployed, I think, almost 300 applications on top of an OpenStack environment that a local company had set up for the hackathon. Everything from an automated system for growing potatoes to attempts to solve parking problems in a big city, all kinds of interesting things. As she mentioned, the Prime Minister came and was really impressed with the community, and since then we've seen some really strong interest from a number of parts of the Taiwanese government and academic institutions in having a strong presence at the Austin Summit. So overall, it was a really cool thing to see and a big success. I'll hand it back to you, Heidi Joy, to talk about the timeline for the Mitaka rollout. Thank you.
We've added the Mitaka release logo to our marketing assets at openstack.org/marketing. We'll have the press and analyst briefings coming up at the end of next week, so we are on the path to finalizing our press release and fine-tuning the messaging. You've had the opportunity to really see us in full work mode and full draft mode; that's why we call this the marketing preview, and we'll continue to work on it over the next couple of weeks. Then Thursday, April 7th, is the birthday of Mitaka. We will have the release website live, and it'll look very much like the Liberty website that you saw last round. We'll also have a demo video; we saw more than 30,000 views on YouTube of our Liberty-cycle demo video. People are really interested in seeing how the product works, so I'm really excited to have that as a major asset available for you. That will also be part of our Mitaka release website. We'll have these graphics available, and then we'll be releasing the press release at 8:01 a.m. Pacific. You can also take a look at some of the interviews we did with project team leads for the Mitaka design series. We started interviewing them right after the Tokyo Summit, all the way through the mid-cycle, talking about the hot topics for their teams and the features they were planning to deliver. If you want to get a little closer to the action, hear it straight from the PTL's mouth, and learn a little more about the background of the projects, the user concerns, or the major issues they were working to solve in the Mitaka release, I'd encourage you to take a look at the YouTube playlist that goes through many of those conversations. Finally, I wanted to send a huge shout-out to the product working group, and specifically the roadmap team led by Shamail Tahir from IBM. They really killed themselves to pull together some amazing content for us.
I'm hoping that at the end of next week we'll be able to see a first look at that community-generated roadmap. It includes a 100-foot view at the project level, with specific features and enhancements; a 1,000-foot view that rolls those up into bigger themes and goals; and then a 10,000-foot view that looks at the themes uniting all of these projects, like scalability, modularity, and manageability, some of the themes Jonathan spoke to earlier. I'd really encourage you to take a look at the roadmap when it's released. The team will also be doing a presentation at the Austin Summit, and it really helps you delve into everything that's happening across the OpenStack ecosystem with regard to the projects and the new features coming out. With that, I think we can wrap up, and I thank you very much for joining us on this call.