While we're waiting for a couple more people to trickle in, welcome to Austin, everyone. How many of you folks are from Austin? Not that many. I spent about 10 years in Texas, and I just love the Tex-Mex here. Stephen and I were just talking about Pappasito's. If you haven't gone to Pappasito's, you have to go. A full 13% of the reason I'm here in Austin is just to go to Pappasito's this week. I'll probably go again. That's right, I still feel it from yesterday, but I'll go again sometime this week, I promise you that. So, let's get started. You may have come here today to hear about the future of the cloud or something like that. I'm going to talk to you about how to succeed with OpenStack in the enterprise. And as you can see, here's a picture of an IT admin. She's clearly very happy, so hopefully this talk will reflect the level of happiness that this IT person has about succeeding with OpenStack in the enterprise. My name is Omri Gazitt, and I manage products and services for HP's cloud division, which is also the division that builds the Helion set of products. I'm going to do this talk in three stanzas, or three acts. Act one is really about giving a stark, honest assessment of where OpenStack is from a maturity perspective for enterprises trying to adopt it. This slide looks a little bit like Boris's slide from the keynote today, which I actually liked. He talked about how there's completely do-it-yourself on one end, completely outsourcing everything to AWS on the other, and a lot of white space in the middle. This slide is a little different from that, though. There are really four different consumption models for any type of open source software, and OpenStack is no different. You can go all the way upstream and work upstream, right? There's a trade-off there: it costs you hardly anything at all in terms of what you pay vendors.
But in terms of effort, it costs you a lot. You can see the little muscle sign that my daughters like to use on their iPhones to indicate strong. You have to be very strong: you have to recruit a whole bunch of upstream developers, and you have to do a lot of things yourself. The next modality is consuming a distribution, and that is where Linux is today. No one really consumes Linux from kernel.org anymore; they consume a distribution. And we actually make one. We make software that you can use to run an OpenStack cloud. But as many people said this morning, it's not enough to just take software and run it. It takes a lot of people and process to get an OpenStack cloud to work, and a lot of people who approached OpenStack in the last two or three years thought, we'll just install the software and now we have a cloud. They didn't quite recognize that they had to develop a set of skills and experience in order to actually run that cloud, and a lot of them got soured on OpenStack as a consequence. So even as a software vendor, I'm here to tell you: we're not quite yet at the level of turnkey where the cloud runs itself. You still need quite a bit of expertise to stand up an OpenStack-based cloud. Where we really feel enterprises are successful today in their journey of adopting OpenStack is on the right side of the figure, when they consume a solution. That could be an appliance, or it could be somebody helping you stand up an OpenStack cloud through professional services, or even helping you operate it, but still on your premises. We actually see a lot of success there. Moreover, enterprises that started on the left side a few years ago and had a bad experience are now coming back to OpenStack and trying again, this time with the help of a partner.
And that typically leads to a lot more success. Then all the way on the right side is when somebody operates the whole thing for you. In some ways that looks like a public cloud, but it often has unique characteristics. For example, the vendor can run your own cloud for you, a private cloud that doesn't have anybody else's VMs intermingled with yours. That is very attractive to a lot of enterprises as well. So we see most of the success in OpenStack today on the right side of this picture. That's not to say that OpenStack won't continue to mature and grow and become more and more turnkey. It's just that circa 2016, this is the state of the art. The next stanza is about some of the lessons we learned building OpenStack-based software over the last few years. The first lesson is in deployment and lifecycle management. In our V1 product, we doubled down, we bet heavily on what was then the prevailing wisdom upstream, which was to build OpenStack on top of OpenStack. That was called TripleO. We like to call it turtles all the way down, right? You're building an OpenStack on top of OpenStack, and you could do that on top of another OpenStack, and another one under that; you can nest it arbitrarily deep if you want. It makes a lot of sense from a computer science perspective, but pragmatically, practically speaking, it created a lot of obstacles to adoption, a lot of barriers. That's not to say TripleO isn't a great project. We have awesome engineers who contributed to that project and did really, really good work; some of our best engineers worked on it. But when we looked pragmatically at how to deploy OpenStack and manage its lifecycle, one of the big things that got in the way with TripleO was, for example, that it was very opaque.
You would run the installer, so to speak, and you'd end up with a cloud, and it was very hard to change anything around that without doing pretty deep customization. And once you did that customization, you lost the ability to update the cloud, and you certainly lost any capability to do minimal-downtime or zero-downtime upgrades. So in our V2 product and beyond, one of the things we decided to do was focus on Ansible as a deployment technology. Ansible is not new, and it's an open source technology. One of the biggest things we decided to do was focus on standard configurations: basically, pick the set of kinds of clouds that people could deploy and really have them fall into the pit of success, so to speak. So we have a set of standard configurations, and you can customize those configurations by configuring them, as opposed to making deep customizations that throw you off the path of righteousness, so to speak. And just the fact that the entire deployment system is text-based, all YAML-based, means you can actually understand and reason about what the deployment system is doing, and you can change what it's doing to fit your environment and still have a cloud that is updatable. That is, every time we deliver updates, you can take advantage of them, whether they're security updates or bug fixes. And it's possible to actually upgrade. One of the hardest things about OpenStack clouds is upgrading from one version to the next. Most enterprises we talk to that have taken OpenStack for a spin report that they have multiple versions of OpenStack running side by side across different compute clusters. With an Ansible-based lifecycle approach, we've found that we have a very high degree of success upgrading our customers from release to release.
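To make the "customize by configuring" idea concrete, here's a minimal sketch of layered configuration: a vendor-shipped standard configuration merged with a small site-specific override, so your changes survive product updates. The keys and structure here are hypothetical illustrations, not the actual Helion YAML format.

```python
# Sketch of configuration layering: a standard (vendor) config merged with a
# site-specific override, so customizations survive product updates.
# All key names below are hypothetical, for illustration only.

def deep_merge(base, override):
    """Recursively merge `override` into `base`, returning a new dict.
    Scalars and lists in `override` win; nested dicts are merged."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Vendor-shipped "standard configuration" (refreshed with every release).
standard = {
    "control_plane": {"nodes": 3, "tls": {"endpoints": True, "internal": True}},
    "compute": {"nodes": 10},
}

# Site override: only what differs from the standard, kept in its own file.
site = {"compute": {"nodes": 40}}

effective = deep_merge(standard, site)
print(effective["compute"]["nodes"])      # the site override wins
print(effective["control_plane"]["tls"])  # secure defaults are preserved
```

Because the override file only states the delta, the vendor can ship a new standard configuration and the same small override still applies cleanly on top of it.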
And the last thing is you can really make it secure out of the box, right? Again, by having standard configurations that turn on TLS for endpoint security and TLS for internal communication between roles, you get something that follows security best practices. You have a secure cloud out of the box. The second set of lessons is about management. Lifecycle management is also a kind of management; it's almost the day-zero management, managing the lifecycle of the platform itself. This is really about day-one-and-beyond management, which is all about event collection and creating the console experience so that IT admins have visibility and control over their IT environment. In V1 (and community development is an iterative thing), the state of the art in OpenStack at the time was a project called Ceilometer, which was also an event collector, although for a somewhat different purpose: it was focused on collecting events for the sake of metering. And the upstream community continued to be a little bit confused about what Ceilometer could be used for; there were many in the Ceilometer community who wanted to extend it toward monitoring scenarios. We also had a project called Horizon, which was a tenant console, right? That's the console that OpenStack developers can go up to and provision themselves resources. It had some admin functionality that was a little bit bolted on the side. At the time, our thinking as a community was that we would evolve Horizon to be that management surface area. One of the lessons we learned is that we really need to treat IT and operations as a first-class persona. That's why we invested pretty heavily in Monasca and in the Ops Console as the IT ops versions of those things. And that's not to say Horizon isn't a great project; it is, and we continue to invest heavily in it as a tenant console.
But we have, for example, the Ops Console as the operational console for the operations persona. Likewise, Ceilometer is good for metering events, but Monasca is where our focus is in terms of providing deep monitoring. One of the lessons we learned pretty early on came from deployments. I see some people from the engineering team in the crowd; I see Rajiv over there, he remembers this lesson. The people in the field who deployed the software would often use a single database, a single MySQL database, both for the OLTP store that stood behind Nova and some of the operational services, and for the event collection store. Same thing with the messaging fabric. Of course, those are very different use cases, and we found that the cloud ended up grinding to a halt very quickly as that database filled up with all these events. And we were like, what's running this cloud, is it hamsters in there? We basically realized we had to take a very different approach to these two very different operational environments. So we have a separate management plane in the product today that is underpinned by a scalable store. We built Monasca with InfluxDB as the default provider underneath, which is a very different kind of database than MySQL. And the product we ship provides Vertica free of charge, which is our lightning-fast columnar database, for doing real-time queries. That helps quite a bit: when you build the technology on top of the right storage techniques, you get a much better experience. The next thing you want is not just an event store; you want to be able to create standing queries on top of the event store that tell you when things go wrong. Monasca calls this an alarm engine. So we have that; again, as part of the community, we've built that into Monasca.
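To give a feel for what a "standing query" over an event store means, here's a minimal sketch of a threshold alarm evaluated continuously over an incoming metric stream. The metric names, operators, and evaluation-period logic are illustrative only; this is not Monasca's actual alarm API.

```python
# Minimal sketch of an alarm-engine "standing query": an alarm definition
# that is continuously evaluated against a stream of measurements.
# Names and thresholds are illustrative, not Monasca's real interface.

from collections import deque

class ThresholdAlarm:
    def __init__(self, metric, op, threshold, periods=3):
        self.metric = metric
        self.op = op                          # "gt" or "lt"
        self.threshold = threshold
        self.window = deque(maxlen=periods)   # last N measurements

    def observe(self, name, value):
        """Feed one measurement; return True when the threshold has been
        breached for every period in the evaluation window."""
        if name != self.metric:
            return False
        self.window.append(value)
        if len(self.window) < self.window.maxlen:
            return False
        if self.op == "gt":
            return all(v > self.threshold for v in self.window)
        return all(v < self.threshold for v in self.window)

# Alarm: CPU above 90% for three consecutive periods.
alarm = ThresholdAlarm("cpu.utilization", "gt", 90.0, periods=3)
stream = [("cpu.utilization", 85), ("cpu.utilization", 95),
          ("cpu.utilization", 96), ("cpu.utilization", 97)]
fired = [alarm.observe(n, v) for n, v in stream]
print(fired)  # [False, False, False, True]
```

The windowing matters: a single spike doesn't fire the alarm, only a sustained breach does, which is what keeps an operations console from drowning its operators in noise.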
And as part of the product we deliver, one of the biggest lessons we learned was that we want to take the common kinds of failures, encode them, and create prescribed resolutions for them. So when we know some pattern occurs often in the wild, we want to make sure we have a prescribed resolution for it. And the ultimate version of that is closed-loop analysis: those prescribed resolutions can sometimes drive the underlying lifecycle management to reconfigure your cloud, so that, for example, when you run out of capacity in certain places, you can create new capacity through the lifecycle manager. We really love that closed loop: the monitoring system feeds a set of intelligent choices that drive the lifecycle management system. And when you look at the Ops Console, really all it is, is a visualization on top of Monasca data, right? It gives you all the things you would expect there: the ability to look at all your alerts, to create new alerts and configure them, and time-series visualizations of what's going on in your cloud. So that's the state of the art in terms of what we've been able to move forward for the operations persona. Now, the third lesson is in security. By the way, before we go to security: "virus.exe," the network access point over here, I love that one. That's the one I want to connect my laptop to, virus.exe. There's always somebody who makes that kind of joke at conferences. But a different joke was that in V1, security was really left as a best practice, right? We wrote a really good white paper, along with the upstream OpenStack security team, about how you create a very secure OpenStack environment. But it was still a "go read the white paper" kind of experience. Going forward in our products, we really tried to encode security all the way inside the product and have it configured as secure by default.
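The closed-loop pattern described above, prescribed resolutions for known failure patterns driving the lifecycle manager, can be sketched in a few lines. The alarm names, actions, and runbook entries below are all hypothetical; the point is only the shape of the loop from alarm, to prescribed resolution, to lifecycle action.

```python
# Sketch of "closed loop" remediation: known failure patterns mapped to
# prescribed resolutions, some of which call back into the lifecycle manager.
# All alarm names and actions are hypothetical illustrations.

RUNBOOK = {
    "compute.capacity.low": ("scale_out", {"role": "compute", "add_nodes": 2}),
    "disk.usage.high":      ("expand_volume", {"role": "database"}),
    "service.down":         ("restart_service", {}),
}

def resolve(alarm_name, lifecycle_manager):
    """Look up the prescribed resolution for an alarm and dispatch it.
    Unknown patterns escalate to a human instead of acting blindly."""
    action, params = RUNBOOK.get(alarm_name, ("page_operator", {}))
    return lifecycle_manager(action, params)

# A stand-in lifecycle manager that just records what it was asked to do.
actions_taken = []
def fake_lifecycle_manager(action, params):
    actions_taken.append((action, params))
    return action

resolve("compute.capacity.low", fake_lifecycle_manager)
resolve("mystery.alarm", fake_lifecycle_manager)  # unknown -> page a human
print(actions_taken)
```

The fallback entry is the important design choice: closing the loop automatically only makes sense for failure patterns you have seen often enough to trust a prescription for; everything else should still land on an operator's pager.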
So: TLS for endpoints, TLS for internal communication. Barbican is a very important project for us because it lets you have an enterprise secure key manager that can manage all your secrets in hardware. We have one of these at HP called Atalla, but Barbican is a very generic project, and you can plug in any hardware-based or software-based ESKM underneath. Once you have that, you can start doing data-at-rest encryption for your Cinder volumes. All of these patterns start flowing from having these security capabilities built in. Another good one is Bandit. Bandit is a static analysis tool for Python programs that we helped develop along with the community in the upstream OpenStack security working group. One of the cool things we did was put Bandit into our CI/CD pipeline, so that as developers write code, we run the static analysis, and when we find insecure patterns around SQL injection and the like, we're able to fail the commit before it ever makes it into the product. Again, it's about taking tools that have been around and bringing them together to make the entire thing more secure. Audit logging is another very important one, a very important part of PCI compliance. Having those foundations in the product enables you to run a whole bunch of different workloads on top of the platform, and that's really what we're looking for. We're not just trying to run the non-production, non-critical workloads; we're trying to run all of your mission-critical workloads as well. So with that, I want to transition to act three of the presentation, which is really about workloads. Let's see how much time I have... plenty. So, platforms are about workloads. Platforms that don't support any workloads are not very interesting or valuable or useful.
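Going back to Bandit for a moment: the following toy checker shows the shape of what a static analysis pass in a CI/CD pipeline does, walking a module's syntax tree and flagging SQL built with string formatting, the classic injection pattern. Real Bandit plugins are far richer; this is just an illustration of the idea, not Bandit's implementation.

```python
# Toy illustration of a Bandit-style check: walk a module's AST and flag
# .execute() calls whose SQL is built with string formatting (a classic
# injection pattern). Not Bandit's actual code, just the shape of the idea.

import ast

# "..." % name and "..." + name parse as BinOp; f-strings parse as JoinedStr.
INSECURE = (ast.BinOp, ast.JoinedStr)

def find_sql_injection(source):
    """Return line numbers where .execute() is called with a formatted string."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"
                and node.args
                and isinstance(node.args[0], INSECURE)):
            findings.append(node.lineno)
    return findings

snippet = '''
def lookup(cursor, name):
    cursor.execute("SELECT * FROM users WHERE name = '%s'" % name)   # flagged
    cursor.execute("SELECT * FROM users WHERE name = %s", (name,))   # parameterized: ok
'''
print(find_sql_injection(snippet))  # [3]
```

Wired into a CI/CD gate, a non-empty findings list is what fails the commit, which is exactly the point: the insecure pattern never reaches the product.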
In 2013, when I got involved with the OpenStack project, it was right around the time of Grizzly and Havana. Back then, the prevailing wisdom was that the kinds of workloads you wanted to run on OpenStack were cattle: the idea that you build your software, your services, with failure in mind, failure of hardware, failure of the underlying software. You want to build for resiliency. And back then, the pattern we were trying to emulate as the OpenStack community was to replicate what AWS had. The AWS ethos is really about stitching together a very resilient platform from a set of services that AWS provides, and we were trying to do the same thing. They had things like CloudFormation; we wanted things like Heat. They had things like ELB; we wanted things like LBaaS. And so on and so forth. That was the pattern; that was what OpenStack was really going to be good for. Then in 2014, something interesting happened. It became vogue to start running cloud native platforms on top of OpenStack. Things like OpenShift and Cloud Foundry started being interesting workloads to run on top of OpenStack, and the requirements on the underlying platform actually shrank: instead of depending on all these higher-level services, all these cloud native platforms needed was compute, networking, and storage. There started to be a debate about the best approach for building cloud native workloads; much wringing of hands ensued. But in 2015, another interesting thing happened: OpenStack started getting pulled the other way, from running cattle to starting to run pets. We started seeing features like live migration, probably the most emblematic of them, land in OpenStack and become really usable. All of a sudden, people started talking about running pets on top of OpenStack.
Two years before that, if somebody said, I'm trying to replace my VMware environment with OpenStack, the community would tell them, you're doing it all wrong; OpenStack is for cloud native workloads. But in 2015, not so much: the platform had become mature enough to do that as well. And of course, in 2016, over the last year, we've pulled OpenStack even further along, with additional features and functionality that enable even deeper scenarios, like the low-latency, high-performance scenarios around carrier grade. So OpenStack now supports a bunch of different workloads, and that's not a bad thing. That's actually how you know a platform is mature: it supports a bunch of different workloads. So we actually love this. We don't think there should be any hand-wringing at all in the OpenStack community about the fact that OpenStack now supports a whole bunch of additional workloads. In fact, there's a lot of synergy there. With that, I want to focus on that debate between building cloud native applications with the cattle approach on the left side versus using a cloud native platform. I would say that neither is wrong and neither is right; you can potentially have success with either pattern. But if you look at the AWS approach, the whole ethos, again, is DIY by stitching together a set of services. The approach on the right, what we think of as a PaaS or cloud native platform, is to delegate all of those things to a platform. So for example, if you want to deploy an application and version it, that's CloudFormation or Heat in the AWS-style approach. For load balancing, you'd have ELB or Neutron LBaaS. For database services, you'd have RDS in the AWS environment, and we replicated that with Trove. Queuing with SQS and Zaqar, and so on and so forth. On the right side, the PaaS basically takes care of all of those details.
Now, you have a trade-off there, right? You have a prescriptive system for doing things a certain way, and that takes away some of your control. But at the same time, you're saved from doing a bunch of undifferentiated heavy lifting. If you look at how startups were successful with AWS, they built a bunch of platforms, right? Netflix built a whole platform on top of the AWS services. The state of the art in 2016 is that we now have cloud native platforms mature enough that you can get out of the business of doing a lot of that undifferentiated heavy lifting. Just take the notion of zero-downtime deployments. That's a very sophisticated capability, and a lot of startups have had to build very sophisticated pipelines to get there. If you get that from a cloud native platform, it starts making sense to take advantage of it. And the enterprises we talk to that are most successful in their journey to cloud native development and cloud native transformation often choose the PaaS approach. Now, you may ask, what about containers? Aren't containers going to kill OpenStack? Aren't containers and Docker going to replace PaaS and kill that off? Well, we find it's almost the analogous conversation, with Docker coining the term container as a service, CaaS. With the CaaS approach, it's almost like the AWS ethos, except you substitute the word container for service. A lot of these capabilities in Kubernetes and in the Swarm stack are delivered through containers. You want to do auto-scaling? Sure, we have one of those: a container that can monitor all those things and scale everything out. You want to do log aggregation? We have Elasticsearch and Fluentd and Kibana all wrapped up in a container that you can deploy. But you have to stitch all these things together yourself. Likewise, for load balancing, you have a service in Kubernetes.
Or in Swarm, you basically deploy a container that does load balancing for you based on one of the load balancing technologies, Nginx and so on. Again, if you look at the PaaS approach, all those things are built in. So at least in my book, the answer to the question is almost the same: if you want to fall into the pit of success building cloud native applications, I think you really want to take the PaaS approach, because that saves you a lot of the undifferentiated heavy lifting. Now, are PaaSes perfect? Hardly, hardly. A lot of the PaaSes actually started well before these technology layers solidified. The good news is that the PaaS systems out there are starting to react to that. You see OpenShift and Deis and Cloud Foundry all take a dependency on Docker and runc as the bottom layer. You don't have to invent a new containerizer or a new wrapper around LXC anymore; that exists in the open source community and it's fairly mature, so let's take advantage of it. And some have even gone to the next level and said, you know what, we're going to adopt Kubernetes as our scheduler. Cloud Foundry still has Diego, but OpenShift is now built on top of Kubernetes. So we're seeing the layers of abstraction move up. But that doesn't change the fact that you really do want an approach that abstracts a lot of the undifferentiated heavy lifting away from you. Good. So, to bring that last lesson home: use the platforms, Luke. Use OpenStack to run a wide variety of workloads. It works. It's mature enough now to do not just cattle, and not just cloud native platforms; you can actually run pets on it. You can do live migration with it. You can even run NFV workloads. I see some members of the NFV team; Rajiv, for example, is over there. He's the dude you want to talk to about very high-performance, low-latency kinds of scenarios.
But if you want to build cloud native applications, the moral of the story is to also use a cloud native platform. So use OpenStack, but also use cloud native platforms, if you want to fall into the pit of success, so to speak, building cloud native applications and microservices. Great. Of course this would not be a vendor-sponsored talk if I didn't have a slide on the product that we're announcing and are about to release. We have a product called Helion OpenStack 3.0 that encodes a lot of the things I talked about in this talk; it has a lot of the capabilities and incorporates a lot of those lessons. Now, I'm not going to say anything more about the product in this talk. There are plenty of other talks you can go to if you want to learn more about things like Monasca and some of the underlying capabilities I described, like security. We have a lot of community talks here; we're participating in about 45 or 50 talks. And we also have a set of talks tomorrow that go into more detail on the kinds of things we're doing: security and the patterns there, lifecycle management, and so on and so forth. There are a bunch of great talks. I think it's in this room, right? In this room tomorrow, pretty much a full day of talks. The last thing I want to say is a shout-out to the folks who've created a really nice experience across the street in the textile building. It's at the corner of Third and... what is it? Third and Trinity. We have a whole experience there that lets everyone take a deeper look under the hood and see what we've actually done. So with that, I want to thank you all for coming to this talk. I encourage you to have a great time at the OpenStack Summit here in Austin. And I'd love to take a couple of questions if you have some, because I think I still have a little bit of time left. No questions at all? There's one over there.
So the question was, where can we find the Ansible playbooks? There is a repo on GitHub. Anybody in the audience know? It's github.com/helion-os, I'm pretty sure. How do we do federation? Do you mean identity federation? So it's interesting. We actually ran a public cloud, and as probably most of you in this room know, we got out of our public cloud business late last year. But we had multiple regions, two big regions in the United States for example, and we had a single logical Keystone service, but we actually replicated databases between those regions. So we have a set of patterns that we know have been very successful for us. The underlying OpenStack technology continues to evolve there, and we think it's pretty ready for federated identity, but that's not something we've built out of the box; it's something you still need to do a little bit of configuration tweaking for. We'd be happy to talk to you offline. In fact, if you go to the textile building, that's where most of the engineers are going to hang out, and you can ask a whole bunch of technical questions there. It starts tomorrow, and it's literally right across the street from the convention center. Sorry, it's hpe-helion-os on GitHub. Fantastic, the tweet machine is here, Mr. Stephen Spector. Any other questions? Fantastic question: when are we releasing the new version of the Helion Development Platform to go with Helion OpenStack? So one of the things we did when we shipped Helion OpenStack was ship a product called Helion Development Platform. The philosophy there was to build the best kind of IaaS-plus-PaaS stack, with Cloud Foundry, the multi-vendor open source ecosystem, wired in for the PaaS layer, and OpenStack for the IaaS layer.
As we switched to a much more multi-cloud focus in our vision, we decided to make that Cloud Foundry layer work on top of essentially every IaaS. It should run on top of AWS, on top of Azure, on top of OpenStack, on top of vSphere, etc. And so we basically refactored the product. There won't be an HDP as you know it, with things like Trove and Zaqar and all of that, running on top of Helion OpenStack 3.0. We sell a product called Helion Stackato; that is the heart and soul of the development platform. Helion Stackato 3.6.2 runs today on top of any version of Helion OpenStack, and it will run on top of Helion OpenStack 3.0. We're also in the process of taking all the greatness out of Helion Development Platform, things like the code engine and some of the other things we're working on, and effectively shipping them in a new package that we've codenamed the cloud native application platform. That's an upcoming release sometime later this year. But for now, the idea is to run Stackato on top of HOS, Helion OpenStack, to get that integrated IaaS-plus-PaaS experience. Other questions? Going once? Going twice? All right, thank you very much. Oh, sorry, one more. So the question is about integrating OPNFV. The answer is we have a whole product for that, called Helion Carrier Grade. That is the product that integrates OPNFV and NFV elements with OpenStack. And again, go find the team that works on Helion Carrier Grade at the textile building tomorrow. Well, thank you very much. Much appreciated, guys.