Hello and welcome. I'm Jason Venner, VP of Architecture and Technical Marketing for Contrail Cloud at Juniper Networks. I've spent probably the last six years working in the OpenStack space. I was the chief architect at a company called Mirantis for five years, and prior to that I built a couple of versions of eBay's and PayPal's clouds. We're going to talk about the OpenStack advantage for the hybrid cloud world. I'll talk a little bit about the architecture I expect you're all moving towards. We'll talk about the advantages of the public cloud, the advantages of the private cloud, and then of course hybrid cloud. I'll go through a few case studies and consumption models, talk a little bit about things to avoid so you can learn from others' mistakes, and we'll have a question and answer session. I've spent a lot of time helping companies through the digital transformation from legacy into the cloud world, so those are all valid questions to ask.

So this is the token architecture slide, since architecture was mentioned in the presentation, about the kind of model I expect you're going to end up in. You want to end up in a place where you have a PaaS layer that's ubiquitous across all of your clouds, so that the behavior and APIs for your applications and services are consistent across all environments and your operational and management tools can be the same. You want a cloud management platform driving all of this across all of that, and you want a continuous delivery pipeline feeding changes in. You'll also need fairly comprehensive monitoring and analytics feeding back into your smart orchestrator and into your data warehouse, so you can make decisions about which products and services you want to extend. In a continuous delivery model you're making your investment decisions much more quickly, rather than the legacy model of "I'm going to build this and in three years my customers are going to experience it."

Public clouds: why do we love public clouds? Public clouds work. You no longer have people running around your data center chasing hardware failures, chasing network issues, trying to figure out why an API server for the infrastructure is down. Everything is there all the time, and there are no capacity constraints. You don't have to deal with your procurement people or your data center placement people; you can get the capacity you need when you need it. There are all the services you could ever imagine. The downside is that you can end up with unexpectedly large monthly bills, and if you start to code to the APIs your cloud provider is offering, you find yourself in a situation where it becomes very expensive to move out. If you need special hardware, or you have pet applications that require higher levels of uptime, you're also usually unable to run those in a public cloud. Finally, there are the data protection requirements. There are a lot of companies today that really want to keep their data in their own data center, inside their firewall, and they don't trust it in a public cloud. Also, I haven't seen an Amazon API for mainframes yet, and many of the financial services players are still running their general ledger systems on a mainframe. And trying to explain to your security compliance team that the servers run somewhere else, and they have no access to the infrastructure, can be a very difficult thing.
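To make that point about a ubiquitous PaaS layer concrete, here is a minimal sketch in Python of the kind of abstraction I mean. Every name in it (ServiceSpec, CloudDriver, the driver classes) is hypothetical, not any particular product's API: the application describes what it needs once, and a small per-cloud driver translates that into each environment, so the continuous delivery pipeline and the operational tooling stay identical everywhere.

```python
# Minimal sketch of a provider-agnostic deployment layer.
# All names here (ServiceSpec, CloudDriver, etc.) are illustrative,
# not a real product API.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class ServiceSpec:
    """One description of a service, reused unchanged across every cloud."""
    name: str
    image: str          # container image, identical in every environment
    replicas: int
    cpu_cores: float
    memory_mb: int

class CloudDriver(ABC):
    """Each environment gets a driver; application teams never see it."""
    @abstractmethod
    def deploy(self, spec: ServiceSpec) -> str: ...

class OpenStackDriver(CloudDriver):
    def deploy(self, spec: ServiceSpec) -> str:
        # A real driver would call the OpenStack APIs here.
        return f"openstack://{spec.name} x{spec.replicas}"

class PublicCloudDriver(CloudDriver):
    def deploy(self, spec: ServiceSpec) -> str:
        # A real driver would call the public cloud provider's APIs here.
        return f"publiccloud://{spec.name} x{spec.replicas}"

def deploy_everywhere(spec: ServiceSpec, clouds: list[CloudDriver]) -> list[str]:
    """The CD pipeline calls this once; behavior is consistent per cloud."""
    return [cloud.deploy(spec) for cloud in clouds]

if __name__ == "__main__":
    spec = ServiceSpec("checkout", "registry.example/checkout:1.4.2", 3, 2.0, 4096)
    print(deploy_everywhere(spec, [OpenStackDriver(), PublicCloudDriver()]))
```

The design point is that lock-in lives only in the thin driver layer, so moving a workload between environments never touches application code.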
In addition, I've spent a lot of time building systems in the financial services sector, and anything that can be used as a cash equivalent becomes a high-value target for hackers. In today's world we have state-sponsored hackers going after our banks and other critical infrastructure components. Keeping these things securely inside your firewall makes a lot of sense. I suppose many of you remember the YouTube video of somebody hijacking the control system on a Jeep Grand Cherokee about a year ago. You don't want that happening to the X-ray machine that's setting up to do a CAT scan on you. You don't want it happening to any of the medical devices that are doing invasive things to your body. We see robo-surgery systems coming up fairly fast, particularly for people in remote locations, and you want to make sure that no one is getting at those. So these really belong in highly secured private systems.

So, for the private cloud: I've installed more OpenStack clusters than I want to think about, starting in the Cactus timeframe. It's painful, it requires a lot of skill, and keeping it operating requires people who are system administrators, network administrators, Python programmers, and OpenStack experts. If you're a small player, that's a lot to take on. If you're a hyperscale player, it makes a lot of sense. The other interesting problem for most OpenStack users is that the distro vendors will come in and say: I don't care what you're using to manage your current physical infrastructure, I don't care what you're doing to manage your Linux distros, I don't care about your patch management service. You need to give me a bunch of bare metal servers, I'm going to PXE boot my own Linux distro on them, I'm going to take over your network switches, which always goes over well with your network team, and I'm going to wire this cloud together and it's going to be beautiful. Doesn't happen.

This is one of the reasons I think you saw at the summit that this is the summit of managed OpenStack. Going back a step, I've been on a lot of cloud projects where the time from "I want to do this OpenStack project" to the first real customer production workload can take a year, sorting through all of the hardware, physical infrastructure, and operating issues. With managed OpenStack, we see people up and running inside of a day, and once you have managed OpenStack, your experience is very much like the public cloud, but it's inside your perimeter. It just works. You have somebody to call, they're maintaining it to SLAs, you consume. You have to worry about capacity management, but that's about it.

Hybrid: we're in a situation where, truthfully, you don't want to be locked into any platform, whether it be a public cloud or OpenStack itself. You want to be able to run your workloads on the platform that makes sense for you, at the time you choose. This implies that you're going to be in a hybrid cloud situation at all times in the future. More to the point, you may have infrastructure on vCenter that's not moving, so you're always in a multi-cloud, multi-environment situation. And there's nothing like having leverage over your vendors.
One of the key challenges in a multi-environment situation is that unless you carefully orchestrate your PaaS and operations layers, you end up having to learn multiple stacks. That goes back to the problem we had with some of the earlier OpenStack installs, where you had the traditional stack your IT organization knew how to operate, and you had this OpenStack stack that nobody in your organization understood. When you start operating in five or ten or fifteen environments, it becomes very important that you have a consistent behavioral experience across all of them, for everybody. Did everybody enjoy the interop challenge results? Yeah. One of the things you saw in that was people moving up to a PaaS layer to provide the consistent behavior, so we see Kubernetes really becoming the PaaS of choice.

So this is a distillation of a lot of experience over the years: where do I want a particular workload to go? You really want highly dynamic workloads to run in your public clouds, because then you don't have to maintain huge reserve capacity for them. You can say "I have an analytics job and I need 50,000 servers for a couple of days" and do that in the larger public clouds. Tell your IT or data center person that you need 50,000 servers for a week, and it's not going to be a fun day. Anything with a critical security requirement, you're going to run inside your secured firewall. Your compliance people will be much happier, and if something does happen and your CEO has to go in front of Congress, they'll be able to say, look at all the precautions we took. And workloads that require consistent performance often do better on dedicated hardware. I'm sure the public cloud providers will tell you that large MPI jobs, like computational fluid dynamics jobs, will run just fine in the public cloud. I used to run weather simulation applications for NASA, and on a dedicated OpenStack cluster I got roughly ten times the performance I got in the public cloud, because of the variability in the performance of the different virtual machines, even when I was selecting for 10-gig non-blocking switch connectivity between the servers in the cluster. 10x makes a big difference. Dev/test really needs to span all of your environments, because if you're not developing and testing on something that is functionally identical to your production environment, your testing is irrelevant. How many times do we hear "it worked on my laptop"? In the world we're building, of continuous delivery across multiple clouds, that's a critical fail. That's like leaving your wallet full of cash on the subway and expecting everything to be wonderful.

And more to the point, all of this needs to be automated; you can't be manually intervening in your environments. Way back in the pre-DevOps days, I was running search at a small company and we were making a major transition in our search. I wrote up an 18-step transition plan for each of our search clusters for the operations team, because they were the ones that touched the hardware. Every night they would do one cluster, and I'd come in in the morning and they would have skipped steps seven through nine, and I would have to go manually fix the clusters. They did this repeatedly, for 18 clusters. It's like, dude, what? If I had actual automation for that, it wouldn't have been a problem. And in our world today, 18 is a small number; thousands is probably more realistic.
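Here is a hedged sketch of those placement rules as code. The workload fields, tier names, and priority ordering are my own illustration of the guidance above, not anything from the slide:

```python
# Illustrative workload-placement policy encoding the rules above.
# The Workload fields and tier names are assumptions for this sketch.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    highly_dynamic: bool         # bursty, e.g. 50,000 servers for two days
    security_critical: bool      # must stay inside the secured firewall
    needs_consistent_perf: bool  # e.g. tightly coupled MPI jobs

def place(w: Workload) -> str:
    # Security trumps everything: compliance-sensitive work stays private.
    if w.security_critical:
        return "private-cloud"
    # Jitter-sensitive work goes to dedicated hardware.
    if w.needs_consistent_perf:
        return "dedicated-hardware"
    # Bursty work goes where you don't pay for idle reserve capacity.
    if w.highly_dynamic:
        return "public-cloud"
    # Everything else: whichever environment is cheapest right now.
    return "cheapest-available"

for w in [Workload("analytics-burst", True, False, False),
          Workload("general-ledger", False, True, False),
          Workload("weather-sim", True, False, True)]:
    print(w.name, "->", place(w))
```

And note that dev/test deliberately has no branch of its own: it has to run in every one of these tiers, or the testing isn't valid.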
Particularly as you move into the IoT world, where you're starting to run edge services globally, we come back to the need for true continuous delivery, and for testing that is relevant and valid for your production environment. Your test environments, your deployments, and your data sets need to be consistent, so that you know what you've just tested will behave the same way in your production environment. If people are ever touching something in your production environment by hand, you've essentially failed, because you no longer have a reproducible scenario, so you can't replicate it across all of your environments reliably. And more to the point, if you have to go through a compliance audit, you don't really know what happened.

I've been a firm believer for many years that the holy grail of these modern applications is that there are no user-visible failures. The key word is user-visible. You're continually going to be experiencing failures in your environment, but if the user experience remains consistent, it doesn't matter. So how do you mitigate the visibility of failures? You're watching your environment, and you, or your automation, are reacting before something actually causes user impact. That means predictive analytics based on what's happening in your platform, and this is both hardware and software metrics as well as business-level metrics. If you see a sudden change in the rate of transactions in your system, it's likely something's gone wrong. If you see a sudden change in the ratio of successful API calls to failed API calls, something is going on and you need to intervene. If you can catch that early, very few users are significantly affected.

The other thing I'll say, and this is a real driver of cost in the modern enterprise: you see a lot of startups where all of the developers are what I'd call full-stack developers. They are the network administrator, the Linux administrator, the database administrator. They're deploying the virtual machines and the applications and wiring all of this together, and then they're actually coding their application features on top of that. Those people are hard to come by, they're expensive, and they're usually doing all of this manually. What you need is a team that builds the templates for this on top of your PaaS, so the infrastructure for your applications is consistently deployed everywhere, and your developers can focus simply on the business logic they're working on, while their IDE takes care of integrating it with your environment and testing it.

I had an early example of this with a financial services provider, where it took an experienced developer roughly two weeks to build a test environment to replicate a problem they were experiencing in production. Not a new developer, an experienced developer who was very familiar with their platform. And that environment would be approximately correct; it would never be accurate. We built an infrastructure templating system for them, where it was a click of a button saying: I need the version that was running in production on this day, deployed in this environment. And we had a brand-new person come in, deploy an environment, and run the whole application in a live demo on their platform. Something that had never happened before. So this makes a huge difference for your business.
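As a rough sketch of what such point-in-time templating can look like, here is a toy version in Python. The manifest store, the dates, and the deploy call are all invented for illustration; this is not the system we built for that customer:

```python
# Hypothetical sketch of point-in-time environment templating:
# "give me production as it was on this day, in this environment."
from datetime import date

# Versioned manifests: what was running in production on a given day.
MANIFESTS = {
    date(2016, 10, 3):  {"checkout": "1.4.2", "search": "7.0.1", "db-schema": "42"},
    date(2016, 10, 20): {"checkout": "1.5.0", "search": "7.0.1", "db-schema": "43"},
}

def manifest_as_of(day: date) -> dict:
    """Return the newest recorded manifest at or before `day`."""
    candidates = [d for d in MANIFESTS if d <= day]
    if not candidates:
        raise LookupError(f"no manifest recorded on or before {day}")
    return MANIFESTS[max(candidates)]

def deploy(environment: str, day: date) -> None:
    """One click: recreate production-as-of-`day` in `environment`."""
    for service, version in manifest_as_of(day).items():
        # A real system would drive the PaaS deployment API here.
        print(f"[{environment}] deploying {service}:{version}")

deploy("debug-env-1", date(2016, 10, 15))
```

The essential property is that the manifest, not a human's memory, is the source of truth, which is also what makes the compliance-audit story answerable.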
And going back: once you have that, it enables you to run in multiple clouds. So we've got a couple of case studies. Two of these are courtesy of Platform9, one of the managed OpenStack vendors. They work with a Swiss educational institution that really wanted to start doing more DevOps so they could deliver more educational features to their users faster. They decided they didn't want to be in the business of learning and managing OpenStack, so they shifted to a managed service provider. OpenStack worked for them and they didn't have to worry about it, so they could focus on just delivering the applications. Very good. This gave them predictable performance and predictable costs; they didn't have to worry about sudden unexpected bills late in their fiscal year, and as an educational institution, they run pretty tight. It was a good choice.

We all see the turmoil in the healthcare industry in this country, and a lot of that is because they have to find ways to save cost. So they're all working on innovative applications for claims processing, health management, everything else, and they're adopting a DevOps process so they can deliver features faster. A managed private cloud gave them the ability to do this without, again, being experts in the cloud, and to consume multiple clouds safely.

This one is interesting for me, because I've worked for this company as an employee and as a consultant multiple times, and I've built multiple cloud projects for them over the years. I can remember, in the pre-OpenStack days, thinking vCloud Director was going to save my life. Then I spent a quarter trying to get access to a vCloud Director environment so I could try my automation, and it didn't happen. So we shifted to this weird company, at the time, called Rackspace, and built an automated application management platform on that. We used to take their data centers down, because at that point their cloud infrastructure was built for people making one or two changes a day, and our automation was making a hundred changes a minute. They got upset with us and said, hey, try this OpenStack thing. And that actually became one of the first and largest consumers of OpenStack, back in the Bexar timeframe. Today there are probably 36,000 nodes of OpenStack there, on the latest version. Their core goal was to be able to take code that was ready to run and get it into production in 15 minutes. Prior to that, short of an act of God, it was about 45 days, because of the capacity planning, all of the manual steps involved, and the hundreds of tickets to provision server space, network space, firewall rules, et cetera, across their infrastructure. They exceeded that goal a long time ago.

The other point, and this is another piece about automation that I probably haven't mentioned before: many of you are dealing with environments that have some kind of seasonal peaks. Either you build out your infrastructure to support your peak, and most of the year you run at 5% capacity because it's too expensive to manually change your provisioning, or you have automation that lets you scale capacity out and back, and you can run your infrastructure at much higher utilization. They went from 5% to 60% utilization on their servers, and their goal is now to get up to 85%. When you're dealing with tens of thousands of servers, that's a lot of infrastructure. Completely aside from that, I'm with Juniper.
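Before moving on, here is a back-of-the-envelope illustration of why that utilization jump matters so much. The load and cost figures are made-up numbers for the sketch, not anyone's actuals:

```python
# Toy arithmetic: fleet size needed to carry the same average load
# at different utilization levels. All numbers are invented.
AVG_LOAD = 1_500              # hypothetical average demand, in server-equivalents
COST_PER_SERVER_YEAR = 3_000  # hypothetical fully loaded $/server/year

def fleet(avg_utilization: float) -> int:
    # Low utilization means the fleet is sized for a peak it rarely sees;
    # elastic automation lets the same load run on far fewer machines.
    return round(AVG_LOAD / avg_utilization)

for util in (0.05, 0.60, 0.85):
    n = fleet(util)
    print(f"{util:.0%} utilization -> {n:,} servers, ${n * COST_PER_SERVER_YEAR:,}/year")
```

Going from 5% to 60% utilization in this toy model shrinks the fleet by more than a factor of ten, which is why the automation pays for itself at tens of thousands of servers.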
We have a smart analytics product called AppFormix that we're starting to use with orchestrators. What we're trying to do, using predictive analytics, is reduce the amount of capacity we need to pre-allocate for applications. Because the automation can react ahead of a user-visible failure and regain the capacity needed to serve the users, instead of having to provision 50% over, we can provision 20% over. Again: how do we drive cost out of the infrastructure? There's a sketch of that headroom idea at the end of this section.

So let's talk about ways to consume OpenStack in our hybrid model. We talked about do-it-yourself. Unless you're a hyperscale player, this is probably not the right choice. You're really going to need a minimum four-person team of OpenStack experts who are continually going to be recruited away from you, so unless you can afford to drop a million bucks a year on those people, don't do it. If you're choosing a distro vendor, you don't need to be as intimate with OpenStack, but unless the distro vendor is going to operate it for you, eventually your team is going to be up to their elbows in the OpenStack code while operating it, and you're going to be in the same place. Then there are distro vendors that do build, operate, and transfer. If you're a mid-sized player, this is probably a good one for you. They'll come in, build an OpenStack cluster for you, operate it, train your people, and they're likely going to stick around for a while and take care of the heavy lifting. This lets you consume OpenStack very much like a public cloud, but lets you take over operations if you want to, and you can get up and running fast without having to wait for your people to come up to speed. If your IT organizations have been dealing with different environments, this is really necessary. And of course we have OpenStack-as-a-service, which is the watchword of this summit: the best of the public cloud world, in your data center. Many of the vendors can give you people of a specific nationality if you have regional requirements for the operations, and some of them will put people on site. I'm not going to mention any vendors here, but this is a really good choice if your requirements are small or you need to get started fast.

This is one of the joys of being in the first world: we have choice, but choice is hard. You can have your OpenStack entirely in your data center. You can have it in a colo. You can have it on your service provider's hardware, managed by your vendor. That last one is an interesting model for people with big burst capacity needs. If your long-run average capacity runs in your data center at the best price, but you have the ability to burst into your service provider's data center through their metal-as-a-service, where the environment is identical to yours, it lets you reduce the amount of capacity you need to carry internally while still servicing your peak and unexpected demand. We all like unexpected demand.

Our cloud journey: I've been on the cloud journey for probably seven or eight years, and it's still evolving. It's really about finding ways to manage your risk while moving faster. I've spent a lot of time in the financial sector, so I think about a lot of things as risk management. Most of our current practices are about how we manage the risk of failure.
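To make the headroom point from the top of this section concrete, here is a minimal sketch of the idea; this is an illustration of predictive headroom in general, not AppFormix's actual algorithm. With a static policy you provision for the worst case you've seen plus a fat margin; with a short-horizon forecast you provision against the predicted near-term peak plus a thin margin, because the automation can scale up before users feel anything:

```python
# Minimal sketch of static vs. predictive capacity headroom.
# An illustration of the general idea, not AppFormix's algorithm.
from statistics import mean

def static_allocation(observed_peak: float, margin: float = 0.50) -> float:
    """Old model: size for the worst case ever seen, plus a 50% margin."""
    return observed_peak * (1 + margin)

def predictive_allocation(recent_load: list[float], margin: float = 0.20) -> float:
    """Predictive model: forecast the next interval from the recent trend,
    then add a thin safety margin, since automation can react early."""
    trend = recent_load[-1] - recent_load[0]            # crude linear trend
    forecast = recent_load[-1] + trend / len(recent_load)
    return max(forecast, mean(recent_load)) * (1 + margin)

recent = [40.0, 42.0, 41.0, 45.0, 47.0]   # load samples, arbitrary units
print("static:    ", static_allocation(max(recent)))   # 70.5
print("predictive:", predictive_allocation(recent))    # ~58.1
```

Even this crude forecast cuts the pre-allocated capacity substantially; a real system would use much better models, but the cost lever is the same.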
And we build Byzantine, complex processes that take an inordinate amount of time and cost a lot of money as a way of reducing the risk of catastrophic failure. What we see is that the lean model of continuous delivery and DevOps provides another way to manage risk, one that lets us see the results of what we're doing much more quickly and reduces the overall cost of failure. I can remember working for a credit card provider about 18 years ago: we would come together yearly and plan out the IT projects that were going to start in three years. So you had an idea, a critical customer need, five years before the first customer experienced it, if you were lucky. How does that work for people today? It doesn't. We see the hyperscale players where they have a feature idea, and it may be the same day that the code shows up and customers are experiencing it. That's what we're trying to get to: how do we manage our risk and deliver that across multiple environments? Serverless is changing the game; it's going to take a while for people to absorb it. You need to be constantly looking at what's going on in your environment, figuring out where the friction points are in delivering features that matter to your customers, and figuring out which features actually matter. If you have deep attachments to the way things are, life is going to steamroll you. You have to figure out what's actually happening and adapt quickly.

I'm going to wind this up. Probably the most interesting realization I've had in the last year came while I was talking to a friend about SREs. To me, an SRE organization, a site reliability engineering organization, has a sole job: to ensure that your customers have a good experience no matter what, and to drive change backwards to keep that moving forward. The realization is that the average lifespan of an SRE at a job is about nine months. They come in, they say all of these things are broken, and then everybody says, yeah, so what? And they give up and they go. Don't do that. Listen to them. They are your saviors. Thank you. Questions? Come up to the mic.

Hi, Jason. Scott Fulton from The New Stack. Hi, Scott. You talked at the beginning about the benefit of Kubernetes as a platform for consumption, and how PaaS is providing a kind of consumption layer. Earlier here at the show, one of your Juniper colleagues, Chandan, was talking about creating a kind of direct API that would in fact bypass the PaaS layer, or bypass the orchestrator, so that a developer of software could communicate much more directly with the underlying infrastructure and specify exactly what she wants from that infrastructure for particular applications. It's a very different way of thinking. So I'm wondering whether Juniper, or you, has worked out which use case works best with which class of customers. Who needs that PaaS layer, and who needs, by contrast, that direct contact?

I think what we'll see, actually, is the PaaS layer evolving so that you can get either the direct contact, with full control, for those people that need it, or tight guardrails for people who don't have the time, the expertise, or the need. The average developer probably doesn't, but if you're doing some kind of deep learning application that requires GPU access, you want a lot more control over how things are done than if you're doing yet another ad network. I shouldn't say that derogatively; ad networks are fun to work on. Anyone here from an ad network? Anyway. Okay.
But I do think that the goal of a PaaS is to enable people who don't want to deal with the infrastructure to deliver things that can be successfully operated and run from a security and efficiency standpoint. Any more questions? Well, thank you very much.