So good morning, everyone. Mike and I are going to tag-team this: I'm on the architecture side, Mike's on the product management side, and Chris really helped set us up, because the first thing I want to talk about as we get started is that OpenShift is a platform. Kubernetes is what we think of as a kernel, and the analogy really extends to the idea of a distribution. Today you go get your Kubernetes, you add a few things to it, and everything works great. The future is much more complicated than that, and I think that's what we were starting to see at the end of Chris's slides: more and more pieces coming together to help you build these very large software platforms for running applications, for doing network virtualization. The complexity of these stacks is going to be high.

One of the things that we really believe in is the transition from platform to distribution, which is the idea of taking the core pieces of the Kubernetes ecosystem and the projects that exist around it, the satellite projects, those hundreds of open-source projects that Diane had in her slide, and curating and bringing those in as they become stable, secure, and well integrated. Focusing on experience, focusing on install, management, lifecycle. These are really important things to us at Red Hat. We have that experience from the Linux kernel, and we think it's a really key part of what OpenShift is: not just a platform, but also a distribution.

And just so we can marry the platform to you in the audience: raise your hand if you're from OpenShift engineering at Red Hat, or you work with Kubernetes. These individuals are here to start interacting with you, and if you've never met a code contributor to the project, this is your opportunity to meet them. On the customer, partner, and ISV ecosystem side: how many people are in financial services, or touch money in some way?
Okay, and how many are pharmaceutical or manufacturing? And how many are aviation, or utilities and telecom? So those are typically the primary mixes that we see in the customer and ecosystem base. Definitely talk to each other, and we'll help facilitate that conversation today.

Okay, so there's far too much for us to cover, so Mike and I are going to swiftly cover a massive ecosystem of projects and features and exciting things coming. I will absolutely forget things, so if you find me or Mike later in the day, come up to us; we're very happy to talk to you ad nauseam about everything exciting that's coming in the ecosystem.

So, community first. That's the core, the heart of Red Hat, which is about community. Kubernetes 1.8 was bigger than the last two releases, Kubernetes 1.7 and 1.6, combined. There's a huge amount of excitement around Kubernetes the project, but a key part of what we've been talking about is that Kubernetes as a project needs to grow, and it's going to grow by creating a thousand seeds and sending them off into the wind. It's going to be really important for us to manage the transition away from the big monolithic Kubernetes project that a lot of people think of, where you go get these binaries and that's Kubernetes. That is going to change, and how it changes is going to be many more projects working together, collaborating in the open-source ecosystem, in much the same way that you see with Linux.

The focus for us has been a lot on stabilization, but it's also been about forming that healthy community and keeping it going. So the Kubernetes steering committee was formed, and elections were held a little bit earlier this year.
So Kubernetes now has a permanent steering committee, with members elected to one- and two-year terms, and this is a group of people who are intended to help the community move forward. The SIGs in Kubernetes are very important; these are the groups of people who help contribute and drive the project forward in all the different areas: networking, cloud providers, storage. There is a new top-level SIG as of the last few months, SIG Architecture, and this is intended to be a place where we can set the direction of Kubernetes the core and also help identify what is and what isn't core Kubernetes. That will help make the transition I talked about, from platform to distribution in the ecosystem. We're working to formalize some of our processes for making change happen in Kubernetes, and this draws on many of the same successes and experiences of previous large open-source projects.

Typically, when we go out and talk to users of these technologies, they fall into four camps. One of the camps is your next-generation applications, your microservices; typically your lines of business that are trying to move faster. That's very much greenfield. Then we have a large brownfield footprint of revenue-generating applications that are using technologies today and have to merge that with what they did yesterday. And then we have next-generation IT ops, and these women and men are trying to reorganize their data centers.
Maybe that means moving on-premise workloads to a public cloud; maybe that means something else, but they're looking for technologies to help them do that. And then there's the transformational: the digital initiatives, the CTO office. When we look at that four-part footprint, we're looking for technologies that we can use to help solve all four of those use cases, and a lot of these patterns come out of that. Content is king: if we didn't have the ability to give you this content, then the platform wouldn't really shine, and if the platform didn't shine, we really wouldn't give you a way to have that new content.

So we have next-generation cloud-native services. We have a new concept with our middleware business units: how are they going to have a service on the platform, instead of just an application on the platform? Then we have a lot of low-latency, HPC-evolution-type features, which I'll get into a little later. We have new models for developers to work in with their dev flows, and we have a lot of install and upgrade management features to get into in this slide deck. So hold on to your seat and let's get into it.

First off, while we're still talking about the platform: stability is probably the most important thing, being a reliable foundation. If you don't have a reliable foundation, there's no point in building something on top of it. All those services and features that we talked about before depend on having a core platform that is stable.
So in Kubernetes 1.7 and 1.8 and 1.9 there was a very strong focus on fixing bugs and moving features into stable patterns. But there was also another focus, and this is something that on the Red Hat side we were very focused on, which is that production matters: refining and tightening and polishing the system at scale, in some of the most demanding environments in the world, and making sure that we have a good foundation to build on for the next several years.

I'm going to give a couple of examples. Most of these are specifics that we got out of our very large OpenShift Online environments, as well as from customer feedback. Kubernetes relies on events as a way of notifying users about what's going on. You see an event stream: if your application is crash-looping, if you get detached from a node and sent someplace else, if a build fails for some reason. What we actually realized in some of the largest and densest environments is that very unhealthy applications were clogging up the pipes; they were sending too many events. So, as part of our experience with these very large clusters in our online environments, we worked in the upstream Kubernetes community to refine it, put a good pattern in place, and work with others in the community, like Google, to set a long-term direction for where we wanted events to be. The actual mechanics are lots and lots of low-level details, but we tried to fit it into the overall whole: allowing people to understand what's going on in the platform at a very fundamental level, and keeping that core feature in place while continuing to refine and polish it.

A side effect of that was also very dense clusters. Many OpenShift users run extremely dense clusters, where they're not just running one microservice application with ten individual components. They're not running five.
They might be running thousands of these applications, and when you have thousands of applications running together, there's a lot of metadata and operational policy that comes along with it. So one of the things that we've worked on over the last year, and given a lot of special focus, is anticipating where users are going with both Kubernetes and OpenShift and laying the foundations, so that when we get to these very dense scales, when customers grow to tens of thousands of applications, tens of thousands of microservices working together, all of the groundwork has been paved for them in both the open-source Kubernetes community and OpenShift.

So we added a number of features that make it easier to deal with very large data sets from the API perspective. Anyone who's using an API in Kubernetes will benefit from this, but it also enables some of what I'll talk about in a little bit, which is making Kubernetes a platform for extension, where people can bring new types of infrastructure APIs. We talked about Istio; we talked about Radanalytics. Anyone who's building APIs on top of Kubernetes will also benefit from some of these improvements. It's just a way to make sure that the effort we invest enables everyone, as a platform.

Monitoring has been a big thing. It's a fair criticism of OpenShift that we didn't initially focus on operational monitoring with built-in product tools; we worked with early adopters of OpenShift on their own solutions. Starting in OpenShift 3.7, we integrate Prometheus, which is a CNCF project that you'll hear a lot about this week at KubeCon, or at this CloudNativeCon, excuse me.
You'll see that discussion about Prometheus. Prometheus is a great ad hoc, near-term metrics solution. We've worked in very practical environments to integrate Prometheus very deeply, to make sure the data is flowing up, but also to keep an eye on those early cases we knew of where people are monitoring the platform. We want that information to flow not just into Prometheus, but into CloudForms and some of the other tools and technologies that our customers have already built around OpenShift. We use these metrics to help guide some of the optimizations we've talked about, to really focus on that large-scale Kubernetes and OpenShift experience.

So I'm going to jump through some of these and leave some time at the end for questions. If you see something here that we skip over, please don't hesitate to ask; we'll make sure there's some time at the end.

A big part of Kubernetes and of OpenShift is about efficiently using resources. One of the things that we've always heard from users is that they're looking to build applications rapidly, but from an operational perspective they want to make sure that those resources are used effectively. A key part of that lifecycle is understanding what's running where, which Kubernetes is pretty good at today, but then there's the flip side: what resources applications are using, CPU, memory, disk, and optimizing the platform to ensure that all applications are getting a reasonably fair share. So there's a lot of work going on in Kubernetes and in OpenShift around this. Some of the high-level, near-term things: we're really working to standardize the core system metrics. There have been projects for a long time, you may have heard of Heapster and cAdvisor; we're looking to turn those into formal APIs so that other components can depend on them, like the scheduler, and we're going to use that to tie the platform back to itself.
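The "formal APIs" direction described here landed upstream as the resource metrics API (metrics.k8s.io), served through the Kubernetes API aggregation layer. As a rough sketch of what registering such an implementation looks like (the backing service name and namespace here are illustrative of a metrics-server-style setup, not the exact OpenShift 3.7 wiring):

```yaml
# Sketch: registering a metrics implementation as a formal, versioned API
# behind the Kubernetes API aggregator, so components like the scheduler
# or autoscaler can consume metrics.k8s.io instead of scraping Heapster.
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  version: v1beta1
  service:
    name: metrics-server       # assumed name of the backing service
    namespace: kube-system
  groupPriorityMinimum: 100
  versionPriority: 100
  insecureSkipTLSVerify: true  # fine for a sketch; use caBundle in production
```

Once registered, any client can list node and pod metrics through the normal API server endpoint, which is what makes the data a dependable platform primitive rather than a side channel.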
So when you're running very dense clusters, you'll be able to benefit from knowing exactly how much of the resources are in use at any one point on the cluster, and feeding that back in. Autoscaling on custom metrics, on application metrics, is also really important to us, and you'll continue to see that evolve over the next few months.

Extensibility, which I touched on just a few minutes ago, is fundamental to a platform, because a platform that isn't extensible has to keep implementing features until it implodes under its own weight. Our goal with Kubernetes and OpenShift is to build an open platform for application-focused infrastructure, not infrastructure-focused infrastructure. The point of the API is not to go get a VM; the point of the API is to run your applications. We want it to be easy for people to use those tools to build new APIs that are very application-focused. We talked about service mesh with Istio: many of the things that we've been doing on extensibility will be leveraged by the Istio project to add additional service policy to regular Kubernetes applications, to upgrade applications, and to inject intelligence into this. As a long-term arc, the more extensible Kubernetes is, the more it becomes like the Linux kernel: a key part of the system, where the system is much more than the sum of its parts.

Yeah, let's get to some networking. Before I do: how many people are designing, or part of, clusters that are under a hundred nodes? And then how many are above two hundred and fifty nodes?
Cool. So I'll let you know, from a global point of view, that in 2016 I would say we had the majority of our population planning for around a hundred nodes. In 2017 it was really two hundred and fifty to five hundred nodes as the sweet spot, and now you're telling us that we should get prepared for the sweet spot to be a thousand. Right now we're testing maximums at two thousand nodes per cluster, so you've got a lot of headroom to grow there. But it really ties back to what Clayton's talking about with density when you start getting into those kinds of numbers, and when you start thinking that there are on average between 50 and 70 containers running on any given node. Those are pretty impressive density numbers.

On the networking side, as we approach those higher numbers we start to see some inefficiencies in iptables that we're working on. There's a lot of engineering effort on replacement technologies, and right now it looks like IPVS is the front-runner for what we could use to really speed up how the kernel processes those rules.

In terms of network policy itself: how many people have tried Kubernetes network policies? It's a pretty exciting technology, and if you haven't touched it, please take some time to at least read some proposals about it. It allows you to look at pod labels and really control who's initiating traffic to what services, and that opens the door to a whole new level of granularity of control on the network that we've just never had. It's now fully stable in Kubernetes and in OpenShift, so it's something you can definitely take part in. Where it's growing is on egress, on how we leave the cluster. There's a lot of clever things coming to bear there.

Yeah, and I would actually add to that. Egress has always been something that we've heard about from customers, because many people are not deploying Kubernetes and OpenShift in isolation. They're integrating these into their existing environments slowly, bringing pieces of their infrastructure to bear, and that slow addition means there are many policies and setups that organizations already have in place. So we spent a lot of our time on the OpenShift angle specifically, focusing on use cases like: I've got a legacy rack of databases with very specific firewall policies; how can I make sure that only the appropriate applications connect to those, because I have a corporate policy, an exact corporate security checklist item, that says this is how I have to do security for those databases? Taking that kind of feedback, working within the Kubernetes ecosystem, and building it into OpenShift and other projects in the ecosystem has been a really big focus for us, and in 3.7 we have the ability to have an IP address per project. So now you can really hone down on those firewall rules.

And then IPv6. How many people are under the gun to get IPv6 out this year? 2018 is starting; how many people have to have it in 2018? Yes, about 5% of the room. Well, you must be lying to me when we're on the phone, because it's extremely important. You know, it's coming from three areas.
It's coming from government, it's coming from telecom, and it's coming from the OpenStack community, which just got IPv6 support, I think, about six months ago. We run a lot on OpenStack, so those synergies are coming together, and it's a big enough population for us to really push it forward. It is fair to say, though, that we've been mostly held up by the cloud providers. So if all the cloud providers in the room could get their act together and get IPv6 supported, it would make us a lot happier.

On the storage side: StatefulSets really came into their own, I'd say, in the last two months or so; we really closed the gap on some corner cases that were left on the table. Those types of applications, those databases, are typically looking for local storage, right? They want that high throughput on the host, but you don't want to be designing a distributed Kubernetes cluster around where things are physically attached. So the scheduler needed to be made a little smarter about what is connected to those nodes, so we can dynamically schedule something based on that and still have our PV and PVC concept, that same user experience, with these local storage devices. That all landed in alpha, and it should be ready to use in Kube 1.9. In OpenShift 3.7, which just came out in November, it's in tech preview.
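Upstream, the local-storage work described here surfaces as a PersistentVolume with a `local` source plus node affinity, so the scheduler knows which node the disk physically lives on. A rough sketch follows; the exact fields shifted between the alpha and beta APIs, so treat this as illustrative, and the path and node name are made up:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv        # illustrative name
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1       # hypothetical disk mount on the node
  nodeAffinity:                 # tells the scheduler where this disk lives
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1        # hypothetical node name
```

A tenant still just files a PVC against the `local-storage` class; the scheduler places the pod on `node-1` because that is where the volume is attached.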
So please, it's in the product, give it a try. The last thing there is resizing and snapshotting. Snapshotting also became tech preview in 3.7. It gives your tenants the ability to snapshot their PVs based on the underlying storage technology. So if it's an AWS EBS type volume, you now have the ability to snapshot based on that underlying technology, as a tenant on the platform. So definitely take a look at that.

Overall, one of the things that has been a strong focus for us is that we deal with the hybrid world, as Chris alluded to. It's not just software on one side of the equation; it's a very complicated world, and there are different degrees of demanding applications that need to run, depending on whether you're running a database, a stateless application, or a machine learning framework. What we're trying to do is set a path in Kubernetes for some of these core concepts, and allow, through extensions and other features, more complex solutions to evolve. This will continue to evolve, but I think it's useful to say that the arc we're on is to make everything possible, and some things very easy, and to give people the tools they need to build much more complex and sophisticated things. Down-to-the-metal, cache-sharing sorts of optimizations are still possible, but they won't be an immediate focus in Kubernetes for the next year or so.

We talked a little bit about CRI-O. We think that the container runtime is really important, and we think it's really important that the container runtime be designed to work well with the container orchestrator. CRI-O, for us, was an opportunity to look at the design of a container runtime and how it fits extremely well with Linux and with Kubernetes, and to focus on the kinds of optimizations and release patterns and processes that make sure that, no matter what happens, every time a Kubernetes release is cut, there's a container runtime that works perfectly with that version of Kubernetes. It's in tech preview in OpenShift 3.7: the ability to run CRI-O on your nodes as the container runtime instead of Docker. Over the next few months we're going to be much more aggressively using this in our very large environments for testing, and we want to get a lot of really good performance and reliability feedback before we move to the next stage with it. And the design principle we're going after here is to make sure you have choice. As you start to get into these next-generation container runtimes that are more focused on the orchestration layer, we want to make sure you have the ability to choose one of them, and you'll always have that choice with our solutions.

So I don't want to rush us, but we have a ton of things here. These are more runtime features, details of the platform; they make applications run better. Platform features are the other side of this equation: what are the things that actually make developers' lives easier? Service brokers are a huge part of that, and we're pretty much in the age of the service broker at this point. We got very attracted to service brokers because our community wanted us to give a better user experience for how a tenant connects his or her application services together. That attached itself very rapidly to this age of wanting to bring cloud-provided services into the data center, and at the same time to house very large corporate services on the platform and allow other tenants to consume them. Those three pools of requests came in and formed what we now call the service catalog in the 3.7 product.

Service catalog and service brokers, if you're unfamiliar with the concept (and Chris covered it quite nicely with all those diagrams), allow you to design how a tenant is going to bind, unbind, provision, or deprovision a service when he or she wants that in their application. Where we need to work next is around that last step, injection. Right now we capture that in config maps and in secrets, and when you are ready to attach or bind that service to your application, you are given a list of secrets to connect to that service. So we have one more step, to do that last step automatically for you; that should come in the next release or so. And then on granularity: right now all the services are the same for everybody. We want to make sure that Mike Barrett is allowed to have different services than Clayton, and that Mike Barrett is on a different, say, AWS or Azure payment program than Clayton. Clayton's a hog when it comes to spending.

And just like the service catalog: install and upgrade. I talked about stability and reliability, ensuring that we make every upgrade of OpenShift and Kubernetes work extremely well. And I'll be totally honest: OpenShift deals with a large number of very different environments and very different customer requirements, and that is our focus. Our installers are intended to work on every cloud provider, on every bare-metal platform, everywhere that RHEL runs, and so we have to work through those cases and make sure they're supported as well. The next version of OpenShift is actually going to be OpenShift 3.9. It is going to do a rolling upgrade through Kubernetes 1.8 directly into Kubernetes 1.9, so it's, in a sense, a catch-up release that will be a rolling upgrade. There are a lot of interesting things in Kubernetes 1.8, but there are even more interesting things in Kubernetes 1.9, and we figured it was a good time to do that. And there's a subtlety there: that's us skipping a release for the first time, which meant the installer had to be smart enough to do that behind the scenes for the first time. So I know a lot of you in the community want to make sure that you can skip releases; right now we're forcing you to do that serially, and this is the first engineering project that solves that for you.

A little bit about reference architectures: we're expanding the amount of examples and guidelines for best practices for installing on the different cloud providers. A really key initiative underneath that is moving to a more cloud-native model for machines. We've always had to bridge the gap between bare metal, where there is no dynamic provisioner for a bare-metal box, all the way up through VMs and into cloud providers. Starting with OpenShift 3.7 and working into OpenShift 3.9, there's going to be a lot of focus on a cloud-native approach to individual machines, which really just boils down to: the machines themselves will be stamped out of images. This will be the standard way we install and deploy. Machines come up, they connect to the cluster, an administrator can either auto-approve them or run them through the process, and they'll join the cluster. This means that things like autoscaling and dynamic scaling of your cluster, as well as rolling updates and canary updates of whole new fleets of machines, become much easier. Again, this is all about our focus on reducing the operational load. There's a lot of great work in the Kubernetes community that paved the way, and Red Hat continued to invest in those community efforts and take the next step for OpenShift, because from the very beginning we have always had very strong security around our nodes, and some of the last bits for that finally became possible in Kubernetes 1.7 and 1.8.

Management: we've always given you the ability to use our ManageIQ open-source project, which is productized here at Red Hat as CloudForms. We've taken it extremely far in this next release. We've podified it.
We run it fully supported in containers; it fits a template deployment pattern on Kubernetes at this point. It is just a management API now on your Kube cluster, and it has an amazing number of features that we can start really pushing into operational best practices. It has chargeback. It has the ability to pull those Prometheus metrics and show them to you in a unique way, to connect those to our Ansible Tower product lines, and to automate a lot of the more sophisticated things for you out of the box. CloudForms should really help in dealing with multiple clusters and bringing that information together in a single unified dashboard.

So we're starting to run a little bit low on time, so I'm just going to tease here. There's a ton of great work that's coming in Kubernetes 1.10. We're going to obviously continue scaling work, bug fixes, extensibility, and improvements. If I had to say one thing that was really important, again I'm going to go back to resource metrics: it's about making the system just work, the autonomous aspects of Kubernetes that will let you walk away from a cluster and have everything continue to be taken care of. There are even some things that aren't on this slide: Red Hatters have been working on fencing for bare-metal and VM environments.
That'll make it easier to automate recovery actions. And there's a ton of work going in over the next few releases to close that loop between application author intent, the operational platform, and the operational policies that administrators have put in place around quota, resource usage, overcommit, and reliability, and to tie that back in so that the system can do more to manage itself.

So let's talk a little serverless. Chris brought this up, so I'll get a little deeper into it. When we talk to the OpenShift and Origin community around serverless, what they're really looking for is an opportunity to have a different pricing model, and what we can bring to the table, by taking a serverless technology like Apache OpenWhisk, bringing it onto Kubernetes primitives, and making it a user experience with OpenShift, is that pricing model. So when you look at function-based computing: you have all your functions designed for your application as microservices, you deploy them out, they hit pods that are running those runtimes, and they execute on those pods. Now, wouldn't it be great if they were able to idle, if they were able to use HPA with custom metrics? There are a lot of cost functions we can apply to bring them down and bring them back up for you, to really blend in with the rest of your container environment. That's what our customers are really asking us to provide with the OpenWhisk integrations.

And I'll note that those are some of the same additions, those improvements around idling and resource usage. Idling has actually been in the OpenShift product since the 3.2 release, but idling, the ability to reduce resource usage and to spread workload over time, is going to be really important. It is going to be a key focus for us. We were doing it on services; now we're doing it on functions.
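The scale-up and scale-down described here rides on the horizontal pod autoscaler, whose control loop is, at its core, a simple ratio calculation: scale replicas in proportion to observed metric over target, within a tolerance band. A minimal sketch of that calculation in Python; the tolerance and clamping mirror the documented upstream defaults, but this is an illustration, not the actual controller code:

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float,
                     min_replicas: int = 1,
                     max_replicas: int = 10,
                     tolerance: float = 0.1) -> int:
    """Sketch of the HPA scaling rule: scale proportionally to the
    ratio of observed metric to target, within a tolerance band."""
    ratio = current_metric / target_metric
    # Inside the tolerance band, leave the replica count alone.
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas
    # Scale proportionally, rounding up, then clamp to the allowed range.
    desired = math.ceil(current_replicas * ratio)
    return max(min_replicas, min(max_replicas, desired))

# 4 pods each seeing 200 req/s against a 100 req/s target -> double to 8.
print(desired_replicas(4, current_metric=200, target_metric=100))  # 8
# Load falls near zero: clamp down to the minimum.
print(desired_replicas(8, current_metric=1, target_metric=100))    # 1
```

With a custom metric like in-flight requests, the same rule is what lets function pods ride down toward the minimum when traffic disappears, which is the cost story being described.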
It's a huge, awesome world. Application config: this is the idea that config is a very complicated thing. There are many different ways to define applications, from the very simple microservice all the way to something like OpenStack, and there's no one-size-fits-all solution. An effort going on in the Kubernetes community is to try to blur the lines so that we're using common tools, so that we have common ways of talking about what applications are, and to look for patterns that can be reused in multiple ways. If you're deploying giant, massive applications, you may want to deploy that giant, massive application all at once. If you are a bunch of individual teams, you may want to reuse the same tools that a giant project is using to deploy everything, in your own individual spot, and each individual developer might want the flexibility to customize their tools. So we'll continue to evolve how applications are defined as a long arc. We don't think this is by any means the end of where we'll go with application configuration; we want to give people the tools they need to build applications and maintain them over time. I think this is the last one.
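One existing example of a reusable application definition in the OpenShift world is the Template, which stamps out a set of parameterized objects; teams can share one definition while each developer customizes the parameters. A minimal sketch, where the template name and parameter are purely illustrative:

```yaml
apiVersion: v1
kind: Template
metadata:
  name: hello-app               # illustrative template name
parameters:
  - name: APP_NAME              # hypothetical parameter a team would override
    value: hello
objects:
  - apiVersion: v1
    kind: Service
    metadata:
      name: ${APP_NAME}
    spec:
      ports:
        - port: 8080
      selector:
        app: ${APP_NAME}
```

The community effort described above is about finding common patterns like this that work from a single microservice up to an OpenStack-sized deployment.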
Hopefully, yeah. So, how many people are going to go to an Istio talk this week? I think a third of the conference material is on Istio; look through the agenda. It's a pretty popular technology. From our Red Hat point of view, when we're talking to our customers, it falls into either a north-south or an east-west conversation. On the north-south side, we've been championing HAProxy for quite some time, and we have a list of requirements that people want us to close the gap on, and these next-generation lightweight web proxies close a lot of those requirements for us: we get to dynamically change URLs, we get to change certs in a much more automatic fashion, so it's a huge leap there. It also does HTTP/2, which is coming up quite a bit.

On the east-west side, this is interesting. Whoever thought that you were going to put a web proxy in front of every single application service? That would be insane if you didn't have containers, and if you didn't have a container platform to accomplish it. That's voodoo; you don't put a web proxy in front of every application service. But if you did do that, holy cow, look at all the things that fall out of it. Now you can meter it, you can control who has the ability to talk to what, you get privacy, there are circuit-breaker concepts, and it solves the number one thing that Netflix and its OSS components failed to solve: thou shalt not make the application developer develop to the platform. That was the number one cardinal rule, and Istio solves that for us; the tenant can now be blind to a lot of those things.

So, we know there are probably questions. Diane is certainly giving me the evil eye up there. If you'd like to catch us after the session, we'd be more than happy to talk, and Mike and I will be around all day. We really appreciate your time. Thank you very much. Thanks.