There, without any further ado, I want to bring up one of the newest Red Hatters, and one of our very favorite ones as well, Clayton Coleman, the lead architect for OpenShift, and Brandon Philips, who has joined us from CoreOS and is now a Red Hatter. So I'm going to let them give you the State of the Union.

All right. Good morning, everyone. So Diane, I think, ruined our big surprise, which was that we bought CoreOS. I don't know if you've heard about that in the last couple of months. So I'm Clayton Coleman. I'm Brandon Philips. When we talk about the State of the Union, normally this talk is about what's happened in the six months or the year since we last gave it. But I figured that when we talk about union, we're talking about CoreOS and Red Hat coming together: what are the things that matter to the people in this room? So we'll give a real high-level overview. To me, the idea behind the Red Hat-CoreOS merger was really about taking two groups of people who care very deeply about the technology in Kubernetes and the open source ecosystem, bringing them together, and being able to build something greater out of it. I think when this went through, there was this idea that we were bringing the two smartest sets of people in Kubernetes together. I don't know if that's actually the case, but I'd like to think so.

So before we get into the technical weeds, I wanted to give a little context on where CoreOS came from, how we thought about the space, and our overall mission as a company and as a set of products. There are 3.5 billion internet users today, and at any point, as a rough estimate, about 29 million practitioners of IT and development in the world. The simple fact is we're horribly outnumbered: the number of people using the internet, and the number of people coming onto the internet, versus the people who are experts in how the whole thing works.
And if you think that we can just educate our way out of it, or that we can expand the number of jobs and catch up, that's simply not the case. A quarter of a billion people come onto the internet every year. These are just staggering numbers. And the fact of the matter is, everywhere you go across the globe, everyone talks about mobile and the cloud, but those are really just a lot of things that end up putting a lot of important data on servers. So at CoreOS, with these staggering numbers in mind, we thought about how to make sure that the roughly 100 million servers that exist worldwide are well managed. How do we ensure that the people in charge of managing them have the tools to do the best possible job, by security and by their customers, ensuring uptime for those three and a half billion people? And what we came up with was that we needed to change the way enterprise software is delivered into people's data centers and onto public clouds.

The culmination of what we've been doing over the last few years is this idea of operators. We'll talk about it in the context of the OpenShift and Kubernetes platform, but also in the context of the applications on top. Really, what it comes down to is that no matter where you end up running your application across the hybrid cloud, we want to make sure the experience is the same, the application infrastructure around your app is the same, and you're able to manage it in a way that removes a lot of the complexity and lets you simply use the infrastructure necessary to make your application and your team successful. So that's a little bit about what the CoreOS mission was about. We'll get into some of the weeds now.
And on the flip side, on the OpenShift mission: for those of you who've been with OpenShift for a very long time, we started out as a platform as a service. Platform as a service is about making things easier, deploying software specifically. There have been lots of attempts at platform as a service, and lots of different platforms as a service, because different requirements and different use cases mean that people build things slightly differently. For us, the opportunity with Kubernetes and with OpenShift, starting just three years ago, was that we wanted to find the easiest way to ship the most software. Iteration has to be easy. We want to reduce the difference between development and production and make deployment repeatable and automatable. Kubernetes and containers gave us powerful tools that can be composed together to build applications, with just enough of the standard patterns that we all use, while still allowing the things on top to have their own complexity and flexibility. We've seen that in the Kubernetes ecosystem: even in the last three months, there have been something like 15 or 20 different projects for building images or for making it easier for people to iterate locally and then push to a Kubernetes server. This is the idea of empowering developers while still keeping that under operational control, and having the tools that we as operations teams need to understand what our users are doing, to keep applications running, to standardize, and to provide a common security base. All of those pieces, I think, have really been borne out. Maybe three years ago, when we stood up and announced OpenShift v3 GA, there were some doubters. I would guess that for most of the people in this room, there are not as many doubters as there were then. There are thousands of OpenShift clusters worldwide, we've done nine releases, and the ecosystem is exploding.
And so that idea of making it easy to go from ideas to production as quickly as possible is where the complement with the CoreOS mission really came in. If one side of the story is about keeping control of the world with automation, because it's just going to grow out of our hands, and the other side is about empowering developers to be as efficient as possible, but still in a form that's under control, what's the next step?

Sure. So essentially our shared mission, and what we'll be talking about a lot over the next year or so as we bring products together, is automated operations. The way we thought about automated operations is essentially this: cloud is two parts. There's the traditional hosting business, where you rent some servers from somebody. And then there's the other part, which I think is the interesting property that has caused the explosion in cloud usage: I make an API call and now the service is available to me; I make an API call and that service is now updated. Overall, we call that automated operations. And that's the shift in enterprise software that I think we share as an interesting change, particularly as you start to think about hybrid cloud over the next year or two.

So there are a lot of things that could be automated, and we're being somewhat vague today because there are a bunch of big announcements coming tomorrow and in the summit sessions; I'll have a list at the end for everybody to go see. So we're going to stay really high level, tease you a little bit, and try to get everybody to go to the sessions. We don't want to steal anybody's thunder. But the things we want to automate are the applications and the platform, whether that's the operating system, Kubernetes itself, the install and update mechanisms, or the infrastructure.
Again, all of these things exist to serve us, the developers, and us, the operations teams that keep all of this working. On the application side: development and test, and being able to deploy anytime, around the world. And not just one type of software, not just really limited applications, but every kind of software. And keeping those up to date, because again, I ship software, and once it's out in the world, my job's not done. Because of that problem Brandon spoke about, each application is another point of vulnerability for my organization if I don't know where it is and can't deliver fixes to it over time.

All right, so what I wanted to show here is the automation that we ended up building into CoreOS Tectonic, which was the name of the Kubernetes product we had built at CoreOS. Seems like... there we go. All right, so this is the CoreOS Tectonic dashboard. When you first log in, we have a few things that are useful and interesting. Right away, you see high-level metrics. One of the things we wanted to do right off the bat was ensure the system gave you monitoring information: whether the system is up to date, whether the system is healthy, and so on. So that's the first thing you land on after the login screen. But one of the unique properties, going to this whole concept of automated operations, was that CoreOS Tectonic had a one-click update experience. If you're familiar with the experience on your iPhone or your Android device, where you click a button and then your device updates, CoreOS Tectonic had a very similar experience. Now, this cluster has automated updates already running, so the last upgrade that we released a few weeks ago has already been applied to it. But it's really an interesting experience for somebody who's spent their career in enterprise software: this cluster has been running for nearly nine months now.
And as somebody who's been involved in the product, I get to see things go from mocks on our UX and design teams' laptops all the way through to actually being delivered on my cluster, every time I hit the button to update to the latest version. This is a large change in how software is delivered, but I think an important one if we're thinking about software that runs across hybrid cloud environments.

The way we actually make this all happen is pretty interesting, and a little recursive; it's 9 AM, so it may hurt your heads a little bit, but bear with me. What we actually do is run the entire platform, from monitoring to the Kubernetes components like the scheduler and the API server, on top of Kubernetes itself. All right, so what you see here is the Kubernetes scheduler, the thing in charge of scheduling applications across the cluster, and it's all just running as pods on top of Kubernetes. That gives us a bunch of really interesting automated properties. One is that the exact same software we use to monitor your application and alert on your application's health is the software we use to monitor the health of the cluster and alert on it. So we get all of these properties automatically, for free: what is the CPU and memory usage of the scheduler, for example? Or, if I need to write a piece of software to update the cluster, I can use the Kubernetes API to do that. So that's how we've built automated operations in Tectonic. It's all built against the Kubernetes API. And this is a sneak preview of some of the ideas that will be flowing into OpenShift and complementing some of its features over time.
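The payoff of self-hosting can be illustrated with a small sketch. This is a hypothetical model, not Tectonic's actual code: because the control-plane components are ordinary pods, one monitoring code path covers both the platform and user applications.

```python
# Illustrative sketch (hypothetical data and names): when the control plane
# runs as pods, the same monitoring logic applies to platform components
# (kube-system) and user applications alike.

PODS = [
    {"name": "kube-scheduler-0", "namespace": "kube-system", "cpu_mcores": 40,  "ready": True},
    {"name": "kube-apiserver-0", "namespace": "kube-system", "cpu_mcores": 250, "ready": True},
    {"name": "webshop-7f9c",     "namespace": "default",     "cpu_mcores": 120, "ready": False},
]

def unhealthy(pods):
    """One code path alerts on platform pods and app pods alike."""
    return [p["name"] for p in pods if not p["ready"]]

def cpu_by_namespace(pods):
    """Aggregate CPU usage the same way for the scheduler as for any app."""
    totals = {}
    for p in pods:
        totals[p["namespace"]] = totals.get(p["namespace"], 0) + p["cpu_mcores"]
    return totals

print(unhealthy(PODS))         # the failing app pod shows up
print(cpu_by_namespace(PODS))  # scheduler/API server usage, same query as apps
```

The point of the sketch is only that there is no second monitoring system for the platform: the cluster's own components are queried through the same abstractions as workloads.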
And to add to that complement as we go back to the presentation: since the beginning, the focus of Red Hat has been making Kubernetes the best place to run applications, bar none, and building on top of that a layer to make it easy to deploy and run containerized applications. As we worked with Brandon and the rest of the CoreOS team, it was really obvious that we had picked complementary approaches: automating the platform by using those same pieces. Every bit of investment that Red Hat has made over the last four and a half years has been about making Kubernetes the best place to run those applications. And it turns out that a platform that's really good at running applications is also really good at running the platform itself. Each of these points has made me really excited about what's going to be coming this year and next, because every one of them builds on the strengths we already have to deliver for every end user, and reinforces them. Every time we make life a little bit better for a user building applications on top, we make administering the platform that little bit easier.

The last thing I want to touch on is that this isn't just administration of the Kubernetes system. The namesake of the company is the operating system we built, CoreOS, and we put these automated operations all the way through the platform, down to the individual host in the cluster. So this is just a looping demo: when a machine gets an update, we actually coordinate that update through the Kubernetes API as well. You get this full system view, all the way down to the hardware or the virtual machine, of whether the system is up to date and what the state of the update is for each individual host, and you can actually control the rollout.
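The coordination just described, advancing an operating-system rollout while only letting a bounded number of machines go down, can be sketched roughly as follows. This is a simplified model under assumed data structures, not the actual CoreOS update coordinator:

```python
# Simplified model of coordinated node updates: only allow a new node to
# start updating while fewer than `max_unavailable` nodes are already down.
# Node records and field names here are hypothetical.

def plan_next_updates(nodes, target_version, max_unavailable=1):
    """Return the names of nodes that may begin updating right now."""
    updating = [n for n in nodes if n["state"] == "updating"]
    budget = max_unavailable - len(updating)
    pending = [n for n in nodes
               if n["state"] == "ready" and n["version"] != target_version]
    return [n["name"] for n in pending[:max(budget, 0)]]

nodes = [
    {"name": "node-a", "version": "1.8", "state": "ready"},
    {"name": "node-b", "version": "1.9", "state": "ready"},
    {"name": "node-c", "version": "1.8", "state": "ready"},
]

# With max_unavailable=1, only one outdated node is released at a time.
print(plan_next_updates(nodes, "1.9"))
```

In the real system this state lives in the Kubernetes API (node annotations and drains) rather than in memory, which is what gives the cluster-wide view shown in the demo.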
The big difference between these update experiences on your phone and in the server environment is, obviously, that we have to keep the server environment running. It's slightly inconvenient when your phone is down for three minutes; in 2018 it's probably the worst feeling in the world: you don't know where you need to go, you're not in touch with anyone. But on the server side, we're able to control these things so that only one machine, or a handful of machines in the cluster, is down at a given time while the updates are happening, really taking advantage of the distributed nature of Kubernetes.

All right, this is your baby. So the other thing we've been thinking about over the last five years is how to deliver a platform that has this concept of automation. But the next bit is: how do we make it easier for open source projects, for ISVs, for really anyone, to deliver software on top of Kubernetes, for the sole reason of supporting applications? Applications have a bunch of different pieces that are necessary, whether that's databases or load balancers or caches. And those pieces of software today, at least for the top 95% of the market, are really easy to deploy from a cloud provider via an API call. But we wanted to think about the long tail, all the other systems that exist. So last week at KubeCon, we introduced a concept called the Operator Framework. Operators is the term we have for cloud-native applications, where applications like caches or databases or complex stateful applications or load balancers or machine learning systems can be wrapped up into a Kubernetes API and deployed on demand, to support an application, by anyone who has access to a Kubernetes cluster. And over time at CoreOS, we really wanted to see more and more of these operators come into existence.
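At its core, an operator is a control loop: it watches a custom Kubernetes API object describing a desired service and drives the cluster toward that state. A minimal sketch of that pattern, with a hypothetical `Cache` resource rather than any real Operator Framework API:

```python
# Minimal sketch of the operator pattern: reconcile observed state toward
# the desired state declared in a custom resource. The `Cache` resource and
# all field names are hypothetical; a real operator would watch a CRD and
# act through the Kubernetes API instead of returning a plan.

desired = {"kind": "Cache", "name": "sessions", "replicas": 3}
observed = {"pods": ["sessions-0"]}

def reconcile(desired, observed):
    """Return the actions needed to converge on the desired state."""
    actions = []
    have, want = len(observed["pods"]), desired["replicas"]
    for i in range(have, want):                 # scale up if short
        actions.append(("create", f'{desired["name"]}-{i}'))
    for pod in observed["pods"][want:]:         # scale down if over
        actions.append(("delete", pod))
    return actions

print(reconcile(desired, observed))
# plans creation of sessions-1 and sessions-2 to reach 3 replicas
```

Wrapping domain knowledge (how to scale, back up, or upgrade a database) inside this loop is what makes the service "an API call away" for anyone on the cluster.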
And at Red Hat, we have the horsepower to ensure that happens. So this is an open source project to ensure delivery of not just a platform filled with automated operations, but supporting services and applications on top of that platform.

And it's really about extensibility. If you're going to build a platform, it's critical that the platform can actually support the use cases on top of it. From the very beginning, Kubernetes has had this idea that it is a small core of a distributed operating system for the cloud, and each of the pieces you build on top of it should complement the other pieces. So if you bring in a new service, for instance to let people get access to Vault at scale, or to let people deploy their own monitoring applications, it's not just the people consuming it who benefit from the effort that gets put into it; everybody can benefit equally. The pieces we've built into Kubernetes over the last year or two, features that folks may be familiar with like custom resources and API aggregation, make it easier and easier to extend Kubernetes. This is the next step in that story, because we're building not just the tools you need to extend it, but the concepts, and putting them together in a framework that makes it easy and approachable.

Yeah, so to give a practical example: two years ago, we introduced a thing called the Prometheus operator, which makes it really easy to deploy instances of a monitoring service on top of Kubernetes. What we hoped was that if getting monitoring for an application was just an API call away, application developers would monitor their applications more properly. As a proof point, Ticketmaster, which is a customer, ended up making the Prometheus operator available to all their users across all their Kubernetes clusters.
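Custom resources are what make that extensibility concrete: a team declares a small object, and an operator expands it into real workloads. A rough Python model of that expansion (the actual Prometheus operator consumes a `Prometheus` custom resource; the field names below are simplified for illustration):

```python
# Rough model of custom-resource-driven extensibility: a small declaration
# (what a team writes) is expanded by an operator into concrete pod specs
# (what actually runs). Field names are simplified, not the real CRD schema.

def expand_monitoring(resource):
    """Turn a declared monitoring resource into concrete pod specs."""
    return [
        {"pod": f'prometheus-{resource["name"]}-{i}',
         "image": "prom/prometheus",
         "scrape_targets": resource["serviceSelector"]}
        for i in range(resource["replicas"])
    ]

declared = {"kind": "Prometheus", "name": "team-payments",
            "replicas": 2, "serviceSelector": {"team": "payments"}}

for spec in expand_monitoring(declared):
    print(spec["pod"])
```

Because each team can write its own declaration, hundreds of independent Prometheus instances can exist without a central team hand-running any of them, which is exactly the Ticketmaster scenario described next.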
They ended up with about 400 copies of Prometheus supporting their applications, making it possible for the application engineers to set up alerting and monitoring and support themselves with the services they need to make sure their applications have good uptime. And this is just an evolution of the story we've believed for a really long time: we're building a platform that makes it easy to take ideas, turn them into reality, and keep them running. Automation, having those capabilities available, is a key part of that.

So I drew this really quickly and just threw a bunch of buzzwords on there, whatever the buzzword of the last six months is. All of these ideas, all the pieces that make up an application, will change, and people will evolve and build new tools and new ways of taking advantage of them. Serverless is up here because that's the hot topic today: how can I make these really simple applications run easily and at scale? For us, when we talk about the things you'll hear this week and over the next year, we think of these as just points along a spectrum. The underlying idea is making it easy to build an application, deliver it anywhere in the world rapidly, keep it up to date, and keep the platform it runs on secure. At the heart of it is automated operations, and we will continue to build upon that for a long time to come.

So, we alluded to some of the sessions Brandon and I are speaking in. These are the top four if you're interested in some of the deeper details on the ideas we talked about here: the CoreOS and Red Hat session on Tuesday; the Kubernetes and the platform of the future talk that Brandon and I are giving with Steve Watt at 3:30 on Tuesday, which will go into some of the deeper details of the trends here; and Container Linux and Red Hat Enterprise Linux, the road ahead.
That's where we'll talk about the story for the operating system pieces that Brandon alluded to. And the OpenShift roadmap on Wednesday is always a perennial favorite. So we're at the end of the prepared material, and Joe is going to come and join us.

All right, guys. So hi, everybody. I'm Joe Fernandes; I run the product management team. Mike Barrett and I will be running around with mics, so if we have time for questions, just raise your hand. I do have a couple of questions to help clarify things for the audience. So, Brandon, you showed the Tectonic console, and I know one of the things we've been doing is bringing that together with the existing OpenShift console. Do you want to talk about how those come together and which users they're targeted at?

Sure. The focus of the Tectonic console was really people who are administering Kubernetes objects: people who care about deployments, about application monitoring, about cluster settings, things like identity, upgrades, and role-based access control at the cluster level. That's been the focus there, which goes back to this idea that a lot of these things are complementary: the OpenShift console, since the beginning, has been focused very much on the concerns of the application engineer and the application developer. So these two consoles serve two different purposes. Our story is always going to be that if you fit a role on the platform, there's an experience that works really well for you. As we'll talk about in some of the sessions coming up, where those two users blend, you'll actually see those pieces coming together; and where there's value in focusing on one of them, like a cluster admin, we'll have a really strong experience around that particular user.
Yeah, and at the OpenShift roadmap session, Mike Barrett will show you a demo of how these things are already coming together. We actually have mocks where the consoles are converged. It'll be tied to your role: if you log into OpenShift as a user, as a developer say, you'll see the OpenShift console and the service catalog and be able to act as a user. If you log in as an administrator, you'll get the user's view, but you'll also get all of the administration views that Brandon showed: the state of your cluster, the state of the services, the things a cluster admin should have access to. So it should be seamless, just based on how you log in.

The other question I get a lot is about the Operator Framework and service operators. Last year, we were spending a lot of time talking about the service catalog and service brokers. Do you want to talk about how those two concepts, the Kubernetes service catalog and operators, come together?

Yeah, at a really high level, the service catalog has always been about consumption: about making it easy to offer your individual developers a suite of services that most of the time are run by a central operations team or central infrastructure team. And in a sense, when we make these concepts Kubernetes-native, an operator is no different from any other tool a developer might have. So I think you'll see that, again, it's more about the use case, who you are, and how you consume software than about the deep details of where the technology sits.

Yeah, one way of thinking about an operator is to imagine you had a public cloud provider that allowed you to inject new services. You'd be able to say: well, this cloud provider doesn't provide a Cassandra service, so I'm going to go find somebody who's an expert in Cassandra and plug a Cassandra service into this public cloud.
That doesn't exist, except in the case of Kubernetes and this concept of hybrid cloud. So what we're trying to do with operators is make it so that you can find a component, an operator, inject it into your OpenShift cluster, and then make whatever new service your application developers need available to them, whether through a service broker or something else, an API call away. That's the analogy we're going for with operators.

Yeah, and to make one more plug: tomorrow afternoon, there'll be a Red Hat keynote that also features some of our partners, like Microsoft and IBM. We'll actually be doing a demo of the Operator Framework with one of our ISV partners, Couchbase. And to echo what these guys said, the service catalog is about end users consuming those services. For those of you who have used it, there are provision and bind operations to make that seamless, and it deals with on-platform services as well as off-platform services. Last year, we did a demonstration with Amazon, and then at AWS re:Invent, we launched a service broker where you could provision and bind to Amazon services from OpenShift itself. But what you'll see tomorrow in the keynote answers this question: when you're running a service on the platform, who operates it for you? The developer who wants to consume Couchbase is not the DBA who needs to run it. At the end of the day, we need to be able to automate the operations of the services and the apps, not just the operations of the platform. So please try to attend that keynote, and you'll see, I think, a pretty cool demo of how this all comes together.

Sue, how are we doing on time? Well, you're done. All right. Well, thanks. And didn't I do a good job of getting you all to ask questions? Right. All right. Thank you all for coming. We'll see you. Thanks.