There we go. We're using Wi-Fi, so nobody get on Wi-Fi right now for a little bit. We'll be all good. Go for it. There's only one demo, and we've got him hardwired. Good morning, everyone. Just a quick intro. I'm glad, by the way, that Diane is on my team and knows what my responsibilities are. I'm Reza Shafii. I come from CoreOS; I ran the product team at CoreOS prior to the acquisition. I've been here now almost a year since the acquisition, and I'm now responsible for the product along with Joe Fernandez, who's also here. So we run the product teams for OpenShift together. I'm happy to be here with Chris, who's going to be joining me in just a bit. But before I bring Chris on board, I wanted to talk a little bit about why it matters, or at least give one perspective on why what we're all doing with OpenShift matters. And by the way, we're going to see a lot of tweets here, because a couple of announcements are coming up. That's where we're speaking. It's kind of interesting to see it on the side. You should get a Mac, Diane. Much easier. That's OK, we'll live with that notification; it's going to be good updates for everyone. All right. So why does it matter? I'm going to go back 15 years to an article that basically said it doesn't matter. I remember being a young consultant at the time; this article had quite a bit of momentum and traction in the industry, and people were talking about how our industry was about to go nowhere. I was actually worried about my career. The argument Nicholas Carr was making was that IT, like electricity and like railroads in their time, was going to become commoditized. Organizations would not need to innovate in IT in order to be successful; it would just be a commodity that you use. So let me get a quick poll from the audience here.
Who thinks, 15 years later now, that Nicholas Carr was right? OK, good. Almost no hands. Well, I would say he was half right. So let's talk about that. I'm just going to look at my last 24 hours. I used Uber to go to the airport and come to the hotel; they've definitely used IT to disrupt their industry. I've done my expenses throughout this short trip with Concur, which, by the way, is a great customer of ours, local here. Again, they have disrupted the way I do my expenses compared to the way I was doing it with other software, say, 15 years ago, which was a lot more painful. In the airplane, I used my phone to access an array of videos and entertainment from United, and that has completely changed the way entertainment is delivered in airplanes. And by the way, I was playing around with OpenShift 4, and in doing so, I was just accessing resources at the infrastructure level on Amazon AWS. On that last one, I would say I was accessing those compute resources like a commodity; I was getting them like a utility. So even though in the first three scenarios Nicholas Carr wasn't right, in the last one, he probably was. So what's happening here? I think it depends how you look at it. If you look at it layer by layer, computing infrastructure, to use the words from the article, is slowly becoming a boring necessity of operations. But if you go one layer up, to the services we use to build applications today, you can see that innovation is still very much alive. I was just coming up the stairs here and seeing technologies like Vitess, technologies like etcd, which, by the way, you're going to hear more about tomorrow at the keynote, not to spoil anything, and technologies like CockroachDB, and the list goes on and on. The innovation at the computing services layer has not stopped.
And if you go one level higher than that, to the applications we're all building in our organizations, the applications you're building to make your business more successful, innovation has definitely not stopped at that level. I would venture to say the same is true for the electricity example used in the "IT Doesn't Matter" article. Yes, it's true that the electrical infrastructure level has become, and probably has been for a long time, a boring necessity of operations. But go one level up: I was just at a Starbucks on my way here, I put my phone on the table, and it started charging. That's innovation at the electrical services level. And of course, at the appliance level, electrical appliances are innovating all the time. Here's the difference, though, between electricity and where we're going with our applications and computing services. If you look at these pictures, they're pictures of very early electrical appliances. You see a hair straightener on the bottom left, a toaster on the right, a food warmer on the top left. And you'll notice something that's maybe a little odd, but common to all of them: they've all got a little light-bulb plug at the bottom. That's because electricity was first brought into homes for lighting. But then people noticed it's just electricity, and I can plug any appliance into it. So they started creating devices, applications, that plugged into the electrical service using that plug. In some ways that was a blessing in disguise, because it provided decoupling from the electrical infrastructure provider. It didn't matter whether it was Tesla or Edison or whoever else providing the electrical infrastructure; your applications were decoupled from it.
I'm worried that, because of the trade-offs we make every day for flexibility and speed, that is not the way things are going in the application and software world, the IT world. In fact, as we tie ourselves to the computing services provided by the cloud providers, whether at the serverless functions level or at the queuing level, those services are deeply integrated with the underlying infrastructure. And those are ties that are hard to break. It's like saying your toaster works with the electricity here in Washington State, but sorry, if you go to San Francisco you can't take that toaster with you; you've got to buy a new toaster. And this is why I think OpenShift matters. If you look at the things we're working on, it's all about getting to a point where you don't have to trade off the flexibility and simplicity of the cloud against the diverse ecosystem of services and the open community that's out there, so that they all operate like a cloud and your applications can be truly portable and move anywhere. That's really it. So what are we doing? Big themes: obviously, the OpenShift team bet on Kubernetes a while ago. Looks like a good bet now. And CoreOS was there with them. We're also talking about automated operations, and you'll hear us talk a lot more about automated operations. It's at the core of OpenShift 4, bringing the simplicity of the cloud from the operating system all the way up, and not just up to the Kubernetes level, but beyond that, to the services running on it: services from all the providers out there, all the open core and open source services, not just cloud provider services, so that they can act and behave like cloud services. Things like the Operator Framework are there to do that, and that's another aspect of what we're working on.
So with that, let me pass it on to Chris Wright, our CTO, to talk a bit more about what we're doing along these lines. Thank you. I should start tweeting to Diane and just have a conversation with myself here. Good morning, and welcome to sunny Seattle. I'm personally from the Northwest, and I recently moved to Boston, and I can say it's really cold out there; it may be sunny, but it actually feels nice here back in the Northwest. What I want to talk to you about is some of the things we're doing in terms of collaboration across communities, a little bit of technology, sneak previews of where we're going, and really thinking about what we're building together. My perspective is that a hybrid cloud is an opportunity to create total independence for your applications from the underlying infrastructure. And if you think about Linux and what Linux did, this is something we've been doing as an industry for quite a long time. Linux created the opportunity to run applications in a consistent runtime environment independent of the underlying hardware. A hybrid cloud gives us that same capability, using Kubernetes to do distributed computing and allowing us to place those applications on public clouds or within your own data center, on a private cloud, or even in virtualized or bare metal deployments. That's really what we're building, and I think that's the most exciting part. If you think about what a cloud is, I would describe it as two key things. One is ease of operations: you've essentially outsourced your operations to somebody else. Run the infrastructure for me; I'm just going to consume it through APIs. That creates a lot of benefits; it's a really efficient way, from a consumer perspective, to use infrastructure. The other key piece of a cloud is what I would describe as differing levels of abstraction, or ways to engage with the cloud, or services that you consume from the cloud.
So a cloud isn't just easy to use; it's a breadth of services. And what I'm really excited about with the hybrid cloud is that we have this opportunity to create an ecosystem of services that run on top of a consistent platform, call it a fabric or whatever buzzword you want to use, and a community for creating this ecosystem of best-of-breed services that we can really run anywhere. One of the things Kubernetes has done a really great job of is bringing containerized applications, the modern, microservices-architected applications, to cloud environments. We haven't done as much work as a community on bare metal. And one of the things I find interesting, over roughly the last year of talking to customers and users, is an increased interest in running Kubernetes directly on bare metal. You could argue we've spent a long time trying to convince ourselves that hardware just doesn't matter, that it's irrelevant, a homogeneous commodity: you just use compute. What I think is happening today is we're discovering that certain types of workloads can benefit from running directly on bare metal. Maybe it's access to specific hardware. Maybe it's a specific footprint or environment you need to operate within, like forthcoming 5G environments with mobile edge or multi-access edge computing, where you have a really limited footprint. So bare metal is something we're starting to look at more within Red Hat. It's an opportunity for us to do some collaboration, and it's an interesting kind of collaboration across communities: we're working with folks from the OpenStack community on some of the bare metal provisioning pieces, really trying to create a bare metal environment with the same ease of use, fully automated and easy to upgrade, that we're building for virtualized and especially public cloud environments.
So to me, this is a new phase for Kubernetes, a new opportunity, a new way to look at what we're doing with Kubernetes: deploying into a new environment, on bare metal. Another piece I think is interesting is the opportunity to think of application deployment as managed by a single cluster manager and scheduler. We've been working on a project called KubeVirt for quite some time, and when you combine something like bare metal provisioning with KubeVirt, you get an environment where containers and VMs are peers and first-class citizens, sitting right side by side. The applications we've spent the last decade building in virtual machines don't have to be abandoned; we can integrate them into a single environment, again managed simply. Reza mentioned operators and this kind of ease of deployment. I think that's a really cool opportunity: a kind of converged infrastructure that gives us a place to run applications independent of the virtualization technology, a next-generation way to look at infrastructure for applications. I already mentioned the performance-sensitive pieces, but a great example there is the use of GPUs and FPGAs for machine learning workloads. There's a machine learning SIG here in OpenShift Commons, and one thing we're hearing a lot from users of Kubernetes, to put it simply, and I think we would probably all agree: Kubernetes has won. Kubernetes is the de facto standard for cluster management and scheduling of applications across Linux clusters. That means the entire industry is collaborating and focused on a common platform. As an example, bringing data-centric machine learning workloads to that same platform is something we're seeing a lot of interest in.
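To make "containers and VMs as peers" concrete, here is a hedged sketch in Python of a plain Pod and a KubeVirt VirtualMachine expressed as sibling Kubernetes objects managed by the same API machinery. The resource names and disk image are invented for illustration, and the exact `apiVersion` for KubeVirt depends on the release installed on a given cluster.

```python
# A containerized app and a KubeVirt VM, side by side as Kubernetes objects.
# Manifests are shown as Python dicts to avoid any client dependency.

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web-frontend"},  # hypothetical name
    "spec": {"containers": [{"name": "web", "image": "nginx"}]},
}

vm = {
    "apiVersion": "kubevirt.io/v1",  # version varies by KubeVirt release
    "kind": "VirtualMachine",
    "metadata": {"name": "legacy-backend"},  # hypothetical name
    "spec": {
        "running": True,
        "template": {
            "spec": {
                "domain": {
                    "devices": {"disks": [{"name": "root", "disk": {"bus": "virtio"}}]},
                    "resources": {"requests": {"memory": "2Gi"}},
                },
                # containerDisk image here is a made-up example
                "volumes": [{"name": "root", "containerDisk": {"image": "example/vm-disk"}}],
            }
        },
    },
}

def kinds(objects):
    """Both objects go through the same API server, scheduler, and CLI."""
    return [o["kind"] for o in objects]

print(kinds([pod, vm]))  # ['Pod', 'VirtualMachine']
```

The point of the sketch is only that both workload types are ordinary API objects in one cluster, so one set of tooling, RBAC, and operators can manage them.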
So things like performance-sensitive applications, where we want direct access to a GPU to accelerate the machine learning environment, are something we see community enthusiasm around, and customers are starting to use that as a common building block for their internal applications and infrastructure. Another piece I think is interesting: initially we spent a lot of time asking how big we can scale these clusters. There have been scale labs and a lot of blog posts on how many hundreds or thousands of nodes we can scale to. That's important and interesting, but we also have to consider the impact of a large-scale cluster. There are some fundamental scaling challenges. There's the question of what your blast radius looks like when you've consolidated onto a single cluster. There are geographic issues as you try to spread your applications around the world. There's on-premise versus off-premise when you talk about a hybrid cloud environment. So what we're starting to see is more and more interest in running more, smaller clusters rather than fewer, larger clusters. There are some examples on the screen: edge environments, where by definition you have a lot of different small clusters; locality and proximity to data generation for the machine learning and analytics workloads that thrive on all that data, in this example from a remote rig; security and compliance issues around where your data sits and how you partition your application base; and access to specific kinds of hardware, maybe a specialized cluster using different types of hardware. So there are a lot of reasons you might deploy multiple smaller clusters, and that introduces a new challenge.
Scaling up is hard, and it's fun for performance engineering, but we kind of understand what it means from an identity perspective, a storage perspective, and a networking perspective when it's one large cluster. When we have multiple clusters, we create a new set of challenges for ourselves. This is an area that has been underway in the Kubernetes community for quite some time, and we are slowly bringing this technology into OpenShift, creating a federation capability that will really allow us to deploy clusters where they should go and then deploy applications to the correct cluster over time. And again, we can't trivialize what it means to replicate data across all the clusters where applications may need to land, or what networking looks like when you need to direct traffic to the right cluster; you want it to feel like a seamless interconnection across all of the clusters. These are not native functionalities in Kubernetes today. This is how we can advance the state of the art. Already in OpenShift 3.11 we have the cluster registry, so you can register a cluster as you bring it online. And we're working towards things like workload placement and the policy associated with it, so that, from an application developer perspective, we can ease deployment and have the system take over and push the application to the right location, maybe taking advantage of a hardware-accelerated environment that your application may depend on to deliver its SLA. So this is a view of where we're trying to go with cluster federation, and we can take a look at a simple use case. Here the developer is using the federation control plane to do the initial workload placement, and that placement was pushed to a bare metal Kubernetes cluster. Now, for whatever reason, it could be policy-dependent, or a point in time when you're doing some upgrades.
The developer decides it's time to push that application to a different infrastructure. Here we're showing the application moving to a cloud-based cluster. What's cool about this is that we're pushing the real behind-the-scenes work into automated routines, so you can specify how you want to deploy the application and have the system slowly reconcile things: ensure the data is properly replicated so the application can move to the right location, and get ingress routed to the application appropriately so you're not bouncing all over the place. Ultimately our goal is to make this collection of clusters feel seamless, so it really looks like one large place to deploy applications even though it's managed as separate clusters. If you're interested in that, we've got a couple of folks here. I don't see Paul; he's usually easy to identify. And Ivan. And here's the GitHub repo. We have a demo we've been working on that shows application portability, and it's a way you can get directly involved in federation. Some work has already started, the cluster registry is already there, but there's a lot of work to do to fill out the full vision of completely federated clusters. Another area that shows this de facto standardization around Kubernetes is the work we've seen in the serverless space. Serverless isn't new to the industry; we've been hearing a lot about it, especially in the context of AWS and Lambda. It has demonstrated some really interesting capabilities in making it as easy as possible for developers to create applications or functional logic. In the CNCF serverless working group there are quite a few different projects, and where I see the de facto standardization of Kubernetes showing itself is in the Knative project.
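The placement step in the demo flow above can be caricatured in a few lines of Python. This is a sketch of the idea only: the cluster attributes, policy fields, and cluster names below are invented for illustration and are not the real Kubernetes federation API.

```python
# Toy model of federated workload placement: given a workload's policy,
# pick a target cluster from a registry of known clusters.

# Hypothetical cluster registry entries (not the real cluster registry schema).
CLUSTERS = [
    {"name": "onprem-baremetal", "location": "us-west", "gpus": True,  "kind": "bare-metal"},
    {"name": "aws-east",         "location": "us-east", "gpus": False, "kind": "cloud"},
]

def place(workload, clusters):
    """Return the name of the first cluster satisfying the workload's policy."""
    for c in clusters:
        if workload.get("needs_gpu") and not c["gpus"]:
            continue  # hardware-accelerated workloads need a GPU cluster
        if "region" in workload and c["location"] != workload["region"]:
            continue  # data locality / compliance constraint
        return c["name"]
    return None  # no registered cluster satisfies the policy

print(place({"needs_gpu": True}, CLUSTERS))    # onprem-baremetal
print(place({"region": "us-east"}, CLUSTERS))  # aws-east
```

In the real system this decision would live in the federation control plane and be continuously reconciled, so that changing the policy (say, requiring `us-east`) causes the workload, its data replication, and its ingress routing to follow.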
So Knative was launched back in July, and it is literally Kubernetes-native functionality to support the primitives needed to build serverless environments or serverless platforms: connecting to event sources, building images that will be launched based on those events, and scaling from zero and back to zero based on the events associated with the service or function you're trying to support. These are the core building blocks, and we're really excited to announce, just today, a few minutes ago, that we have a developer preview of Knative on OpenShift. Again, this is just the beginning of building an infrastructure that allows you to create functions that run entirely independently of the underlying infrastructure. The specialized socket Reza was talking about, and the potentially vertically integrated stack you find from a single provider, is something we can neutralize and turn into a broad-scale industry initiative using technology like Knative. Then there are operators, an example of creating the ease of use we mentioned at the beginning when we asked what a cloud is. You've heard a lot about operators; this is just an example comparing simple containerization through a cloud service to operators on Kubernetes, which give you breadth of availability across many platforms. And if you look around Commons, there are something like 50 operators. This is a space where we have the opportunity to create not just pure ease of operations but also the breadth of ecosystem support for a hybrid cloud vision. I think this is a really great advance in the state of the art, and it's a place where the ecosystem fundamentally matters. And this all adds up to something, and I've got to give kudos to Reza for the slide and the quote: what we're trying to build is a future vision of automated operations.
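The scale-from-zero primitive mentioned above can be sketched as a toy autoscaling rule, assuming a per-replica concurrency target. The real Knative autoscaler uses sliding windows, panic thresholds, and per-revision configuration; this only captures the core zero-to-N-and-back behavior.

```python
# Toy version of event-driven scaling: replica count follows observed
# request concurrency, dropping to zero when the service is idle.

def desired_replicas(concurrent_requests, target_per_replica=10):
    """Map observed concurrency to a replica count (illustrative target)."""
    if concurrent_requests == 0:
        return 0  # idle: scale to zero, consume nothing
    # Ceiling division so partial load still gets at least one replica.
    return -(-concurrent_requests // target_per_replica)

print(desired_replicas(0))    # 0  (scaled to zero while idle)
print(desired_replicas(1))    # 1  (first event triggers a cold start)
print(desired_replicas(35))   # 4
```

The interesting part is not the arithmetic but where it runs: because Knative implements this on Kubernetes primitives, the same scaling behavior is available on any conformant cluster rather than being tied to one provider's function service.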
It's so simple to use, it runs like a cloud. I use the term autonomous cluster; Daniel Riek, who's here somewhere, uses the term self-driving cluster. We're really moving in a direction where everything we do enables machines to manage the clusters and developers to build the disruptive applications that bring innovation to the industry, providing the platform for the next generation of businesses and opportunities. So with that, thank you for coming to Commons, and I look forward to collaborating with you throughout the week. Thank you.