My name is Chris Wright. I'm CTO at Red Hat and I wanted to talk today about the perpetual pursuit of excellence. And you saw earlier it's in this digital transformation kind of bucket. There are some slides here that are suspiciously corporately branded and I apologize for that. But I'm being selfish and trying to save time. So this is my hypothesis, and I don't think it's really all that contentious: open source today is the source of technology innovation. You look across all of the different projects in open source communities and they are at the cutting edge of technology. Whether it's cloud or machine learning or serverless, all these new development models and infrastructure and platform projects, these are developed in open source. Certainly this is important to Red Hat. Red Hat's business is about building products that are derived from open source projects. And so the more we see this innovation happening in the upstream open source communities, the more excited we are, and the more we're able to bring these new technologies and new innovations to our customers, who are trying to modernize and go through that digital transformation. This is just our reduced view of that same concept: there's a collection of projects, and those projects go through a kind of life cycle from our point of view, starting with the pure upstream project. In some cases there's a point in time where a community does curation or combines different components, and then for us, ultimately, those become products. So you see things like Kubernetes upstream, OpenShift Origin as a combined community distribution, and then OpenShift Container Platform as a product from Red Hat.
So when you look at that path from upstream to productization, there are a couple of points where people start to look at a technology from the point of view of: what can I do with it, how does it drive value, or business value, for me and my company? Starting at the beginning you see the innovation cycle. You've got a kind of DIY point on that time horizon, which is people who are excited about the technology bringing it into their businesses and playing with it internally. You see a productization and standardization phase at the end of that cycle. We, Red Hat, live mostly in the upstream and then in that productization and standardization phase. We work with some customers early on in the DIY space, but mostly we work directly upstream and in productization. That standardization is where you get ubiquity, de facto standardization, something like Linux, which is everywhere at this point. And I think it's worth noting that Kubernetes has won. Kubernetes has been around for a short period of time, it was only announced approximately three years ago, and in that time period it's gone from a new, exciting open source project to a de facto standard for container orchestration in the industry, and that has happened really rapidly. So if you roll back in time: 2001, VMware introduced server virtualization for x86; 2006, Amazon introduced EC2; 2011, maybe, was OpenStack; 2014, Kubernetes; and today Kubernetes is emerging as a de facto standard for container orchestration. I think this is awesome. We had re:Invent and saw a couple of key announcements there really talking about Kubernetes, and in the keynote, the announcement that Amazon is doing EKS was one of the most loudly applauded. You really see the excitement and enthusiasm around Kubernetes. Another piece of this standardization is actual standards.
Standards and open source have kind of an interesting relationship with one another. In a certain sense, if we're all collaborating on the same code base, we create that de facto standardization, and what we really need is formalization of the APIs that we expect to be consistent, stable, and not going to change throughout the lifespan of a major release, or even across multiple major releases, the long lifespan of a project. So for example Linux, which is compliant with standards like POSIX, has a very well understood system call interface, and that system call interface is binary compatible. It's something that has changed over time, but only by being augmented. The core system calls haven't really changed; many of them predate Linux. And my personal opinion is that if it weren't for Linus waking up very grumpy one morning with a broken laptop, because somebody introduced a regression that meant his box didn't boot, we would be in a really different place today for containers. Containers are fundamentally reliant on this well-defined interface between the Linux kernel and user space applications. And if it weren't for that morning when Linus woke up and then decreed, in probably not very nice language, "thou shalt not break user space," that really began something that today we're reaping the benefits of. Standardization in a more formal setting means specifications, which, as we know and love, move slowly, and in many cases the process can become politicized and complicated. So we're balancing de facto standardization through open source code with working in communities like the OCI, where we actually do create some form of a specification, write code to it, and create some commonality across container runtimes and container image formats.
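To make the "stable interface" point concrete, here's a tiny illustration (not from the talk; it assumes a Linux or glibc-style system where `ctypes.CDLL(None)` exposes libc symbols): whether you ask the kernel for your process ID through the C library or through Python's wrapper, you hit the same binary-compatible system call interface and get the same answer.

```python
import ctypes
import os

# Load the C library already linked into this process.
# Assumption: a Linux/glibc-style platform where CDLL(None) works.
libc = ctypes.CDLL(None)
libc.getpid.restype = ctypes.c_int

def pids_agree() -> bool:
    """getpid is one of those core calls that predate Linux and never changed:
    the libc path and the Python path reach the same stable kernel interface."""
    return libc.getpid() == os.getpid()
```

Containerized applications lean on exactly this guarantee: the kernel-to-user-space contract stays put, so the same binaries run inside or outside a container.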
We're particularly excited about OCI having released 1.0 this summer, and we've been working on a project Diane mentioned, CRI-O, which leverages some of that work in OCI and the Container Runtime Interface in Kubernetes being pluggable, to create what we think is a really nice architectural fit with Kubernetes for managing and launching containers. And we've got other projects in that space. You may have heard of Skopeo or Buildah. So there are a couple of different projects around how we can build and launch container images based on open source code as well as standardized specifications. So that brings us here. And I think this is a really impressive, you know, kind of NASCAR logo shot, but it just shows you what Diane was talking about earlier, the breadth of this community. There's a lot of cool stuff happening in the OpenShift Commons community. Clearly OpenShift is built around Kubernetes and containers and container images, but it's also an ecosystem of users and other commercial companies looking to work together to build this common collaborative environment. I think it's really impressive to look at everybody who's involved, and I wanted just to say thank you to everybody who's participating here in OpenShift Commons. Who is here for the first time? Okay, that is amazing. If you can't see, I'd say two-thirds or three-quarters of the room raised their hand as here for the first time, so that's fantastic. Welcome. Actually, I didn't expect that. You surprised me. So, to the digital transformation bit. It's not my favorite topic, only because it feels marketing buzzword-y, but it is a real thing. People talk about it all the time, and despite the buzzwords, there's real business work going on under the hood. What we see with our customers is this move towards speed and agility, initially trying to capture some efficiency internally. So efficiency can be as simple as: use the same standard building blocks, something like Linux.
Use it across your entire infrastructure. Then there's agility and speed: when we start using cloud technologies, we're using APIs to manage infrastructure. We can automate our work, use containers to deliver applications, maybe even build applications in a modern software architecture like microservices, and this is the world that our customers live in. So on that left-hand side, efficiency could easily mean I have applications that are on hardware that isn't moving. They've been there for 20 years. They run my business, the core transaction processing engine of the business. And on the other end, I want to produce a web front end for my customers, consumers that are interacting with me as a business, and I need to be competitive with startups that are doing things really fundamentally differently, born in the cloud and not necessarily owning the same kinds of assets that traditional businesses have. So this is the space where our customers live, and we work a lot to bridge these two worlds together. We can't leave behind that core transaction processing engine that may actually be running the business, but we also want to help our customers transform, move into this modern world, and move more rapidly. The two key things here are the cloud and modern software architectures: cloud native applications and cloud infrastructure, hybrid cloud infrastructure. So for us, we've been talking for quite some time about the hybrid cloud. The hybrid cloud is a concept that allows application portability across a lot of different infrastructures. On the far left there you have bare metal, you have virtualized infrastructure, you've got private clouds and you've got public clouds, and all of these are potential target platforms for the applications that our customers are going to run.
Sitting above that is some application platform that provides runtime consistency across all of those platforms, and that's where you get portability and the ability to not be locked to one particular deployment scenario. And then there are a bunch of other things up and down the stack in terms of management and connectivity with storage and networking and developer tooling, but here we're really focused on that application tier. This is just a quick blowout showing a bunch of applications and moving from hybrid cloud to multi-cloud, so you see that cloud picture has changed to include a bunch of public cloud providers. And again, the focus here is OpenShift, and OpenShift providing that consistency across all those footprints so that you can run your applications independently of whether they're an older application or a more modern application, different runtimes; anything that runs on Linux can run in this environment. Same picture with the hybrid cloud, and as I said, anything that runs on Linux can run in this environment, because containers are fundamentally Linux, or, put a different way, containers are operating system technology. Whenever I say containers are Linux, people say, well, what about Windows containers? Absolutely they exist, and I think the point is that containers are an operating-system-level technology, so the applications that run on those operating systems continue to run in a containerized environment. From an application point of view, there's very little difference between running directly on the operating system and running containerized on the operating system. So let's talk about hybrid apps, and this is a complete hypothetical. So if you're wondering at points in time, does this specific example actually make sense? It's a fair question. It's really meant to illustrate a point.
One way you can deploy your application, and this is a bog-standard application with some application logic, a database, and messaging between components, is completely on-premise. So again, using something like an OpenShift platform, running these containerized components on-premise, whether that's bare metal, virtualized, or a private cloud internally. And this is in the context of hybrid apps. You can take this same application stack and move it off-premise to the public cloud. And again, it's that underlying infrastructure, that platform, that's created consistency. This is sort of the either-or approach to hybrid cloud, meaning you could deploy here or you could deploy there. Our customers are also interested in an "and" scenario: I'd like to deploy here and there. And again, you could argue this is a strange way to deploy your application. That's really not the point. The point is that your application is made up of components, and the components could be deployed on different targets or in different locations. So here you see the core application logic and the database on-premise, and your messaging moved off-premise into the cloud. You may decide that you don't actually want to manage your database. And you may look to a cloud service provider to give you the management behind the database and use it as a service. So you maintain the schema, it's your data, obviously, but you're now consuming somebody else's service. So you can see the same application, messaging, and you've offloaded some of your responsibilities onto a database service. And here you can see it stretching across multiple public clouds. So it's the same on-premise application; you've got your DB in one public cloud and messaging in another. And lather, rinse, repeat with all these different choices of using a service.
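The either-or versus "and" scenarios above can be sketched as a placement map, components to footprints. This is a toy model; the component and footprint names are made up for illustration, not a real OpenShift API.

```python
# Hypothetical footprint names for the sketch.
ON_PREM, CLOUD_A, CLOUD_B = "on-premise", "public-cloud-a", "public-cloud-b"

def place(components, overrides=None, default=ON_PREM):
    """Put every component on the default footprint unless overridden."""
    overrides = overrides or {}
    return {name: overrides.get(name, default) for name in components}

app = ["app-logic", "database", "messaging"]

# Either-or: the whole stack lands in one place.
all_on_prem = place(app)

# "And": core logic stays on-premise, messaging moves to one public cloud,
# and the database is consumed as a service from a second public cloud.
hybrid = place(app, overrides={"messaging": CLOUD_A, "database": CLOUD_B})
```

The platform's job is to make both dictionaries equally runnable; the application components don't change, only the placement does.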
We happen to provide a service, or we're working to provide a service, around Fuse, which is a messaging or integration service that could be used here. So this is looking at OpenShift and a cloud service specifically in the context of messaging between applications. And we did some announcements with Microsoft showing that we're working together to be able to run Windows containers on Windows servers and Linux containers on Linux servers, managed by OpenShift. And here you can see a .NET app and a SQL Server. Actually, SQL Server runs on Linux, and actually runs really well, and .NET is also accessible on Linux. You have choice there. And here you see just an extension of this picture, where we're starting to make a more complicated application, and the application is starting to consume external services. Some of these services are potentially provided by other platforms or by the hosting cloud service provider, and those services are things that we're working to make accessible through the service broker, which I won't talk much about, but you'll hear about it a little bit later today. It's a really important part of integrating into an external environment. So we have this world where we have legacy applications, which potentially show up as APIs or services, and we've got new applications that we're building. We need to create connectivity and bridges between those things, and the service broker is a place that can do that, as well as connect you to, again, cloud services, native cloud services, or software-as-a-service offerings. So my role is really meant to be focused on where we're going: what is our technology strategy, what are the interesting things happening in the industry that we want to make sure, from a Red Hat point of view, we're paying attention to, understanding where we intersect and how we can work with those emerging communities. So I wanted to switch gears here a little bit and talk about that.
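The service broker idea is worth making concrete. Here's a minimal in-memory sketch, loosely modeled on the catalog-then-provision flow of the Open Service Broker API; the service names and method names below are invented for illustration, not the real API surface.

```python
# Hypothetical catalog: services the platform can wire into an application.
CATALOG = {
    "postgres-as-a-service": {"plans": ["small", "large"]},
    "fuse-messaging": {"plans": ["standard"]},
}

class Broker:
    """Toy broker: advertises a catalog and provisions service instances."""

    def __init__(self, catalog):
        self.catalog = catalog
        self.instances = {}

    def get_catalog(self):
        """What developers browse: the services available to consume."""
        return sorted(self.catalog)

    def provision(self, instance_id, service, plan):
        """Create an instance an application can later bind to."""
        if service not in self.catalog:
            raise KeyError(f"unknown service: {service}")
        if plan not in self.catalog[service]["plans"]:
            raise ValueError(f"unknown plan: {plan}")
        self.instances[instance_id] = (service, plan)
        return {"state": "provisioned"}

broker = Broker(CATALOG)
broker.provision("db-1", "postgres-as-a-service", "small")
```

The point of the pattern: the application asks the broker for a database or a message bus and never cares whether the thing behind it is on-premise software or a native cloud service.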
Most of this is going to go pretty quickly, and hopefully you'll get time to hear about a lot of these topics in much more detail later today. You saw the hybrid cloud picture that we laid out. That platform is really a core component of our hybrid cloud story. We're trying to work towards efficiencies for two different types of persona. One is the developer, and the other is the operations team. From a developer point of view, our focus is providing efficiency for the developer so that effectively all lines of code directly translate to business value. You're not spending your time building frameworks and bootstrapping infrastructure. You're really just building application logic, and from a business point of view, business logic. On the management side, we're moving in the direction of more policy-defined management, so that you're not dealing with the details on a box-by-box or cluster-by-cluster basis, but rather dealing with higher-level policies. Certainly you need to be able to dig in and figure out where things are broken and how to fix them, but you should be able to define higher-level goals and policies. So these are the two key concepts: efficiency on the developer side and efficiency on the operations side. The way I think about it is, as you create a well-defined separation of duties, or separation of concerns, you allow each persona to optimize what they're working on. Giving developers autonomy allows them to move more rapidly. You're not sitting there waiting; you enter a trouble ticket and you're waiting for two weeks to get a VM that's blessed by IT. An idle developer is somebody who is, A, getting bored, and B, just not being productive.
So this kind of separation of concerns allows the operations team to pay attention to the platforms, and the developers to write code and pull in the languages, frameworks, whatever they're interested in building their applications from, without a lot of headaches from having to work with the IT side. At the same time, we need to do this efficiently but also responsibly. So I'll talk briefly at the end about security and the DevSecOps, or security lifecycle, side of an application. You don't want a free-for-all. You don't want a situation where you've put things in production and you don't even know what's there. We've seen a lot of compromises recently that really result from that kind of moving faster, running with scissors basically, moving faster than the whole environment is ready to understand and can really maintain security around. One of the things I think is interesting is that a platform is a place where you can create that separation of duties. It's similar to an API: an API allows a certain type of innovation to happen on either side. If you have a well-defined boundary and you understand how you interact across that boundary, you're free to innovate on one side and free to innovate on the other side. It's similar here with creating a platform. A platform serves like an API in this sense. It's a way for operations and developers to communicate. It's a little different from a traditional API, but similar in concept. And this is just all the goodness that comes from using open source and the ability to move things across platforms and underlying infrastructure, and really tap into that community innovation, which to me is the innovation engine that's driving what happens today in the industry. So I call this the perpetual pursuit of excellence. I see the industry working in three key areas, trying to continually improve the user experience, which is typically a consumer user, but not always.
And it's always trying to improve the operational experience and the developer experience. In the middle, on the ops side, it's really the same -ilities: reliability, availability, stability, security, all the standard things that you think of from an IT operations side, and that's where we're trying to push towards that policy-defined infrastructure. Having a platform gives you a target to build policy around. And one of the things I think is interesting and important, especially for Kubernetes, is that as you onboard more and more unique types of workloads onto a platform, that platform becomes more and more valuable. And I don't mean just in any one context; I mean across the industry. So if you have that sort of 80-20 situation where you've found a sweet spot and you are working well for 80% of the industry, you're leaving out 20% of the industry. Maybe it's a bell curve and those are niche corner cases. If you can bring those same niche corner cases onto the same platform, that platform is more useful. I would argue that it's also building towards the future. Architecturally, you tend to have to do things in code, refactoring the code, to make that platform withstand the test of time as you bring on more exotic corner cases. And to me, the example there is, again, Linux: having been around for over 20 years and evolving with the industry and taking on more and more use cases, today, you know, it's in my pocket on my phone, it's in my TV, it's in cars, it's in supercomputers; it's ubiquitous, it's everywhere. It's running all these different workloads across many different kinds of hardware. And in the analogous space, in Kubernetes, we're starting to see more and more workloads come to the platform. And I think that's a really important thing for this community to think about and to work towards enabling together. On the developer side, we've mentioned most of this; it's about speed, agility, those kinds of things.
It's really about reducing the amount of scope that the developer has to manage and offloading that onto the platform: consuming more and more things as services through APIs, and developing small services and exposing them as services and APIs. On the user experience side, I think this is an interesting space because it's, again, mostly thought of in the consumer arena. You're trying to build simple, intuitive interfaces. They're contextual, they understand who you are, and they're even pleasant to use. It's a delightful user experience in a context where you're trying to do business through this application; you're trying to reduce the cognitive load on the user so that they're not stuck trying to figure out their application, and they're really focusing on, say, their buying decisions. I think we need to take those concepts, be aware of them, and consider them in the context of projects like Kubernetes. When you're targeting a different persona, a user of the software who is not a consumer, maybe a developer, ease of use is increasingly important as you see that explosion of innovation in the open source community that we talked about earlier. It means there are a lot more choices that developers are making, and there's a lot more understanding that you have to come up to speed with in order to even use something, unless you're making it really simple and easy to use. And I think that's one of the benefits of containers, and one of the exciting things about container orchestration: you can build these sort of turnkey components and plug them into your application. The delightful, intuitive, personalized pieces from a consumer application point of view today also increasingly mean AI is under the hood. And so while that's about contextualizing and giving recommendations, it's also how we'll use data coming out of distributed systems and understand the current health of the system.
So I think these are really important things to consider, and I think the user experience is sometimes overlooked when you target developers who, like me, are excited about the bits and bytes; the problem is there are too many things and you can't keep track of them all. So, AI. This is where, again, our view is creating this continuity between legacy applications, modern applications, which I would define as a modern software architecture like microservices, and intelligent applications. We want to run those on the same platform, OpenShift. I'll skip over the Insights piece, but we're doing some of this work in OpenShift.io. OpenShift.io can take advantage of machine learning to give recommendations when a developer is writing code and pulling in a dependency in their POM file that is known to be buggy, has a problem with the stack they're creating, or has a security vulnerability, for example. This is just a simple example of how we're using AI internally at Red Hat, similar to the Access Insights one that I skipped over. If you take a community point of view, radanalytics.io is a project that we've kick-started to bring some of these analytics tools to OpenShift; specifically, a lot of the work is done around Spark, and our goal is to show that this platform is good for running modern, interesting, innovative new engines. So you'll hear today about machine learning. We work with a number of different partners on bringing these kinds of machine learning engines to our platform. Similar story with blockchain: very much a partner-centric view of the world for Red Hat. We have a commercial ecosystem called the OpenShift Blockchain Initiative, and we work there to bring blockchain apps to OpenShift. So "bring the new technology to a common platform" is the consistent theme here. Who's in here from the telco industry? A small handful of people. You have your work cut out for you, and this is an exciting space.
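The dependency-recommendation idea mentioned above reduces to a simple shape: check the dependencies a developer declares against a database of known-bad artifacts. A toy sketch follows; the advisory data is invented for illustration, not OpenShift.io's actual database.

```python
# Hypothetical advisory data: (artifact, version) -> recommendation text.
KNOWN_BAD = {
    ("example-lib", "1.2.0"): "known security vulnerability; upgrade to 1.2.1",
}

def analyze(dependencies):
    """Return a recommendation for every declared dependency with a known problem."""
    return {dep: KNOWN_BAD[dep] for dep in dependencies if dep in KNOWN_BAD}

# Dependencies as they might be parsed out of a POM file.
pom_deps = [("example-lib", "1.2.0"), ("other-lib", "2.0.0")]
warnings = analyze(pom_deps)
```

The real value in the OpenShift.io approach is the machine learning that ranks and contextualizes these recommendations against the stack you're building; the lookup above is just the skeleton of the check.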
5G and mobile edge computing are kind of the next-generation telco networks. Today the networks are 4G, and SDN and NFV have been the buzzwords in telco. The next generation is 5G, and with 5G you get low latency, high bandwidth, and dense connectivity right at the edge of the network. With that, alongside NFV, you have the ability to start running more interesting application workloads at that edge; the traditional examples would be autonomous vehicles and augmented and virtual reality. And what's interesting is that many of the companies looking into this see containers as the optimal way to run those workloads. There are really important considerations around performance, jitter, and reliability. You've got messaging and signaling going on, managing phone calls, so you don't want to drop those. So it's a bit of a different environment than a traditional enterprise data center or a cloud application. But these are things that will be really interesting for the Kubernetes community to take on as new challenges. You already see some of this work happening in scheduling, and in being aware of things like SR-IOV vNICs and GPUs, where GPUs are interesting at the edge as well. So I think there's a lot of work happening here that will directly impact the Kubernetes community and OpenShift. Lambda has made a big impact, at least in certain circles in the industry. There's a lot of conversation around serverless and function as a service. To me, it's the extreme of eliminating scope for the developer. An asynchronous, event-driven programming model is ideal; it's an optimal programming model. It's also difficult. So here we're creating a platform that takes care of most everything for you, and you're just writing your business logic as code that's triggered by some event. At the very far end, you have a bare-metal server where you're managing the application plus the operating system and the whole mess.
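The event-driven model above, where all you write is business logic triggered by some event, fits in a few lines. This sketch follows the general shape of an Apache OpenWhisk Python action (a `main` that takes a dict of parameters and returns a dict); the event fields and business rule below are invented for illustration.

```python
def main(params):
    """React to a hypothetical 'order placed' event.

    The platform wires the trigger, scales the function, and tears it down;
    the developer's entire scope is this body.
    """
    order_total = params.get("total", 0)
    # Invented business rule: 10% discount on orders of 100 or more.
    discount = 0.1 if order_total >= 100 else 0.0
    return {
        "order_id": params.get("order_id"),
        "charged": round(order_total * (1 - discount), 2),
    }

# Locally, a direct call stands in for the platform firing the trigger.
result = main({"order_id": "42", "total": 120.0})
```

Everything else on the spectrum, servers, operating system, scaling, event plumbing, is the platform's problem, which is exactly the scope reduction being described.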
So you can see this sort of spectrum, and we're moving up and to the right there. We've been investing in a project called OpenWhisk and bringing that to the OpenShift platform. So this is bringing that event-driven function programming environment to OpenShift. And a topic that I'm sure you will hear about again today, as well as repeatedly throughout KubeCon, is the service mesh. Microservices are an architectural pattern for building modern software applications. As you break apart an application into components and services, the network becomes really fundamental to that application. Most application developers aren't network engineers; they're not spending their time understanding the network. And creating a platform that manages that network connectivity between all the service components, which may be horizontally scalable and load balanced, becomes more and more important for developers to be efficient. So here you see kind of an architectural diagram, with some pieces of Istio as a service mesh in the platform, and then some Envoy sidecars deployed with the runtimes hosting your application, managing that network connectivity through those sidecars between all the different components in your application. A lot of benefits come along with this approach: now you're offloading your responsibility onto the platform, and you're not having to build in a whole different set of client libraries for every different client. And really, if you expect every developer to understand how to build a distributed application with a lot of services, you're probably setting your developers up for some headaches and complications when, you know, simple things like the network fall apart and break down, and retries are inconsistent across the platform. So the Istio project, again, leverages Envoy as a proxy.
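To see what the sidecar takes off the developer's plate, here's the kind of retry-with-backoff logic a proxy like Envoy applies uniformly per request, sketched as application code. This is an illustrative sketch of the pattern, not Envoy's implementation; in a mesh, no application would need to carry it.

```python
import time

def call_with_retries(send, attempts=3, base_delay=0.01):
    """Retry a flaky network call with exponential backoff.

    In a service mesh this policy lives in the sidecar, configured once,
    instead of being reimplemented (inconsistently) in every client library.
    """
    for attempt in range(attempts):
        try:
            return send()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # budget exhausted; surface the failure
            time.sleep(base_delay * (2 ** attempt))

# A stand-in for a flaky downstream service: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

response = call_with_retries(flaky)
```

Multiply this by timeouts, circuit breaking, mutual TLS, and load balancing, and the argument for pushing it into the platform rather than every codebase becomes clear.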
And this is something that we've been working on at Red Hat, together with the OpenShift community and the broader Kubernetes and general communities, to bring to our platform. And I think this is the last thing I will touch on; I mentioned it briefly earlier, DevSecOps. Security is always important: whenever we ask any of our customers where security sits in their priorities, it's always at the top of the list. But there's also this balance of how you move quickly and maintain security. Recently a new project was announced called Grafeas, with an associated project called Kritis, something that we're also involved in, bringing this to the OpenShift platform as one of the pieces of the software supply chain in your infrastructure. So knowing and managing metadata around every artifact, knowing where it is and how it's deployed, and then having some policy around whether it's okay to deploy this thing, and in which context, is a really important part of security. It lets you ensure that you've pulled things from safe artifact repositories and that you're putting things into production that are essentially blessed and have the right metadata associated with them. So I think that's probably all I have time for. I'm sure I went a little over. But thank you for listening. This is a really important community for Red Hat, and I'm stoked to see so many new people. That's a really cool thing. And the message I wanted to leave you with is that OpenShift is a platform. It's building those kinds of swim lanes, that separation of concerns. And as open source is the innovation engine for technology, OpenShift is a platform that's ripe for absorbing that technology and bringing it to developers and operations teams. So thank you. Have a great day.