Ladies and gentlemen, please welcome Red Hat President, Products and Technologies, Paul Cormier. We had a great night last night at our opening, with Jim talking to Satya and Ginni, and especially to our customers. It was so great last night to hear our customers talk about how they set their goals and how they met their goals. All possible, certainly, with a little help from Red Hat, but all possible because of open source. And you know, sometimes we all have to do that: set goals. And I'm going to talk this morning about the goals that we as a company, together with the community, have set along the way. And sometimes you have to do that, you know, set audacious goals; it can really change the perception of what's even possible. And if I look back, I can't think of anything, at least in my lifetime, more important, a bigger goal, than John F. Kennedy setting the goal for the American people to go to the moon. Believe it or not, I was only three years old when he said that, honestly. But as I grew up, I remember the passion around the whole country and the energy to make that goal a reality. So let's compare and contrast a little bit of where we were technically at that time. You know, to get into the space race, let alone win it, there were some really big technical challenges along the way. Believe it or not, back then, mathematical calculations were being shifted from brilliant people, people we trusted and could look in the eye, to a computer that was programmed, with the results mostly printed out. This was a time when the potential of computers was just coming onto the scene, and at the time, the space race revolved around an IBM 7090, one of the first transistor-based computers. It could perform mathematical calculations faster than even the most brilliant mathematicians.
But just like today, this also came with many, many challenges. And while we had the goal, and the beginnings of the technology to accomplish it, we needed people so dedicated to that goal that they would risk everything. And while it may seem commonplace to us today to put our trust in machines, that wasn't the case back in 1969. The seven individuals that made up the Mercury crew were putting their lives in the hands of those first computers. But on Sunday, July 20th, 1969, these things all came together: the goal, the technology, and the team, and a human being walked on the moon. You know, if this was possible 50 years ago, just think about what can be accomplished today, when technology is part of our everyday lives. And with technology advancing at an ever-increasing rate, it's hard to comprehend the potential sitting right at our fingertips every single day. Everything you know about computing is continuing to change today. Let's look back at computing in 1969. The IBM 7090 could process 100,000 floating point operations per second. The Xbox One X sitting in many of your living rooms can process six trillion. That's 60 million times more powerful than the original 7090 that helped put a human being on the moon. And at the same time that computing has drastically changed, so have the boundaries of where that computing sits and lives. At the time of the Apollo launch, the computing power was often a single machine. Then it moved to a single data center, and over time that grew to multiple data centers. With cloud, it extended all the way out to data centers that you didn't even own or control. But computing now reaches far beyond any data center, to what's also referred to as the edge; you hear a lot about that. Apollo's version of the edge was the guidance system: a two-megahertz computer, weighing 70 pounds, embedded in the capsule.
Today the edge is right here on my wrist. This Apple Watch weighs just a couple of ounces, and it's 10,000 times more powerful than that 7090 back in 1969. But even more impactful than the computing advances, combined with their pervasive availability, are the changes in who, and what, controls that computing. Similar to the shift that happened back then from mathematicians to computers, we're now facing the same type of change with regard to operational control of our computing power. In its first forms, operational control was your team, within your control; in some cases, a single person managed everything. But as complexity grew, our teams expanded, just like the computing boundaries did. System integrators and public cloud providers have become an extension of our teams. But at the end of the day, it's still people making all the decisions. Going forward, with the progress of things like AI and software-defined everything, it's quite likely that machines will be managing machines, and in many cases that's already happening today. But while the technology at our fingertips today is so impressive, the pace of change and the complexity of the problems we aspire to solve are equally hard to comprehend. And they are all intertwined with one another, learning from each other, growing together faster and faster. We are tackling problems today on a global scale, with unthinkable complexity, beyond what any one single company or even one single country can solve alone. This is why open source is so important. This is why open source is so needed today in software. This is why open source is so needed today, even out in the world, to solve other types of complex problems. And this is why open source has become the dominant development model driving the direction of technology today: bringing together the best innovation from every corner of the planet to fundamentally change how we solve problems.
This approach, and this access to innovation, is what has enabled open source to tackle big challenges, like building a truly open hybrid cloud. But even today, it's really difficult to bridge the gap between the innovation that open source development puts at all of our fingertips and the production-level capabilities needed to deploy it in the enterprise and solve real-world business problems. Red Hat has been committed to open source from the very beginning, bringing it to bear on enterprise-class problems for the last 17-plus years. But when we built the model to bring open source to the enterprise, we absolutely knew we couldn't do it halfway. To harness the innovation, we had to fully embrace the model. We made a decision very early on: give everything back. And we live by that every single day. We didn't do things like you hear so many out there do: "well, this is open core," or "everything below the line is open and everything above the line is closed." We didn't do that. We gave everything back. Everything we learned in the process of becoming an enterprise-class technology company, we gave all of that back to the community to make better and better software. This is how it works. And we've all seen the results of that, and it could only have been possible with an open source development model. We've been building on the foundation of open source's most successful project, Linux, and on the architecture of the future, hybrid cloud, and bringing them to the enterprise. This is what made Red Hat the company we are today. Along Red Hat's journey, we also had to set goals, and many of them seemed insurmountable at the time. The first of these was making Linux the enterprise standard. And while that is so accepted today, let's take a look at what it took to get there. Our first launch into the enterprise was RHEL 2.1. Yes, I know, 2.1.
But we knew we couldn't release a 1.0 product. We knew that. We didn't want to give any customer any reason to look past RHEL as an option for solving their problems. Back then, we had to fight every single flavor of Unix in every single account. But we were lucky to have a few initial partners, big ISV partners, that supported RHEL out of the gate. And while we had the determination, we knew we also had gaps to close in order to deliver on our priorities. In the early days of RHEL, I remember going to ask one of our engineers for a past RHEL build because we were having a customer issue on an older release. And then I watched in horror as he rifled through a mess of CDs in his desk and magically came up and said, "I found it. Here it is." He told me not to worry, that he thought this was the right build. And at that point I knew that, despite the promise of Linux, we had a lot of work ahead of us: to not only convince the world that Linux was secure, stable, and enterprise-ready, but also to make that a reality. But we did. And today, this is our reality. It's all of our reality. From the enterprise data center standard to the fastest computers on the planet, Red Hat Enterprise Linux has continually risen to the challenge and has become the core foundation that customers run their mission-critical workloads on and bet their businesses on. And even bigger: today, Linux is the foundation upon which practically every single technology initiative is built. Linux is not only the standard to build on today, it's the standard for the innovation that builds around it. That's the innovation driving the future as well. We started our story with RHEL 2.1, and here we are today, 17 years later, announcing RHEL 8, as we did last night. It's specifically designed for applications to run across the open hybrid cloud. RHEL has become the best operating system from on-premise all the way out to the cloud.
It provides that common operating model and the foundation on which to build hybrid applications. Let's take a look at how far we've come and see this in action. Please welcome Red Hat Global Director of Developer Experience, Burr Sutter, with Josh Boyer, Timothy Kramer, Lars Karlitski, and Brent Midwood. All right, we have some amazing things to show you in just a few short moments. We actually have a lot of things to show you. Tim and Brent will be with us momentarily; we're working out a few things in the back, because this is going to be a live demonstration of some incredible capabilities. Now, you're going to see clear innovation inside the operating system, where we worked incredibly hard to make it vastly easier for you to manage many, many machines. I want you thinking about that as we go through this process. Also keep in mind that this is the basis, our core platform, for everything we do here at Red Hat, so it is an honor for me to be able to show it to you live on stage today. I recognize that many of you in the audience right now are working as administrators, systems architects, and systems engineers, and we know that you're under ever-growing pressure to deliver needed infrastructure resources ever faster. That is a key element of what you're thinking about every day. Well, this has been a core theme in our design decisions behind Red Hat Enterprise Linux 8: an intelligent operating system which makes it fundamentally easier for you to manage machines at scale. So I hope what you're about to see next feels like a new superpower for you. Let me introduce you to Lars; he's totally my Linux guru. I wouldn't call myself a guru, but I guess you could say that I want to bring Linux enlightenment to more people. Let's dive into Red Hat Enterprise Linux 8. Sure, let me go ahead and log in here. Wait a second, that's Windows. Yeah, we built a web console into RHEL 8.
That means that for the first time you can log in from any device, including your phone or this standard Windows laptop. So let me just go ahead and log in with standard Linux credentials here. Okay, so now you're putting your Linux password in over the web. Yeah, that might sound a bit scary at first, but of course we're using the latest security tech, like TLS and CSP. And because it's the standard Linux authentication setup, you can use everything that you're used to, like SSH keys, OTP tokens, and things like that. Okay, so now I see the console right here. I love the dashboard overview of the system. But what else can you tell us about this console? Right here you see the load of the system and some of its properties. But you can also dive into logs, everything that you're used to from the command line, right? Or look at services. These are all the services I have running; I can start and stop them, enable and disable them. Okay, I love that feature right there. So what if I have to add a whole new application to this environment? Good that you bring that up. We've built a new feature into RHEL 8 called Application Streams, which is a way for you to install different versions of your app stack. Since Windows doesn't have a proper terminal, I'll just do it in the terminal that we built into the web console. Since it's in the browser, I can even make this a bit bigger. So, for example, to see the application streams that we have for Postgres, I just do a module list, and I see, you know, we have 10 and 9.6; both are supported, and 10 is the default. And if I enable 9.6, the next time I install Postgres it will pull all the packages from 9.6. Okay, so this is very cool. I see two versions of Postgres right here, with 10 as the default. That is fantastic, and Application Streams makes that happen. But I'm really kind of curious, right? I love using Node.js and Java. So what about multiple versions of those? Yeah, that's exactly the idea.
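For readers following along at home, the Application Streams flow Lars describes maps onto a few commands. This is a minimal sketch, assuming a RHEL 8 host with the `postgresql` module streams shown on stage:

```shell
# List the streams available for PostgreSQL; on stage these were 10 (default) and 9.6
yum module list postgresql

# Enable the 9.6 stream so future installs pull the 9.6 packages
sudo yum module enable -y postgresql:9.6

# Install the server from the now-enabled stream
sudo yum install -y postgresql-server
```

On RHEL 8, `yum` is an alias for `dnf`, so the same commands work with either name.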
We want to keep up with the fast-moving ecosystems of programming languages and databases. Okay, but I have another key question, one I know some people are thinking about right now: what about Python? Well, if you just type python like this, it gives you "command not found." Oh, you just have to type it correctly: you can install whichever one you want, python2 or python3, whichever your application needs. Okay, well, I've been burned on that one before. Okay, so now I actually have a confession for all of you right here. Keep this amongst yourselves; don't let Paul know. I'm actually not a Linux systems administrator. I'm an application developer, an application architect, and I recently had to go figure out how to extend a file system. So I had to go ahead and type pvcreate, vgextend, resize2fs, and I have to admit, that's hard. Right, so I've opened the storage page for you right here, where you see an overview of your storage. And the web console is made for people like you as well, not only for people that are new to Linux, right? If you only run some of these commands some of the time, you don't remember them. So for your example, I have a file system here that's a little bit too small. Let me just grow it; it's like, you know, dragging a slider. It calls all the commands in the background for you. Oh, that is incredible. It's that simple: just drag a slider. That is fantastic. Well, I actually have another question for you. It looks like Linux systems administration is no longer a dark art involving arcane commands typed into a black terminal while using one of those funky ergonomic keyboards. You know what I'm talking about, right? You know, a lot of people, including me and people in the audience, like that dark art, right? And this is not taking any of that away. It's an additional tool to bring Linux to more people. Okay. Well, that is absolutely fantastic.
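The two command-line chores mentioned above, picking a Python and growing a file system by hand, look roughly like the following. This is a sketch; the device and volume group names (`/dev/sdb`, `rhel`) are assumptions for illustration, not what was used on stage:

```shell
# RHEL 8 ships no bare "python" command by default; install and call the version you need
sudo yum install -y python3
python3 --version

# The manual file-system grow that the web console's slider wraps:
sudo pvcreate /dev/sdb                        # register the new disk as a physical volume
sudo vgextend rhel /dev/sdb                   # add it to the volume group
sudo lvextend -r -l +100%FREE /dev/rhel/root  # grow the logical volume; -r resizes the fs too
```

The `-r` flag on `lvextend` runs the appropriate file-system resize (e.g. `resize2fs`) automatically, which is essentially what the console does behind the slider.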
Thank you so much for that, Lars. And I really love how installing everything is so much easier, including that PostgreSQL and, of course, the Python that we saw right there. So now I want to change gears for a second, because I actually have another situation that I'm always dealing with. Every time I want to build a new Linux system, I don't want to have to run those commands again and again; it feels like I'm doing it over and over. So Josh, how would I create a golden image, one VM image that I can use to build new systems again and again? Yeah, absolutely, Burr. We get that question all the time. So RHEL 8 includes Image Builder technology. Image Builder is actually all of the hybrid cloud operating system image tooling that we use to build our own images, rolled up into a nice, easy-to-integrate, easy-to-use system. So if I come here in the web console and I go to the Image Builder tab, it brings us to blueprints. Blueprints are what we use to control what goes into our golden image. I heard you and Lars talking about Postgres and Python, and if you go to the selected components, you can see here I've created a blueprint that has all the Python and Postgres packages in it. The interesting thing about this is that it builds on our existing Kickstart technology, but you can use it to deploy to whatever cloud you want, and it's saved so that you don't actually have to know all the various incantations for Amazon, Azure, Google, whatever. It's all baked in, and when you do this, you can actually see the dependencies that get brought in as well. Should we create one live? Yes, please. All right, cool. So if we go back to the blueprints page and we click Create Blueprint, let's make a developer blueprint here. We click Create, and you can see here on the left-hand side I've got all of my content served up by Red Hat Satellite.
We have a lot of great stuff in RHEL 8, but we can go ahead and search, so we'll look for Postgres. And since it's a developer image, we'll add the client for some local testing. We'll come in here and add the Python bits, probably the development package. We need a compiler if we're going to actually build anything, so we'll look for GCC here. And, hey, what's your favorite editor? Emacs, of course. Emacs, all right. Hey, Lars, how about you? I'm more of a vi person. What? Emacs and vi. All right, well, if you want to prevent a holy war on your system, you can actually use Satellite to filter one of them out, but we're going to go ahead and add them both so we don't have a fight on stage. So we just point and click; we'll add the graphical one. And then when we're all done, we just commit our changes, and our image is ready to build. Okay, so from the blueprint we just created, I can now go out there and easily deploy this image across multiple cloud providers, as well as on this on-stage hardware we have right now? Yeah, absolutely. We can deploy it on Amazon, Azure, Google, any infrastructure you're looking for; you can actually build your hybrid cloud operating system images. Okay, all right, let's see it. So we just go and click Create Image. We can select our different image types here. I'm going to go ahead and create a local VM, because it's a developer image and maybe they want to pass it around or whatever. And I just need a few moments for it to build. Okay, so while that's taking a few moments, I know there's another key question in the minds of the audience right now. You're probably thinking: I love what I see in Red Hat Enterprise Linux 8, but what does it take to upgrade from 7 to 8? So, Lars, can you walk us through an upgrade? Sure. This is my little Summit blog that I set up. It's powered by WordPress and a MySQL server, but it's still running on RHEL 7.6, so let's upgrade that.
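Under the point-and-click UI, an Image Builder blueprint is a small TOML document. Here is a hedged sketch of what a "developer" blueprint along these lines might look like from the command line; the exact package names are assumptions, not a transcript of the on-stage blueprint:

```shell
# Write a blueprint roughly matching what was built on stage
cat > developer.toml <<'EOF'
name = "developer"
description = "Developer image: Postgres client, Python, GCC, and both editors"
version = "0.0.1"

[[packages]]
name = "postgresql"

[[packages]]
name = "python3"

[[packages]]
name = "gcc"

[[packages]]
name = "emacs"

[[packages]]
name = "vim-enhanced"
EOF

# Push the blueprint and start a local VM (qcow2) build
composer-cli blueprints push developer.toml
composer-cli compose start developer qcow2
```

Swapping `qcow2` for another image type targets a different deployment: the same blueprint can produce images for AWS, Azure, or Google without knowing each cloud's incantations.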
Let's jump over to my host view in Satellite, and you see all my RHEL machines here, including the one I showed you the web console on before. And there's the one with my Summit blog, and there are a couple of others. Let me select those as well, this one and that one. And I just go up here, schedule a remote job, choose RHEL Upgrade, and hit Submit. I set it up so that it makes a snapshot beforehand, so if anything goes wrong, we can just roll back. Okay, okay. Now it's progressing here? It's progressing; looks like it's running. Doing a live upgrade on stage? It seems like one is failing. What's going on here? Okay, let me check the pre-upgrade check. Oh, yeah, that's the one I was playing around with Btrfs on backstage. Oh, wow. It detected that and, you know, it doesn't run the upgrade, because we don't support upgrading that. Okay, so what I'm hearing is that the good news is we're protected from a possible failed upgrade there. It sounds like these upgrades are perfectly safe. I can basically, you know, schedule this during the maintenance window and still get some sleep. Totally, that's the idea. Okay, fantastic. All right, so it looks like upgrades are easy and safe, and I really love what you showed us there. It's a point-and-click operation right from Satellite. Okay, so while we were over here checking out upgrades, I want to know, Josh, how are those VMs coming along? They went really well. You were away for so long that I got a little bored and took some liberties. What do you mean? Well, the image built, and, you know, I decided to go ahead and deploy it here to this Intel machine on stage. So I have that up and running in the web console. I built another one for the ARM box, which was actually pretty fast, and that's up and running on this ARM machine. And that went so well that I decided to spin up some instances in Amazon. So I've got a few instances here running in Amazon, with the web console accessible there as well.
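On a single box, the same upgrade that Satellite drives here can be run by hand. This is a sketch assuming the leapp tooling Red Hat ships for in-place 7-to-8 upgrades:

```shell
# Dry run: flags blockers, such as the unsupported Btrfs volume seen on stage
sudo leapp preupgrade
less /var/log/leapp/leapp-report.txt

# If the report is clean, run the actual upgrade and reboot into RHEL 8
sudo leapp upgrade
sudo reboot
```

The pre-upgrade report is what stopped the Btrfs machine in the demo: leapp refuses to run the upgrade when it finds something it can't safely migrate.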
And we have even more of our pre-built images up and running in Azure, with the web console there as well. So the really cool thing about this, Burr, is that all of these images were built with Image Builder in a single location, controlling all the content that you want in your golden images, deployed across the hybrid cloud. Wow, that is fantastic. And you might think that's all, but we actually have more to show you. So thank you so much for that, Lars and Josh. That is fantastic. It looks like provisioning Red Hat Enterprise Linux 8 systems is easier than ever before. But we have more to talk to you about, and there's one thing that many of the operations professionals here know: provisioning a VM is easy, but it's really day two, day three, down the road, that those VMs require day-to-day maintenance. As a matter of fact, several of you folks in this audience have to manage hundreds if not thousands of virtual machines. I just recently spoke to a gentleman who has to manage 1,300 servers. So how do you manage those machines at that scale? Great, it looks like Tim and Brent have now joined us and have worked things out. So now I'm curious, Tim, how will we manage hundreds if not thousands of computers? Oh, one human managing hundreds or even thousands of VMs is no problem, because we have Ansible automation. And by leveraging Ansible's integration with Satellite, not only can we spin up those VMs really quickly, like Josh was just doing, but we can also make ongoing maintenance of them really simple. I'm going to show you here a Satellite inventory, and as Red Hat publishes patches, we can, with that Ansible integration, easily apply those patches across our entire fleet of machines. Okay, that is fantastic. So all the machines can get updated in one fell swoop. They sure can. And there's one thing that I want to bring your attention to today, because it's brand new.
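The "one fell swoop" patching Tim describes can be sketched as a single Ansible run; the inventory file name here is an assumption for illustration:

```shell
# Apply the latest security errata to every host in the inventory in one pass
ansible all -i inventory.ini --become \
  -m yum -a "name='*' state=latest security=yes"
```

The `security=yes` option of Ansible's yum module restricts the update to security-relevant errata; drop it to pull all available updates. In practice, Satellite's remote execution plugin wraps exactly this kind of run behind the point-and-click job Tim schedules.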
And that's cloud.redhat.com. Here at cloud.redhat.com, you can view and manage your entire Red Hat Enterprise Linux inventory no matter where it sits: on-prem, on-stage, private cloud, or public cloud. It's true hybrid cloud management. Okay. Well, one thing I know is in the minds of the audience right now, and if you have to manage a large number of servers, this comes up again and again: what happens when you have critical vulnerabilities? The next zero-day CVE could be tomorrow. Exactly. I've actually been waiting patiently for a while for you to get to the really good stuff. Okay. So there's one more thing that I wanted to let folks know about Red Hat Enterprise Linux 8 and some features that we have there. Oh, yeah? What is that? So actually, one of the key design principles of RHEL 8 was working with our customers over the last 20 years to integrate all the knowledge that we've gained and turn that into insights that we can use to keep our Red Hat Enterprise Linux servers running securely and efficiently. And so what we actually have here are a few things that we can take a look at to show folks what that is. Okay. So we basically have this new feature that we're going to show people right now. And one thing I want to make sure of: it's absolutely included within the Red Hat Enterprise Linux 8 subscription? Yeah, so that's the announcement we're making this week: this is a brand new feature that's integrated with Red Hat Enterprise Linux, and it's available to everybody that has a Red Hat Enterprise Linux subscription. And I believe everyone in this room right now has a RHEL subscription, so it's available to all of them. Absolutely, absolutely. So let's take a quick look. Okay. Everybody can try this out. So what we actually have here is a list of about 600 rules; they're configuration, security, and performance rules. And this list is growing every single day.
Customers can actually opt in to the rules that are most applicable to their enterprises. So what we're doing here is combining the experience and knowledge that we have with the data that our customers opt in to sending us. Customers have opted in, and they're sending us more data every single night than they sent in total over the last 20 years via any other mechanism. Now, I see there are some critical findings. That's what I was talking about when it comes to CVEs and things of that nature. Yeah, I'm betting those are probably some of the RHEL 7 boxes that we haven't actually upgraded quite yet, so we can get back to that. What I'd really like to show everybody here, because everybody has access to this, is how easy it is to opt in and enable this feature for RHEL. Okay. So let's do that real quick. I'm going to hop back over to Satellite here; this is the Satellite we saw before. I'll grab one of the hosts, and we can use the new web console feature that's part of RHEL 8. Via single sign-on, I can jump right from Satellite over to the web console, so it's really, really easy. I'll grab a terminal here. And registering with Insights is as easy as one command. What's happening right now is that the box is gathering some data and sending it up to the cloud, and within just a minute or two, we're going to have some results that we can look at back in the web interface. I love it. So it's just a single command and you're ready to register this box right now. That is super easy. Wow, that's fantastic. Geez, Brent. Well, we started this whole series of demonstrations by telling the audience that Red Hat Enterprise Linux 8 was the easiest, most economical, and smartest operating system on the planet, period. And while I think it's cute how you can go ahead and opt in on a single machine, I'm going to show you one more thing. This is Ansible Tower.
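The "one command" registration shown on stage is the insights-client; a minimal sketch, assuming a RHEL host with a subscription already attached:

```shell
# Opt the host in: registers it, collects data, and uploads the first data set
sudo insights-client --register

# Subsequent collections run on a schedule; trigger one manually if you like
sudo insights-client
```

Results show up against the host's entry on cloud.redhat.com within a minute or two, as in the demo.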
You can use Ansible Tower to manage and govern your Ansible Playbook usage across your entire organization. And with this, what I can do is, on every single VM that was spun up here today, opt in and register with Insights with a single click of a button. Okay, I want to see that right now; I know everyone's waiting for it as well. But hey, are your VMs ready, Josh, Lars? Yeah, my blog is running on RHEL 8 now. Yeah, Insights is a really cool feature of RHEL, and I've got it in all my images already. All right, I'm doing it. All right. And as this playbook runs across the inventory, I can see the machines registering on cloud.redhat.com, ready to be managed. Okay, so all those on-stage VMs as well as the hybrid cloud VMs should be popping in there. I see the PostgreSQL ones as well. Fantastic. That's awesome. Thanks, Tim. Nothing better than a Red Hat Summit speaker going off script in the first live demo. No big deal. Let's go back and take a look at some of those critical issues affecting a few of our systems here. So you can see this is a particular dnsmasq issue; it's going to affect a couple of machines. We saw that in the overview. And I can actually go in and get some more details about what this particular issue is. If you take a look at the right side of the screen there, there's a critical likelihood and impact associated with this particular issue, and what that translates to is that there's a high level of risk to our organization from this particular issue, but also a low risk of change. And what that means is that it's really, really safe for us to go ahead and use Ansible to remediate it. So I'll grab the machines; I'll select those two, and we'll remediate with Ansible. I can create a new playbook. We'd normally name it for our maintenance window, but we'll name it something along the lines of "stuff Tim broke," because we can name it whatever we want. So we'll create that playbook and take a look at it.
And it's actually going to give us some details about the machines: what type of reboots, if any, are going to be needed, and what we need here. So we'll go ahead and execute the playbook, and what you're going to see is the output happening in real time. This is happening from the cloud; we're affecting machines no matter where they are. They could be on-prem, in a public cloud, or in a private cloud, and these things are going to be remediated very, very easily with Ansible. So it's really, really awesome. Everybody here with a Red Hat Enterprise Linux subscription has access to this now. So I kind of want everybody to go try this; we really need to get this thing going, so try it out right now. But don't send them out of the room just yet; you guys stay here for a while. Okay, Mr. Excitability. I think after this keynote, come back to the Red Hat booth, where there's an IT optimization section. You can come talk to our Insights engineers, and even though it's really easy to get going on your own, they can help you out and answer any questions you might have. So this is really the start of a new era, with an intelligent operating system imbued with the intelligence you just saw right now in Insights. That's obviously fantastic. So we are enabling systems administrators to manage more Red Hat Enterprise Linux machines at greater scale than ever before. I know there's a lot more we could show you, but we're totally out of time at this point, and we kind of, you know, went a little bit sideways here at moments. But we need to get off this stage. There's one thing I want you to think about, though. Do come check out the booth, like Tim just said, and in our dev zone you can also get hands-on with Red Hat Enterprise Linux 8. But really, I want you to think about this: one human and a multitude of servers.
And if you remember the one thing I asked you up front: do you feel like you've gained a new superpower, and that Red Hat is your force multiplier? Woo-hoo! All right. Well, thank you so much, Josh and Lars, Tim and Brent. Thank you. And let's get Paul back on the stage. All right. That went brilliantly. That was, as always, amazing. I mean, as you can tell from last night, we're really, really proud of RHEL 8 coming out here at the Summit, and what a great way to showcase it. Thanks so much to you, Burr. Thanks, Brent, Tim, Lars, and Josh. Just thanks again. So you've just seen this team demonstrate how impactful RHEL can be in your data center, and hopefully many of you, if not all of you, have experienced that as well. But what about supercomputers? We hear about those all the time. As I told you a few minutes ago, Linux isn't just the foundation for enterprise and cloud computing; it's also the foundation for the fastest supercomputers in the world. And our next guest is here to tell us a lot more about that. Please welcome Lawrence Livermore National Laboratory HPC solution architect, Robin Goldstone. Thank you so much, Robin. Welcome to the Summit, welcome to Boston, and thank you so much for joining us. Can you tell us a bit about the goals of Lawrence Livermore National Lab and how high-performance computing really works at this level? Sure. Lawrence Livermore National Lab was established during the Cold War to address urgent national security needs by advancing the state of nuclear weapons science and technology. High-performance computing has always been one of our core capabilities. In fact, our very first supercomputer, a UNIVAC 1, was ordered by Edward Teller before our lab even opened back in 1952. Our mission has evolved since then to cover a broad range of national security challenges, but first and foremost, our job is to ensure the safety, security, and reliability of the nation's nuclear weapons stockpile. Since the U.S.
no longer performs nuclear testing, our ability to certify the stockpile depends heavily on science-based methods. We rely on HPC to simulate the behavior of complex weapons systems to ensure that they can function as expected well beyond their intended lifespan. That's really great. So, are you still running on that UNIVAC? No, actually, we've moved on since then. Sierra is Lawrence Livermore's latest and greatest supercomputer. It's currently the second fastest supercomputer in the world, and for the geeks in the audience, I think there's a few of them out there, we've put up some of the specs of Sierra on the screen behind me. A couple of things worth highlighting are Sierra's peak performance and its power utilization. So 125 petaflops of performance is equivalent to about 20,000 of those Xbox One Xs from earlier, and the 11.6 megawatts of power required to operate Sierra is enough to power around 11,000 homes. Sierra is a very large and complex system, but underneath it all, it starts out as a collection of servers running Linux, and more specifically, RHEL. So, did Lawrence Livermore National Lab use RHEL before Sierra? Oh, yeah, most definitely. We've been running RHEL for a very long time on what I'll call our mid-range HPC systems. These clusters built from commodity components are sort of the bread and butter of our computer center, and running RHEL on these systems provides us with continuity of operations and a common user environment across multiple generations of hardware, and also between Lawrence Livermore and our sister labs, Los Alamos and Sandia. We've also always had one world-class supercomputer like Sierra. Historically, these systems have been built from exotic proprietary hardware, running entirely closed-source operating systems. Anytime something broke, which was often, the vendor would be on the hook to fix it.
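The comparisons quoted here and earlier in the keynote check out as a quick back-of-envelope calculation (the figures below are the ones stated in the talk, rounded):

```python
# Back-of-envelope check of the performance figures quoted in the talk.
sierra_flops = 125e15      # Sierra peak: 125 petaflops
xbox_flops = 6e12          # Xbox One X: ~6 teraflops
ibm_7090_flops = 100_000   # IBM 7090: ~100,000 flops

xboxes_per_sierra = sierra_flops / xbox_flops
xbox_vs_7090 = xbox_flops / ibm_7090_flops

print(round(xboxes_per_sierra))   # roughly 20,000 Xbox One Xs per Sierra
print(round(xbox_vs_7090))        # the Xbox is ~60 million times a 7090
```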
And that sounds like a good model, except that what we found over time is that most of the issues we had on these systems were due either to the extreme scale or to the complexity of our workloads. Vendors seldom had a system anywhere near the size of ours, and we couldn't give them our classified codes, so their ability to reproduce our problems was pretty limited. In some cases they'd even send an engineer on site to try to reproduce our problems, but even then, sometimes we wouldn't get a fix for months, or else they would just tell us they weren't going to fix the problem because we were the only ones having it. So for many of us, that challenge is one of the driving reasons for open source even existing. How did Sierra change things around open source for you? Sure, so when we developed our technical requirements for Sierra, we had an explicit requirement that we wanted to run an open-source operating system, and a strong preference for RHEL. At the time, IBM was working with Red Hat to add support to RHEL for their new little-endian Power architecture, so it was really just natural for them to bid a RHEL-based system for Sierra. Running RHEL on Sierra allows us to leverage the model that's worked so well for us for all this time on our commodity clusters. Any packages that we build for x86, we can now build for Power as well as the ARM architecture, using our internal build infrastructure. And while we have a formal support relationship with IBM, we can also tap our in-house kernel developers to help debug complex problems. Our sysadmins can now work on any of our systems, including Sierra, without having to pull out their cheat sheet of obscure proprietary commands. Our users get a consistent software environment across all our systems. And if a security vulnerability comes out, we don't have to chase around getting fixes from multiple OS vendors.
You've been able to extend your foundation all the way from x86 to exascale supercomputing. We talk about this all the time: giving customers a standard operational foundation to build upon. This is exactly what we envisioned. So what's next for you? Right. So what's next? Sierra is just now going into production, but even so, we're already working on the contract for our next supercomputer, called El Capitan. It's scheduled to be delivered to Lawrence Livermore in the 2022-2023 timeframe. El Capitan is expected to deliver about 10 times the performance of Sierra. I can't share any more details about that system right now, but we are hoping to continue to build on the solid foundation that RHEL has provided us for well over a decade. Well, thank you so much for your support of RHEL over the years, Robin, and thank you so much for coming and telling us about it today. We can't wait to hear more about El Capitan. Thank you very much. So now you know why we're so proud of RHEL, and why you saw confetti cannons and t-shirt cannons last night. So, you know, as Burr and the team talked about in the demo, RHEL is the force multiplier for servers. We've made Linux one of the most powerful platforms in the history of platforms. But just as Linux became a viable platform with access for everyone, and RHEL became more viable every day in the enterprise, open source projects began to flourish around the operating system. And we needed to bring those projects to our enterprise customers in the form of products, with the same trust models as we did with RHEL. Seeing the incredible progress of software development occurring around Linux led us to the next goal that we set for ourselves. That goal was to make hybrid cloud the default enterprise architecture. How many of you out here in the audience are RHCSAs or RHCEs? How many out there? A lot. A lot.
You are the people that are building the next generation of computing: the hybrid cloud. You know, again, just like our goals around Linux, this goal might seem a little daunting in the beginning, but as a community, we've proved it time and time again: we are unstoppable. Let's talk a bit about what got us to the point we're at right now, and the work that, as always, we still have in front of us. We've been on a decade-long mission on this, believe it or not. This mission was to build the capabilities needed around the Linux operating system to really make the hybrid cloud possible. When we saw RHEL first taking hold in the enterprise, we knew that was just the first step, because for a platform to really succeed, you need applications running on it. And to get those applications on your platform, you have to enable developers with the tools and runtimes for them to build upon. Over the years, we've closed a few, if not a lot, of those gaps, starting with the acquisition of JBoss many years ago, all the way to the new Kubernetes-native CodeReady Workspaces we launched just a few months back. We realized very early on that building a developer-friendly platform was critical to the success of Linux and open source in the enterprise. Shortly after this, the public cloud stormed onto the scene. While our first focus as a company was on premise, in customer data centers, the public cloud was really beginning to take hold. RHEL very quickly became the standard across public clouds, just as it was in the enterprise, giving customers that common operating platform to build their applications upon, ensuring that those applications could move between locations without ever having to change their code or operating model. With this new model of the data center spread across so many environments, management had to be completely rethought and re-architected.
And given the fact that environments spanned multiple locations, management, real solid management, became even more important. Customers deploying in hybrid architectures had to understand where their applications were running and how they were running, regardless of which infrastructure provider they were running on. Over the years we invested in management right alongside the platform, from Satellite in the early days to CloudForms, Insights, and now Ansible. We've focused on having management to support the platform wherever it lives. Next came data, which is very tightly linked to applications. Enterprise-class applications tend to create tons of data. And to have a common operating platform for your applications, you need a storage solution that's just as flexible as that platform, able to run on-premise just as in the cloud, even across multiple clouds. This led us to acquisitions like Gluster, Ceph, Permabit, and NooBaa, complementing our platform with Red Hat Storage. For us, even though this sounds very condensed, this was a decade's worth of investment, all in preparation for building the hybrid cloud: expanding the portfolio to cover the areas that a customer would depend on to deploy real hybrid cloud architectures, finding and amplifying the right open source projects and technologies, or filling the gaps with some of these acquisitions when those weren't available. By 2014, our foundation had expanded, but one big challenge remained: workload portability. Virtual machine formats were fragmented across the various deployments, and higher-level frameworks such as Java EE still depended very much on a significant amount of operating system configuration. And then containers happened. Containers, despite existing as a technology for a very long time, exploded onto the scene in 2014. Kubernetes followed shortly after in 2015, allowing containers to span multiple locations.
And in one fell swoop, containers became the killer technology to really enable the hybrid cloud. And here we are. Hybrid is really the only practical reality and way forward for customers. And at Red Hat, we've been investing in all aspects of this over the last eight-plus years to make our customers and partners successful in this model. We've worked with you, both our customers and our partners, building critical RHEL and OpenShift deployments. We've been constantly learning about what has caused problems and what has worked well in many cases. And while we've amassed a pretty big amount of expertise to solve most any challenge in any area of the world, it takes more than just our own learnings to build the next-generation platform. Today, we're also introducing OpenShift 4, which is the culmination of those learnings. This is the next generation of the application platform. This is truly a platform that has been built with our customers, not simply for them; something that could only be possible in an open source development model. And just like RHEL is the force multiplier for servers, OpenShift is the force multiplier for data centers across the hybrid cloud, allowing customers to run thousands of containers and operate them at scale. And we also announced Azure Red Hat OpenShift last night; Satya talked about that in depth on this stage. This is all about extending our goals of a common operating platform enabling applications across the hybrid cloud, regardless of whether you run it yourself or just consume it as a service. And with this flagship release, we are also introducing operators, which is the central feature here. We began this work last year with the Operator Framework, and today we're not just going to show you OpenShift 4, we're going to show you operators running at scale.
Operators that will do updates and patches for you, letting you focus more of your time on running your business rather than your infrastructure. We want to make all of this easier and intuitive. So let's have a quick look at us doing just that. Hey team. I know all of you have heard we're talking to potential new customers about the travel app. So the new plan is to open it up as a service and launch by this summer. But I know this is a big request for a not very big team. I'm open to any and all ideas. Please welcome back to the stage Red Hat Global Director of Developer Experience Burr Sutter with Jessica Forester and Daniel McPherson. All right, we're ready to do some more now. Earlier we showed you Red Hat Enterprise Linux 8 running on lots of different hardware, like this hardware you see right now, and also running across multiple cloud providers. But now we're going to move to another world: Linux containers. This is where you see OpenShift 4 and how you can manage large clusters of applications running Linux containers across the hybrid cloud. This is where software operators fundamentally empower human operators, and especially make ops and dev work more efficiently and effectively together than ever before. Right, so we have two folks on this stage right now; they represent ops and dev, and we're going to see how they build an application together. So let me introduce you to Dan. Dan is totally representing all our ops folks in the audience here today, and he's totally my ops person. Let's go and call him Mr. Ops. With OpenShift 4, we've had a much easier time setting up and maintaining our clusters, in large part because OpenShift 4 has extended management of the clusters down to the infrastructure. The difference becomes apparent when you take a look at the OpenShift console: you can now see the machines that make up the cluster, where a machine represents the infrastructure underneath the Kubernetes node.
OpenShift 4 now handles provisioning and deprovisioning of those machines. From there you can dig into an OpenShift node, see how it's configured, and monitor how it's behaving. So I'm curious, though: does this work on bare metal infrastructure as well as virtualized infrastructure? Yeah, that's right, Burr. Pods run on nodes, nodes run on machines, and OpenShift 4 can now manage it all. Something else we've found extremely useful about OpenShift 4 is that it now has the ability to update itself. We can see this cluster has an update available, and at the press of a button, operators are responsible for updating the entire platform; that includes the nodes, the control plane, and even the operating system, Red Hat Enterprise Linux CoreOS. All of this is possible because the infrastructure components and their configuration are now controlled by a technology called operators. These software operators are responsible for aligning the cluster to a desired state, and all of this makes operational management of an OpenShift cluster much simpler than ever before. All right, I love the fact that all of that is in the one console now; you can see the full stack, all the way down to the bare metal, right there in that one console. Fantastic. I want to switch gears for a moment, though, and talk dev. Jessica here represents all our developers in the room. As a matter of fact, she manages a large team of developers here at Red Hat, but more importantly she represents our vice president of development and has a large team that she has to worry about on a regular basis. So Jessica, what can you show us? My team has hundreds of developers, and we're constantly under pressure to deliver value to our business, and frankly we can't really wait for Dan and his ops team to provision the infrastructure and the services that we need to do our job.
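The "aligning the cluster to a desired state" idea at the heart of operators can be sketched as a tiny reconcile function. This is a toy model for illustration only, not the real Operator SDK or Kubernetes controller API: it just diffs a desired map of components against the actual one and emits the actions needed.

```python
# Minimal sketch of the reconcile idea behind software operators:
# repeatedly compare desired state with actual state, and act only
# on the difference. Component names/versions here are made up.

def reconcile(desired, actual):
    """Return the actions needed to drive `actual` toward `desired`.
    Both arguments are {component_name: version} maps."""
    actions = []
    for name, version in desired.items():
        if name not in actual:
            actions.append(("create", name, version))
        elif actual[name] != version:
            actions.append(("update", name, version))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, None))
    return sorted(actions)

desired = {"control-plane": "4.0", "node-os": "rhcos-4.0"}
actual = {"control-plane": "3.11", "leftover-daemon": "1.0"}
print(reconcile(desired, actual))
```

A real operator runs this loop continuously, which is why pressing the upgrade button is enough: changing the desired state is the whole interface, and the operator works out the rest.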
So we're really looking at the future of OpenShift as our platform to run our applications on, but until recently we've really struggled to find a reliable source of Kubernetes technologies that have the operational characteristics that Dan's actually going to let us install to the cluster. But now, with OperatorHub.io, we're really seeing the ISV ecosystem be unlocked, and the technologies there are things that my team needs: databases and message queues, tracing and monitoring. These operators are actually available for complex applications, like Prometheus here, and they're written in a variety of languages, and that is awesome. So I do see a number of options there already, and Prometheus is a great example, but how do you know that one of these operators really is mature enough and robust enough for Dan and the ops side of the house? Here we have the operator maturity model, and this is going to tell me and my team whether this particular operator is going to do a basic install, whether it's going to upgrade that application through different versions, or whether it goes all the way out to full auto-pilot, where it's automatically scaling and tuning that application based on the current environment. Okay, that's very cool. So coming over to the OpenShift console now, we can actually see Dan has made the SQL Server operator available to me and my team, and that's the database that we're using. Yeah, SQL Server, that's a great example: so we've got SQL Server running here in the OpenShift cluster. But this is a great example for a developer: what if I want to create a new SQL Server instance?
Sure, so it's as easy as provisioning any other service from the developer catalog. We come in and I can type for SQL Server, and what this is actually creating is a native resource called SQLServer, and you can think of that like a promise that a SQL Server will get created. The operator is going to see that resource, install the application, and then manage it over its lifecycle. And from this installed-operators view, I can see the operators running in my project and which resources they're managing. Okay, but I'm kind of missing something here. I see this custom resource here, the SQLServer, but where are the Kubernetes resources, like pods? Yeah, I think it's cool that we get this native resource now called SQLServer, but if I need to, I can still come in and see the native Kubernetes resources, like your StatefulSet and service here. Okay, that is fantastic. Now, we did say earlier on, though, like many of our customers in the audience right now, you have a large team of engineers, a large team of developers you've got to handle; you've got to have more than one SQL Server, right? We do, one for every team as we're developing, and we use a lot of other technologies running on OpenShift as well, including Tomcat and our Jenkins pipelines and our Node.js app that is going to actually talk to that SQL Server database. Okay, so at this point we can kind of provision some of these? Yeah, so since all of this is self-service for me and my teams, I'm actually going to go ahead and create one of all those things I just said on all of our projects right now, if you just give me a minute. Okay, all right, so basically you're going to knock out Node.js, Jenkins, SQL Server, all right now; that's like hundreds of bits of application-level infrastructure, live right now. So Dan, are you not terrified?
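The "promise" framing above can be made concrete with a toy sketch: a custom resource is just declarative data, and the operator expands it into the native Kubernetes resources (StatefulSet, Service) that fulfill it. The field names and the `SqlServer` schema below are hypothetical, for illustration only, not the real SQL Server operator's API.

```python
# Toy model of custom-resource fulfillment: the CR is plain data; an
# operator watching it would create and manage the concrete resources.
# Schema and names here are made up for illustration.

def fulfill(custom_resource):
    """Expand a SqlServer custom resource into the native resources
    an operator would create for it."""
    name = custom_resource["metadata"]["name"]
    replicas = custom_resource["spec"].get("replicas", 1)
    return [
        {"kind": "StatefulSet", "name": f"{name}-set", "replicas": replicas},
        {"kind": "Service", "name": f"{name}-svc"},
    ]

cr = {"apiVersion": "example.com/v1", "kind": "SqlServer",
      "metadata": {"name": "teamdb"}, "spec": {"replicas": 3}}
for resource in fulfill(cr):
    print(resource["kind"], resource["name"])
```

This is why Burr can still find the StatefulSet and Service underneath the custom resource: the CR is the promise, and those native resources are its fulfillment.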
I guess I should have done a little bit better job of managing Jessica's quota. Historically, Jessica and I might have had some conflict here, because creating all those new applications would have meant my team now had a massive backlog of tickets to work on. But now, because of software operators, my human operators are able to run our infrastructure at scale. So since I'm logged into the cluster here as the cluster admin, I get this view of pods across all projects, so I can get an idea of what's happening across the entire cluster. I can see now we have 494 pods already running, and there's a few more still starting up, and if I scroll through the list we can see the different workloads Jessica just mentioned: Tomcats and Node.js's and Jenkins's, and SQL Servers down here too. Yeah, I see containers creating, and you have close to 500 pods running there. Yeah, and I'll filter this list down by SQL Server so we can see just that. Okay, but aren't we going to run out of cluster capacity at some point? Actually, yeah, we definitely have a limited capacity in this cluster, but luckily we already set up autoscalers. Because the additional workload was launching, we can see those autoscalers have now kicked in, and some new machines are being created that don't yet have nodes assigned, because they're still starting up. There's another good view of this as well: you can see machine sets. We have one machine set per availability zone, and you can see that each one is now scaling from 10 to 12 machines. The way the autoscalers work is, for each availability zone, if capacity is needed they will add additional machines to that availability zone, and then later, if that capacity is no longer needed, they'll automatically take those machines away. That is incredible. So right now we're autoscaling across multiple availability zones based on load.
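The per-availability-zone scaling behavior described here can be sketched as a toy calculation. This is not the real cluster-autoscaler algorithm; the zone names, the pods-per-machine ratio, and the growth-only model are assumptions for illustration (scale-down works symmetrically in the real thing).

```python
# Toy per-availability-zone autoscaler in the spirit of the demo:
# each zone's machine set grows just enough to absorb its pending pods.

def scale(machine_sets, pending_pods, pods_per_machine=10):
    """machine_sets: {zone: current machine count};
    pending_pods: {zone: unschedulable pod count}.
    Returns the new machine count per zone."""
    new_sizes = {}
    for zone, machines in machine_sets.items():
        pending = pending_pods.get(zone, 0)
        extra = -(-pending // pods_per_machine)  # ceiling division
        new_sizes[zone] = machines + extra
    return new_sizes

# Three zones at 10 machines each; two zones have pending workload,
# so each of those grows from 10 to 12, matching the demo.
sizes = scale({"zone-a": 10, "zone-b": 10, "zone-c": 10},
              {"zone-a": 20, "zone-b": 15})
print(sizes)
```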
Okay, so it looks like capacity planning and automation are fully handled at this point. But I do have another question for you: you're logged in as a cluster admin right now in the console; can you show us your view of software operators? Yeah, actually there's a couple of unique views here for operators for cluster admins. The first of those is Operator Hub. This is where a cluster admin gets the ability to curate what operators are available to users of the cluster, and so obviously we already have the SQL Server operator installed, which we've been using. The other unique view is operator management. This gives a cluster admin the ability to maintain the operators they've already installed, and if we dig into the SQL Server operator, we'll see we have it set up for manual approval. What that means is, if a new update comes in for SQL Server, then a cluster admin has the ability to approve or disapprove that update before it installs into the cluster. Actually, there is an upgrade available. I should probably wait to install it, though; we're in the middle of scaling out this cluster, and I really don't want to disturb Jessica's application workflow. Yeah, so actually, Dan, it's fine; my app is already up and running. Let me show it to you over here. This is our products application; it's talking to that SQL Server instance, and for debugging purposes we can see which version of SQL Server we're currently talking to, it's 2.2 right now, and also which pod, since this is a cluster and there's more than one SQL Server pod we could be connected to. Okay, I can see right there at the bottom of the screen there's 2.2; that's the version we have right now. But this is kind of the point of software operators, and everyone in this room wants to see you hit that upgrade button. Let's do it live here on stage, right, Dan?
All right, all right, I can see where this is going. So whenever you update an operator, it's just like any other resource on Kubernetes, and the first thing that happens is the operator pod itself gets updated. We can actually see a new version of the operator being created now, and once it's created, the old version will be terminated. At that point the new software operator will notice it's now responsible for managing lots of existing SQL Servers already in the environment, and so it's going to update each of those SQL Servers to match the new version of the SQL Server operator. We can see it's running, and if we switch now to the all-projects view and filter that list down by SQL Server, we should be able to see, yeah, lots of these SQL Servers are now being created and the old ones are being terminated. So it's a rolling update across the cluster. The SQL Server operator deploys SQL Server in an HA configuration, and it only updates a single instance of SQL Server at a time, which means SQL Server is always left in an HA configuration, and Jessica doesn't really have to worry about downtime with her applications. Yeah, that's awesome, Dan; I'm so glad the team doesn't have to worry about that anymore. And Jessica, I think enough of these might have run by now; if you try your app again... Hold on, it might be updated at this point. Yep, let's see. Jessica's application up here, all right, on laptop 3. There we go, fantastic. And look, we were on 2.2 before; we're on 2.3 now. We're on 2.3, excellent. And you know, that actually worked so well, I don't even see a reason for us to leave this on manual approval, so I'm going to switch this to automatic approval. Then in the future, if a new SQL Server update comes in, we don't have to do anything; it'll be automatically updated on the cluster. That is absolutely fantastic. I'm so glad you guys got a chance to see that rolling update across the cluster. That is so cool: the SQL Server database being
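The one-instance-at-a-time update Dan describes is the essence of a rolling update, and it can be sketched in a few lines. This is a toy model, not the operator's actual logic: it just shows that at every intermediate step, some replicas are still serving the old-or-new version, so the HA set never goes fully down.

```python
# Toy sketch of a rolling update: update one HA replica at a time so
# the service never loses all its serving replicas at once.

def rolling_update(replicas, new_version):
    """Yield the fleet's versions after each single-replica update."""
    replicas = list(replicas)
    for i in range(len(replicas)):
        replicas[i] = new_version
        yield list(replicas)

steps = list(rolling_update(["2.2", "2.2", "2.2"], "2.3"))
for step in steps:
    print(step)
```

After the first step the fleet is `["2.3", "2.2", "2.2"]`: two replicas are still up on the old version while one upgrades, which is exactly why Jessica's app never sees downtime.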
automated and fully updated. That is fantastic. All right, so I can see how a software operator does enable you to manage hundreds if not thousands of applications, but I know a lot of folks are interested in the back-end infrastructure. Could you give us an example of the infrastructure behind this console? Yeah, absolutely, Burr. So we all know that OpenShift is designed to run in lots of different environments, but our teams think that Azure Red Hat OpenShift provides one of the best experiences, by deeply integrating the OpenShift resources into the Azure console. It's even integrated into the Azure command-line tool, in the az openshift command. And as was announced yesterday, it's now available for everyone to try out. And Burr, there's actually one more thing we wanted to show everyone related to OpenShift 4: we now have multi-cluster management. This gives you the ability to keep track of all your OpenShift environments, regardless of where they're running, and you can also create new clusters from here. I'll dig into the Azure cluster that we were just taking a look at. Okay, but is this user interface something I have to install in one of my existing clusters? No, actually, this is a hosted service provided by Red Hat as part of cloud.redhat.com, and all you have to do is log in with your Red Hat credentials to get access. That is incredible. So one console, one user experience to see across the entire hybrid cloud. We saw it earlier with Red Hat Enterprise Linux 8, and now we see it for multi-cluster management here with OpenShift. So you can see now that software operators do fundamentally change the game when it comes to making human operators vastly more productive, and, more importantly, making dev and ops work more efficiently together than ever before. We saw the rich ISV ecosystem of those software operators, and you can manage them across the hybrid cloud with any OpenShift instance. And more importantly, I want
to thank Dan and Jessica for helping us with this demonstration. Okay, fantastic stuff, guys. Thank you so much. Let's get Paul back out here once again. Thanks so much to Burr and the team, Jessica and Dan. So you've just seen how OpenShift operators can help you manage hundreds, even thousands of applications: install, upgrade, remove nodes, and control everything about your application environment, virtual, physical, all the way out to the cloud, making things happen when the business demands it, even at scale, because that's where it's going. Our next guest has lots of experience with demand at scale, and they're using open source container management to do it. They're hard at work building a successful cloud-first platform, and they are the 2019 Innovation Award winner. Please welcome 2019 Innovation Award winner Kohl's senior vice president of technology, Rich Hodeck. Hi Paul, how are you doing? Thanks so much for coming out; we really appreciate it. So I guess you guys set some big goals too, so can you maybe tell us about the bold goal that you personally helped set for Kohl's, and what inspired you to take that on? Yeah, so it was 2017, and life was pretty good. I had no gray hair, our business was doing well, and our tech was working well. But we knew we'd have to do better into the future if we wanted to compete: retail's being disrupted, and our customers are asking for new experiences. So we set out on a goal to become an open hybrid cloud platform, and we chose Red Hat to partner with us on a lot of that. We set off on a three-year journey; we're currently in year two, and so far all KPIs are on track, so it's been a great journey thus far. That's awesome. So obviously you think open source is the way to do cloud computing, and we absolutely agree with you on that point. So what is it that's convinced you to go more open source along the way? So I think first and foremost, while we have a lot of traditional ISVs, we found that the open source partners are actually outpacing them with innovation, so
I think that's where it starts for us. Secondly, we think there's maybe some financial upside to going more open source; we think we can maybe take some cost out and unwind from these big ULAs. And thirdly, as we went to universities, we started hearing as we interviewed, hey, what is Kohl's doing with open source? And we wanted to use that as a lever to help recruit talent. So I'm kind of excited: we partner with Red Hat on OpenShift and RHEL and Gluster and ActiveMQ and Ansible and lots of things, but we've also now launched our first open source project, so it's really great to see this journey we've been on. That's awesome, Rich. So you're in a high-touch beta with OpenShift 4; what features, components, and capabilities are you most excited about and looking forward to with the launch, and what are maybe some new goals that you might be able to accomplish with the new features? I will tell you, we're off to a great start with OpenShift. We've been on the platform for over a year now, we won an Innovation Award, and we have this great team of engineers out here that have done some outstanding work, but certainly there's room to continue to mature that platform at Kohl's, and we're excited about OpenShift 4. I think there are probably three things that we're really looking forward to. One is a better upgrade process, and I think we saw some of that in the last demo; upgrades have been kind of painful up until now, so we think that'll help us. Number two, a lot of the workloads we run in OpenShift today are the stateless apps, and we're really looking forward to moving more of our stateful apps onto the platform. And thirdly, we've done a great job of automating a lot of the day-one stuff, the provisioning of things; there's great opportunity to do more automation for day-two things, to integrate more with our messaging systems and our database systems and so forth. So we're excited to get on board with version 4. We are too.
You know, I hope we can help you get to those next goals, and we're going to continue to do that. Thank you so much, Rich. All the way from RHEL to OpenShift, it's really exciting for us, frankly, to see our products helping you solve real-world problems, which is really why we do this, and meeting both of our goals. So thank you very much, and thanks for your support. This has all been amazing so far, and we're not done. A critical part of being successful in the hybrid cloud is being successful in your data center, with your own infrastructure. We've been helping our customers do that in these environments for almost 20 years now; we've been running the most complex workloads in the world. But while the public cloud has opened up tremendous possibilities, it also brings in another layer of infrastructure complexity. So what's our next goal? Extend your data center all the way to the edge, while being as effective as you have been over the last 20 years, when it was all at your own fingertips. First, from a practical sense, enterprises are going to have their own data centers and their own environments for a very long time. But there are advantages to being able to manage your own infrastructure that extend beyond the public cloud, all the way out to the edge. In fact, we talked very early on about how technology advances in compute, networking, and storage are changing the physical boundaries of the data center every single day. The need to process data at the source is becoming more and more critical, and new use cases are coming up every day: self-driving cars need to make decisions on the fly, in the car; factory processes using AI need to adapt in real time. The factory floor has become the new edge of the data center, working with things like video analysis of a car's paint job as it comes off the line, where the data is only needed for seconds in order to make critical decisions in real time. If we had to wait for the video to go up to the cloud and back, it would
be too late the damage would have already been done the enterprise is being stretched to be able to process on site whether it's in a car, a factory a store or an ATM usually involving massive amounts of data this can't easily be moved just like these use cases couldn't be solved in private cloud alone because of things like latency and data movement to address real time requirements they also can't be solved in public cloud alone this is why open hybrid is really the model that's needed in the only model forward so how do you address this class of workload that requires all of the above running at the edge with the latest technology all at scale let me give you a bit of a preview of what we're working on we are taking our open hybrid cloud technologies to the edge integrated with our OEM hardware partners this is a preview of a solution that will contain Red Hat OpenShift Seth Storage and KVM virtualization with Red Hat Enterprise Linux at the core all running on pre-configured hardware the first hardware out of the out of the gate will be with our long time OEM partner Dell Technologies so let's bring back Burr and the team to see what's right around the corner please welcome back to the stage Red Hat Global Director of Developer Experience Burr Sutter with Garima Sharma we just saw how OpenShift 4 and operators have redefined the capabilities and usability of the open hybrid cloud and now we're going to show you a few more things so just be ready for that but I know many of our customers in this audience right now as well as the customers who aren't even here today you're running tens of thousands of applications on OpenShift clusters we know that is occurring right now but we also know that you're not actually in the business of running Kubernetes clusters you're in the business of oil and gas you're in the business of retail you're in the business of transportation you're in the business at all we also know though you have low latency requirements like Paul 
was just talking about, and you also have data-gravity concerns where you need to keep that data on your premises. So what you're about to see right now in this demonstration is where we've taken OpenShift 4 and made a bare-metal cluster right here on this stage. This is a fully automated platform. There is no underlying hypervisor below this platform; it's OpenShift running on bare metal, and this is your Kubernetes-native infrastructure. Working with me right now is Garima Sharma; she's one of our engineering leaders responsible for our infrastructure technologies. Please welcome to the stage Garima.

Thank you, my pleasure to be here at the Red Hat Summit. So let's start at cloud.redhat.com, and here we can see the cluster that Dan and Jessica were working on just a few moments ago. From here we have a bird's-eye view of all of our OpenShift clusters across the hybrid cloud, from multiple cloud providers to on-premises. And notice this bare-metal cluster; well, that's the one that my team built right here on this stage. So let's go ahead and open the admin console for that cluster. In this demo we'll take a look at three things: first, a multi-cluster inventory for the open hybrid cloud at cloud.redhat.com; second, OpenShift Container Storage providing converged storage for virtual machines and containers, and the same functionality for cloud, converged, and bare metal; and third, everything we see here is Kubernetes-native, so by plugging directly into Kubernetes orchestration we gain common storage, networking, and monitoring facilities. Now, last year we saw how container-native virtualization and KubeVirt allow you to run virtual machines on Kubernetes and OpenShift, allowing for a single converged platform to manage both containers and virtual machines. Here I have this .NET project. Now, from last year we had a Windows virtual machine running an ASP.NET application, and we had started to modernize and containerize it by moving parts of the application from the Windows VM to Linux containers.
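For readers curious what that converged model looks like in practice, here is a minimal sketch of a KubeVirt VirtualMachine object of the kind container-native virtualization manages. The VM name, sizing, and claim name are illustrative assumptions, not the manifest from the demo:

```yaml
# Illustrative KubeVirt VirtualMachine definition (not the demo's actual manifest).
# The VM runs as a pod-backed workload on the same cluster as the Linux containers.
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: windows-aspnet-vm          # hypothetical name
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/domain: windows-aspnet-vm
    spec:
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 4Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: windows-vm-disk   # assumed PVC, e.g. backed by OpenShift Container Storage
```

Because the VM is just another Kubernetes workload, its RDP endpoint can be exposed through an ordinary Kubernetes Service, which is what makes the Windows VM "just another service" in OpenShift, as the demo goes on to show.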
So let's take a look at it. Here I have it again. Oh, Lars showed you Windows earlier, and I was playing this game backstage, so I was just playing a little solitaire. Sorry about that; I don't have much time for that right now. But as I was saying, over here I have Visual Studio. Now the Windows virtual machine is just another container in OpenShift, and the RDP service for the virtual machine is just another service in OpenShift. OpenShift running both containers and virtual machines together opens a whole new world of possibilities. But why stop there? So this year we broadened this Kubernetes-native infrastructure as our vision to redefine the operations of on-premises infrastructure, and this applies to all manner of workloads using OpenShift on metal, running all the way from the data center to the edge. Now why, you ask? Two main benefits: one, to help reduce the operational costs, and second, to help bring advanced Kubernetes orchestration concepts to your infrastructure.

So next, let's take a look at storage. OpenShift Container Storage is software-defined storage providing the same functionality for both the public and the private clouds. By leveraging the Operator framework, OpenShift Container Storage automatically detects the available hardware configuration to utilize the disks in the most optimal way, so when adding a node you don't have to think about how to balance the storage. Storage is just another service running in OpenShift. And I really love this dashboard, quite honestly, because I love seeing all the storage right here. I'm kind of curious, though, Garima: what kind of applications would you use with this storage? So this is the persistent storage to be used by your databases, your files, and any data from applications such as Apache Kafka. Now the Apache Kafka Operator uses Kubernetes for scheduling and high availability, and container storage to store the messages. Now here our on-premises system is running a Kafka workload streaming sensor data, and we want to store it and act on it
locally, right in a place where maybe we need low latency, or maybe in a data-lake-like situation. So we don't want to send this data to the cloud; instead we want to act on it locally, right? Let's look at the Grafana dashboard and see how our system is doing. So with an incoming message rate of about 400 messages per second, the system seems to be performing well, right? I want to emphasize this is a fully integrated system. We are doing the testing and optimizations so that the system can auto-tune itself based on the applications. Okay, I love the automated operations. Now I am curious, because I know other folks in the audience want to know this too: can you tell us more about how truly integrated this is? So again, you know, I want to emphasize that everything here is managed fully by Kubernetes and OpenShift, so you can really use the latest coolness to manage them all.

Next, let's take a look at how easy it is to use Knative with Azure Functions to script a live reaction to a live-migration event. Knative is a great example; in fact, if you were part of my breakout session yesterday, you saw me demonstrate Knative, and you can come to our Guru Night at 5 p.m. and actually get hands-on with Knative. So I really enjoy using Knative myself as a software developer, but I am curious about the Azure Functions component. Yeah, so Azure Functions is a function-as-a-service engine developed by Microsoft, fully open source, and it runs on top of Kubernetes, so it works really well with our on-premises OpenShift here. Now I have a simple Azure function here, and this Azure function, you know, let's see if it will send out a tweet every time we live-migrate a Windows virtual machine, right? So I have it integrated with OpenShift, and let's move a node to maintenance to see what happens. So basically, as that VM moves, we are going to see the event and trigger the function. Yeah, an important point I want to make again here: Windows virtual machines are equal citizens inside of OpenShift. We are
investing heavily in automation through the use of the Operator framework and also providing integration with the hardware, right? So next, now let's move that node to maintenance. Let's be real clear here; I want to make sure we understand one thing, and that is there is no underlying virtualization software here. This is OpenShift running on bare metal, with these bare-metal hosts. That is absolutely right; the system can automatically discover the bare-metal hosts. All right, so here let's move this node to maintenance. So I start the maintenance now. What will happen at this point is storage will heal itself, and Kubernetes will bring back the same level of service for the Kafka application by launching a pod on another node, and the virtual machine will live-migrate, right? And this will create Kubernetes events, so we can see the events in the event stream. Changes have started to happen, and as a result of this migration the Knative function will send out a tweet to confirm that Kubernetes-native infrastructure has indeed done the migration for the live VM, right? See the events rolling through right there. Yeah, all right, and if we go to Twitter... all right, we got tweets, fantastic. And here we can see the source node report that the migration has succeeded. So, pretty cool stuff right here.

So we want to bring you a cloud-like experience. What this means is we are making operational ease of use our top goal. We are investing heavily in encapsulating management knowledge and working to pre-certify hardware configurations, working with our partners such as Dell and their Dell Ready Node program, so that we can provide you guidance on specific benchmarks for specific workloads and an auto-tuning system. All right, well, I know right now you're probably thinking, I want to jump on this stage and check out this bare-metal cluster. You should wait till after the keynote and then come out and check it out. But I also want you to go out there and think about visiting our partner Dell and their booth,
where they have one of these clusters as well. Okay, so this is where VMs and networking and containers and storage all come together in a Kubernetes-native infrastructure, which you've seen right here on this stage. But I know, Garima, you have a little bit more. Yes, so this is literally the cloud coming down from the heavens to us, right here, right now. So to close the loop, you can have your cluster connected to cloud.redhat.com for our Insights and site reliability engineering services, so that we can proactively provide you with guidance through automated analysis of telemetry and logs, and help flag a problem even before you notice you have it, be it software, hardware, performance, or security. And one more thing: I want to congratulate the engineers behind this cool technology. Absolutely, there are a lot of engineers here who worked on this cluster and this stack. Absolutely, thank you. Really awesome stuff. And again, do go check out our partner Dell; they're just out that door, I can see them from here. They have one of these clusters; get a chance to talk to them about how to run your OpenShift 4 on a bare-metal cluster as well. All right, Garima, thank you so much. That was totally awesome. We're out of time, and we've got to turn this back over to Paul. Thank you.

Thanks again, Burr and Garima, awesome. You know, even with all the exciting capabilities that you're seeing, I want to take a moment to go back to the first platform tenet that we learned with RHEL: the platform has to be developer-friendly. Our next guest knows something about connecting a technology like OpenShift to their developers as part of their company-wide transformation and their ability to shift the business, which helped them become the Innovation Award winner this year. So please, let's welcome Ed to the stage. Please welcome 2019 Innovation Award winner, BP Vice President of Digital Transformation, Ed Alford. Thanks, Ed. How are you?
So let's get right into it. What were you trying to accomplish at BP, and how was the goal really important and mandatory within your organization? So, Paul and everyone else, we're a global energy business with operations in over 70 countries, and we've embraced what we call the dual challenge, which is the increasing demand for energy that we have as individuals in the world, but we need to produce that energy with fewer emissions. As part of that, one of the strategic priorities that we have is to modernize the whole group, and that means simplifying our processes and enhancing productivity through digital solutions. So we're using cloud-based technologies, and more importantly open source technologies, to create a community inside the whole group that collaborates effectively and efficiently, and uses our data and expertise to embrace the dual challenge and actually try and help solve that problem. That's great. So how do these new ways of working benefit your team, and really the entire organization as a whole?
We've been given the Innovation Award for our digital conveyor, both in the way it was created and also in what it's delivering. There's a couple of guys in the audience, Paul Costel and Bruno Rothgeser; their teams developed the conveyor using agile and DevOps, and sometimes we talk about this stuff a lot, but actually they did it in a truly agile and DevOps way. So we're trying to experiment with different ways of working and highlighting the skill sets that we as a group require in order to transform. Using these approaches, we can now move things from ideation to scale in weeks, and sometimes days, rather than months. And I think that if we can take what they've done and use more open source technology, we can take that technology as a whole group to tackle this dual challenge. And I think that we as technologists, and it's really cool, I think that we can now use technology, and open source technology, to solve some of these big challenges that we have and actually preserve the planet in a better way. So what's the next step for you at BP?
So moving forward, we are embracing becoming a cloud-first organization, building out the technology across the entire group to address the dual challenge, and continuing to make some of these bold changes; actually getting into it and really using our technology, as I said, to address the dual challenge and make the future of our planet a better place for ourselves and our children and our children's children. That's a big goal. Thank you so much, Ed. Thanks for your support, and thanks for coming today.

And now, frankly, I think, the best part of this presentation: we're going to meet the type of person that makes all of these things a reality. This type of person typically works for one of our customers, or with one of our customers as a partner, to help them make the kinds of bold goals like you've heard about today and the ones you'll hear more about during the week.

I think the thing I like most about IT is you feel that reward of just helping people, and helping people with stuff you enjoy, with computers. My dad was the math and science teacher at the local high school, and so in the early '80s he was kind of the default IT person, so he was always bringing in computer stuff, and I started at a pretty young age. What Jason's been able to do here is really evangelize a lot of the technologies to different teams. I think a lot of that comes from the training and the certifications that he's got. He's always concerned about their experience: how easy it is for them to get applications written, how easy it is for them to get them up and running. At the end of the day, we are a loan company; that's why we lean on a company like Red Hat, that's where we get our support from, and that's why we decided to go with a product like OpenShift. I really liked the product itself. So my daughter's teacher, they were doing a day of coding, and they asked me if I wanted to come and talk about what I do and then spend the day helping the kids with their coding class. The people that we have on our teams, like Jason, are what
make us better than our competitors. Anybody can buy something off the shelf. It's people like him who are able to take that and mold it into something that then is what we're offering for our partners and for our customers.

Please welcome Red Hat's Certified Professional of the Year, Jason Hyatt. Jason, congratulations. Congratulations, what a big day. What a really big day. It's great to see the work that you've done here. It's really great, and it shows in your video. It's really special for you as well to see how skills can open doors for young women like your daughter, who already loves technology. So I'd like to present this to you right now. Congratulations. Congratulations. Good, and I know you're going to bring this passion, I know you bring this, into everything you do. So congratulations again. Thanks, Paul. It's been really exciting. I was really excited to bring my family here to share the experience. It's great to see them all here as well. Maybe you guys could stand up. So before we leave the stage, I just wanted to ask: what's the most important skill that you'll pass on from all your training to the future generations? So I think the most important thing is you have to be a continuous learner. You can't be comfortable with what you already know. You have to be a continuous learner. And of course, you've got to use the I. I didn't even have to ask you the question. Of course. That's awesome, and thank you for everything that you're doing. So thanks again.

What makes open source work is passion: people that apply those considerable talents to that passion, making it work, to contribute their ideas, and believe me, it's really an impressive group of people. Your family, and especially Berkeley in the video. I hope you know that the Certified Professional of the Year is the best of the best, the cream of the crop, and your dad is the best of the best of that, so you should be very, very happy for that. I also can't wait to come back here on this stage 10 years from now and
present that same award to you, Berkeley. Great, you should be proud.

Everything you've heard about today is just a small representation of what's ahead of us. We've set and realized some bold goals over the last number of years that have gotten us to where we are today. Just to recap those bold goals: first, build a company based solely on open source software; it seems so logical now, but it had never been done before. Next, build the operating system of the future that's going to run and power the enterprise, making the standard base platform in the enterprise a Linux-based operating system. And after that, make hybrid cloud the architecture of the future; make hybrid the new data center. All leading to the largest software acquisition in history. Think about it: a company with 100% open source DNA throughout, despite all the FUD we encountered over those last 17 years. I have to ask: is there really any question that open source has won? Realizing our bold goals and changing the way software is developed in the commercial world was what we set out to do from the first day that Red Hat was born. But we only got to that goal because of you: many of you contributors, many of you new to open source software and willing to take the risk alongside of us, and many of you partners on that journey, both inside and outside Red Hat. Going forward, with the reach of IBM, Red Hat will accelerate even more. This will bring open source innovation to the next-generation hybrid data center, continuing on our original mission and goal to bring open source technology to every corner of the planet.

What I just went through in the last hour or so is mind-boggling to many of us in the room who have had a front-row seat to this over the last 17-plus years, and it has only been Red Hat's first step. Think about it: we have brought open source development from a niche player to the dominant development model in software and beyond. Open source is now the cornerstone of the multi-billion-dollar enterprise software world, and even the next-generation hybrid architecture would not be possible without Linux at the core and the open innovation that it feeds to build around it. This is not just a step forward for software; it's a huge leap in the technology world, beyond even what the original pioneers of open source could ever have imagined. We have witnessed open source accomplish in the last 17 years more than what most people will see in their career, or maybe even a lifetime. Open source has forever changed the boundaries of what will be possible in technology in the future. And the one last thing to say: it's everybody in this room and beyond, everyone outside. Continue the mission. Thanks. Have a great summit. Great to see you.