 And changing the way software is developed in the commercial world was what we set out to do from the first day that Red Hat was born. Innovation today requires a fundamental change in how we do work. It's a huge leap in the technology world, beyond even what the original pioneers of open source ever could have imagined. We're trying to go through a culture change at my organization, so I'm hoping to take some of those ideas that were expressed in the conference back with me. I'm really excited about networking opportunities, seeing people from industries different than mine. I took as many pictures as I could in each breakout. I'm trying to focus a lot on Ansible. There are things coming with the latest updates, so we can apply the newest features to our software solutions. I learned how to build an RPM from source, which is really neat because that's something I've just never tried doing myself, and it'll be really handy in my job. API security, I thought that was a brilliant session. And then with Service Mesh, or Istio, on the back end, seeing what they had actually kind of created or cooked up was really cool to see. We had a session with Red Hat Satellite, and afterwards my team leader and I walked up to the workshop presenter and we spoke to him. That's a one-on-one thing that we don't get when we're at work. The general session today was amazing. Today, in fact, is the general availability of the Azure OpenShift service, so it's fantastic to see that out. We're starting the container journey, and we're scared of how it's going to go down the line, and we hear stories like they go from zero to running 20,000 containers in less than a year. That's going to inspire us to transform. RHEL 8 is being released right now and I'm here, and I really want to learn all about it. I was excited to see Stephanie and Denise come on stage. They did a great job. 
Multicluster OpenShift management is something that I'm really looking forward to. It's opening a new horizon and new ideas. You have built a wonderful company. Maybe even more important, a wonderful culture. A culture about open. Because I'm a firm believer in open source. As more mature companies transform, we'll have to figure out how to marry the old and the new. Technology doesn't drive itself; people drive technology. Going forward, Red Hat will accelerate even more. This will bring open source innovation to the next generation hybrid data center. And we would not be here today without the support of a myriad of partners who've invested in open source and have been instrumental to getting us where we are. Ladies and gentlemen, please welcome to the stage Red Hat Senior Vice President, Customer Experience and Engagement, Marco Bill-Peter. Good morning, good morning. What a great day. Day three of Red Hat Summit. What have you learned the last few days? You heard it from the videos. Amazing. A lot of new things to learn. Now, yesterday, Paul Cormier talked about bold ideas, shooting for the moon and how to execute. But having ideas is one thing. Having ideas and innovating while running a smooth operation, a stable operation, that's a whole different thing. Now, today is all about innovation and the future and how it's shaping up. It's happening today in startups. You see startups change the development process. They've changed how much they release, how many times they release a day or an hour. It's happening. But you know what's more exciting for me? What I've learned the last two days, and what I've learned the last few years visiting clients: it's happening in global enterprises as well. It's not just happening in startups. And that's a really proud and really great achievement of you all, to innovate while running a smooth operation. 
And when we talk about innovation, especially at the Red Hat Summit, there are, of course, the Red Hat Innovation Award winners. We heard about them. This year's Innovation Award winners were BP, Deutsche Bank, Emirates NBD, HCA Healthcare, and Kohl's. We heard from Deutsche Bank already, from Kohl's and from BP. And later today, from Chris Wright, we'll hear about HCA Healthcare and Emirates NBD. But before that, we're going to talk about innovation and how Red Hat supports you through that life cycle, giving you stability, security, guidance. I'd also say accountability and influence. Let me talk more about that. Having ideas, having bold ideas and making them a reality, you know that needs a really, really trusting partnership with your partners and with your suppliers. We understand that. We've understood that for the last 20 years because we were always an ecosystem around your IT needs. This is where my organization, Customer Experience and Engagement, comes in. Red Hat products are not just software. They are a subscription. In the subscription, you get software, but you also get a life cycle. You heard about RHEL 8. We have a 10-year life cycle, a 13-year life cycle, to make sure you can operate safely in those times. It's also security, product security. I'll talk a little bit more about that. But I also mentioned accountability. What does that mean? Accountability means to have the expert knowledge from the technical support, from the trusted engineers who built the software, that can provide you real answers and changes. You know, a lot of people say supporting open source, that's easy. Well, it's open source, but trust me, it's not. Because if you've got to analyze software starting from the middleware layer, into OpenShift, into the Linux, into further down, you've got to understand all the levels. You've got to be able to diagnose it. But then you find the issue. What do you have to do? You've got to produce a patch. It's not just patching that system. 
You've got to patch it so it aligns with upstream, because we don't fork. But then you also get it aligned with our software so we can stand behind it for the 10 years of the life cycle. That is real accountability. If we don't support it, it's not because we want to be difficult. If we don't want to support it, or we can't support it, it's because we can't stand behind it. We don't have the partnerships or the certifications, or we don't know the code. Now let's talk about security quickly. Patches for security vulnerabilities are great, but not enough. Let me give you a quick story. A company I know, they spent 18 months on a 15% performance improvement. They have 10,000 servers, so 15% is meaningful. The next week, a security vulnerability comes along. It could wipe out 20% of your performance. Your 18 months of effort is gone. That's where the product security team comes in, to really give you guidance: yes, here you've got a patch; here you've got to adjust these firmware settings; here you've got to adjust that. That is the real value. That's how we build the customer experience at Red Hat to be meaningful and valuable for you. I mention it every year: customer experience at Red Hat is at the core of what we do. We don't squeeze or outsource support to save cost. We understand this is really, really important for you to succeed. That's how successful customers engage with us, and we hope to do that. Keep in mind, having a smooth operation is the foundation to enable innovation. Now, speaking of innovation, I want to talk about previous winners, not about the winners from this year, because I want to show that previous winners keep innovating, and I want to talk with two clients. They have really good reputations to protect. At the same time, they've got to innovate around their bold ideas and stay relevant for their clients. The company Hilti got recognized in 2009 for standardizing their large mission-critical SAP landscape on Red Hat Enterprise Linux. 
Let me welcome Christoph to the stage. Please welcome Hilti's head of IT infrastructure, Dr. Christoph Baeck. Christoph, great, great to have you here. What a pleasure to be on stage. Christoph, tell us first a little bit about what Hilti does and what your department does. Absolutely. Hilti is a leading supplier in the construction industry. We serve, out of the Principality of Liechtenstein, about 120 countries with our services, solutions, products, and software for the professional construction industry. We are a family-owned company with about 29,000 employees all over the globe, and do about 5.6 billion in revenue a year. IT is a super important topic for us. Just to bring two facts to you: we are a direct sales company, so that means we have about 250,000 touchpoints with customers every day, and we record all of them, including about 75,000 orders every day that we take from our customers. A second fact: our customers are very demanding. Think about a professional job site out there. Customers do not expect to wait for the tools and products that they are ordering, so we are able to serve our customers in almost 99% of the cases on time. That means same-day or 24-hour deliveries. That is massive. That's all running on your SAP landscape. Absolutely. Back in 2009, Hilti won the award for moving their SAP landscape to Red Hat Enterprise Linux. You can see from the quote here, we wanted to bring up the quote here, we have a very long 20-years-or-more relationship with SAP, and we have many customers that do that. Let me ask you, Christoph, how are you pushing the envelope today? What's the latest? 2009 was a milestone for us. We moved from legacy hardware, from proprietary operating systems, to an open operating system, together with you at Red Hat and together with Intel-based technology, not only for our SAP landscape, but for all our servers and databases. Now, in 2018, we did another bold step. We placed a big bet. 
Basically, SAP is pushing us to move to S/4HANA, and that meant that we decided, yes, we're going to go that way, and again, we placed that bet with our partner, Red Hat, and set up our S/4HANA system to run and serve our business. That is massive. You know, I was really impressed, having worked with you in those years, that you jumped straight to S/4 on the application side and to HANA on the database. What made you confident you could take that step? I mean, we were the first movers in that direction from a size and complexity point of view for SAP, so no company of our size had done that before. We're talking about a 12-terabyte-RAM system that we need to handle. Together with the partnership with Red Hat, that worked out, and we could only do that because we trust in Red Hat, we trust in our partners. And finally, it was the personal relationships as well, and the trust that we have with all our partners, in particular Red Hat, that made us confident to pull that through. Yeah, and I remember, just two days after the go-live, we were all excited about the go-live. I knew it was successful, and we were sitting in a plane together by coincidence, and I was really happy to tell Christoph that we had elevated our SAP competence center in Walldorf. It was a great coincidence. And that showed us that we bet on the right partners and on the right systems, because you continue to advance up to today. That's great. Christoph, thank you very much. Great to have you on stage. Thank you. Thank you. You know, Jim Whitehurst always gives a history lesson every year. I'll give a little geography lesson, having a customer from Liechtenstein here. Now, let me welcome another Innovation Award winner on stage. In 2018, UPS built an innovative solution on Red Hat technologies, like OpenShift Container Platform, Fuse and Red Hat Enterprise Linux. They recently migrated their largest mainframe application to OpenShift. 
In December last year, they exceeded one billion transactions per day on several occasions. Pretty amazing. Let me welcome Ken from UPS to the stage to tell his story. Please welcome UPS President of Information Technology, Ken Finnerty. Good morning, everyone. It's a pleasure to be here today and highlight how Red Hat is playing a key role in UPS's digital transformation. You know, I think all of us, regardless of the industry or geography where we're working, share one common thing, and that is that we're facing increasing competition in our marketplaces. We might also, though, find increasing opportunities as well. And I think you'll all agree with me that reacting to both of these conditions requires technology, specifically making clever and innovative use of technology. But futurists tell us that windows of strategic advantage won't last as long as they once did. And to us, that means that we can't afford to be good only once in a while. We need to create technology organizations that can continually deliver value. And that requires a modular design for your software, along with modern tools and platforms, and all of those things need to coexist in a healthy and productive technology ecosystem. You know, to UPS, being digital means having the ability to act on the important events that occur within a supply chain, and to do that through software. For us, it means affecting the moments that matter to our customers, and doing it without heroics. You know, there's an old saying that a good idea is worth a dollar, but a plan to implement it is worth a million. Well, at UPS, we have a plan. We're building a global smart logistics network, one that is fully digitized, operational in over 200 countries and territories around the world, and delivering 21 million shipments every day. We're already using technology to provide mass personalization to customers. We're simplifying shipping, delivery, and returns. 
We're removing friction from the e-commerce marketplaces, and we're even helping our customers improve their sales through increased digital traffic. Let me tell you how. UPS My Choice is our flagship digital engagement product. We have over 59 million members signed up worldwide, and it offers those members a personalized and flexible delivery experience. It's personalized because members can go online and save their delivery preferences, which we honor. But it's flexible too, because they can make changes on an ad hoc basis, shipment by shipment. And they can do all of that without having to pick up the phone and call UPS. Shippers also enjoy the benefits of My Choice, because they can promote their brands in the electronic notifications that we send to consumers. And consumers receive those through three channels: through text, through email, or through push notification if they have the mobile app. One of the best features of My Choice is that it offers consumers the ability to redirect deliveries to an alternate delivery location. We call that UPS Access Point. That's a network of more than 25,000 retailers worldwide who will receive and hold deliveries for consumers who can't be home to receive them during the day. Giving consumers a simple digital way to manage their deliveries improves the experience for all parties involved in e-commerce. It improves the experience for consumers because they enjoy proactive shipment notifications along with flexible delivery terms. It improves it for sellers and shippers because they experience a reliable service that includes digital promotions. And it's good for UPS as a carrier because we improve our operating efficiency. 
Now, you can see how My Choice creates value in the marketplace, but I can tell you as a technology person that it also demands a healthy ecosystem, one that begins with a modular software design where business services are exposed through web APIs. In other words, it requires application building blocks. However, one of the things we found was that building blocks alone don't make up the whole ecosystem. We needed a platform, a platform that would allow us to deploy, to support, and most importantly, to scale as we innovate. Enter Red Hat. We picked OpenShift because it is highly scalable. It's an enterprise-ready container platform with powerful orchestration through Kubernetes. It has become a foundational element for us. It's enabled us to realize our cloud-native microservices architecture because, after all, we want to go beyond the 12-factor app. Now, pairing our cloud-native microservices apps with Red Hat has given us the foundation to process those billions of transactions that you just heard about and provide customers with accurate and timely shipment information. Along with OpenShift, we've also deployed other tools in our tool chain, things that allow us to do builds and orchestration, to do automated testing, and also to instrument our applications. All of these things working together in harmony in a technology ecosystem have enabled us to reduce our deployment windows, to increase our change control success, and to improve our overall system uptime. Having Red Hat's team of experts working with us on our journey has provided valuable insight and knowledge, and it's accelerated our success. They've been a true partner to UPS. We all know transformation isn't easy. It requires a robust technology ecosystem. It requires talented and engaged people, and it requires a process that's fully automated. Now, Red Hat gave us the foundational elements for our technology ecosystem. 
It's an ecosystem we're using to create a global smart logistics network. Thank you for allowing me to share our story today. Thank you, Ken. This was awesome. Unbelievable logistics. Now, innovation doesn't stop because you achieve something, because you win an award. After every success, you actually have more opportunities, and you can take greater risks, because you have established a culture of innovation, a culture of change. That's really important. And Red Hat will be there for you, not just with bits and bytes, but as a company standing behind you with a real partnership. Everything you do with us, and every interaction we have with you, only helps us make our products and solutions better. Better for the ecosystem, better for the open-source community, but also better for your business. Thank you for being awesome customers. I teased the Innovation Awards earlier. We're getting closer to that, so stay around for Chris Wright's keynote and the demo from Burr Sutter's team, and then we will reveal the 2019 Red Hat Innovator of the Year that you voted for. At this time, I would like to hand the mic over to Chris Wright, our CTO and a good friend of mine. Please welcome Red Hat Vice President and Chief Technology Officer, Chris Wright. All right. Good morning. Day three. Thank you, Marco, and thank you to Hilti and UPS. You're doing amazing work, showing that you have to constantly embrace change and new innovations to stay relevant. We know that business is ever-changing. You have to continually adapt to changes in technology, needs, and demands, all while meeting your customers where they are. Over the last couple of decades, we've seen a cycle of innovation that has not only fundamentally changed our view of IT, but also business and even society: the ubiquity of the internet, open source, smartphones, DevOps, big data and cloud. Red Hat has been your trusted, stable and open source platform through all of this. 
This is a common platform where you can innovate and not worry about availability, flexibility, scalability, security, or lock-in. Now we're entering the next round of innovation: the data-centric economy, the mass adoption of AI and distributed computing. True innovation and disruption doesn't come from technology on its own. True innovation happens when new technology connects people and new ideas in your business. Disruption comes from the culmination of incremental improvements in technology, paired with creativity, diverse ideas and open processes, and how you actually apply that technology to create something entirely new. An important enabler for this is the ability to utilize technology without undue hurdles: a quick time to value, stability, a platform that enables you to focus on the innovation in your core business. Now, throughout my time with you today, we have a lot planned. We'll see how Red Hat's platforms, built around RHEL and OpenShift, are truly your trusted platforms for the next cycle of innovation. And we'll hear from software partners that are helping operations and developers use AI and ML in novel ways. We'll show you how these new capabilities are helping businesses deliver value in disruptive ways, not only for themselves, but for entire industries. But before we get too deep into the software and platform side of things, everything starts with hardware. Without hardware innovation, we can only squeeze so much power out of our platforms. And modern innovation using AI is hardware-accelerated. Hardware is changing. Moore's law is hitting the laws of physics. And the next round of hardware innovation is about specialized hardware, especially for AI. You see Google with TPUs, Intel with DL Boost, GPUs, even FPGAs. Now, during my keynote last year, I talked about performance and tried to make the raw numbers very real, normalizing a CPU cycle to one second. With that time normalization, we saw that advancements in hardware reduce the time I/O 
operations take from months to days and even hours. And that was a year ago. We were looking at CPUs with just under 30 cores, compared to more than 5,000 cores for a modern GPU at the time. What a difference a year makes. The same convolutional neural net training that I tested last year is 56% faster. We're also seeing performance improve as much as two times within the same power envelope. Now, this is great for inference deployed at the edge, for example. But in the post-Moore's-law world, this isn't just about scaling up hardware anymore. Achieving maximum performance out of hardware also depends on end-to-end optimization: the hardware and drivers, optimized network fabrics, specialized tool chains, optimized software stacks and AI frameworks, and of course applications. A partner in this effort is the leader in GPUs, NVIDIA. NVIDIA has been with Red Hat all along, ensuring that their newest hardware capabilities are available and supported in end-to-end solutions. To help tell this story, and to show you what you can do with all these optimizations, I'd like to welcome Chris from NVIDIA to the stage. Please welcome NVIDIA Vice President, Computing Software, Chris Lamb. Good morning and welcome, everyone, and thank you, Chris Wright. NVIDIA and Red Hat have a long history of working together to deliver enterprise-class, high-performance solutions. I've worked on CUDA for about 12 years now, and I remember back in CUDA 1.0 how key it was for us to launch with Red Hat. And here we are now, 10 generations of CUDA later, and we're working closer than ever. Most recently, we certified Red Hat Enterprise Linux on DGX, our enterprise-class AI supercomputer. We now have Red Hat Virtualization integrated with vGPU for virtualized workloads, and of course, OpenShift support on NVIDIA GPUs. I'm talking to you now at a critical time in computing. 
Moore's law has ended, and data is exploding, and it's completely clear that in this new era of computing, acceleration is going to be the key that allows our data centers to continue getting faster. Now, NVIDIA GPUs are the most widely adopted accelerator across the industry, and this is for a really good reason. Solving this problem requires an accelerator that is programmable across multiple domains of software, but within a single architecture, so it can be deployed everywhere. And this is a really hard problem. It's a full-stack acceleration problem. You have to do optimization from the hardware to the firmware to the drivers to the communication libraries, the algorithmic libraries for data science and machine learning, the framework optimizations, all the way up to the services and the application layer itself. It's a really difficult problem, and this is one of the reasons why we've taken that stack, an optimized version of that stack, and put it inside containers in NGC, so there's a repository of fully accelerated applications with their stacks, across multiple domains. An example might be the major training frameworks, or RAPIDS, which is a Python-based framework for accelerating common primitives in data science, such as extract, transform, and load, or classical machine learning algorithms. So ultimately what we want to do is make it easy to install and run these optimized containers on OpenShift. Our goal in working together here has been to simplify the deployment and management of the full stack needed for this era of accelerated computing. So let's start with operations. You're an operator, and let's see a demo of how easy this can be. OpenShift 4 makes installing the cluster really easy. It reduces the install time from the better part of a day to less than an hour. So here you see OpenShift 4 installed on a bare metal cluster in our lab. You've got three bare metal master nodes and then three worker nodes with GPUs installed. So then let's install CUDA. 
Using the GPU operator, this is really, really easy, and the upgrades going forward are easy too. This also installs a monitoring stack that gives you metrics, so you can make sure you're getting great utilization of your GPUs. This is something we hear again and again from our customers: it's really important that you're getting good utilization out of them. Ultimately, the point here is to make it easy for a data scientist to launch one of these optimized NGC containers from a web console with just a couple of clicks. So let's now switch over to the persona of a data scientist, somebody who wants to set up an accelerated Python notebook. Here we're setting up JupyterHub on OpenShift so that the data scientist can spin up a notebook on demand. The one we're using here uses the RAPIDS framework, so that somebody could work with, say, the DBSCAN clustering algorithm. Ultimately, this is what's going to make it easy for a data scientist to just go get the resources they need on demand. So in short, what you've seen here is a preview of what accelerated AI and data science looks like with OpenShift and NVIDIA. We are really excited to work with you, our joint current and prospective customers, and to have you join us in a developer preview to try this out with your AI teams and your workflows. Ultimately, our aim is to make it easy to deploy clusters in on-prem environments. We've got a blog talking about preview access and how you can sign up to be part of our Early Access program. And we're also working with leading OEMs to make it simple to get starter kits for GPU-accelerated OpenShift clusters. So if you haven't had a chance to stop by our booth in the Expo, please come by and see what we're doing. We can share more about our work together. Thank you all. Enjoy the rest of the show. All right, thank you, Chris. This is a great example of how important OpenShift is. 
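As an aside: the RAPIDS cuML library deliberately mirrors the scikit-learn API, so the DBSCAN notebook workflow from the demo can be sketched on a CPU with scikit-learn and moved to a GPU by swapping one import. This is an illustrative sketch with made-up data, not the demo's actual notebook:

```python
# Illustrative DBSCAN clustering sketch (not NVIDIA's demo notebook).
# RAPIDS cuML mirrors the scikit-learn API, so this runs on a GPU by
# swapping the import to `from cuml.cluster import DBSCAN`.
import numpy as np
from sklearn.cluster import DBSCAN

# Two dense blobs plus one far-away outlier.
rng = np.random.default_rng(42)
blob_a = rng.normal(loc=0.0, scale=0.1, size=(50, 2))
blob_b = rng.normal(loc=5.0, scale=0.1, size=(50, 2))
outlier = np.array([[20.0, 20.0]])
X = np.vstack([blob_a, blob_b, outlier])

# Points within eps of >= min_samples neighbors form clusters; the rest
# are labeled -1 (noise).
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)

n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
n_noise = int((labels == -1).sum())
print(n_clusters)  # the two blobs
print(n_noise)     # the single outlier
```

In a notebook spun up through JupyterHub, the only change for GPU acceleration is the import; the fit/predict calls stay the same.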
OpenShift's work with our latest hardware platforms is crucial for these workloads. So with all of this impressive hardware delivering better AI performance and throughput in RHEL and OpenShift, we need to talk about the people keeping that hardware and those platforms running: operations. Modern software stacks make infrastructure available on demand. But we know that operations can be a thankless job. You have pressure from the business to deliver more with less. You have to keep everything running. And you have to look for ways to make that ever more efficient while delivering new capabilities for your developers and applications, and ultimately your customers. But the unexpected happens. You have hardware failures. You have demand that goes beyond what your hardware and infrastructure were planned for. You have puzzling performance issues to track down, to discover why they happened, and to fix. Now, with Murphy's law, you expect the unexpected, you just don't know when. This is where AI can help. To enable innovation, you need everything running reliably. And that means enabling ops to be better than ever. The operational paradigm of the cloud offers a service abstraction that encapsulates operational excellence. But in order to enable you to innovate, that concept has to meet you where you are. This is where AIOps comes in. We're really talking about autonomous clouds, self-driving clusters. AIOps is the combination of platforms, big data, and AI/ML that enhances practices like performance monitoring, event correlation and analysis, and management. At Red Hat, we're actively enabling this with solutions like Red Hat Insights and core concepts in OpenShift 4 such as operators. With AIOps, the infrastructure learns from the data and gains the ability to predict issues before they become problems. An example of a partner who is doing incredible work in the AIOps space is ProphetStor. 
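To make the idea concrete, here is a minimal sketch of one AIOps building block: flagging anomalous metric samples with a rolling z-score. This illustrates the concept only; it is not how Red Hat Insights actually works:

```python
# Minimal AIOps-style anomaly detection sketch (illustrative only; not
# Red Hat Insights' actual method): flag metric samples that deviate
# sharply from the recent history that precedes them.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=10, threshold=3.0):
    """Return indices of samples more than `threshold` standard
    deviations away from the mean of the preceding `window` samples."""
    history = deque(maxlen=window)
    anomalies = []
    for i, x in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(x - mu) / sigma > threshold:
                anomalies.append(i)
        history.append(x)
    return anomalies

# Steady CPU utilization around 40%, with one spike at index 15.
cpu = [40, 41, 39, 40, 42, 41, 40, 39, 41, 40,
       40, 41, 39, 40, 41, 95, 40, 41]
print(detect_anomalies(cpu))  # flags the spike
```

A real AIOps pipeline would learn seasonality and correlate events across many signals, but the shape is the same: learn from the data stream, then flag or predict deviations before they become outages.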
Their solutions are built on OpenShift and enhance its scaling and scheduling capabilities. ProphetStor and AIOps help operations teams predict and optimize workloads and resources in your cluster. And now, I'd like to welcome Brian to show you AIOps in action. Please welcome ProphetStor's solution architect, Brian Jang. Thanks, Chris. So as introduced, I'm Brian. I'm a solution architect over at ProphetStor, and I'm going to be talking about our AIOps solution, Federator.ai for OpenShift. So, Federator.ai: what we do is simplify the cost-optimization process for both day-one and day-two operations in OpenShift. For day one, as many of you know, there are hundreds of cloud providers out there, each with their own instance types and price structures. And new users to cloud environments might not readily know which cloud provider to choose, or even know their own application workload. So that's where we come in at ProphetStor. You just tell us the application and the optimization policy, and we'll recommend which cloud provider and instance type to choose. And then for day two, that's where our machine learning AI comes in. We learn the resource usage of each pod in your cluster, and we predict future resource usage. And with those predictions, we can drive the native Kubernetes scheduler and autoscalers for much more intelligent resource utilization than the current history-based method. And from these two solutions you can see that we act as a resource multiplier with OpenShift, enabling your team to streamline operations, save resources, and manage systems more efficiently and in less time. So, just some more details about our day-one solution: the user tells us what application he wants to deploy, about how many requests per day, and the optimization policy, whether it's cost, performance, or SLA. And then we recommend the cloud provider and instance type to deploy the application into. 
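To illustrate the shape of such a day-one recommendation, here is a toy sketch. The catalog, prices, and policy names are invented for illustration; this is not ProphetStor's actual model:

```python
# Toy day-one recommendation sketch (invented catalog and policy names;
# not ProphetStor's actual model): pick the cheapest instance type that
# satisfies the workload's CPU/memory needs, or the biggest one under a
# "performance" policy.
CATALOG = [
    # (name, vCPUs, memory GiB, $/hour) -- illustrative numbers only
    ("small",  2,  4, 0.05),
    ("medium", 4,  8, 0.10),
    ("large",  8, 16, 0.20),
]

def recommend(cpu_needed, mem_needed, policy="cost"):
    """Return the name of the instance type that fits the workload,
    minimizing price under "cost" or maximizing vCPUs otherwise."""
    fits = [t for t in CATALOG if t[1] >= cpu_needed and t[2] >= mem_needed]
    if not fits:
        return None
    key = (lambda t: t[3]) if policy == "cost" else (lambda t: -t[1])
    return min(fits, key=key)[0]

print(recommend(3, 6))                        # cheapest type that fits
print(recommend(3, 6, policy="performance"))  # most vCPUs that fit
```

A real recommender would also weigh the predicted workload profile, SLA targets, and per-provider pricing, but the core is the same constrained optimization over a catalog.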
But if the user wants to deploy directly into his own cloud, we can just directly recommend how many resources he'll need in terms of CPU and memory. And then day two, as I said before, that's where our machine learning AI comes in. We predict the future usage of each pod, and we apply it to the native Kubernetes scheduler and autoscalers, and you can see the graph behind me. There's a white dotted line, which is our predicted CPU usage. It's about 10 minutes ahead of the blue solid line, which is our currently observed CPU usage. But you can see that these two lines are really intertwined, which shows that our prediction engine is really accurate at this point. So with these accurate predictions, we can recommend where to put the green line and yellow line respectively. And we can also automatically execute these recommendations, so your operators don't even need to worry about scaling the cluster themselves. Okay. And then let me just switch over to a web browser. This is a Grafana environment. It's a side-by-side comparison of the native Kubernetes horizontal pod autoscaler and our Federator.ai horizontal autoscaler. I'll keep this a little bit short, but you can see we found that we outperformed the native one in these three main categories. For the same identical workload, we can use 19% fewer replicas. So that's directly saving you 19% of resources. We can also reduce your CPU-over-limit instances, where your application gets throttled and slows down, so we really help out there. And actually the biggest thing right now is that we reduce your out-of-memory instances by almost 90%. Every time an application hits an out-of-memory issue, it stalls out and crashes, so you want to avoid these at all costs. We reduce that by 90%. So this is a Grafana environment, and there's all this data. We have a booth where you can go over it with us later, and we can talk about how we do that. 
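The idea of prediction-driven scaling can be sketched in a few lines. Kubernetes' HPA computes desired replicas as ceil(currentReplicas * observedMetric / targetMetric); a predictive autoscaler applies the same formula to a forecast instead of the current observation. The numbers below are illustrative, and this is not Federator.ai's actual algorithm:

```python
# Sketch of prediction-driven scaling (illustrative; not Federator.ai's
# algorithm). Kubernetes' HPA scales on the *observed* metric:
#   desired = ceil(current_replicas * observed / target)
# A predictive autoscaler applies the same formula to a *forecast* of the
# metric, so capacity is ready before the load actually arrives.
import math

def desired_replicas(current_replicas, metric_value, target_value):
    """HPA-style replica calculation for an average-utilization target."""
    return math.ceil(current_replicas * metric_value / target_value)

current = 4
target_cpu = 50.0    # target average CPU %
observed_cpu = 48.0  # what the cluster sees right now
predicted_cpu = 90.0 # forecast for 10 minutes ahead

reactive = desired_replicas(current, observed_cpu, target_cpu)
predictive = desired_replicas(current, predicted_cpu, target_cpu)
print(reactive)    # reactive scaler sees no reason to scale yet
print(predictive)  # predictive scaler adds capacity ahead of the spike
```

The reactive scaler stays at 4 replicas until the spike hits; the predictive one scales to 8 before it does, which is exactly the gap the white dotted prediction line in the demo is meant to close.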
That comparison used the HPA, the horizontal pod autoscaler, but we can also be applied to the vertical pod autoscaler, the cluster autoscaler, the scheduler: all these different facets of your OpenShift cluster can be optimized using machine learning AI with our Federator.ai solution. And the really cool thing is that once we have those usage predictions, we can feed them back into the day-one recommendation of which cloud provider and instance type to choose. So now your full stack, from your resource usage all the way up to your cloud provider, is fully optimized using Federator.ai from ProphetStor. So that's it. We have a booth, booth 1134. If you have any questions or want more details, come say hi and talk to us. Thank you. We're going to be using data to make predictions for the future, keeping containers and pods running within set standards. Our apps built on top of OpenShift can automatically adjust and become more intelligent out of the box. And the key here is data. We have data. The dataverse is growing and we're going to be using it to make predictions. 90% of the world's data has been created in just the last two years, yet only 2% of it has been analyzed. So how do we connect data to your business to create true innovation? Let's talk about AI as a workload and how we help you innovate with AI. Developers need to be able to use AI. The work of training models and connecting that to apps has historically been done by small groups of highly skilled data scientists. In a traditional AI workflow, data scientists are a precious resource, but they can become a bottleneck. So we need to enable developers to help data scientists scale. OpenShift enables AI to move at the speed of the developer, and this benefits users as apps become more intelligent, faster.
Now my next two guests are democratizing developers' access to AI in different ways, to make the AI workflow easy, intelligent and accessible. You'll hear from PerceptiLabs, who've made deep learning model training as simple as using a mouse, for when you know what your data is but you just need to apply it to glean insights, or in this case return a result for your customers. Then you'll see how H2O.ai is approaching the problem of massive amounts of data where the model you should use isn't clear. Thanks to work from companies like PerceptiLabs and H2O.ai, developers now have access to AI platforms built on OpenShift. I'd like to welcome Martin and Robert to the stage to show you how to train and teach your AI to do powerful things the easy way. Please welcome PerceptiLabs chief executive officer Martin Isaacson and PerceptiLabs chief technology officer Robert Lundberg. Hi, I'm Martin. I'm Robert. We are the co-founders of PerceptiLabs. Unlike many Silicon Valley companies, PerceptiLabs was not built in some Bay Area garage. PerceptiLabs was built in a Swedish garage. So to start with, how many of you have ever worked with AI? Let me see your hands. That's some of you. For those of you who don't work with AI, take my word for it: developing an AI model can be a long, tedious and complicated process requiring specialized knowledge and skills. Well, we now have a tool that helps enterprises save time and money when creating AI models. We simplified the model development process by substituting math and code with a simple drag-and-drop interface. We built all of it using UBI-based containers on Red Hat OpenShift to make the deployment quick and easy. Let's show you how it actually works. I just want to mention that if we had been running this on virtual machines instead of containers, we would have had to wait at least five minutes for the virtual machines to start up. I'm just saying. Anyway, to your left, you see the different operations, the ingredients of the AI.
In the middle, we have the workspace where we can mix all the different kinds of operations. To your right, we see a project menu, and on top, a toolbar. Pretty simple. And we're going to do this a little bit like a cooking show. First, we'll build, or bake, a model from scratch and start training it in the oven. So we created a data set containing images of red fedoras, red hats. We want a model that learns to classify whether an image contains a red fedora or not. So let's load this data into a data layer on the workspace. And there it is, just as we expected. You could continue to use this drag-and-drop interface to build out the entire model workflow, but here we will switch over to a complete workflow and show you the whole process. So we can see the image data layer on the workspace. We have also loaded the label data, the ground truth, into another data layer. This is what our model's output will be compared against during the training. So we have two classes: red hat and not red hat. In the platform, we can define what sort of AI technique we want to use by choosing a training layer, and we can select from multiple options including reinforcement learning, genetic algorithms, dynamic routing and so on. But here, we will choose normal supervised learning. And if you're too lazy to wire it all up yourself, you can do like us and auto-generate it. Well, let's start training our model. First, we will set some general settings, for example for how long we want to run the model. We automatically get thrown into the statistics dashboard, where we can see various kinds of metrics. This might look a little confusing at the start, but Robert is here to walk us through it step by step. At the top, we have the statistics for the training layer. This shows the overall performance of the model. Here we can see things such as the model input or the current accuracy.
We can also see the network output in blue compared to the labels in yellow. And just to the right of that, we see the same thing, just averaged over many samples to give you a nice distribution. If you want to keep track of how the model is progressing, you can swap over to accuracy and see it there. At the bottom right, we have something which we call the view box. It's like a peephole into the model, where you can peek into all its different parts. You can select which part you want to look into by clicking on the map over to your left. This gives you full transparency into what's going on inside your model. You can see things such as the outputs, the biases or even the gradients. If you want to, you can pause your model, change some things up on the workspace and keep on training. It looks like the model has finished its training. Let's go to the test view to see how it would perform against real or live data that it hasn't been trained on yet. And if you want to know how it works, you have a clickable map right here as well. If you're satisfied with your model, you can go over here and export it, either as a TensorFlow model or as a container image that comes with a full API. So the key points we want to highlight are the simplicity and efficiency of modeling, a nice-looking dashboard allowing you to custom-edit every component, and finally, the ability to perform cutting-edge AI on OpenShift. Okay, Martin, do you want to talk a little bit about how the audience can engage with this? Sure. This model is hosted on OpenShift and we can communicate with it via Twitter. So give this a try: go to Twitter, upload a picture and mention @FedoraFinderBot, and it will tell you if it thinks there's a red fedora in the picture or not. This will be running through the entire day, so have fun with it. We will be hanging around the emerging tech booth if you want to come and see us.
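The workflow demonstrated here, labeled images in and a trained binary classifier out, can be sketched in a few lines. This is a hand-rolled toy stand-in, not PerceptiLabs' platform; the synthetic data and the red-dominance feature are invented purely for illustration:

```python
# Sketch of the supervised-learning step behind the demo: a binary
# "red fedora / not red fedora" classifier. The real platform builds a
# deep network via drag and drop; here a tiny logistic regression on a
# hand-made colour feature illustrates data -> training -> prediction.
import numpy as np

rng = np.random.default_rng(0)

def red_dominance(img: np.ndarray) -> float:
    """Feature: mean red channel minus mean of the other channels."""
    return float(img[..., 0].mean() - img[..., 1:].mean())

# Synthetic 8x8 RGB "images": class 1 skews red, class 0 does not.
reds = rng.uniform(0.6, 1.0, (50, 8, 8, 3))
reds[..., 1:] *= 0.3
others = rng.uniform(0.0, 1.0, (50, 8, 8, 3))
X = np.array([[red_dominance(im)] for im in np.concatenate([reds, others])])
y = np.array([1] * 50 + [0] * 50)

# Train logistic regression by plain gradient descent.
w, b = 0.0, 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X[:, 0] * w + b)))   # sigmoid prediction
    w -= 0.5 * np.mean((p - y) * X[:, 0])       # gradient step on weight
    b -= 0.5 * np.mean(p - y)                   # gradient step on bias

acc = np.mean(((1 / (1 + np.exp(-(X[:, 0] * w + b)))) > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

The drag-and-drop tool is doing the same loop at scale, with a real network and real image tensors; the export step then wraps the trained weights behind an API.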
So in closing, we just want to say: containerize it. Thank you. Please welcome H2O.ai chief executive officer and founder, Sri Ambati. What an amazing demo from PerceptiLabs. Thank you for having us, Red Hat. We're so excited to talk to everyone. Open source is about freedom, not free. It's about culture, code and customers, customers like you who supported us over the last seven years and supported Linux over the last few decades. Thank you for your support, community. We're really excited to talk about Driverless AI. Automatic machine learning has come of age. We have five common steps for large and small enterprises to use AI. First, you need to drag and drop your data, so connectors to your data sources are key. Then you want to get a quick sense of your data: you want to understand how your data is laid out, and automatic visualization allows you to understand your data in quick, simple ways. Data scientists are the most wanted talent across the world today; automatic machine learning augments them, enabling them to build models of high quality and deploy them quickly as scoring-engine pipelines. H2O's Driverless AI helps you prevent the common pitfalls in doing data science, like overfitting, and automatic feature engineering gives you that extra edge to get your model into production. Finally, trust is so important in AI. Your models are not trustworthy if you cannot really interpret them and explain them. So explainability is key, and that's the final step before you can take a model to your business user. Let's look at how this works in real time. We deployed our software on the OpenShift Container Platform, and Driverless AI allows you to connect to several data sources, whether that's traditional file systems, the cloud, open source object stores like Minio, or time-series databases like KDB.
We have a simple demo set up here to understand sentiment at the conference, a classic Twitter data set. The correlation graph allows you to understand the hidden structure inside your data and its dimensions. Let's run a simple prediction: a classic test and train split of your data set, and since we're looking for conference sentiment, we'll just look at the text. NLP is at the heart of all AI. Obviously, you can use not just one framework but multiple frameworks, including H2O open source, XGBoost, gradient boosting machines, TensorFlow, Torch and your favorite algorithms of choice. H2O's Driverless AI tunes your models and eliminates features, columns that are collinear, and avoids common pitfalls in how you do your test and train division and your validation set, preventing leakage of data and signal. Time series, text and transactional data need smart recipes that can pull the best signal out of your data. Right now you're seeing automatic feature engineering on the data, pulling up signal so you can quickly improve the accuracy of your model. We ran an experiment earlier and deployed it on OpenShift; about 400 features were analyzed in just a matter of minutes. Several of our customers have built models to democratize credit, to prevent fraud, and to save lives, and you'll see one of our customers later who's using it to fight sepsis across multiple hospitals. Now it's time to deploy the model. H2O models generate code automatically, so you can deploy them as real REST endpoints. Red Hat Summit has been a beautiful, awesome experience for all of us, and we're really excited to show that the conference sentiment has been spectacular. Finally, OpenShift allows us to truly democratize AI and take it to where developers, data scientists and DevOps for machine learning can truly collaborate. AI is a very powerful way to build code.
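One of the steps mentioned here, eliminating collinear columns before training, can be illustrated generically. This sketch is not H2O's actual recipe; the 0.95 threshold and the synthetic data are invented for the example:

```python
# Sketch of one AutoML hygiene step: dropping near-duplicate (collinear)
# columns so the model doesn't waste capacity on redundant features and
# so coefficients stay interpretable.
import numpy as np

def drop_collinear(X: np.ndarray, threshold: float = 0.95) -> list[int]:
    """Return indices of columns to KEEP after removing near-duplicates."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep: list[int] = []
    for j in range(X.shape[1]):
        # keep column j only if it isn't highly correlated with a kept one
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return keep

rng = np.random.default_rng(1)
a = rng.normal(size=200)
b = a * 2.0 + 1e-3 * rng.normal(size=200)   # almost a copy of `a`
c = rng.normal(size=200)                     # independent signal
X = np.column_stack([a, b, c])

print(drop_collinear(X))  # -> [0, 2]  (column 1 is collinear with column 0)
```

Real AutoML systems combine this kind of filter with leakage checks on the train/validation split, as the talk describes.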
Data is automatically generating code for you using AI, and we're really excited to democratize it and make it for all. Thank you for having us. Creating ecosystems of solutions for our customers will always be important to Red Hat. However, it's also important to consider how our customers are innovating. With our ecosystem partners, customers and the broader open source community, we're creating the trusted platform for the next cycle of innovation. Together we're enabling better hardware to drive AI, and, I know it's a little cliche, in ways we've never seen before. We're enabling ops to be even more efficient by peeking into the future to predict issues before they become very real, very expensive problems. We're giving developers the tools they need to train their applications to take advantage of historical shared data to make decisions. We want to create the right primitives and connective tissue in the platform to enable a broad ecosystem of specialized solutions. At Red Hat, we focus on the upstream. Key open source projects in this area are Kubeflow and Open Data Hub, which you can learn more about today on the emerging technology track. All of this innovation delivers value to organizations and to customers. But what is value? We can talk about money, saving money, or expanding and creating new sources of revenue, but technology and innovation aren't only about money. I want to talk about value that's beyond financials. Value to some organizations is related to health, where delivering value means keeping people alive. It's important to enable innovative customers who understand how innovation impacts both technology and people, like HCA Healthcare, who is one of our Innovation Award winners today. To tell their story, I'd like to introduce Dr. Jackson. Please welcome 2019 Innovation Award winner, HCA Healthcare Chief Data Scientist Dr. Edmund Jackson. Morning, everybody.
When the organizers at Red Hat told me that the theme of this conference was expand, I did not really appreciate that this meant the stage as well. This thing is vast; I could get lost up here. I'd like to start on a personal note, to thank the open-source communities: both our commercial partners, such as Red Hat and Sri from H2O whom you heard from just now, but also the non-commercial projects. The list of such projects upon which our work relies is long: GNU, Linux, Clojure, Elixir, Kafka, even our good buddy JavaScript. Without the contributors, the moderators and the maintainers of these projects, our work would be impossible. So thank you. It's my privilege this morning to represent my company, HCA Healthcare. We're a Nashville-based healthcare provider operating 180 hospitals and about 1,800 sites of care across America and the United Kingdom. And I'm here to tell you one of our stories: SPOT. One of the traits of our organization is a relentless commitment and energy towards improvement. A few years ago, our clinical leadership decided to take a bite out of sepsis. Sepsis, as you can see, is the body's overwhelming toxic reaction to a bloodstream infection. It's little known, but deadly. When thinking about how to defeat sepsis, there are two important things. One, every hour that it goes untreated results in an increase in risk of mortality of between 4 and 7%. But two, if you can detect it, treatment is relatively easy, as far as these things go. So the name of the game is rapid identification and even more rapid treatment. So we looked into our data to try and understand how we were doing. Here's the data for one hospital. Rows are days, columns are hours of the day, and the number and color represent the number of sepsis screens performed at that time. A sepsis screen is a unit of nursing work in which we look for sepsis in our patients. And you can see we're doing this at 8am and 8pm: shift change.
Now there's 12 hours between those, and in the fight against sepsis, 12 hours is too long. But our answer couldn't be, hey nurses, could you please do some more sepsis screens? Because nurses are already the most heroically busy people basically on the planet. So we had to get smarter. We looked deeper into our data. We used our data warehouse to look backwards in time, we applied a really pretty simple algorithm, and we saw: hey, we can see sepsis in the data. If only we could do that in real time. If we could do it in real time, we could alert the nurses, we could coordinate a workflow, and we could give people the time advantage necessary to fight sepsis. Our IT, networking and data teams said, no worries, we've got this. They pulled data from all of our hospitals in real time to our data centers, with five-minute latency. That's awesome. Now we can do this. The SPOT product teams pick up that data. They create a patient object that represents every single person under our care at all times, every hour of every day, and every single transaction updates that state and looks for sepsis. If it's detected, it coordinates a pretty complex symphony of action and care in the hospitals to make sure timely care is provided. All of this is done on OpenShift. All of it. Now, the key point here is that data by itself is worthless. Absolutely nothing counts in this world except action. It's the coordination of a very complicated workflow. It coordinates nurses, who care about all of the concerns of one patient; sepsis coordinators, who care about sepsis across all of the patients; and a management structure that cares about the performance of the entire system. But my favorite part isn't the lines of workflow. It's the lines of communication between those teams and also back from those teams to us. In the application, people can send us a message or report bugs, or just say hello, which sometimes they do.
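The "patient object" pattern described here, where every incoming reading updates a state and re-runs a screen, can be sketched simply. To be clear, SPOT's actual algorithm is not public; the sketch below uses the classic SIRS screening criteria purely as an illustrative stand-in:

```python
# Sketch: every new vital-sign reading updates a patient's state, and a
# simple screen fires an alert as soon as enough criteria are met.
# Thresholds are the textbook SIRS criteria, used only for illustration.
from dataclasses import dataclass, field

@dataclass
class PatientState:
    vitals: dict = field(default_factory=dict)

    def update(self, name: str, value: float) -> bool:
        """Ingest one reading; return True if the sepsis screen now fires."""
        self.vitals[name] = value
        return self.sirs_criteria_met() >= 2   # >= 2 criteria -> positive screen

    def sirs_criteria_met(self) -> int:
        v = self.vitals
        met = 0
        if "temp_c" in v and (v["temp_c"] > 38.0 or v["temp_c"] < 36.0):
            met += 1
        if v.get("heart_rate", 0) > 90:
            met += 1
        if v.get("resp_rate", 0) > 20:
            met += 1
        if "wbc" in v and (v["wbc"] > 12.0 or v["wbc"] < 4.0):  # x10^9/L
            met += 1
        return met

p = PatientState()
print(p.update("heart_rate", 104))  # False: only one criterion so far
print(p.update("temp_c", 38.6))     # True: two criteria, alert the nurses
```

Run once per transaction per patient, this is the shape of a streaming screen; the hard parts in production are the real-time data feeds, the model quality, and the workflow the alert triggers.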
And as a standard practice, a non-heroic part of our business, we can turn around those feature requests in the same day. Zero-downtime major version upgrades: with OpenShift and our culture of empowerment, we've done that. But all of this technical juju provides one thing: a time advantage to our clinicians. What we provide them is a five-hour head start, and in the hands of clinicians, five hours save lives every day across over 150 hospitals. SPOT the algorithm isn't the story here. SPOT the platform is. We have created a system that will provide high-reliability care at scale and allow us to radically transform the way that we, with our providers, improve lives and provide care. For us, it's that: a system that provides care and improvement of human life. Thank you. At this point I was promised somebody would come and talk to me. There he is. Thank you. All right, Dr. Jackson, I love this story. One of the things we were talking about is that the introduction of AI into the workplace, and technology in general, can actually make people uncomfortable. Where do you see AI in the workplace and the concerns about job loss coming together? What does that look like for you at HCA? It's a really key question, Chris. We deal with human lives. It's a sacred responsibility. And I hope that SPOT provides an example of the right way of doing it. We try to let the computer do what it does best, and let our people do what they do best: provide empathy, provide care and dignity for people who are sick. And I think at this sort of Promethean time of AI, if we as engineers and creators hew to that line of trying to enhance humanity rather than compete with it, I think we'll be fine. Thanks very much, Dr. Jackson. I love that. So that's computers doing what they do best; really, we're talking about machine-enhanced human intelligence. I think that's a great way to think about it.
So I want to expand a little bit on health care and the implications of the work you just saw. We know that data is valuable. It helps you learn. It helps you do important work. Some data, however, is sensitive, private data, like patient records for example. The tension here is that you need to be able to use this data while respecting the privacy of the party it's tied to. Patient records have to be kept confidential for both legal and ethical reasons. But we must study them; the future of medicine depends on it. Red Hat, along with many other partners, has been working on ways to share and generate new knowledge across businesses around the world without sharing the data that must remain private. The result of this work is secure multi-party computation, or MPC. MPC is a cryptographic solution wherein multiple parties jointly make computations using private data from each party, some secret number for example. Together they want to compute a function using these numbers, but they don't want to reveal their private data. I know it sounds a bit like a magic trick, but let me quickly convince you that it can actually be done. So let's say I have a group of people who want to compute their average salary. I choose a random number, add it to my salary and pass the result to the next person. They pick their own random number, add that number and their salary to the number I gave them. They pass the result on, and so on around the room. At the end we have one very large number. Now I subtract my random number and pass it on. My neighbor does the same, and so on around the room again. When the result comes back the second time, it's the sum of everyone's salaries, which we can divide by the number of people. In the end I know the average salary of everyone in the room, but no one has shared their salary with anyone else. Now you'll have to take my word for it that you can do this with any calculation, with more efficient protocols, which you can.
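The round-the-room protocol just described translates directly into code. This toy sketch runs all the parties in one process, so it only illustrates the arithmetic of masking and unmasking, not a real secure deployment:

```python
# The two-pass salary protocol: each party masks its salary with a private
# random number on the first pass and removes the mask on the second.
# No party ever sees another party's salary, only masked running sums.
import random

def average_salary(salaries: list[float]) -> float:
    masks = [random.uniform(0, 1e9) for _ in salaries]

    # First pass around the room: everyone adds (salary + private mask).
    total = 0.0
    for salary, mask in zip(salaries, masks):
        total += salary + mask        # only this running sum is ever shared

    # Second pass: everyone subtracts their own mask back out.
    for mask in masks:
        total -= mask

    return total / len(salaries)      # what remains is the true sum

print(average_salary([60_000, 85_000, 120_000]))  # approx. 88333.33
```

Real MPC protocols use secret sharing over finite fields rather than floating-point masks, and they tolerate colluding or dishonest parties, but the core trick is exactly this: compute on masked values, then remove the masks only in aggregate.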
Imagine a city doing traffic planning with data from ridesharing companies, or patient info shared across multiple hospitals, without breaching privacy laws or revealing sensitive information. Now, as fun as it is to talk about the theoretical use of technology, I'd like to make this real. Let me bring up someone who is exploring MPC right now in work that touches patients' lives. Please welcome Boston Children's Hospital Director of the Fetal-Neonatal Neuroimaging and Developmental Science Center and Professor of Radiology and Pediatrics, Dr. Ellen Grant. Good morning. Thank you, Chris. It's wonderful to be here and to talk to you about why we think multi-party compute is critical to the future of medicine. There are two key ingredients: one is Red Hat's infrastructure, and the other is ChRIS, a platform we've been developing at Boston Children's Hospital. No relation, Chris. Now, what our ChRIS does is take data in the hospital, such as images, and allow us to compute on the cloud. It leverages OpenStack for rapid analysis, OpenShift so we get reproducible results, and multi-party compute to securely compute across multiple parties. Let me walk you through an example of how this would work that is almost in production. A ten-year-old boy arrives in the emergency room with his parents. He's had multiple seizures, and of course they're very distraught. One of the first things we do is get images of his brain to make sure there's nothing grossly wrong. What you see behind me are multiple images from one of the many volumes of images that we acquire: one from the side, one from the front and one from the top. As a radiologist, I would be sitting now in the reading room looking through these in detail, about 5,000 images in total, looking for any subtle abnormalities. I look closely, review, and I don't see any abnormalities, so I call it normal.
The patient has blood work done, has a neurological exam, and is basically normal at that point in time, so we send him home on a standard anti-seizure medication. However, over the next years his seizures are not controlled. They bring him back and they try multiple different medications. He gets more MRIs. He gets other tests, such as EEG and MEG. No focal onset is found, and so he goes on to have continuous, poorly controlled seizures. That results in poor school performance and incredible family hardship. Now, is there anything we could have done differently in this scenario? If I had more data and more compute, could I pick up something subtle earlier and guide more targeted therapy? Now imagine this child is in the emergency department again. An MRI is done and the images are sent to me. While I am visually scrolling through the images and looking for subtle abnormalities, I pull up a web interface called ChRIS. I select this child's images, I go to the different plugins that we have, which are all in containers, and I decide to choose one plugin called FreeSurfer, which does a detailed analytical assessment of this brain. I point and click, the data goes off, and before I am even finished reviewing the images, which takes around 15 to 20 minutes, the results are back. This is the colored image you are seeing up here on your left. I now have a detailed characterization of that individual child's anatomy. Some key features to notice: it was easy, I just pointed and clicked in a web interface. Using OpenStack, it is rapid. Using containers, it is reproducible. These are three key features to get anything to the front line of medicine. Now I have this brain characterized in far more detail than I could ever do by eye. Even as an expert doing this for 25 years, I cannot remember or even perceive this kind of detail. And I have characterized all these functional regions of the brain, with volumes, surface areas and incredible detail.
And again, this was easy, rapid and reproducible: point and click, and the results come back to me on the front line. But we don't stop there. I now know the detailed anatomy of this one individual, but I want to compare him to other individuals, to see whether his brain is normal or whether there are subtle abnormalities. What if I could get detailed, similar information not just from my own patient records, but from many hospitals across the country and across the world, working together? This is extremely important in pediatric medicine, because we have many rare disorders that may be seen at only a couple of hospitals, and if we want to really get a good comparison across demographics, I have to bring the data of multiple hospitals together, or my data will be skewed. For example, if I want to match the same gender, the same age and the same ethnic background, I need to pool resources across multiple hospitals to be able to get that data. If I can access all that data pooled together, I have an amazing wealth of comparative data that I could use to guide my decisions. Through our collaborations with Red Hat, we have now taken ChRIS to the next level. It communicates with the MPC infrastructure, and encryption is built into how ChRIS manages the MPC compute. We have set up a series of enclaves in the cloud with collections of brain data. No individual's data is disclosed, so patient privacy is maintained at each of these individual hospitals. Now imagine I'm sitting there at the front line. I've analyzed my data. I send it to the enclaves to compare my index case, the patient I'm seeing, against this wealth of knowledge that we are now sharing. The first question I ask is: how different is this brain from typically developing kids of the same age and background? And I get the results.
Red is keyed as standard deviations above normal and blue as standard deviations below normal. So now I've exquisitely characterized how different this child's brain is from a collection of normal kids, again in much more detail than I could ever do by eye. But we don't stop there. I go back to these multiple enclaves and I search for matches with the same pattern of abnormalities and the same history, and we flag a result. A collection of cases from different hospitals comes up, only a few, and they suggest this may be very similar to a particular genetic disorder. We then run a blood test on the patient, and it confirms that rare genetic variant. That then allows us to give the patient a targeted gene therapy that specifically addresses the dysfunction that results from that gene. So the child is given an initial gene therapy within two to three weeks of us actually doing this MRI. His seizures are controlled and he goes on to lead a productive life, without the disruption of multiple tests and failed medications. Now, although this is a theoretical example, the images used were created from an actual MPC calculation, securely performed across several prototype enclaves. This is the future of precision medicine. This is what we want to do, and it is not possible without the Red Hat infrastructure and ChRIS to bridge those two worlds together. One other important point is the open source community, because our lead engineer, Rudolph Pienaar, is working hand in hand with the Red Hat engineers. So there are no black boxes, and that's another critical point in medicine: I need to know what happens to my data. I need to trace it through so that I understand the analysis that I get. Working together in open source, yet in encrypted environments, has now helped us share our collective knowledge to better serve and save lives while protecting individual identity.
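The comparison step described here, expressing each regional brain measurement as standard deviations from a matched cohort mean, is a z-score, and that is what the red/blue map encodes. In this sketch the region names, volumes and cohort statistics are all invented for illustration; the real cohort statistics would be computed inside the MPC enclaves without exposing any individual's data:

```python
# Sketch: z-scores of regional brain volumes against an age-matched cohort.
# z = (patient_value - cohort_mean) / cohort_std; |z| > 2 is roughly the
# "red/blue" flagging threshold shown on the map.
def z_scores(patient: dict, cohort_mean: dict, cohort_std: dict) -> dict:
    return {region: (patient[region] - cohort_mean[region]) / cohort_std[region]
            for region in patient}

# Invented example numbers (volumes in cubic millimetres).
patient     = {"hippocampus_mm3": 3050.0, "amygdala_mm3": 1900.0}
cohort_mean = {"hippocampus_mm3": 3500.0, "amygdala_mm3": 1750.0}
cohort_std  = {"hippocampus_mm3":  200.0, "amygdala_mm3":  150.0}

for region, z in z_scores(patient, cohort_mean, cohort_std).items():
    flag = ("blue (below normal)" if z < -2
            else "red (above normal)" if z > 2
            else "within range")
    print(f"{region}: z = {z:+.2f} -> {flag}")
```

The privacy point is that only the aggregate mean and standard deviation per region need to leave the enclaves, never the individual scans they were computed from.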
Together we are changing how healthcare works, and it's about time. Thank you. These are awesome stories. Thank you, Dr. Grant. So these technological advancements, and more importantly how they're applied, are obviously having a massive impact. As you've seen from HCA Healthcare and Boston Children's Hospital, the innovation here is changing lives: the lives of patients and their healthcare providers. It's really about fundamentally changing these organizations and, well, you heard it from Dr. Grant, it's disrupting their entire industry. And it doesn't stop there. This massive shift of machine-enhanced human intelligence, making data actionable and usable and pairing it with truly innovative approaches, is happening across many organizations, and it's having an impact on those organizations' industries as well. Telecommunications, banking, even the way you interact with your car: they're all undergoing disruption thanks to innovation and thanks to Red Hat's solutions and platforms. Take something as elemental as the human voice, how we communicate. Sure, there have been advancements: SMS, instant messaging platforms, emojis, GIFs. These all help us communicate, but there's room for innovation yet: enhancing the human voice itself, changing not only how we communicate but opening up the doors on who we communicate with, in ways we couldn't easily do before. Please welcome Guillaume and Vasily from Optus to the stage; I think you're really going to enjoy what they're up to. Please welcome Optus Senior Innovation Manager Guillaume Poulais-Martis and Optus Principal Software Engineer Vasily Chikalken. Good morning. Good morning. For thousands of years, people have been talking to each other. In ancient Greece, Aristotle was walking with his pupils and having conversations that would revolutionize our world. Later, conversations have been carried through other means. In the 19th century, Alexander Graham Bell invented the phone call.
Telephones have been used to carry conversations, whether it is a conversation about love, about war, or to make peace. In our modern world we have a multitude of ways to hold conversations: chat, emails, social media, you name it. But phone calls are falling out of fashion. There is this perception that the phone call is dead. In fact, it's lagging in terms of features: no emojis, no chat history, no multitasking. On the other hand, our voice is the most natural way of communicating. Many organisations are working on improving their products using voice: voice assistants to answer your questions, voice biometrics to authenticate you with your bank, voice bots to improve service. So, a little while ago at Optus, we asked ourselves: can we bring modern voice technologies to the phone call? We have the capabilities to implement native phone calls. We have towers, data centres and fibre, but our mindsets hadn't changed. Phone calls have remained wires and switches. So, how might we, as a provider of communication, challenge ourselves to rethink the phone call? To leverage our core capabilities and open our mindset to lead innovation within our very own network. Today, we'd like to show you a step change in how we see the phone call. G'day, mate, how's it going? Great, and you? I'm feeling a little bit nervous and hope this demo goes well. You know, we are making this call because we want to demonstrate some of the cool things that we are doing at Optus. Let me keep a record of this. Voice Genie, start taking notes. So, this phone call is now being transcribed in real time? Yes, and look at the potential. We just made this phone call open and digital. You mean that we can integrate this conversation into different systems? Yes, of course: email, calendar, contacts. Can you give an example, say, a web service? Exactly. Is this system only for English speakers? Of course not. Do you speak French? Voice Genie, translate into French.
This call is being translated by Optus Voice Translate. You're kidding. No, I'm not. Do you speak French? And can I discuss our telecom network? Why not? Let's have a conversation about it. We should explain to our friends in Boston how it works. So, phone calls are a little complex. Our voice is not asynchronous. You have to deal with it in real time. And you cannot drop too many packets or add delays, otherwise you will have very bad quality. Let's look at the details. John is in Perth. And he wants to call Mary to tell her about the weather. Mary is in Sydney, sipping a coffee and getting through her emails. When John starts the call, his cell phone is going to send a message to the tower. And the message gets carried all across the country to Mary via our mobile core. When Mary answers the call, an RTP media connection is established between our Western Australia exchange and New South Wales. There are 4,000 kilometres between them. And if you don't want bad quality of service, you need to be cautious about latency. To avoid additional latency for the call, you have to deploy your virtualized media functions on the same path as the call. And that brings additional challenges. As a software developer, you want a simple way to package your software in a portable format. From an infrastructure point of view, you want to deploy this package in multiple geographical locations, have convenient means of upgrades, and monitor the health of the platform. And you want reliability and support, which brings us to containers and OpenShift. We package our virtualized network functions into containers, then we distribute them across our exchanges with OpenShift pods. And by doing so, we get the benefits of a single platform to build, scale and monitor our software. We do all of this while introducing a repeatable software development lifecycle for future innovation.
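The latency point is easy to sanity-check with a back-of-envelope sketch (the speed figure below is a common approximation for light in optical fibre, not an Optus measurement):

```python
# Back-of-envelope check: why a 4,000 km media path matters for real-time voice.
SPEED_IN_FIBER_KM_PER_MS = 200.0  # light in fibre travels roughly 2/3 of c

def one_way_delay_ms(distance_km: float) -> float:
    """Propagation delay only; ignores queuing, processing, and jitter."""
    return distance_km / SPEED_IN_FIBER_KM_PER_MS

# Media anchored on the direct Perth-Sydney path:
direct = one_way_delay_ms(4000)          # ~20 ms one way
# Media hairpinned through a distant data centre adds distance:
detour = one_way_delay_ms(4000 + 2000)   # ~30 ms one way

print(f"direct: {direct:.0f} ms, with a 2,000 km detour: {detour:.0f} ms")
```

Propagation delay alone is only part of the voice-quality budget, which is why keeping media functions on the call path, as described above, pays off.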
By abstracting the complexity of our network and harnessing software advances, we are opening a safe environment for software developers to build, deploy and operate a new breed of telco applications. This is a unique opportunity to uplift legacy equipment into the digital age. Thank you for your time today. Come on, that was awesome. All right, thanks Guillaume and Vasily. When you think about how Optus is changing phone conversations, you can see how new expectations for an old medium led to a game-changing innovation. It's remarkable to see how we're still iterating on something that's inherent to human communication. Now, the basic premise of banking has also remained consistent over the years. Advancements in digital technologies have led to a whole new set of expectations and places where you must meet your customers. This is something that Emirates NBD is acutely aware of, and they happen to be another one of our Innovation Award winners. So please join me in welcoming Ali Ray to discuss how they're meeting shifting expectations in a long-standing industry. Please welcome 2019 Innovation Award winner, Vice President Cloud Platform, Ali Ray. Hello Chris, great to be here. All right. Thanks for joining me. Emirates NBD started a four-year digital transformation project. Can you tell us about the goals and milestones of this project? Absolutely. Emirates NBD is a leading bank across the Middle East. Over the last few years, with rising customer expectations and increased competition, we realized as a bank we needed to change. So with that, in 2017 we started a four-year digital transformation. One of the first key tenets of that transformation was launching our private cloud based on Red Hat's OpenShift technology. We named it Sahab, which is Arabic for white cloud, and it was very much the enabler for our digital transformation. So you just mentioned Sahab, my first Arabic word. And that was a major enabler for your transformation ambition.
So can you kind of walk us through some of the challenges and maybe some of the benefits you've seen over the last 18 months? Sure. So we worked very collaboratively with Red Hat. We took the best of our internal IT talent and brought some great engineers from Red Hat on board to deliver our private cloud, which was very much a first for the region. From there we looked at how we could deliver on our strategic goals. We wanted to really push forward with always-available banking. Very similar to utilities such as electricity or the internet: how could we serve our customers 24x7, so banking whenever they needed it. But we also wanted to pivot. How could we change and innovate at speed? How could we deliver faster? We wanted to deliver in days and weeks, not months and years. These two strategic goals were very much an overall view of how the bank wanted to move forward with its international growth and expansion. So you mentioned Sahab, and it's a key part of Emirates NBD's infrastructure. Now, how will you use this private cloud to personalize banking? Great question. So Emirates NBD is continually looking at ways of pushing the innovation boundaries with adopted open source technology. And a great example of this is WhatsApp digital banking. We built this using plug-and-play APIs on our private cloud within three days. This would have taken weeks, months, years previously. And we've now launched that as a global first to all our customers. We're now looking at how we move 95% of our applications over to our private cloud platform by the end of 2020. So you've called us more than a vendor, which I love. Thank you. Throughout this journey with Red Hat, what have you learned and what are you looking forward to? So working with Red Hat was amazing. It was a fantastic experience. I think, me personally, what I learned was how we're looking at changing the culture of the bank.
How we were driving forward with a new culture, and the culture that Red Hat brought really worked well with us. It enabled us to really achieve anything we wanted to do and move towards our vision and our goals collaboratively, which was great. Looking forward into the future, we've got some goals around, as the public clouds become available in the region, wanting to work with the partners and move more towards a hybrid model. We're looking at how we enable real-time banking with Kafka, and finally we're working with a lot of our partners and vendors to move more of our critical workloads onto our private cloud, such as our core banking system. So an amazing opportunity over the next year that I'm very much looking forward to. That is awesome. Talk about expanding your possibilities. Thank you. Thank you, Chris. Thank you. So one thing you've seen today is that starting with a single innovation is just the beginning for many of our customers. To deliver value from that innovation, Emirates NBD had to continually transform their business, using data to personalize their customers' experience. This data- and customer-centric approach is important in every industry, including the auto industry. Our next guest, Dr. Lank, is here to discuss how this transformation has taken place at BMW Group. Please welcome BMW Group Lead Architect, Connected Vehicle, Digital Backend, Big Data and Blockchain, Dr. Alexander Lank. Hey, everyone. I'm Alex. I'm with BMW. I hope you all know BMW. The company has been around for more than 100 years. We're building cars. Have been doing this quite successfully. But in recent times, not just the car but also the digital services have become more and more important to our customers. Even if you know BMW, you might not know how the connected car and IT play into that. So, first of all, we have a product. This product is called ConnectedDrive.
And it's basically everything in your car that is a digital service and needs some connection to the backend. We have map updates over the air. We have a concierge call where you can push a button and you get a call from someone who helps you make a reservation at a restaurant and so on. You can use your cell phone to open the car, to heat it up in the morning when it's cold outside. All the things that are really required when you have a car. So, ConnectedDrive has been out there for quite some time. The system itself started 20 years ago. It was designed for a few cars, because all the digital services were not that important back then. Today, they are important. And on these systems, which by the way consist of 300 microservices, we have about 12 million cars. Because we are thankfully selling 2.5 million cars every year, or IoT devices, very fun IoT devices to be honest, we're getting more and more cars onto our backend. And because digital services are so important for the company and for our customers, we also add new services to the existing fleet. And of course, the new cars also get these services. So, what we end up with is basically growth of the request rate of 30% on a yearly basis. You can imagine, if you have traditional IT, with all these microservices running on a shared infrastructure and all the processes behind it, and suddenly you get this immense increase of requests, you run into problems. And you need systems that can deal with this. For us, this basically means we have one billion requests per week to deal with. At some point, you just cannot tackle this with traditional IT. So, a few years back, in 2016, we started our journey with Red Hat and with OpenShift and decided to have our backend completely migrated to the OpenShift platform. So, we started with the connected car application.
We're migrating by first slicing our applications into microservices and putting them on our OpenShift platform. We have four clusters worldwide, all running on OpenShift. And by the end of this year, we want to be done with this migration. So, it took us, let's say, about two years of full-time migration, but it came with a big transformation of the culture of the company as well. Today, we're running 12,000 containers, and we don't have just the four connected car clusters, but 19 clusters worldwide. Because we need to be more scalable in the future, we're really looking at OpenShift Dedicated. I think this is a product that helps us in the future when we need to scale even more. We want to utilize the public cloud, because this gives us the scalability and the resilience we need. We can localize clusters in different markets if we need to, and therefore serve our customers on a worldwide scale in the best way. We really think that the public cloud in many cases is the future for us, especially when it comes to data. You've heard Chris talking about data, how data is increasing over time, and we see this as well. In order to give our customers the best personalized experience, we really need that data. We need the data from the processes, we need the data from the customer, in order to really tailor the services to our customers and make sure they get the best possible service from us. We're currently building our data lake up in the public cloud, and it makes sense to also have, with OpenShift Dedicated, some runtime for applications that can utilize this AI platform we're building up there. Because in the end, it's all about creating a good customer experience, creating new services and giving you basically the best experience possible. I'm very excited for the future and to see what comes. I hope you are as well. And maybe we see each other again in a few years, where we then explain how we use our AI platform to get you the services you need. Thank you.
Alright, thank you Dr. Lank. Now that's five back-to-back examples of how technological innovation is truly disrupting business and entire industries. The key ingredients of broad-scale human and economic change are transportation, how we move goods and people; communication, how we talk to each other; and how we manage power, money, time and resources. A set of key changes in key industries is what really unlocks the next huge phase of humanity. But what that means for us, you and me, is that we will have seen massive change in our lifetimes. Sometimes we see change in unexpected places. I travel a lot, and not too long ago I was in Vietnam. I was using Google Translate to speak with taxi drivers. I wasn't just asking them to get me from point A to point B. We were actually trying to have something like a real conversation, and it excited me then to see how much that kind of thing opens up the world. Think of how much more inline translation, like you saw with Google Translate, could open up. And the most delightful thing about that is innovation never stops. Not in the world, not at Red Hat. We continue to build the platforms that help make more innovation possible. And now we're going to show you what is possible today in the realm of event-driven AI and industrial IoT. I hope you've got your dance moves on, because you're going to need them. Welcome Red Hat Global Director of Developer Experience, Burr Sutter, with Sanjay Arora, Hiram Chirino, Geoffrey De Smet, Stuart Douglas and Paolo Patierno. We're about to kick it up several notches. Are you guys ready? Whoa, that's not good enough. Are you ready? All right, I know you're a bunch of IT professionals. We're going to talk about that in a second. But let me tell you a little story real quick. One thing I want to know though: did you enjoy the RHEL 8 and OpenShift and Kubernetes-native infrastructure demos yesterday? Well, now we're going to show you something a little bit different.
We have built an application in this case, and we're going to show you how we built that application, and then we're going to let you interact with that application. You're going to be my beta testers, because we're going to production here in just a few moments. So be ready with your phone. Now, there's one thing I know about the Summit audience. You're all some of the IT elite of the industry, meaning you know bits and bytes better than anybody, right? Even those arcane commands on the keyboard we talked about yesterday, that's what you know. But we actually live in a world of physical things, and as you saw a moment ago at BMW, right? We live in a world where things have to be connected, and when I say things, I mean things like you see in my simulated factory right here: we have large machines that might be paint processing machines, they might be metal sheet presses, they might be large industrial fans or conveyor belts. These things have to be monitored, and we have the ability now to monitor them at greater scale than ever before. But the hard part, the hard part is how do we process all that data? How do we receive that massive stream of data, make sense of that massive stream of data, and then do something about it in a real business-positive kind of way. So pay attention to that, because you're going to see this dashboard a lot more, alright? But just remember, physical things have certain physical properties, especially in the large industrial Internet of Things scenario you see here, and one of the key things to understand is vibration, okay? Vibration is a leading indicator of machine failure. That is how you know if it's going to break before it actually does break, because when it breaks it could cost you millions of dollars in machine outage, right?
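The idea of vibration as a leading indicator can be sketched with a toy detector (this is an illustration, not the demo's actual analytics): track a rolling RMS of the vibration samples and alert once sustained shaking crosses a threshold.

```python
import math
from collections import deque

# Toy vibration monitor: alert when the rolling RMS of samples crosses a
# threshold, since sustained elevated vibration precedes machine failure.
class VibrationMonitor:
    def __init__(self, window: int = 8, threshold: float = 1.5):
        self.samples = deque(maxlen=window)  # keep only the latest window
        self.threshold = threshold

    def add(self, sample: float) -> bool:
        """Record one accelerometer sample; return True if RMS exceeds the threshold."""
        self.samples.append(sample)
        rms = math.sqrt(sum(s * s for s in self.samples) / len(self.samples))
        return rms > self.threshold

monitor = VibrationMonitor()
readings = [0.1, 0.2, 0.1, 2.4, 2.6, 2.5, 2.7, 2.8]
alerts = [monitor.add(r) for r in readings]
print(alerts)  # quiet at first, alerting once the shaking is sustained
```

A single odd sample doesn't trip the alert; only a run of strong vibration does, which is the "not all shaking is created equal" point made later in the demo.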
If you have a failure in this processing pipeline, you might have a situation where your customer orders are getting backed up, and of course those machines have to go into maintenance, a mechanic has to get routed to go solve the problem. And more importantly, let me just say it's easy for all you folks here in IT, because I'm an IT person too. Remember that moment in time when you had an old car? Some of you may still have an old car, you appreciate old cars. But you know that if you feel a vibration that seems a little bit odd, you probably take it into the mechanic. And the rhyme I like to make is: when things go shaky, things go breaky. And that's the theme of what we're going to be showing you right now. Okay? So let's jump into the architecture diagram. I want to show you that. And so you can see right here, you have a ton of sensors, and that's the smartphone in your pocket right now. Pull it out of your pocket, you're going to need it in a moment. Be ready. By the way, I should mention we have special prizes for people here in the room. You have to be present to win. So we're going to basically monitor those sensors in real time. We have to receive a massive influx of data from your phone, and we're going to do that through a Node.js application. Then we handle all that data with Apache Kafka, because you need something that can handle that at scale. And you're going to see that with Red Hat AMQ. We of course have to filter the signal from the noise. We have to know that not all shaking is created equal. Some vibrations don't matter at all, it's everyday vibration. Some vibrations really, really matter. And we want to make sure we dial in on that through this technology.
We want to make sure that our customer orders which are in flight, which may be impacted by a machine outage, are at least part of the algorithm we use to understand whether we actually prioritize this event, to make sure that we solve that problem so we don't have customer satisfaction issues or loss of revenue. That's a key element. Also, if you look at the Quarkus element right there, you're going to see us live code that here on the stage. We're going to build a new REST API endpoint. You're going to see that on the screen. This is where Red Hat integration comes into play. We're going to bring in that Quarkus technology with that new API, salesforce.com, and our event stream input. We're going to run all that on top of Knative with its dynamic auto-scaling. You're going to see that happen live on the stage also, and you're going to see us build that application integration component. Then, of course, we analyze that data. We get the damage record, we understand all the information, and we basically determine what is the most efficient route to move mechanics from point A to point B. Of course, you see that all on that dashboard that you saw earlier. This is where we've worked incredibly hard to unify and create a coherent application environment, one underlying set of application middleware, and you can run all that on top of OpenShift. That's what we're working on. I want to take you through AMQ first. So we have here on stage Paolo. Paolo is my AMQ Streams expert. He's going to show us how we set that up on OpenShift. Paolo! Thank you, Burr. Let me show you how we solved the first challenge. First of all, our Internet of Things backend needs to be highly scalable, because thousands of sensors, like the ones in your pockets right now, can easily generate a massive stream of data. This is what Kafka is built to handle. But we also want Kafka to be really easy to deploy and manage.
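One simple way to picture that order-aware prioritization is a scoring rule that weighs cost of failure against machine health (a hypothetical rule for illustration, not the demo's actual algorithm):

```python
# Hypothetical repair prioritization: higher cost of failure (orders in
# flight) and lower machine health produce a higher priority score.
def repair_priority(cost_of_failure: float, health_pct: float) -> float:
    """Score = cost weighted by how damaged the machine is (0..100% health)."""
    return cost_of_failure * (1.0 - health_pct / 100.0)

events = [
    {"machine": "A", "cost": 50_000, "health": 90},  # healthy, costly orders
    {"machine": "B", "cost": 20_000, "health": 15},  # nearly broken, cheap
    {"machine": "C", "cost": 80_000, "health": 60},  # damaged and costly
]
ranked = sorted(events,
                key=lambda e: repair_priority(e["cost"], e["health"]),
                reverse=True)
print([e["machine"] for e in ranked])  # ['C', 'B', 'A']
```

The point of the sketch is only the shape of the decision: revenue at risk and closeness to failure both feed the mechanic-routing queue.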
So here we have OpenShift in order to handle our containerized infrastructure. And there is a new software operator, which is part of the AMQ Streams product, to handle our Kafka cluster running on OpenShift right now. So the AMQ Streams operator provides you a really simple way to deploy and manage a Kafka cluster running on OpenShift, with all the related entities, like for example topics and users, in a cloud-native way. Okay, so we saw software operators just yesterday in a big way, and I love the fact that we now have this software operator to run Kafka at great scale, the brokers, the ZooKeepers, all that. So that's great for operations. But one of the things I hear from developers all the time with this Kafka thing is: it takes me weeks to get my IT department to provision a new topic. And that's a huge challenge. I don't wanna wait as a developer. So how do we solve that problem, Paolo? Yes, absolutely. And we have heard that same story. So I'm excited to show you a new self-service console. Using this console, the users can see all the topics already provisioned in the Kafka cluster running on OpenShift, with all the related metadata, like partitions, replicas, and consumers. But the users with the right permissions can easily create a new topic. So just fill in a name, let me say sensor stream, and then, for example, set the number of partitions, the number of replicas, the data retention policies, which can be based on age and size or compacted. And then, if needed, add some more advanced configuration, and then, just clicking on a button, the topic is provisioned in a few seconds. Oh, wow. So right now we just provisioned a new Kafka topic with a few points and clicks. Can you imagine that self-service empowering your developers at some point in the near future? Thank you so much for that, Paolo. That is fantastic. My pleasure, Burr.
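Behind a console like that, the AMQ Streams operator reconciles declarative KafkaTopic custom resources. A sketch of that resource's shape, following the upstream Strimzi project AMQ Streams is based on (treat the exact apiVersion and cluster label as assumptions for your installed version):

```python
# Sketch of a Strimzi-style KafkaTopic custom resource, built as a dict.
# The operator watches these resources and reconciles the Kafka cluster
# to match, which is what makes the self-service flow possible.
def kafka_topic_resource(name: str, partitions: int, replicas: int,
                         retention_ms: int) -> dict:
    return {
        "apiVersion": "kafka.strimzi.io/v1beta1",  # version may differ
        "kind": "KafkaTopic",
        "metadata": {
            "name": name,
            # Label telling the operator which Kafka cluster owns this topic.
            "labels": {"strimzi.io/cluster": "my-cluster"},
        },
        "spec": {
            "partitions": partitions,
            "replicas": replicas,
            "config": {"retention.ms": str(retention_ms)},
        },
    }

topic = kafka_topic_resource("sensor-stream", partitions=10, replicas=3,
                             retention_ms=7 * 24 * 3600 * 1000)  # one week
print(topic["spec"])
```

Serialized to YAML and applied with `oc apply`, a resource like this is the declarative equivalent of the button click in the console demo.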
All right, so up next, I mentioned that we have more of our backend to talk about, right? We have some really cool things to show you there. And now we wanna talk about a technology called Quarkus, where again we've changed the game for Java developers, and you're gonna see exactly why right now. So right now we have Stuart on the stage, he comes from way down under, and he's gonna walk us through what Quarkus is. Can you tell us about Quarkus? Quarkus is our supersonic, subatomic Java, where we've optimized Java for a Kubernetes and OpenShift environment, with all your favorite frameworks and APIs, plus we can compile it down to a native executable. Wait, you said native executable. Well, tell me more about that. Well, there's no JVM involved, it's just a normal native Linux executable. Okay, okay, now Quarkus is a really funny name, wouldn't you guys agree? And I do like that accent. Can you tell me where the name comes from? Well, the quark is a reference to the subatomic particle, because it's very small and light, and the 'us' refers to the heart of software development, us humans. Okay, but it's smaller. Incredibly. And faster? Dramatically. And it's not some funny Australian exotic animal. No. So seeing is believing, so I'm gonna show you what Quarkus can do by finishing off a half-complete Quarkus app live on stage. So what we have here is the machine maintenance API of our application, which is a JAX-RS and JPA-based REST app. It has two endpoints at the moment, one that lists the machines and another one that shows the maintenance history. Now, at the moment, this isn't that useful, because even though it tells us how much damage the mechanic repaired, it doesn't tell us what the final health of the machine was, so let's fix that now. So as you can see from the URL here, this isn't actually running on my laptop, this is running on an OpenShift cluster, so I'm gonna do some cloud-native development.
Now, the first thing I need to do is connect my laptop to the cloud by running the remote dev command, and now we're connected. Now, the maintenance history is stored in the maintenance record JPA entity, so I'm gonna add a new field here to store the health. Now I'm gonna go to my JAX-RS resource and I'm gonna expose that new field to the application. Now, if I just go back to my browser and hit refresh, we should see the change. So now I've got this final health field there. I love that edit-save-refresh development model, that's fantastic. Yeah, and this is running on the cloud too, in an OpenShift pod, this is true cloud-native development. Awesome. So now we've got this information, what I'm most interested in is the current health of the machine. So let's add a new endpoint to do that. And to do that, I'm gonna use something called Apicurio Studio, which is an API designer and part of Red Hat integration. Now, we can see that I've got two paths there, I'm gonna add a new one. Now I'm gonna add a path parameter, which is of type integer. Now I'm gonna add a get operation. Now this operation can have an ID called current state, which will translate into the Java method name. Now I'm gonna set the content type, which will be JSON, and add a response type, which will be machine state. Now I've designed my API, I need to get it into our application, and to do that, I'm gonna use the code generator. So I'll just click through this wizard here, and the end result of this will be a pull request on GitHub. It just takes a few moments to generate. So once this has been generated, I can review it as normal, and see what changes it's made. You can see my new method there, that all looks good. So I'm just gonna be a bit naughty and merge my own pull request, and then sync it with my local workspace. You can see here, we've got this new method that I just designed. So now let's implement it.
Now, I've actually got a method here, calculateState, that has most of the logic we already need, so I'm just gonna call that. And I'll update this to include my new field, go back to my browser, to my new endpoint, and we should see it. Crikey. Oh wow, I've got a query exception. Sorry, two secs. Oh, I've got a typo. Yeah. Whoops. Rehearsing backstage. All right, we'll just pretend that didn't happen. So you can see here I've got my new API with the current health of the machine. Now, because I've been doing development, I've been working on the JVM with standard Java libraries, but we can also compile this down to a native executable. So if you have a look here, here's one I prepared earlier, that's running on OpenShift using only 20 megabytes of RAM. And this isn't a hello world app either. This has JAX-RS, JPA, transactions, all the stuff you need to write an enterprise app. And because it's a native executable, it will also start really quickly. So if we see here, I can just scale this up to 20 pods. If we have a look, we can see these pods coming up now. This is a full-stack enterprise Java application deploying right now into OpenShift, scaling out, and you can see they're already running. So that is fundamentally game-changing. And this really matters a lot. We're building an application here that has to dynamically scale based on an influx of sensor data. So you have to think about that for a moment. We're gonna basically pound this system to death. And when you have live scaling like that, with a super small, super fast runtime, it completely changes the way you think of Java, for sure. Thank you so much for that, Stuart. Totally awesome stuff, live coding in front of several thousand people. Okay, we got more to show you though, all right? So up next is Hiram. And what we wanna do now is talk about how we integrate these technologies.
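The calculateState logic itself isn't shown on screen; a hypothetical sketch of how a "current health" value might be derived from maintenance data gives the flavor (the function name, inputs, and formula here are made up for illustration):

```python
# Hypothetical health calculation: start from the health recorded at the
# last repair and subtract the damage events accumulated since, never
# going below zero.
def current_health(last_repair_health: float,
                   damage_since_repair: list) -> float:
    """Return the machine's current health percentage (0..100)."""
    return max(0.0, last_repair_health - sum(damage_since_repair))

print(current_health(100.0, [10.0, 5.5, 2.5]))  # 82.0
print(current_health(40.0, [25.0, 30.0]))       # 0.0 -- floored, not negative
```

Whatever the real implementation looks like, the endpoint's contract is the same: given a machine ID, return a single up-to-date health figure the integration pipeline can map into the repair record.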
How do we take an API like you just saw Stuart create, as well as salesforce.com, but also take that influx of sensor data, and bring all those worlds together so we can basically understand better what our damage looks like? What are our repairs? How should our repairs be prioritized? So let me introduce you to Hiram. He's a key part of the engineering team, part of the Red Hat integration team specifically. And he's also our Apache Camel guru here on stage. So Hiram, please show us more about what you have there. I'm gonna demo to you guys how we can easily add Stuart's new API endpoint into an integration pipeline. This pipeline's job is gonna be to combine data from different systems so that we can schedule machine repairs. I'll be doing this with Fuse Online, which is a part of Red Hat integration. Let me select the integration that we're gonna be updating here. Okay, so this first step here is connected to the Kafka topic that was created earlier. It's receiving all those machine sensor events. We then need to understand how customer orders are linked to machines. So we're gonna query our corporate asset management system and Salesforce, and that's gonna let us know the cost of failure for the machine reporting the event. We then combine all that data and map it into a repair record, which we then send to an API which deals with scheduling those machine repairs. What we'd like to do is improve this by prioritizing based on which machine is closest to failure. So let me edit this integration. And we can use Stuart's new API endpoint to get the current machine health and include that in the repair record. I'll add a call to his API here, and let me scroll down and select his history API. Here's his new endpoint. All we need to do now is add a data mapping step, which lets us configure the input into his API. And let me map machine ID into the ID parameter. Finally, let's also update this last data mapping step.
It's combining information from the original sensor event and Salesforce and mapping it into that repair record. So let's also map in the health fields from the API response into the repair record, and now we're done and we can publish. In a few seconds, this is gonna be running as a Knative service, thanks to a new upstream Apache project called Camel K. Okay, Burr, we're published. We just need a whole bunch of sensor events sent into that Kafka topic. All right, so it's already deployed out into OpenShift, right? OpenShift 4. That's right. Okay, and so we wanna also see our Kafka Grafana dashboard, because we need a big stream of events flowing in, right? Let's make sure we got that. All right, fantastic. You can see there's no messages right now if you look over here. So what we need is a huge influx of data. All right, so just to give you an example of what this looks like, I happen to have a ton of sensors right here in this smartphone right now. Should I give it a try? Are you ready? My console here shows the deployment. It's got zero pods running. As soon as you shake that phone, that starts scaling up. All right, so this is where you see Knative auto-scaling responding to a series of events. So I'm gonna get shaking right now. So here we go, pushing in accelerometer data, great volume, into this application. Let's see what happens here. Okay, we've already spiked up to 334 messages per second on that side, and you can see 100 pods coming online right now. There we go, 16 available, 19, 21. That dynamic auto-scaling of Knative, happening right there right now. All right, fantastic, look at that. What do you guys think of that? You did a great job, Burr. You shook that phone, generated a ton of Kafka events that caused Knative to scale our integrations up from zero. And did you notice how quickly those pods started up? That's because Camel K is also running those integrations using Quarkus. Camel K on Quarkus, dynamic auto-scaling out with Knative, serving 101 pods now available.
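The scale-from-zero behavior in that demo can be pictured with a simplified model of request-driven autoscaling (the real Knative autoscaler averages over windows and has a panic mode; this only captures the core idea):

```python
import math

# Simplified request-driven autoscaling: desired replicas = incoming rate
# divided by what one pod can handle, scaling to zero when traffic stops.
def desired_pods(msgs_per_sec: float, per_pod_capacity: float = 10.0,
                 max_pods: int = 200) -> int:
    if msgs_per_sec <= 0:
        return 0  # scale to zero when the stream goes quiet
    return min(max_pods, math.ceil(msgs_per_sec / per_pod_capacity))

print(desired_pods(0))     # 0 -- no shaking, no pods
print(desired_pods(334))   # 34 -- a spike like the one in the demo
print(desired_pods(5000))  # 200 -- capped at the configured maximum
```

The per-pod capacity and cap are made-up parameters; the demo's pod counts depend on its own configured concurrency targets. The reason Quarkus matters here is visible in the model: scale-from-zero only feels instant if each new pod starts in milliseconds.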
Look at that, fantastic. Okay, for those of you watching on the live stream right now, this is where you wanna tweet your friends, let them know some more cool things are about to happen, because we have even better things to show you. And I know all of you right now are anxious to be shaking your phone, aren't ya? You're gonna get a moment. We're gonna have that opportunity. But right now, we wanna tell you a little bit more about our application. So you see now how we architected the backend. Let's talk a little bit more about the things you see on the front end of this right now. And we wanna talk about artificial intelligence, specifically machine learning. And we have here with us on stage Sanjay, from the office of the CTO, where they worked really hard to train a very special model to help analyze certain vibrations and understand what that shaking needs to look like. So Sanjay, can you tell us more about the role of the data scientist and machine learning on top of OpenShift? Absolutely. So analyzing data like this for patterns is a perfect fit for some of the technologies we have been working on. One of those technologies is called Open Data Hub, and you can check it out at opendatahub.io. Now, the workflow of the typical data scientist involves data curation, exploratory analysis, model training, validation and serving. OpenShift and Open Data Hub let a data scientist do all of these using their favorite open source tools, and at scale. So let's see how one would get started with Open Data Hub. You can go to your OpenShift console, select the developer catalog on the left, search for Open Data Hub, and install it right there. This gives us access to a Ceph instance to store our sensor data, as well as Jupyter notebooks. These notebooks are the main interface between data scientists and their data, and they serve as the arena where all data modeling takes place. So let's take a look at a notebook. This is what a typical notebook looks like.
You can write your regular Python code, and in this case, we plotted the sensor time series data. OK, so I know one of the greatest challenges with getting data science and AI/ML integrated into your overall cloud native development world is capturing data, training a model, performing iterations on that trained model, and coming up with your hypothesis. So how do we train our model for this application? Yeah, so we had to get a bit creative there. This is a simulation. We obviously don't have real machine parts here. So we want our audience members, all of you, to serve as proxies for the machines. Now, it is very hard to tell someone to move like a broken machine versus a healthy machine. So we picked a few specific moves that represent how machines with various levels of damage would vibrate. We spent the last few weeks training our models around these movements. And the goal for all of you is to emulate them as well as you can. But let's first take a look at how we trained the model itself. The first step was to collect the data, and some of our colleagues wrote this great app that lets us do just that. So as you can see on the screen, we have a volunteer showing the shake motion. And a user would click on train model, execute the motion with the phone in their hands, and submit the data, which we would then use as our training data. Some very enthusiastic Red Hatters, including Burr, generated a ton of data for us in our Raleigh office. Let's take a look. Those are our models, helping to train a model. And at this stage, we have training data, so the data scientists can get to work. We would go back to our notebook, load the data, refine it, and train our model. And once we are satisfied with our model's performance, we persist it and create a REST endpoint. This REST endpoint will then evaluate your moves. So Burr, we have a few thousand people in this room. We should get them involved. Yeah, yeah. Well, I think this is where we reveal what our little application does.
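The pipeline Sanjay describes, raw vibration data in, a move classification out of a served model, can be sketched in miniature. The real notebook presumably used the scientific Python stack and a properly trained classifier; this stdlib-only toy (threshold value and feature names invented) only illustrates the shape of the workflow: reduce a window of readings to features, then classify:

```python
import statistics


def extract_features(samples):
    """Reduce a window of accelerometer magnitudes to simple features.

    A real notebook would reach for NumPy/pandas here; this sketch just
    shows the idea of turning raw vibration into model inputs.
    """
    return {
        "mean": statistics.fmean(samples),
        "stdev": statistics.pstdev(samples),
        "peak": max(samples),
    }


def classify(features, shake_threshold=2.0):
    """Toy stand-in for the trained model behind the REST endpoint:
    large swings in the signal read as 'shake', a flat signal as 'idle'.
    The threshold is invented, not taken from the demo."""
    return "shake" if features["stdev"] > shake_threshold else "idle"


idle = [9.8, 9.79, 9.81, 9.8, 9.82]          # phone at rest (~1 g)
shaking = [2.0, 18.5, 1.2, 17.9, 3.4, 19.1]  # rapid magnitude swings

print(classify(extract_features(idle)))      # -> idle
print(classify(extract_features(shaking)))   # -> shake
```

In the demo, this classification step lived behind a REST endpoint, so the game backend could POST a burst of sensor readings and get back which move the player made and how well they made it.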
And we get you guys a chance to connect to our back end. You guys ready for it? All right. You got to go to red.ht slash demo2019, red.ht slash demo2019, on your phone. You will see that the game will begin shortly because we have it paused right now. Are you ready? Okay, let's turn it on. Let's see that first one. You'll notice the shake there. Sanjay will click on the shake icon. And all you got to do is do it like you saw from Kyle Buchanan when we were training our model. Okay? And you'll notice you'll score points. You'll see the points show up there. And then there we go. There it is, Sanjay got the shake. All right? It's that simple. Wow, we have a whole data science team to worry about that. Exactly. This is how we do it here. Okay? But actually, there's a couple more fun things we added to the application. So if we now could push out and dynamically add the X and the circle. There we go. So there's your circle. All right? Come on, get that phone moving there. I don't see enough waving. Oh, now we got 100. I see about 1,000 of you waving now. All right? And now the X is kind of tricky. Got to go kind of slow. All right? If you're aggressive about it, it won't pick it up. Fantastic. Okay, so we have tons of data flowing into the system right now. What do you think about that? Yeah, this is great. And I would like to clarify one point. So each of you is associated with one machine. So in my case, I'm associated with machine E, which is in the lower left corner of my app. And so the machine learning model actually thinks you are the machines. And when you move, it thinks the machines are vibrating. It then evaluates what move you made and how good it was, and updates the damage on that machine. Yeah, you can see with machine E right there, it's actually orange because the sensors are now picking up damage associated with it. All right, now, I know you guys are having fun, but I need everyone to stop.
Check that out. We took control of your phone. All right, be ready. There's more coming. Now, we actually have to show you more of our backend architecture here because there's one more thing that is incredibly important to understand. And when you see all our machines up here on the big screen, you have to understand that we have to route mechanics in an optimal way. Just like you take your old car to the mechanic when it starts shaking, we've got to make sure that our mechanics move to these big industrial machines and start fixing them. So what we have here right now is Jeffrey, who's part of our Red Hat Business Automation team, Process Automation team, and he's going to talk to us about OptaPlanner. Thanks, Burr. So, routing repairmen might seem simple. After all, we have a prioritized list of repairs, but it's actually quite complex. We have a limited number of mechanics. We need to decide for each mechanic which machine he will fix, and there's a traveling time between all of the machines, so we need to do that efficiently. And there's also a heck of a lot of audience members here detecting machine damage. So, we need to do that as efficiently as possible, and we're going to use the Business Automation products, and specifically my baby, OptaPlanner, to do this. Okay, so how does OptaPlanner help us with a problem like this one, though? Well, OptaPlanner is an AI constraint solver. It optimizes planning problems by using advanced algorithms. So for example, it can solve the vehicle routing problem, in which we need to send a fleet of vehicles to a number of locations across the country. And when we do that, we want to decrease their travel time and we want to decrease their fuel consumption. And with OptaPlanner, we can reduce their travel time and fuel consumption by 15% or more. Obviously, saving fuel consumption is good for the environment, and it saves some of our customers hundreds of millions of dollars per year.
Hundreds of millions of dollars per year, just by more efficient routing. That is fantastic. Now, how does it make an impact on our mechanics? It's basically the same problem. Instead of trucks, we have mechanics, and instead of locations to go to, we have machines to go to. So it's the same problem. We want to reduce their travel time, so the mechanics spend less time walking around and more time actually fixing the machines. This productivity boost allows us to cope, allows us to keep all the machines afloat while you guys send in machine damage. So in the scenario we've set up here, you can see all the machines are damaged. The three machines in the middle are heavily damaged. We have a red machine and two orange machines, but even the green machines have some damage. So there is actually a reason to send a mechanic to all of these machines. Now, which machine should we fix first, and in which order should we fix these machines? Now, you might think, let's just send the mechanic to machine D because that's the most damaged machine, but that means that he will need to head back to machine H and then to machine E, and that's actually not efficient, because he will spend a lot of time traveling between those machines. On the other hand, we could find the shortest path, which in the academic world is known as the traveling salesman problem, but that's not going to be super efficient either, because then we might lose some of these machines. So we need to do something in between. So I'm going to add a mechanic, and I call this one Mario, and you can see the order in which Mario will fix the machines. Let me just zoom in for you. So here you go. So Mario will first go to machine H, then machine D, and then machine E, and you see, as he goes to machine D, he fixes machine H along the way because that's just more efficient. I see it, one, two, three, and you can see Mario standing right there by the break room.
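The trade-off Jeffrey describes, neither "most damaged first" nor pure shortest path, but something in between, can be illustrated with a toy heuristic. To be clear, OptaPlanner does not work this way: it searches the solution space with metaheuristics under explicit constraints. This greedy sketch, with invented machine positions and damage levels, only demonstrates the idea of urgency discounted by travel distance:

```python
import math

# Hypothetical layout: (name, x, y, damage 0-100).
# Positions and damage values are invented for illustration.
machines = [
    ("D", 5.0, 5.0, 90),
    ("H", 2.0, 4.0, 60),
    ("E", 1.0, 1.0, 55),
    ("A", 9.0, 1.0, 15),
]


def route(mechanic_pos, machines):
    """Toy greedy routing: repeatedly visit the machine with the best
    damage-per-travel score. A real solver like OptaPlanner explores
    many candidate plans instead of committing greedily."""
    pos, todo, order = mechanic_pos, list(machines), []
    while todo:
        def score(m):
            dist = math.dist(pos, (m[1], m[2]))
            return m[3] / (1.0 + dist)  # urgency discounted by travel
        best = max(todo, key=score)
        todo.remove(best)
        order.append(best[0])
        pos = (best[1], best[2])
    return order


print(route((0.0, 3.0), machines))
```

On this made-up layout the heuristic happens to visit H, then D, then E, echoing the route narrated on stage: the nearby, fairly damaged H is worth fixing on the way to the critical D, rather than backtracking.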
So what happens, though, if one of our machines really sees a lot of damage, starts deteriorating rapidly, you know, really falling apart? Yeah, if all of you guys, for example, focus on machine C, it gets a lot of pressure. So let me simulate that. I'm going to put some pressure on machine C, and you can see that OptaPlanner immediately changes its mind. That is fantastic. So it still maintains the optimal route based on what it sees right now, in real time. Yes. So let's get back to our factory floor. Okay. Let me let them go. Well, I think we should let the mechanic loose, see what it looks like here. Yeah, here we go. So as you can see, the mechanic starts fixing the machines in the order that OptaPlanner optimizes. Well, I think at this point, when the mechanic's in play, we're ready to start playing the game again. Hold on there just a second. There's thousands of sensors here in the room. Actually, 1,030. Oh, 30. Yeah. So that's a bit much for just one mechanic. So let me bring in a second one. Okay. I call this one Luigi. The green one there. All right, fantastic. So as you guys know, right, we basically have this application running right now. There's 1,000 of you connected to the game or more. You can go to that same URL. You see it at the top of the screen right now and get connected to our game, because there are special prizes for the top 10 winners. I mentioned that earlier. Maybe you weren't paying attention, but now you can join the game. Also, I'll tell you one other trick. If you actually score all six of the things you're about to see, you get 1,000 bonus points. Remember that one. That was for people who really want to be hardcore about this. Okay. Now, we need to get this game started one more time. So go ahead and turn it back on for me. Let's let people play a moment. All right. You want to make sure that you pick that shake motion and get that one nailed, or you want to pick that circular motion and make sure you nail that one too.
These are already... look, there's like 1,000 phones waving at me right now. That's awesome. Okay. And of course, you can see Sanjay's playing the game there. He's knocking down some points and... wow. Keen Sting. Look at that one, way up there on the top of the leaderboard there. Oh yeah, fantastic. Okay. Stop one more time. Okay. You guys like that one, don't you? All right, we have the power up here. Now, we wanted to challenge our AI team. We're like, okay, circles and Xs. That was easy. Training that model wasn't that hard at all, actually, you know, separating different types of vibrations. So we want to take it up a notch. So I challenged the team with a couple of different things. If you remember that famous movie from 1977, it's called Saturday Night Fever, with John Travolta rocking the disco stage. This is my era, people. That's what we're talking about here. So we actually have added a couple other things. But right now I have a couple other people who are going to come up on stage and help us do this right now. So get on up here. You know, we had some volunteers who want to ensure that we make you guys understand how these motions are created. We're going to get everyone down here. Fantastic. Okay. All right. Now, we're ready. Let's unpause the game and show them the roll motion. There it is. And this is that famous John Travolta move. That's the fever move. Just one more time. Again, we have the power. You guys are looking good up here. That's fantastic. Now, the younger people on the team were like, Burr, that '70s stuff is cool and all that. I know about it because of Fortnite. You guys with me? Your kids playing Fortnite, they learn those same moves. I learned about them more in real life, but that's a different issue. They're like, well, we need to train the AI model to do one more thing. You guys ready for it? We want to show you the floss. Let's add it in there and get going. Here we go. Win this game.
You've got to score those points in these first seconds. Look at it. We've got over a thousand people playing the game live with us right now. We had over 12,000 transactions go through our system, scaling out on that backend you saw earlier, with all that infrastructure we put together. And of course, we had 9,500 recognized motions. That is absolutely incredible. And if you remember, all that Red Hat middleware that you just saw, we pulled together in a coherent, unified application environment running on top of OpenShift, where you guys can interact with it directly. Absolutely fun stuff. So I know you're super excited about what you just saw. You can come to our booth, the DevZone booth, at 1 p.m., and you can actually get a behind-the-scenes tour of all the other cool things that went in here. There's actually a lot more cool things you didn't even see today. So just keep that in mind. All right, just 1 p.m., DevZone. Now, I recognize we're IT professionals here. We don't all dance. I'm out of breath because of all that movement. All right, but you do innovate and you... oh, by the way, I should mention, if you see your name on your phone up here, come see us at the end of the keynote. Don't rush the stage right now. We do have special prizes for our top 10 winners, okay? But you do innovate as IT professionals. And one of the things you wanna see right now is yet one more award given away, for the Innovator of the Year. So I'd like to welcome back Marco and Chris. Thank you. Please welcome back to the stage Chris Wright and Marco Bill-Peter. Thank you, Burr. Thank you. Wow, Chris, wanna do some flossing? Every year, it gets more difficult to pick the most outstanding and innovative projects from the pool of nominees. Well, okay, we look at the impact on business, the nature of the transformation, really elements of community and openness. We also look for projects that are unique and creative, you know, cool. Cool, yeah.
And the top five become our Innovation Award winners. You've heard from them throughout the week. All of these organizations are winners, though. Not because we picked them, but because they are showing us the way forward. This year's top five include BP, where they're developing things differently with OpenShift. Deutsche Bank is working to create everything as a service. And, for best supporting actor, Emirates NBD, using data to personalize the customer experience. HCA Healthcare, enabling machine-enhanced human intelligence. And Kohl's, they're all in on cloud and open source. And from these five winners, you choose the winner, the Innovator of the Year. Chris, give me the envelope. Dude, you're killing me. What? Seriously? Seriously? Guys, guys, forget a little something? Oh, oh, oh, thank you, thank you. Thank you. Wait, hold on. Someone's in charge now. Well, while you're here, Jim, why don't you do the honors? Oh, it's good to be the CEO. All right, I'll do that. Did PwC certify this? Absolutely. All right. Oh, the winner, I'm very excited. I did not know literally until this moment. HCA Healthcare, saving lives. Truly an honor. Really inspiring. Thank you. I have to say, we spend so much of our time, you know, deep in technology because we're passionate about it, we love it. But having an opportunity to observe what that technology can do, to literally have life-changing impact, it's an extraordinary thing to see, and it really helps personalize everything that we all work so hard to do. So again, thrilled with all of our Innovation Award winners, and certainly HCA. It's just such a great story. So Summit has brought us so many new things: a new version of RHEL, a new version of OpenShift, a new logo, and even new ways of working. And so we are so excited to have you here to share them all with us. For those of you who've been with us for a decade or more now, we really appreciate you being here.
And for those of you who just started to join us, you know, welcome, it's great to have you. You know, I went to a number of receptions last night and several people said to me, God, you know, we're here for the last Red Hat Summit. Just want to be clear, we're not going anywhere. The party's definitely not over. We have a great party tonight and we will see you for Red Hat Summit next year in San Francisco. It's gonna be bigger and better than ever. Thank you so much for being here. See you tonight.