I just wanted to give quick introductions to us up on the stage here. Again, I'm David Cain. I work with partners inside of Red Hat on repeatable assets, solutions, and reference architectures for Red Hat's partners in the strategic and OEM space.

Hi, I'm Jose Palafox, and I'm the program manager for CNCF contributions coming out of Intel. We've got a team contributing pretty regularly to Kubernetes, and we're going to talk about some of that work today.

I'm Jeremy Eder from Red Hat, the performance and scale team lead for OpenShift and chief troublemaker.

All right, for the roughly 25 minutes we have today, we've broken the talk into four distinct units. First, I'm going to take you through a little bit of background and the "why" to set the stage. Jose is going to talk next about futures: where we're going, where we're seeing things in the community, and a bit about differentiated services paired with accelerators. Then Jeremy is going to talk through some of what's coming in OpenShift version 4, specifically in the networking space. And then we're going to bring it all home with things you can consume today, specifically deploying OpenShift on bare metal; I've got a couple of slides and an infographic to take us through that.

So, a little bit of background. I don't know if folks in this room have noticed, but over the past 10 or 11 months there have been a lot of announcements in the greater community about the bare metal cloud market. Historically, cloud providers have offered mostly virtualized instances, and what we're seeing now is most of them offering the ability to consume bare metal instances on their respective platforms. Two analyst reports, including one from Grand View Research, say that by 2025 this is expected to be a $26 billion market with an annual growth rate of about 40%, so it is really picking up speed. AWS, Oracle, and the IBM Cloud all have offerings where you can consume bare metal today.

But it's not just public clouds; it's private clouds too. OpenStack had its semiannual conference in Berlin just a month ago, and the user survey results are summarized on the bottom right: when OpenStack users were asked broadly which emerging technologies interest them, the top three were containers, bare metal, and hardware accelerators. And OpenStack is a platform you can install OpenShift and Kubernetes on. So there is a lot of interest in both public and private clouds.

Today, the majority of OpenShift and Kubernetes deployments we see are in virtual machines, whether that's VMs on top of OpenStack, public cloud platforms, or traditional virtualization like vSphere. The reasoning we've seen is that a lot of folks have standardized on these platforms and have come to rely on the automation and orchestration they provide. In some cases it's simply easier to get started deploying a Kubernetes or OpenShift environment on virtual machines than it is to find a stack of bare metal hardware. But what we've found is that interest in deploying Kubernetes and OpenShift on bare metal is definitely growing, driven by a couple of factors I want to take you through here.
Certainly in the keynote earlier, Chris Wright spent a slide on bare metal. One aspect is reducing the cost of VM sprawl, along with all the software, infrastructure, and expense that comes with it. But there are also a lot of emerging applications, a couple of examples being databases running directly on bare metal, artificial intelligence and machine learning, and applications that specifically benefit from targeted access to dedicated hardware devices. We call those workload accelerators, and in version 3.10 of OpenShift we shipped the device manager, along with NUMA awareness and the CPU manager. So work is landing upstream and in OpenShift to really start taking advantage of these specialized hardware devices. Accelerators are a big thing for us.

So when we talk about accelerators at Intel, we actually have a pretty broad portfolio of accelerated hardware products that complement our core Xeon offering. This is just a quick splash of some of them; there are probably more. The Altera acquisition brought with it a lot of FPGAs, and we've made a number of other acquisitions in the same space to expand our portfolio. When we think about how we use these accelerators in production, it's not as simple as racking them up and then magically they work; there's work we have to do behind the scenes to set that up. Where we're headed is adding these accelerators as schedulable resources inside Kubernetes and OpenShift, so we'll talk through the strategy for that. Do you want to jump us forward a little? You can just keep going.

If you think about what Intel's role is in Kubernetes, or at least what my team is working on inside Intel: we take some hardware accelerator or Xeon feature and we expose it to Kubernetes so that the scheduler is aware of it and able to schedule against that resource. Then we look at a workload, something like Redis, or maybe just a task like compression, decompression, or TLS offload, something we can use an accelerator for, and we make changes to the upstream project to take advantage of the accelerator. Once we have that done, we work with Red Hat on the Operator Framework, and our plan is to help accelerate the ecosystem around operators, which you heard about earlier today, so that there's a library of common utilities available for people to manage these workloads. The last step, I think, is to help write Open Service Brokers or some other way to expose this inside a PaaS. The technology pieces may shift a little, but the general story is: how do we take the accelerator feature, expose it to the scheduler, make sure it's consumable, and then make it a developer self-service ordering process for you? That's our roadmap.

To give you an idea of what that looks like, we've highlighted two examples that may be relevant to this community. If you think about our portfolio, we just introduced a new memory medium that we're able to offer, not instead of DDR4, but to complement DDR4 for in-memory databases. So we create this new product; great, what does that do for us? Now we have to make sure drivers for it are available in Linux, so we work with Red Hat and the RHEL team to make sure the kernel can support this new hardware product.
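To make the "expose it to the scheduler" idea concrete, here is a minimal sketch of how a workload asks for an accelerator that a device plugin has advertised as an extended resource. The resource name and image below are placeholders, not the names any particular Intel plugin actually registers:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: accelerated-workload
spec:
  containers:
  - name: app
    image: registry.example.com/accelerated-app:latest
    resources:
      limits:
        # Extended resource advertised by a device plugin; the real
        # resource name depends on the plugin (Intel's QAT and FPGA
        # plugins each publish their own vendor-prefixed names).
        vendor.example.com/accelerator: 1
```

The scheduler will only place this pod on a node whose device plugin has advertised at least one unit of that resource, which is what makes the accelerator a schedulable resource rather than something an admin has to track by hand.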
Then in Kubernetes, we write a CSI driver. CSI is an extension point in Kubernetes for storage, and the driver lets you use the persistent memory as a storage device. We published the driver, which we actually released today, so if you go to the Intel GitHub page you can find our new CSI driver for persistent memory. We then go and modify the upstream open source products that can use persistent memory; in this case I selected Redis as an in-memory database. Then we'll work with the operator community to standardize on an operator and write the Open Service Broker. So you can see we have a vertical stack of enabling activities going through the ecosystem.

Another example we brought in is with Istio and Envoy. In this case we wrote a driver for our QuickAssist Technology, which can do TLS offload. Then we make sure the upstream products can utilize QAT after exposing it to the scheduler, and we're building a community around making sure this use case can be taken advantage of both on-prem and in cloud contexts. These are just two examples. There is a current PR out for some of the changes we've made in Envoy; if you're interested in checking it out, we've added the link there. So if these are interesting use cases for you, please let us know. I'd love to talk to folks afterwards about what's important to you with accelerators, because I think this is an emerging space of thinking, at least in a cloud-native context. How we go forward is definitely something we're looking for feedback on, and hopefully you can help influence it with us.

OK, so, yeah, thanks, Dave. This morning we had Chris Wright on stage, and I just wanted to share this slide with you. I've got some arrows pointing to things that are kind of a feather in our cap, because if you're an engineer and you get one bullet on a keynote slide, that's some kind of achievement; we got two bullets on the slide. We're talking about bare metal here, and we're talking about performance-sensitive applications. Enabling those sorts of things is key to our customers, based on the feedback we hear, and the tea leaves are pretty easy to read: public cloud providers are adding accelerators of all kinds as differentiated offerings, such as TPUs, GPUs, and FPGAs, and bare metal servers are commonly available from cloud providers at this point.

OK, I don't know if you've seen this slide before, but it's actually mandatory that I show it. This is a project we've been leading in the upstream for a couple of years, along with Intel and other vendors, to bring performance-sensitive workloads to Kubernetes. The extent of my graphic design skills is on display here. The idea is that there's overlap between all of these high-performance workloads, and the initial work we should be doing as open source vendors and open source developers is horizontal: enable whatever workload you want to run to run on top of Kubernetes and OpenShift. Whether it's network function virtualization, which I'll talk mostly about today, or GPUs, which you just saw a presentation on and which was last year's land war, or HPC, all of these workloads have similar accelerator requirements and similar high-performance requirements. So there you go: coordinate and plumb these generically.
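As a hedged sketch of how a cluster might consume a persistent-memory CSI driver like the one Jose described above, a storage class plus claim could look like the following. The provisioner name is a placeholder, not necessarily what the driver on Intel's GitHub actually registers:

```yaml
# StorageClass backed by a (hypothetical) persistent-memory CSI driver
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pmem-storage
provisioner: pmem-csi.example.com   # placeholder driver name
---
# Claim that an in-memory database such as Redis could mount
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: pmem-storage
  resources:
    requests:
      storage: 8Gi
```

A Redis pod would then mount `redis-data` as its data directory, with the actual persistent-memory plumbing handled underneath by the CSI driver.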
Here's some market research we've done and some anecdotal feedback we've collected from customers over the last couple of years. This is kind of an eye chart, so I apologize, but I'm trying to make two points. One, there's plenty of overlap between the different market segments; as a team, that would be a good place to start because it's super high value. The second piece, the bottom graph, is that across all these public clouds a lot of these capabilities already exist or are on the roadmap for inclusion soon. This is a bare metal talk, so we'll stick to bare metal, but all of these cloud providers offer things like this, and I apologize if anyone here is from a cloud provider and I've got your cell incorrect; see me afterwards and we'll have the flogging. Okay, next slide please.

Here's the second land war for Kubernetes, from my perspective. It started about two years ago, and we'd had discussions maybe even longer than that in the upstream SIG Network community around multiple networks. Who here has one single interface on all of their computers and will never, ever need a second one? Let the record show there were zero hands. So that's pretty obvious, right? I came from a data center provider where every server had something like ten NICs in it: you get a storage network, you've got a management network, you've got out-of-band management, you've got prod, whatever, and each one is bonded, so there are eight interfaces minimum. It's pretty standard. In public cloud that's all abstracted away from you, but let's say you're building on bare metal, which is why we're here today.

So we talked about it upstream, and we couldn't really come to an agreement yet on whether Kubernetes needs to know about multiple networks. So the Intel team founded a project called Multus, which allows Kubernetes to call more than one CNI driver; that's its main function. Kubernetes right now can only call one; Multus can call more than one. With that base of understanding, the project was founded about two years ago. Fast-forwarding through the interim, the Intel team worked on Multus v1 and v2. And by the way, I think there are some Multus contributors in the room; Multus is available on Intel's GitHub, all open source and so forth, and we should be able to put you in touch with one of the developers if you have a specific feature request.

Last year at KubeCon in Texas, where it snowed, I don't know, I guess I just showed up there; it snowed in Raleigh too, and I just escaped the biggest storm of the century down there. We did a face-to-face between Intel and Red Hat because we felt we should align behind an open source project and help break the logjam the upstream community was stuck on. At that time we reached a gentlemen's agreement that we would work together on Multus upstream. At the same time, we didn't want to leave the upstream community out of the loop, so we founded a new working group: Dan Williams from Red Hat helped push forward the Network Plumbing Working Group, which meets every other Tuesday, I believe, at some ungodly hour. And through that community we have established some important beachheads in this space.
One: you heard a lot about custom resource definitions this morning. We have a v1 of an API spec for multiple network attachments that's been pushed into the Kubernetes SIGs GitHub org. That took a long time, but it was one of the major deliverables from the last 12 months. So now we have a de facto standard that vendors can rally around if they need to attach another network interface: here's the spec you write to, and it'll be portable, which is, I guess, the reason for any spec. Intel worked heavily with us on that. In fact, we had a shared Trello board between the teams and bi-weekly meetings, so there was a tremendous amount of engagement, not only in the upstream but also between Red Hat and Intel on the back end. That was most of the first half of this year. We finalized the spec and finally started putting it to work. We had a demo at ONS, the Open Networking Summit in Amsterdam, I believe, right, Dave? Yeah, Dave was there. That showed Multus running on Kubernetes. We actually have two other demos here: Multus running on OpenShift, which is a functional demo, and a performance test of SR-IOV on bare metal, and those are in the Intel and Red Hat booths. We have demos scheduled on Tuesday and Wednesday of both of those pieces of technology; I won't try to give exact times because I'll get them wrong. So if you're interested in multiple networks and you want to talk to the folks putting it together, our booth or Intel's booth is a great place to discuss it with the people who have the most skin in the game.

Okay, finally, what I can talk about now, as of OpenShift 4.0, since we announced it this morning: all of that work from the last year doesn't go in the waste bin. We're planning to include Multus 3.0, or let's just say 3.x, in OpenShift 4. We're going to create operators around it, and device plugins, obviously, along with Intel, and admission controllers, all containerized and so forth, and that's something you can see in the demo. Now, the user interface, let's hope it's not a mess. What we want to do is add Multus as a feature. This morning it was mentioned, I think by Mike Barrett, that every scrum team inside OpenShift has pivoted and started writing operators; ours are included in that, and we have glommed on to the SDN team's work to build a network operator for OpenShift. If you hit try.openshift.com today, you will get that operator, and we will be a feature in it. So you will be able to express additional configuration through these CRDs, and you will be able to plumb a fast data path into your pod (there's a small sketch of what that looks like just below). Two years later, we're finally nearing productization. The work's not done, I'll admit, there is more to do, but we can finally get fast data, and you'll see how fast it is if you stop by the Intel or Red Hat booths.

Okay, the last bit of the story is some other things we've been working on; whether it's the left half of my brain or the right half that's overdoing it, I don't know. The other side has been working in the upstream Resource Management Working Group, which was 2016's land war, and out of that we delivered CPU pinning and device plugins, as you know. We didn't deliver NUMA topology support yet because Derek has been busy, but maybe he'll get to that.
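Here is roughly what "expressing an additional network through these CRDs" looks like under the Network Plumbing Working Group spec that Multus consumes. The SR-IOV CNI configuration shown is simplified and illustrative rather than a tested configuration:

```yaml
# Secondary network defined via the Network Plumbing WG CRD
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-fast-path
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "sriov",
      "ipam": { "type": "host-local", "subnet": "192.168.10.0/24" }
    }'
---
apiVersion: v1
kind: Pod
metadata:
  name: nfv-workload
  annotations:
    # Multus reads this annotation and invokes the extra CNI plugin(s)
    # in addition to the cluster's default pod network.
    k8s.v1.cni.cncf.io/networks: sriov-fast-path
spec:
  containers:
  - name: app
    image: registry.example.com/nfv-app:latest   # placeholder image
```

The pod keeps its normal cluster network interface and additionally gets the fast-path interface described by the attachment definition.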
The other thing we're bringing into 4.0, which I've known has been a missing piece in Kubernetes since 2015, is some kind of hardware discovery agent. If you've bought brand new Intel chips and your developers have optimized their code for AVX-512, and you're running on a machine that only has AVX2, you're leaving performance on the table. The way I like to talk about performance is: whose developers like their apps to run slow? Who here likes to spend more than they need to? That's where performance helps. You will reduce costs, improve density, and you will keep your developers, and more importantly your customers, happy. So to optimize where applications run, we need some kind of hardware bootstrapping agent; we're not getting any patents on this stuff, it's pretty obvious if you ask me. Intel put together a project called Node Feature Discovery, which both we and some other vendors are rallying behind as a tool that discovers hardware and publishes it up to the scheduler as something you can route your workloads with. It currently uses labels; whether or not it continues to use labels, I don't know. But that will also be in OpenShift 4, and as you can guess, we have an operator for it.

So that's what SIG Node and the Resource Management Working Group, which we work on heavily with Intel, have delivered, and SIG Network has put together the Network Plumbing Working Group, which, by the way, you should participate in if any of this has rung a bell for you. We would be more than happy to get real users in there, playing with these demos and giving us feedback. The CRD spec is kind of an implementation detail; I wouldn't get too deep into it unless you're a really hardcore engineer, in which case it would be something useful to contribute back to. And then of course we've got Multus, delivering Multus 3.x in OpenShift 4.0. All of that together should hopefully telegraph the fact that Intel and Red Hat have been working very closely and will continue to work closely, including a hack day we've got tomorrow. Hopefully we'll come up with some other fancy operators and features to really drive home the fact that OpenShift 4.0 is a self-driving environment, even when you've got high-performance workloads.

So, when we first started this engagement with Red Hat, it was almost two years ago, I want to say; right when I started at Intel, yeah. Our first goal was just getting a base infrastructure available for test and dev use cases. We worked with Red Hat to define a reference architecture for OpenShift and then created a solution that would host a couple of nodes to get started with, so you could try OpenShift out and get it up quickly without having to do too much manual configuration. We wrote a fair amount of scripting that is all open source and part of our reference architecture, and then worked with a number of partners to deliver this solution, which Dave will talk a little more about. Where we see all of this going is: we have this base infrastructure solution that we released with a number of partners, and now, with the accelerators, we can begin adding specific variants of hardware nodes that fulfill a certain workload task, a sort of workload-specific node definition that we can move out through the ecosystem. Yeah, here was the original launch of the solution around KubeCon last year, right before that snowstorm in Austin.
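To make the Node Feature Discovery idea from a moment ago concrete, here is a sketch of a pod that asks to land only on AVX-512-capable nodes using an NFD-published label. The exact label key has changed across NFD releases, so treat the key below as illustrative rather than authoritative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: avx512-optimized-app
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          # Label published by Node Feature Discovery on nodes whose
          # CPUs report the AVX512F capability.
          - key: feature.node.kubernetes.io/cpu-cpuid.AVX512F
            operator: In
            values: ["true"]
  containers:
  - name: app
    image: registry.example.com/avx512-app:latest   # placeholder image
```

This is how the "route your workloads with labels" piece works in practice: the discovery agent labels the nodes, and the workload declares which hardware features it actually needs.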
This URL details the 24-page reference architecture, the associated automation in the form of Ansible playbooks available on the Intel GitHub page, and a two-page executive solution brief to accompany it. I wanted to walk you through the workflow, what the automation actually does, in the time we have left. The prerequisite is that the hardware is, of course, racked, stacked, cabled, and powered on; we can't change physics. Once you have the hardware racked, stacked, and cabled, you consume the reference architecture and the automation. You identify the management node, which we typically denote as either the Ansible installer or the bastion node; this is the workload-deployer node. You configure and provision it, put RHEL 7 on it, and, like I mentioned, those GitHub Ansible playbooks help you with the initial steps of configuring it as the bastion node; you subscribe it and you download and clone the relevant playbooks.

With the initial reference architecture we used Arista top-of-rack switches, so we're leveraging some Ansible code that's in Galaxy to provision and configure those top-of-rack switch modules: setting up high-availability features like multi-chassis link aggregation, LACP downstream to the individual server nodes, VLANs, Spanning Tree, all that fun stuff that we demonstrated in the lab and built into the playbooks, so that you just acquire the hardware, make it available and accessible to the management node, and off it goes.

After that, we provision Red Hat Enterprise Linux on the rest of the nodes that make up the cluster. With the top-of-rack switch modules configured, at this point we start two containers, both very lightweight. One is an iPXE deployer that just runs a DHCP and a PXE server, and the other is an Nginx web server that hosts Kickstart files provided as sample auxiliary material on the GitHub repo, so it's all contained as part of that repo and you make your customizations there. The containers start, we communicate through IPMI to wake those systems up, they're discovered by the deployer, and RHEL is deployed on them.

Similar to how we use openshift-ansible today to denote which nodes are masters, which are infrastructure, and which are application nodes, we modify our /etc/ansible/hosts file to accommodate the openshift-ansible playbooks (there's a sketch of that inventory below). That's the same tooling and methodology we use today to deploy OpenShift on any architecture, virtual or bare metal; we're consuming the same native tool sets and didn't invent or create any new tooling. So we launch the openshift-ansible playbooks, it takes about an hour, and then we have our OpenShift cluster built out: three master nodes, three infrastructure nodes, and five application nodes for high availability. Also, I forgot to mention, we're taking advantage of OpenShift Container Storage, so that's Gluster running containerized for persistent storage workloads, all wrapped up and ready to go after the automation runs.
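For reference, here is an abbreviated, hypothetical sketch of the /etc/ansible/hosts layout for the cluster shape described above (three masters, three infrastructure nodes, five application nodes), in the standard openshift-ansible inventory format. Host names and variables are placeholders, and the full variable set the reference architecture uses is much longer, so follow the published playbooks rather than this sketch:

```ini
# Abbreviated /etc/ansible/hosts sketch; names and variables are placeholders.
[OSEv3:children]
masters
etcd
nodes

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise

[masters]
master[1:3].example.com

[etcd]
master[1:3].example.com

[nodes]
master[1:3].example.com
infra[1:3].example.com
app[1:5].example.com
```

The per-host node labels or node group assignments that mark the infra and application nodes are omitted here because they differ between OpenShift 3.x releases.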
So, like I said, I work with Red Hat's partners, and I don't mean to sound too vendor-y up here on the stage, but I just want to commend and highlight some of the work we've done with our partners, specifically Lenovo, Cisco, and Dell EMC, and call out the URLs where you can download and consume this material today. Lenovo, one of our initial partners, worked with us on a version 3.5 of the architecture and has just refreshed it for OpenShift 3.9. Cisco worked with us on a Cisco Validated Design on the UCS platforms around the March-April timeframe of this year, and our partners at Dell EMC worked with us on a reference architecture on Dell PowerEdge hardware. I want to point out that some of those folks are in the room here with us today. Remember, these are validated designs; we do this in the lab on real hardware to give folks the confidence of knowing they are fully validated and vetted.

Yeah, and to complement the initial base configuration we've given in the reference architecture, where we're headed with this, as I was mentioning before, is creating specific hardware types through our Intel Select Solution program, which helps the OEM community pick up and productize workload-specific hardware configurations that will hopefully fit very cleanly into the initial install of OpenShift we've described in the reference architecture. So it's a way of landing the test/dev equipment first, getting comfortable with it, and then being able to expand to high-performance use cases afterwards.

All right, and I meant to do this when we started: anybody in the room running OpenShift on bare metal in production today who wants to raise their hand? Dev/test? Good. Well, I hope this gives you at least some examples or guidelines for the work we've been doing; hopefully that's helpful. One other call-out I wanted to make is the OpenShift Commons Slack channel. I always like to say we grow when we share, and a lot of the folks who worked jointly with us on the solutions I mentioned are on those channels, so we're happy to engage in further conversation after this, off the stage or on the interwebs.

One more shout-out for the booth. The persistent memory use case we talked about here will be shown at the Intel booth, as well as a close approximation of the Istio/Envoy story. We weren't able to land all of those changes into the demo we set up, but we did do it with HAProxy, so it's a very close approximation of what you'll see with Envoy. That's also running at our booth, and then, as the guys mentioned, we also have Multus at our booth and at the Red Hat booth. So if you're interested in any of this technology, please come and say hello; we'd love to have a chat with you. Thank you.