Hello. I'm going to talk about something we at Intel call software defined everything. I know this is an embedded conference, and software defined networking and software defined compute are historically data center things, but because this is an embedded conference I want to at least touch on a few points.

Software defined compute: we all know what that is. It's about virtualization, containers, isolation, performance, those kinds of things. It's about slicing a system into something dynamic, something you can run by the second, spin up and spin down. Software defined networking avoids having to run a wire somewhere: if you want to change your world, if you want to change an application, you don't have to call the network guy to run a wire from building A to building B. Software defined storage is the same concept: you turn something physical into something more fungible.

This has driven the data center; this has driven the cloud. As Jim talked about earlier with cloud native computing, it has enabled a whole new world. On top of this software defined infrastructure you have dynamic capacity and flexibility, and that allowed a new class of applications: machine learning at scale, Hadoop, Spark, many applications that are no longer a small little thing in a box but are built on infrastructure that scales up and scales down. That was cloud, and I could have put that same slide on the screen two years ago, five years ago.

Software defined everything is basically admitting that the software defined data center architecture is becoming the architecture of everything. It's a dominant design pattern that is going to dominate every single industry where software runs today. We know the cloud, but it's also taking over IoT. Your Nest thermostat is just an example of software defined everything: half of the application of
your thermostat is running in the cloud. The same goes for your light bulbs: if you think about it, half of what makes your light bulb smart is actually not in your house, it's somewhere else. And that's the thing everybody keeps talking about: edge. What is edge? Edge is the cloud, distributed, everything between your home and the cloud. All of these industries are taking over the pattern of software defined everything.

Industrial is an example of the next generation of that evolution. I was in China a few weeks ago, where manufacturing is fundamentally changing. Rather than building a factory for one specific product, they build a more generic factory that they can define with software at production time. Rather than retooling for four weeks to change products, they can retool in minutes to build a new device.

The hardest part, which we haven't conquered yet, is automotive. Jim talked about automotive Linux earlier, but a car is hard. If you buy a car today, or look at one outside here, a typical modern car has about a hundred microcontrollers controlling your brakes, your tire pressure, a hundred little CPUs doing things. Those CPUs run about eight operating systems, and there are about a hundred million lines of code in a car, from the dashboard to your brakes to the Android unit for the kids in the back. It's all complicated, and complicated systems that are fixed and rigid are the past. The car is becoming a data center, and when you go to a self-driving car it's even more obvious: you have a stunning amount of machine learning and a stunning amount of map data that all make that car a data center.

So what makes cars hard? Safety. If your Amazon machine stutters once in a while for 10 or 100 milliseconds and then it's back to business, you don't notice. If your braking
system has a 100-millisecond delay, you definitely notice. So the car has a set of constraints that are harder than the data center's, and as Jim mentioned earlier, functional safety is the next barrier for software defined everything. How do you do software defined everything in a world where functional safety matters? Safety, again, is about not dying when you push the brake pedal.

At Intel we look at this as an architecture where part of the system is functionally safe and part of the system runs more generic applications, and the two have to interact in a way that is still safe but gives you flexibility for the applications. That's the slide with a bunch of logos on it; I'll cover some of them a little later.

In this software defined world, one thing we're really realizing is that the old paradigm of "we add a feature over here, we add a feature over there" is breaking down. If you have a complex technology that touches many pieces of the software stack, you really have to build the whole stack, test it, and optimize it to make sure you don't miss a little piece and that it all works together. At Intel we care about performance, and when we add a new instruction to our CPUs, say AVX-512, it turns out the amount of software you have to touch to make it work all the way to the end user is stunning. Sure, there's a kernel patch, ten lines, that's easy. But there's a compiler change and a glibc change; it turns out you have to touch KVM, you have to touch Kubernetes and your whole OpenStack or whatever orchestration layer you put below or above it, all the frameworks in the middle, the math libraries, all the way up to the machine learning frameworks and even above that, just for one little feature. The only chance you have to get all of that working is to
actually build what we ended up calling a reference stack: build it, open source it (even though it's mostly configuration), show it, measure the performance, analyze it, and let the user verify and see what it is doing. Machine learning is the obvious candidate there, but we're also looking at database-as-a-service and all kinds of other use cases where this vertical stack integration is needed. It's hard, and that's good; like Jim said, I like hard problems, so that's okay.

At the base of these stacks is an operating system, the kernel, Linux. Plugging my hobby project, my passion project: Clear Linux is a distribution we've made at Intel for the last four or five years, where we really want to change a little bit how operating systems are built, but also have a place where we can innovate and make sure all the pieces work together very well, in order to be the foundation of those vertical use cases. If you look at any use case in the cloud, Linux is at the bottom; as Jim said earlier, Linux is everywhere. So it starts with a Linux layer, obviously optimized for the hardware. Functional safety is important here because it starts at the bottom: that foundation layer has to be able to be functionally safe, and as much as people in this room like to write software, functional safety is a lot about process and paperwork, so there's a lot of that in there. You need to be able to update, and you need to be able to write modern software using CI/CD; all of those things are part of how you build an operating system.

On top of that you have what you could call the isolation layer. Some people like VMs, some people like containers, some like a little bit of both. But for a lot of isolation layers people use virtual machines, especially in a world where you need to be
functionally safe. The Linux Foundation project called ACRN is a very lightweight hypervisor that is really trying to be simple and small, so that you can actually make safety claims about it. Some of that is paperwork, but some of it is showing that, no, there are no memory allocations that can fail and leave your brakes not working. You want to make sure everything is configured up front, so you know nothing is going to fail randomly on you, so that when you hit the brake pedal the car actually slows down. That is hard, and a simple hypervisor underneath lets you partition a physical system into a part where you can make those kinds of assumptions and claims, and a part where you don't need to, where you can run fancier software.

Some of it is also figuring out how to do graphics, because it turns out that by regulation the speedometer in your car is a safety critical thing: if your speedometer doesn't work anymore, you're not allowed to drive your car. So you have to have graphics that actually show the speedometer and guarantee that it is there, which means graphics have to be shared. You have to have real time. And you can't be big: small footprint is not an optimization statement, it's really a statement about whether we can do enough paperwork and analysis to prove that the brakes will work. With a million lines of code you're not going to get there; with 20 or 30 thousand lines of code you maybe have a chance to show that a code path cannot fail, because there are no constructs in the code that can fail. So that's ACRN.

The second layer up is about the trade-off between speed and security, and Jim talked about containers versus virtual machines a little bit. Historically, people consider virtual machines secure and containers fast, and sort of secure depending on your threat model. Can we have both security and speed? I've
talked many times before about a project we called Clear Containers, which showed that you can use a hypervisor as the backbone of your container infrastructure: you use the container infrastructure for deploying the software, but you get the security of a virtual machine for the isolation part. We've partnered with Hyper on a project called Kata Containers, which is really about building an industry base between a series of partner companies and community contributors, so that we have one container infrastructure based on virtualization. The end result is something that is light because it's containers, but at the same time secure, because the isolation is done using the virtualization hardware of your system.

Okay, so that's half of it. If you look at a traditional hypervisor setup, you have KVM in the kernel, QEMU on top, and then your guest OS running in the VM. It turns out that QEMU is kind of big. QEMU does many, many things, including emulating a floppy drive, a floppy drive controller, and the cable between the floppy drive controller and the floppy, and all of those things you don't need in a cloud setup. From a security perspective you don't want them either: we looked at it, and a typical hypervisor setup has about two to three hundred device emulation models running at any point in time. Every one of those device emulation models is code that runs on the hypervisor side that the guest can talk to, which by definition is a security exposure. So we started asking: do we need that? We don't need a floppy controller anymore; if I hire a new employee today, they don't even know what a floppy looked like. We started asking how much we could remove from the emulation layer of the hypervisor and still run all of Linux, and maybe some Windows, without losing anything. That's a project called NEMU, and we managed to reduce the code
footprint of an active QEMU by ten times: instead of two million lines of code, you have about 200,000 lines of code actually running. In reality that saves memory, it saves startup time, and it reduces your security exposure.

Okay. I talked earlier about exposing hardware features: you have to do the OS, you have to do the kernel, you have to do the hypervisor, you have to do the runtimes, all those kinds of things. If you do that right across all the pieces of the stack, you can get very significant performance increases. We've noticed that if you take an out-of-the-box, sort-of-optimized stack and spend a week or two picking the right operating system and making sure all the pieces are there, you can sometimes get an eight times, ten times, fifteen times performance increase just by changing a few software things. Getting it wrong versus getting it right is a very significant amount of performance, and performance is cost, performance is power; it's all the same kind of thing. So from an Intel perspective we're trying really hard to make sure all of this just works for you. That means we have to optimize not just the kernel and not just the hypervisor, but also the layers above them: the Eigen libraries if you want to do TensorFlow, the Hadoop layer if you want to do big data, TensorFlow itself, and of course zlib. There are a lot of libraries we're now really working on, making sure all the little pieces of optimization are there, in a way that you can actually consume and use.

And the last thing: I need to plug our booth. If you want to see anything I talked about today, we have a series of demos at our booth on the show floor, a few floors down. If you want to talk to us, I'll be there all day, and several of our engineers will be there too. Okay, and with that, thank you for listening, and back to Jim.

Thank you, Ari.
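The stack-wide feature enabling the talk describes, where every layer from the kernel to the math libraries has to decide whether something like AVX-512 is available, starts with a runtime check. A minimal sketch of that check in Python is below; it assumes a Linux system exporting a `flags` line in `/proc/cpuinfo` (as x86 kernels do), and the `dot` function is a hypothetical stand-in for a library kernel that would dispatch to an AVX-512 build when the flag is present:

```python
def has_cpu_flag(flag, cpuinfo_path="/proc/cpuinfo"):
    """Return True if the kernel reports `flag` (e.g. 'avx512f') for this CPU.

    Reads the kernel-exported feature list; on systems without a 'flags'
    line (non-x86, or no procfs) it simply reports False.
    """
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return flag in line.split(":", 1)[1].split()
    except OSError:
        pass
    return False


def dot(a, b):
    """Hypothetical library kernel illustrating the dispatch point.

    In a real optimized stack, an AVX-512 build of this routine would be
    selected here when has_cpu_flag("avx512f") is True; this sketch always
    uses the portable fallback so it runs anywhere.
    """
    return sum(x * y for x, y in zip(a, b))


if __name__ == "__main__":
    print("avx512f available:", has_cpu_flag("avx512f"))
    print("dot:", dot([1, 2, 3], [4, 5, 6]))
```

The point of the talk is that this one decision is repeated, in different forms, in the compiler, glibc, KVM, the orchestration layer, and every framework above them, which is why a tested reference stack matters.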