Good morning, everybody. My name is Idit Levine and I'm from EMC Dell — Dell EMC now, actually. In this talk we're going to talk a little bit about unikernels, about integration with Cloud Foundry, and a little bit about the Internet of Things. It should be an interesting talk. So, let's talk about the motivation: why did we start working on this? This is the stack as you know it when you run something like Cloud Foundry. You start with the hardware; that's the first layer. On top of it you have the hypervisor, and hidden inside the hypervisor are drivers. Then you put the guest OS on top of it — and guess what, it has another set of drivers. Then you have the OS kernel separating the user processes. Then you have the Docker runtime, which does much the same thing but is a great packaging tool. And inside Docker you put the shared libraries, the language runtime, the application, and the application config. That is the whole stack you run every single time you run Cloud Foundry. And my question is: what are we actually trying to do? What we're trying to do is run a single application for a single user on a single server. That's what we do in the cloud today. We're not sharing anymore; we dedicate one container, one server, to your application. And if you look at this stack, it's overkill. There's a lot of redundancy. Look at isolation, for instance: you have isolation at the hardware layer, isolation between OS user processes, isolation again in Docker, and eventually isolation at the application level for the user. And the kernel is very, very complex. The question is: why is it so complex?
And the answer is that the job of the kernel is to protect: application from application, user from user, and application from user. If I'm running my application on the same server as someone else, I need to make sure that if they do something wrong, it doesn't kill my application. That made a lot of sense when we were actually sharing — when a computer ran more than one application. But we're not doing that anymore, so why do we still need it? This model goes all the way back to the mainframe, which was very, very expensive. We couldn't afford one server per application, so we let software do the isolation. The thing is, that's not the case anymore, and there's quite a lot of complicated software involved in it. Permission checks, for instance. And protection rings: I want ring separation because what happens if I'm running my application next to someone else's and they do something nasty to the kernel? I lose my application as well. So I need to separate. I also need separate address spaces, to make sure you're not touching my application's memory. But again, for one application on one server, none of this makes sense. Now, if you look at the kernel itself today, there are quite a lot of unnecessary components in it. You have a floppy driver, for instance — I don't know anybody still using a floppy drive, but guess what, you have the driver on your machine. You also have USB drivers on a machine running on AWS, where I can't even get to the physical machine. Why do I need those bytes there? And there's a lot of other stuff on your machine that you're really not using, eating quite a lot of storage and memory; sometimes it's not even running, but it's still there.
And if you look at the update model, it's the same thing. You basically do apt-get or yum, and you pull all this stuff onto your machine. You don't know what it is; some of it you don't need, some of it you already have — it's still going to be there. So again, less control: you don't know what you're running on your machine, but you're running quite a lot of stuff you don't need. In terms of security: I just described how much stuff you have that you don't even need, so guess what — the attack surface is huge. I can SSH to the machine. There are backend drivers you don't even know about that I can go talk to and mimic. There's a lot of vulnerability, because the attack surface is very, very big. In terms of sharing: we all love microservices, but we're actually sharing quite a lot with them. We're sharing the kernel, we're sharing the memory, we're sharing the file system, we're sharing the hardware. And the only thing we do to protect all that is use cgroups. It's as if I bought a really big house with lots of windows and doors and put a lock on each one — maybe that works, maybe it doesn't. But if I took one room with no extra doors and windows, it would probably be much more secure; it would be much easier to protect. So again, I just want you to understand what you're running, because I'm not sure we pay attention to it. The kernel languages: to support one kernel, one needs to know all these languages, all these technologies — and the list goes on and on. And this is just the Linux kernel; I'm not even talking about a real distro like Ubuntu. This is what you need to know in order to maintain one. I would argue there are not a lot of people in the world who know all of that.
In terms of size, the way I like to define it is that a small application today is around 10K lines of code. Medium to large is probably around 100K lines of code. For a huge application, we talk about millions. Guess what: the Linux kernel alone is 22 million lines of code. And the Debian distro — the operating system you're actually running — is 419 million lines of code. To understand what is actually happening in there and maintain it, think about the potential for bugs and how you prioritize them, because there's quite a lot of code to maintain. So that's why I'd argue what we're doing is all wrong. The question is: how did we get here? And the answer is pure evolution. We started all the way back with the mainframe and went through the personal computer era to where we are today, and the one thing that was there the entire time is Unix. If you look today: what can Linux run on? Anything. And what can run on Linux? Anything. I can take a Pentium from ten years ago, install the latest Debian distro on it, and it will just work; I don't need to do anything. So we made a choice as a community, and the community decided that compatibility is more important to us than efficiency. That's where we put our target: it's important to us that someone can run on a very old machine. Okay, that's interesting. So we made it work. It's working — we're running all the data centers, and everything works very well. But I believe we should now make it right, and then make it fast. So the question is how. What can we do to make it better? Should we rewrite it? Maybe we should build one from scratch. That would probably be too much work — to just take the whole system and rebuild it. Or would it? I'd suggest that we think differently.
This is a quote from Linus Torvalds, the guy who built Linux. It comes from a totally different argument — the one about the monolithic Linux kernel versus microkernels — but what I want you to take away is the end of it. What he said is basically this: at the end of the day, the Linux source is portable, except for a tiny kernel that you can probably — and he did — rewrite totally from scratch in less than one year, without having any prior knowledge. What I'm trying to say is that it's not that complicated; we can actually do this. And people do it. It's called a unikernel. So what is a unikernel? Let's understand how it works. This is the stack you're running today: no matter what application you're running, in your guest OS you will have the kernel itself, the OS libraries, the runtime, and the application. No matter which application you run, you always have those libraries and that kernel. What a unikernel does is say: let's start from the application and see what it actually needs in order to run, and take only that. So it ends up looking like this: only the pieces of the kernel it actually needs, only the pieces of the libraries it actually needs — and then it's packaged. I'll go through this quickly because I don't have a lot of time, but this is the gist of how it works. It's based on a library OS, which means all the drivers are basically libraries. It contains only what it needs. It's a single address space — because, you know what, there's only one user and it's running in kernel mode anyway. One process; I don't need address-space separation. Single process, and that's key: if your application forks, you cannot run it as a unikernel. Threads are supported; forking is not.
No virtual memory isolation, again — no context switching, no user mode versus kernel mode. It's really straightforward. So how do you go about building one? You take your application binary, your application config, your application dependencies, your language runtime, and your drivers, and you put them into a kind of magic packaging tool. The result is a bootable image. You can take this bootable image and run it on bare metal, but that probably won't make sense, because the beauty of all this is that it's so tiny — why would you take a big machine and put a very tiny operating system and a very tiny application on it? So usually you use a hypervisor. Except for the Internet of Things and embedded devices — there, bare metal makes sense. So it's a VM: you have everything you need, and you just run it on the regular hypervisor you're already running; nothing very special here. And the advantage of running it on a hypervisor — a very tiny VM — versus running containers on bare metal is the fact that the hypervisor gives me physical isolation: it actually uses the chip itself to separate the processes, which is much more secure. So what did we do? Suddenly the stack looks like this: far fewer layers, far less code, easy to reason about. We just removed what we don't need. So let me summarize the advantages of a unikernel. There are no users — no multi-user support — and no permission checks, which means you actually utilize 100% of your machine for your application. Isolation is at the hardware level only, which means the only thing you're sharing is hardware. And size: the minimum size of a VM today — say, the kind you take for Cloud Foundry — is probably around one gig; that's at least what you usually start with.
A unikernel is tiny — we're talking kilobytes. Basically, the size of the unikernel is the size of the application, because the overhead it adds on top is really close to zero. Very, very short boot time, because again, there's not a lot to boot: seconds, and in some cases actually milliseconds. And the last thing is that the attack surface is very small, because there's not a lot of stuff there — you cannot SSH into it, for instance. But it's also custom, which means that if someone were very, very motivated and somehow succeeded in getting into your machine, the same exploit script would not work on a different machine. It's not going to work, because that one looks different: the libraries are different, the kernel is different, everything is different. So that's another advantage. Just for your knowledge, there are two types of unikernels in the market today. The first is what's called backward-compatible, meaning it supports POSIX, the system API, which means every language you ran before, you can still run: Java, Node.js, Go, C++, Ruby — you name it, it will be there. The other type is more specialized and not POSIX-compliant, so you need to build against the libraries of that particular system. It's usually better for greenfield projects, but you get better performance, because you don't need to translate through POSIX. An example is MirageOS — from Unikernel Systems, the company that Docker bought — where you need to write in OCaml, which is interesting, but not a lot of people run OCaml. On a rump kernel, by contrast, you can run whatever you want. And this is a performance example: a company called Cloudius Systems in Israel. They actually started from unikernels and pivoted; the way they pivoted is that they created a database called ScyllaDB — they took Cassandra and rewrote it on a unikernel.
And this is the result you see: ten times better. I think that speaks for itself — you actually get ten times better performance than if you ran regular Cassandra. Okay, so here are two projects out there in the unikernel community today. One of them is this; I'll show it to you real quick. This is the Piñata, a challenge from the MirageOS guys. They wanted to show how secure it is, so they created this VM, which is a unikernel, and they basically challenged everybody to come and try to take it down. And to be fair, it's still there, and it's been quite a long time. I'm telling you, there were a lot of attempts — they all failed, because there just aren't many ways to get inside and attack it. That's one project. The second one I really like because it shows you data: when I click this button, a machine spins up — and that was 0.2 seconds ago. That's how quick it is to spin up a unikernel. And now it gets interesting, because for serverless, for instance, today the technology reuses containers; with unikernels you don't need to, and it's much more secure. Just spin up your application in a unikernel and it will just work. You can look at it — I'll share my slides so you'll be able to try it. So what did we do? I just told you about a very cool trend inside the open source community. When I looked at this technology, it struck me as a very, very attractive solution: if I had to go and design a system from scratch, this is the way I would do it. So the real question is: why am I the only one who thinks like this? Well, I'm not — a lot of people think like this.
Why are they not using it? The real answer is that it's exactly what happened with containers: until Docker came along and actually made them easy to use, nobody used them — or very few did. So that was the goal of this project. We said: this is interesting, but it's very, very hard to create a unikernel. You need to understand the drivers and wire everything up; it's very complicated. What if we created something Docker-like, so you can just run a VM? Give me code, I will build it, I will run it, and I'll provide what you want. That is what UniK is all about. It's an open source project, written in Go, very clean — go look at it; I'll show you in a second. So how do you work with it? It's very simple. `unik daemon` spins up your environment. With `unik build`, you say where your code is and which unikernel type you want deployed. Because that's the beauty of UniK: I don't know which unikernel will win — there are quite a lot of them out there, and I don't know what you want. Maybe your use case makes sense for MirageOS, even though it's written in OCaml; or maybe you have existing applications you want to run, so you'd use a rump kernel. I didn't want to make that choice for you; I want you to make it. So you just tell me which unikernel type you want, which language your code is in, and where you want to run it. That last part is probably the key difference between Docker and unikernels: I need to know where you want to run, because I need to compile the drivers in — whereas Docker is infrastructure-independent and doesn't care where you run it. After that, we create the image for you, and you can just do `unik run`; I'll show you a demo in a second. As I said, this is, in my opinion, the most important slide about UniK: the fact that we made UniK unopinionated.
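The workflow just described might look roughly like the following shell session. This is a sketch from memory, not the project's documented interface: the exact flag spellings vary between UniK versions, so treat every flag and value here as an assumption and check `unik --help` against the repository before relying on them.

```shell
# 1. Start the UniK daemon, which manages builds and instances
#    (it reads its provider configuration from a daemon config file):
unik daemon

# 2. Build a bootable unikernel image from local source code.
#    --base picks the unikernel type (e.g. rump, osv, includeos),
#    --language the runtime, --provider the target infrastructure.
#    All flag names here are illustrative:
unik build --name myapp --path ./myapp \
  --base rump --language go --provider virtualbox

# 3. Boot an instance of the image and inspect what is running:
unik run --instanceName myapp-1 --imageName myapp
unik ps        # list running instances (shows the assigned IP)
unik images    # list images built so far
```

The key design point the talk makes is visible in step 2: unlike `docker build`, the provider must be named at build time, because the drivers for that infrastructure get compiled into the image.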
And the reason we made it unopinionated, as I said, is that we don't know — it's a new area, and we just want to give you all the options. The unikernel types we support today: there's OSv, very good for running Java. There's IncludeOS, one of my favorites and very interesting to watch; the only problem is that it doesn't support POSIX, which means your code has to be rewritten, and C/C++ is the only option. With rump kernels you can run everything you want — they're POSIX. And there's MirageOS. In terms of providers, again, we want to give you all the options, and these are the providers we support today. OpenStack — a great EU project, MIKELANGELO, contributed that, and right now you can spin up an OpenStack environment and run unikernels on top of it. Most unikernel types will work there, except MirageOS, because its drivers are specifically for Xen — if you're not running Xen, it's not going to work. VirtualBox — I want you to try it on your own machine; maybe you don't want to spin things up in the cloud, you just want to run it locally. AWS, which makes sense. KVM, again, for local use. QEMU, if you want to play with it a bit more — you can attach a debugger and so on. Virtustream, which is the EMC cloud. VMware vCenter — most of us run on it — and Photon, which is VMware's new technology. But when I look at this architecture, what I notice is that the Internet of Things would be a very, very good use case for it, because, as I said, it's very small, very secure, very fast — everything you need for the Internet of Things. So it was important to us to support ARM, and that's what we did: embedded devices. We have Denys actually sitting here, who built UniK Hub for us. The motivation for UniK Hub was that you build your own unikernel, but sometimes the one who builds it is not the one who runs it. And we also wanted community.
That's what this whole project is about; that's why we open sourced it. We want people to build unikernels and share them with the community. So we did that, and Denys made it happen — thank you, Denys, I appreciate it. This is an example of what we're doing: we basically search for a unikernel in the hub and then we run Minecraft, which I'll show live in a second. Docker integration: we noticed that the open source community really, really likes the Docker API, so it was really important to us that people would not need to change their scripts, their build systems, and so on. So what we did is teach UniK to speak the Docker API. The idea is that with the `-H` flag, today you can already target a remote Docker daemon; now you can target a UniK instance instead, and we translate the Docker API calls into UniK API calls. So you can do `docker run` and get a unikernel. That simple. When we started this project at the EMC Dojo, our target was always to run it with Cloud Foundry. UniK is a very good tool, but it's like Docker: it spins up the unikernel for you, but it does not monitor it. We're not checking its health; if it's dead, it's dead — we don't know, we don't care. Therefore we needed to leverage something like cluster management, and of course Cloud Foundry was the natural solution. So what we did is simply create a buildpack. It's that simple. We showed a demo running with PWS, Pivotal Web Services, but you can do it with any Cloud Foundry distribution you want. Basically, you just set the UniK buildpack, and instead of your code being built into a container, it's built into a unikernel.
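The Docker-API bridge mentioned above might be exercised like this. The hostnames and the UniK daemon port are hypothetical placeholders — only the `-H` flag itself is standard Docker CLI behavior; the exact endpoint UniK exposes is an assumption to verify against the project.

```shell
# Normally, -H points the Docker client at a remote Docker daemon:
docker -H tcp://docker-host:2375 ps

# With UniK speaking the Docker API, the same client can target the
# UniK daemon instead (host and port here are illustrative), and a
# plain `docker run` boots a unikernel VM rather than a container:
docker -H tcp://unik-host:3000 run myapp
docker -H tcp://unik-host:3000 ps
```

The design choice is that existing Docker-based scripts and CI pipelines keep working unmodified — only the daemon address changes.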
And the beauty of running it on the platform is that you can run your code once here and once there — or some in containers and some in unikernels — and you leverage everything the platform gives you: routing and everything else you get for free, you just get it. So let's see a quick demo. Do we have time? Yes — worst case, you'll miss lunch, okay. So I have a machine. Oh, it just ran something — never mind, I just instantiated the machine. What you see is that by accident I just spun up a unikernel. This is what we support right now. The beauty of it is that this is not a greenfield space like containers were, where we basically had to figure out how volumes would work because there was no support, or networking. This is a VM: you run it with your regular tools. You can see it in vCenter; you can live-migrate it with vMotion. Everything works the same; volumes are supported out of the box. Everything is very easy. So we'll do `unik images`, and you'll see that I built some images earlier. What I just did by accident was spin up one of them, called node.js. If we do `unik ps`, you get what I'm running — and I'm apparently running that because, as you see, I just did it. As you see, I have an IP address. If I take this and put it in a regular browser — this is running, by the way, on AWS — it's a Node.js application, so we'll go to port 8080. As you see, this website is running right now on AWS. 56 megs — just to give you an example, that's how big this VM is: 56 megs for a website. So let's do some other stuff — for instance, let's spin up a game. I'll do `unik run` — actually, it's a bit easier just to do it this way. What I'm doing right now is spinning up Minecraft, just to show you how simple it is to do something like that. The image name will be minecraft-demo.
So what I'm doing is just spinning up a VM. As you see, it's still pending. We'll do `unik ps` — still pending, but we already get the IP address. So this is a Minecraft unikernel running on AWS. What I'm going to do is take that IP and try to play Minecraft. Did I close the client? I need to open it again. Basically, I just want to show you how simple it is to do something like that. So we're going to play: we'll add a server with the IP address, which is 54.166.193.49. Yep, let's run a quick server. What is that — still pending? Let's run it. Did I put the wrong IP address? Yeah, I don't know why it's not here. Try to connect. No — maybe it's a network thing; let's go figure it out. Okay, I'll show you the logs in a second, because the cool thing about UniK is that you can actually go and look at the logs: take this IP address, hit the daemon's port with /logs, and hopefully find out what's wrong. Yeah, I don't know what's going on here. That could be the reason. Still can't — maybe it's not the right thing. Not working; we'll look at it afterwards. For some reason it's not working. But let's go back to Cloud Foundry, because that's why we're here. What I'll do right now is show you that I have the CF CLI running here. I don't have network, right? That's the problem — that's why nothing is working. I have a network problem here; that's the point. Is that the right one? WiFi? No, let's close it. It's very important. Not working — let's just reconnect right now. Okay, organization — great. So now we'll do cf. Oops, I'm pulling it into this — yeah. So now we'll go to a directory I have here: for example, a Spring example application.
I prepared this one earlier. And now what I'm going to do is as simple as `cf push`: I point at my buildpack and my manifest, and basically I'm spinning up a unikernel right now. As I said, this is working with PWS, but you can run it with any Cloud Foundry you have. While that's going — it shouldn't take long — let me explain how we did it. You're still running a small container on your infrastructure, on your Cell, but inside this container we run a very, very small process whose job is to monitor the unikernel. That's one thing it does: it spins up the unikernel, then it monitors it. Which means that if the unikernel dies, or the container dies, the other one is killed too, and Cloud Foundry will spin up a new one. So it's basically a proxy — and it's also a proxy for the actual traffic, for the IP, so requests to the container reach the unikernel. Let's see what's going on — `cf apps`. So now it's running; in a second we'll see the application. It takes a second because, I think, it's still spinning up. Let's see if the application is up — still staging. It's probably taking a little while to spin up. What we can do in the meantime is something like `cf scale -i 4` with the application name, so we're just scaling it up — oops, I forgot the minus sign. So right now, if I do `cf apps` for this application — let's see if it's up already. Still working on it. Come on, work. It's still spinning up; I don't know why it's taking so long. What we'll do in the meantime is go to UniK and see what we're running right now. In UniK we're running six machines: the Minecraft one, the space website I showed, and four instances of the unikernel of the Spring application.
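The push just demonstrated could look roughly like this. This is a hedged sketch: the buildpack URL is a hypothetical placeholder and the manifest is a generic Cloud Foundry manifest, not the project's exact values — only `cf push -f` and `cf scale -i` are standard CF CLI usage.

```shell
# Write a minimal CF manifest pointing at a UniK buildpack
# (the URL below is a placeholder, not a real repository):
cat > manifest.yml <<'EOF'
applications:
- name: spring-demo
  memory: 512M
  buildpack: https://github.com/<org>/unik-buildpack   # hypothetical
EOF

# Stage and run: the buildpack builds the code into a unikernel image
# instead of a droplet executed directly inside the container:
cf push spring-demo -f manifest.yml

# Scale it like any other CF app -- the platform spins up one
# monitoring container (and one unikernel VM) per instance:
cf scale spring-demo -i 4
```

Because each container only proxies and health-checks its unikernel, everything Cloud Foundry already provides — routing, restarts, scaling — applies to the unikernel instances for free.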
What I'm going to do right now is try to kill one of the Spring application instances, to show you that Cloud Foundry will just spin it up again. That simple. In UniK it's `unik delete-instance`, and I can just pass one of the instance IDs. Let me work with that — instance, what would it be... Apparently — I think the problem is that I could pass the name instead, but if I pass the image name it will kill all the instances of the application; that should work as well. Let's see... Well, I wish I had a working demo. But let me take just a second to continue and show you where we're going, and then I'll come back and hopefully it will work. As I said, I'm a big believer that we should create our own future, and I think UniK is a perfect fit for the Internet of Things. That's why we went and did this exercise: we were the first to actually run UniK on a Raspberry Pi. An embedded device — that was the point. When we first put the unikernel on the Raspberry Pi and connected it to a monitor — surprise, surprise — we didn't see anything. It didn't even load; it was just a black screen. So what we did is connect a second Raspberry Pi to debug the first one over GDB. I don't have time to show you that demo — and maybe it's better that we don't show any more demos today — but at a previous conference, and you can find it online, I had UniK make me toast on stage. Basically, we had a switch; the Raspberry Pi fed power to the switch whenever we wanted, so I could just click a button on my phone and it made toast. We won't show that. Let me talk for one second about where we're going with this. We added support for MirageOS. MirageOS is interesting; the reason we decided to choose it is that Docker bought Unikernel Systems, the company behind it.
Therefore there's a lot of attention on this specific group. They have a community of decent size, and people are already running some of it in production — for instance, when you run Docker for Mac today, you're running a unikernel; you just don't know it. And they're doing a lot of work related to the NFV world — network function virtualization — in a serverless fashion; there's a project Ericsson did, for instance. So there's quite a lot of very cool, interesting work there. And we wanted to make sure people wouldn't need to go into configuration files and start fiddling — that's too hard a way to run a unikernel. With UniK it's just very, very simple. Solo5 basically comes from IBM. MirageOS descends from a feature called Mini-OS in Xen, which means it only runs on Xen. What the guys from IBM did is take the low level of the unikernel, change it, put in KVM drivers, and give it the ability to run on KVM and QEMU. But they did another thing that, in my opinion, is very, very interesting. Usually when you build a unikernel, as I said, you look at what your application needs and include only that. But you're still using a regular hypervisor, and a regular hypervisor gives you everything. For instance, if I build my application and it doesn't need a volume, I won't include the volume driver — but the hypervisor will still expose a backend for volumes. That's very dangerous: one of the worst exploits that ever happened was someone getting into a machine and mimicking a backend that wasn't even needed, and basically compromising the hypervisor.
So what they're saying is: it would be interesting to have the toolchain build not only the unikernel itself, but also a specialized hypervisor that has only what this particular unikernel needs. That's what's called ukvm — unikernel KVM — a project by IBM Research. And, surprise: we will be supporting it, probably next week. We already have it running; we're just testing things. Again, our purpose is that whenever you want to run a unikernel, you won't need to deal with anything manually: just come to us, we'll build it for you, we'll run it for you — very, very simple. ARM support will come; we actually have it, but we need to open source it, so we have to work through EMC legal. Then .NET support. Think about it: running Linux workloads is interesting, but what about .NET? There's quite a large ecosystem out there building .NET applications that isn't really using Docker much yet, and they'll want the same kind of developer experience. The cool thing is that you could take .NET running in a unikernel and pay zero OS licenses, which in my opinion is really, really cool. And the last one is serverless. So — it's open source; go to the repository. What you can see is that there's a community around it: there are issues, there are pull requests, and we actually take care of all of them, which is quite fantastic; and there's good documentation explaining how to run it and what to run. It's very, very easy. And the last thing I'll say is exactly the point we're trying to make: we cannot do this by ourselves — not even EMC can.
What we need is your help as a community. I really believe — I'm a big, big believer — that this will be at least part of the future in the cloud, and for the Internet of Things it will be the future, in my opinion. And therefore we need a community to help us. We already do our best: this whole project right now is me and two more people. We have a Slack channel where people ask questions and we help them with any issue. There's a UniK Twitter account you can follow for announcements — every time we ship a new feature, we post there. And there's the repository itself, where you can file issues and so on. And that's it; that's all I have. Thanks so much.