Hi, folks. I'm between you and lunch, and I'm very, very aware of that. So I'm going to be talking about unikernels: where they may be useful, and maybe where you don't want to use them. I'm going to go through the first half of this talk fairly quickly. I have a demo, and I want to spend a little bit of time on it, so I may slow down at that point. Firstly, I want to thank all the open source contributors who've ever pushed anything into unikernels; it's because of them and all that open source activity that anyone can stand up and talk about this. So who's heard of unikernels? Show of hands. Some of the room. Of those people, have any of you tried building anything with them? I want to talk to you afterwards. OK, so this is me. I work at Docker, in Cambridge, England. That's the real Cambridge, not the fake one in the US. And I look vaguely like this, but with slightly more hair on my face. So, an overview of roughly where we are today. We've heard a bunch of this from earlier talks. Software is typically an application: there's your binary, there's some runtime stuff, and we sit that on top of an operating system. And usually there are a lot of assumptions baked into that application about what it's sitting on top of. But we don't always use everything that's there in the OS. If your application is this thing in orange, it may only use some aspects of the libraries that are underneath. And even this diagram doesn't include everything: there's the shell, the other utilities, the other device drivers that come alongside the OS. So another way of looking at it is something like this. The stuff on the top is the code you actually care about. That's your business logic; that's the thing that makes you the money. The stuff underneath is all the code that the operating system insists you need.
I was at LinuxCon a few weeks ago, and I heard someone mention that there are now 21 million lines of code there. Does your next project really need 21 million lines of code? A slightly less charitable way of looking at this is something like this: there's your application on top, the good ship, sailing on the open seas, and underneath is all this stuff. You don't really know what's going on down there, but it's lurking in the background. But it gets more complicated than this. These days we build this stuff locally on machines like this one; I'm running this off a Mac. But we deploy it elsewhere. This is the cloud; it's far, far away. So the environment is already very different: you're building in one environment and trying to deploy in another. And it's going to get even more complicated, because we're going to deploy things even more remotely. This is the Internet of Things. And just in case you think I'm only talking about tweeting toasters or fridges that order milk for you, pause for a moment. This thing on the bottom right is an insulin pump that talks to an app. The thing just above it to the left is a wireless, internet-connected pacemaker. Just pause and think about that for a moment. In the future, you'll be deploying your software to devices like these. That brings a whole new meaning to terms like "embedded systems", and also to things like Heartbleed. So software today is complex, even though most of the applications we're deploying are single-purpose. Typically, when we deploy to places like the cloud, they do just one thing. And that complexity is the enemy; we've heard some of this already in an earlier talk. The more pieces you have, the trickier things are to configure, and the more duplication you have, especially if you're using traditional virtual machines, which duplicate the OS. That's inefficient. And the bigger things are, the slower they are to move.
And the more stuff you have, in general, the larger the attack surface of your whole system. But things are getting easier. I work at Docker. Docker has changed the unit of production, so now containers are the things we try to deploy. And Docker has also shipped Containers as a Service, a platform that sits between infrastructure as a service and platform as a service, giving you just the right amount of control and flexibility. But let's put that aside for a moment and take an extreme view. What if we went right back to basics, looked at the way we've deployed stuff until now, and asked: how should we do it? Well, firstly, we should disentangle those applications from the underlying OS, and start breaking up the assumptions we make between the thing you're deploying onto and the thing you're trying to build. Then you want to break up that OS functionality underneath into modular components: separately reusable pieces, separate libraries. Then you can link in only the system functionality that your application actually needs, and ignore the stuff it doesn't. And once you can do that, you have an ecosystem of libraries underneath, and you can target different platforms from a single code base. This, obviously, is where unikernels are useful. So what are unikernels? There's a Wikipedia page, so we're legit. But very briefly: using a modular stack, every application ends up compiled into its own specialized operating system, and you can target that for deployment on the cloud or onto embedded devices. Another way of thinking of it is "just enough OS for your application to perform". Now, it's important to note there isn't a separation between the OS and your application code; it's all one thing. But it's just enough of the OS components that your application needs, and nothing else. So, briefly, let me go into unikernels versus containers.
These things used to be compared against each other, unikernels versus containers. But really, we see them as being on a continuum. At one end we have containers, and as you go across the spectrum of increasing isolation and specialization, you naturally end up at a place where unikernels exist. So they essentially sit on the same continuum, and we can talk more about that over lunch if people would like to. Now, people have described this as a zoo of unikernel projects. There are multiple different implementations out there; this is just a handful of them. And there are a couple not up there that I've seen pop up recently using the Go language. So how do we separate these out? Broadly speaking, we can think of them as two approaches. One accommodates legacy code, and the other takes a clean-slate approach from the ground up. What the legacy approach tries to do is take the existing code that's out there. For example, Rump Kernels, on the left, takes the NetBSD stack and tries to break it up into reusable components, so you can take just the networking stack and use it in user space, for example. And the Linux Kernel Library is trying to do the same thing for the Linux kernel. The others take a clean-slate approach. They tend to be more language-specific: MirageOS, which I'll talk about more in a moment; HaLVM, which is written in Haskell; IncludeOS, which is written in C++. They've rewritten all the protocol libraries from the ground up, so you benefit from things like not having to worry about POSIX, you can have clean APIs everywhere, and you also benefit from the package management ecosystem that each of these languages comes with. So I'll talk a little more about MirageOS, which is written in the OCaml language. MirageOS is an incubated project with Xen; that's why that's up there.
Essentially what it tries to do is take all the stuff that would traditionally be on the left-hand side of this image; everything then becomes a library. You take the libraries that you need, you compile them down into a unikernel, and you target that unikernel at different environments. For example, this one is targeted to run as a Unix process. But just by swapping out the system libraries underneath, you can target different environments: you can target x86 with Xen, which is typically what the cloud is, or ARM. And just by rewriting the necessary libraries, you can target multiple different platforms. The thing that's useful about this is, let's say you're developing with Mirage and you're already an OCaml developer. Are there any OCaml developers? Anyone in the crowd who has tried OCaml? I see one hand; I want to talk to you as well. The benefit is that if you're using one of the language-specific unikernels, and it's a language you're familiar with, all the normal tools are there for you. All the usual tools you'd normally use for debugging and figuring out what's going on, they all exist. But you can now target your code, and make it much, much more specialized, to an array of different environments, just by swapping out the system libraries. So now what I'd like to do is show you how that works with a demo. I'm going to build something on my Mac, show you that it works, and then try to deploy it to the Internet of Things, which is represented by this thing here, this little board with flashing lights. It's a Cubieboard, so I'm going to deploy something onto here and run it. This will be a live demo; things may go wrong. So: I'm going to build and run an app in a Linux container, then retarget that app for an ARM backend using a different container, and then deploy that artifact onto this ARM device.
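[Editor's aside: to make the "swap out the system libraries" idea concrete, here is a toy sketch in plain OCaml of the structure MirageOS uses. None of these module names are the real Mirage APIs; the point is only that the application is a functor over an abstract signature, and "retargeting" means applying it to a different backend module at build time.]

```ocaml
(* Toy sketch of the library-OS idea: application code is written
   against an abstract signature, and the concrete "system library"
   is chosen when the unikernel is assembled. Illustrative only. *)

module type CONSOLE = sig
  val log : string -> string   (* returns the line it would emit *)
end

(* Two interchangeable backends, standing in for Unix vs. Xen targets. *)
module Unix_console : CONSOLE = struct
  let log msg = "[unix] " ^ msg
end

module Xen_console : CONSOLE = struct
  let log msg = "[xen] " ^ msg
end

(* The application never names a concrete backend. *)
module App (C : CONSOLE) = struct
  let start () = C.log "hello from the unikernel"
end

(* "Retargeting" is just applying the functor to a different module. *)
module Unix_app = App (Unix_console)
module Xen_app = App (Xen_console)

let () =
  print_endline (Unix_app.start ());
  print_endline (Xen_app.start ())
```

In real MirageOS the `mirage configure` step performs this module selection for you, wiring in a socket-based stack for the Unix target or a full OCaml TCP/IP stack for the Xen target.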
The demo I'm going to do is the 2048 game. Who's heard of the 2048 game? Yes, many people. This was built using OCaml and then compiled down to JavaScript, because OCaml has a very good JavaScript compiler. And because I'm going to have many, many terminal windows open, this is a guide to help you understand where I am. So let's drop out. Can we see? Yes, we can. I am in my OS X terminal. This is the repo that has the code for that game. It's essentially a static website; everything runs in JavaScript, so it's a fairly simple demo, and it's fairly simple to understand the components that go into building a static website. The first thing I'm going to do is run a Linux container. Here's one I made earlier. I should type properly, and I need to sort out my ports. I'm running a Unix one; I hope that will work. Yay! So let's check where we are. I'm in a Linux container, and I've already built this before. Here is the code for it. Actually, there's one thing I need to do first to tidy up; I made this at high speed. So this is the code; it's all in this container. What I'm going to do now is the configure step, which is what tells the Mirage tool what I'm targeting. I'm targeting this for Unix, and I want to use a socket stack, because I can do that. I also want to avoid the package manager doing any work. What would normally happen here is that Mirage would use OPAM, the OCaml package manager, to go off and get the necessary libraries for you. This container, I happen to know, already has all the necessary libraries, so I'm going to skip that step for the sake of speed. Now we can have a look at what it's done: it's generated a bunch of files. Now we just run make. So this has now built the image that's going to run on Unix. And we're going to run that, and we're going to cross our fingers. We can all see that.
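[Editor's aside: the heart of that 2048 game, the rule that slides a row of tiles and merges equal neighbours, is compact enough to sketch in a few lines of OCaml. This is an illustrative reimplementation, not the actual code from the demo repository.]

```ocaml
(* Slide a row of 2048 tiles to the left and merge equal neighbours.
   0 represents an empty cell. Illustrative sketch only, not the
   code from the demo repository. *)
let merge_left row =
  (* Drop the empty cells so the tiles slide together. *)
  let tiles = List.filter (fun x -> x <> 0) row in
  (* Merge each pair of equal neighbours exactly once, left to right. *)
  let rec combine = function
    | a :: b :: rest when a = b -> (a + b) :: combine rest
    | a :: rest -> a :: combine rest
    | [] -> []
  in
  let merged = combine tiles in
  (* Pad with empty cells back to the original row length. *)
  merged @ List.init (List.length row - List.length merged) (fun _ -> 0)

let () =
  (* [2; 2; 4; 0] slides and merges to [4; 4; 0; 0]. *)
  List.iter (Printf.printf "%d ") (merge_left [2; 2; 4; 0]);
  print_newline ()
```

Applying the full game move is then just this function mapped over the rows (or columns, after a rotation) of the board.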
Okay, so that's now running in the container. And it should... yay! Okay, so this is the 2048 game, running from within the container on the Unix backend. So that's fairly straightforward. Let's not get distracted; I'll leave that one for now. Now what I'm going to do is retarget that same code base to run on the ARM backend. But this time I want to mount it from a local directory. As we saw here, this is my local directory on my Mac, and I'm going to mount it into a container, but an ARM container this time. So now I'm going to run my ARM container and mount a volume. Let me check I'm in the right place. Okay, and we can check where we are: I'm in an ARM container, and now I'll navigate to where I mounted this. These are all the files that I have. Now I do pretty much the same thing I did before. Oh, first, let me double-check. Okay, I'm going to configure it to target Xen this time, because I'm going to run Xen on that ARM device. It's also going to get its network address from the laptop, and I'm going to overwrite some of the default files; you don't need to worry too much about what's going on here. And again, I don't want the package manager to do any work, because I already know the packages are there. Fingers crossed. Now we can have a look at what's happened: it will have generated a bunch of files for us. Okay, and now we run make again. This will take a couple of minutes. What would happen here if I did not have the packages locally, if I were using a fresh ARM container for example, is that it would have gone off, checked with the OPAM package universe, and pulled down the necessary libraries to build for the ARM backend. Because those are already part of this container, that step can be skipped, and this build has succeeded. You can see what we have: a couple of new files there.
So there's mir-www.xen, and that is the image that will run on Xen. So let's copy that across. Let's go back to my environment here; it might be a bit easier to see. We're going to copy this file over. Ooh, password; I'm glad I remembered that. So now let's log into that board. (I've cheated a little here.) And we can see what we have: there it is, the file we just copied. Now I'm going to run that, and these are just Xen commands. So what it's doing now is booting from that file, and it's picked up an IP address. We're going to need that IP address. Now we go to our web browser. This should have gone away, yes. Fingers crossed again... yes! So this is the same application, without any changes, now running on the ARM device, being served from here. So there it is again. That's an example of how we can take the same code base and target it at different environments. And I didn't have any of the local development environment installed for this; everything happened within containers. So it's actually fairly easy to get started if you just do a docker pull of the appropriate image. So what did we see there? I built and ran an app in a Linux container. I then retargeted that same app for the ARM backend, and then I deployed the artifact that was created onto an ARM device. And this isn't the only thing we can build this way; we can build many other applications. Another example, again a static website, is the Bitcoin Piñata. The entirety of its TLS stack was written in pure OCaml, and there's way less code in it. As an example of how much code we've been able to strip out: let's say this on the right is a traditional deployment, a traditional stack.
How much smaller do you think the thing on the left, the unikernel, would be as a percentage? Shout out some numbers. Fourteen, five, seven, eight... not bad. It's about 4% of the size. So the thing on the left does the same job as the thing on the right, but with 4% of the lines of code. The actual size is about 8.2 megabytes, everything running: that includes the networking stack, that includes all the necessary machinery to serve that web page, and nothing else. Meanwhile, the thing on the right is obviously doing the same job, but it has a whole bunch of extra code, and we don't necessarily need all of it, so somewhere underneath that is lurking our old friend from earlier. We don't necessarily know what's going on in there; we don't necessarily know what all that code is for. So, to recap: unikernels are highly specialized; you use them to build specific applications; they sit on a continuum with containers; and they can lead to fairly robust deployments, because you have less stuff to worry about. And one of the benefits I alluded to earlier is that everything is a library, and when everything is a library, libraries can be reused much, much more easily. We'll come back to that in a moment. So, there are real deployments: many people have taken this code base, all the code that we have, and actually built and deployed unikernels for themselves. Here's a selection of them. The Bitcoin Piñata is one; I didn't actually describe it in detail earlier. The Bitcoin Piñata essentially holds the key to a Bitcoin address, so it's holding 10 bitcoins right now, and it's on the internet, and we invited people to try and break it. If you break it, you basically get the bitcoin. It's kind of like a bug bounty: if you manage to break in, you keep the spoils.
Now, that doesn't necessarily prove security; that's not what we were trying to do. But it does demonstrate the resilience of the entire stack, because it's held up for well over a year now. There have been tens of thousands of actual attack attempts, and the bitcoin is still there. There are other deployments too: simple REST services, to-do apps; people are building simple things with this. There are also slightly more commercial deployments now as well. CyberChaff is a product built with unikernels using HaLVM, the Haskell-based unikernel, and essentially what it does is deploy an array of unikernels in an environment that look like services. So you can deploy something that looks like nginx, let's say a vulnerable version of nginx, except it's not; it's just a unikernel that looks like one. When you scatter a load of these onto the network, the intent is that if there's an intrusion into your network, one of the first things the intruder will do is look for the next machine to attack. If there's a flood of different services out there for them to try, they may connect to something that is a unikernel rather than an actual real-life service, and at that point the network administrator can be alerted that an attack is in progress and react however they want to. And that's possible because of unikernels, because these are small, single-purpose services deployed everywhere; those CyberChaff unikernels don't need much maintenance. Ericsson has also built a network function virtualization platform using MirageOS unikernels. That one's currently a proof of concept, and there's more information about it on a blog I'll mention later.
And because libraries are reusable, we've also taken, for example, the networking stack from MirageOS, which is an OCaml implementation of TCP/IP, and that's found its way into the Docker for Mac product; I was using it just now to show you all the stuff going on with the containers. So once you've built all these components, they are reusable, and they can solve different problems in other areas, and that's extremely useful. So when should you use unikernels? Well, I mentioned earlier that software is complex. I also said complexity is the enemy, and we heard that in a previous talk as well. But we also heard that it kind of depends; complexity is not necessarily that bad, depending on what you're trying to achieve. So here's an analogy to help us understand that. It's about using the right tool for the job. All of these will get you from A to B; in the case of the F1 car, from A to A very, very quickly. But you would only use some of these for certain tasks. You may not want to use this amphibious vehicle to get your kids to and from school, although I suppose that depends on where you live. But likewise, you wouldn't necessarily want to use the F1 car for a trip to the shops. They all have their relevant purposes. So what properties do unikernels have that help us figure out where to apply them? Well, you build a single service; I've mentioned that a couple of times. You don't have multiple applications running in a unikernel; it's just one thing. You naturally end up with a distributed system in which you have a collection of these things, and you can deploy each of them independently. And because each of them is independent, you can mix different unikernel implementations: you can have unikernels running alongside containers, alongside VMs, alongside other unikernel implementations. So obviously this sounds a lot like microservices.
So if you want to deploy unikernels today, microservices may be the best place to do it. What's the pathway to unikernels? Unsurprisingly, it's going to be very similar to the pathway to microservices. If this is your monolith, the big thing you already have, then identifying where the boundaries lie within that monolith is the first thing you have to do, and then breaking some of those pieces out. Once you've started to break up that monolith into a bunch of separate components, you essentially end up with a distributed system of microservices. Now, that has other issues around it, some of which we've heard about in terms of operations, but the pathway is pretty much like this: you have your monolith, you start breaking it up, and some of the things that come out the other side may be appropriate to rewrite as unikernels. Certain things like web servers, or perhaps firewalls, so networking services in the short term. And this is where we feel Docker is going to help you get there and be able to do all this. Another question that's often asked is: are these production ready? Again, I'm going to say it depends. Let's continue our analogy of transport vehicles. This is a Daimler from 1899, about a decade before the Ford Model T. It's the internal combustion engine; it works. People could drive this thing, but if you had one of these, you were probably quite well off, and either someone nearby or you yourself kind of knew how it worked. So if it didn't work quite the way you wanted, you would get in and tune it. You'd open up the hood, roll up your sleeves, and get your hands dirty. That's kind of the stage we're at right now. So if you're willing to roll up your sleeves and get your hands dirty, and you saw the kind of demo I gave with the various commands I was running, then yes, by all means, absolutely get started.
If something you need doesn't exist, you're probably going to go and write it, or talk to someone and give feedback. But fast-forward over 100 years, and this is what cars are like today. This one is all-electric. This one probably has an app that comes with it. This one will probably drive itself away if it's broken and needs fixing. So the question you should really ask yourselves, when considering whether these things are production ready (because clearly, unikernels are in production), is: are you a mechanic? Are you expecting to be hands-on, get your hands dirty, and understand what's going on underneath? If you are, then absolutely, now is a great time to start getting involved. Start writing libraries, pick your favorite language, and go contribute to those library OSes and those unikernel implementations. But if you're expecting something push-button that works straight out of the box, where you can take your existing code and just make it work, you may want to wait a little while for the tooling to mature, because this is quite early. It's still early days for that automated tooling, but we are getting there, and there's a lot of accelerating work coming from open source contributors; we're doing really, really well. At the moment, the community is gathering at unikernel.org, so for some of the projects I mentioned earlier, especially the ones getting close to production ready, you can read about them on that website. And I'll pause now; we've got a few minutes left for questions. Thank you for listening, and please do join me at lunch if you have more questions for me. Made it.