Enable new technologies that operators are trying to leverage. To give you an example, virtual CPE allows an operator to deploy a much, much more simplified CPE, which reduces the capex and the opex and lets them launch new services much faster than in the traditional way, because everything is launched in the cloud and the user can access it from home or from a small enterprise directly.

So the advent of this distributed NFV, coupled with SDN, is actually adding more requirements. While the picture looks very attractive — you can move functions to the cloud and so on — new requirements are emerging that are mainly about how you stitch these NFVs together once you move them to the cloud, and how they should communicate in the most optimal way. That is the important problem that is the focus of this talk. It is not about NFV per se; we know that NFVs can be built as unikernels, as you have just seen. It is about how to dynamically stitch these NFVs together, because, as I said, new requirements are coming: per device, per user, per application, or per any combination of the three.

The second point is that we all know containerization simplifies the virtualization stack. However, it constrains apps to run on the same kernel, and we all know there are security issues that, for example, make it difficult for providers to offer multi-tenancy with containers alone. Also, both containers and VMs run on a full-blown kernel, so there is a large amount of dead code, which means a larger attack surface, and system vulnerabilities are on the rise. They take a long time to boot, so they have to be always on; there is no zero footprint, which also means high power consumption.

Last but not least, operators are moving toward highly distributed small data centers. For example, AT&T is deploying what is called the NGCO, the Next Gen Central Office, and Orange in Europe is doing the Next Gen POP. This means a limited number of CPUs, because space is really limited, and those CPUs are used to run the operator's NFVs. So it would be an interesting opportunity if operators could also leverage these small data centers to host third-party applications — for example, if they could put on each host much more than what current VMs or containers allow.

So we asked ourselves: can we do better on scalability, on security, on boot time and on service chaining? And we looked at unikernels. As we have already seen today, I think you are convinced by now that unikernels beautifully address scalability, security and boot time. The service chaining question, however, is still open. We do not claim to have solved it yet; as I said, it is an ongoing project, but hopefully I will raise your interest and maybe the community will take a closer look at this.

So, unikernels at a glance. We all know this is the representation of a VM. A container simplifies the picture, but if you want to secure the container, what you hear is that you should run the container in a VM — so you are back to the same stack. What we are after is something just like that: the unikernel. I will not repeat all of this, but it would be interesting to have something like a zero-footprint cloud, whereby no instance is running idle waiting for requests. The other points have already been discussed.
So what we are up to: we want to synthesize specialized, on-demand NFVs and stream them into our next-gen cloud appliances. The first goal is to leap past current VM and container technologies by introducing these leaner unikernels. We want to build them per user, per app or per device. So it is not only about slicing the network, which is what SDN is capable of, but also about slicing the infrastructure. Once you get your own slice of the network, it is like plugging your own applications onto that slice too. But these applications should come up and know how to talk to each other on the fly; it is not just another configuration pushed from SDN or somewhere else. They should also respond to network traffic in real time.

We want to unify automation, orchestration and SDN control. These NFVs are created only when needed, they are stitched together when they come up, and they are removed when the demand is fulfilled, so the whole dedicated slice is freed afterwards. We also want to enable an in-network processing cloud: as I said, to host third-party NFVs, to do NFV acceleration, and to offer low-latency services.

What do I mean by unifying automation, orchestration and SDN control? As I said, NFVs are created only when needed. Let's say this is LTE: you switch on your device, it goes to the radio, it does some authentication, and the network will know that this is person X and he needs these NFVs. They will be created only for him, and they come up knowing how to talk to each other in an optimal way. On the fixed side, for example with virtual CPE, you turn on your laptop and you get your dedicated NFVs; and if the app is running in the cloud, it will come up on the same slice as well. Sensors sleep most of the time; when a sensor wakes up, it also gets its own slice, and when it goes back to sleep, the slice goes away.

Where are we implementing this technology? In the short term, on our virtual router: Ericsson has a virtual router, I will not do marketing for it, you can just see it on the slide. A longer-term goal for us is the hyperscale data center system, or HDS, where the key technology is hardware disaggregation. We believe unikernels would be a very, very interesting fit for this technology. A few words for those who are not familiar with it: on a current server you have the CPU and the RAM, everything is on the same board, nothing is new; the server has a fixed configuration, it needs to fit all workloads, and if there is a problem you usually just change the server. In the HDS, or hyperscale architecture, the hardware is disaggregated in the sense that you now have a few sleds in the machine and an optical backplane. If you need more RAM, for example, you add another sled that is only RAM; you can put the storage on the side, and so on. So you can start breaking the hardware itself into pieces and creating a network out of them.

Now about SDN, since, as I said, our main focus is on stitching NFVs. What is happening now? How are systems deployed with SDN in the cloud? You have basically three main pieces: the service-level orchestration, an SDN controller, and the data center orchestration. These three pieces have to work tightly together in order to stitch the virtual machines together and to slice the network as well. A closer look, with an example: a user authenticates.
Then the AAA will send the policy for this particular user to the SDN controller. The SDN controller will go to the virtual switches and apply a configuration that allows the traffic to go from one VM to another VM if it is on another host, and so on. However, if you are on the same host — where we believe there will be a very high number of unikernels, as you saw, 10,000, as the colleague just presented — and you apply the same architecture, then the traffic will be bouncing on the OVS every time it goes from one VM to the other. This is what we want to remove. Unikernels, as we have already seen, give us so many beautiful features, and we want to do the optimization all the way. So we thought about how we can remove this part and make things more interesting.

The way we look at unikernels now is that, since they are so small, they can go right where the OVS sits, and that changes the way we process packets. What does it mean to stitch these NFVs together? It is all based on shared memory. A packet comes in, unikernel 1 acts on it, unikernel 2 acts on it, then 3 and 4, and then the packet goes out and another packet is pulled in. The mechanism we are looking at to implement this is the following: unikernel 0 (U0), let's say, has been given access to the NIC. It pulls the packet into the shared memory and tells unikernel 1: do your work. Unikernel 1 finishes its work and tells unikernel 2: do your work, and so on, until unikernel 4 finishes and signals to unikernel 0 that it is done. The packet is pushed out and another packet is pulled in. This zero-copy shared memory is the key component for enabling really optimized, very fast service chaining. We are not there yet, but this is what we want to do.

How would the system work in this case? We are using Mirage's beautiful distributed database called Irmin, which someone mentioned this morning, along with our own module that we call Lightning, and Xenstore. The AAA, for example, puts the policy into the database; the database signals the Lightning module and Xenstore; and then, when a packet comes, the system creates the whole chain on the fly, as you are going to see. The packets flow between the unikernels without bouncing on an OVS.

So an example. The subscriber policy comes to Irmin. The Lightning module is created, and it creates a packet I/O unikernel that deals with taking packets in and out. Then here is the first packet, a DHCP request; there is no DHCP server for this subscriber yet. The request goes to Lightning, Lightning says, okay, this needs a DHCP server; the DHCP unikernel is created, the packet goes to DHCP, gets an IP address and goes out. Then a data packet comes, it goes to Lightning, and Lightning says, okay, this needs this particular chain — for example a NAT and a firewall, as you are going to see. The packet then goes through the NAT and the firewall. This happens only for the first packet; all packets after that, of course, go directly to the NAT and the firewall.

I am going to stop here and let my colleague do a demo, to show you that what I am talking about is actually real. In our current version, we have implemented this using common MirageOS components.
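To make the handoff just described a bit more concrete, a minimal OCaml sketch of the U0 loop might look as follows: one shared buffer, a fixed chain of network functions that each act on it in place, and a pull/push driver standing in for the NIC. The names here (nat, firewall, run_chain) are illustrative only and do not come from the actual Lightning code.

    (* Illustrative sketch only: names are hypothetical, not the real Lightning code. *)

    type nf = Bytes.t -> unit
    (* Each network function mutates the shared packet buffer in place,
       so the packet is never copied between unikernels. *)

    let nat buf =
      (* stand-in for rewriting an address field in place *)
      if Bytes.length buf >= 4 then Bytes.set buf 0 'N'

    let firewall _buf =
      (* stand-in for an accept/drop decision; a real NF would signal
         "drop" out of band rather than return unit *)
      ()

    let chain : nf list = [ nat; firewall ]

    (* U0's loop: pull a packet into the shared buffer, signal each
       unikernel in turn, then push the packet out and pull the next. *)
    let run_chain ~pull ~push =
      let rec loop () =
        match pull () with
        | None -> ()                                (* no more packets *)
        | Some buf ->
            List.iter (fun nf -> nf buf) chain;     (* U1 .. Un act in place *)
            push buf;
            loop ()
      in
      loop ()

    let () =
      (* toy packet source and sink standing in for the NIC *)
      let packets = ref [ Bytes.of_string "pkt-1"; Bytes.of_string "pkt-2" ] in
      let pull () =
        match !packets with [] -> None | p :: rest -> packets := rest; Some p
      in
      let push buf = print_endline ("out: " ^ Bytes.to_string buf) in
      run_chain ~pull ~push

In the real system the buffer would live in memory shared across Xen domains and the "do your work" signal would be an event channel rather than a function call, but the control flow is the same: the packet stays put and only control passes down the chain.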
So the blue ones here, as you see, are our custom unikernels. For DHCP we are using the existing Mirage DHCP library, and for the NAT we use mirage-nat. We have split it this way so that only the packet I/O portion uses the network stack; from that point on we use shared memory between the different domains. The blue ones are full unikernel VMs running as DomUs, and Lightning is actually a unikernel as well, but we run it as a Unix process in Dom0 so it can spin up VMs.

So we have here an empty Xen data center. Let me get rid of the slides. We have the subscriber; in the box we just use a simple JSON document to represent the subscriber, and in the demo we identify the subscriber by MAC address. Then we have a configuration for the different VMs. For instance, the packet I/O uses 32 MB of memory, as it says here, and the configuration tells it which bridges to use to get the packets in, and gives a simple configuration for the external and internal IP addresses used by the Mirage IP stack. Then, for the DHCP server, we have a simple setup for that subscriber, so it knows which IPs to hand out, what the router configuration is, and those sorts of things. This is basically the configuration you need to give the Mirage DHCP server for it to work.

We have a simple deploy script that takes this JSON and pushes it to the Irmin database — sorry, I used the wrong command — and creates a packet I/O unikernel, which connects its network interfaces to the bridges to the external world, and then we run the Lightning module as a Unix process, in the screen here. So now we have a packet I/O unikernel waiting for packets, and we have a network namespace for the client. You can see that it doesn't have an IP address yet, so we can do a DHCP request — and we got the IP already. Here it shows the time it took to configure and boot up the DHCP server; I can make it a little bit larger. It took, as it says, 36 milliseconds in this case. And we got the IP address. As you can see here, it created the unikernel, and once it was created it was connected over shared memory to the packet I/O. So now, when the packet I/O gets DHCP requests, it is no longer pushing them to the Lightning module; it pushes them to the DHCP server there.

The same kind of process happens for the data packets. If I start pinging now from the namespace, it creates the NAT and the firewall VMs here. It took about 72 milliseconds to create those two; the first packet made it to the Lightning module and then through the service chain, but the following packets are sent directly over the shared memory that leads to these NFVs. And this is basically the demo. We haven't optimized the memory usage or anything like that, and we are using basic grant-table shared memory here, so it's not the zero-copy version yet, but that's where we are heading — this is our current implementation of things. It is heavily inspired by the Jitsu experiment that was mentioned this morning; we are reusing a lot of that code, and the existing Mirage code from the Mirage GitHub repositories. If you have any questions for me or my colleague, I think we can take them now.
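For reference, a subscriber record of the kind described in the demo might look roughly like the JSON below. The field names are invented for illustration; only the ingredients — the subscriber's MAC address, a 32 MB packet I/O unikernel bound to two bridges with external and internal addresses, a per-subscriber DHCP pool, and a NAT-plus-firewall chain — come from the description above.

    {
      "subscriber": "00:16:3e:aa:bb:01",
      "packet_io": {
        "memory_mb": 32,
        "bridges": ["br-ext", "br-int"],
        "external_ip": "192.0.2.10/24",
        "internal_ip": "10.0.0.1/24"
      },
      "dhcp": {
        "range": ["10.0.0.100", "10.0.0.200"],
        "router": "10.0.0.1"
      },
      "chain": ["nat", "firewall"]
    }

A deploy script would push a document like this into Irmin, and Lightning would read it back to decide which unikernels to boot when that subscriber's first packets arrive.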
Right now we manage to get something like 80 megabits per second through it with iperf, but we have had this running for, what, one week now? We are just starting out with this, and ramping that up is one of our next steps; we are nowhere near the ClickOS numbers at the moment. Nor have we started optimizing the VMs, so we can't do 10,000 VMs with this yet.

The EVR product, as of today, is OpenStack based — a traditional VM-based product. It has a traditional router split into VMs that run the RP, the control plane, and then the forwarding plane, and they use DPDK and that sort of thing. The idea here would be, since we are not using netmap, DPDK or SR-IOV here, to hook this kind of thinking into that. As the slide showed, what we want to optimize are these memory circuits to the services: orchestrating these things and bringing fast feature velocity to the existing router platform through that. In that sense, these red arrows would be what leads to the EVR, for instance. And once you have these memory circuits, when you disaggregate the data center you don't want to bounce back through those red paths to figure out where to go next. Yes, yeah.

I have to say that this demo is running inside VirtualBox on this laptop, so we get a bit better performance on real hardware in the lab. But yes, we get something like 0.0 milliseconds through this kind of simple chain. And with the HDS, this hyperscale cloud, you disaggregate the memory, so if you go through the memory you are already going over fiber optics or something like that; that's the architecture.

But is this all inside one hypervisor? No, no — the virtual CPE basically takes most of the CPE functionality into the cloud, leaving at home only a minimal CPE with the interfaces. So, for example, if the child is connected to the internet, the NFV chain that will be created for the child will include, for example, parental control; if not, it will be something else, and so on. You can offer services per user, per device, per application, or per any combination of those — that's what the virtual CPE allows the operator to do. Because everything is running in the cloud, they can also offer new services much faster, instead of having to change your CPE and ship you another CPE in order to run the new service. And this is starting to be deployed; it's not something that is just on a slide. The technology has matured enough that it is now being trialed. No, the backhaul is still MPLS. You can keep the VPN, for example, with that particular functionality on the CPE. The encryption doesn't change, but you can reprogram it with SDN if you need to change it.

Okay, we're taking our final break. We still have two more talks to come, so come on back in about 15 minutes and we'll reach the conclusion of today's activities. Thank you. Thank you.

Good afternoon, folks. My name is Amir and I'm going to talk to you about one of the projects I'm working on related to unikernels: unikernel.org.
And I'm wearing my Unikernel Systems hat today. I work on a number of open source things, but as some of you may have noticed from the news yesterday, that's now also part of Docker. Briefly about myself: I've worked mostly with MirageOS, and what I tend to do there is herd cats — I make sure that the community side of things works well and that we're having regular chats about the kinds of things we're working on. I've also started looking at what's going on in the wider unikernel community. As for my background, I used to be a physicist, then I was a neuroscientist for a while, I've been in the Cambridge Computer Laboratory for four years, and I have experience from startups and big companies as well.

And this year, I hope, is going to be the year of unikernels. Big statement, of course. But if we look at what's happened over the last year, we've had a lot more people writing about what's going on with unikernels — some of those pieces are now interesting to read in hindsight, given yesterday's news. Lots more people are interested in this technology, what it could mean for them, and generally what's happening in this space. As part of this, we've also had new implementations being released: runtime.js and IncludeOS both came out towards the end of last year, and this is really, really exciting. There are now at least 10 or so projects that I know of, all taking this approach of building these systems either by thinking from the ground up or by taking existing pieces of software and making them more usable as libraries. We've heard from a number of them today as well — Adam from HaLVM and Richard from MirageOS — and a number of others are also out there; some of them weren't able to make it today, although I did ask if they could come and talk. So it's great that there's so much activity, so many people trying out these ideas and coming up with their own ways of moving forward. This is awesome; it's fantastic to see other people trying to do this.

I know there have been conversations between some of the projects about issues with TCP and what they have learned while building their versions of the networking stack. There's now choice out there, and that means there's lots of opportunity both for the people building and implementing these things and for learning from each other while building. And here's an example of cross-project pollination. Because we're all working in the same kind of area, people are trying things out, and learning from each other is the best way to do things, especially when those things are common. For example, MirageOS has some deploy scripts for EC2; they were kind of bashed together. Someone from one of the other projects came along, took a look at them, and had a go at making a launch script for Rump Run. They discussed it on the Rump Run mailing list, refined the script and made it much better, and then it got imported into Rump Run and became part of that project. That shows how something that started off in one project helped a different project and ended up being better for everyone. There's also other work going on where MirageOS, in this example, is trying to run on bare metal using components from the Rump Kernel project, and great progress is being made.
And this is a case of two different projects coming together to try and achieve something. From our point of view this is fantastic: we're all working on these different things, we can all talk to each other, figure out what's going on, and combine things as and when we need them.

But there's a downside to this. With great choice comes great discombobulation. People get confused: someone out there asks, what's this unikernel thing? What does it mean for me? Where do I go to find information? And then they discover there are about 10 projects, each with its own way of doing things, maybe 10 different ways of building stuff and 10 different environments you might need. So it gets a bit complicated, and you end up having discussions like this on the internet — essentially someone saying, well, you just need one of them to win, don't you? And I can pick on this because I know Gareth very well; we talk a lot. His point is that it would be easier for this whole movement to do much better if there was just one of them — if one of them wins out, wouldn't that be better? Is that a question? I see what you mean, yes. So the point of this tweet, essentially, is that it would be easier for the whole thing to move forward if there was one project clearly ahead of the others.

But all of those projects are making different trade-offs. For example, we've heard from MirageOS today, which is taking a clean-slate approach to everything: all the libraries are being built from the ground up, that's the principle. There's a lot to be done, a lot to learn from that, and a lot of benefits — and of course there are trade-offs. Rump Kernel takes a different approach, which is to take all the solid components that exist in NetBSD and make them available as libraries; there are trade-offs there as well. And you, as someone writing applications, should be able to choose, with your own trade-offs, which pieces are going to be most useful to you. So having these multiple projects out there is really important, and it's important to nurture them. And because a bunch of the projects are language-specific, it would be really sad if this became a proxy language war; that's not going to benefit anyone, and we've been there before.

So what do we do about it? We all want to have our own projects; we all want to work on the things we're working on because we have certain needs and certain goals we're working towards. But unikernels are a thing: people are asking about them, people want more information about them. We all want to get more users on board, more developers, more contributions. So we have to do something to help balance both, and this is one approach to that. unikernel.org is a community website. It exists to help all the projects gain more users, more growth, more contributions, and to actually help solve some of the common problems. devel.unikernel.org is a place where there's a forum — which works very well with email, before anyone asks me about that — and where you can grow your project's communications. It's community driven, which means essentially we get to decide what direction it goes in. It's just getting started, and the aim is for it to be a single entry point.
So when people want to find out what unikernels are, what the point is, what the benefits and trade-offs are, this is the place we can start pointing them to. It will start to increase in value in terms of helping people understand what's going on in this ecosystem as well; it will provide a window into what's going on with the other projects. It should also ease the process of bringing everyone in, as I just mentioned, across all of the open source projects. And importantly, it's not just a community-facing website: this should also help us start solving common problems around configuration, deployment and monitoring — all the things we are all going to have to solve if we want to have deployments of our unikernel projects.

So this is what it looks like. The first thing you'll notice is that I am not a designer, so it's a fairly simple site. There's information on there now, and it will go through a redesign. I'll just quickly show you the site here — if I can find my mouse — there. Great. I'm on the wrong side. I'll just show you what the site looks like. Let's try that. It's fairly basic because it's mostly about the content. We have some information so people can get a quick overview of what's going on. All the projects are listed here, so if you're interested in following any of the projects you've seen today, you can just go to unikernel.org/projects; they're all linked from there. And if you know of one that doesn't happen to be there, please do send a pull request. There's a resources page, because many of the projects also write papers: this is one about the Rump Kernel project, and here's one that talks mostly about MirageOS. So this is also a place to highlight the work that's going on, and we help highlight the individual projects as well. So yes, that was yesterday's news — and IncludeOS, when it was newly released, also volunteered a post for this site, which was shared with a lot of people. That was great, and it's a sign that this is working.

And this is the forum, where essentially we can comment on blog posts and share information. It's possible to set up categories on this site — IncludeOS is just thinking about doing this — where a category will effectively work like a mailing list for a project, but it's easy for them to then engage with something across a different project just by looking somewhere else or tagging someone on a particular question. So it's a little bit like mailing lists and GitHub issues, if you're familiar with both of those. Okay, let's go back to the slides. If you have questions, please do shout out. So that's essentially the website itself, and this is the devel part, the web forum.

So what happens next? We are getting better designs for the site, and we will have some form of branding for it as well — I've noticed that when journalists write articles about this, they like to have a graphic, so we'll come up with a logo. And I would like all the people who are working on unikernel implementations to discuss what needs they have: essentially, get involved in the forum, say what you're working on and what would be useful for you, and we'll see what we can do to help. There have also been discussions — I know some have taken place on the Mirage list, and perhaps others — about joint unikernel events.
For example, the idea of a unikernel install party has come up. This is where people from one project want to find out what it's like to get started with tools from another project: a bunch of the MirageOS people want to try out another project's tools, sit in a room together and actually get started, or people from different projects gather together to help each other get started with the projects.

We can also start working towards shared infrastructure. unikernel.org doesn't have to be just a website; it can also be a bunch of stuff behind the scenes, essentially serving things for people. For example, we could set up some kind of continuous integration system that helps deal with all the packages from all the different projects. That's a discussion we can have with everyone, to see how useful it would be and what shape it should take. And essentially it should grow as the community grows: if more unikernel implementations arrive, we can work with them to figure out what their needs are and incorporate them into this ecosystem. The idea here is that if all the projects do well, then the whole movement will do well.

So here's what I need from all the people working on unikernels: essentially, we need you. Get involved with the website. Sign up to the forum and raise your questions. And if you have information or content that you think should be on the site and isn't there, either send a pull request, or comment, or send me a note and we'll start working on it. And so, back to the tweet I pointed out earlier — the idea that there should be only one unikernel project that succeeds — essentially, it's this site. If this site works, it will raise the tide for all the projects, and this movement has a chance to actually become mainstream and help people write software in a different way. Any questions? Go ahead.

Standards — and I'm talking from a personal viewpoint — come about in two ways. You either start doing stuff, see what works across projects, try things, and iterate your way towards something; or, if there are already too many things out there, you essentially have to get everyone in a room, sit together, talk lots and try to figure it out. What I'm hoping is that, since we're all still relatively early, we can work together and iterate our way to something that is common for everyone; that's a way to achieve something like standards without having to have lots and lots of long meetings — we actually produce things that work for each of us. Yeah, and that's the kind of thing we should work towards, through the forum and by meeting each other. Go ahead.

Excellent question. At the moment it's the people who are working on those projects. If someone wants to submit a pull request to the site, it will just be looked at and merged. If someone has a suggestion for some infrastructure that they need, we'll figure out how to make that work. In terms of governance right now: a few of us made the site, we came from Unikernel Systems, and that's now part of Docker. So Docker will support this site going forward, but it is still a community-driven site. What does control mean? They'll have some influence, because Docker tooling is something that will be useful — so the MirageOS and Rump Run demo that we did was useful.
So that's something the people who are working on those two projects will be interested in. But in terms of control, and telling this community what's going to happen — how significant is that influence? Well, that depends on what happens with the people who are working on the unikernel implementations right now. If the community gets involved with this, then it's the community that's driving it.

Excellent question. Not yet. You'll notice an issue on the GitHub issue tracker — this is all on GitHub, so it's all available to look at. And yes, we want to run it on a unikernel, but not just one of them: I want to be able to load-balance across all the different unikernel implementations. I know how to turn this into a MirageOS unikernel, for example, because I've done that before, but I would also like to be able to run it as a Rump Run unikernel using nginx, and perhaps also as a HaLVM unikernel, and essentially have something that load-balances across all of them, so that when someone visits the site, it serves them one of them. That would be really cool to get going. But at the moment, no, this is not running on a unikernel, so if you want to help me get it onto one, that would be great. Any more?

Yes, I've talked to about half of them now, and they're happy with it. IncludeOS, for example, contributed a blog post to the website, which is fantastic; MirageOS is keen on this; contributors to the Rump Run unikernel project are also keen. So there are people who are happy with this, and the forum that I set up, for example — I didn't set that up without first checking with those people that they would actually be interested in using it. Yes, people are involved in this. I'm pointing at the tweet; I should also point at this: I'm also interested in getting feedback from people on this talk and the things I've said, so if you go to that URL you'll see a little form where you can leave me comments. Any more questions? So, how many of you had come across this website before? Wonderful. How many of you are now going to go visit it and sign up for the forum? If there are no further questions, I'm 10 minutes early.

All right. So let's save a bit of time so we can go for beers a little bit earlier — or should we just wait for 10 minutes? Wait, or go for it? Raise your hands. Go for it. Okay.

So, my name is Lars Kurth. I'm the community guy at the Xen Project, and I also happen to chair the Xen Project Advisory Board, which funds the Xen Project within the Linux Foundation. Russ asked me to give you a little bit of an update on what's going on in the hypervisor — maybe a little bit of introductory material — and to try to make the whole thing relevant to unikernels as well, which is actually quite hard to do in 30 minutes. So if it doesn't quite flow and there are some gaps in between, that's because I had to make choices; I think it more or less works. As I'm a community guy, I wanted to share some stats with you which I find quite interesting. What this graph shows you is the recent hypervisor release history, and it shows how, over the last few releases, we've actually managed to deliver more features per month — and that trend seems to be continuing.
So that's good, because more features means more good stuff for all of you to use. If you look at hypervisor git commits, you see something similar. It was actually really interesting that for a long time we were fairly stable, and then last year things jumped significantly — almost 40% more commits than in the year before — and I see that trend continuing. What's really interesting is that this shows more innovation, but at the same time the demands on the project are also changing. We're doing more stuff, but everyone in the ecosystem requires that the whole thing is of better quality and that we deal a lot better with potential security issues. And the effect of that has been that it's harder to get code into the project, particularly for newcomers, where it can sometimes be a bit disheartening because the number of review cycles in those cases keeps going up. So that's just setting the scene a little bit: more innovation, more things happening, and more goodness which hopefully you can then benefit from.

Now to the introductory bits. First, a quick question: who's familiar with the Xen hypervisor and how it works? All right, probably about a third. So I'm going to go through some of the architectural material, interleave it with things that are relevant to unikernels, and throw some new things in as well. Xen is a type 1 hypervisor — I call it type 1 with a twist, because it's not quite like a type 1 — and what does that mean? At the bottom you have the hardware: memory, CPUs, I/O. Then we have the hypervisor itself, which handles setup and configuration, has the schedulers, and deals with memory management, timers and interrupts. Then we have the VMs on top. That's where the type 1 label comes from: the hypervisor sits directly on the hardware and abstracts it. If you look at a classical type 2, you would have your host operating system in there, and it runs the VMs in some way, shape or form.

So this is where the twist comes in. The first virtual machine we have in a system is Dom0 — it's called Dom0 because it's the first one that gets started — and it runs the host kernel, which is Linux, or it can be one of the BSDs. The main purpose of that kernel is to provide access to the low-level hardware: the device drivers, the IOMMU and so forth. It is also the main entry point for interacting with the outside world: we have these things called toolstacks, which interact with your cloud orchestration stack, or a command-line tool, or whatever — that's how you control everything in the system. Then we have your typical guest VM — I'm just showing one — and it might have a guest OS and your applications in there, or it might be a unikernel. These typically talk via the PV interface to the device drivers for their I/O: there is a pair of PV front-end and back-end drivers, and the back ends talk to the hardware drivers, which talk to the hardware. PV?
Oh — paravirtualized, yeah. Anyway, you then just replicate this across the board. Now, that has some architectural advantages. It's thin, it gives you high density, and it makes Xen excellent for supporting a lot of small workloads such as unikernels. It's highly scalable — you can run a huge number of VMs — which again is very good for a lot of small unikernels. It's also good for something called disaggregation: the same idea behind unikernels is used within Xen as well. You can run individual device drivers, or QEMU, and things like that within different VMs, and in some sense those things are actually like unikernels. For example, we have these things called stub domains, which are basically QEMU running on top of Mini-OS, which is in effect a unikernel. Then, of course, this whole architecture has some security implications — the host OS is isolated within a VM — but let's not get distracted by that; it's well known that we have a lot of advantages in this area.

One other really interesting thing, which I'll get to a little later, is that you can plug and play with schedulers, and that could potentially be quite interesting for different unikernel workloads. I'll revisit it later as one of the areas where there has been quite a bit of development in the last two releases. And then there's the whole idea of PV, which is fundamentally the basis of paravirtualization and fundamentally the basis for unikernels today. It's really easy to implement a unikernel base against PV drivers, and that's one of the reasons why Xen is the dominant platform for unikernels today. It also gives you really fast boot times, which are necessary for a lot of unikernel workloads. If you have any questions at any time, just interrupt me.

So that's some of the basic stuff, and I wanted to look at some of the more interesting things to make this a bit more relevant. I wanted to look at the interface between Xen and unikernels, mainly from the viewpoint of virtualization modes, where they're going, and what the implications for unikernel development might be. And then there's going to be a slight call to action for some of you to think about this, engage with the project, and make sure this is going in the right direction — because the developers who do the very low-level stuff are a different set of people from you, and your needs might differ slightly; our core developers tend to anticipate what you need, but they don't always get it right.

So let's look at the different virtualization modes. We have PV — this is how it all started. Then hardware extensions came along, so the HVM mode was introduced, and we optimized it over time and made it better and faster. What's really interesting is that the optimized HVM mode is actually today the fastest virtualization mode on Xen if you look at benchmarks, while PV is the one with the lowest latency and the simplest in terms of implementation. So we wanted to get to a point where you somehow have both the speed and the simplicity, and this is how PVH was formed. I'll get to that a little later.
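As an illustration of how small a PV unikernel guest is from Xen's point of view, a minimal xl guest configuration for booting one might look like the sketch below. The name, kernel path and bridge are placeholders rather than anything from the talk.

    # Hypothetical example: a tiny PV guest definition for a unikernel image.
    name   = "dhcp-unikernel"
    kernel = "/srv/unikernels/dhcp.xen"   # placeholder path to the unikernel binary
    memory = 32                           # MB; unikernels need very little
    vcpus  = 1
    vif    = [ 'bridge=br-int' ]
    on_crash = "destroy"

The whole definition is a kernel image, a few tens of megabytes of RAM, one vCPU and a vif — which is why booting thousands of such guests per host is plausible.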
So why is this not showing anything? There — it shows you when each mode was introduced, in which Xen version, and today this is really where the majority of unikernel bases interface with Xen: both Rump Run and Mini-OS interface with this PV mode today. There are other unikernels, like OSv, which take a different approach, but I think the majority use PV today.

So let's look a little at the various trade-offs and where this might be going. This table shows how some of these things are implemented — I'll just ignore those first two. The columns at the top show what's implemented in which way. If you look at this column, this is where you get most of the runtime performance from: hardware virtualization of your normal operations. That's where a lot of the effort by chip manufacturers, Intel and so on, goes, and that's, to a large extent, why PVHVM is the fastest virtualization mode today. And then, of course, for I/O you just want to use the PV drivers, because doing that with emulated hardware is not so optimal. In the traditional paravirtualization mode we do this in software, and that's why in a lot of workloads PV is a little bit slower than PVHVM. This one here — booting and VM creation — is where the fast boot time comes from; in the HVM case this is done in software together with some hardware support, and you have the whole setup process, which is why it takes a little bit longer. So what we ultimately want is to get the benefits of both of these and marry them.

I wanted to explore a little why we went to that mode, what we did, and where this is going, because there are some changes coming in the next two releases. The motivation behind PVH was fundamentally that we want to be able to run Dom0 as an HVM guest to get all this extra performance. Right now, Dom0 always has to be a PV guest, and the reason for that is that, in the architectural model we had, HVM today requires QEMU to run. If you wanted to boot Dom0 as HVM, you would need QEMU to run QEMU, so you have a circular dependency there, and that's why you can't do it — and that's why today Dom0 must be a PV guest. The real motivation behind it is performance: if Dom0 is faster, then all the other VMs become faster. That's one of the reasons, and it gets us to the second point: basically, we want PVH to be as fast as HVM.

The initial approach taken there was to take an HVM container, which gives you all the hardware support, and run PV in it. The problem with this is that there are a lot of restrictions which come from the PV architecture. For example, everything is implemented in software, a guest can only run in protected mode, you can't change modes after the guest has been created, and all sorts of other things. But actually those limitations aren't really strictly necessary for that mode; they're just there. The whole concept was also designed before all these extra quality and security requirements came in.
So we implemented PVH as it is today, and then we found that if we wanted to get all the features in place, we would need to add a whole lot of extra code — new code that isn't shared with anything else that already exists in the system. So last year there was a big discussion in the community about how to fix this. If you look back at it, we took an HVM container and put PV in it. What we thought about instead was: what if we just took HVM and carved it down, removing the dependency on QEMU? That would mean it is exactly like all the other HVM modes, but it has no QEMU, so it doesn't have this circular bootstrap problem, it can be run as Dom0, and it should give you all the other benefits. From an end-user perspective it behaves exactly like what we did beforehand with PVH; it just implements it in a slightly different way.

This is already lined up for 4.7, and it's really interesting: we ran benchmarks and so on, it's very fast and it boots up relatively quickly, but it's not completely finished. Basically, what we've done is rename PVH, because it isn't really a PV thing anymore — it's really HVM with stuff taken out. And you can already try it in Xen today: you just use an HVM guest and choose a device model version of "none". Normally the device model is QEMU, right? So this doesn't use any device emulation, and there you are.

However, where are we with this? It's in Xen git today, and it will be in the next Xen release. We still have a few decisions to make about naming: how do we name it, how do we make it accessible to users? It does create a little bit of confusion, and we need to manage that. Then, in the next release: there's already a Dom0 prototype for FreeBSD, and it works, but there's no Linux implementation yet, so that needs to be implemented before this is fully available. There are some cleanup tasks required, and the interfaces aren't yet declared stable, but we're almost there; there's a discussion going on around validating all the interfaces, and the benchmarks are already very impressive — it's basically very fast.

So here's what I'd like you to do, those of you who might be using this in the future. Currently this mode isn't used by unikernel bases, but it seems to be the way forward. To make sure it really works, we need to make sure the architecture actually works for you before we declare the interfaces stable. So give it a run, try it, play with it, see what it does. Mini-OS and Rump Run haven't been ported to it yet, so we need to make sure that this can actually be done and is relatively easy. In a lot of ways this creates an opportunity to avoid duplication: if I remember back to the early days of unikernels, we had quite a few different Mini-OS clones — different branches of Mini-OS — and it took quite some time to resolve that and get everything back into one version. By starting this process right now, we avoid those kinds of issues.
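For reference, the "HVM guest with device model none" configuration mentioned above might look roughly like this in an xl config file. This reflects the Xen 4.7-era naming described in the talk; the guest name and kernel path are placeholders, the direct kernel boot line is an assumption, and the exact keys may change as the interface is finalized — which is precisely the feedback being asked for.

    # Hypothetical sketch of the QEMU-less HVM mode described above (Xen 4.7 era).
    name    = "pvh-unikernel-test"
    builder = "hvm"
    device_model_version = "none"          # no QEMU: the "carved-down" HVM mode
    kernel  = "/srv/unikernels/test.xen"   # assumes direct kernel boot support
    memory  = 64
    vcpus   = 1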
So my call to action there is: work with us, engage with the project, see how it works, and give the developers who are working on it your feedback. So that's that part — it didn't really flow that well the whole time, but hey.

I wanted to look at some other things. Performance — this ties back a little to the modes I was talking about earlier. In the last two releases we had an awful lot of performance improvements in Xen; I'm just listing some of them here and I'm not going to go through them in detail. These were in 4.5, and this is stuff which came in 4.6, and that really made a big impact on a number of workloads. If I look at the last published benchmark — you can get the detailed benchmark here — this was a 4.2-level kernel with Xen 4.5, compared against KVM. And it's really interesting that in most of the benchmarks Xen won clearly; it's roughly a two-thirds to one-third split, and this is even before a lot of the improvements that came in 4.6. So on the whole performance front I expect there will be a lot more activity, a lot more competition, and benefits for Xen going forward. But a lot of this is mainly HVM: all of these benefits are on HVM, PVH and so on.

Another area: schedulers. I mentioned earlier that Xen has the capability to plug in schedulers, and how does that really look? There are a number of different schedulers which have different properties, and in the hypervisor layer, per host, you can choose which scheduler you want. But it's actually even better than that: you can create a CPU pool and attach a different scheduler to it. Why is this relevant? Well, you could, for example, have a number of very latency-sensitive unikernels and choose a scheduler which works better for those workloads, and optimize your system in that way. So how does it look? Here you have your RTDS scheduler, here you have a credit scheduler, and fundamentally these VMs will use one scheduler and the other VMs will use the other. You can change the scheduler parameters per host, per CPU pool, or per individual VM, and that gives you a whole range of options for optimizing a complex system where you have lots of different parts which may be collaborating with each other.

This table shows the different schedulers which are available today. The ones worth looking at in the context of unikernels are possibly Credit2, because it has very low latency — and low latency is important for unikernels — and RTDS, which is really a scheduler aimed at embedded and automotive use cases; it may not be very interesting for you, but if it is, come and talk to me and I can try to figure something out and get you together with some of the scheduler people. On scheduler use cases: the RTDS scheduler is the one aimed at automotive use cases — latency-sensitive, with guaranteed quality of service, so you know that in a specific time slice a specific VM will get a specific share, and you can control this at a very fine granularity. Incidentally, this is also interesting for some cloud-based use cases such as gaming, video decoding and so on.
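To make the CPU pool idea concrete, a sketch of dedicating a couple of cores to latency-sensitive unikernels under their own scheduler might look like this. The pool name and CPU numbers are illustrative, and you should check the xl documentation for your Xen version.

    # unikernel-pool.cfg -- hypothetical CPU pool for latency-sensitive unikernels
    name  = "unikernel-pool"
    sched = "credit2"        # or "rtds" for guaranteed time slices
    cpus  = ["2", "3"]

    # Create the pool and move a guest into it (illustrative commands):
    #   xl cpupool-create unikernel-pool.cfg
    #   xl cpupool-migrate my-unikernel unikernel-pool

The rest of the host keeps running under the default scheduler in Pool-0, so the latency-sensitive guests and the bulk workloads no longer compete under the same scheduling policy.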
Another interesting area where we're starting to see this being deployed is by vendors such as Verizon, who offer guaranteed quality of service. They do this in hardware right now, by choosing different hardware buckets, but you could also do it in software using scheduler technology. So that's the part on schedulers.

I just wanted to give you a quick rundown on some other things going on in the community and then wrap up with a conclusion; I'm going to cover various areas fairly superficially. We've seen an awful lot of contributions around x86 hardware support in the last year and a half, predominantly from Intel. A lot of it is really about Xen supporting specific hardware features for specific use cases — around NFV, optimizing your system and so on. Rather than going into detail: there's a lot of activity in this area, and what we're fundamentally seeing is really interesting — in the past, Intel contributions hovered around maybe one or two percent of the entire community, and that has shot up to about seven or eight percent recently. The same is happening in the ARM space: ARM hardware is tracked very closely, the project also runs a test lab and will be adding 64-bit ARM hardware to it as it becomes available, and there will be a lot of activity there this year. For unikernels, I guess you're mainly interested in 32-bit and smaller devices today, but that's also going to provide future avenues for you.

Another area where there's some activity is graphics. I'm not going to show you the demo because we won't have the time, but there's been a lot of activity around graphics virtualization. It used to be called XenGT, but the technology is now called Intel GVT-g. What it basically does is allow you to share a GPU among different VMs, so each VM has access to a partition of the graphics hardware. Right now part of this is still out of tree, but most of it is in Xen today; it's in use by XenClient and the upcoming XenServer 7 release. Most of the Xen patches are already part of the tree, but there's some refactoring going on around Linux and QEMU which might affect the parts that are already in Xen. A similar approach is also being developed for embedded devices by GlobalLogic; they're reusing some of the code which Intel developed for x86 on ARM-based architectures, and there was just a blog post showing some demos focused around embedded architectures.

There's another interesting set of technologies called virtual machine introspection. What that does is this: Xen has an API which allows you, from within a dedicated VM, to monitor what's going on in another VM. These interfaces are called VMI interfaces; they started appearing in the last two releases, and they have some really interesting implications for how security can be handled in the cloud. Again, there's a link to a demo which you can watch once you look at the presentation later. So how does cloud security work today? You have a typical system with Dom0, and then you might have a number of agents running in each VM — let's say these agents are antivirus software, or some disk and memory scanner, or a network monitor. That, for one thing, introduces a management overhead, but it also means that all of those VMs might start doing virus checking at the same time.
That has an impact on your system — it's typically called an antivirus storm. So how does this VMI approach solve it? What you have is a new VM, basically a security appliance, that uses the VMI interface and monitors a number of VMs. You could have several of these security appliances, each monitoring a different set of VMs, which is relevant, of course, if you have a multi-tenant environment. There's also a hybrid approach possible, where you have leaner agents running in those VMs which then collaborate with the protection engine. There are products being worked on by a number of security companies — Bitdefender, Novetta and a few others — and some of these things will be coming to market this year. Now, you could run this in Dom0, but then you're loading more stuff into Dom0, which you don't really want; and you also want it optimized for cloud deployment, so what you really want is a VM image which can be easily installed and managed, rather than having to manually put something into Dom0 and then deal with it. And then there's this little lock symbol here, which is also quite important: that is XSM, the Xen equivalent of SELinux, and it allows you to control what that protection engine can and cannot do. There's a whole set of other issues and interesting conversations to be had around these topics, and I can only cover them superficially.

A lot of other security features are also currently being developed. Last year, you may remember, there was the VENOM vulnerability, and following that a whole lot of other QEMU bugs, and this has been rather annoying for a lot of people within the community. Basically, right now QEMU runs with privileges, so any QEMU vulnerability is basically a risk to Dom0. So what we decided to do is sandbox and de-privilege QEMU — and also the other emulation technologies within Xen — to protect ourselves against QEMU vulnerabilities and make the system better. That's a direct response, to eliminate a whole class of issues over which we don't have a lot of control, and that's coming in 4.7.

Another thing which is being developed — though this will take several iterations to complete — is hotpatching of the hypervisor. What that means is that a security patch can basically be wrapped up in a payload and deployed on a running Xen hypervisor instance. We can do this without rebooting the host, which means that things like cloud reboots will be a thing of the past. In fact, some cloud providers have that capability already and are collaborating with the community on an extensible framework and common technology to make this all happen; we're starting with some simpler use cases and will successively add less common ones.

Then another thing that's being worked on right now is better configurability. Again, if you look at security, the more code you have — even if you don't use it — the bigger your attack surface is, and not everybody in the Xen community uses the same set of features. So ideally what we want to end up with is the capability to switch off and remove code which isn't being used, both at run time and at compile time, to reduce the risk. There's some work going into 4.7 with Kconfig, but beyond that we're going to look at the whole architecture to come
And we're nearly at the end. So actually, I hope I could show that the project has a history of proactively innovating, that the rate of innovation is increasing, which means more features more quickly, and that the demands on the project are shifting quite a lot, with a lot more focus on quality and security. What I've been dealing with particularly in the last few years is that this has actually created tensions within the community, because we have a lot of newcomers who want to get their stuff in quickly, while a lot of the more established players want to ensure quality and security stay at the level they were, or even that that level keeps increasing every single year. That has led to conflicts, and to some people being more vocal about this within the community. Let's say that as a community we haven't made conscious decisions about trade-offs, about what's more important than what, and we're kind of going through this process right now: deciding when a new feature is more important than quality, and whether we can have mechanisms to facilitate the whole process. That leads me to the point that the project does have a track record of adapting to challenges and criticism; some of the security things I've been showing you are a direct consequence of some of those challenges. And I think it's also the best platform for unikernels in the cloud, because, A, it's basically running most clouds today, it has a lot of unique features, and there's a lot of innovation. And that's it. Any questions? Please go ahead.

Well, the thing is, as an open source community this doesn't work, right? Particularly in the world as it is today: if you don't innovate and you don't do new stuff, then you don't exist, you don't get any press releases, people think you're not relevant anymore, and then there's a new kid on the block with more features. And how do you solve that? This actually comes back to this whole question of finding the right trade-off between different things, and I think this is generally a challenge which we're facing as an industry today, because all successful projects will eventually face it: if you add new features, you're adding new stuff, you're growing a code base, and if you do stuff too quickly it might affect quality. There are all these different trade-offs you have to constantly look at. And one of the lessons I learned is that a lot of the time this stuff happens implicitly, without actually having had a discussion or a conscious decision about it, and that's when your problems start. So you have to bring some of these things out into the open and then make some conscious decisions; then you get buy-in and people are actually happy with the direction you're going in. And I guess at some point all these unikernel communities will have to face similar issues. It's always quite nice when you start doing stuff, but once you have users, particularly a big corporate user, then things become different. Any more questions?

Well, there are a lot of tools out there. You can't formally prove it, right, particularly in a language like C; you might be able to prove some of it in some more modern languages. You can use fuzzing techniques to find issues, and we do know that some of the big vendors which use Xen do use fuzzing to try and identify issues.
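As a concrete example of the fuzzing the answer refers to, here is a minimal libFuzzer-style harness in C. The parse_packet function is a made-up stand-in for whatever component a vendor would actually target (a device model, a parser, a hypercall path); only the LLVMFuzzerTestOneInput entry point is real libFuzzer convention.

```c
/* Minimal libFuzzer-style harness (build with clang -fsanitize=fuzzer,address).
 * parse_packet() is a made-up stand-in for the real code under test. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

static int parse_packet(const uint8_t *buf, size_t len)
{
    /* toy parser with a deliberately sloppy length check */
    if (len < 4 || buf[0] != 0x7e)
        return -1;
    uint8_t payload_len = buf[1];
    uint8_t copy[64];
    if (payload_len > sizeof(copy))          /* bug: ignores the actual len */
        return -1;
    memcpy(copy, buf + 2, payload_len);      /* can read past the input */
    return copy[0];
}

/* libFuzzer calls this with generated inputs and reports crashes and
 * sanitizer findings, such as the overread above. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    parse_packet(data, size);
    return 0;
}
```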
What actually typically tends to happen is that some of them start a little project where they focus on some components, and then suddenly we get a whole bunch of vulnerabilities which were discovered by some of those fuzzing tools. The same happens for other open source projects: you get a whole batch of security vulnerabilities that come in at once, and then, if you're a popular project, the media might pick up on it and somebody concludes that the area is actually bad. But is it really? It actually just means somebody is looking for issues. I have a talk on Sunday which looks at this whole topic, comparing how different open source projects deal with this. It's actually shockingly bad in many ways, how we as an industry deal with this issue; there's a whole range of different approaches and they're often not really compatible. But that's the focus of my talk on Sunday.

On the individuals: the overall number of contributors is increasing. What keeps on happening is that we keep getting a spike of people who look like individuals, and then later on it turns out that it was maybe a start-up in stealth mode. But there are a number of other interesting trends which have happened particularly in the last year or the last few years. Traditionally Xen has been a very European- and US-focused project, predominantly European, but there are a lot more contributions now coming from Asia, in particular China: the number of Chinese developers has gone up over the last three years from 2% to almost 12%. And actually I'm regularly going out to China giving training, because there are also interesting cultural challenges in working together which sometimes lead to problems. Another thing I have started seeing, particularly in the second half of last year, is that a lot more security companies are starting to contribute to Xen. In some cases we know that they're building products, and in some cases we know that they're doing something but we don't exactly know what, so it will be interesting to watch that space as well. And then suddenly some new company names start appearing. One which has started to become quite active recently, and which is probably also looking at unikernels potentially, is called Star Labs. There are a couple of other ones as well which I know exist, but they haven't come out publicly yet, so I'm not going to share. And yeah, that's some of the insights; there are some more, but that's it for now. Any more questions?

Let's give Lars a big hand. And just for closing, I want to say thank you all for coming out and spending your Friday with us. Was this worth your time? There we go, that's what I wanted to see, that's cool. Thank you again. I do have a talk at three o'clock tomorrow, Lars has another talk on Sunday, please feel free to drop by. Thank you very much, and have a very good SCALE conference.