and then what we're going to introduce in the next year. I have about 25 minutes, so we have a lot to talk about and that's probably not going to be sufficient, but I hear we can do a workshop on one of the other days with a little more time, where we can sit down and talk about whatever you want if you have any further questions. Anyway, in this talk I just want to cover a couple of really important points to get you up to date on what's coming in the various distributions. Basically all the distributions have now adopted systemd, as you might know: the next Debian release will use systemd, the next Ubuntu release will use systemd, Fedora has been using systemd for a while, RHEL 7 shipped with systemd. It's pretty much everywhere. There are only a couple of holdouts left, basically Gentoo, which doesn't use it by default but includes it, and Slackware; those are pretty much the only bigger distributions that still do not use systemd. Anyway, let's get started. Let's jump right in with kdbus. You might have heard of kdbus; I have been talking about it for a long time already, like for the last two years I did talks about it. kdbus is an IPC system for the Linux kernel that we have been developing closely with systemd in mind. Just to get you up to speed on what that actually is: D-Bus is an IPC system we have been using on Linux for quite some time; GNOME has been using it and KDE has been using it. Most of the basic building blocks of the operating system actually speak D-Bus as an IPC system, so with it you can issue commands to the lower levels of the stack and say, please do this, please do that, and things like that. D-Bus was introduced 10 years ago, and kdbus is in many ways a reinvention of the same semantics, but implemented in the Linux kernel itself.
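To make the D-Bus model concrete: on a systemd machine you can inspect the bus and issue a method call yourself with busctl. A small sketch (requires a running systemd; the service name, object path and method shown are standard systemd D-Bus interfaces, and kdbus changes only the transport underneath this API):

```shell
# List every peer currently connected to the system bus
busctl list

# Issue a method call: ask systemd (PID 1) for the object path of a
# unit. Arguments: service name, object path, interface, method,
# signature ("s" = one string), and the argument itself.
busctl call org.freedesktop.systemd1 /org/freedesktop/systemd1 \
    org.freedesktop.systemd1.Manager GetUnit s dbus.service
```

The call returns an object path ("o" type) that you can pass to further introspection calls; this request/reply pattern is exactly the semantics kdbus moves into the kernel.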
Just last week we posted the fourth revision of the kernel patch set, and as it looks right now that might be the revision that goes in for good. As that happens, the userspace side of kdbus is in systemd, so systemd sets all of that up. To understand what the main point of all this is: most modern operating system designs started out with a good IPC system; if you look at microkernels and things like that, they started out with a good IPC and designed everything around it. On Linux, because it's not a very modern design, we never had a proper IPC system. What we had were IPC primitives, like sockets and FIFOs and things like that. With kdbus we finally fill this gap and introduce a real IPC system that the kernel itself understands, with buses and method calls and so on. Anyway, it's a powerful kernel-level local IPC. Let's jump to the next topic, which is containers. Containers have become pretty well known in the last year or so because of Docker; everybody knows Docker. I know managers at Red Hat where every second word is Docker these days. With systemd we felt we had a duty to make sure that containers are integrated well into the lower levels of the operating system. This isn't really news in many ways, because older operating systems, for example Solaris, had really good container support already 10 years ago with what they call zones. With what we've been working on in systemd we want to close that gap: support, in the core operating system itself, for the stuff that Docker has popularized but which is actually much, much older. More specifically, there are three components; the first one is systemd-nspawn. systemd-nspawn is a mini container manager. In a way, if you know Docker, nspawn does kind of the same thing.
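A sketch of how this looks in practice on a systemd host (the directory path is just an example; on Fedora you would populate the tree with dnf --installroot instead of debootstrap, and both steps need root):

```shell
# Install a minimal OS tree into a directory
debootstrap stable /var/lib/machines/mydeb

# Boot it as a container: with -b/--boot, nspawn starts an init
# inside the tree, so the "machine" boots, runs and shuts down
# much like a real one
systemd-nspawn -D /var/lib/machines/mydeb -b
```

If the container's init crashes, only the container restarts; the host keeps running, which is exactly the development workflow described next.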
We originally wrote this mostly for our own testing purposes, simply because we write an init system. An init system is the component that brings up your computer and makes sure it stays running. But if you hack on that, it's really annoying that every time you make a mistake your entire machine stops working. So we wrote this container manager so that we could develop our operating system inside a container, and if we made a programming mistake we wouldn't have to reboot the actual physical machine; it would completely suffice to reboot the container we were developing in. That's how systemd-nspawn was born. But it became really useful over the last years, because we kept extending it, for testing purposes and for making sure that systemd works nicely inside containers and can host containers nicely. By now it's actually a pretty full-grown container manager that in many ways can do the same things LXC can do, the same things Docker can do in a way, but all much, much simpler and right in the low-level operating system. There are two other components related to this, machined and machinectl. Most people probably don't have day-to-day contact with them, but they basically expose the concept of a container to the rest of the operating system. This specifically means that, in our opinion, all the various tools we have in the operating system should be aware of what a container actually is. This starts with simple things like ps; you know ps, the Unix ps command that shows you processes. We wanted a column in ps that shows you which process belongs to which container, and that has actually been implemented for a couple of years already, I think. But it goes through the rest of the stack as well: we wanted to make sure the container concept is available, for example, in the IPC system.
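This container-awareness can be seen directly on a systemd host; a small sketch (the "machine" output column of ps requires a procps-ng built with systemd support):

```shell
# List running containers registered with machined
machinectl list

# Show which container, if any, each process belongs to
ps -eo pid,machine,args | head
```

The same information is also derivable from each process's cgroup, which is how the tools implement it.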
I talked about kdbus earlier: with it you can connect not just to the bus of the local host, but also to the buses of any local containers. All of systemd's commands equally work with containers as well. If you use systemctl, which, on a box with systemd, is the primary interface to systemd, it has a switch, -M, that allows you to connect to any local container. There's also machinectl, with which you can see not only your local host but also all the containers running below it. Again, many of these concepts aren't really new: Solaris did this a long time ago, and basically every tool in Solaris was aware of the concept of zones from the ground up. We want to close that gap for us as well. Something we very recently added is the systemd import tool: systemd-importd is basically how we download, import and export container images. There's a big difference from things like Docker; this is not really an attempt to reimplement what Docker is doing. It's more that we think it's not a good idea to introduce a new container image format; we just want to make normal images that already exist usable as containers. More specifically, many of the distributions provide cloud images, which are images that you can run on KVM or some other virtualizer. We thought, well, let's just open those up for containers and make them bootable as containers. That's the philosophy we've been following with nspawn and importd and these kinds of things: they focus on not introducing any new format or any new concept; they just make use of the images that are already out there and run them in a container context instead of in a virtual machine. The takeaway here is really that we want containers to be part of the core operating system concept itself; they're not something bolted on top of it.
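A sketch of that import workflow on a systemd host (systemd 219 or newer; the URL and machine name here are purely illustrative, and --verify=no skips signature checking for the sake of the example):

```shell
# Download a distribution cloud image and register it with machined
machinectl pull-raw --verify=no \
    https://example.org/images/cloud-base.raw.xz mymachine

# Boot the imported image as a container instead of a VM
systemd-nspawn -M mymachine -b
```

Note there is no new image format anywhere in this: the same raw disk image would boot unmodified under KVM.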
They're built right into the operating system itself. The next topic is the per-service firewall support that we will add very soon. If you administer a Linux box, you pretty surely know your iptables and how to configure a firewall. We looked into the problem of firewalling and tried to figure out what's really missing, and it's some kind of hookup between services and the firewall. In many, many cases, when people actually write down firewall rules, they write things like "port 80 may be accessed from the Internet". In the case of port 80, which is HTTP, that's easy, but beyond port 80 we believe that what you actually want to express is "Apache shall be accessible from the Internet" rather than "port 80 shall be accessible from the Internet". Then it doesn't really matter which port is used; what you actually express in the firewall is which service, Apache or whatever else, shall be accessible.
So something we'll add very soon to systemd is a per-service firewall. On the one hand we don't really want systemd to become the place where you configure the full firewall rule set; all we want to provide is the most basic connection between service management and the firewall. More specifically, it will boil down to one option where you can basically say firewall equals accept or reject or deny, which do what you might think, and the effect will be that all traffic generated by a service is directed to a separate chain in iptables, if you know iptables. Anyway, the takeaway is that we're closing a gap between local services and the firewall. It also has one nice side effect: we will be able to do traffic accounting per service. So if you type systemctl status to see the status of a specific service, then by using that firewall accounting functionality it will show you the traffic that this specific service has generated, incoming and outgoing.
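A sketch of what this could look like in a unit file. The Firewall= option name below is a hypothetical rendering of the planned feature as the talk describes it, not a shipped interface (in later systemd releases the accounting part eventually landed as IPAccounting=):

```ini
[Service]
ExecStart=/usr/sbin/httpd -DFOREGROUND
# Hypothetical per-service firewall switch from the talk
# (accept / reject / deny): traffic belonging to this service
# would be steered into its own iptables chain, where it can be
# filtered and counted.
Firewall=accept
```

The key point is that the rule is attached to the service, not to a port number, so it follows the service no matter where it listens.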
It's actually kind of cool. Then, another thing we have been working on, and will continue working on this year, is systemd-networkd. The way we define systemd these days is that it should contain the most basic building blocks that the vast majority of systems require, and we believe network configuration is one of them. So about a year ago we added this component, systemd-networkd, which is a network management solution that we think is a lot smarter than the previous ones: we sat down and tried to figure out what we actually want from a network management solution, and that's what we came up with. Nowadays there's a lot of functionality in it that the other network management solutions do not have; there are a couple of things NetworkManager can't do, for example running in initrds, in container environments, in embedded environments and things like that. Our philosophy with systemd-networkd is really that we don't want it to be a thing where everything is glued together with shell scripts; we want everything to integrate nicely, with proper APIs, C APIs. This had effects like, for example: we figured out that the existing DHCP implementations don't really integrate that nicely, and given that DHCP is actually a relatively simple protocol, we hence came up with our own implementation, which is actually really interesting because NetworkManager decided to use it as well. There are a couple of things coming up in that area too; for example, we will have a native PPP implementation. This is also hooked up to the firewalling, specifically masquerading and things like that. Previously, IP configuration, IP firewalling and IP routing were completely
separately configured from the actual physical interface, and we really wanted to resolve that and make sure you can configure them together. So yeah, it's growing. What we had in mind is that it should be good enough that you can actually build a router from it, and some people are doing that: Intel is working on an embedded router, like a switch, that uses systemd-networkd for the configuration. It knows all these modes like uplink and downlink, so you can say that the downlinks need to be connected before the uplink, and this kind of hybrid stuff is something NetworkManager couldn't do. So anyway, enough about networkd. Something else we have been working on is systemd-resolved. It's another little daemon, responsible for name resolution, name resolution being things like DNS and hostname lookup. The rationale for why we wanted to do this: previously, on Linux, basically every single application did its DNS requests inside its own process, which is a certain security problem, because every single process you run implements a little bit of a DNS stack, and nothing is properly cached. So our intention with resolved really is to have a small local daemon that can do DNS resolution and cache things, and that also does a couple of things that previously weren't possible. More specifically, it has support for multi-homed hosts in a smart way. The traditional problem being: on my laptop I'm connected to the Red Hat VPN, but I'm also connected to my local LAN. Traditionally this means I could either resolve the hostnames defined by the Red Hat DNS servers, or the hostnames on my local LAN, but not both, because you could only configure one set of DNS servers, so there would be problems. With resolved we kind of resolved these issues by allowing requests to always be sent to the DNS
servers of all interfaces in parallel, with the first positive and the last negative answer used, basically merging the DNS servers that way. So that, in short words, is what we're working on there. Another thing systemd-resolved does is LLMNR, which is a Microsoft protocol for local name resolution: machines on the LAN can just talk to each other and know each other by name without any further configuration. It's implemented by all the Windows systems, and we've implemented it in systemd-resolved. The background for this is actually containers again, because what we wanted is that if you run multiple containers on your local host, connected via some virtual network, they can actually find each other by name without you having to configure anything. There's also DNSSEC support, because we believe that on today's Internet the DNSSEC stuff should not be optional; it should just work and be there by default. And there's mDNS, the Apple Bonjour stuff, similar to what I talked about earlier. So the last thing I have is the Secure Boot stuff, something where we believe it's our duty to provide an operating system where you have a trusted path, where all the software that runs on your system is verified and nobody can modify the system without you knowing it. This is particularly relevant in a post-Snowden world, where data centers can't be trusted anymore, because the NSA, or whoever else gets physical access to the machines, can manipulate the operating systems, and the operating systems would just work as before and nobody would notice that they have been manipulated. So we think it's our duty to make sure operating systems can actually be locked down, so that the hardware refuses to boot an improperly signed operating system, and you have a complete chain of trust from the earliest hardware all the way up to the rest of the operating system, making use of UEFI Secure Boot. UEFI
Secure Boot has a bit of a bad reputation in the open source community, because it was always perceived as something Microsoft wanted in order to be nasty to Linux. We think it's actually a great opportunity for Linux, because it is not only a way to bind yourself to Microsoft; it's also a way to kick Microsoft, and the NSA, out of your computer, because you can basically say: only boot software that has been signed by the Fedora project. Or you can even take it one step further and say: only boot software signed by myself and nothing else. Anyway, something we merged very recently into systemd is gummiboot. gummiboot is a boot loader, a UEFI boot loader. The reason we did this is basically to get this chain of trust into place, so that we have a scheme where the gummiboot boot loader only boots kernels that have been properly signed, which contain an initrd that is also signed, and so on and so on, down to only booting from signed disks, so that you get the full chain. It's really awesome stuff, not only for clients; one of the reasons we are doing this is the trusted data center, where we would like to have a scheme where the machines in a data center can be verified end to end. And with that, my time is over, so let's have a few questions. Other than that, again, I think we can do a workshop on one of the other days, but I still have to register that, so I can't really tell you when that's going to be, and if you have any further questions we can