Good morning everyone. Our current speaker, Gernot Heiser, will be presenting "seL4 is free — what does this mean for you?"

Good morning. Great to see you here — good to see a few familiar faces, and I must say I'm very happy to be here for more reasons than one. So, seL4 is free — what does this mean? What is seL4? Not everyone may be familiar with this. It's a microkernel, the latest and greatest member of the L4 microkernel family, which has been around for 20 years. So it's based on 20 years of experience with building microkernels, some of which are in a fair amount of use. Probably somewhere between a quarter and half of you will have a phone that runs a Qualcomm modem chip, in which case one of our earlier L4 kernels will be on there. How many of you have an iPhone, iPad, iPod, whatever? None? Yeah, okay. So you have it as well, because the security co-processor of the iOS devices now runs a version of the L4 microkernel — one that came out of my lab about 10 years ago. So there are a few billion of the predecessor systems deployed, which basically means we have a bit of experience building kernels for real-world use. So this is L4 in general, and seL4 is part of that tradition, but with a few interesting extra bits. In particular, seL4 is the world's only operating-system kernel which can claim, with some degree of credibility, to be secure. That's a pretty strong statement, but it's actually backed up by pretty strong evidence — the evidence of mathematical proof. That's the only real way to guarantee anything about any code. All the standard software processes — code inspection, peer review, a thousand eyes and all that nonsense — are really just self-delusion compared to actually proving things correct. And of course the really cool thing is it was open-sourced half a year ago, and that's of course the only reason I'm here, and why I'm glad to be here. So what does this proof stuff actually mean?
So we have a bunch of C code — a pile of C code, about 9,000 lines, so it's a very small system — and, like any real operating-system kernel, it's written mostly in C, with just a few essential bits of assembler. This is the stuff we care about, right, where we want to convince ourselves that it has the right properties. What the proofs mean is: we have an abstract model of the kernel, and that's a mathematical artifact — a description, in a mathematical logic, of the functionality of the kernel. This completely describes the allowed kernel behaviour: it says, under any combination of inputs, how the kernel will react, and in that sense gives a complete description of the kernel's functionality. And what we have is a so-called refinement proof that the implementation — the C code — correctly implements its specification. What it strictly means is that any behaviour possible under the semantics of C for this code is captured by the abstract model, and in that sense the implementation is correct against this abstract specification — bug-free, etc. There are many implications, but just a few obvious ones: it's not possible for this kernel to have a buffer overflow; it provably does not use uninitialised variables or dereference null pointers; there's no stack smashing; you can't have code injection; there's no return-oriented programming — any of that stuff is just not possible, provably.
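Schematically — this is standard refinement notation, not the exact formulation used in the seL4/Isabelle proofs — the refinement statement is behaviour inclusion:

```latex
% Refinement as behaviour inclusion (schematic):
% every behaviour the C implementation can exhibit under the
% C semantics is also a behaviour the abstract spec allows.
\forall t.\quad t \in \mathrm{traces}(C_{\mathrm{impl}})
  \;\Longrightarrow\; t \in \mathrm{traces}(A_{\mathrm{spec}})
```

Safety properties like the ones just listed (no buffer overflow, no null-pointer dereference) can then be proved once about the abstract specification and carry over to the implementation through this inclusion.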
So that's nice, but of course what really runs on the machine is not the C code, it's the object code — the translated binary that's gone through the C compiler and the linker and loader and all that stuff. So what we really want is guarantees about the binary. As long as we only have properties shown about the source code — okay, that's fine, that's more than anyone else has — we still trust the C compiler not to stuff up, and we also implicitly trust that we use the same assumptions about the C semantics as the compiler. And C, of course, does not have a well-defined semantics — it's ambiguous — so we had to define a subset that is well-defined, and we can't guarantee that the compiler makes the same assumptions. In order to eliminate that risk, we also have a proof that the object code is a correct translation of the C source, and that eliminates the compiler from the trusted computing base. So we can use any odd compiler — we generally use GCC, we should probably be using LLVM by now, but it doesn't matter, we can use any compiler that's available. If the compiler has a bug, it will show up there, or if there's a mismatch in semantics, it will show up there, because that proof won't work out. So what that gives us overall is a proof that all these properties — everything that's implied by functional correctness — hold for the binary. That is a really strong statement, and it's the first time any of this has been done. The functional correctness proof, by the way, was finished on July 29th, 2009, so the open-sourcing happened on the fifth anniversary of that day. So that's all fine: we know that we have a model that's correctly implemented, and we therefore know exactly how this kernel operates. It doesn't guarantee that it does anything useful — in particular, that by itself does not allow us to make any claims that the system is secure or safe or anything like that. We need more than that. So what we have on top is the
classical CIA security properties — confidentiality, integrity, availability — and what we really want is to be convinced that our kernel can enforce these properties. That's in fact the case: we have a further proof that our abstract model allows us to enforce these security properties, and because of the proof chain below it, we know that's actually done by the binary. So the binary of this kernel, as it runs on hardware — at least on ARM hardware — will enforce confidentiality, integrity and availability, and that's the sense in which I claim this is the world's most, and probably only, actually secure kernel that's around. We've shown some other properties too. One is basically a safety property: timeliness. This is a prerequisite that allows us to do hard real-time on the kernel. What it gives us is proofs of safe upper bounds for, among others, interrupt latencies — in general, all kernel operations. So we know how long any kernel operation can take worst-case, and how long it can take worst-case for an interrupt to be delivered to the device driver. And again, at least as far as protected-mode systems are concerned, seL4 is the only one that can give you these guarantees. Anyone else who claims to have a hard real-time capable operating system is basically just hand-waving: the typical way they do it is to load up the system, hammer it with interrupts, apply a factor-of-ten safety margin, and claim that's your safe upper bound. Of course it's not — no one actually knows what it is, except us. So one question is: okay, how much sacrifice did you make for this — in particular, how much performance did you lose for getting all this assurance? It turns out the news is good here. I don't believe in trading performance for security, and seL4 is in fact the world's fastest microkernel. We outperform even our previous kernels, partially due to a more optimal design, but also because we can afford to basically optimise the crap out of the system, because
if you change something, we run the proofs through. If the proofs check out, then we know we're good; if not, they tell us where you need to debug — either fix up the proof or fix the bug. If you make a change, it can invalidate the proofs in two ways: either you get into a configuration which is not covered by the proofs, in which case you have to add more proofs, or you introduce an actual bug, and in that case the proofs cannot work out. By the way, feel free to interrupt me at any time with questions. Yes please — I'll repeat the question if necessary.

[Audience] This might sound like I'm being patronising, but is there a proof for the proofs? Because you said if you make a change, there might be a proof that you're missing — so how do you know, to start with, that you have all the proofs?

Yeah, okay, this is actually a really good question. Any of this stuff would be utterly pointless if it was pen-and-paper proofs, right? The proofs are machine-checked, and as a matter of fact the proofs are massively bigger than the kernel. The kernel is about 9,000 lines of code; the functional correctness proof alone is 180,000 lines — so we have a factor-of-20 blow-out in proof versus code. A proof of that size would never be correct unless it's machine-checked. So there is a proof assistant in which all the proofs are done, and it has a very small proof core that checks all the proofs. The proofs are done more or less manually — either completely manually or directed by the programmer, or the prover — but then they get checked by the proof-checker core for soundness in the mathematical logic, Isabelle/HOL. The question was what proof assistant we use — yeah, it's probably easiest if I repeat the questions.

[Audience] Will this talk show an example of some of these sorts of proofs and how that sort of thing works?

No, but I can show some of that offline — I've got some slides that show a lot of examples. But
I mean, I'm not proving anything myself — I'm a systems hacker, right, I'm not a formal-methods person. It takes a fair bit of expertise to do this sort of stuff; it's not for the faint-hearted.

[Audience] The assembly that you write for the low-level architecture — is that mathematically checked with the proofs?

This is in my caveats: there are a few things that are excluded from the verification at the moment. The assembly is in two bits: there's some in the initialisation code, and there's some in the context-switching code, and at the moment none of the assembler code is verified formally. It's validated in the traditional sense — we look at it and we test it. That's not a principal limitation, it's just that we haven't really got around to fixing all that up yet. But it's a valid observation.

[Audience] Is it radiation-hard?

Actually, I'll get to that — short answer, we're working on it. If I don't mention it explicitly, ask again.

[Audience] The purpose of open-sourcing it is for people to embrace and extend it, in terms of porting to different architectures — are there any architectural assumptions you've been making in this?

Okay, can I defer that question to later, when I talk about the development process?

[Audience] This one might also be deferred for later, I don't know, but I just want to know: you've got the exclusions — the privileged state and caches, multicore, and the timing channels. Are those completely excluded from any of the actual checking, or is there some kind of guarantee around those three?

So when I say privileged state and caches, that basically means we don't have a formalisation of a memory model yet, which means at the moment we don't have a way of knowing when we have to flush caches — that just requires programming intuition, and actually we have found bugs in the kernel exactly there. So that's really sort of a both interesting and
scary observation: it shows that basically anything that's not proven is going to have bugs, and so there's still a chance of those bugs lurking in the kernel in exactly those bits. Obviously memory models are quite complex; it's something we are working on, but you need a formal memory model of the hardware. The other thing is that the MMU is modelled at a high level, not quite the ISA level — there is some high-level model of how virtual memory works, and we prove against that. It's also ongoing work to really get that down to the level of describing MMU operation at register-transfer level, sort of thing. Multicore: we have a high-level concept proof, if you like, which we know we can execute on eventually. So we're basically working down this list — initially the list was about twice as large, and about every year I can take one dot point off. On the initialisation bit we're not doing anything at the moment, because it's just bloody boring, right — it's pure engineering. We had an initial proof of initialisation of a high-level model early on, which is not current anymore, but we know how to do it; basically, if someone actually wants to deploy this, they can give us some money and we'll do it — from the research point of view it doesn't buy anything for us. Timing channels: note that I restrict this to timing channels — our isolation proof in principle excludes storage channels, and that's also something that's really unique: this is the only OS kernel which you know is free of covert storage channels. The general belief in the systems-security community is that you can't completely get rid of timing channels, but we have ongoing work, which I'll touch on later as well, and it turns out the kernel design actually enables a lot of that. Okay, let's carry on. What is L4 not? seL4 is not an operating system — it's an operating-system microkernel, and to
get a complete operating system you need additional stuff, and all the interesting things that make an operating system are, in a way, pushed out to userland. That's common to all L4 microkernels. So it's basically your problem, right — I have nothing to do with it. Well, not quite, right: in order to build stuff with it you need at least some of these things. It turns out these days, by just running Linux in a virtual machine, you can actually get a lot of the traditional OS services. For example, you can have a file system: you just encrypt stuff before handing it to Linux, and then it can be safely stored in a Linux file system. So a lot of the traditional OS services we don't necessarily need, depending on what kind of system you build. The point is that because all these things run as user-level processes, they're encapsulated by the strength of the isolation properties that are enforced by the microkernel, and that gives you a really strong kind of peace of mind: if one of these things blows up or misbehaves, the damage it can cause is limited, and we can actually analyse the system formally to establish the extent of the damage that can be caused — which is also really unique. Obviously, isolation by itself is not good enough, because that doesn't allow you to do anything; you need communication. So we have high-performance IPC channels which are controlled, in the sense that we have strict enforcement of who can communicate with whom, and we can make that subject to a system-wide security policy. In that sense we can control information flow in an seL4-based system. So what's different between seL4 and other L4 microkernels, besides all the verification story? The biggest difference is the way we do resource management. From the OS-design point of view, this is where seL4 really broke new ground, in the sense that all memory management gets exported to user level, and in a very complete sense: the kernel, other than when it boots up, has no
memory allocator. It just allocates static memory at boot, and that's it — everything else needs to be supplied by user level. This is really the core of some of the strong isolation guarantees you get. The way it works, in a high-level view, is: the kernel boots up, it grabs memory for its own data, and then the rest is handed off to user level. There's a protocol to start off an initial task, an initial process, which we tend to call the global resource manager, because it's in charge of everything — everything except the pure static kernel data. Then it's up to this thing what to do with the system. One thing it can do, for example, is partition the system: it has all the free memory and all rights to it, and it can set up, say, two exclusive partitions and then let each of these partitions totally autonomously manage itself. They can then allocate free memory, create address spaces, threads and all that stuff, without ever referring back to the original global resource manager — which, if you want a statically partitioned system, could for example then just remove itself and get out of the picture. The partitions are strongly partitioned, and they then manage themselves. And because the kernel doesn't have a memory allocator, whenever you want to do an operation that requires the kernel to use memory — capability storage, page tables, thread control blocks and all that stuff — you need to explicitly hand memory to the kernel for doing this. This is what forces the isolation from user level back into the kernel: when those two resource managers allocate objects, they need to supply the kernel with memory from their own pools, for backing the kernel metadata, and because this comes from partitioned pools at user level, it totally partitions the kernel. That's really what the information-flow proofs we did leverage, and what guarantees the isolation. And you can carry it
on recursively — it all works recursively. So that's what's unique about seL4 from the kernel-model point of view. So, can you build actual systems with it? Yes, and this is what we're doing at the moment with our friends in the US — Rockwell Collins, Boeing, Galois and the University of Minnesota — in a project funded by DARPA, the US Defense Department funding agency, where we are building a high-assurance drone. Well, actually two. One is the research vehicle, which is the thing we play with, where we pretty much build everything from scratch — some may have heard of the SMACCMcopter, whose flight-control software is being built by Galois; this is all tightly integrated into a high-assurance system. And then there's the Boeing optionally-manned helicopter: a full-size helicopter that can fly with or without a pilot, and the technology gets transferred onto there. Probably by mid this year this thing will fly on seL4. What's under the hood? At a very high-level view, that's the structure of the system. You see there are two processor boards: the control board, which is sort of the cerebellum, and the mission board, which is the main brain, and they run all the software on that system. We could have run everything on a single processor on seL4, but we specifically chose this distributed architecture because it is more reflective of the actual Boeing commercial helicopter. They don't want me to put up the block diagram of their system, but trust me, this is a very fair, very reasonable model of the actual Boeing system. On the one board there's a microcontroller — an ARM Cortex-M4 that runs eChronos, which is our verified RTOS — and all the low-level flight control runs on this. Then there is a separate ARM Cortex-A15 processor board which runs seL4 and the high-level command-and-control operation. This is of course where the seL4 properties come in: we can run an unverified Linux, untrusted, and be sure it
can't interfere with the rest of the system. The project has a red team whose whole purpose in life is to try to break in there, and our purpose is to keep them out. We assume they will compromise Linux — if, surprisingly, they didn't manage to do that, we will give them a root shell anyway. So we make sure the enemy is on the platform, and we keep the rest of it safe. The project is two and a half years into a four-and-a-half-year period, so it's got another two years to run, and as I indicated, there will be a first flight demo with the actual helicopter this July, which is what we're really looking forward to.

[Audience] About a year ago I was looking at the block diagram for that, and I'm sure I saw a version that had an ARM11 there rather than the A15. Did it start on a v6, or am I imagining things?

No, we never had an ARM11. The thing as it comes off the shelf may even have an ARM9 or something like that on there, but we ripped all that out — we replaced the electronics as well as the software.

[Audience] So it's always been using the hardware virtualisation as the basis?

Yes — that's the reason why we went for an A15. We need the hardware virtualisation extensions, because paravirtualising Linux is a lot of engineering work, would be very error-prone, and takes away a lot of the assurance story. Okay, so this is a really exciting project, of course, and at the last PI meeting in July — so two years into the project — the security expert at DARPA stated that this is already the world's highest-assured UAV, and we're only at the beginning of everything. It's actually scary how true that statement is: the red team had no problem breaking into the Boeing helicopter as it was, and this is supposedly a military-grade secure system, blah blah.

[Audience] Now to ask the hard question directly: is DARPA funding you, or Boeing? Who contracts whom — a business-and-security question.

So the prime contractor for the project is Rockwell Collins, and they are
the integrator; we are subcontractors to Rockwell Collins, as is Boeing.

[Audience] So you are not bound by DARPA conditions directly?

No — I mean, it doesn't matter who spends the DARPA money, it's subject to their conditions.

[Audience] I'm thinking about ITAR or that kind of thing.

Initially, when this whole thing was set up — which was before seL4 was open source, and before we knew we could open-source it — that was a very painful thing, negotiating ourselves through the legal jungle of trying to make this all work. All that's gone away. They're actually very keen on open-sourcing as much as possible, and now I think we will be able to open-source everything that's running on the research vehicle. Basically, software we created but didn't own, for historical reasons, was the main stumbling block for open-sourcing all the stuff; what Galois and Rockwell Collins contribute is all being open-sourced. And — we shouldn't mention this because it's not official yet — we're open-sourcing eChronos as well, end of this month. So really everything will be open source, which is of course cool, because that really allows everyone to benefit from the innovation being done in this project, and there's going to be a lot of really cool stuff coming out of there. For example, the work Galois does on using high-level domain-specific languages to generate code — some of it will actually be generated provably correct, and that basically pushes the assurance story into userland. That's very exciting. So it's an incredibly cool project in many ways, besides being able to fly on a helicopter. Okay, so this is sort of what the current state is. What are we actually working on? A pretty quick overview. Some of it is just engineering: we've got multicore support running in the lab — very stable, very good performance, scalable to where we want it — and it's basically ready to be pushed out. That will happen as soon as we get our act together and
get it releasable, which should happen in the next few months. Full virtualisation support for both ARM and x86 is running very well and is in a similar state, so it will be pushed out — I don't want to nail myself down too much, but definitely in the next few months. 64-bit support, for now only for x86, is not quite as ready as the first two, but it's pretty close — it's running very well and should also be pushed out in the next few months. That basically removes a lot of the usability barriers, I guess. So far we haven't got any ARM64 hardware — we'll get some, unless you ordered some, right? you haven't got it yet — and as soon as we have ARM64 hardware we'll be working on 64-bit support for ARM as well. These are the engineering bits, and then there's stuff that's more research-y. I mentioned timing channels before: we're actually working on mechanisms in the kernel to partition system resources in a way that eliminates most timing channels, and we're making good progress on that, so that will be pushed out some time this year — I can't be any more precise. And on similar but different kinds of temporal issues, we're working on a revision of the API to make it more suitable for the most general class of real-time systems, so-called mixed-criticality systems, where you have mixtures of hard real-time and soft real-time, or hard real-time of different criticalities — where for some, deadline misses are disastrous, and for others they can be tolerated occasionally, etc.
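To make the mixed-criticality idea a bit more concrete, here is a toy sketch in plain Python — the names `SchedulingContext` and `admissible` are made up for illustration, and this is not the seL4 API — of the kind of budget/period reasoning such a real-time API supports:

```python
# Toy model of budget/period reservations for mixed-criticality
# scheduling (illustrative only; not the seL4 API).

class SchedulingContext:
    """A thread may consume at most `budget` time units per `period`."""
    def __init__(self, budget, period):
        assert 0 < budget <= period
        self.budget = budget
        self.period = period

    def utilisation(self):
        return self.budget / self.period

def admissible(contexts):
    """Classic utilisation-based admission test: a set of independent
    implicit-deadline tasks is EDF-schedulable iff total utilisation
    does not exceed 1, so a new reservation is only admitted if the
    bound still holds with every other budget fully consumed."""
    return sum(c.utilisation() for c in contexts) <= 1.0

hard_rt = SchedulingContext(budget=2, period=10)   # deadline misses disastrous
soft_rt = SchedulingContext(budget=3, period=15)   # occasional misses tolerable
print(admissible([hard_rt, soft_rt]))  # 0.2 + 0.2 = 0.4 -> True
```

The point of an API along these lines is that a misbehaving soft real-time task can only burn its own budget, so the hard real-time guarantees survive regardless of what the less critical components do.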
That's very stable now; we're in the evaluation phase, and it should hopefully come out mid-year-ish — I'm hoping to push that one out as being mature enough. And then there was the question about radiation hardening: we're actually working on that, and the idea is to use a multicore processor in a DMR or TMR configuration — dual or triple modular redundancy — with independent kernel images that check each other for consistency at kernel entry and exit. That work is reasonably mature; we just submitted a paper on it, and again this should come out sometime later this year, hopefully mid-year-ish. And that is an interesting development, because — yes, at the moment we have very strong guarantees, but if the hardware underneath us flips a single bit, then all bets are off, and with this support we can get the bets back on. So that's the short-term research, and then there's the more long-term research we're working on, which is basically all about reducing the cost of assurance. Just to give you an idea: our evaluation showed that we spent about $380 or so per line of code on the functional correctness proof — that was partially very cheap labour, etc. — and we spent a bit more for the rest of the system, but the ballpark figure is 400 bucks per line of code for getting proven code. That's design, implementation, verification — everything included, a whole-life-cycle cost, if you like. And that compares to a thousand dollars per line of code, which is a ten-year-old figure from Green Hills for building a high-assurance system. So we're already very cost-competitive in high assurance — and of course we have a kernel that's as fast as it gets; all the other so-called high-assurance systems, which have no proofs, have very poor performance. And then the other data point is the Pistachio kernel, which was done
about 10 years before seL4. It's a member of the L4 microkernel family, done by a similar group with similar experience, so quite comparable, and they spent about 200 bucks per line on low-assurance — also high-performance — code. So we're basically only a factor of two away, in the cost of our proved-correct code, from classically engineered low-assurance code, and we're working on closing this factor-of-two gap. If we can get our overall cost down by a factor of two, then we can produce verified code more cheaply than anyone else produces unverified code, and of course that's going to revolutionise a fair bit — and I think we'll pull that off in the next five years. What we're using is a combination of things. Synthesis: we're doing synthesis of device drivers, where you're given a formal specification of the hardware interface and a formal specification of the OS interface, and from that you synthesise the device driver. It works for simple drivers — it doesn't really work for the real world yet, but it's a promising approach. Then, in a way more traditional but in other ways more ambitious, and similar to what the Galois people are doing: code-and-proof generation. We're trialling this on file systems at the moment — specifying the logic in a very high-level language, which already gives you probably a factor-of-two productivity boost on its own, and from that generating not only the C but also a correctness proof for the C, and that then eliminates all testing. This is going very well, and I hope to declare success on this one in about three months' time. That's going to be very cool, and if the approach works for file systems, it will work for network stacks and other systems code — I think that's the coolest thing we're doing at the moment. And then you really want to get away from C. C is the right vehicle for the kernel; application stuff, including other systems code, should be written in something more suitable — something which is type
safe and memory safe. We can then reason in the semantics of that language and do verification there, where it's much cheaper. For this to hold together, it requires verified runtimes and compilers, and we're working on that with people from the ANU and Purdue University — again quite exciting, but this one probably has a few more years to go until it becomes mature. So, what is hopefully of interest: what is the ecosystem like, how does the development process work for seL4? This is what you see when you go to the portal: you go to sel4.systems and you get directed to a Git repository. There are two branches on GitHub. One is what we call seL4 stable — and stable means it's the verified kernel, that's the real thing, and you get all the proofs with it; everything is open source. Then there's what we call experimental. Some people think that means shaky code; it's not — it's pretty solid code, we released it. The difference between experimental and stable is that experimental is not verified, but by getting in there, it's on the roadmap to be verified. There's no timeline associated with it, but we're committed to verifying it — we know we can verify it, that's the important thing. It's, if you like, the staging repo for verification. So this is what you see, and then of course people have private branches beyond that; we don't have an internal staging branch for feeding in there — everyone has their own, and if we agree that we want to push something into the public version, then we do that by releasing it. What pushing it into the public version really means is that we make a commitment that eventually we will verify it, and that means we have to be convinced that we actually can verify it. This is why we don't really expect community contributions to the kernel itself, because it really requires this commitment to
verification, and therefore you need to really understand what you're doing with respect to verifiability. We won't stop people submitting patches — in particular platform ports, etc. — but remember, we're not going to put it in the public version unless we're convinced that we can verify it, and we think we eventually will. So the question is: how can you contribute? It would be really cool if people built user-level stuff for it. Libraries are an obvious one: at the moment the library support is pretty rudimentary — an incomplete C library, a little bit more than that, but not much. Platform ports: we have a few platforms which we support more or less — some we support well, others we sort of support — and it would be cool if people ported it to other platforms and contributed that back. Device drivers, obviously — for any new operating system, a big pain point. Eventually we hope to solve that with synthesis, but that's still not ready, so if people contributed drivers that would be cool. Similarly network stacks and file systems: we have lwIP, sort of, but there's definitely more that would be really useful. Tools: we have a few, particularly our component system, and there's definitely lots of work to be done there. And then languages. C++: we have core C++ support that was actually just pushed out yesterday. It doesn't support the Standard Template Library, but if you don't need that, then most programs should actually work — at least Adrian claims so; if not, fix it or tell us. It would be really great if people took that one and made it more complete. We are working with Galois on providing Haskell; that should get out in the not-too-far future. No one, as far as I know, is working on Python — it would be really awesome if people built a Python port on seL4. Okay, so why would you not use seL4? One reason is it's a bit rudimentary, so you need to be a bit of a Spartan for doing it at the moment. That's a fair point, but it's the one you can help with, right — everyone can help fixing
that one. Other than that? Well, maybe you like insecure systems. Some people like sunny weather, some people like rain; some people like safe systems, some people like unsafe systems. Or maybe you like the thrill of danger. Why am I saying that? Imagine you're building a security- or safety-critical system: you start designing it now, it comes on the market in three years' time, and another three years later someone gets killed by a malfunction. I think there will be a lot of lawyers who will try to rip your skin off, because you built a safety-critical system on technology that wasn't state of the art, and at the moment everything that's not seL4 is not state of the art as far as security or safety goes. This is something people should be aware of; there are lots of lawyers who like to rip people's skin off. I think this is a really serious point: if you build a nuclear power plant, it needs to be damn safe, and you should use the right technology for it. Sorry, I signed something saying that everything I say had to be suitably rated, I believe. And maybe you just want to use seL4, and of course that's the right answer. That's all I've got for you, thank you very much; happy to take questions. Peter here was first.

Actually two questions. One you touched on at the beginning: the MMU model is sort of missing from the proof. Now, I'm aware that most of the memory management is out of the kernel, but what plans do you have for further proofs on, or modeling of, the MMU and DMA? That's one half. The other half: I noticed one of the boards you mentioned a port to was the BeagleBone, which doesn't have the virtualization extensions. How do you see seL4 being used on systems without hardware virtualization?

Okay, the first one. DMA you can only have secure if you have either a trusted driver or an IOMMU; both are
possible. On systems without an IOMMU you need to trust at least the DMA controller driver, so yes, if you wanted really strong assurance about that, we'd have to verify that driver. Otherwise we use an IOMMU, and we will verify the IOMMU code, which is just more memory-management code, and that should be the end of that. The second one: when seL4 was first verified there was no hardware virtualization support on ARM, so you can use the system just fine, but of course you can't run a virtualized Linux, because we're no longer supporting paravirtualization. But in most designs we're looking at, there is some virtualized legacy OS lurking somewhere, to provide legacy functionality, or just networking functionality, etc. We're just starting to touch on that with the hardware virtualization support.

Which leads nicely to my question. In the slides you were showing that inside a real-world example system there is actually a Linux kernel. Is that sufficiently isolated that you would not worry about running two of those side by side? If you were giving the red team their own whole kernel, would they be able to get to the next one?

No, they won't. This is fundamentally what our security proofs guarantee. Of course they don't guarantee that the system is secure, but they guarantee that you can set up a system that is provably secure in the sense of providing this kind of isolation. It's never impossible to build an insecure system on a secure kernel, but the secure kernel allows you to build secure systems, if they're structured and architected correctly. And the isolation proofs are actually constructive, in the sense that they allow you to check whether your system satisfies these requirements.

And as a follow-on, for anyone who's actually thinking of putting Linux on there: is it actually possible to provide a lot of
the missing pieces you were illustrating as still missing, using the Linux?

Yeah, exactly; I alluded to that already. Encryption is a great thing: if you have a trusted crypto library (and hopefully it is trustworthy; we're not crypto experts, we can verify that the code has been implemented correctly, but we can't check that the crypto makes sense), then you can tunnel a lot of stuff through an untrusted Linux system. What Linux can still do is denial of service, but beyond that it can't violate the confidentiality or integrity of your data, if you use your crypto correctly. That's been done before: friends of ours built a trustworthy file system on top of an untrusted Linux file system with encryption, about six years ago or so. And we're using similar things for tunneling data through a network stack that's running in Linux, etc. In many designs we have multiple Linuxes, because we want to break down functionality further: rather than have one big monolith, have one Linux for basically one service. If you have enough memory, that's a reasonable way to build a more robust system.

So it seems to me, given that there have been exploits in virtualization systems, that it might be a good idea to use this as a bare-metal virtualizer to provide VPSes with, rather than running Linux on Linux.

Absolutely, yes. If you use Linux as a hypervisor, as KVM does, then your hypervisor is a million lines of code and it's all untrustworthy. This is basically the failure of the Qubes model, and if you look at our list of suggested projects, Qubes on seL4 is one; we actually have an undergraduate student working on that, but it would be nice if there was a bit more critical mass behind it. That would be really cool.

Yeah, because, as with the thing mentioned before, if you're running two side by side,
would you trust that? There have been exploits in virtualization systems which would allow you to do exactly that.

Absolutely, and it's obvious that there have to be: you have such a big trusted computing base, and no matter whether it's KVM or Xen, they all weigh in at at least a million lines of code. Of course the scary bit is really the hardware, because the question is how much you can really trust the hardware. Everyone else trusts the hardware plus their massive code base; we at least trust only the hardware, and otherwise only code we can prove to be correct. But there's no guarantee; if you've ever looked at the errata sheet of any modern processor, they're scary. But that's not a problem we can solve; it's an orthogonal problem, and we can only take the hardware we've got.

Do you see there being a time when you'd take a journey in the full-size helicopter while the red team has access?

I would, but they won't allow me, and they were quite specific that they were not going to do hacking attempts while in flight, for simple safety reasons.

That's it; we have time for maybe one more question. Yeah? Thank you very much, Gernot. We'd just like to present you with a small gift to say thank you for your presentation today.

Thank you, and thanks to all of you for coming and asking questions; much appreciated.
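The crypto-tunneling pattern discussed in the Q&A above (trusted endpoints exchanging sealed data across an untrusted Linux transport, which can at worst deny service) can be sketched roughly as follows. This is a toy illustration in Python, not seL4 code; all names are hypothetical, and the hash-based keystream merely stands in for a vetted AEAD scheme such as AES-GCM:

```python
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream: hash key||nonce||counter. Illustration only;
    a real system would use a vetted AEAD cipher instead."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    """Encrypt-then-MAC: the untrusted carrier sees only nonce||ct||tag."""
    nonce = secrets.token_bytes(16)
    ks = _keystream(enc_key, nonce, len(plaintext))
    ct = bytes(p ^ k for p, k in zip(plaintext, ks))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_sealed(enc_key: bytes, mac_key: bytes, blob: bytes):
    """Verify the MAC first; any tampering by the carrier yields None."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None
    ks = _keystream(enc_key, nonce, len(ct))
    return bytes(c ^ k for c, k in zip(ct, ks))
```

An untrusted hop that flips any byte of the blob is detected (`open_sealed` returns `None`, preserving integrity), and without the keys the blob reveals nothing useful (confidentiality); the hop can still drop the blob entirely, i.e. denial of service, which matches the guarantee described in the talk.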