Hi, everyone. My name is Serge, and this is Joy. We both work at Cisco, on a small team that builds the base OS for a couple of products. These products are network appliances — not little NUCs that you put next to your bed, but big appliances that go in a rack in a cooled room in a basement. So when these things reboot, people don't want to have to run around and type in a root password, and they certainly don't want to babysit the boot. We want unattended boot, but we need encrypted storage to protect customer data and whatnot. To address that, Paul and Joy presented two years ago what we do, and we're going to build on what was presented then.

The next thing about our products is that they want to be clustered: three or more of them work together, share some secrets, make decisions, configure switches. So we want to be choosy about who we allow into the cluster. To that end we use the Secure Unique Device Identifier, or SUDI. There are specifications out there for device identity and whatnot; SUDI is just what we call it. When a customer buys one of these boxes, it's provisioned at a secure factory at an undisclosed location. Part of that provisioning process is to make a SUDI key and sign its certificate with a CA that only the factory has access to — I can't sign one of these — and the product ID and the serial number go into the certificate. We want at least part of the cluster admittance criteria to be that you present a SUDI certificate and prove that you own it.

For that to be meaningful, we need to make it difficult or impossible to extract one of these SUDIs. If you can either buy one of the products or hack into one, grab the key from it, put it on an ARM box, and plug that into the closet, then you can mess with the customer's data — that's no good. And if you manage to hack in through a zero-day and make some changes — add a service that opens up a root shell on some port, whatever — it's one thing to keep you from doing that until the next reboot, but we want to guarantee that after the next reboot there are no remaining changes.

So our use case is, like I say, cluster admittance, but this should also be useful for simple remote attestation. And while our use case is network appliances, we've also been talking to some teams that might want to do IoT devices, which have similar requirements: they want unattended boot, they probably want to encrypt some filesystems, and if these things are roaming around the world, you want some way of verifying that it really is your device and has not been tampered with when it phones home — where you could be a company selling devices, or just an individual with a home network where you've provisioned your own IoT devices. And then the other thing is cloud deployment. If I want to offload some computation onto the cloud, I want to push a button, have it bring up a cloud instance, and send my data to it. Again, I definitely want unattended boot, and I want to be able to prove to myself that I'm sending my data to something that's running software I authorized. With confidential computing, we're hopefully getting to a point where we can do that even in the cloud.
Even without CoCo, we should be able to have some guarantees on our own hybrid cloud, inside our firewall. How we make that happen is what we're going to be talking about today. Joy is going to start with power-on through a certain point in boot, and then I'm going to pick back up.

Okay. This is going to be a really high-level overview of secure boot and the features that we use for MOS. A really simple definition is that secure boot is a UEFI firmware security feature that uses digital signatures to ensure only immutable, signed software is loaded during the boot process. In other words, each step of the boot sequence is verified by the previous step, using a chain of trust to make sure that only cryptographically verified binaries are executed.

In MOS, our chain of trust uses the four key databases that are in the UEFI firmware. Our root of trust is the platform key, the PK, which is usually a single public key certificate (it can be more than one). It signs and verifies the key exchange key, the KEK, which is usually a list of public key certificates, and that in turn signs and verifies the db and the dbx. The dbx is a deny list — usually a list of hashes and public key certificates for software that is not trusted. The db is an allow list — also a list of hashes and public key certificates, for binaries that are allowed to run.

Our chain of trust also includes the Linux shim, which is a first-stage boot loader. The shim includes built-in vendor allow and deny lists, and we put three public key certificates in our shim's built-in trust database. This is to allow for three levels of access: when the shim boots a UEFI binary signed with a particular key, it results in a distinct PCR7 value, and we use that PCR7 value along with a TPM enhanced authorization (EA) policy to allow access to certain secrets in the TPM. We have a TPM admin public key certificate in our shim, and its distinct PCR7 value, along with a signed TPM EA policy, allows access to the TPM password. We have a production public key certificate whose PCR7 value, along with a signed TPM EA policy, allows access to our LUKS secret and to the SUDI private key and certificate that we put in the TPM. And then we have a limited public key certificate, which has a special purpose and has no authorization to access TPM secrets. I'll talk about those three keys in more detail in the next few slides.

So, a typical secure boot workflow: the UEFI firmware loads and validates the shim using the keys in its db. Then the shim, the first-stage boot loader, loads and verifies the second-stage boot loader, which is usually GRUB 2. GRUB 2, using the shim protocol, validates and loads the Linux kernel. At this point boot services exit and control is given to the kernel. The kernel uses the initrd to set up a temporary root filesystem until the real one is mounted, and the initrd may also contain other software or drivers — to access hardware, partitions, or whatever — and none of that is protected.

Okay, so in our MOS workflow — that was the typical one — the UEFI firmware validates and loads the shim, and the shim validates and loads the second-stage boot loader.
But in our case, the second-stage boot loader is our "smooshed" kernel, which we will now refer to as a unified kernel image, or UKI. Our UKI includes the initrd, so our initrd is protected.

Okay, UKI. A UKI is usually a combination of a UEFI boot stub, a kernel, an initrd, and other resources that are all put into a single PE file. A UKI can be launched directly from the UEFI shell or from a shim, and it is digitally signed, providing authenticity and integrity to all the components of the UKI. Our MOS UKI includes stubby, which is our stub boot loader. It contains a .linux section in the PE file for the kernel, a .cmdline section for the kernel command line, an .initrd section for the initrd, and a .sbat section for SBAT. We sign and verify it with one of the three keys that I mentioned a few slides ago, in the shim's built-in database.

Okay, so stubby, our stub boot loader. It is based on the systemd EFI stub. Now, securing the kernel command line is a bit restrictive, and we wanted to ease that restriction a little. So we provided an allow list in stubby, which is essentially the tokens that are allowed on the kernel command line. For example, if the .cmdline section is missing in the UKI and instead you pass the kernel command line to the UKI executable, then stubby will validate that kernel command line, and if any of the tokens are found invalid, it will exit with an error. Otherwise, if you're in non-secure-boot mode, it will just give you a warning and continue execution. We also added support for adding runtime commands.

Okay, so MOS also utilizes some TPM 2.0 features. We utilize the PCRs, the platform configuration registers, which are just memory in the TPM used to store measurements taken during the boot process: software takes a measurement, sends it to the TPM, and the TPM extends it into a particular PCR. By extend, I mean it takes a hash of the current value in the PCR concatenated with that new value. We also utilize the TPM's NVRAM, which is just memory that usually contains two classes of data: one is TPM data structures, and the other is unstructured user-defined data that is referred to by an NV index. The user defines the size of this memory and accesses it using that NV index value. Read and write access to NV indexes can be controlled separately, they can be used to store secrets during the boot process, and access can be controlled by an authorization value or a policy.

Something else we utilize is the TPM's enhanced authorization (EA) policy feature. Authorization controls access to the NV indexes, and TPM 2.0 has many ways to authorize; a policy can be a single authorization or a combination of authorizations. We utilize a feature in TPM 2.0 such that you can seal things to a PCR value that has been approved with a particular digital signature, rather than to one particular PCR value. In other words, authorization is based on that digital signature and not just the PCR value. For example, if you have some software with several versions, and each of those versions results in a distinct PCR value, then you can sign each of those PCR values, and that indicates these are the approved versions of software that can run.
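To make the extend and signed-approval ideas concrete, here is a minimal Go sketch — illustrative only, not the MOS tooling or real TPM wire traffic; the `extendPCR` helper and the use of plain RSA PKCS#1 v1.5 over the value are assumptions:

```go
package main

import (
	"crypto"
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"fmt"
)

// extendPCR models a TPM PCR extend: new = SHA-256(old || measurement).
func extendPCR(pcr, measurement []byte) []byte {
	h := sha256.New()
	h.Write(pcr)
	h.Write(measurement)
	return h.Sum(nil)
}

func main() {
	// A PCR starts out as all zeroes.
	pcr7 := make([]byte, sha256.Size)

	// Measuring a different verification certificate yields a different PCR7,
	// which is why a UKI verified with the production cert vs. the limited
	// cert ends up with a distinct PCR7 value. (Hypothetical measurement.)
	certDigest := sha256.Sum256([]byte("production verification certificate"))
	pcr7 = extendPCR(pcr7, certDigest[:])

	// An authority signs the approved value. A verifier that trusts the
	// authority's public key can accept any value carrying a valid
	// signature, instead of one hard-coded PCR value.
	authority, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	approved := sha256.Sum256(pcr7)
	sig, err := rsa.SignPKCS1v15(rand.Reader, authority, crypto.SHA256, approved[:])
	if err != nil {
		panic(err)
	}
	ok := rsa.VerifyPKCS1v15(&authority.PublicKey, crypto.SHA256, approved[:], sig) == nil
	fmt.Printf("PCR7 = %x, approved = %v\n", pcr7, ok)
}
```

In the real flow the signature covers a TPM policy digest and is checked inside the TPM (via PolicyAuthorize) rather than in application code, but the hash-chaining and sign-the-approved-value shape is the same.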
So in MOS, we use these features to authorize access to our secrets. When we provision the TPM, we generate a TPM password and store it in an NV index, and we use a PCR7 value to authorize access — a PCR7 value with a signed TPM EA policy. Specifically, we take the PCR7 value that results when the shim boots a binary signed with the UKI TPM key (remember those three keys I mentioned in the shim? one of them was the TPM key), we generate a policy for it, and we sign it with our TPM policy admin key. There are a lot of keys going on, so I'm just going to warn you. The policy generated that way authorizes access to the NV index holding the TPM password.

We did something similar with our LUKS secret. We store it in an NV index too, but we use two authorizations to access the data in that NV index: one is the TPM policy version value, and the other is, again, the PCR7 value that results when we use our UKI production key — another of those three keys in the shim. A policy generated with these two authorization values is signed with a different key, the TPM policy LUKS key. We added that TPM policy version so we'd have the ability to revoke a prior EA policy based on a version number. We also secure our SUDI private key and certificate the same way, using the very same signed policy that we use for the LUKS secret. And that's it.

All right. So what Joy showed you was how we get to a point where either the PCR7 values are such that we have policies that let us unlock the TPM — and if that's the case, then the initrd has to have exactly the values that we had signed — or we don't have access to the TPM, and then it doesn't matter, you can boot whatever you want. What we want to do is, again, protect cluster admission and remote attestation. Those services would most likely run from some other container service, not from the initrd itself. So now we want to extend the same kind of guarantee to the next stage.

The simplest way to provide a rootfs to boot into from here would be to create a static partition on the hard drive, LUKS-encrypted with the LUKS password from the TPM, and write the rootfs there. Then during the initrd we mount that and pivot into it. That would protect from offline tampering, for the most part, since you'd have to boot into our initrd to be able to get the key for it. But if you managed to hack in through some zero-day and make some changes, it would be hard to tell whether anything had changed since install. In addition, one of the other things we wanted to do was run a lot more things as containers, including having the rootfs be a container too. So that's another motivating factor here.

So now, since this is not ContainerCon — that's upstairs or something — I'm going to have just two slides about OCI, because this is important. The Open Container Initiative image spec — that whole thing I'm just going to call OCI from now on — was a standardization of basically the Docker container format. You have an OCI layout, which has zero or more OCI images in it, each image being a container. The OCI layout is all in one directory. Under that, there's a blobs/sha256 directory that has content-addressed blob files that are usually JSON or tar.gz.
So you can easily sha256sum the blob files to verify that the name is correct and detect tampering. The way it's laid out: at the top level there's an index.json which has, among other things, an array of the container images it holds, and each container image entry has a size, an annotation with the name for the image, and then a SHA-256 pointer into that blobs directory to a JSON file which is the actual image manifest. That file then has an array of layers, where each layer is traditionally a tar.gz file.

So if we wanted to ship our root filesystem in this format, we would create /sysroot, empty, read our manifest, unpack the first tar file, then take the second tar file and untar it on top of the first one — and there might be some special files in there, whiteouts, that delete files from previous extractions. We do that for each tar file in the set of layers, and when we're done, we have something we can pivot into. But you can see this has the same problem as the first suggestion of just unpacking the rootfs: once we've untarred everything, it's hard to tell what, if anything, has changed there.

So a couple of years ago Tycho had the idea: hey, instead of tar.gz, let's use squashfs with dm-verity root hashes. And that's what we actually do. We ship our OCI layouts like this, and during the initrd, for each layer that's needed for the rootfs, we mount a squashfs with verity, then we take the overlay of those, mount it onto /sysroot, and pivot into that. Then we do the same thing for all the container layers that are going to run. So now, as long as we can trust this information here, we basically have what we want. If anyone's made any changes, they'll be to a writable overlay. If they've made any changes to the blob files, they'll be detected by dm-verity. Changes to the manifest will be detected by the sha256sum of the manifest not checking out.

So now we get to the last step. When we whip up the next version of a piece of software for one of these products, what we do is build an install manifest of the container images that it's going to have and some relationships between them. One of the services will be a special service type called hostfs — that will be the rootfs that we're going to pivot into. Once we've written the basic manifest file, we then run a publish step, which takes all of the container URLs — the oci: or docker:// URLs, mostly from internal repos — fetches them, and checks the signatures against cosign certificates that we've authorized for the namespace. As long as those check out, it fills in the digest of the manifest for each layer in the install manifest — so it adds to it — then signs that and posts the resulting manifest as an OCI artifact back in the container registry. So it's like a container image that isn't a container. The signature and the certificate for verifying that signature are then posted as other OCI artifacts referring back to the manifest. So we can say: hey, I'm supposed to boot from this Docker URL. We fetch that, and we ask: what are all the things pointing to this that are of type certificate? Usually there'll be only one, but if there's more than one, one of the certificates has to be signed by a manifest signing CA, which is on the initrd. So that's our final hook now.
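As a rough illustration of the content-addressing that makes this hold together, here is a hedged Go sketch — simplified structs rather than the full OCI image-spec types, and not the actual mosctl code — that walks an OCI layout's index.json and re-hashes each referenced blob to confirm its sha256 digest still matches its name:

```go
package main

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// descriptor is a simplified OCI descriptor: just the fields we need here.
type descriptor struct {
	MediaType string `json:"mediaType"`
	Digest    string `json:"digest"` // e.g. "sha256:abc123..."
	Size      int64  `json:"size"`
}

type index struct {
	Manifests []descriptor `json:"manifests"`
}

// verifyBlob re-hashes blobs/sha256/<hex> and compares it to the digest.
func verifyBlob(layout string, d descriptor) error {
	hex := strings.TrimPrefix(d.Digest, "sha256:")
	data, err := os.ReadFile(filepath.Join(layout, "blobs", "sha256", hex))
	if err != nil {
		return err
	}
	if got := fmt.Sprintf("%x", sha256.Sum256(data)); got != hex {
		return fmt.Errorf("digest mismatch for %s: got sha256:%s", d.Digest, got)
	}
	return nil
}

func main() {
	layout := os.Args[1] // path to an OCI layout directory

	raw, err := os.ReadFile(filepath.Join(layout, "index.json"))
	if err != nil {
		panic(err)
	}
	var idx index
	if err := json.Unmarshal(raw, &idx); err != nil {
		panic(err)
	}
	for _, m := range idx.Manifests {
		if err := verifyBlob(layout, m); err != nil {
			panic(err) // tampering (or corruption) detected
		}
		fmt.Println("ok:", m.Digest)
	}
}
```

A real checker would recurse from each image manifest into its config and layer descriptors the same way; the squashfs layer contents themselves are covered at mount time by dm-verity.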
Since we know the initrd has to be pristine, we can trust the CA that's on there. So now our boot process looks like this. We have maybe an EFI partition, either on an ISO or on disk, or we have PXE boot or HTTP boot — somehow we get a shim and a UKI. We run through those, and as long as the TPM gets unlocked, that means everything checked out and the PCR7 values were correct, so we trust what's on the initrd. Among what's there is the machine OS controller binary, which will do the rest of the setup for us, and the manifest signing CA certificate. Then, if it's an already-installed system, there'll be an encrypted config partition where we'll find which manifest we want to boot from; otherwise we'll use the command line to figure that out. That will be, again, an oci: or docker:// URL to a manifest. If it's a live CD and not a network boot, then the initrd will actually spin up a little OCI registry instance against its local storage; otherwise, if it's network booting, it might go out to zothub.io or something. So we get the manifest, we ask for the referring artifacts, we verify that the CA verifies the certificates and that the signature verifies the manifest. The manifest has digests for all the container images; everything else is content-addressed and dm-verity checked. So we now have what we wanted, basically.

Another way to look at these steps — let me just check the time; oh, lots of time. The UEFI verifies the shim. The shim verifies the UKI. PCR7 unlocks the TPM. While we're here, the first thing we do is take the SUDI key and certificate out of the TPM NV indexes. Right now we're putting them into a tmpfs that's root-owned and chmod 700. The plan is to not put them on the filesystem at all, but to load the key into the TPM as a transient object so that you can do PKCS#11 operations, so extracting the SUDI key won't be possible at all. We load the LUKS key and unlock all the filesystems. For now we're then putting the LUKS key in the root keyring; that doesn't need to be the case — once everything is unlocked we could drop it, but once you drop it you can't get it back without rebooting, so that's a harsh thing. After we do that, we extend PCR7, so now the TPM is locked, and we can relax: we go on and verify the configuration, create the root filesystem, and pivot into it.

Yeah. So on the plane I'd made another chart on my phone, but it didn't get into the slides — I was going to show how the keysets are structured at Cisco versus what we have here. As Joy said, there are a lot of keys. At Cisco, like I said, the factory has its CA for creating the SUDI keys for machines, and then there's a team that has a lot of these other keys. They are the ones who build the kernel (not the root filesystem — they build all of the boot artifacts), sign the shim, sign the UKI, and hold a manifest signing CA certificate, which goes into the initrd. So they build the shell, and then each product that uses this has its own manifest signing key and certificate; the certificate has the product ID in it and is signed by the CA that's on the initrd. So you have one team that ships the UKI and the shim, and other teams can all reuse that.
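The trust decision in the initrd boils down to standard X.509 and signature checks. Here is a hedged Go sketch of that shape — the file paths, the loose-file layout, and the raw RSA PKCS#1 v1.5 signature format are assumptions for illustration; the real flow goes through cosign and OCI referrer artifacts rather than local PEM files:

```go
package main

import (
	"crypto"
	"crypto/rsa"
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// loadCert parses a single PEM-encoded certificate from disk.
func loadCert(path string) (*x509.Certificate, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return nil, fmt.Errorf("no PEM data in %s", path)
	}
	return x509.ParseCertificate(block.Bytes)
}

func main() {
	// Hypothetical inputs: the CA baked into the initrd, the product's
	// manifest-signing cert and signature fetched as referrer artifacts,
	// and the manifest blob itself.
	caCert, err := loadCert("/manifestCA.pem")
	if err != nil {
		panic(err)
	}
	leaf, err := loadCert("manifest.cert.pem")
	if err != nil {
		panic(err)
	}
	manifest, err := os.ReadFile("manifest.json")
	if err != nil {
		panic(err)
	}
	sig, err := os.ReadFile("manifest.sig")
	if err != nil {
		panic(err)
	}

	// 1. The manifest-signing certificate must chain to our CA.
	roots := x509.NewCertPool()
	roots.AddCert(caCert)
	opts := x509.VerifyOptions{
		Roots:     roots,
		KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageAny},
	}
	if _, err := leaf.Verify(opts); err != nil {
		panic(fmt.Errorf("certificate not signed by manifest CA: %w", err))
	}

	// 2. The signature over the manifest must verify with that certificate.
	digest := sha256.Sum256(manifest)
	pub, ok := leaf.PublicKey.(*rsa.PublicKey)
	if !ok {
		panic("this sketch assumes an RSA manifest-signing key")
	}
	if err := rsa.VerifyPKCS1v15(pub, crypto.SHA256, digest[:], sig); err != nil {
		panic(fmt.Errorf("manifest signature invalid: %w", err))
	}
	fmt.Println("manifest trusted: cert chains to initrd CA and signature verifies")
}
```

The product-ID match described next is an extra constraint on top of this chain check.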
And during boot, we verify that the product ID of the SUDI on the host and the product ID of the manifest match, so that we avoid allowing product one's ISO or whatever from booting onto product two — in case one product has laxer security standards, or just something bad happened. So that's how it's being done now at Cisco, all with hardware tokens and whatnot. But to let the community try and play with this, we needed something different.

So there's a trust program which will create all of this for you. You start by saying trust keyset add and the name of an organization, and that will create keys for signing a shim, keys for the UKI, EA policy signing keys, and a manifest CA certificate. It'll actually create the UKI with the manifest signing CA on it, and sign that. It'll create the EA policies — this is now kind of science fiction, but in a few weeks this will all be automated; we just have some expect scripting to do — so it'll create the EA policies and create a data directory with all the signed data that you need to be able to boot. Once you've created the keyset, you create one or more projects, which are like our products, each of which will then have a manifest signing key that you can later use when you actually want to publish something. It'll sign that with the CA and create a UKI that can boot as a live CD, or as a provisioning or install CD, or as an installed image. And then the intent is that once you've done that, if you want to create a VM, you should just be able to say machine launch, give it a project — an organization and product name — give it a serial number that you want to give the VM, and it should automatically create the SUDI key, sign it, provision, install, boot, and then tell you when it's ready.

Okay, I should ask: are there any questions about this so far? Yes — hold on, she's going to bring the microphone. So you said right now, the protection for the SUDI is that you drop it in the boot process? You mean on a running system? Yeah. Yeah — it's sitting in a tmpfs, and everything should run as a container, UID-separated, so they shouldn't be able to find it, and if they can find it, they shouldn't be able to get to it. That's not the ideal; that's what we're doing here. Eventually it'll be in the TPM. Yeah, so you'd use PKCS#11 to do sign, encrypt, decrypt, whatever. Thanks.

Okay. The promise of the talk was to talk about end-to-end secure OCI, and what we've mainly talked about here is the right side of this. That's because the left side is basically already done and has been presented in other places. One of the first things we did as a team was to build stacker — that was Tycho starting that, again. It's a container image builder that can run fully unprivileged; it can integrate SBOMs, do cosign signatures, and whatnot. One of the reasons we really needed it was that it's willing to create squashfs-based filesystems, which, as you see, are crucial to our security story right now. We also have someone working on — actually Tycho started — a filesystem in Rust called PuzzleFS, which will eventually probably take the place of the squashfs approach, but that's in the future. So for all the products, everything is built using stacker and published to our zot instance. Zot — Ram wrote it — is a container registry implementation, implementing the distribution specification.
Again, it's willing to host squashfs images, so we really needed this — the Docker registry wouldn't do that. We have big instances that run for all of our CI and all of our products, and we also run tiny instances: in the initrd, for example, we run a tiny instance against localhost so that we can do our querying of what refers to this manifest, and things like that. So these things are sitting on zot with their signatures, and here's where Project Machine walks in: we write our manifest, which lists what images to use from zot — they can come from various URLs — and the publish step will find those images, verify the signatures, get the digests of the verified images, and collate all that into a new object, the install manifest, which it signs and publishes back again to zot, the container registry. Then, if there's a system that's already installed, mosctl, the controller, will just boot from that locally; otherwise we can network boot or ISO boot or whatever and get layers from zot even over the public internet, but we can verify that nothing has been tampered with, so a pure network boot is just as good as a local boot.

The code layout right now is on github.com/project-machine. This is subject to change; we're still figuring things out — especially as we do PRs that have to go across different projects, which gets fiddly — so we're still going to change some things. Right now there's a trust repo, which is for administering keysets and signing things. Machine, for the moment, is our actual VM runner. It will probably be renamed, because we want machine to be a higher-level thing where you really just say: machine, run this, with this keyset, from that URL — just make it happen. But right now it's actually lower level. It's a great tool for doing quick spin-ups of VMs that are secure-booted, with pre-provisioned OVMF variables and virtual TPMs. It's a rewrite of an internal version we have that does some more things we also want to carry over, where it can start up clusters of VMs with expect programs hooked up to each console, so you can have automated tests of complicated workloads where one VM is configuring a PXE server, another one is configuring an NFS server, and then we start off the network booters, etc.

Anyway, mos, the machine OS — eventually the idea is that you would never interact with this directly; you would use machine to do the publishing and the starting of things. For now, mosb is the builder — it builds the install artifacts — and mosctl is the controller, which runs on the actual machines. There's a keys repo, which is just a snapshot of the result of doing a trust keyset add and a trust project add, so that if you don't want to run that, you can download the keys, take a look, and see how things are structured. And then there's bootkit, which — again, you're not meant to actually see this — has all the artifacts for building a shim, and for building a UKI with an initrd with everything provisioned, and it exports an API for doing the signing that trust will do for you.

So these are still very much works in progress. The way this has happened: a lot of what we've done — like stacker and zot and a lot of other things — was done from the start in the open. But the core part of this had to be done so that products could use it immediately, so it had to integrate with other pieces of build infrastructure, had to meet timelines, and whatnot.
So for a long time, every year we'd get together and say, now is the time, we can just open source it — and then we'd say, well, we're not ready yet. Last fall we said, that's it, we're going to do a ground-up re-implementation, and then we'll move the internal version over to that. And that, again, is what Project Machine is.

So the first piece of future work is just to finish what we're saying we do: all the pieces work individually, but the glue is just not there yet. We want to provide alternatives to a TPM. Right now everything uses a TPM. Actually, two years ago we were close to supporting another hardware token, but in the end the TPM was available and we ended up using that again. But there will be cases where we really can't, or don't want to, use a TPM, so we'll want a different approach — and that will be an interesting research project, because this is so tied into the TPM and UEFI secure boot right now. We're also going to need to support hardware tokens: right now trust keeps everything under ~/.local/share/machine/trust/keys, and that's clearly not something you want to do as a company if you're going to be signing things — it's not what we do. To move our stuff onto this, we're going to have to support remote and offline key generation and signatures and whatnot.

The actual install YAML that you define right now to write a workload is basically there for proof-of-concept purposes. There are things we support internally that we don't support here yet, and that's because we want to do a better job of being more generic — a better fit for what gives us flexibility without introducing insecurity. For instance, if you're firing up an NFS server as an IoT device of some sort, you're going to want a persistent disk mapped into some container; right now, with this version, that's not possible. You're definitely going to want some services to share some disk: one might be a KMS generating keys, and another might be nginx that just wants to read a key from it but doesn't need access to anything else. So we want that sharing to be supported.
Network configuration right now: all you can say is empty namespace, or share the host namespace but without privilege because you're in a user namespace. We definitely want to be more featureful there, but the question is what's the best way to do that — some people would say CNI, it's out there, so it might be a good way; I don't know. And then we really want to say that everything should run without privilege: there should be no service running in the rootfs itself except for things that are needed to run the containerized services, and those should all run in a UID namespace. So to do some meaningful things — like allow one service to say, hey, I need another 100-gig partition — we're going to need some way of specifying privileged operations that containers can do.

One research project, which has been stalled for a little while now, is codenamed keyhole. You would take some language specification of things — this can talk to this service, this can run ifconfig (hopefully not that, hopefully higher level) — and you would sign that with an owner key that trust would generate for you and whose certificate the device has, so it can verify that you've signed it. Then there would be a unix socket so that you can verify the credentials of the service that's making a request and ask: is this service really allowed to make this request? If so, go ahead and do it.

Then, building on that: right now we can install and we can reinstall, but we want to have an update daemon. What we'd want here is, on your device, as one of the container services — which you can specify, you can customize, whatever — an updater that would just periodically ping a service on the internet and say, okay, are there new manifests? On the other end, on the public internet, you'd have something that would take your latest manifests and re-sign them with short-lived keys. Your daemon can then say, well, I don't see any new manifest that's been signed in the last week, something's wrong; or, if there have just been no updates, it would say, well, there's a new one but it's the same as the old one, I don't need an update.

Okay, we're down to five minutes, so I'm going to stop here. Are there any more questions? Sorry to make you run all the way up here for the microphone. So this is very interesting; I'm curious about a couple of different things. One, you are doing some very intricate work to protect all of your keying and certificate material — do you have mechanisms to revoke certificates as part of this, in case one becomes compromised in some way? And second, I'm curious if you can talk about your use models for non-TPM platforms. By use model, you mean what use case we'd have for that? Every time I think we have one... we don't have one yet, but we want it — especially with IoT devices, which might not have a TPM available; we might want to plug in a YubiKey or something to provide it. But for the first question: so Joy explained that the EA policies that protect the SUDI key and the LUKS key are two-stage — one is check the PCR7 value, and the other is check a TPM NV index which right now just says one, and that's the version. So to revoke, we would bump that version, and now all the previous EA policies would stop working. That would be a fun day. But the program that would have to do that would be an EFI binary signed with the TPM admin key — we would never take a live CD and sign it with that key — it would be a single-purpose thing that would say, let me bump this.
And SBAT as well — we can use SBAT for key revocation too, by bumping up the SBAT values in the kernel.efi.

I have a question over here. Oh, yes. You forked the systemd stub thing into the stubby thing — any reason why? Because the things you were discussing, like the allow-listing of the kernel command line options, we're discussing the same thing upstream. Yes, and I think we want to go back to using the systemd stub for that; I think that was just for the sake of getting it done. We don't intend to be a fork — there's nothing specific or weird in stubby that we couldn't do upstream. Right now it's probably just because we do the command line filtering: for instance, we have some products that want to boot off of ttyS0, some ttyS1, so we have to change the command line there, but at the same time we don't want to let you add rd.shell. Exactly — that discussion is going on upstream, you should join it then. Yes, we will join that; yeah, we want to switch to that.

So I just want to say thanks real quick to the rest of the team, who are not here now but will hopefully watch this later, including some of our former team members, Paul and Tycho — thank you, guys — and also some of our former execs who have been very supportive of us doing work in open source and a huge help here. So — do you want to turn it off? Thank you.