Good morning, everyone. Hopefully you all had a little coffee this morning, you're a little bit awake, and you're ready to listen and learn. Hopefully I'll have some time at the end so we can jump into any questions. My name is Ben Fisher. I work at Red Hat as a principal product marketing manager, and I'm part of the Enarx team; we can talk a little more at the end about other things.

First off, I'm going to talk about the problem. Data has lots of problems, right? One of them is that organizations have lots of data, often the crown jewels, but there's a lack of maintained confidentiality and integrity for at least some of it. Some data nobody cares about; it's fairly generic. Some of it is very personal, very important, very valuable to the organization. And the use cases are all over the map: lots of industries, various types of companies, and regulations that all force different reactions to data, to algorithms, and to the applications that use that data and those algorithms.

From an infrastructure perspective, in terms of compute workloads, most people today have left the fully dedicated application stack behind and are either virtualizing everything or containerizing everything in the stack, especially when moving it off-premises to the cloud. Unless it's a SaaS model, which is basically the same thing. Those are the two paradigms we have in terms of stacks, and they're pretty much the same, with a little difference in how you treat elements of the stack. For xkcd fans, here's the sarcastic security perspective of the stack, where everyone gets blamed.
There are all these different places in the stack that can be compromised; we can look at that a little later. Basically everything in the stack can be compromised, so there are weak spots everywhere. It's like a big sieve: lots of holes and lots of opportunities for threats. And the simple thing about threat actors is that they always go after the easiest spot, which also depends on what that particular person has access to. Whether it's a Bitcoin miner, an employee, or whoever, whatever they have access to is what they're going to compromise, as easily as they can.

So we understand where the problem is, but you're stuck in a bit of a quandary. You have the stack. You can't really do anything about the CPU; there are only a few CPU manufacturers out there. The applications are hopefully what you develop as an organization, so hopefully you trust yourself in that development and follow best practices: CI/CD, static and dynamic code analysis, and so on. But the stuff in the middle we can abstract, and if you abstract it and reduce the threat vectors, reducing the number of things you have to trust in the stack, there's a lot of power in that.
There's a lot of safety in it too. If you can avoid trusting the host, avoid trusting the owner or operator, cryptographically verify the hardware and software, and audit all the software, you dramatically reduce what you need to trust and what you need to be concerned about. Basically, you're shrinking the areas where you could be attacked, your threat vectors. If you can do this, it's well suited to a lot of things, especially microservices and any really sensitive data covered by regulation: PII, financial data, records, but also the algorithms you might use to analyze that data. If you can get simple deployment, that helps a lot; easy development integration too; and if you can be standards-based, you're making huge progress, not just removing the need to trust certain things but also getting real benefits.

This brings us to the current state of affairs. We have encryption for data at rest; there are lots of solutions, and the industry is very good at it. Laptops, maybe not academic laptops but at least corporate enterprise laptops, have encrypted disks as standard regardless of operating system. If you have databases in your organization, most likely at least the important data there is, or should be, encrypted. For data in transit, data in motion, there are again lots of options. You've probably all used VPNs, and you've seen TLS certificates on websites.
These are all ways that, while your data moves through networks you aren't aware of and don't trust, you can have a layer of trust, so that when you send your credit card number or whatever information is critical and confidential to you, there's a reasonable amount of confidentiality and security in doing so. The problem is that while you have protection at rest and in motion, when data is actually being used there's really no security paradigm to make sure it's both secure and that its integrity is maintained. There's a risk there: there are lots of things in the stack, and lots of potential threat paths for attacking or gaining access to various points in it.

That brings us to trusted execution environments, also called TEEs. Show of hands: who had heard of a TEE before this session? That's great. Okay, keep your hands up: who actually feels confident they roughly understand what a TEE is? A few of us. All right, that's good; hopefully by the end that will be a few more of you. So let's go through quickly what a TEE is, at least from our perspective. We think of it as the application plus the middleware supporting the application; that is what sits inside the trusted execution environment, and only the CPU has access to the TEE. Everything else is blocked by the CPU. Another way to think of it is as a protected area in the CPU. What we're really doing is using encrypted memory pages that the CPU executes the workloads from.

Now I'm going to talk about Enarx, and we can get into some extra details later. It's one of the TEE projects out there, and our main design principle is that we don't want to trust the owner, the users, the hardware, or the software. The exception on the hardware side is that we have to trust the CPU at a certain level, although we manage that with a kind of whitelist/blacklist, so that if you ever change your mind about which CPUs, which vendors, or which versions you trust, you can change with the times. There's a real possibility that you trust a CPU one day and the next day you learn of a vulnerability in a particular line from a particular manufacturer and want to block it. So we understand it's not that we'll trust any and every CPU, period. As for the rest of the design principles, I'd expect most of you could guess them. Minimize what we trust: a minimal trusted computing base. Minimize the trust relationships we have to establish. Lean into portability, CPU portability in particular: we don't want to be stuck on any particular CPU, CPU vendor, or CPU version. That's Red Hat's perspective: put your workload wherever you want, and if you change your mind about where it runs, a different CPU, another vendor, another cloud, that's completely within your power, and we want to empower you to do it. The network stack is outside the trusted compute base, which is important. And we're trying to keep the footprint, the size of the Keep, as small as possible.
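The whitelist/blacklist idea can be sketched in a few lines. This is purely illustrative; none of these names, vendor strings, or structures come from the Enarx codebase:

```python
# Hypothetical sketch of a CPU allow/deny ("whitelist/blacklist") policy.
# Vendor and model identifiers here are invented for illustration.

ALLOWED_VENDORS = {"intel-sgx", "amd-sev"}   # TEE backends we currently trust
DENIED_MODELS = {"intel-sgx:gen1"}           # e.g. a line with a known vulnerability

def keep_allowed(vendor: str, model: str) -> bool:
    """Return True if a Keep may be launched on this CPU."""
    if vendor not in ALLOWED_VENDORS:
        return False                         # unknown or distrusted vendor
    if f"{vendor}:{model}" in DENIED_MODELS:
        return False                         # trusted vendor, but a blocked model line
    return True

print(keep_allowed("intel-sgx", "gen2"))       # True
print(keep_allowed("intel-sgx", "gen1"))       # False: model explicitly denied
print(keep_allowed("arm-trustzone", "cortex")) # False: not on the allow list
```

The point of the two-level check is exactly what's described above: you can distrust a whole vendor, or only a particular series, and change your mind later without touching the workloads themselves.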
No backdoors. Memory safety; part of that comes from using Rust as the primary language, which provides a lot of those benefits. Open source, following open standards like WebAssembly and WASI. Full auditability. And of course the security we talked about: at rest, in transit, and now in use.

In terms of the architecture (I'm not a big fan of this particular graphic), what we're really showing is the Enarx piece up in the application space, where the application runs in Enarx with bindings down to the WebAssembly layer, and then the various CPU stacks it can ride on. One interesting note: Enarx supports both process-based and virtualization-based approaches to CPUs providing Keeps, or TEEs. You have SGX, which is Intel's; that's the big one, and the other big one on the other side is SEV, which is AMD's approach. Arm also has their own thing, by the way, called TrustZone, but we don't consider it a full TEE. It aspires to be a TEE and provides some of the benefits, but it doesn't yet meet the bar of everything you really should have to establish a TEE. They're still developing future iterations of chips, and hopefully they'll get there. So you have the chipsets, you have the WebAssembly support, and you have your application running on top. One thing to keep in mind is that Enarx is not a development framework but more of a deployment framework.
You choose your own language, whatever you want to build your application in; all we require is WebAssembly, so you compile it to WebAssembly, and then you choose the host and instance configuration. We call it a deployment framework because your host and instance would run on something like OpenShift. You develop the application and compile it to WebAssembly; we really don't care how, so use whatever dev tooling you like. And because it runs on something like OpenShift, or whatever you're running, it's fully portable: put it on whatever cloud, or internally, or wherever you want. We're not restricting you to one CPU vendor or one cloud vendor or anything like that.

So let's get a little into how this works, starting with the components. We have the host and the client. On the host we have the Keep and the agent; and while we don't care about most of the hardware, we definitely care about the CPU. On the client side we have the client CLI, if you're using it, and the orchestrator. The attestation process starts with the CLI or the orchestrator contacting the agent to say: let's set up a workload. The agent then requests a Keep from the CPU. The CPU creates the Keep and loads the Enarx runtime. With that empty Keep loaded, we take a measurement of the Keep and attest the Enarx runtime, verifying that it's running and set up correctly.
We confirm it's what we expected: basically empty, with no malware or other unexpected things loaded into it. If that verifies, we send the code and data, the algorithms, whatever, in encrypted form; they go to the CPU, the CPU delivers them into the Keep, and the workload executes.

So, the demo: we have a server and a client, and what you would see is an attestation handshake, then the code and data delivered encrypted and executed in the Keep. Let's see... all right, I'm going to resize that, it got really fuzzy... okay, why is this not working... Of course the one time I do a demo and it's recorded, it breaks. So let me tell you what you would have seen; next time I'll bring screenshots. It's a very simple algorithm: take three, take four, add them together, send it to the Keep. You'd see a series of about seven lines, matching the process we just walked through: the request going up to the Keep, the attestation that the Keep has been created, the verification that the Keep is empty, then an encrypted string of data, then the session ending and closing, and you'd basically get back the number seven. Then we run it a second time, and we run it a second time for two reasons.
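The flow just described, create a Keep, measure it, verify the measurement against a known-good value, and only then deliver the workload, can be mocked up roughly like this. Everything here (the class, the measurement constant, the `deploy` function) is invented for illustration and is not the Enarx API:

```python
# Toy simulation of the attestation flow: the workload is only delivered
# after the Keep's measurement proves it contains exactly the expected
# (empty) runtime. All names are hypothetical.

import hashlib

EXPECTED_MEASUREMENT = hashlib.sha256(b"enarx-runtime-v0").hexdigest()

class Keep:
    def __init__(self):
        # The CPU creates the Keep and loads the runtime into it.
        self.runtime = b"enarx-runtime-v0"

    def measure(self) -> str:
        # Hash of what is actually loaded; any malware or unexpected code
        # would change this value and fail verification.
        return hashlib.sha256(self.runtime).hexdigest()

    def execute(self, workload):
        return workload()

def deploy(workload):
    keep = Keep()                       # agent asks the CPU for a Keep
    if keep.measure() != EXPECTED_MEASUREMENT:
        raise RuntimeError("attestation failed: unexpected contents in Keep")
    return keep.execute(workload)       # only now is the workload sent in

# The demo workload: take three, take four, add them together.
result = deploy(lambda: 3 + 4)
print(result)  # 7
```

The key property being modeled is the ordering: verification of the empty Keep happens strictly before any code or data is delivered.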
First, you want to verify that it's reproducible to some extent. Second, the encrypted data we send is a different string each time, and that's important to understand because it shows we have perfect forward secrecy. It's not like you can capture the traffic, change the input from three-plus-four to three-plus-five, notice that most of the bytes are the same, and use that to work your way through the data. That's what we were trying to show; anyway, I apologize, and I'll have to find a way to make the demo more resilient. Here are the slides describing what we should have seen but didn't.

Just to review the design principles: this is all stuff we covered before, but it's really important that we do this and do it right, and part of that is doing it in an open, standards-based way that's very portable. I can't emphasize that enough. A lot of organizations out there don't have much of a history of being open, so we want everything to be as open as possible, with as much contribution from the community as we can get, not just so the security gets beaten up and hardened, but so it meets whatever needs the world has. I think I'm ending fast; I was hoping the demo would work, so I apologize for ending early. Basically, we need your help. There are a lot of great things out there: we have a website and a code repository, a Git repo, where you can see the plan and a bunch of great stuff. And I'd love to hear any questions you might have.
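The point about the ciphertext differing between the two demo runs can be illustrated with a toy nonce-based stream cipher: a fresh random nonce per session means identical plaintext never produces identical bytes on the wire. This is a teaching sketch built on a hash-derived keystream, not real cryptography and not Enarx's actual protocol; in practice you would use an authenticated cipher such as AES-GCM:

```python
# Toy demonstration: same plaintext, two sessions, two different ciphertexts.

import hashlib, secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a keystream from key + nonce (toy construction only).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    nonce = secrets.token_bytes(16)                    # fresh per session
    ks = _keystream(key, nonce, len(plaintext))
    return nonce, bytes(p ^ s for p, s in zip(plaintext, ks))

def decrypt(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    ks = _keystream(key, nonce, len(ciphertext))
    return bytes(c ^ s for c, s in zip(ciphertext, ks))

key = secrets.token_bytes(32)
msg = b"add 3 4"
n1, c1 = encrypt(key, msg)
n2, c2 = encrypt(key, msg)
print(c1 != c2)                     # True: same workload, different wire bytes
print(decrypt(key, n1, c1) == msg)  # True: each still decrypts correctly
```

This is why changing the input from three-plus-four to three-plus-five gives an attacker nothing to compare: even an unchanged input produces completely different ciphertext on every run.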
[Audience question] All right: there is no communication through there. We're trying to show that there is an agent, but the data is just passing through it, if I remember correctly; it isn't actually reading or doing anything with the data. Oh, how is it getting loaded? That's a good question. The CPUs have very widely varied implementations, beyond just the process-based versus virtualization-based split, and we're trying to abstract as much of that as possible. It's an ongoing challenge, I guess you'd say. One of the issues with SGX, for instance, is that it has its own instruction set, and, I'm probably not going to say this accurately, there's a huge amount of instruction set you have to load; you can't just run right on the CPU. So it has a number of issues that aren't well met today: a lot of overhead, and some development challenges with the way Intel is supporting it. Yes. Yeah, I think you're right; I think we need to clarify the slides.

[Audience question] You can attest the chip, yeah. I'm sorry, I couldn't quite understand you; could you repeat that? It will talk to the chip; it talks directly down the hardware stack on whatever silicon it's on. It's not virtualized to do networking across; the networking is outside the trusted compute base. Well, I'm not a developer, so I apologize, because this is going deep, but there is a way for us to attest to the particular silicon, through the silicon it's on, down to the CPU. I'm not exactly sure what the mechanism is, but I know it's part of the way Enarx is planned, part of the design principles.
So the idea is that you would have full confidence it's running on a certain chipset even if you have no physical access, regardless of geography. Whether you put it in a Google cloud and an employee there hacks a bunch of machines, or someone with physical access puts probes on the hardware, or you're running in China or Russia or down the street, it doesn't matter if there's a physical compromise of the machine, because everything used would be encrypted, the Keep would be verified empty and secure before anything was loaded, and then it would be unloaded and the Keep dismantled once the application finished its execution. I don't have the exact answer to your question of how you know; I know there are ways they're doing that, and it's being developed. It's not finished. Yes?
Yeah, we don't care. [Audience question] There is; there's what we'd call a whitelist/blacklist for trusting the CPU. The benefit is that as you're running your various Enarx Keeps, if you ever lost faith in either a CPU vendor or a particular series of CPUs, you could just say you don't trust them, whether permanently or temporarily while the confusion gets resolved, and knock out that chip or vendor so nothing would ever run there. Even running in a cloud, the only thing you'd need is for that cloud to offer other CPU options you're willing to pay for; otherwise you'd have to move those workloads to a cloud vendor that offers those CPUs. Yes?

Yes, you do have to recompile; it supports WebAssembly. We believe WebAssembly is going to be broadly supported, a bit like JavaScript, and that's part of the theme. One of our lead developers is actually on the WebAssembly standards body, so we're helping to shape it and we have a good understanding of where WebAssembly is going. I think well over 30 languages can now be compiled to WebAssembly, even Fortran and other older languages, so there's a lot of support out there. So yes, it's a step, but once you're in WebAssembly, support is really easy. Yes?

Yeah, these TEEs: Intel has had theirs out for a while in a variety of CPUs, and AMD also has some; theirs is newer, and I can't remember exactly when SEV came out. They each have their own specs for their instruction sets. Intel's is called SGX and AMD's is called SEV; did I not say SGX? Sorry, I'm mixing them up and slurring my speech.
Okay: SGX is the Intel one and SEV is the AMD one. [Audience question] Yeah, there's no reason you can't have multiple Enarx Keeps on the same CPU, as long as there are enough memory resources to handle it, and since you have the orchestrator, you could orchestrate that as well. I believe so; I mean, orchestration isn't the focus right now. We're trying to get it running fully on multiple CPUs, and keeping aware of other CPU manufacturers that might start offering enough capability so we can go laterally. Orchestration is something that comes further along in the feature set, once we have this out and running.

[Audience question] Yeah, you wouldn't set the Keep up as, to use an analogy, a container that just keeps running multiple different workloads and becomes a big bloated thing. It's meant more for microservices, where you execute some data, an algorithm, an application; once it's finished, it closes itself down. If you need to do something else, say, run a calculation a second time to verify the result is correct, that would be the reason to start a second Keep.

[Audience question] The software? No. If you chose to compile a ginormous application into WebAssembly, that's not on us; that's your choice.
We are explicitly trying to keep it very small; that's one reason we decided not to include the network stack. We care about performance, but performance is not the first and foremost consideration; we care about the confidentiality and integrity of the data first. As we develop the project further we will care about performance, because if there's, say, a 10 percent performance hit, I think many people would be fine with that, but a 3,000 percent hit because it doesn't scale would be unreasonable, and nobody would adopt it. So performance is important, but if we don't solve the data confidentiality and integrity piece and give users the confidence to actually use Enarx for that purpose, performance makes no difference. And some data is so critical that people will pay an extraordinary performance cost if they have to. We need to solve the primary use case first, then work from there.

Part of what we're doing is keeping the Enarx code very small and compact. We have some kernel experts, and we're always looking for more, basically building a super-micro-kernel. We're actually using a virtualization approach: it's going to act like virtualization, but not the way you're accustomed to running virtual workloads, where a massive virtualization load takes minutes to come up on a machine. We're talking about something with more of a container feel, where it's basically instant.
It's instantaneous when you execute it. I can't show the demo, but the demo was just instant, and the idea is that even when it's fully developed, getting the workload up and running will still be essentially instantaneous. The harder part is how long it takes to run the WebAssembly, whatever you compiled into WebAssembly, your application. That's where the cryptographic computation takes a toll, because any time you do crypto there's a performance hit, and there can be a big difference between different types of silicon.

[Audience question] When you say hardware-based solution, what do you mean? [From the audience:] With SGX it's hardware instructions: you tell the CPU, go decrypt this package of software you gave me, load it into secure memory, and make sure it's trusted. Whereas with SEV, any old process on the system can say: assign these pages the key I gave you and encrypt the memory, and the CPU doesn't care. Correct.

[Audience question] That's a complicated question, but I think the answer is yes. I think you're asking whether you have to explicitly trust Enarx. There is some level of trust you have to place in Enarx, because it is abstracting a lot of the hardware stack. We are saying trust Enarx, and we're trying to prove you can, except the demo failed. Though I will throw out a curveball: later, I think at two o'clock, there's a presentation on Keylime. One question that's been thrown around is whether Enarx works better with, or conflicts with, a solution like Keylime.
So Keylime, put simply, runs against the TPM on your chipsets and does boot-time and runtime attestation. There's actually a good use case for running both together; they don't conflict. The caveat is that you'd have to control the environment, or make sure that whoever runs that environment for you, the cloud provider, is running Keylime or some solution like it, to provide that level of boot and runtime attestation and add that extra layer of confidence and security.

[Audience question] Well, the compiling is into WebAssembly, right? Whatever software you have, you compile to WebAssembly, so first you'd need it in WebAssembly at all. I'm a little hazy on what your question was, but I don't think you would take a running Enarx Keep and migrate it. If you didn't want it in a particular location, a particular stack or cloud vendor or whatever, you'd have to kill the Keep and launch it somewhere else, or you'd say: I don't want to run any new Keeps here; I want my new Keeps in my lab, or on my private cloud, or only on Google cloud, or wherever. You validate the key that belongs to that system, and then you can create your workload. If you need a similar workload on a different system, you do the same thing there: attest it from what you've validated, and create the workload where it sits.
So it's not migration; yeah, that's part of the paradigm. You are not going to run the service here and then live-migrate it somewhere else. You can run multiple parallel instances doing the same work, or you can kill it here and fire it up there. It's a different model and a different use case. And you could use the orchestrator to change the rules: how you want to orchestrate, where you want to launch new workloads, or the whitelist/blacklist. If all of a sudden at 2 p.m. you saw a Krebs article on something and started thinking you don't trust Intel SGX, whatever series, and you wanted to blacklist it, you could put that in and say: kill all those loads and relaunch them only on the whitelisted CPUs going forward. You'd have to re-establish everything; you'd relaunch, because you'd want to restart the whole Keep and start it up with as much trust as possible, so if the whitelist or blacklist had changed, you'd want that considered on the new launches. If you had any issues, you'd kill the Keep and relaunch the workload instead of moving something live. This doesn't work for migration; you start new things on a new platform. Thank you.

[Audience question] I'll answer first and let Dimitri correct me, maybe. No, no, that's the language I speak best. As far as particular use cases, we've talked about some of the industries and regulations and so on, and if you really want to get detailed, it comes down to what the application is, what the size is, and all that. I think it's a little bit to be determined which really critical things are actually going to adopt TEEs first.
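The kill-and-relaunch model just described, never live migration, might be sketched like this; the orchestrator class and all its method names are hypothetical, not the Enarx orchestrator:

```python
# Sketch of "kill and relaunch, never live-migrate": when a CPU is
# distrusted, every Keep on it is destroyed and started fresh elsewhere,
# with a fresh attestation, rather than moved with its state.

class Orchestrator:
    def __init__(self, allowed_cpus):
        self.allowed = set(allowed_cpus)
        self.keeps = {}                 # keep_id -> cpu it runs on

    def launch(self, keep_id, cpu):
        if cpu not in self.allowed:
            raise ValueError(f"{cpu} is not on the allow list")
        self.keeps[keep_id] = cpu       # a fresh Keep, freshly attested

    def revoke(self, cpu):
        """Distrust a CPU: kill every Keep on it and relaunch elsewhere."""
        self.allowed.discard(cpu)
        for keep_id, where in list(self.keeps.items()):
            if where == cpu:
                del self.keeps[keep_id]             # kill: no state is carried over
                replacement = next(iter(self.allowed))
                self.launch(keep_id, replacement)   # restart with current policy

orch = Orchestrator(["intel-sgx-node", "amd-sev-node"])
orch.launch("payroll", "intel-sgx-node")
orch.revoke("intel-sgx-node")           # the 2 p.m. Krebs-article scenario
print(orch.keeps)  # {'payroll': 'amd-sev-node'}
```

The deliberate omission is any transfer of Keep contents: the replacement Keep starts empty and re-attests under the updated whitelist, which is exactly the trust property the talk argues for.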
There are a lot of ideas and a lot of potential cases. It is best suited for microservices, I agree, for things like that, because it's small, and as was said earlier about performance: a microservice is something very dedicated, running just as you need it and then gone. That's kind of idyllic, right? But the reality is, who knows. The world of security is kind of crazy; something could happen two weeks from now, people could panic, and for something like Enarx they might say: heck with it, let's recompile some big monolithic codebase, throw it in a Keep, and pay the crazy compute cost, because now we're covered and we don't have the exposure of this not being protected. I think that could happen, but I don't think anybody's realistically going to do it, except if they had to for a very short period of time, because it just doesn't make sense to me. I don't know; what are your thoughts?
[From the audience, roughly:] The page tables are able to index into a list of memory-encryption keys, so any process can just say: map me ten gigabytes of RAM, and that RAM range uses this key index. You can probably run that today, and the whole idea was that you'd run a QEMU process and it would pick a key index so that your entire VM would run in that encryption space, so that your host provider wouldn't be able to read your memory. So yeah, in a sense you can do that today.

[Audience question] So I think the Ansible use case, if I understood the question, would be a certain automation step where Ansible goes and calls out: start this Keep, grab this encrypted data set called blah, pull it in, move it in encrypted fashion, run the execution, take the result, move it out encrypted, deploy it encrypted, and then you access it later in some other process because you need it for something else.

One interesting thing about TEEs in general: obviously there are developers and organizations worried about how to deal with sensitive data and how to protect it, so there's that interest in TEEs. But the other side is cloud providers asking how they provide the confidence and assurance to their users that the data is uncompromised; it's both a concern of how to provide that level of trust and a concern to minimize any state actors possibly tampering with the data, or malign employees, or whatnot. There's also quite a bit of interest from certain regions in Asia. So thank you very much; if you want to talk further, I'm happy to talk outside. Thank you.
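That automation flow, start a Keep, pull in encrypted data, execute, and pass the result on still encrypted, could be sketched as follows. The function names and the toy XOR "encryption" are invented to keep the example self-contained; this is not an Ansible module and not the Enarx API:

```python
# Illustrative pipeline step: plaintext exists only "inside the Keep"
# (here, inside one function); input and output both travel encrypted.

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # Toy symmetric cipher (XOR with a repeating key); for illustration only.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def automation_step(encrypted_input: bytes, key: bytes) -> bytes:
    # 1. start this Keep (simulated by this function's scope)
    # 2. pull in the encrypted data set
    plaintext = xor_crypt(key, encrypted_input)   # visible only in the Keep
    # 3. run the execution (a stand-in workload)
    result = plaintext.upper()
    # 4. move the result out encrypted
    return xor_crypt(key, result)

key = b"secret"
blob = xor_crypt(key, b"payroll records")         # encrypted data set "blah"
out = automation_step(blob, key)
print(xor_crypt(key, out))  # b'PAYROLL RECORDS'
```

A later process that needs the result would decrypt it in its own Keep, so the data stays encrypted at every hop of the automation.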