Hi everybody, you're watching theCUBE's coverage of RSA 2023. We're here at Moscone West. This is day four. The week has flown by. I don't know, 40, maybe even close to 50,000 people this week in San Francisco. We're really excited to have Ayal Yogev, who's the CEO and co-founder of Anjuna Security, and Arvind Raghu, who's the worldwide business development and go-to-market strategy lead for EC2 core confidential computing at AWS. Cube alum, it's good to see you again. Thanks so much for coming back on. Thank you for having me. What an event, huh? We're back to 2019 levels, it feels like. I mean, maybe even more. I mean, there's more excitement, more startups. More people. It's good. It's good to be back, isn't it? Yeah, it's great. RSA is always like a big reunion and it's fun to see everybody, you know, people that you've worked with throughout the years. It's always a super fun event. It's incredible. You go out at night and you see people you haven't seen in a while. I just saw another friend of mine that I used to work with at IDC. It's really, you know, a good old reunion week. Ayal, why don't we start with Anjuna? Why did you start the company? What were the roots? Yeah, thanks for asking. So my background has always been in security. I've been doing that for over 20 years, and what I love about security is that the best security solutions are enablers, right? They enable you to do something that you couldn't do without the confidence and trust that security provides. And just one example is the firewall. The firewall is what enabled organizations to connect their internal networks to the internet for the first time. They wouldn't have done it without a firewall. And when I went into confidential computing, which is what we based the company on, it was very clear to me that this was going to be the next big enabler in security.
And it essentially enables organizations to take their data and their workloads and run them in any environment with complete security and privacy. And the first thing we're seeing now is this enables cloud transformation. Organizations can take their most sensitive data, any workload, and run it in the cloud. Yeah, so I wrote a piece a while ago, if you just Google AWS's secret weapon, it'll come right up. And it was really all about Nitro, and certainly Graviton was part of that, but the Annapurna relationship, Arvind, that started it. It's really interesting, the history of AWS and its EC2 and other sort of semiconductor design prowess. But I learned a lot last year at re:Inforce when we first met, just about your perspectives on confidential computing. I'd say you guys were, with Nitro Enclaves, really ahead of the game. Others have now sort of copied that, which is good, that's actually a good thing. But how would you summarize your perspectives on confidential computing? Sure, let's first define what confidential computing is so our listeners know what we're talking about. Depending on who you talk to, the definition changes a lot. But I think there's one thing everybody agrees on: confidential computing is about protecting data in use from any access. And everything we build is based on our conversations with customers. And based on those conversations, there are two distinct security and privacy dimensions that we have identified that pertain to confidential computing. Dimension one is where we're protecting customer code and data from operators on the cloud provider side. In this case, that's AWS. So if you are running on EC2 today, there's just no operator access from AWS to reach into your code or data that's running on the instance. That by default is satisfying one dimension of confidential computing already. So if you're running a Nitro-based instance, from our perspective, we are already providing confidential computing by default.
Now, if you are looking to further isolate that code and data, if you're running sensitive data (not all data is created equal; there's innocuous data and sensitive data) and you want to protect it even from yourselves, that's really when a solution like Nitro Enclaves comes into play, because it's providing additional isolation on top of what you're getting on a Nitro-based instance. And that satisfies dimension two, where you're concerned about protecting code and data from yourself. That's our perspective, our definition, how we look at confidential computing. And everything that we have built has been based on working backwards from customer requirements here. You know, I've heard some, what I consider ridiculous arguments, where somebody says, so I get it, you can protect from yourself, but then protecting from, for instance, Amazon, they say, well, don't you trust Amazon? I'm like, okay, but if Amazon's protecting you even from Amazon, is it even possible for them to get to it? Why wouldn't I take that if it's coming for free? So I heard that as a criticism of confidential computing one time. I'm like, that's the dumbest thing I've ever heard. Thank you for doing it. And yes, I trust you, but even better, I trust you more now. Thank you. But here's the thing, like you've got to trust somebody in the process, right? We're all trending towards zero trust, but it's an asymptotic curve. We'll get there at some point, but right now we're not there. You've got to trust somebody in the chain. You have this idea of making security sort of an accelerant, even an enabler, to your business. I think a lot of people might say, nah, security's all about risk reduction, lowering my expected loss. How can security become an enabler? Give us your perspectives on that.
Yeah, so specifically when it comes to confidential computing, when we talk to large organizations like banks, for example, roughly what we're seeing is that 10 to 15% of the workloads are already in the cloud. About 10 to 15% will probably never move to the cloud. But then there's this sort of 70 to 80% of workloads that they want to move to the cloud, and there's a lot of benefits to moving to the cloud. There's scalability, there's access to services that exist in the cloud. There's every reason why you'd want to move these to the cloud, but they're being blocked. And basically security and privacy are the number one and number two reasons why these moves are blocked, and it's concerns around data sovereignty, concerns about the privacy of the data. It could be PII data of customers, personally identifiable data of customers. So if you have the right security solutions where you don't have these concerns anymore, then security becomes an enabler. Then you can take the 70 to 80% of workloads that you want to move to the cloud but are being blocked, and security becomes the enabler. So really the use case that we're going to talk about here is moving data from on-prem into the cloud, data that you want to put into the cloud but you can't. Why? Because it's not encrypted, you're not protecting data in motion. Is that right? Can you explain that? Yeah, so the number one reason is, and it's what Arvind basically mentioned, the core sort of difference of the cloud versus on-prem is that the cloud is essentially somebody else managing the infrastructure for you. Right, on-prem it's your building, it's your servers, it's your people, you have control of the full stack, and even then, sometimes you don't want your admins to have access to your data. So it's to some extent true on-prem as well, but in the cloud it's even worse because it's a third party that's managing it.
And for banks, they worry about what happens if the government comes in with a subpoena and a gag order. Like, can they get access to my data? What happens if the cloud provider gets access to my data? You have third parties running on the same servers. So there's a lot of security and privacy questions that people have, and that's exactly where, if you have the right solution, where access to the infrastructure doesn't mean access to the data anymore, it just solves these concerns and enables those cloud migrations. And is the hard part the data in motion, or is it the full sort of spectrum? So the hard part has actually always been the data in use. Data in motion is now pretty much solved, you can encrypt it using HTTPS, and data at rest is solved, you can encrypt data when it's in a database or the file system, but the one piece that was always unprotected was data in use. What happens when the application needs to process the data? It decrypts it, loads it into memory, and at that point it's completely accessible. And to some extent that is sort of the core of why access to infrastructure gives you access to data. Even if you encrypt everything, the data sits in the clear in memory, and the encryption keys have to sit somewhere on the infrastructure. So if you have access to the infrastructure, you have access to the keys and therefore to everything. Once you close that gap, then finally you have full security and privacy on top of your environment. So the architecture isolates pretty much everything, right? Where is it? Maybe explain how traditional virtualization architectures were established and then what's different? That's a well-informed question, by the way.
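The data-in-use gap described here can be sketched in a few lines. This is a toy illustration, not anything from the interview: the XOR routine is a stand-in for real encryption, and the point is only that processing requires plaintext in memory while the key itself has to live somewhere on the infrastructure.

```python
# Toy sketch: encryption protects data at rest, but the application must
# decrypt it into memory to process it. XOR here is a stand-in for real
# encryption (AES-GCM, etc.) -- illustration only, not a secure cipher.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = bytes([0x5A] * 16)              # the key must sit somewhere on the infrastructure
record = b"account=12345;balance=100"

at_rest = xor_cipher(record, key)     # ciphertext on disk: protected at rest
assert b"12345" not in at_rest        # the sensitive field is not visible

in_use = xor_cipher(at_rest, key)     # decrypted for processing...
assert b"12345" in in_use             # ...and the plaintext now sits in memory
```

Whoever can read that process's memory, or the key on the infrastructure, gets the data, which is exactly the gap confidential computing closes.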
With the Nitro System, what we've done, as you pointed out already, versus traditional virtualization, is we abstracted away a lot of the virtualization functions, if you will, from the host. Functions like networking, storage, device models, management, security, all of that were pulled away from the actual EC2 host itself and sit on separate PCI cards in the Nitro System. And that is isolated from the actual host, which is running the virtual machines, which are your instances. And the only thing that's running on the host, in addition to all of that, is just the Nitro hypervisor, which is a very light hypervisor that takes care of ring-fencing these virtual machines. And that provides isolation between different tenants, and it also provides the additional isolation that we talked about with something like Nitro Enclaves. So that's the isolation model we built, but let's not get hung up on isolation, right? Because how we do it does not matter. What matters more is what we do for our customers. And what we're doing here is providing that end-to-end protection that I have talked about, right? There are three things you do with data. You store data, you move data, you process data. The protection for storing data and moving data has existed for a while. What we are now doing is extending the protection to data in use. Once it comes into the memory and once it gets unpacked and plaintext is revealed, how are you protecting it? Who are you protecting it from? That's really what matters. But sure, yes, the Nitro System itself provides that isolation. That's a very strong isolation model we built. And then everything else is a combination of both isolation and encryption. And the only entities that can get access to that are the ones that are trusted, right? That should have access to that, is that correct? That is correct.
And just to add a little bit more to that, with your Nitro instance itself, even if you trust us, we don't have access to your instances. So that's already taken care of. And then beyond that, you may have users, even admin-level users, who have access to your instances. And if you want to fence them out, that's where you provide that isolation. But yeah, in that scenario, you're not trusted, right? You're blocked, right? We are blocking ourselves out of there. Like, you know, the way I like to describe it, and this is my personal opinion here, but your data is radioactive to me. We don't want to get anywhere near it, right? And that's how we treat it. Yeah, yeah, it's like when somebody tells me, well, I have this confidential information. Don't tell me. I don't want it, you know? Yeah, I was telling somebody yesterday. Somebody walked up to me and asked me, hey, Arvind, how confidential is confidential computing? I said, I can't talk about it. Yeah. Right. So let's go through a customer use case, if we can. You guys talked about banks before. Yep. So I'm a bank. You said, let's say 10% of my data's in the cloud. I want to put, you know, another 30, 40% into the cloud. How do I do that? First of all, who's watching? How do you guys work together? What are the integrations? Let's go through the sort of use case. Yeah, maybe I can talk about a specific use case. We actually worked on it with a large bank. It's a top-20 bank. And the use case was super interesting. Basically, they had their core banking application running on a mainframe. And essentially, the mainframe became a bottleneck as more and more users started using the digital experience. And it caused them two issues. One was the latency started going up, beyond the SLA of what they promised their customers, because the mainframe was a bottleneck. And the other issue was the cost per transaction started going up in a massive way.
And what they decided to do is they wanted to use AWS and essentially put, call it a caching mechanism, in front of the mainframe to take care of these transactions and then sync it back to the mainframe, you know, once a day, just to sync it back to the core banking application. So they built all that. They got it ready to put on top of AWS, but then security and privacy got in the way. And the reason it got in the way is because this caching mechanism is processing PII data. And basically both internal regulations and external regulations (there's a regulation from the Bank of England's PRA that talks about protecting data when using the cloud) blocked them from putting it in the cloud. And that's when we come in with Anjuna. So obviously AWS provides Nitro Enclaves to help solve this problem. The issue that many customers run into is that if you want to use Nitro Enclaves, you have to refactor the application. You have to rebuild it, you know, for the Nitro system, and that's where Anjuna comes in. We essentially built a software stack to make it super simple to take any workload, any piece of data, and run it in confidential computing environments without any changes. So you don't need to change the code, no need to recompile. And that's exactly what we've done with this bank. We enabled them to take that caching mechanism and put it in AWS with no changes. In terms of what they achieved, one is the cost per transaction went down significantly, and the latency went down from over two seconds to about 40 milliseconds. Why was the mainframe a bottleneck in that scenario? Can you explain that? Yeah, because everything has to eventually get to the mainframe, all the data has to be stored there. And then as the number of transactions went up, they have about 50 million users that were using the digital experience, everything had to go through that one bottleneck to the mainframe.
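The caching pattern the bank used can be sketched roughly like this. All class and method names here are hypothetical; this only models the general idea of a cloud cache absorbing transactions and syncing back to the system of record once a day, not Anjuna's or the bank's actual implementation.

```python
from datetime import datetime, timedelta

class MainframeStub:
    """Stand-in for the core banking system of record (hypothetical)."""
    def __init__(self):
        self.balances = {"acct-1": 100}
    def read(self, acct):
        return self.balances[acct]
    def bulk_sync(self, updates):
        self.balances.update(updates)

class CloudCache:
    """Cache-aside layer: serve transactions from memory, sync back daily."""
    def __init__(self, mainframe, sync_interval=timedelta(days=1)):
        self.mainframe = mainframe
        self.cache = {}
        self.dirty = {}                            # pending writes to sync back
        self.sync_interval = sync_interval
        self.last_sync = datetime.utcnow()

    def read(self, acct):
        if acct not in self.cache:                 # miss: one trip to the mainframe
            self.cache[acct] = self.mainframe.read(acct)
        return self.cache[acct]

    def write(self, acct, balance):
        self.cache[acct] = balance                 # absorb the write in the cache
        self.dirty[acct] = balance

    def maybe_sync(self, now=None):
        now = now or datetime.utcnow()
        if now - self.last_sync >= self.sync_interval:
            self.mainframe.bulk_sync(self.dirty)   # batch back to the mainframe
            self.dirty.clear()
            self.last_sync = now
```

Because the cache serves reads and absorbs writes, only the daily batch hits the mainframe, which is where the latency and cost-per-transaction improvements come from; the catch, as discussed, is that this layer holds PII in the cloud and therefore needs confidential computing protection.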
And the two options they had were either scaling the mainframe, which would have been about a $100 million project. Okay, obviously they didn't want to spend $100 million. Yeah, it's basically scaling the mainframe. Now, was data coming in from different locations, or was it? Yeah, it's the core banking application, everything eventually gets to the mainframe. Right, okay, so they would have had to buy another, whatever, Z20 or something. Exactly. $100 million it would have cost. Those were the numbers they shared with us, which is obviously not something they wanted to do. And the cloud is the perfect solution for these types of use cases if you have the right security. That's interesting, because over the years I have talked to a lot of banking customers and mainframe customers, and they would buy a new mainframe, sight unseen, because they could make a business case on it. But if you can use the cloud as a caching layer, the business case is way more attractive. Yes, exactly. For this use case anyway. Exactly. And you said latency went down to, you said two milliseconds? 40 milliseconds, from about two seconds. So it's 40 milliseconds from two seconds. And the cost per transaction, can you repeat that one? Yeah, the cost per transaction, basically when the mainframe became a bottleneck, the cost per transaction went up significantly. And that just went down with the caching mechanism. Okay, so you also mentioned that prior to Anjuna, you would have to make changes to the application in order to take advantage of confidential computing. Is that right? Did I understand that correctly? Maybe I'll add a little bit of color to that. Yeah, has that been a blocker? Yeah. Please. It's not necessarily a blocker. It's the considerations that you have to take into account when you're using a technology like Nitro Enclaves, right? Are you building your application from the ground up? Are you trying to repurpose what you have?
If you're trying to repurpose what you have, that's really where you're thinking about, hey, how do I lift and shift this without making any changes, without refactoring this, right? Whereas if you're building it from the ground up, then you know the rules you have to work with. But regardless of which situation you're in, if you don't want to use a building block and build everything all by yourself, Anjuna can actually speed it up for you by reducing your time to market, because they have a solution that can help move it faster than you building it yourself. So yes, there's refactoring involved, but that's involved when you already have code that you want to repurpose. Is there an analog with, like, Graviton, for instance? In order to take advantage of Graviton, you really want to optimize. You've got to think about how to take advantage of it. Is that similar, or is it different? Like, can I run existing workloads, it's just that I can't necessarily have them optimized for confidential computing, or do I actually have to make changes to the code to take advantage of it? Well, if you're using existing code, you do have to make changes. But Graviton may not be the best example to compare here, because oftentimes what we find is customers who are migrating to Graviton are either moving all the way from on-prem into the cloud and directly onto Graviton, or from, say, x86 instances onto Graviton. So the considerations that are in play over there are a little bit different from the considerations that we have here. Okay, but so the bottom line is that there are things that have to be done for existing apps that you obviously don't have to do for greenfield apps. How do you do that? What's the magic inside the covers? So generally, as Arvind said, today if you want to use Nitro Enclaves, you have to recode the application.
If you're already in the process of recoding, maybe you want to do this, but in many cases the engineers are not necessarily security experts, and you want to kind of let them do what they do best, and then you can sort of come in after the fact and run everything in Nitro Enclaves with no changes, which is the solution we provide. Not to mention, if you have any legacy application, things that you already wrote and don't want to change, or third-party applications that you want to run in Nitro Enclaves, this is exactly where we can come in and help. The way we do that, and maybe just to go one step deeper into why you have to recode: with Nitro Enclaves, you basically have sort of two pieces. There's a piece running inside the enclave, which is sort of the core of what the application does, and that's protected, and then you have all the communication that's happening with the outside. And obviously that's something that you don't want to enable from within the Nitro Enclave itself. So essentially you have to almost break the application in two, a piece that's doing the core of what the application does and a piece communicating with the outside, and then you have this connection between the two. And that's what you need to do yourself if you refactor the application. Or with us, what we do is we take the entire application, we run it inside the Nitro Enclave, and then we become the layer that sees, oh, the application is trying to communicate with the outside, and we're going to take that and make sure that it's being handled properly as it moves to the outside. So again, it's handled completely from the outside with no changes. So it sounds like a perfect partnership. I mean, you guys love this because you get more data into the cloud faster, less friction. What kind of integrations do you have to do? What have you done, and what do you need to do in the future?
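The enclave split described here (core logic inside, outside communication proxied by a runtime layer) can be modeled conceptually. A real Nitro Enclave talks to its parent instance over a vsock channel; this sketch substitutes an in-process queue, and every name in it is hypothetical, not Anjuna's or AWS's actual API.

```python
import queue
import threading

def unmodified_app(send):
    # The original application logic, running unchanged "inside" the enclave.
    result = sum(range(10))          # pretend this processes sensitive data
    send(f"result={result}")         # it simply tries to talk to the outside

class EnclaveRuntime:
    """Hypothetical runtime layer: intercepts the app's outbound traffic and
    forwards it to the parent (a real enclave would use a vsock channel)."""
    def __init__(self):
        self.channel = queue.Queue()   # stand-in for the enclave/parent channel

    def run(self, app):
        # Run the app with its "send" transparently wired to our channel,
        # so the app itself needs no refactoring.
        t = threading.Thread(target=app, args=(self.channel.put,))
        t.start()
        t.join()

    def parent_recv(self):
        return self.channel.get(timeout=1)

rt = EnclaveRuntime()
rt.run(unmodified_app)
assert rt.parent_recv() == "result=45"
```

The design point being modeled: the application is unaware of the split, and only the runtime layer knows how traffic crosses the enclave boundary, which is why no recoding is needed.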
Yeah, so that's actually where I think additional benefits come into play, because obviously we've partnered very closely with AWS to be able to connect this to the Kubernetes service of AWS and to the key management service of AWS. And basically we integrate with the different systems and solutions AWS provides to fit it into an enterprise environment. Because a lot of these, again, large banks, large financial services, large organizations are using all these services, and fitting into the ecosystem of a large company is obviously very, very complex. Which is one of the challenges that we see when people try to build it themselves. It's not just about recoding the application, which is usually hard enough on its own. It's integrating with all these other solutions that is also a challenge. And we've basically taken all that, you know, pain away, just to make it super simple. I have to say, I've always been skeptical of Amazon's, you know, mainframe migration program. I've been very skeptical of that. It's hard to migrate COBOL. But, and maybe you have a different perspective on it, Arvind, I love the fact of being able to extend the useful life of my existing mainframe and save a hundred million dollars. That's real business value to me. Yeah, we see this often with traditional banking institutions, right? They have APIs that have worked for a decade. Nobody wants to go touch them. They know they're working. They know they're secure. Now they just want additional security enhancement there, right? And this is where I want to connect the dots with the earlier question you asked, like, how is security an enabler? How do you look at this? PII is not new to the cloud. It's existed in the cloud. But there are a lot more regulatory compliance pressures, everything that these institutions are starting to, you know, have to consider as they move more and more workloads into the cloud.
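One way the key management integration can work, in very simplified form: release of a data key is gated on an attested measurement of the enclave image. Real Nitro Enclaves present signed attestation documents with PCR values that an AWS KMS key policy can check; this toy model replaces all of that with a plain hash comparison, and every name in it is illustrative.

```python
import hashlib
import secrets

# Pinned measurement of the approved enclave image (hypothetical value).
EXPECTED_PCR0 = hashlib.sha384(b"enclave-image-v1").hexdigest()
DATA_KEY = secrets.token_bytes(32)

def measure(image: bytes) -> str:
    # Stand-in for the platform measuring the enclave image at boot.
    return hashlib.sha384(image).hexdigest()

def release_key(measurement: str) -> bytes:
    # Stand-in for a key policy conditioned on the attested measurement:
    # only the expected enclave code is allowed to receive the key.
    if measurement != EXPECTED_PCR0:
        raise PermissionError("attestation failed: unexpected measurement")
    return DATA_KEY

# The approved image gets the key; a tampered image is refused.
assert release_key(measure(b"enclave-image-v1")) == DATA_KEY
try:
    release_key(measure(b"tampered-image"))
    raise AssertionError("tampered image should have been refused")
except PermissionError:
    pass
```

This is the mechanism that makes "access to the infrastructure doesn't mean access to the data" concrete: the key is only ever released to code whose measurement matches what was approved.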
And that's really where security becomes an enabler, where by deploying these solutions, by ensuring the isolation, by ensuring there's no operator access, there's no insider access and whatnot, these workloads are coming to life in the cloud. The data itself has been there. But what you do with it and how much more you can do with it, that's really the enabler that security is. I think it's a really legitimate, defensible premise that you're making, security as an enabler, because you think about when you're developing a new application and you're all excited and you're focused on the functionality, you're working backwards from what the outcome is. Honestly, you're embedding certain aspects of security, but at the end of it all there's that last mile. You've got to get through compliance and audit and the security team, the SecOps, all that. And if you can remove that, that takes away friction, it's an accelerant to time to market, and I think there's a big win for customers there. So guys, congratulations. I'll give you the last word. Yeah, I think where this is eventually going is, once you have solutions like Nitro Enclaves, there's really no reason not to use it for everything. This is a layer of security. I think confidential computing is just going to become computing, kind of like how HTTPS became security for everything. I think this is where the world is going. I think this is the right direction the world is going in. And I think it's going to make all of us more safe and secure. Yeah, it's a little inside baseball here, but super important. Guys, thanks so much for coming on theCUBE. It was great to see you again. Thank you, Dave. Thank you, Dave. All right, enjoy the rest of the show. All right, keep it right there. This is Dave Vellante. John Furrier and I will be back right after this short break from RSA 2023. We're in Moscone West and we're live. Right back.