theCUBE presents KubeCon and CloudNativeCon Europe 2022, brought to you by Red Hat, the Cloud Native Computing Foundation and its ecosystem partners.

Welcome to Valencia, Spain and KubeCon, CloudNativeCon 2022 Europe. I'm Keith Townsend along with Paul Gillin, Senior Editor, Enterprise Architecture for SiliconANGLE. We're going to talk to some amazing folks in our day two coverage of KubeCon, CloudNativeCon. Paul, we did the wrap up yesterday. Great back and forth with Enrico about yesterday's sessions. What are you looking forward to today?

I'm looking to understand better how Kubernetes is being put into production, the types of applications that are being built on top of it. Yesterday we talked a lot about infrastructure. Today I think we're going to talk a little bit more about applications, including with our first guest.

Yeah, speaking of our first guest, we have Manish Devgan, Chief Product Officer at Hazelcast. Hazelcast has been on the program before, but this is your first time on theCUBE, correct?

It is, Keith.

Well, welcome to theCUBE. So, we're talking data, which is always a fascinating topic. Containers have been known for not being supportive of stateful applications; at least the traditional thought is that you shouldn't hold stateful data in containers. Tell me about the relationship between Hazelcast and containers, since we're at KubeCon.

Yeah, so a little bit about Hazelcast. We are a real time data platform, and we're not a database, but a data platform, because we basically allow data at rest as well as data in motion. So you can imagine that if you're writing an application, you can basically query and join data coming in as events as well as data which might have been persisted. So you can do both stream processing as well as low latency data access. And this platform, of course, is supported on all the clouds, and we kind of delegate the orchestration of this kind of scale-out system to Kubernetes.
And that provides us resiliency and many things which go along with that.

So you say you're not a database platform. What do you use to manage the data?

So we are memory first. We started with low latency applications, but then we realized that real time has really become a business term; it's more of a business SLA. The opportunity we see, the punctuated change which is happening in the market today, is about real time data, about access to real time. There are real time applications that customers are building around real time offers, real time threat detection. Just imagine one of our customers like BNP Paribas: they basically originate a loan while the customer is banking. So you are at an ATM, and you swipe your card and you're asking for, you know, taking 50 euros out. And at that point, they can actually originate a custom loan offer based on your existing balance, your existing request and your credit score in that moment. That's a value moment for them. And they actually saw loan originations go up 400% because of that, because nobody is going to be thinking about a line of credit after they are done banking. So it's in that value moment. Our data platform allows you to have fast access to data and also to process incoming streams. So not after they get stored, but as they are coming in.

So if I'm a developer, and KubeCon is definitely a conference for developers, and I come to the booth and I hear, well, that's the end value. I hear what I can do with my application. I guess the question is, how do I get there? If it's not a database, how do I make a call from a container, from a microservice, to Hazelcast? Do I think of this as a CNI or a CSI? How do I access Hazelcast?

So our server is actually built in Java, and a lot of the applications that get written on top of the data platform are basically accessing it through Java APIs.
Or if you have a .NET shop, you can actually use .NET APIs. So we are basically an API-first platform, and SQL is basically the polyglot way of accessing data, both streaming data as well as stored data. Most of the application development, a lot of it, is done in microservices, and they're doing these fast gets and puts for data. So they have a key, they want to get to a customer, they give a customer ID. And the beauty is that while they're processing the events, they can actually enrich them, because you need contextual information as well. Going back to the ATM example: that event happened, somebody swiped the card and asked for 50 euros, and now you want more information, like credit score information. All that needs to be combined in that value moment. So we allow you to do those joins, and the contextual information is very important. You see a lot of streaming platforms out there which just do streaming. But if you're an application developer, like you asked, you basically have to call out to a streaming platform to do streaming analytics, and then do another call to get the context of that: what is the credit score for this customer? Whereas in our case, because the data platform supports both streaming as well as data at rest, you can do that in one call. And you don't want the operational complexity of standing up two different scale-out servers; it's humongous, right? You want to build your business application.

So you are querying streaming data and data at rest in the same query?

Yes, in the same query. And we are memory first. So what happens is that we store a lot of the hot data in memory; we have a scale-out, RAM-based server, and that's where you get the low latency from. In fact, last year we did a benchmark. We were able to process a billion events a second with 99% of latencies under 30 milliseconds.
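As an editor's illustration of the fast get/put pattern Devgan describes, here is a minimal sketch using the Hazelcast 5.x Java API. The map name, key, and value are made-up, and for simplicity it starts an embedded member; a real microservice would typically connect with `HazelcastClient.newHazelcastClient()` against a remote cluster instead.

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

public class FastGetPut {
    public static void main(String[] args) {
        // Embedded member for demonstration only; a microservice would
        // normally use HazelcastClient against a remote cluster.
        Config config = new Config();
        config.setClusterName("dev");
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);

        // Fast, key-based put and get against the in-memory store:
        // the kind of contextual data you would join with an incoming event.
        IMap<String, Integer> creditScores = hz.getMap("credit-scores");
        creditScores.put("customer-42", 780);
        System.out.println("score=" + creditScores.get("customer-42"));

        hz.shutdown();
    }
}
```

The same map is also reachable through the SQL interface mentioned above, so the key-based access and the polyglot SQL access hit one and the same in-memory store.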
So that's the kind of processing and the kind of power, and the most important thing is determinism. If you look at what real time is, it's about predictable latency at scale, because ultimately you're adhering to a business SLA. It's not about milliseconds or microseconds; it's what your business needs. If your business needs you to deny or approve a credit card transaction in 50 milliseconds, that's your business SLA, and you need that predictability for every transaction.

So talk to us about how this is packaged and consumed, because I'm hearing a bunch of servers and RAM. I'm hearing numbers that we're trying to abstract away from at this conference. We don't want to see the underlay, we just want to use it.

Yeah, so we kind of take away that complexity of managing this scale-out cluster, which actually utilizes RAM from each server. You can configure it so that the hot set of data is in RAM, while the data which is not so hot can go into a tiered storage model. So we are memory first. But what you're doing is simple: it's an API, so you basically do CRUD, right? You create records, you read them through SQL. So for you, it's kind of like how you would access a database. And real time is also a journey. A lot of customers don't want to rip out their existing system and deploy another kind of scale-out platform, right? So we see a lot of these use cases where they have a database, and we can sit in between the database, the system of record, and the application. So we are kind of in between there. That's a journey you can take to real time.

How do containers and Kubernetes change the game for real-time analytics?

Yeah, so Kubernetes does change it, because first of all, we service most of the operational workloads.
A lot of our customers, we have most of the big banks and credit card companies in financial services, and retail; those are the two big sectors for us. First of all, a lot of these operational workloads are moving to the cloud, and with the move to the cloud, they're taking their existing applications and moving to one of the providers. And in orchestrating this scale-out platform, which does auto scaling, that's where the benefit comes from. It also gives them freedom of choice: Kubernetes is a standard which goes across cloud providers. So that gives them the benefit that they can actually take their application and, if they want, move it to a different cloud provider, because we take away the orchestration complexity in that abstraction layer.

So what happens when I need to go really fast? I'm looking at bare metal, and I'm looking at really scaling a homogeneous application in a single data center, or a set of data centers. Is there a bare metal play?

Yes. If you want microsecond latency, we have customers who actually store two to four terabytes in RAM. Again, it depends on what kind of deployment you want. You can either scale up or scale out. Scaling up is expensive, because those boxes are not cheap. But if you have a requirement like that, where there is a sub-millisecond or microsecond latency requirement, you could actually store the entire data set. A lot of operational data sets are under four terabytes, so it's not uncommon that you could take the entire operational, transactional data set and move it into pure RAM. But I think now we also see that with these operational workloads, there's a need for analytics to be done on top as well.
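To make the orchestration point concrete, here is a hedged sketch of how a Hazelcast member can be configured to discover its peers through the Kubernetes API rather than multicast. The namespace and service name are placeholder assumptions, not values from the interview.

```java
import com.hazelcast.config.Config;

public class K8sDiscovery {
    // Builds a member configuration that discovers cluster peers via the
    // Kubernetes API instead of multicast.
    public static Config memberConfig() {
        Config config = new Config();
        config.getNetworkConfig().getJoin().getMulticastConfig().setEnabled(false);
        config.getNetworkConfig().getJoin().getKubernetesConfig()
              .setEnabled(true)
              .setProperty("namespace", "default")        // placeholder namespace
              .setProperty("service-name", "hazelcast");  // placeholder k8s Service
        return config;
    }
}
```

Deployed in a pod, `Hazelcast.newHazelcastInstance(memberConfig())` would then join the other members exposed by that Service; in practice many teams drive this through the official Helm chart or the Hazelcast Platform Operator rather than hand-writing the config.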
Going back to the example I gave you: this customer is not only doing stream processing, they're also inferencing a machine learning model in the same cycle, in the life cycle. So they might have trained a machine learning model on a data lake somewhere, but once they're ready, they're actually inferencing the ML model right there in our life cycle. So that really brings analytics and transactions together, because after all, transactions are where the real insights are.

Yeah, I'm struggling a little bit with these two different use cases, where I have a transactional database, or a transactional data platform, alongside an analytics platform. Those are two different things. I have, you know, spinning rust for one, and then I have memory and NVMe for another, and that requires tuning, requires DBAs, it requires a lot of overhead. There seems to be some type of secret sauce going on here.

Yeah, so we basically say that if you have a business case where you want to make a decision, the only chance to succeed is where you are not making a decision tomorrow based on today's data, right? The only way to act on that data is today. "Act" is the keyword here. We actually let you generate a real-time offer. We let you do credit card fraud detection in that moment. Analytics is about knowing, less about acting on it, right? Most of our applications are mission-critical; they are acting in real time. When you talk about the data lakes, there's actually a real-time aspect there as well, but it's about knowing, and we believe that the operational side is where that value moment is. What good is it to know tomorrow that something wrong happened?
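The inference-in-the-stream pattern described above can be sketched with Hazelcast's pipeline API. Everything specific here is an editor's assumption for illustration: the "model" is a stand-in lambda (a real deployment would load a trained model inside the service factory), and the test items stand in for transaction amounts.

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.ServiceFactories;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.test.TestSources;
import java.util.function.IntFunction;

public class InlineScoring {
    public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        p.readFrom(TestSources.items(40, 5000, 120))  // stand-in transaction amounts
         .mapUsingService(
             // Create the (stand-in) fraud "model" once per member and share it
             // across the stage; a real job would load a trained model here.
             ServiceFactories.sharedService(ctx ->
                 (IntFunction<Double>) amount -> amount > 1000 ? 0.9 : 0.1),
             (model, amount) -> amount + ":" + model.apply(amount))
         .writeTo(Sinks.list("scored"));

        Config config = new Config();
        config.getJetConfig().setEnabled(true);  // the stream engine is off by default
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        hz.getJet().newJob(p).join();            // batch source, so the job completes

        System.out.println("scored=" + hz.getList("scored").size());
        hz.shutdown();
    }
}
```

The design point the interview makes is visible here: the scoring step runs inside the same engine that holds the operational data, so no second call to a separate analytics system is needed.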
Yeah, so there's a latency squeeze there as well, but we are more on the transactional and operational side.

I gotcha. So, help me understand integrations. When I think of transactions, I'm thinking of SAP, Oracle, where the process is done, or some legacy banking, or not legacy, a new modern banking app. How does the data get from one of those platforms to Hazelcast so I can make those decisions?

Yeah, so the streaming engine we have has a whole bunch of connectors to a lot of data sources. In fact, most of our use cases already have data sources underneath: there are databases, there are Kafka connectors feeding us. Because if you look at it, events are comprised of transactions, something a customer did, a credit card swipe, right? And events could also be machine or IoT. So you really need connectivity and data ingestion before you can process that. We have a whole suite of connectors to bring data into our platform.

We've been talking a lot these last couple of days about the edge, and about moving processing capability closer to the edge. How do you enable that?

Yeah, so edge is actually very relevant, because of what's happening. If you look at an edge deployment use case, we have a use case where data is being pushed from these different edge devices to a cloud data warehouse, right? But just imagine that you want to be filtering data where it is being originated, and you want to push only relevant data to maybe a central data lake where you might want to train your machine learning models. At the edge, we are actually able to process that data. So Hazelcast will allow you to write a data pipeline and do stream processing so that you push only a part, a subset of the data, which matches the rules. I think with edge there's a lot of data being generated, and you don't want garbage in, garbage out.
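The edge-filtering idea, a small pipeline that forwards only the relevant subset of readings, can be sketched as follows. The threshold, the sample readings, and the list standing in for the central data lake are all illustrative assumptions.

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.test.TestSources;

public class EdgeFilter {
    public static void main(String[] args) {
        Pipeline p = Pipeline.create();
        p.readFrom(TestSources.items(3, 97, 54, 12, 88))  // stand-in sensor readings
         .filter(reading -> reading > 50)   // forward only "relevant" readings
         .writeTo(Sinks.list("forwarded")); // stand-in for the central data lake

        Config config = new Config();
        config.getJetConfig().setEnabled(true);
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        hz.getJet().newJob(p).join();

        // Only 97, 54 and 88 pass the filter, so three readings are forwarded.
        System.out.println("forwarded=" + hz.getList("forwarded").size());
        hz.shutdown();
    }
}
```

In a real edge deployment the test source would be replaced by one of the connectors mentioned above (Kafka, a database, an IoT feed), and the sink would write to the central store rather than a local list.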
Filtration is done at the edge so that only the relevant data lands in a data lake or something like that.

Well, Manish, really appreciate you stopping by. Real-time data is an exciting area of coverage for theCUBE overall. From Valencia, Spain, I'm Keith Townsend along with Paul Gillin, and you're watching theCUBE, the leader in high-tech coverage.