Well, hello everyone, John Walls here on theCUBE as we continue our CUBE Conversations as part of the AWS Startup Showcase, with Ariel Assaraf, the CEO and co-founder of Coralogix, based in Tel Aviv. And Ariel, thanks for joining us, especially under these trying circumstances. I'm sure many people watching fully appreciate what's going on in Israel right now, with the bombings that are happening on a perpetual basis. I just hope you and your family, friends, and coworkers are doing well and staying as safe as possible.

Thank you very much, John. Yeah, it is a surreal period of time, where we're in the office and occasionally going to the shelter for a couple of minutes, and then getting back to coding and planning. So yeah, thank you.

Well, certainly take care. You're very much in our thoughts and in our hearts right now, and we wish you safety and all the best. Let's talk about Coralogix, though. This is obviously your baby, entering the wild world of data these days, this exponential growth of data. You and I were talking about the untapped potential of data a little before the interview. So let's talk about the genesis of Coralogix, where you came up with this concept, and then the unique platform you've now established to help your clients make sense of the vast reams of data they have at their disposal.

Yeah, I think that's a very interesting topic that a lot of companies are now starting to address, each one from its own angle. We decided to go with the real-time streaming analytics approach. The problem starts with data growing exponentially, like you mentioned, but it's not just growing exponentially; it's growing faster than revenue.
What happens is that companies that are bound to the cost of data get to a point where their margins and their unit economics are being slaughtered by the amount of data they need to analyze, whether it's for BI or marketing, and certainly observability, which is probably the largest data producer inside the organization. What companies typically do is start to cherry-pick data: they only collect relevant information, or only collect errors, or only collect specific servers or specific environments. And that causes the statistic you just mentioned from the MIT research, showing that 99.5% of the data remains untapped or unanalyzed.

When we looked at it, we thought that you want to monitor that data at a high level. You want to analyze it, automatically or manually, or visualize it with good performance. The approach that existed, and still exists, in the market today is to use storage tiers, but then you have to compromise the quality and the speed of analytics. We chose instead, unlike everyone who ingests, indexes, stores, and only then analyzes, to ingest and analyze everything in real time, including the most stateful transformations and stateful analytics, and only then store what matters. That way we give broader coverage and allow companies, economically and also in terms of scale, to send everything, get the full analytics layer they need, and basically improve both their business and their performance.

You know, it sounds so sensible. It sounds so simple too, right? We're just going to analyze data as it comes in, in real time. We'll make sense of it, we'll process it, we'll make it actionable, and boom, off we go. But obviously, as you know, this is an extraordinarily complex series of operations, especially now in the microservices world, right? Because you have all these inputs and all these instances happening simultaneously in different environments.
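The "analyze everything in real time, then store only what matters" flow Ariel describes can be sketched roughly like this. This is a hypothetical Python illustration, not Coralogix's actual implementation: analytics runs over 100% of the stream, while only events a rule flags as significant reach (simulated) long-term storage.

```python
from collections import Counter

def analyze_then_store(events, is_significant, storage):
    """Analyze every event in-stream; persist only the significant ones."""
    stats = Counter()
    for event in events:
        stats[event["level"]] += 1          # analytics sees every event
        if is_significant(event, stats):    # e.g. only errors matter
            storage.append(event)           # storage sees a small fraction
    return stats

# Usage: count everything, store only errors.
storage = []
events = [{"level": "info"}] * 98 + [{"level": "error"}] * 2
stats = analyze_then_store(events, lambda e, s: e["level"] == "error", storage)
```

With 98 info lines and 2 errors, the analytics state still reflects all 100 events while only the 2 errors are written out, which is the cost inversion described above.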
So untangle that for me a little bit, in terms of microservices now, the complexity that creates, and your approach to that.

Yeah, so two things happen. One, there are more services in each company. Two, there are more versions uploaded to each service every day. So the world of CI/CD combined with the world of microservices creates a lot of uncertainty. On one hand, that's great, because you're more decoupled: you can go faster, be faster to market, respond to the market faster. You can analyze data in specific units, which allows you more flexibility, and you can release a lot more. On the other hand, the triage gets much harder: figuring out which specific microservice is causing a problem, monitoring the communication between different microservices, and certainly understanding which version broke something. A lot of software problems come after upgrades or configuration changes, and these two factors together generate a lot of data that you need to start monitoring and analyzing.

Now, like you're saying, analyzing in real time has been done in the past. That doesn't sound too complex, but the answer from real-time streaming was only applied to stateless things, meaning: let me know when you see something. You see an event, send me an alert. You see a metric, send me an alert. What happens is that it's still missing the longer-term analytics. So it's something of an oxymoron to say, on one hand I'm doing real-time streaming, and on the other hand I want to give you analytics that rely on long-term state. Let me know if something happened more than it did last week. Let me know if something happened for the first time this month. Cluster the data based on a learning algorithm that learns the data continuously, throughout the entire history of time.
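The stateful streaming rules mentioned here ("first time seen", "more than last week") can be sketched as a processor that keeps long-term state in memory, so those questions are answered in-stream instead of by querying a storage tier. The class and field names below are illustrative assumptions, not Coralogix's API.

```python
from collections import defaultdict

class StatefulRules:
    """Toy stateful stream rules: first-occurrence and week-over-week checks."""

    def __init__(self):
        self.seen = set()                                    # every event type ever observed
        self.weekly = defaultdict(lambda: defaultdict(int))  # week -> event type -> count

    def process(self, week, event_type):
        """Ingest one event and return any triggered alerts."""
        alerts = []
        if event_type not in self.seen:
            alerts.append(f"first occurrence of {event_type}")
            self.seen.add(event_type)
        self.weekly[week][event_type] += 1
        if self.weekly[week][event_type] > self.weekly[week - 1][event_type]:
            alerts.append(f"{event_type} exceeds last week's count")
        return alerts

rules = StatefulRules()
first = rules.process(1, "timeout")        # flags the first-ever "timeout"
```

Because the state lives with the stream processor, no historical storage lookup is needed to evaluate either rule, which is the contrast with storage-as-state drawn in the next answer.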
And this is Streama, the technology that we created, which does the real-time streaming but also involves components that store the state of the system at any given point in time. While other solutions or other approaches use the storage as the state, so if I want to know what happened a week ago, I just go to the storage and see what's in there from last week, we hold a snapshot of the state of everything relevant, whether we discovered it automatically or the customer defined it, and make sure that our customers can go back in time and compare versions, or compare metrics, or compare graphs, and see how specific versions affected specific microservices, and how specific microservices affected their entire production system.

So help me out here, just in terms of cost efficiency then. If you're not eliminating storage, but you're kind of shifting responsibilities here, or shifting process a little bit, right, and making it maybe a little more accessible on a real-time basis, what kind of cost efficiencies do you get out of that, in terms of not having to go to storage for everything and dig everything out from a week or two weeks or a month ago?

Yeah, that's a great question. It affects multiple areas. First of all, storage is one of the areas you can least optimize, because it is what it is. Besides compression, which was invented years ago and where we're pretty much maxed out, there are not a lot of ways to really save on storage. So what companies do is try to put it on lower-tier storage, but then you lose performance. What we do is bound ourselves only to CPU, and CPU, when you do analytics, you can improve and optimize to the max, and get to a point where you auto-scale, analyze all the data in real time, get better results, and continuously improve your code and your microservices in a way that makes them more efficient. We're talking about roughly 75% savings when we compare that to the closest solution in our space.
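The "hold a snapshot of state" idea, as opposed to using storage as the state, can be pictured as freezing the current per-service metrics at each deployed version and diffing two frozen copies later. This is a purely hypothetical sketch; the names and the error-rate metric are assumptions for illustration.

```python
import copy

class VersionSnapshots:
    """Toy snapshot store: freeze live per-service state at each version."""

    def __init__(self):
        self.current = {}    # live state, e.g. error rate per service
        self.snapshots = {}  # version tag -> frozen copy of that state

    def update(self, service, error_rate):
        self.current[service] = error_rate

    def snapshot(self, version):
        self.snapshots[version] = copy.deepcopy(self.current)

    def compare(self, v_old, v_new):
        """Per-service change between two versions' snapshots."""
        old, new = self.snapshots[v_old], self.snapshots[v_new]
        return {svc: new[svc] - old.get(svc, 0.0) for svc in new}

vs = VersionSnapshots()
vs.update("checkout", 0.01)
vs.snapshot("v1")
vs.update("checkout", 0.05)
vs.snapshot("v2")
diff = vs.compare("v1", "v2")   # how each service changed between versions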
But it's more than that, actually. We believe that at the end of the day, the storage approach is not going to be feasible, because storage doesn't scale well. With CPU, any capacity you add gets you better performance, faster performance, more power. But when you increase storage size, when you store more data for a longer term, you actually lose performance: it's slower, it's more cluttered. So what happens is that companies that need long-term analytics, one, have to use the storage, they can't do it in real time, and two, have to keep that storage for a very long period of time. So it grows exponentially, and we believe, because data grows exponentially, and many of our users are engineers who understand exponential growth, that it's going to get to a point where it's almost impossible to write all the data to disk. And then companies are going to need to compromise again. So we feel the market is going to a place where you'd like the analytics taken out of the data, and only the information relevant for the analytics being stored, because the metrics and the logs and the traces are a means to an end. They're not the purpose for which we are actually generating and storing them.

Right, and that's what the clients are all about too, right? Get me the meat, get me the gold, the data that I actually need, and help me separate the wheat from the chaff here. What about AWS? How do they come into play? What about your relationship with them, how has that developed, and where does it sit currently?

Yes, so we actually moved to AWS about two years ago, and moved our entire production and built it on the AWS infrastructure. Our infrastructure is entirely on Kubernetes, we're using Terraform, and we have our own CI/CD tool that we actually released as open source. We scaled on AWS massively, and started seeing the opportunities, with most of our customers being on AWS.
So we partnered with the AWS partnership teams. We went through the competencies, the Well-Architected review, the accelerate program, and now the relationship is at a level where our sales teams are working closely together with the AWS account managers to spot opportunities where AWS customers need an additional layer of analytics, or better cloud security, or cost reduction, and we're working together to find them a solution.

Now, to make it easier and more seamless for AWS customers to use us, we are onboarded to the AWS Marketplace. So we're under the unified agreement of AWS, and we can be paid through the AWS bill. So now Coralogix can be seen as an AWS service that you're using. You don't have to use another vendor, and you can get additional insights, lower costs, and the 24/7 support that we provide. So that's how we partner with AWS. And of course, a lot of joint marketing and content activity. So we're running webinars together with the AWS teams, in general, not about us: how can we give back to the community, how do you scale? For instance, we ran a webinar on how you scale Kafka, which is certainly not our domain, but definitely an issue that we had to handle and had to scale, and it's a pain point for many AWS customers. So we're trying to give back. We're getting a lot from AWS, and we are partnering with them to solve problems together.

So what's it done for you then, Coralogix? You said it's been a two-year relationship, so it's matured, obviously, and it sounds like you've worked out a very nice arrangement: you're leveraging each other's strengths in a very smart and tactical way. So what does it mean to you, Coralogix? And ultimately, what do you think it means to your end users, your client base, when you bring this kind of combined power to their needs?

Yeah, so for us, working together with AWS means that they help us where a startup lacks the most strength. So startups, they can be extremely fast.
They can develop cutting-edge technologies. They can bring new approaches and products to the market. But when you start working with larger organizations, the hardest part of a POC, because the engineering team sees the value immediately, is the procurement, the legal parts, getting there, opening the door, and showing them the value proposition that you have. Working together with AWS allows us, first of all, to meet these customers and understand their needs, and then to be able to route through the AWS Marketplace. And of course, to make it easier for them, we created some 20 different plugins to AWS services, so they can seamlessly connect all their data. Because, you remember, one of the things that we wanted to get to is people not having to cherry-pick logs, not having to cherry-pick metrics. So now they can connect their entire environment and get full cloud observability and security within minutes, and do it in an economical way.

Well, you're talking about all these capabilities you're providing the client base, and obviously this field we're talking about, data and what you're doing with it, is growing so rapidly. What does it mean to you inside your office there, in terms of, do you have enough space for people? I assume your growth trajectory is pretty impressive right now.

Yes, it's something that we are trying to learn now. This is the third office in three years, and we're now outgrowing this one and going to the next one. We grew from about 10 people two years ago, when we moved to AWS, to over 100 people now, and we're continuing to hire in the East, Center, and West US, and in Israel, and in London, and in India, and the company is going to double itself within the next few months. So it's definitely a challenge, now with the COVID era also. But thank God, here in Israel we're kind of past that, and it seems like the US is going to be past that in the next few months. So we're going to get back and start hiring and growing the teams.
Well, it sounds impressive, and congratulations on that particular aspect of your business. I know it's always fun to bring on new people; it's always a very positive sign. So congratulations on that front, and thank you for the time today. And most importantly, again, we do wish you great health and wellness and safety, given all that's going on right now, and our hopes and prayers are that it ends as quickly as possible and you can return back to business as usual there.

Thank you very much, John. I appreciate the time. Really enjoyed it.

Thank you, sir. You bet, a pleasure. Once again, we're talking about Coralogix here on theCUBE Conversations, part of the AWS Startup Showcase, with Ariel Assaraf, the CEO and co-founder. I'm John Walls. Thanks for joining us here on theCUBE.