Hello and welcome to today's session of the AWS Startup Showcase, The Next Big Thing in AI, Security, and Life Sciences, featuring Coralogix for the AI track. I'm your host, John Furrier with theCUBE. We're joined by Ariel Assaraf, CEO of Coralogix. Ariel, great to see you, videoing in remotely from Tel Aviv. Thanks for coming on theCUBE.

Thank you very much, John, great to be here.

So you guys are a hot startup, the next big thing, and one of the things you do, which we've been covering for many years, is log analytics from a data perspective. You decouple the analytics from the storage. This is a unique thing. Tell us about it, what's the story?

Yeah, so what we've seen in the market is that, probably because of the great job that a lot of the earlier-generation products have done, more and more companies see the value in log data. What used to be a couple of rows that you add whenever you have something very important to say became a standard way to document all the communication between different components, infrastructure, network monitoring, and the application layer, of course. And what happens is that data grows extremely fast. All data grows fast, but log data grows even faster. What we always say is that, for sure, data grows faster than revenue. So as fast as a company grows, its data is going to outpace that. And so we found ourselves thinking, how can we help companies still get the full coverage they want, without cherry-picking data or deciding exactly what they want to monitor and what they're taking a risk with, while still giving them the real-time analysis they need to get the full insight suite for the entire data set, wherever it comes from? And that's why we decided to decouple the analytics layer from storage.
So instead of ingesting the data, then indexing and storing it, and then analyzing the stored data, we analyze everything and then we only store what matters. So we go from the insights backwards. That allowed us to reduce the amount of data, reduce the digital exhaust it creates, and also provide better insight. So the idea is that as this world of data scales, the need for real-time streaming analytics is going to increase.

So what's interesting is we've seen this decoupling of storage and compute be a great success formula at cloud scale, for instance. That's a known best practice. You're taking it a little bit differently. I love how you're coming backwards from it. You're working backwards from the insights, almost doing some intelligence on the front end of the data, which probably saves a lot of storage costs. But I want to get specifically back to this real-time piece. How do you do that? And how did you come up with this? What's the vision? How did you guys come up with the idea? What was the magic light bulb that went off for Coralogix?

Yeah, so the Coralogix story is very interesting. Actually, there was no light bulb. It was a road of pain for years and years. We started by just doing the same thing, maybe faster, with a couple more features, and it didn't work out too well. The first few years of the company were not very successful, and we've grown tremendously in the past three years, almost 100X since we launched this, and it came from a pain. Once we started scaling, we saw that the side effects of accessing the storage for analytics, the latency it creates, the dependency on schema, the price it poses on our customers, became unbearable. And then we started thinking, okay, how do we get the same level of insights? Because there's this perception in the world of storage, and now it has started to happen in analytics as well, that talks about tiers. You want a great experience, you pay more; you accept a less-than-great experience, you pay less, it's a lower tier.
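The "analyze everything, then store only what matters" flow described above could be sketched roughly as follows. This is a hypothetical illustration, not Coralogix's actual code; the field names and the significance test are invented for the example.

```python
# Hypothetical sketch of "insights first, storage second":
# every record is analyzed in-stream, and only records flagged as
# significant ever reach storage, so storage cost tracks insight
# volume rather than raw ingest volume.

def analyze(record: dict) -> dict:
    """Toy in-stream analysis: flag errors and slow requests."""
    significant = (record.get("level") == "ERROR"
                   or record.get("latency_ms", 0) > 1000)
    return {**record, "significant": significant}

def pipeline(stream):
    insights, stored = [], []
    for record in stream:
        enriched = analyze(record)      # real-time analysis on everything
        if enriched["significant"]:
            insights.append(enriched)   # insight extracted in-stream
            stored.append(record)       # only this subset is indexed/stored
    return insights, stored

logs = [
    {"level": "INFO", "latency_ms": 20},
    {"level": "ERROR", "latency_ms": 15},
    {"level": "INFO", "latency_ms": 2500},
]
insights, stored = pipeline(logs)
# all 3 records were analyzed, but only 2 were stored
```

The point of the sketch is the ordering: analysis happens before (and independently of) indexing, which is the inverse of the ingest-index-query model described in the interview.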
And we decided that we were looking for a way to give the same level of real-time analytics and the same level of insights, only without the issue of dependencies, decoupling all the storage schema issues and latency. And we built our real-time pipeline; we call it Streama. Streama is the Coralogix real-time analysis platform that analyzes everything in real time, including the stateful things. Stateless analytics in real time is something that's been done in the past, and it always worked well. The issue is, how do you give a stateful insight on data that you analyze in real time without storing it? And I'll explain: how can you tell that a certain issue happened that did not happen in the past three months if you did not store the past three months? Or how can you tell that a behavior is abnormal if you did not store what's normal, if you did not store the state? So we created what we call the state store, which holds the state of the system, the state of the data, with snapshots of that state for the entire history. And then, instead of our state being the storage, when you ask me, how does this compare to last week? Instead of going to the storage and querying last week, I go to the state store and, like a record book, I just scroll fast, find that one piece of state, and say, okay, this is how it looked last week; compared to this week, it changed in A, B, C. And once we started doing that, we onboarded more and more services to that model. And then our customers came in and said, hey, you're doing everything in real time, we don't need more than that. There's only a very small portion of the data we actually need to store and frequently search; how about you guys fit into our use cases and not just sell on quota? And we decided to basically allow our customers to choose the use case that they have and route the data accordingly. Each log record stops at the relevant stops in our data pipeline based on its use case.
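The state store described above, comparing this week to a compact stored snapshot of last week without re-querying months of raw logs, could be sketched like this. It is a minimal illustration under assumed names (the keys, window IDs, and summary statistic are invented), not Coralogix's actual implementation.

```python
# Hypothetical sketch of a "state store" for stateful streaming
# analytics: keep compact per-key snapshots (e.g., a weekly
# error-rate summary) and compare incoming data against the stored
# state instead of re-reading raw history from storage.

from collections import defaultdict

class StateStore:
    def __init__(self):
        # key -> {window_id -> summary statistic for that window}
        self._snapshots = defaultdict(dict)

    def snapshot(self, key: str, window_id: str, value: float) -> None:
        """Record the summarized state of `key` for one time window."""
        self._snapshots[key][window_id] = value

    def compare(self, key: str, window_id: str, current: float):
        """Change vs. a past window, without touching raw storage."""
        baseline = self._snapshots[key].get(window_id)
        if baseline is None:
            return None
        return current - baseline

store = StateStore()
store.snapshot("checkout.error_rate", "2021-W14", 0.02)  # last week's state
delta = store.compare("checkout.error_rate", "2021-W14", 0.09)  # this week
# a large positive delta signals abnormal behavior relative to stored state
```

The snapshot is tiny compared to the raw logs it summarizes, which is what makes the lookup a fast in-memory scroll rather than a round trip to storage.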
So just like in the supermarket: you fill a bag, you go out, they weigh it, and they say, it's two kilograms, you pay this amount, because different products have different costs and different meaning to you. In exactly the same way, we analyze the data in real time, so we know the importance of the data, and we allow you to route it based on your use case and pay a different amount per use case.

So this is really interesting. So essentially you guys capture insights and store those, you call them states, and then you don't have to go back through the data. It's like you're eliminating the old problem of going back to the index and recovering the data to get the insights. In a way, it's a round-trip query, if you will. You guys are saving all that data-mining cost and time.

We call it zero side effects. That round trip you described is exactly it. No side effects from an analysis that is done in real time. I don't get the latency from the storage, a bit of latency from the database that holds the model, a bit of latency from the cache. Everything stays in memory, everything stays in the stream.

And so basically, it's like the definition of insanity: doing the same thing over and over again and expecting a different result. That's kind of what the old model of insights is: go query the database and get something back. You're actually doing the real-time filtering on the front end, capturing the insights, if you will, storing those, and replicating that per use case. Is that right?

Exactly, but then there's still the issue of customers saying, yeah, but I need that data. Some of the data I need to really frequently search, I don't know, the unknown unknowns. Or some of the data I need for compliance, and I need an immutable record that stays in my compliance bucket forever.
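The idea of each log record stopping at a different stage of the pipeline depending on its use case could be sketched like this. The route names and stages are hypothetical, invented for illustration; they are not Coralogix's actual configuration.

```python
# Hypothetical sketch of per-use-case routing: every record is
# analyzed, but how far it travels down the pipeline (and what it
# costs) depends on the use case attached to it.

ROUTES = {
    "frequent_search": ["analyze", "hot_storage"],     # unknown unknowns
    "monitoring":      ["analyze"],                    # insights only, nothing indexed
    "compliance":      ["analyze", "archive_bucket"],  # immutable long-term record
}

def route(record: dict) -> list:
    """Return the pipeline stops this record should pass through."""
    use_case = record.get("use_case", "monitoring")
    return ROUTES.get(use_case, ROUTES["monitoring"])

stops = route({"msg": "payment failed", "use_case": "compliance"})
# a compliance record is analyzed and archived, but never hot-indexed
```

Like the supermarket analogy above, the weigh-in happens once per record, and the price follows the product rather than a flat quota.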
So we allowed customers, we have this screen we call the TCO Optimizer, to define those use cases, and they can always access the data by querying the remote storage from Coralogix or querying the hot data that is stored with Coralogix. So it's all about use cases, and it's all about how you consume the data. Because it doesn't make sense for me to pay the same amount, or give the same amount of attention, to a record that is completely useless, that's just there for the record or for a compliance audit that may or may not happen in the future, as I do for the most critical exception in my application log that has immediate business impact.

What's really good too is you can actually set some policy up for certain use cases. Okay, store that data. So it's nice to say you don't want to store it, but you might want to store it for certain use cases, so I can see that. So I've got to ask the question: how does this differ from the competition? How do you guys compete? Take us through a use case of a customer. Do you go to the customer and just say, hey, we've got so much scar tissue from this, we learned the hard way, take it from us? How does it go? Take us through an example.

So an interesting example is actually a company that is not your typical early adopter, let's call it that. A very advanced and smart company, but a huge one, one of the largest telecommunications companies in India. And they were actually cherry-picking about a hundred gigs of data per day and sending it to one of the legacy providers, which has a great solution that does give value, but they weren't even thinking about sending their entire data set, because of cost, because of scale, because of the clutter: whenever you search, you have to sift through millions of records, many of which are not that important.
And we helped them actually analyze their data, and worked with them to understand it. These guys had over a terabyte of data that held incredible insights. It was like a goldmine of insights, but you just needed to prioritize it by use case. And they went from a hundred gigs with the other legacy solution to a terabyte at almost the same cost, with more advanced insights, within one week, which at that scale of organization is out of the ordinary. It took them four months to implement the other product. But when you go from the insights backwards, you understand your data before you have to store it. You understand your data before you have to analyze it, or before you have to manually sift through it. So if you ask about the difference, it's all about the architecture. We analyze and only then index, instead of indexing and then analyzing. It sounds simple, but of course, when you look at the stateful analytics, it's a lot more complex.

Take me through your growth story, because first of all, I want to get back to the secret sauce in a second. I want to get back to how you guys got here. You kind of had this problem, you kind of broke through, you hit the magic formula. Talk about the growth. Where's the growth coming from, and what's the real impact? What's the situation relative to the company's growth?

Yeah, so we had a rough first three years, which I kind of mentioned, and I was not the CEO at the beginning. I'm one of the co-founders, a more technical guy. I was the product manager, and I became CEO after the company was kind of on the verge of closing, at the end of 2017. The CTO left, the CEO left. The VP of R&D became the CTO, I became the CEO. We were five people with $200,000 in the bank, and you know that's not a long runway. And we kind of changed attitude. So first we launched this product, and then we understood that we needed to go bottoms-up.
You can't go to enterprises and try to sell something that is out of the ordinary, or that changes how they're used to working, when you're, you know, five people with $200,000 in the bank. So we started going bottoms-up, with the earlier adopters, and to this day it's still the more advanced companies, the more advanced teams. That's what made Coralogix the preferred solution for advanced DevOps and platform teams. So they started adopting Coralogix, and then it grew into the larger organizations, and they were actually pushing Coralogix; they are champions within their organizations. And ever since, well, until the beginning of 2018 we had raised about $2 million and had sales that were marginal. Today we have over 1,500 paying accounts, and we've raised almost $100 million more.

Wow, what a great pivot that was. Great example of catching the right wave, the cloud wave. You said in terms of customers you had the hardcore DevOps crowd initially, and now you've expanded out to a lot more traditional enterprises. Can you take me through the customer profile?

Yeah, so I'd say the core is still cloud-native and internet companies. These are the typical ones. We have very tight integration with AWS, all the services, all the integrations required. We know how to read from and write back to the different services and analysis platforms in AWS. Also Azure and GCP, but mostly AWS. And then we do have quite a few big enterprise accounts. Actually, five of the largest 50 companies in the world use Coralogix today. And it grew from those DevOps and platform evangelists up to the level of IT execs and even CISOs. So today we have our security product that already sells to some of the biggest companies in the world. It's a different profile.
And the idea for us is that once we solve that issue of too much data, too expensive, not proactive enough, too coupled with the storage, you can actually expand from observability, logging and metrics today, into tracing, and then into security, and maybe even into other fields where cost and productivity are an issue for many companies.

So let me ask you this question then, Ariel, if you don't mind. If a customer has a need for Coralogix, is it because they're data-full, or they've just got data sprawled all over the place, or is it that storage costs are going up on S3? What's some of the signaling you would see that would tell you, okay, there's an opportunity to come in and either clean house or fix the mess? Take us through what you see. What do you see as the trend?

Yeah, so a typical customer that comes to Coralogix will be someone using one of the legacy solutions and growing very fast. That's the easiest way for us to know. Because...

What's growing fast, the storage? The storage is growing fast?

The company is growing fast. And remember, data grows faster than revenue, and we know that. So if I see a company that grew from 50 people to 500 in three years, especially if it's a cloud-native or internet company, I know that their data grew not 10X, but 100X. So I know that a company that might have started with a legacy solution at, say, $1,000 a month is happy with it. And for $1,000 a month, if you don't have a lot of data, those legacy solutions will do the trick. But now I know they're going to get asked to pay $50,000, $60,000, $70,000 a month. And this is exactly where we kick in. Because now it doesn't fit the economic model, it doesn't fit the unit economics, and it starts damaging the margins of those companies. Because remember, for those internet and cloud companies, these are not the classic costs you'll see in an enterprise.
They're actually damaging their unit economics and the valuation of the business. It's a bigger deal. So when I see that type of organization, we come in and say, hey, better coverage, more advanced analytics, easier integration within your organization. We support all the common open-source syntaxes and dashboards, you can plug it into your entire environment, and the cost is going to be a quarter of whatever you're paying today. So once they see that, they see the dev-friendliness of the product, the ease of scale, the stability of the product, it makes a lot more sense for them to engage in a POC. Because at the end of the day, if you don't prove value, you can come with a 90% discount and it doesn't do anything. You've got to prove the value to them. So it's a great door opener, but from then on, it's a POC like any other.

Yeah, cloud is all about the POC, or the pilot, as they say. So take me through the product today, and what's next for the product? Take us through the vision of the product and the product strategy.

Yeah, so today the product allows you to send any log data, metric data, or security information and analyze it a million ways. We have one of the most extensive alerting mechanisms in the market, automatic anomaly detection, data clustering, and all the real-time pipeline features that help companies make their data smarter and more readable: parsing, enriching, pulling external sources to enrich the data, and so on and so forth. Where we're stepping in now is actually to make the final step of decoupling the analytics from storage, what we call the data-less data platform, in which no data will sit or reside within the Coralogix cloud. Everything will be analyzed in real time and stored in a storage of our customers' choice, and then we'll allow our customers to remotely query that with incredible performance. So that'll give our customers a way to have the first-ever true SaaS experience for observability. Think about it: no quota plans, no retention limits.
You send whatever you want, you pay only for what you send, you retain it however long you want to retain it, and you get all the real-time insights much, much faster than with any other product that keeps it in hot storage. So that'll be our next step, to really make sure we're not just reselling cloud storage. Because a lot of the time, when you are dependent on storage, and we're a cloud company, like I mentioned, you've got to keep your unit economics. So what do you do? You sell storage to the customer, you add your markup, and then you charge for it, and this is exactly where we don't want to be. We want to sell the intelligence and the insights and the real-time analysis that we know how to do, and let the customers enjoy the wealth of opportunities and choices that their cloud providers offer for storage.

That's a great vision. In a way, the hyperscalers' early days showed that decoupling compute from storage, which I mentioned earlier, was a huge category creation. Here you're doing it for data. Call it hyper data scale; I mean, there's got to be a name for this. What do you see five years from now? Take us through the trajectory of the next five years, because certainly observability's not going away. I mean, it's data management, monitoring, real-time, asynchronous, synchronous, linear, all this stuff's happening. What's the five-year vision?

Now add security into observability, which is something we just started preaching, because no one can say, I have observability into my environment, when people come in and out and steal data; that's no observability. But the thing is, because data grows exponentially, because it grows faster than revenue, what we believe is that in five years there's not going to be a choice. Everyone is going to have to analyze the data in real time, extract the insights, and then decide whether to store it in a long-term archive or not store it at all.
You still want to get the full coverage and insights, but when you think about observability, unlike many other things, the more data you have, many times, the less observability you get. Think of log data: unlike statistics, if my system were recording everything yet only generating 10 records a day, I'd have full, incredible observability. I'd know everything it had done. What happens instead is that you pay more, and you get less observability and more uncertainty. So I think that with time, we'll start seeing more and more real-time streaming analytics, and a lot fewer storage-based and index-based solutions.

You know, Ariel, I've always been saying to Dave Vellante on theCUBE, many times, that insights have to be the norm, not the exception, and that ultimately there'd be a database of insights. I mean, at the end of the day, the insights become more plentiful. You have the ability to actually store those insights and refresh them, challenge them, update them, verify them, either sunset them or add to them. You know what I'm saying? When you start getting more data into your organization, AI and machine learning prove that pattern recognition works. So why not grab those insights and use them as your baseline to know what's important, and not have to start by putting everything in a bucket? So we're going to have new categories, like insight-first software, not data-first, you know.

Go from insights backwards. That'll be my tagline if I need one, but I'm a terrible marketing guy, so don't call me.

Well, I mean, everyone's cloud-first, data-first, data-driven, insight-driven. What you're basically doing is moving into the world of insight-driven analytics, really as a way to bring that forward. So congratulations, great story. I love the pivot, love how you guys entrepreneurially put it all together, had the problem, your own problem, and brought the solution out to the rest of the world.
And certainly DevOps and the cloud-scale wave is just getting bigger and bigger and taking over the enterprise, so great stuff. Real quick, while you're here, give a quick plug for the company. What you guys are up to, stats, vitals, hiring, what's new, give the commercial.

Yeah, so like I mentioned, over 1,500 paying customers, growing incredibly in the past 24 months. Hiring, almost doubling the company in the next few months. Offices in Israel, the East, Central, and West US, the UK, and Mumbai. Looking for talented engineers to join the journey and build the next generation of data-less data platforms.

Ariel Assaraf, CEO of Coralogix. Great to have you on theCUBE, and thank you for participating in the AI track of our Next Big Thing startup showcase. Thanks for coming on.

Thank you very much, John, I really enjoyed it.

Okay, I'm John Furrier with theCUBE. Thank you for watching the AWS Startup Showcase, presented by theCUBE.