Hey, good afternoon everyone. Welcome back to theCUBE's continuing coverage of CrowdStrike Fal.Con 2023, live from Caesars Palace in Las Vegas. Lisa Martin and Dave Vellante here. We're very pleased to welcome back a CUBE alum, Elia Zaitsev, the Chief Technology Officer of CrowdStrike. Elia, great to have you back on theCUBE. Thank you for joining us today. Thanks for having me back. Action-packed couple of days. Woo, yeah. 4,000-plus customers, partners. The biggest in-person event that CrowdStrike has had. The most news. Give us your synopsis, here we are on day two, of what's been announced and what the feedback has been from the community. Yeah, I mean, we just had some absolutely massive releases and announcements the last two days. George kind of teased and previewed it all for everyone yesterday, Charlotte, the Raptor upgrade, XDR for everyone, and then Mike followed up today and brought it to life for everyone, showing the actual product demonstrations, and I'm just super excited. These are things that I've been thinking about, and that we've been working on here at the company, for years. Our customers have been asking for it. Well, some of it they've been asking for for a couple of years, and a few other things I don't think anyone ever expected years ago when we were starting out in the endpoint space. So it's just been incredible. And you've been with the company for how long? A little over 10 years, 10 and a half years. So almost from the very beginning. You said you were employee number 45? Number 45, I think. I was the first sales engineer at the company, joined just before we launched the original Falcon platform. So you've seen so much evolution in that time period. Oh yeah, we were first EDR only, and then we added next-gen antivirus, then we started building additional modules onto the platform, and here we are today.
We just announced our big Raptor upgrade, which frankly I think is the biggest update to the platform since we first added next-gen antivirus years ago. So explain what the objectives are, or were, of that upgrade and what it's going to mean for customers. Yeah, so it really modernizes and supercharges the platform. It opens it up so that customers can now bring whatever data they want into the platform, and we can talk a bit more about that. I've had some really funny conversations with customers who've been asking about that for years. But it also unlocks a lot for our customers. They can now take advantage of Charlotte, our generative AI assistant, and it's going to bring XDR capabilities to all of our customers as well, which is super exciting. I think it's going to be a game changer for the way that analysts really think about security products and how they go through their day-to-day workflows. So was it more sort of a combination of integration, the visual interface, the capabilities of what you can do and ingest? Maybe peel the onion a bit on Raptor. Yeah, it's a bit of all of that. It's a pretty big combination of upgrades. So number one, we've taken LogScale, which is our super-high-speed data platform that we acquired several years ago through a company called Humio, and we've integrated it directly into the platform. So first of all, everything our customers are doing today with that part of the system is getting better. Things are getting faster. They can save new kinds of queries. They can create live dashboards and visualizations. But the other big one is that they can now open the platform and bring external information in, data that doesn't even come from CrowdStrike. I remember years ago, at one of our earliest, I think, technical advisory boards, we had an executive at one of our customers stand up and say, hey, look, 80% of all the data in my SIEM and my data lake is coming from CrowdStrike.
I'm pulling all that information out and then paying somebody else to store it again. Can't I just give you the other 20%? That seems like a much more efficient way of doing it, and now we can finally do that. And that's just step one, getting the data in. And then you see what we're doing with Charlotte, and how you access it with XDR to interactively explore that through what we call the incident workbench. Really, really groundbreaking stuff from that workflow perspective. And Michael Sentonas today talked about, make sure I get this right. Obviously, some of the benefit is being able to better prioritize threats. But he talked about distributed scanning, distributed scans, and then eventually being able to ingest data from third-party tools. How do those relate? Is the distributed scan capability there today? That's actually separate from the Raptor upgrade. This is part of what we announced under exposure management, where we've combined some of our proactive security capabilities like Falcon Discover and Spotlight and Surface, and added a bunch of new capabilities on top of that. So the distributed scanning is something that we just released. We call it active scanning. And what that allows you to do is take our Falcon sensors and actively query the network around them to identify additional devices. In the past, we've done passive scanning, where we basically only listened and saw what was nearby. But now we can go proactively interrogate and build a wider viewpoint of everything that's going on in the network. And that allows us to build these attack paths, to predict how the adversary might potentially get in, what the implications are if they do, and how we can get ahead of that and lock down and sever those attack paths before anything bad can happen. Got it, okay, so back to Raptor. How does Raptor enable Charlotte AI, or the reverse? Well, it's not one single technology. It's this whole bundle of upgrades to the platform itself.
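To make the passive-versus-active distinction concrete, here's a minimal Python sketch, assuming plain TCP connect probes against a list of candidate addresses. This is an illustration of the general idea only, not how the Falcon sensor actually implements its scanning:

```python
import socket

def probe_host(ip, ports=(22, 80, 443, 445), timeout=0.5):
    """Actively probe one host by attempting TCP connections on a few
    common ports; returns the ports that accepted a connection.
    Passive discovery, by contrast, only records traffic the sensor
    happens to observe on the wire."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # closed, filtered, or no host at this address
    return open_ports

def active_scan(candidates):
    """Sweep a list of candidate addresses and keep only responders."""
    results = {}
    for ip in candidates:
        ports = probe_host(ip)
        if ports:
            results[ip] = ports
    return results
```

A sensor running something like this against its local subnet would surface devices that never send traffic the sensor would otherwise see, which is what makes the attack-path mapping described above possible.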
So Raptor is more than just LogScale. Charlotte is one of those pieces. So Charlotte is going to be a capability that will be accessible to customers after they go through that Raptor upgrade. It's going to bring a whole variety of generative AI workflows to the platform. The chat interface is one of the big ones, where they can have a conversation with the Charlotte AI system to rapidly automate a lot of tasks that in the past would be very manual. But it also has implications for doing investigations. So part of the Raptor and Charlotte capabilities that we're releasing is this concept known as AI investigator, where an analyst can start with an initial alert, a detection, an indicator they've picked up, and then they can go ask Charlotte, hey, tell me everything you know that's related. Bring me all the power of everything that CrowdStrike knows about my environment, through our modules, through that third-party data that they're bringing in, and build out this much more expansive view of an incident, not an alert. And this is a pretty big shift, going from alerts to incidents, that I'm very excited about. I think it's going to be a game changer for the users. I've buzzed the show floor a number of times, but I still haven't gotten to the Charlotte AI demo. It's always too crowded when I get there. What is that SOC analyst experience like, interacting with it? I mean, we saw George on stage speaking like Spock to the computer, but is it going to be more like a ChatGPT-like experience, right? Yeah, yeah, yeah. It's going to be more of a chat interface, where you're going to type your questions or prompts into the system, and then it's going to give you the output. Now, it's going to be in conversational format, so it's going to give you some results, and you can then ask follow-up questions. And one of the really cool things about the way we've implemented it, it's not just pure chat.
Because Charlotte is accessing the underlying platform capabilities and bringing those results back, we're rendering it like other parts of the platform. So in the past, you might have five different tabs open for all the different parts of the product that you're looking at. Charlotte can stitch those components of the user interface together and actually put it together for you in the chat interface. So you're going to see tables and other UI components just like you would if you were manually driving the platform, but Charlotte's going to be delivering and serving and populating all that for you. So it's a pretty cool interface. So it sounds like every time you tried to go by and see the demo, it was packed. What's been some of the feedback, the anecdotal feedback? It was announced at Black Hat last month. We got that great demo yesterday. What's been some of the feedback on the street from customers that were either involved in its genesis or not? I think the resounding, unanimous feedback is, when the hell can I get access to it? Which is? We're targeting Q4. I believe December is what we're aiming for. Okay, and then, I think it's not your wheelhouse, but I think you announced pricing today. We did, we talked about it a bit at our investor day. Now, really, I'm the CTO, so I shouldn't be talking about the pricing, that's for Burt, our CFO. But generally speaking, we're looking at doing it in a pretty traditional format that people are used to: based on your endpoint count, it'll be an add-on attached to that. And Charlotte will be able to take advantage of and leverage the data of all the platform modules that you currently own. So it's an endpoint per user per month, oh, sorry, endpoint per month pricing. We do it typically per year, but yes, yes. Yeah, yeah, yeah. Yeah, times 12, right. And then, presumably, you have to package some kind of query bundle, like t-shirt sizes, right?
Yeah, so depending on the size of your enterprise, the number of nodes you have, that'll dictate how many queries you can leverage with that initial access. But of course, they'll be able to get additional capacity if their needs are more than what it starts with. So commit to some level and then go from there. And I think you're going to learn a lot once, you said it's Q4, it's released? Yeah, that's what we're targeting, yeah. You're going to learn a lot over the next several months. That's been one of the big mysteries. We know that, hey, we can start to convert all these traditional workflows into this generative version. What percentage of people are going to switch over from the way they're using the product today to the new Charlotte world? We're going to learn a lot, but luckily we built it in a way that we'll be able to handle the usage as it grows. So how did you deal with the challenge that we all face with generative AI today, when we use ChatGPT or Bard, of, you don't get consistent answers. Great question. And it gives you, with authority, incorrect data. Hallucinations, right, how we deal with hallucinations. So we've been very careful about how we've architected Charlotte. Now, first of all, it's a multi-model system. So it's not this one big black box. There are multiple components that make up Charlotte. So what most people are experiencing when they're working with Charlotte is this input agent, if you will, that's taking your questions, your prompts, in your plain spoken language. And then it's understanding, okay, what is the question you are asking? And really, behind the scenes, we're telling Charlotte, hey, Charlotte, you've got multiple tools or actors that you can leverage. So given that first question, which of these tools or actors is going to answer it? And many of these tools and actors are actually just driving the platform.
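That input-agent-plus-tools design can be sketched roughly as follows. The tool names here are hypothetical stand-ins for real platform capabilities, and trivial keyword matching stands in for the instruction-following model that actually decides which tool to invoke:

```python
# Sketch of a tool-routing front end: the "input agent" decides which
# platform-backed tool answers a prompt, instead of letting one model
# free-generate an answer on its own.

def search_detections(prompt):
    # In a real system this would query the platform's detections API.
    return {"tool": "detections", "results": []}

def lookup_vulnerabilities(prompt):
    return {"tool": "vulnerabilities", "results": []}

def summarize_incident(prompt):
    return {"tool": "incident_summary", "results": []}

# Keyword fragment -> tool. An LLM would do this classification in practice.
TOOLS = {
    "detection": search_detections,
    "vulnerab": lookup_vulnerabilities,
    "incident": summarize_incident,
}

def route(prompt):
    """Pick the tool that should answer the prompt, then run it.

    Because every answer comes from a tool that drives the platform,
    the results are platform data rather than model free-generation.
    """
    lowered = prompt.lower()
    for keyword, tool in TOOLS.items():
        if keyword in lowered:
            return tool(prompt)
    return {"tool": "fallback", "results": []}
```

For example, `route("show me today's detections")` dispatches to the detections tool, while a prompt matching no tool falls through to a fallback rather than a fabricated answer.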
So the key at the end of the day is, Charlotte isn't making up an answer and being very confident about something that's incorrect or wrong. It's understanding what you're asking the platform to do, and then we're accessing the platform to return that data. So you're getting high-fidelity CrowdStrike data. We're using Charlotte to understand what you're asking the platform to do, and then to automate the interaction between all those different elements of the platform. So is the enabler there that you've narrowed the dataset to high-quality CrowdStrike data, so that you're not guessing the next word and just making stuff up from anywhere on the internet? Is that sort of it? That's part of it. And we're also doing things like proof of work, where as Charlotte is performing the request you've asked of it, we're first of all breaking it down step by step. So it's actually telling you, okay, you asked me to do this, so I'm going to do step one, step two, step three, and it's going to show you exactly how it's doing each of those steps. So if it's writing an API query, if it's creating a remediation script, it's showing you what it's doing so you can inspect it and audit it as an end user. By the way, it's a great teaching tool, because now I can see, hey, I don't want to read documentation on how to write this query. Charlotte, just show me what your query is as you go, and of course I can then inspect it every step of the way and make sure that I can validate and confirm that, hey, you are writing the right query. And if not, we have human feedback loops built into it, so you can say, this is not what I was asking about, hit the down button, and then as we train and update the model, it will rapidly improve there. And in fact, we're also doing that internally before any customer gets to touch Charlotte directly.
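The step-by-step "proof of work" plus the thumbs-up/down loop can be sketched as a simple data structure. The step contents below are placeholders; in the real system each artifact would be an actual query or script the assistant generated:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    description: str   # what the assistant says it is about to do
    artifact: str      # the query or script it produced for that step

@dataclass
class AuditedRun:
    """One request, carried out as inspectable steps, plus analyst feedback."""
    request: str
    steps: list = field(default_factory=list)
    feedback: str = ""   # "up" or "down" from the analyst

    def add_step(self, description, artifact):
        self.steps.append(Step(description, artifact))

    def transcript(self):
        """Numbered list the analyst can audit step by step."""
        return [f"{i}. {s.description}: {s.artifact}"
                for i, s in enumerate(self.steps, start=1)]

# Hypothetical run: each artifact is shown to the user before moving on.
run = AuditedRun("find hosts affected by a given vulnerability")
run.add_step("build search query", "<query text the assistant generated>")
run.add_step("draft remediation script", "<script text the assistant generated>")
run.feedback = "down"   # analyst flags that the interpretation was wrong
```

The point of the structure is that every artifact is surfaced for inspection before it is acted on, and the feedback flag is what feeds the training loop described above.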
Our own teams, our Falcon Complete managed service team, our OverWatch team, our threat intel team, they've been basically validating the information, so it's already getting pre-tuned out of the box before a customer is going to touch it in a couple of weeks here. So it'll give you accurate information, even though it might not be the information you're looking for, and then you can re-ask it. So it's not high probability that it's accurate information. It's actually accurate information, based on the constraints of the dataset that it's working on. We know that the data is correct because it's coming from the platform. Could it have answered the wrong question? Yes, but that's where the tuning elements come into play, to make sure that it's doing really well at that core instruction-following task. That's like that front door of Charlotte. And you've built this using, sort of, I'm sure you're experimenting with all the LLMs that we know and love. We're using a combination of systems, because different ones are suited for different tasks. So when you think about the big, what they call frontier models, like the OpenAI models, the Anthropic Claude models, they're really good at that core instruction following, of understanding in English or dozens of other languages, which, side note, is a great benefit of these systems. Now I don't have to localize for every country. They can handle the translation for me. But they're doing that core instruction following, and then we're going to need to fine-tune a lot of specifics that are unique to our platform. So for example, detection explainability. This is a huge one, and one where I think we've got a really unique capability as CrowdStrike to solve and customize LLM models, because guess what we have? We have 10 years' worth of data of humans explaining to our customers, via OverWatch, here's what the product is telling you happened. We're going to explain to you what that means and what you should do.
We can take that data and train that into a generative model. Ironically, if you haven't invested 10 years' worth of humans creating that data for you, if you're all of a sudden just getting into the game and you're all technology and you say, we're going to build a generative system, you can't do anything that ChatGPT can't do, because you've got no unique human-created data to train on. And that's what we have with things like OverWatch, Complete, or our threat intelligence team. So these are all going to be bespoke models that we train for each of these functions of the platform, and then combine with these frontier models, again, for that basic instruction-following capability. That's why I love the Cube AI, because we have 13 years of transcripted conversations like this. That's why it's so accurate. It's not so much accurate today, but it will be. So you're saying the hologram is going to do the interviews, right? Maybe not the next one, but maybe some day. Or maybe there'll be a hologram and I'll be at home for a change. There you go. No more red-eyes. There you go. Share with us in our last couple of minutes here, Elia, your vision for Charlotte AI really transforming the SOC analyst experience, and also elevating the differentiation that CrowdStrike has in the market. We saw the digs at Microsoft, we know the competitive landscape, but what does that vision for you as CTO look like? Yeah, well, you know, we touched on the differentiation piece. No one has been making that investment in human capital for the years that we have, so everyone is just going to be hopelessly behind, I think, when it comes to creating generative systems that explain what the platform is doing. I think that's going to be the clear differentiation there. As far as the SOC experience, well, I think it goes well beyond the SOC.
I think one of the big things about Charlotte is it's going to enable a whole new category of users who could never take advantage of something as powerful and robust as Falcon, which is, like, you know, as our CEO, race car driver George Kurtz, calls it, we're the Ferrari, you know, we're the F1 car of endpoint platforms. I probably couldn't jump into an F1 car and even get it into first gear. Charlotte's going to allow someone who has no experience with these platforms to, right away, say, a CIO, a CISO who's not hands-on, ask some basic questions, like, hey, I just read about this in the news, this new vulnerability, this new threat. Am I vulnerable to it? Am I protected? And they can actually start to use it directly with no training. Then you take the level-one analyst, who maybe is just getting started, or maybe has experience with other endpoint platforms but not CrowdStrike. They don't have to go read documentation anymore. They can just start jumping right into it, asking the questions and getting the answers they need. Then you take the seasoned CrowdStrike analysts, those level twos, those level threes, now you're making them 10 times more efficient, because you can automate what used to be lots of individual steps that they know how to take, but that still take time to go through. So it's going to really turbocharge productivity. And, sorry, I know we're getting short on time here, but again, that internationalization piece, it's going to really expand the population of users. You don't have to be a fluent English speaker. You don't have to wait for us to localize our product to support your SOC team and whatever language they speak. Charlotte can actually do that work for you, and they can just focus on the outcomes, right? Stopping the breach.
Elia, when you think about this browser moment, like the Netscape moment or the iPhone moment, and then you look back to, whatever, 1994 or '95, or 2007, I guess, when the iPhone came out, you kind of laugh and you go, wow, look at that interface, look at that webpage from the '90s. And I would imagine we're going to see something similar in this space, maybe even more compressed. What should we expect? What should our expectations, realistic expectations, be for generative AI and large language models, specifically as it applies to SecOps? You know, I have to say, in my career in technology, I have never seen a space move as quickly as generative systems have been moving. The speed at which they're increasing in power, the tasks that they can address and handle, and at the same time, the way they've been able to shrink down some of those capabilities, because it's not all about the OpenAI approach of making them bigger and bigger and bigger. Look at what, like, Meta is doing with Llama. They're getting more and more capabilities trained down into these smaller and smaller models. So I think really the sky is the limit. I think we're going to be really shocked as these emergent capabilities come out. Just recently, for example, they taught ChatGPT how to play chess. It's not designed to play chess, but all of a sudden it's become, like, an 1800-Elo-level player, just by coincidence, by happenstance of how powerful these systems are getting. So I think we're going to move rapidly into this world where these generative systems will become more predictive, because they'll understand the platforms, they'll have been trained on the platforms and the threat landscape so well. They're just going to automate more and more than we ever thought was possible. And who knows, maybe George's demo, where he's having a conversation and there's a voice coming from the sky just solving all your problems, maybe it's not that far away. Fascinating stuff, Elia.
Thank you for joining us on theCUBE, sharing your perspectives on the evolution of CrowdStrike, what you're delivering to customers, how they're influencing the direction, and really what we can be excited for with Charlotte AI and more. Thank you so much for your time. Thank you, can't wait to do it again with you all. Yeah, likewise. For our guest and for Dave Vellante, I'm Lisa Martin. You're watching theCUBE, live from CrowdStrike Fal.Con 2023. Stick around, our next guest joins us in just a minute.