Okay, welcome back, everyone. We're here at RSA Conference 2023, live coverage, theCUBE, day four, wrapping things up. Man, wall-to-wall content, 45,000 people. It's not letting up. Even on the last day, you'd think it would lag, that people would be leaving for the airport, but there are still a lot of people here. I'm John Furrier, your host. Dave Vellante has left the building; he's on his way home to Boston. As we wrap up RSA, we're going to take a look back at the event, but also get some real solid commentary from a CUBE alumni: Bruno Kurtic, chief strategy officer at Sumo Logic. Bruno, thanks for coming on. Appreciate it.

I'm happy to be here.

Appreciate you coming on. I know you've been super busy, because it's a hot show: there are lunches, there are dinners, five dinners, five cocktail parties, and the next thing you know it's 11:30 and you're hitting the pillow, or maybe later. What are you seeing here at RSA? Large numbers; events are back, a steady state like 2018. What are you seeing?

So, we're seeing a coming back together. The energy is back in the show. The last three years were hard, and being back here and feeling the energy of people packing together, talking, collaborating, and discussing: first of all, that's great. Second, we're having lots of customer conversations. People are very much still keeping security top of mind. We're seeing a lot of intrigue and thinking about how we start applying some of these new technologies, like AI and ML, even further now that we've had the resurgence of AI in the last few months. That's at the top of the hype cycle for people, but it is real. A lot of progress has been made, and these are legitimate questions. I mean, think about how this applies to all of us in the industry over the next two to five years.

I mean, Dave Vellante and I were joking.
It was just invented over the holidays. It picks up in January. But I think, again, ChatGPT and OpenAI opened the world up to see what the insiders have known for a long time: machine learning has been booming, and AI is right around the corner. I think the chatbot market was terrible up until this point, but this opens it up, like, wow, that's magic, and it gives people a taste of how horizontally scalable it is from a use case standpoint. It impacts everybody. I've had people who aren't in tech saying, oh, I can see how magical this is for my life. So I think it captures the attention and raises the awareness that, okay, NextGen is here.

And that, to me, is the big thing. I wrote a story about this, and it didn't get a lot of play. When I met with Adam Selipsky at Amazon, I said, they're NextGen cloud. He's like, no, they're ISVs. We're the cloud. I'm like, okay, well, not really. You're the cloud, but when I build on top of the cloud, I can have an ecosystem without building the cloud. You build the cloud. So what is an ISV? What's an ecosystem platform? So we're now in this whole other NextGen where, with AI, it's like, wait, if that's going to happen, I can see things being thrown away or automated away with machine learning, doing twice or 10x the performance in almost all categories.

Yeah. There's a lot of toil that these tools can take away. And it's interesting. I think the chat incarnation of this technology was a brilliant marketing ploy to bring the masses to understand the possibilities. These language models have been around, being trained; I mean, ChatGPT finished training on data up to 2020, right? So this has been going on for a while, but the general person could not grok what it actually means. And I think it was a really interesting way to showcase the power, and now we can move forward and see how we can apply it properly.

Well, I personally love it.
Dave and I were also loving this because it brings back a renaissance of how we first met you guys at Sumo Logic, going back to the big data days. It sounds like prehistoric times, when there were animals roaming the earth; 2010, it was called Hadoop. The promise was there, and then it just was hard. And then, as we saw, Databricks came along, Snowflake, the Amazon cloud grew, and Hadoop kind of fell away; it was hard to use. But we were kind of on the right track with being data first. So now AI is data-driven, which is a very key part. How do you see cloud scale, distributed computing, and now AI and machine learning changing the architecture of what customers might put together, knowing that there are going to be significant performance gains in reducing toil, or in removing mechanisms that accomplish a job but aren't needed, which could impact labor and could impact machines? What's your take on the impact on architecture?

That's a great question. In fact, these three levers, the cloud architecture, the size of data, and machine learning and AI, have been part of what we founded the company around. The challenge that I see is this: you can build an architecture that scales with data, to cope with the exponential growth of data, so we can capture more and more and more data, but the human on the other end of that data, on the other end of the keyboard, is always going to have a harder and harder time capitalizing on what that data is telling him or her. You need to mine it for insights, and this is where, in order to bend that curve of exponential growth, AI and ML are critical. And so now it's all coming together. We've got cloud that allows us to scale and deploy new microservices and scalable architectures.
We've got growth in data, and we've got AI and ML that can be combined into something that can potentially keep pace with the growth in data productively: finding the right insights, finding the right challenges, issues, breaches, whatever it might be, and helping the human be more effective at the job.

Yeah, and that script has flipped. You guys are successful because of it, because you can't scale the amount of data coming in with humans alone. So humans can have an influence on the machines, as augmentation, to scale. Everyone kind of gets that, or at least they should. But now the next question is, okay, I want to program with the data. What I see with ChatGPT and these kinds of new things is that a prompt into a data set is like a query. It could be like a SQL query, or you could look at it as a procedure call. So prompt engineering is almost a call to some other thing, and then you've got operationalizing it, prompt operations, some people are saying. Now the new thing is prompt tuning, where it's tuning itself based upon the learning. This is like the old self-healing, remember those days? Now it's possible. So if you believe that to be true, that means developers are going to code on data. So the question we were asking at KubeCon, and I'd love to get your reaction, is this: developers actually don't make the decision on where to store the data. What if they could flip that script? What if developers could determine where the data is stored, to maximize their programmability of it? Because if prompting is a call, that's code. Then developers might be coding data. A bit of an out-there concept. What's your reaction to that?

It's interesting, because in part I think this whole convergence toward platforms in the DevOps and SecOps space is going to help that become reality. Because in the old world of best-of-breed tools, there was tool sprawl all over the place.
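As an aside, the "prompt as a procedure call" framing above can be sketched in a few lines of Python. This is purely illustrative; the function names and the model endpoint are invented for the example, and the rendered prompt would be sent to whatever model API you actually use.

```python
# Illustrative sketch: treating a prompt template as a callable "procedure",
# the way a SQL query or procedure call parameterizes a request over data.
# All names here are invented for the example.

def make_prompt_call(template: str):
    """Turn a prompt template into a function: parameters in, prompt out."""
    def call(**params):
        return template.format(**params)
    return call

# A "procedure" over a data set; the rendered prompt would then be sent to
# a model endpoint of your choice, much like executing a query.
summarize_incident = make_prompt_call(
    "Summarize this security incident in one sentence: {log_excerpt}"
)

prompt = summarize_incident(log_excerpt="3 failed SSH logins from 10.0.0.5")
print(prompt)
```

In this framing, "prompt operations" is the lifecycle management of functions like `summarize_incident`, and tuning is iterating on the template based on what comes back.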
Each tool needed its own data and needed access to it, so there was fragmentation and silos. And now, as we're starting to learn, putting bits of disparate data together and mining it for insights and analysis is actually in part forcing this platform convergence. Developers will now have a choice of how to store that data, and through APIs and various access methods they'll be able to reach into all of their data with easy access. So I think that's going to drive a lot more of what you call data programmability, right?

Yeah, and again, it's an open question; I'm just trying to put a frame around it to understand it. The other thing that's come up in the open source community is, obviously, open source is scaling. I won't go into my code-pollution rant about how auto-generated code flowing in will cause more problems than it will solve. But I will bring up one trend in CNCF, which is Wasm, or WebAssembly, something that should have been done years ago: get a binary and have it work on everything. Why rewrite code to do something? That's a productivity concept so obvious it's now happening. But if you go to the AI side of the tool chains, you can have an open data lake, but then there's the proprietary tool chain. Is that a feature or a bug? That comes back to saying, if I want to run Snowflake and I want to run Databricks on Azure, now we're in this mode where, as a developer, that's not really going to work for me. So now we're going down to that next level of iteration: if data becomes programmable, and you see the scale, the success of Databricks, the success of you guys, the success of Snowflake, this data has to be programmable, has to be accessible. I don't know how to think about it; that's why I'm asking the question. How do you see this evolving? Is it just natural evolution? Will there be a standardization model? What's your take on that?

I remember, a few years back, there were a lot more black-box AI and ML implementations.
But if you look at today's open source and cloud-native technologies, TensorFlow and others, they're cracking open those black boxes. And even the black boxes are now getting built on these open, standard ways of building algorithms. So I think the general democratization of algorithms is going to continue. But I do think that there will always be proprietary algorithms; people are going to try to monetize them, companies exist to build, and eventually those things will shed back into open source. So I don't know that it's going to change much, but I do think there are many more open source tools and practices available to developers now.

Yeah. I mean, Dave and I were talking, and we think about the data. Look, we have a lot of language data on our side. theCUBE's interview index covers over 35,000 interviews we've done over 13 years. It's a lot of language, a lot of jargon; it's legit, tied to video and audio. And so we were thinking, okay, we're not going to build an OpenAI, and it's not going to be a query engine, although we do have a prototype up and running. But we're trying to think, okay, our value is our data. If I take an interview snippet, say a soundbite of what you say, and I pump it into ChatGPT, it'll write a blog post for me, from your pure data. So data seems to be the proprietary part, or it could be open, but that's the way to differentiate versus giving everything to the LLM.

And if you think about it, if I look at our own space, we sit on an ocean of data; not a lake, an ocean. And the data that we process is very specialized. So it's technically something you can train on, right?
You can take a model that has been pre-trained on language and then tune it and train it on specific things, so that the model might be able to recognize what incidents mean, how to remediate, what the next steps might be, how to create a playbook, and things like that. So there's an opportunity, especially for people who are sitting on large amounts of data, and not on silos of data, to leverage these technologies to tune and train the algorithms on very specific use cases. And in security, at RSA over the next couple of years, I expect to see a lot of that work starting to show up, with companies starting to talk about it. We're certainly thinking about that, because it's an opportunity for us to do better for the human.

That proves my thesis: the more data you have, the more observation space you have, the more data can be put into training to actually be used for something else. It's input to a function.

That's right. And as you said, data is proprietary, or data is the key. We happen to sit on a large amount of it, but most people don't. So we have an opportunity to actually leverage the data and make it into a product.

You know, that's one of the things that Dave and I thought was really wrong with the Hadoop world: the people selling the software didn't have the problem. They weren't data-full; they were just selling software. It was the big hyperscalers that had all the data. You have tons of data with your customers, and that's cool too. So that's new value for you. Now, the benefit to you guys is that there are new opportunities. So what are those conversations that you're having here at RSA with customers? Take me through them, from the early, elementary conversations to the more progressed ones. What are customers thinking about?
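To make the pre-train-then-tune idea above concrete, here is a deliberately tiny toy sketch. It is not a real language model: a word-frequency counter stands in for the model, and all the text and weights are made up for illustration. The point is only the shape of the technique: start from something trained on general data, then overlay weighted domain-specific data so specialized terms are recognized.

```python
# Toy sketch of "pre-train on general language, then tune on specialized
# data". A word-frequency Counter stands in for a real model; all text and
# weights are invented for illustration.
from collections import Counter

def train(texts):
    counts = Counter()
    for text in texts:
        counts.update(text.lower().split())
    return counts

# "Pre-training" on general text.
general_model = train([
    "the quick brown fox jumps over the lazy dog",
    "a perfectly ordinary sentence about everyday things",
])

# "Tuning" on specialized incident data, weighted more heavily so the model
# starts to rank domain terms (incidents, remediation) above general ones.
domain_texts = [
    "failed login brute force attempt detected",
    "brute force traffic from external ip blocked",
]
tuned_model = general_model.copy()
for word, count in train(domain_texts).items():
    tuned_model[word] += 3 * count

# A domain term the general model never saw now scores highly.
print(tuned_model["brute"])  # 6: appears twice in domain text, weighted by 3
```

Real fine-tuning updates model weights rather than counts, but the asymmetry is the same: a broad base plus a small, heavily weighted specialized corpus.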
From tire-kicking and trying to learn, to more advanced narratives.

That's a great question. If you looked at our customer set, you would find a lot of them at the beginning of the learning curve, and let's call it the curve of going from multi-tier apps to microservices apps, moving to the cloud, digitizing their business models, and all of that. That spectrum of customers is really here to learn what the right operating model in the cloud is. If I move to the cloud, how am I going to secure my environment? Who's going to control it? Developers are already out there; how do I, quote-unquote, control them? How do I integrate all of that stuff? A lot of those are good questions, and some are also the wrong questions, as we've learned over the last 15 years, because when you go to the cloud, it's a very different model, right?

And then we've got a lot of customers who are way down that learning curve, in the most advanced stages of really converging their observability and security processes. When a security incident occurs, the security team collaborates 100% with the development teams, because the developers know the code and the architecture, and you cannot solve incidents without that. And not only that, they're also asking questions about how to reduce cost, because there's massive data overlap between security and observability, particularly in the cloud. So we're essentially having that full conversation, from "what do you guys do, and what's multi-tenancy in the cloud?" all the way to "how do I hyper-optimize so my teams shave 10 seconds out of an incident that's costing me tens of thousands of dollars per second?" And so it's really interesting to be context-switching between those.

You're going to see massive innovations from that.
You're going to see customers move faster on identifying known things, but also using the telemetry data to see other things. That's fascinating.

We're already seeing it. In the APM world of observability, while you're looking at transactions, wouldn't it be useful to start monitoring the behavior of those transactions from a security perspective?

Yeah, yeah. I mean, we've always seen this: the addressability of data needs to be there. So how do you balance siloed protection with making the data available? Because the decisions being made are based on what data is available, and a system can't reason about what it can't see. So how do you think about that with customers? It's a hard thing to solve, because now you're making the data accessible. Is there a new security protocol that comes around this? I'm having a hard time wrapping my mind around what that looks like. How do you get the best of the data without, well, that's why I'm on this whole developers-coding-data idea. How does that work?

It's tricky. I don't have the full answer for you here, but we deal with that day to day. We have very sensitive data, because we're a platform now, right? And we've got use cases all the way from development use cases to compliance use cases. If you look at this data across the spectrum, you will see that certain countries, certain regional and vertical regulatory rules affect who can touch what and who can see what. And since the data is now in a single pool, how do you govern access to it? How do you make sure that you can productively give data to the people who can make benefit out of it? Right now, all of that is about governing access, monitoring access, and auditing everything.

You know, Bruno, I will say, to end the week out here, it's great to have you on: a provocative conversation, but also a relevant one. If you look at the shows that are converging, we just came back from Mobile World Congress.
That's a DevOps show now. We're here at RSA, and this is basically a DevOps show. What we were just talking about is not scary; this is developers and ops. You can call it network ops or whatever, but this is DevOps. This is cloud. That is how it is in the cloud. This is the top story here at RSA. Do you agree?

I totally agree. That has been what we've seen and what we've hoped would ultimately happen. I think you're right. It's going to be very interesting to see who can adopt and align.

I mean, this is not the oldest show. Supercomputing actually started in 1988, and we're now covering it again because the chip side of the business is booming as well. So you've got silicon, supercomputing, high performance; it used to be HPC, and now it's more silicon advances, GPUs, more AI, and all great stuff there. This show here, the telecom show, and the cloud shows are kind of coming together. This is one of the biggest convergences I've seen in my 30 years in this business, where this much force is intersecting. And the data is the wild card, because data has never been a real lever in any kind of major inflection point before.

Totally agree.

Final question: how's business with you guys? What's going on? Put a plug in for the company, Sumo Logic.

Business is great. We keep adding new products, and we had some great announcements. We've added SOAR capabilities, fully integrated into our products. We actually integrated ChatGPT into our security orchestration, automation, and response pipeline, so you can now leverage that technology as you're assessing incidents and threats. We added UEBA to our technology stack. We're really excited; our customers love it. They've been asking for it for some time now, and we're seeing great traction and great conversations.

Final, final question.
As the founding chief strategy officer, what strategy changes, tweaks, or adjustments are you making based upon the new revelation to the rest of the world, to the geeks and the nerds out there, that ML and AI are here to stay and are going to be an enabler of new use cases, or unknown big use cases? What's the strategy of Sumo Logic?

Great question. We actually started working on this strategy more than a couple of years ago, and in fact very recently we put out a model for our customers to be able to, inline inside Sumo, inline with Sumo execution, actually execute their own ML and AI algorithms if they choose to do so, using data science that can be simple or complicated, to detect fraud and other things. So we're enabling our customers to put their own code in line with our engine.

Enabling, democratization, open, value-oriented. Great stuff.

We call it open analytics.

The founding chief strategy officer, here on theCUBE's last segment of four days of wall-to-wall coverage. Dave Vellante has left the building. We recorded our podcast here, episode nine, on location. I'm John Furrier, host of theCUBE. Thanks for watching theCUBE's presentation of RSA 2023 live coverage. Thanks for watching.
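One last technical aside: the "customer code in line with our engine" capability Bruno describes could be sketched, in spirit, as a plugin interface where the platform streams records through a customer-supplied scoring function. Everything here is hypothetical; none of these names reflect Sumo Logic's actual API.

```python
# Hypothetical sketch of running customer-supplied ML/scoring code inline
# with a log-processing engine. None of these names are a real API.

def run_inline(records, user_score):
    """Stream records through the engine, applying the customer's function."""
    for record in records:
        record["score"] = user_score(record)
        yield record

# A customer-provided detector: the data science can be as simple or as
# complicated as the use case (e.g. fraud detection) requires.
def fraud_score(record):
    return 1.0 if record["amount"] > 10_000 else 0.0

scored = list(run_inline(
    [{"amount": 50}, {"amount": 20_000}],
    fraud_score,
))
print([r["score"] for r in scored])  # [0.0, 1.0]
```

The design point is the inversion: instead of exporting data to the customer's code, the customer's code executes inside the platform's pipeline, next to the data.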