Welcome back to SuperCloud 3. I'm John Furrier with Dave Vellante, kicking off day one of two days of coverage. Security plus AI is the topic, and we're here with Doug Merritt, the former CEO of Splunk, coming out of retirement. You retired, and now you're the CEO of Aviatrix. CUBE alumni, friend of theCUBE. Doug, thanks for keynoting and kicking off SuperCloud 3. I'm honored to be here, really. Thank you for bringing me back. I knew it wouldn't be long before you'd be back in the game. You took some time off; congratulations on taking that time off. And now you're the CEO of Aviatrix, succeeding Steve Mulaney. We've been covering Aviatrix since pre-COVID, when they had an event. They were the first ones really talking multi-cloud before that became a thing. They saw the software side of it. We know you're taking the helm and taking it to the next phase of growth. What's the attraction? What made you come out of retirement? Yeah, that's a good question. Honestly, one, I had no idea how exhausted I was after the eight-year run at Splunk. It was fascinating. My wife, like two months in, commented, how many days are you going to sleep for nine hours? It's like, yeah, I guess I was running pretty hard there for a while. And I thought, I affiliated with a couple of great VCs, I was doing a bunch of sidecar investing, I stepped onto a couple of boards, doing advising. I thought that could be a great way to stay in the game and give back, but not be on that hot seat. When you're an operator of any sort, any leader, and definitely the CEO, you are on 24 by 7, and it's hard to sleep at night sometimes because there are things going on that you're worried about. And all my VC friends and other people that had chosen this dabbling side, which was more me than a pure VC, said the best part is, I can sleep at night. It's the operator's problem. I hand it to them and they lose sleep, and I just wake up in the morning and figure out what to do next. And that was a nice artifact.
But what I found over a year and a quarter is, when you do that, even within the venture capital community, you're more or less an individual player. You're individual partners. There are people that come together, but in your daily activities you're governing your own time, and there isn't that vision, mission, purpose and constant team. There's not that really deep customer contact. There's not super deep product contact. You get in and do the best you can as a board member, or when you're making an investment, or as an advisor, but it's not the same. In the breakfast analogy, I'm not the chicken just laying the egg; I'm the pig. I'm committed to this breakfast. And what I found nine, 10, 11 months in is, I love my family. I'm one of those guys that actually likes and loves my wife and loves to spend time with her and my kids. But I really missed being on the field and being part of a team sport. And you had a great run there at Splunk. Obviously went public, and all the great success with data. Now security and multi-cloud, Aviatrix, a growing company. So it's somewhat of a pressure cooker, but not too much; you're going to ride that growth. It's still pretty early for the company. Yeah, being a private company is great. That was one of my criteria: it'd be nice to start private and see if we can take the company public. But when I said, okay, I should jump back in, I missed being part of the team, then the criteria for what kind of team you're going to join become really important. Steve and I have known each other for probably 10, 15 years. He spent a lot of time in Los Gatos; that's where I lived before moving to Austin a few years ago. And watching his career journey, especially when it comes to networking, he's been really, really good at picking up trends way before they become successful, including Nicira and obviously that really successful acquisition by VMware.
So knowing the team and understanding the category they're going after is certainly important. I've learned that being really close to the board and having a great relationship with the board is super important as well. And two of the board members at Aviatrix were early board members of Splunk. I'd spent time with them before becoming CEO, and then post-CEO in that role. So I had that outside-in view. Steve had raised his hand and said, okay, I'm the zero-to-a-hundred-million guy, and this is now getting close to a hundred million; maybe it's time for somebody else. They crafted a list, and according to Steve and the board members, I was top of the list, apparently shortly after my jump from Splunk. And so when they approached a few months ago and I dove in, this had a lot of the characteristics of Splunk. It's a little bit early on the trend, multi-cloud networking, obviously a super cloud event. It's incredibly important, I think, for every business out there. The last data I saw said 83% of major organizations have a multi-cloud strategy; they want to be multi-cloud. But the reality for most of us is you started in one cloud and you're porting a lot of last-generation workloads, which really aren't cloud native, to that cloud you started in. Maybe through acquisitions or experimentation you're in a couple of clouds, but you don't really have a mesh across the clouds where you've got workloads seamlessly traversing, not just multi-cloud but now the intelligent edge. Aviatrix saw this back in 2014, '15, '16, and has made those investments to actually be there. I think it was ahead of the game here. I remember in 2021 we were up on the stage at AWS re:Invent, and one of your early investors was sitting down there while Mulaney and I were up on stage. I don't know if you were there. And the investor said, it's happening. And Mulaney said, what's happening? We had coined this term super cloud, and that's what's happening.
But to your point, Doug, it's not like somebody wakes up and says, oh, I've got to buy a super cloud. It's not a product, right? It's an architecture; it's maybe even a philosophy. And so now you've got to get into solutions that actually solve problems, which is, I presume, where you're spending a lot of your time and thought. Yeah, one of the characteristics of Splunk I loved is what I called a blue-collar culture. Back when they created the index and tried to optimize logs, no one even thought the logs were worth that much. And to really get to petabyte scale with that data, it's just a lot of hard, roll-up-your-sleeves, non-glorious plumbing work, because these things have to work. Networking is the same way. It is a very difficult category to do well. You're at ring zero. If the network goes down, all this great stuff we talk about, seamless applications, real-time customer communication, employee empowerment, just goes away. You cannot operate your business. So it's very hard to do. You have to roll up your sleeves, really understand the domain, and be super diligent in doing it effectively. And when I look at this multi-cloud piece, the clouds continue to do a good job of differentiating themselves on what kinds of workloads they're optimized for. They all want to say, we can do anything, but it's hard. You've got to make investments from networking and silicon all the way up to optimize for different workloads. And if this pattern looks like any of the thousands of past patterns in tech, then to effectively address the needs of your organization you're going to wisely choose to develop different apps and different workloads where they'll operate best, which will likely be different clouds. One of the things we keep talking about with SuperCloud, and I'd love to get your thoughts because being at Splunk you saw the data evolution and revolution, is that it continues.
The constant theme in our SuperCloud narrative is data, in all its aspects. The topic for SuperCloud 3 is security plus AI, but the network plays a big role; security data and networking are the two key areas. You see a lot of action around security, whether it's built in as a platform or a tool, or tracking packets. So networking across multiple environments is a huge deal, and that's what you guys do at Aviatrix. What's the role of data in this, too? Because you can bring that data perspective. I'd love to get your thoughts and reaction on data and security and networking. Yeah. I think data, as you guys have talked about and we're all witnessing, is the fuel for what's really going to make effective AI work. You need enough elements to train these different algorithms and to start to get more proactive and intelligent approaches to what you should be doing, whatever domain you're in. For us in networking, the data we care a lot about is what's happening with the network. How do we make the network more resilient? How do we make it more secure? How do we optimize traffic flows? And again, if you look at these multi-cloud environments, if you look at the network services from any cloud provider, they're still relatively immature, and the providers will continue to progress them. Our job is to stay ahead of those within each one of those clouds. But when you go multi-cloud, it gets really difficult. So trying to provide that intelligence and resiliency, that adaptability, that high security across these clouds is a difficult challenge. In addition, there's trying to protect the data sources that live at... So when I look at tech, it's all about layers. It's hard for the networking vendors to jump up to be data-layer providers. We've got a data plane to transmit data across the network, but when you look at Snowflake or Splunk or Databricks, they've got more of a data plane in the other sense: how do I curate data that people are going to take advantage of in whatever use case?
The application plane is very different. Compute, the compute layer, is different. So sticking within your layer is where people tend to really get lots of momentum. And we're enabling that network to ensure that, at least for the network traffic, you've got an understanding of what is beginning to... Who is touching your data? What are they potentially doing with that data? We do have deep packet inspection capabilities, so there are some interesting things we can actually infer from that networking layer, but that stops at a certain layer. You need to partner with the data players, you need to partner with the different application vendors, so you can bring together a whole picture of what's happening with the capabilities and also the security posture across the system. So you've got this cross-cloud complexity. Thinking about AI, what were you doing with AI before, specifically? And how has that changed? Has that changed your thinking, with the AI heard around the world? Yeah, it's the large language model lemming march. If you're a company today and you're not claiming to do something with LLMs, you're just in trouble, because obviously everybody is. But they do have a specific purpose, right? And we're still wrestling through what that purpose is, as a world, and how you contain that purpose and keep humans safe. Like everything else, there are barbells of joy and fear simultaneously. Within Aviatrix, we're lucky enough to have an incredible development team: people that were in the MIT Media Lab back in the 80s and 90s, when AI was first supposed to take over the world and we were dealing with whole different vectors and algos and artifacts back then, through Google engineers, Facebook engineers, Yahoo engineers. So they're really adept with networking, really understand data, and understand the different machine learning and AI capabilities you could bring to networking. And I think, like every other domain, LLMs will have a play in networking.
When you're trying to think about understanding attack surfaces, things that are more language-oriented or can be translated to language, how do you optimize policies across these very complex networks and be more proactive and resilient? For those types of things, we're playing with everything we can think of right now to see, across the different use cases, what types of AI we bring and where they're going to add the most value. But right now, going back to how early we are in this multi-cloud world, just getting secure and resilient transport between clouds on a seamless basis is something most companies really, really wrestle with. If you've got an app that spans two clouds, ensuring that that app performs the way it should and is secure the way it should, and that you understand traffic routing, is a non-trivial task, and you don't need an LLM to do that work. You need an effective data plane and control plane, and a way to both observe what's happening and invoke policy anywhere that traffic flows: across different clouds, across different edge providers, back to your proprietary old data centers, because data and networking are flowing everywhere. So there's some AI in there; it's just not generative AI necessarily. Yeah, I think the world is still wrestling with, or visualizing, how do I use LLMs? And as you guys know, the choice of which LLM models to begin to use is off the charts, and the open source world is moving so quickly. It's unbelievable. It's interesting. On the latency side, physics, obviously networking physics, is everything; latency is key. When we talk to infrastructure folks, they're skeptics when it comes to AI: well, it's BS. But when they see configuration stuff that's mundane, no-brainer work, that's automation then, not so much AI, and they see it playing there. But one area where they do see hope and prospects is observability data.
Mountains of data around telemetry. You've been in that market; it's changing and growing, with more and more data points coming in, whether it's network logs or network traffic patterns or application telemetry. Everyone's hoarding data right now; no one knows what to do with it yet. How do you see that observability piece coming in? Well, the interesting part is every layer of the tech stack has got observability. So there's a whole observability framework within Aviatrix, through our CoPilot offering. That's different from the way a Datadog would talk about observability; we use Datadog's observability capability for our development team and for the applications we roll out. So we all have the opportunity to do a better job of parsing through mountains of data to try and find the patterns in the way our systems are behaving, whether we want them to behave that way or not. And again, I think LLMs could do a really effective job, given a constrained data set and the right training and the right guardrails around it, of both observing those patterns and then beginning to iterate on the appropriate ways to tune and optimize what you're doing. Where I see a lot of efficiency being injected, and cost structures potentially becoming significantly more advantageous to customers, is the human component of that. What data do I grab, what metric do I create, how do these metrics tie together, and what alert do I then generate from that? That is a very expensive human component. You need really thoughtful, talented folks, and they need to curate that entire data pipeline and decide what to do with that data. I think LLMs, like they're doing with basic coding right now and basic SEO material, can really impact the speed, efficiency and quality of that, and that's one of the areas we're looking at.
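[Editor's note: as a purely illustrative sketch of the "what metric do I create, what alert do I then generate" work described above, and not anything from Aviatrix's actual product, the most basic version of this human-curated step is a hand-picked metric with a hand-picked threshold. The field names and threshold here are invented for the example.]

```python
# Toy example: flag latency telemetry samples that deviate sharply from
# the rest of the window, using a simple z-score. In practice a human
# today chooses the metric, the window, and the threshold by hand; the
# discussion above is about LLMs assisting with exactly that curation.
from statistics import mean, stdev

def latency_alerts(samples_ms, z_threshold=2.5):
    """Return (index, value) pairs whose z-score exceeds z_threshold."""
    mu = mean(samples_ms)
    sigma = stdev(samples_ms)
    if sigma == 0:
        return []  # flat signal: nothing to flag
    return [(i, s) for i, s in enumerate(samples_ms)
            if (s - mu) / sigma > z_threshold]

# One obvious spike in otherwise steady ~12 ms latency readings.
telemetry = [12, 11, 13, 12, 14, 11, 12, 95, 13, 12]
print(latency_alerts(telemetry))  # the 95 ms sample at index 7 is flagged
```

Even this toy shows why the human component is expensive: the threshold, the window size, and the choice of latency as the metric are all judgment calls, which is the curation work the conversation suggests LLMs might speed up.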
So not so much necessarily taking action, but allowing humans to have curated data so that they can take the action. Yep, and then eventually probably taking some action too. But right now it's a little dangerous. I'd be nervous right now: you want me to do this? I'd like some checks and balances; I'm not going to bring down an app. Great to have you on here. I've got one question for you before we break. Our editorial team, leading up to SuperCloud 3, has been asking this question of a number of folks, and when I first heard it I thought it was kind of simplistic, but the answers we've been getting make it a really interesting question. So: will AI ultimately be more beneficial to attackers or defenders? What would be your answer? I think if we follow human behavior, the attackers are going to arm themselves more aggressively first. And I think it will propel the defenders to really, really up their game more quickly. And then I pray that it becomes more powerful for defenders over time. But we're in another crazy arms race, I think, one that just got compressed. Oh my gosh, the power is insane, for the good and the bad. Well, it's been great to see. I want to ask you one final question about Aviatrix. You mentioned you want to have a vision and be part of a team, a private company, so it's a little less pressure-packed. What is your vision for Aviatrix as you lead that team and grow it to the next level? Steve took it to the 100 million mark; you're going to take it public. That's the vision. But what's your North Star? What's the Aviatrix thinking? How do you see this playing out? So everything that you guys are evangelizing with SuperCloud, and so many of your other podcasts and broadcasts, goes back to the power of how you connect people, and how you try to do that in a transparent and auditable way, so that we can get all these amazing benefits from technology, and LLMs are no exception.
Without connecting everything together to get that visibility, there's no benefit from generative AI; there's no way to actually roll out generative AI. What I view Aviatrix as being behind is, we're all about connections everywhere that are transparent and resilient and secure. And I think without a cloud-native architected solution that is agnostic to the clouds and the landscape, it's really difficult to drive those connections, and it's even more difficult to drive that transparency and resiliency and security. It's interesting; we were mentioning in the opening a perfect storm. In your career, have you seen anything like this before? Dave and I were speculating that the hype cycle, the adoption curve and the spending are almost on top of each other. You're seeing the convergence of old and new; some have a tailwind, some have a headwind. It's an interesting time. As a career tech leader, how do you explain this moment in time? The patterns are super similar, I think. We saw this with mainframe to client-server to the first internet generation, multiple OSs. The architectural shifts we've seen from the 60s and 70s onward are following that same pattern. But the crazy part, which you guys do such a good job of trying to capture, is the compression of time. It's happening so quickly, and it's happening across all layers of the stack so quickly. And because things like LLMs tie together all these different, usually separate, categories and separate motions, because they're using the same basic GPT framework and the same language thinking, they blend categories that would otherwise be isolated and move more slowly. That acceleration is what has most of us sitting back saying, how quick? What's going to happen next? In fact, Jeff Jonas was on a discussion with Dave on Breaking Analysis. He's a former IBMer doing Senzing, a data startup.
He was joking about the AI hype, saying startups are getting term sheets from VCs, and before they even get their money, their model's obsolete. I wonder if he was really joking. I know, I know. Well, you're seeing open source, the scale. You know what else, too? You're talking about the layers, and if you go back to the 80s and 90s, that's when the mainframe blew apart and the industry competition started to occur along layers, whether it was Intel in semiconductors, Seagate in disk drives, or database with Oracle. That old OSI seven-layer model still exists to a degree. But there was some thinking that cloud would change that, that things would become more consolidated, yet it seems like LLMs and AI are going to increase the granularity of the stack. Do you buy that? 100%. Even cloud solving that, we've been told that in every generation. Clouds make it significantly more complex. The beauty of Amazon and Microsoft and Google, just three of the cloud vendors, is they're so efficient that they've now generated hundreds of unique services. Every service has its own API calls. They've got their own abstraction layers for trying to manage those, and they're all different across every cloud. So what I've seen with every generation is it gets more complex, because we're getting more refined. Why does DB-Engines track 27 different database categories today? When I started there was hierarchical, network and relational, and that was it. And relational was the new thing that was going to solve the world. Now we've got ledger and vector and graph and everything you can think of, because when you get to billions of people you need specific capability and focus, and then you need to tie these things together. So as we expand our technology, we're going to niche-ify for sure. And with LLMs, what's so hard for us to digest mentally is there are so many unaddressed categories that need to be addressed.
I was talking to someone who's a property manager here just today, and what I love about Austin is I've got so many friends in different verticals, not just tech. There's no software package for them. If you're a property manager with, like, 10 properties, or if you're an HOA trying to manage them, there's no solution there. For some reason there's a cutoff line at around 40 homes, or individual one-home management. It's like, well, I'm sure there'll be one in six months, because with an LLM you can now create an HOA or property management package. So I think the diversity we're going to see is going to go through the roof, but we need it to, because we don't have our needs met. And you mentioned the OSI model, and we talk a lot about open source and how that's fueling this perfect storm. The OSI model was the Open Systems Interconnection model that created those seven layers; TCP/IP was a key aspect of that era. Absolutely. And that broke down. Remember when we were breaking into the business: IBM had SNA, its network operating system; DEC had DECnet; and TCP/IP blew it all away. These were proprietary NOSes, network operating systems, one per vendor. We sometimes say the same about the cloud: AWS has its own stack, Azure has its own stack. So is the super cloud OSI model coming, where open has to happen? I think it has to happen. And again, it depends where a corporation is today. With Splunk, we were majority in AWS on purpose. We needed a reference operating system, which the clouds are, to develop our stuff against if we were going to move everything to the cloud and break it apart, and we just couldn't do it simultaneously across three. And I see so many companies there. So within that world, maybe you can just use the native networking cloud services these folks have, but there's still a pretty big gap in what they're providing as far as transparency and remediation.
And so we're trying to add value there as Aviatrix, but it's so different across these different clouds. As a company, though, you can't spend all of your energy on parsing and identifying how to be a super cloud provider, or how to take advantage of the super cloud; you need to spend time on features and functions and what to do with LLMs to serve your customers better. And I don't think that networking layer is where most of them will get most of their value as a large IT shop. At SuperCloud 2 we had Walmart on; yeah, they can afford to do it. We had Uber on a Breaking Analysis; they can afford to do it. But most companies can't. Yeah, I think IT is going to have to move to this secure, AI-enabled cloud model. We call it the super stack. You get supercomputing at the physical layer. The OSI model kind of references it today, but not perfectly: you've got the physical layer, you've got some sort of interoperability layer, middleware, and then you've got the application. So you've got supercomputing, super cloud and super apps. Absolutely. We see that. So how do you prepare? How is a company prepared today? I mean, Main Street IT, that doesn't have the Uber staff. Is it managed services? How do companies compete, knowing that the attackers are coming? Absolutely, AI is coming; you've got a surge of AI, new capabilities, and you've got attackers. How does a normal company compete? It'll be very, very difficult. And I think you wind up turning to trusted vendors to do the lower layers in your delivery, so that you can focus on the upper layers to serve your customers, help your employee base be effective, manage your partners, and the stuff that really matters to drive the P&L. Awesome. Well, Doug, thanks for coming on and keynoting our SuperCloud 3. Great to see you. I'm honored. Back in the game, back in the arena, as they say, with Aviatrix. Put in a quick plug for Aviatrix; we've got 30 seconds.
For all of you out there that are seriously trying to develop mission-critical workloads in any cloud and are beginning to think about how not to be held hostage to one cloud: they're awesome vendors, but you need to be diversified. Look up Aviatrix, where we'll help you on the network optimization side and the different corollaries around that. Awesome. Doug Merritt here at SuperCloud. I'm John Furrier with Dave Vellante. Stay with us for more coverage. We've got a great agenda today. We'll be right back with our next guest.