Hey everyone, welcome back to theCUBE. We are live at VMware Explore 2023. This is theCUBE's 13th year of covering VMware customer events. Lots of evolution along the way. I'm Lisa Martin with my co-host, Dave Nicholson. And we have a seven-timer on theCUBE. He's back. Dave Shacochis, VP of Product Strategy and Managed Services at Lumen. Lucky seven. It is lucky seven. Right on. And you're in Vegas, so lucky seven as well. Exactly. It's going to be just like textbook. A master class. It's good. We're in Vegas. And I've got two days here. It's kind of like Between Two Ferns, only better. Yeah. So let's start off with, for the audience who may not be familiar with Lumen, talk about who you guys are. What are those core pillars of your portfolio? How are you working to help shape the future of technology solutions for customers? Yeah, sure. So Lumen's a technology company that aims to connect people, data and applications quickly, securely and effortlessly. And we bring to bear four key pillars of our portfolio to help customers do that. We hear customers struggling with day-to-day core operations. We hear customers struggling with empowering a very changing workforce. We see customers struggling with figuring out where they're going to make big bets, how they're going to innovate, how they're going to try new things connected to new clouds and new environments. So Lumen brings to bear a massive connectivity platform with one of the most powerful and far-reaching fiber-based networks in the world. Lumen brings to bear a cybersecurity capability to secure all those network workflows. We bring an edge computing capability that's embedded inside that network, providing really tight proximity for workloads and software to run inside the network.
And then we bring a managed and professional services capability that allows us to take any of those networks, any of those solutions, and make them better. I was looking at some of the notes about edge. Lumen has a huge presence: over 60 edge compute nodes that cover 97% of U.S. business demand within five milliseconds of latency. Huge part of the business. Yeah, edge computing was a huge investment of ours. We took a look at the services our customers were talking to us about, especially in the innovation realm, and especially in the asset-intensive industries that are doing a lot of experimentation. And we really decided to invest in edge because, well, there are a lot of questions about why you would deploy something at the edge, right? And it really comes down to the laws of physics, the law of economics, the law of the land and Murphy's law. The laws of physics: better performance, lower latency. The law of economics: not moving data around, and moving software a little closer to the data where it's generated. The law of the land: being compliant with the laws that say you have to run certain software and keep certain data in certain places. And then Murphy's law: the ability to run things out closer to the edge sometimes gives you a resiliency benefit, so you're not having to worry about running workloads in some far-flung location, or relying on a wide-area network connection when you know it's not required. So all of those dynamics we think are really interesting, especially in light of what you're hearing here at VMware Explore this year, because there's so much going on around generative AI and around these IT and OT convergence use cases. Both of those have demands in all four of those laws. They require low latency and high performance. They require you to be able to move software and workloads closer to the data and where it's generated, because copying data around gets very expensive.
They require compliance with running software and keeping data inside certain geographic boundaries. And what edge computing, what generative AI workloads, what IT and OT convergence really require is a very resilient solution. So having edge as an option inside your network, we believe, is one of the main areas where we're collaborating with VMware. So it was you in the back of the room during the keynote who was yelling, yes, yes, preach, doing the fist pump. It was a great keynote. It was a great keynote. Because look, with edge and generative AI, explain how this concept of an overlay network versus an underlay network brings that kind of capability to bear for a customer. Where does Lumen play when people hear NSX, when people hear the next iteration of NSX? Explain the relationship between those two things. Yeah, great question, and it was woven throughout the keynote there. First off, the keynote I thought was great. I love the way they opened up with luminaries from across the industry: representatives from government, from public cloud, from equipment manufacturers, all rallying to VMware's event and all basically saying this is a watershed moment and a watershed year. And it's because of this phase shift, this 100X increase in productivity. One of the elements that really drives that 100X phase shift of productivity is this topic of generative AI. And what Raghu talked a lot about in the keynote this morning is this idea of private AI, powered by VMware. Intrinsic to a lot of the talk track around private AI is the ability to keep things private: private computing, private data, but then also private networking.
And so being able to link together all the different sources of data, all the different parts of the generative AI model that you're generating or training, is key to the idea of having a secure, private generative AI base. And so what Lumen brings to the table, and really one of the main areas we're collaborating on with VMware, is this idea of taking their overlay networking technology, things like NSX. That overlay technology is absolutely the future. A software-defined overlay network is really powerful, but a software-defined underlay network is vitally important. I mean, you look back on the history of VMware and what they've done: they took underlay hardware and they basically built an abstraction layer that then let you run a lot of virtual machines and virtual hardware over the top of it. A lot of that is still missing in the network space. There are a lot of overlay networking solutions, a lot of acronyms and alphabet soup around overlay networking out there in the industry, but a lot of that overlay networking is still designed to run on a best-effort network. And when you're talking about high-performance applications, when you're talking about high degrees of resiliency and mission criticality, you can't just leave your underlay network to best effort. In a lot of cases the cloud found this out, the data center found this out: just putting it on commodity hardware sometimes works, but when you're trying to train an AI model, the same old x86 architecture isn't going to work forever. So pay attention to the underlay. That abstraction is good, that abstraction means you don't have to care about it, but it's also a really interesting boundary, because on the other side of it you're going to have to think about designing for additional levels of resiliency.
You're going to have to think about designing for different kinds of optimizations you might make on your network path. There's a whole design science to it as well. And what we've really just recently introduced are some exciting ways to make that underlay every bit as programmable as the overlay networks that you see today. So, another example of, it doesn't matter, in other words, people shouldn't have to care about it, but someone has to care about it, and you care about it and take care of it so that they can lay down that layer of NSX and get the real value out of it. Yeah, so the innovation I just mentioned there is a product we launched just a couple weeks ago called Network as a Service. It's a framework that we've built up that basically allows API-based and automated control of all those different underlay network environments, and really simplifies and software-defines the underlay network. Let's admit it: the communications industry, the telecom industry, builds a lot of products that only a network engineer can love. You have to be ready to plug in optical gear, or plug in a switch, or plug in a router for the vast number of those product engagements. And what we've done with Network as a Service is we've turned it into more of a software abstraction. We're going to define a port, we're going to let you build any service against it, we're going to let you have it join a private context, or have it join a public context and get great internet connectivity and great internet transit. And you'll be able to set those up between any of the 180,000 buildings we have on net and all the different public cloud platforms that we have on net. And the ability to deploy VMware in any one of those locations is really what makes us a prime partner for VMware.
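To make that "define a port, build any service against it, join a private or public context" workflow concrete, here is a minimal, purely illustrative sketch of what API-driven underlay control could look like. Every class and method name below is hypothetical; this is not Lumen's actual Network as a Service API, just a toy model of the abstraction being described.

```python
# Hypothetical sketch of an API-driven Network-as-a-Service workflow.
# None of these names correspond to a real Lumen API.

from dataclasses import dataclass, field

@dataclass
class Port:
    building_id: str          # one of the on-net buildings
    bandwidth_mbps: int
    services: list = field(default_factory=list)

class NaaSClient:
    """Toy model of a software-defined underlay: define a port, then
    attach services and resize bandwidth without touching hardware."""

    def __init__(self):
        self.ports = {}

    def create_port(self, port_id, building_id, bandwidth_mbps):
        self.ports[port_id] = Port(building_id, bandwidth_mbps)
        return self.ports[port_id]

    def attach_service(self, port_id, service, context):
        # context is "private" (a private routing/forwarding base)
        # or "public" (internet connectivity and transit)
        self.ports[port_id].services.append(
            {"service": service, "context": context}
        )

    def resize(self, port_id, bandwidth_mbps):
        # Dial bandwidth up or down in software, no truck roll
        self.ports[port_id].bandwidth_mbps = bandwidth_mbps

naas = NaaSClient()
naas.create_port("p1", building_id="BLDG-001", bandwidth_mbps=1000)
naas.attach_service("p1", "internet-transit", context="public")
naas.resize("p1", 10000)
```

The point of the sketch is the boundary it draws: the caller never names a router, switch, or optical port, only a logical port and the services bound to it.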
It's one of the reasons why we're a Platinum sponsor here, the reason that we're here and one of the premier partners to VMware. Share some of the tangible results that customers can expect to achieve from Network as a Service, and then maybe even some of those joint Lumen and VMware customers. Yeah, so what we've released here initially is a public IP-based service, basically internet access. So the ability to launch, dial up and dial down internet access, or actually build policies to take dynamic control over how much internet transit you have. What's next on the roadmap, and where those become tangible benefits, is our ability to build private contexts: a set of private ports that all join a private routing and forwarding base throughout the network, where nobody has to bring or introduce any hardware, nobody needs to rack and stack anything, because we've already predefined those network connections. Part of our Network as a Service capability is also actually built into a lot of the public cloud platforms. It's another announcement we made a few short weeks ago around the ExaSwitch platform, where we're taking some of our switch gear that is bonded to our Network as a Service offering and putting it inside the public clouds. So when you think about things like generative AI and future use cases, some of the tangible benefits are that you're actually opening up and enabling innovation vectors that you weren't able to before. What we're seeing in the practical realities of it today is that customers are facing just a total change in the way their offices work. There are a lot of networks out there that were designed for a different model of work, right?
And so there's so much change going on. The flexibility and adaptability of what we bring to the table in terms of Network as a Service, being able to say we can change your bandwidth requirements, we can change your overall network profile, and we can do it in software, allows customers to really start thinking through some of the hard questions they have: what's the future of work? Where are my workers? What are the workloads they need to connect to, and what should the network that connects them look like? Three years after COVID, are you still finding customers that are fighting those challenges of really trying to understand what the future of the workforce looks like for them? Oh, for sure. There are still quite a few. I mean, even companies like us, we've made a full-on, full-throated commitment to hybrid work and work from anywhere. It's sort of our brand, right? If you're running a large network, you should be pretty willing to have that be a flexible environment for your customers and your employees to work in. But yeah, we have customers talking to us all the time. We're actually taking a very proactive stance with our customers, going out to them and saying, you know, there are better ways. You might be running a voice solution of ours that we've been running for you for a number of years. We're being very proactive, going out and talking to them about, here are some of your unified communications options, here are some of your collaboration options. Are there ways for us to update the way that your workers connect and collaborate? So we're still in the early days of this. I actually work with students in the Wharton CTO program who are, you know, CIOs who are actively implementing generative AI for what a lot of people would think would be mundane back-end things.
What they're finding is that the infrastructure requirements, both hardware and software, aren't as well understood as they thought going in. What are you seeing in terms of this idea that, look, your work is distributed, it's hybrid work, but the actual reasoning over data from a generative AI perspective has to happen in some centralized place. What does that look like in terms of the networking layer, that network-as-a-service layer? I don't know if you call it NaaS, to differentiate NaaS from NAS. I don't want to get into acronyms or anything. But what does that look like? Is it more bandwidth intensive than what we've seen in the past? Yeah, I mean, I think what the killer app for generative AI looks like is models that are trained in the public cloud or in a centralized data center, but then deployed out to the edge. So that's the combination I think you're looking for. You're going to want to train off of as large a data set as you can, but sometimes that model might be better trained by accessing remotely the data that you're trying to incorporate into it, by actually building a private network, without having to put an RFP out to go and build that private network. You might only need it for a training period that lasts 90 days of intense computing usage in a public cloud. But are those edges a lot heavier when you talk about generative AI at the edge? I mean, I can think of a mobile device as an edge device, right? When you're talking about generative AI at the edge, do you need more rack space for it? No. I mean, some people might call it mundane, other people would just call it optimized.
Once you build them, if you're looking to run a generative AI workload that's very focused and has to answer a small number of questions, but needs to be able to run a lot of different scenarios in order to do that, it doesn't need to understand Portuguese, right? It doesn't need to understand the entirety of the internet. So against the generative AI models you hear about that need the sum totality of the internet to fuel them and build these massive large language models, there are much smaller, domain-specific models that get trained up, maybe infused with the bits and pieces of the large language model that are necessary, and then transported. They can run on the edge in a much more economical fashion. What are some of your expectations? As VMware evolves, obviously lots going on there with Broadcom. One of the things that Raghu talked about this morning in terms of the financial investment was a big investment in the partner ecosystem. Yeah. What do you think the future looks like for Lumen and VMware and the evolution of that partnership? Sure, so those details are starting to come out, they're starting to emerge, and I think VMware is very proactive with its partner community. I actually sit on VMware's executive partner advisory board, so we get together and we give them input. There are folks there from the big global systems integrators, folks there from the managed service providers, companies like Lumen, and there are companies there in the VAR and disti community. So we talk to VMware all the time about its posture. The Broadcom acquisition has been a pretty consistent topic along the way. It's heartening to hear that they're going to spend that much really focusing on distribution and supporting the solutions providers. We're excited to hear more. We're excited to hear more details about how exactly that's going to play out.
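The economics behind those smaller, domain-specific models running at the edge come down largely to memory footprint. As a rough back-of-envelope sketch, where the parameter counts and precisions are illustrative assumptions, not figures from Lumen or VMware:

```python
# Back-of-envelope sketch: why small, domain-specific models fit at the edge.
# The parameter counts and byte-widths below are illustrative assumptions.

def model_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate memory needed just to hold the model weights."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# A frontier-scale LLM vs. a distilled, domain-specific model
large = model_memory_gb(175, 2.0)   # 175B params at 16-bit precision
small = model_memory_gb(7, 0.5)     # 7B params quantized to 4-bit

print(f"large model weights: ~{large:.0f} GB")   # ~350 GB: needs a GPU cluster
print(f"small model weights: ~{small:.1f} GB")   # ~3.5 GB: fits on one edge node
```

Weights alone, before activations and serving overhead, already separate a centralized training footprint from something a single edge compute node can hold, which is the train-centrally, deploy-to-the-edge pattern described above.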
But I think VMware is a willing collaborator with the members of its partner community, and I think they're absolutely going to look to the value-add distributors, the ones that are really able to bring together unique solutions that tap into new pools of profit for VMware and Broadcom. Let's go up a level to wrap here. Give us a peek into Lumen's future direction. How do you guys aim to stay ahead in this landscape of tech that is constantly changing? Yeah, honestly, it's simple: simplifying and focusing. One of the exercises we've really gone through over the past four to five months is narrowing down our focus. Even what I said at the outset: a clear mission of connecting people, data and applications quickly, securely and effortlessly. A real focus on our core competencies, which are connectivity, cybersecurity, edge computing and managed services. Basically focusing on everything in those pillars and then really consolidating those down to be more software-defined, more platform-oriented, easier for our customers and partners to use. When you're in that more simplified, more direct, more automated posture, it makes it easier to collaborate with companies like VMware, because we have defined interfaces and ways to play our role and add our value, and we let them do what they do best. I've actually got a final little twist. A final, final. A final, final, final for you. An extra credit question. Extra credit. Yes, for extra credit. There was a time, and this shows how old I am, think of the dawn of the Netflix streaming era, when it was all about the last mile in terms of networking, and there was an overabundance of ports and network bandwidth, and then there was a dearth, a shortage, a drought when it came to bandwidth.
Are we on top of that as an industry, or do we see a tidal wave of requirements coming down the line with this AI thing that will put us in a situation where we're going to have to scramble? Or do you feel Lumen's on top of that? Do you feel constrained at all by any of that? I don't think we feel constrained. I think we've built enough capacity. We know how to build and run and operate the capacity of our global fiber backbone, and we're pretty proficient at it at this point. I think there are lots of exciting opportunities that require a lot of network connectivity, but they don't necessarily require a phase shift of bandwidth, if that makes sense. Yeah, sure, absolutely. There are going to be a lot more slices of that capacity pool, but as cloud has taught us, as virtualization has taught us, the more slices you take of a fixed asset, the easier it is to manage capacity, because you can kind of see the growth wave coming. Makes sense. I feel like he aced that extra credit. Full credit. That was like my seven-and-a-half appearance now. I grade papers every day; that was excellent. That's great. Thank you so much for joining us on your seventh time on theCUBE. We look forward to the eighth, but thanks for sharing an update on Lumen, its partnership with VMware, and some of the exciting things we can be looking out for as this evolves. Lisa, Dave, it was a pleasure. Pleasure. Likewise. For our guest and for, I almost said Dave Vellante, Dave Nicholson, I'm Lisa Martin. Sorry, it's almost five o'clock. You're watching theCUBE. Stick around, we have more great content coming to you from the next set later on today. See you soon.