Live from Barcelona, Spain, it's theCUBE, covering Cisco Live 2020. Brought to you by Cisco and its ecosystem partners.

Welcome back to theCUBE's live coverage, four days here in Barcelona, Spain. I'm John Furrier, here with Stu Miniman, covering Cisco Live 2020 in Europe. Our next guest, Michael Beesley, CTO of the Cisco Service Provider Business Unit. Michael, great to see you. Thanks for coming on.

Thank you for having me. It's great to see you guys again.

You came on at the Cisco Live show last year, 2019 in the US, obviously as the CTO of the Service Provider Group. You're in the middle of all these really big conversations, because the service providers have been pushing the envelope for generations to get better performance, but also dealing with the diversity of services they have to start bolting onto their infrastructure. Now with all the pressure from the cloud providers, everybody's streaming these days, so there's all this new competition, but service providers still have a huge footprint, huge infrastructure. What's the story? What's going on with the service providers?

They obviously do. I mean, more and more service providers are deploying and running critical infrastructure for their consumer customers and their enterprise customers, and obviously as economies, nations, and industries continue to digitize, that infrastructure is critical for governments, for countries, and for whole economic environments. And the reality, of course, is that the bandwidth keeps growing. More and more bandwidth is coming onto the network. We see tremendous innovation and advancements in the access layers, whether it be DOCSIS for cable, Wi-Fi 6 obviously for Wi-Fi, and 5G with regard to mobility. So the amount of bandwidth that can come onto the network keeps rising, rising exponentially.
So service providers, obviously that poses a set of challenges, but also a set of opportunities as they rethink their architectures and their infrastructure to be able to deliver that bandwidth cost-effectively.

I know cost is a huge concern for these guys because they do spend a lot of money. Stu and I were just reminiscing about how long we've been following Cisco, growing up in the computer industry at our age. We were there when Cisco was born and watched it progress over the years, and now it's on to the next generation, the next-gen cloud, next-gen everything. It's interesting, you have the service provider segment, the one that you're in, and I would say maybe financial services, that have always been the hardcore Cisco customers, pushing the envelope on the gear, pushing the envelope on the technology, because they have low-latency requirements and you're moving packets around, right? Now you're starting to add more payload with more bandwidth coming. They're really the bellwether. What are the big trends that they're driving now? Because, again, they have to maintain those table stakes and still pioneer new ground. What are some of the things that they're doing that you see as telltale signs for the future?

The first thing I see is a drive toward re-architecting the network such that it's much more simple, easier to operate, more cost-effective, and more reliable, with new, next-generation technology up and down the stack: from the silicon through the actual systems, the embedded software, the optical modules, all of the physical ingredients that go into building a next-generation, software-defined transport network. That's really what I see our major customers aiming toward. Obviously it takes time, and there are a number of challenges, given that some of these customers have been running networks for a century.
That said, there's the desire and the effort to partner with us to get to that future state, such that the bandwidth can be offered cost-effectively and very reliably as we build out these critical infrastructures. I would add, the other aspect is that as these networks are getting more powerful and delivering more services, there is more of a consideration for the integrity, the trustworthiness, and the security of the actual networks and infrastructure, from the hardware through the software and silicon that make up these networks. That means having technologies that can measure the trustworthiness and the fidelity, both from a hardware and a software perspective, and being able to report off of the infrastructure with attestation records, to verify and to drive analytics with regard to the cleanliness and the trustworthiness of this infrastructure.

Michael, I remember leading up to the announcement that Cisco had in December, it was, oh, here's the next generation of the internet. And in my mind, I was like, oh, sounds like it's time for the next generation of routers. But what I found really interesting is, what are those next-generation applications that are going to drive things? John talked about it from a history lesson. I remember going back, okay, what's going to drive 10 gig? We were going from a lot of North-South traffic to East-West as the virtualization wave was really kicking off inside data centers. These days, it's multi-cloud, it's cloud-native applications, and 5G, of course, is a drumbeat in the background. Talk a little bit about some of those applications, the business impact that the service providers need to be able to enable in the rollout of this new technology.

Yeah, it's a very interesting area. I've been in the industry for 30-something years, just about 30 years, and I think I've never found the industry more exciting than it is today. Obviously, there's a set of challenges, but there's an incredible set of opportunities as well.
All of the applications that we know and love today are continuing to grow at exponential rates, getting bigger and seeing further and further adoption. Consider the fact that roughly half of all humans on the planet are on the internet, with more coming in the future at an accelerated rate and bringing more devices with them; we think the average number of devices per user will go up to about three and a half over the next few years. So you have the current set of applications, whether it be social media, video, video streaming, and they continue to grow. And then there's a whole new set of applications that we'll see. There's a long list, and we'll see which ones actually transpire, it's hard to predict, but everything from advances in gaming, artificial intelligence, AR and VR services, telemedicine, and the continued digitization of industry, in particular manufacturing, transportation, oil and gas. All of these industries open up the prospect for new applications that will run on top of this infrastructure, that will drive exponential growth in bandwidth, and that will require much, much better latency from the actual network infrastructure. So those are areas where we're focused on delivering the innovation and the core building blocks to allow our customers to build these networks and offer these services.

You know, one of the other challenges, as you talk about these transitions and this step function that networking tends to do, is the cost involved when you go from one generation to the next. Now Cisco, of course, has a large optics business, a major player in the industry. Talk to us a little bit about what 400 gig means today and how people should be thinking about the cost of these types of solutions.

Yeah, it's interesting. Certainly, as we've seen with each generation, as the interface speeds have changed, the actual BOM, the bill of materials for the solution, has changed significantly with regard to which piece accounts for what dollars.
It used to be, if you go back to the 10 gig generation, that the actual networking equipment itself was the majority of the cost. That was the majority of the BOM. The optic modules that plugged in were a minority; maybe the optic modules were 10 or 15%, and the rest was actually the system. As we look to the 400 gig generation, that has actually reversed. We now have network silicon that is so dense and so fast that it can power a full 36 ports of 400 gig on a single line card. So you're plugging in 36 optical modules to bring that bandwidth to the networking silicon. As a percentage of the BOM, the optical module is that much higher. It also becomes more critical technology with regard to the reliability and the cost of the whole solution. This is why Cisco has taken a big focus on the optical module space. We've continued our own organic development, and we've also been quite active on the M&A front, ensuring that we have the technology and the right R&D programs to be able to deliver very reliable, cost-effective optics at 400 gig and beyond.

So you brought up silicon, so I've got to ask the Silicon One question. We were covering the launch in San Francisco. Chuck Robbins was there, David Goeckeler, you had all the top dogs there really going off on the future of silicon, and of course SiliconANGLE is interested in covering that because that's in our name. But the trend is about cloud scale and operational efficiency. And one of the things coming out of the cloud trend is an operating model across public cloud and on-premise. That is proven, that's what people are going through, that's hybrid. How are the service providers implementing that? Do you guys see Silicon One being that opportunity where they can have intent, a software lifecycle, an operating model? Is that some of the value? What's the real story for service providers?
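As a quick back-of-the-envelope sketch of the line-card numbers Beesley cites above (36 ports of 400 gig, and optics at roughly 10-15% of the BOM in the 10 gig era) — note the exact 400 gig optics share is not quoted in the interview, so the "reversed" split below is only illustrative:

```python
# Back-of-the-envelope math for the 400G line card described above.
# All figures are illustrative, taken from the conversation.

PORTS_PER_LINE_CARD = 36
PORT_SPEED_GBPS = 400

total_gbps = PORTS_PER_LINE_CARD * PORT_SPEED_GBPS
print(f"Line card capacity: {total_gbps} Gbps ({total_gbps / 1000} Tbps)")
# 36 x 400G = 14,400 Gbps, i.e. 14.4 Tbps on a single line card

# In the 10 gig era, optic modules were maybe 10-15% of the bill of
# materials; at 400 gig that ratio roughly inverts, which is why the
# cost and reliability of the optical module becomes the critical factor.
optics_share_10g = 0.15                    # upper bound quoted for 10G
optics_share_400g = 1 - optics_share_10g   # illustrative "reversed" split
print(f"Optics share of BOM: ~{optics_share_10g:.0%} at 10G vs "
      f"~{optics_share_400g:.0%} at 400G (illustrative)")
```

The takeaway is that each 400G port multiplies both the bandwidth and the relative cost weight of the pluggable optic, which is the economic argument behind Cisco's optics acquisitions mentioned in the answer.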
So, I mean, that's a core aspect of our architecture and our strategy: to have a full solution that our service provider customers can consume, one that embodies all of those learnings and all of those operational realities that have built up in the public cloud space. Certainly Silicon One is a key aspect of that, being the fundamental building block from a network processing point of view, the building block that actually switches packets and routes the traffic through the infrastructure and through the transport network. Along with Silicon One, we have our embedded software, XR7, which is the control plane for that silicon, and it embodies the routing protocols, the management interfaces, analytics, traffic management, QoS services, and so forth. But more and more, we're augmenting that embedded software with a set of cloud services delivered as SaaS to our customers, which aids in operations, reduces their deployment efforts and deployment costs, and also increases the reliability of the whole solution as the SaaS services augment the physical infrastructure. There's less room for human error. There's less room for integration problems between the layers in the stack. So it's a key aspect of our strategy.

Okay, so the question is about the user experience, or the application experience. If I'm developing apps on Silicon One, is it multiple stacks? What does the stack look like? What does the developer environment look like? I'm a telco, I'm a service provider, what's going on?

So it depends on the use case. What we announced last month was not only the silicon and our own products, the Cisco 8000 that uses that silicon, but also the offering of Silicon One through a merchant silicon program, where third parties, OEMs, or a large customer could actually transact with us on the silicon alone, where we're selling them the actual silicon.
In that context, the silicon comes with a full-featured software development kit that sits on top of that silicon. You can consider it a device driver, if you like, an abstraction layer. That then allows the third party to either use open source or build their own network services stack on top of that SDK, which can then leverage all of the power and the innovation in the Silicon One engine.

Before I get to my video question — because we're doing video, we care about that, and we love more bandwidth, we love more video action — how do you talk to customers that you meet with? Because we hear a lot from the community and our expert network of theCUBE alumni that there are a lot of pretender products out there. You bolt on a NIC, it's an offload. Where is it okay to have performance enhancements, performance-enhancing hardware? That didn't sound quite right, but you get it. So which customer profiles want more of that Silicon One or Cisco 8000 versus an enhancement product, and how does a customer determine the right fit? One may look good on paper, low price, high performance. How do you go in and say, that's a pretender, that's a player?

It's interesting. I think the fundamental root of the answer to that question is that you have to look at the application stack you're trying to deliver. If it's a homogeneous stack, where the applications are infrastructure to deliver services to a third party, then what matters is simply that application and all the infrastructure underneath it, and how you can deliver that most cost-effectively, both in terms of capital cost and operational cost: power, and the human operational cost of running the infrastructure.
If you think about a heterogeneous situation — public cloud is a good example — the public cloud provider is responsible for and bears the cost of the infrastructure layer, and the customers themselves bring the application workloads to run on top of that infrastructure. In that heterogeneous model, there might indeed be some valid business, security, and operational reasons for separating the infrastructure out: having part of the infrastructure dedicated to running the application and part dedicated to the overhead of that application, whether it be virtual networking, security functions, analytics, and so forth. So it's interesting: generally, what our customers are looking for more than anything else is, bottom line, the most efficient way to deliver the end result, regardless of how it's architected or how the processing is separated into different layers of compute and dedicated hardware. What's the most effective way to deliver the outcome, both in terms of capital cost and, more and more, operational cost? And as everything gets faster, power draw is more and more a dominant factor in the ongoing operational costs of these networks.

I want to get your thoughts on a couple of trends. One is the comeback of voice. Stu was riffing about his days working in voice over IP. Now voice — "Hey Alexa" — isn't a really bandwidth-heavy application, so okay, great, voice is coming back, and that fits the service providers. But video is growing really fast. Video is putting a lot of pressure on service providers. What's the state of the art there? Can you comment on how you see that evolving? What are they doing, and what are some best practices?

Yeah, I mean, you're exactly right. Video — in particular over-the-top video, streaming video, but broadly video in all its forms — continues to grow at exponential levels.
In our analysis, if you look at the Cisco VNI study, we predict that by 2022 more than 80% of all internet traffic will actually be video. And along with that growth, unfortunately, the value per bit goes down, especially as you get to higher-definition video; the value per bit to the service provider, to the entity bearing the transport cost of the video, is actually going down. So what that drives our customers to do is, first of all, provision very high bandwidth networks, but also optimize the most cost-effective way to deliver that video at very high quality to their end users. I would say there are a few things top of mind in achieving that. The first is distributing out the network, in particular distributing peering into the metro areas, and no longer having peering only at the far side of the backbone. When peering is done in the metro, that traffic is literally on the network for fewer kilometers, so that helps. I would also say the deployment of edge compute, caching, and CDN services in the metro really helps in delivering video.

We just got a great tutorial on video architecture: the major highways of the pipes, metro, peering. So it's changing the dynamics of peering relationships and traffic routes, but ultimately making it efficient.

That's what you see there, exactly.

Well, Michael, great to have you on. I know you've got Mobile World Congress coming up in February. Always a big show. Spill some of the announcements for us.

I'd love to, but unfortunately I would not be popular with my bosses.

I know, just teasing you. I know you've got some good stuff coming. We're waiting to hear it. We haven't heard anything, but we're getting some rumblings. As always, big announcements from you guys. Congratulations. Thanks for coming on.

Thank you so much. We look forward to it.
Great insights here on theCUBE on the service provider market: the needs, what's going on in the network, and ultimately how video is changing. The architecture is changing too, and that's putting on more pressure: more bandwidth, more things happening. This is the Cisco-powered CUBE here in Barcelona. I'm John Furrier, with Stu Miniman. Thanks for watching.