Welcome to this CUBE Conversation. I'm Dave Nicholson, and this is continuing coverage of Google Cloud Next 21. I'm joined today by Sachin Gupta, General Manager and Vice President of Open Infrastructure at Google Cloud. Sachin, welcome to the CUBE.

Thanks, Dave, it's great to be here.

So you and I both know that the definition of what constitutes cloud has been hotly contested by some over the last 20 years. But I think you and I both know that in some quarters there has never really been a debate. NIST, for example, the standards body that defines what constitutes cloud, has always considered cloud an operational model, a set of capabilities, and has never tied cloud to a specific location. With that in mind, how about sharing with us what was announced at Cloud Next 21 around Google Distributed Cloud?

Yeah, thanks, Dave. The power of cloud in terms of automation, simplicity, and observability is undeniable. But our mission at Google Cloud is to meet customers where they are in their digital transformation journey. And in talking to customers, we found there are some reasons that can prevent them from moving certain workloads to cloud. There may be a low-latency requirement. There may be high volumes of data processing that need to happen on-prem, so taking data from on-prem, moving it to cloud to get it processed, and moving it all the way back may not be very efficient. There can be security, privacy, data residency, and compliance requirements they're dealing with. And in some industries and for some customers, there are very strict data sovereignty requirements that don't allow them to move things into the public cloud. So when we talked to customers, we realized we needed to extend the cloud, and that's why we introduced Google Distributed Cloud at Next 2021. What Google Distributed Cloud provides is all of that power of cloud anywhere the customer needs it.
And this could be at a Google network edge. It could be at an operator or communication service provider edge as well. It could be at the customer edge, right on-premises at their site. It could be in their data centers. So there's a lot of flexibility in how you deploy, through a fully managed hardware and software solution delivered by Google.

It's interesting, because statistics are often cited that somewhere near 75% of what we do in IT is still, quote unquote, on-premises. The reality, however, is that what's happening in those physical locations at the edge is looking a lot more cloudy, isn't it?

Yes, customers are looking for that computational power, storage, automation, and simplicity in all of these locations.

And what does this look like from an infrastructure stack perspective? Is there some secret sauce that you're layering into this that we should know about?

Yes. So let me talk about it a little more. We start with third-party hardware, sourcing from Dell, HPE, Cisco, NVIDIA, and NetApp and bringing it together. We're using Anthos, which you're hopefully familiar with, our hybrid, multi-cloud software layer. And on top of that, we use open-source technologies, for example, built on Kubernetes. We offer a containerized environment and a VM environment that enable both Google first-party services and third-party services that customers may choose to deploy on top of this infrastructure. The management of the entire infrastructure, top to bottom, is delivered by Google directly, and therefore customers can focus on applications. They can focus on business initiatives and not worry about the infrastructure complexity; they can just leave that to us.

So you mentioned Kubernetes, thinking of containerization as cloud native. You also said VMs. So this spans the divide between containerized, microservices-based applications and, say, VMware-style virtual machines or other VMs.
Yes, look, the majority of customers are looking to modernize and move to a containerized environment with Kubernetes, but they may have some workloads that still require a VM-like environment. And having the simplicity and efficiency of operating VMs like containers on top of Google Distributed Cloud, built on Anthos, is extremely powerful for them. It goes back to our mission: we're going to meet customers where they are, and if they need VM support as well, we're providing it.

So let's talk about initial implementations of this. What kind of scale are you anticipating customers will deploy?

The scale is going to vary based on use case. It could be very small, think of it as a single server, all the way to many, many dozens of racks going in to support Google Distributed Cloud. So for example, for a communication service provider looking to modernize their 5G network, in the core it could be many, many racks with Google Distributed Cloud, the edge product, and for their RAN solutions it could be a much smaller form factor. So depending on use case, you're going to find all kinds of different form factors. And I didn't mention this before, but in addition to scale, we offer two operational modes. One is the edge product, Google Distributed Cloud Edge, which is connected to the cloud and so gets operational updates, et cetera, directly from the cloud. The second is something we call the hosted mode. And in hosted mode, it's completely air-gapped. So this infrastructure, which is modernized and provides rich first-party and third-party services, does not connect to the cloud at all. And therefore the organizations with the strictest data sovereignty requirements can benefit from a completely air-gapped solution as well.
So I'm curious. Let's say you started with an air-gapped model; often our capabilities in cloud exceed our customers' comfort level for a period of time. Can that air-gapped initial implementation be connected to the cloud in the future?

Typically the same customer may have multiple deployments: one will require the air-gapped hosted solution, and another could be the edge product, which is connected. And in both cases, the underlying stack is consistent. So while I don't hear customers saying, "I want to start air-gapped and then move," we are providing Google Distributed Cloud as one portfolio to customers so that we can address these different use cases. In the air-gapped solution, the software updates obviously still come from Google, and customers need to move them across the air gap, check signatures, check for vulnerabilities, and load them into the system, and the system will then automatically update itself. So we still provide the software, but in that case there are additional checks that the customer will typically go through before enabling that software on their system.

Yeah, so you mentioned at the outset some of the drivers, latency, security, et cetera, but can you restate that? Now I'd like to hear what the thinking behind this was at Google when customers were presenting you with a variety of problems they needed solutions for. I think it bears recapping.

Right, so let me give you a few examples here. One is 5G. When you think about what 4G did for the industry in terms of enabling the app economy, with 5G we can enable even richer experiences. This could be highly immersive experiences, it could be augmented reality, it could be all kinds of technologies that require lower latency. And for this you need to build out the 5G infrastructure on top of a modernized solution like Google Distributed Cloud.
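The air-gapped update flow Sachin describes, carry the bundle across the air gap, verify it, then load it, can be sketched roughly as follows. This is an illustrative sketch only: the function names, the out-of-band digest, and the "loaded" result are assumptions for illustration, not Google Distributed Cloud's actual tooling, and a real deployment would also verify cryptographic signatures and run vulnerability scans.

```python
import hashlib


def verify_update_bundle(bundle: bytes, expected_sha256: str) -> bool:
    """Compare the bundle's digest to a value obtained out-of-band,
    before anything is loaded into the air-gapped environment."""
    return hashlib.sha256(bundle).hexdigest() == expected_sha256


def load_update(bundle: bytes, expected_sha256: str) -> str:
    # Hypothetical gatekeeping step: refuse to load anything that
    # fails verification. Signature checks and vulnerability scans
    # would slot in here as additional gates.
    if not verify_update_bundle(bundle, expected_sha256):
        raise ValueError("update bundle failed verification")
    return "loaded"


# The operator carries the bundle and its published digest across the air gap.
bundle = b"example update payload"
digest = hashlib.sha256(bundle).hexdigest()
print(load_update(bundle, digest))  # prints "loaded"
```

The point of the sketch is the ordering: verification happens on the disconnected side, so nothing reaches the running system until the checks pass.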
Let me just get into a few use cases, though, to bring some color here. For example, for a retailer, instead of worrying about IT and infrastructure in the store, the people in the store can focus on their customers, and they can implement solutions on Google Distributed Cloud for things like inventory management, asset protection, et cetera. Inside a manufacturing facility, once again, you can reduce incidents, you can reduce injuries, you can run robotic solutions that require low-latency feedback, et cetera. And there's a whole set of emerging ISV applications where rich infrastructure, on-prem or at the edge, anywhere you want it, enables a new suite of possibilities that weren't possible before. In some cases, customers say, "You know what, I want 5G, but I actually want a private 5G deployment," and that becomes possible with Google Distributed Cloud as well.

So we talked a little bit about scale. What's the smallest increment that someone could deploy? You just gave an example of retail, and some retail assets are small stores without any IT staff at all. There's the concept of a single-node Kubernetes cluster, which is something we love to come up with in our business, terminology that makes no sense, a single-node cluster. The point is that these increments, especially in the containerized world, are getting smaller. What's the smallest increment that you're planning to deliver?

I'll answer this two ways. First of all, we are planning to deliver a smallest increment, think of it as one server, all the way up to many, many racks. But in addition, there's something unique that I wanted to call out. Let's say you're in a medium or larger deployment with racks and you want to scale compute and storage separately. That's something we enable as well, right?
Because we will work with customers on what they need for their application, and then scale that hardware up and down based on their needs. So there's a lot of flexibility in that, but we will enable scale all the way down to a single server unit as well.

So what has the feedback been from partners, from the partners that will be providing the hardware infrastructure, folks like Dell? What has their reaction been?

Yeah, they're obviously very eager to work with us. We're happy to partner with them in order to give customers flexibility, any kind of scale in any kind of location, and the different kinds of hardware equipment they need. But in addition to those partners on the hardware side, there are customers and partners as well who are enabling rich experiences and solutions for that retailer or that manufacturer, for example. So working with AT&T, we announced a partnership on 5G and edge to enable experiences, especially in the areas of retail and manufacturing, like I talked about earlier. And in Europe, we're partnering with OVHcloud, for example, to address the very strict data sovereignty requirements emerging there. So we're working with many communication service providers and many partners to solve different use cases for their end customers.

Yeah, that makes a lot of sense. So let's pretend for a minute that you're getting Yelp reviews of this infrastructure that you're responsible for moving forward. What would a delighted customer's comments look like?

Yeah, I think a delighted customer's comments would probably be in two or three areas, all right? So first up, it's all about the applications and the end-user experience they enable. And so the power of Google AI and ML technology, and third-party software as well, running consistently, a single operational model, build once, deploy anywhere, is extremely powerful.
So I would say the power of the applications and the simplicity that enables is number one. Number two is the scaled operations experience that Google has. Customers don't need to worry about whether they have five sites or 500 sites or 5,000 sites; it doesn't matter. The fleet operations, the scaled operations capability, the global network capability that Google has, all of that site reliability engineering experience, we can now bring to all of these vast numbers of edge locations. So they don't need to worry about scale at all. And then finally, they can rest assured that this is built on Anthos, built on Kubernetes, with a lot of open-source components. They have flexibility, they have choice; they can run our first-party services, they can run third-party services on this. And so we're going to preserve that flexibility and choice. I think these are the things that would likely get highlighted.

So Sachin, you talk to customers around the world. Where do you see the mix between new workloads going into infrastructure like this versus modernized and migrated workloads? What does that mix look like? And I know it's a bit of speculation, but what are your thoughts?

Yeah, Dave, that's a great question, and I think a difficult one to answer, because we find that those conversations happen together with the same customers, at least that's what I find. So they are looking to modernize, to create a much richer environment for their developers so they can innovate more quickly, react to business needs more quickly, cater to their own end customers in a much better way, and get business insights from the data they have. They're looking to do all of this, but at the same time they may have legacy infrastructure or applications that they just can't easily migrate off of, that may still be in a VM environment or a more traditional type of storage environment, and they need to be able to address both worlds.
And so yes, there are some who are so-called born in the cloud, where everything is cloud native, but the vast majority of customers I talk to are absolutely looking to modernize. You don't find a customer that says, "Just help me lift and shift, I'm not looking to modernize," I just don't see that. They are looking to modernize, but they want to make sure we have the options they need to support the different kinds of environments they have today.

And you mentioned insights. We should explore that a little further. Can you give us an example of artificial intelligence and machine learning being used now at the edge, where you're putting more compute power at the edge? Can you give us an idea of the kinds of things that enables, specifically?

Yes. So think about video processing, for example. If I have a lot of video feeds and, based on those, I want to apply artificial intelligence, I'm trying to detect objects, inventory movement, people movement, et cetera, again, adhering to all the privacy and local regulations. When I have that much data streaming in, if I had to take it out of my edge, all the way across the WAN into the cloud for processing, and bring the result all the way back to make a decision, I'm just moving a lot of data up and down. And in this case, what you're able to say is: no, you don't actually need to move it into the public cloud. You can keep that data locally. You can have a Google Distributed Cloud Edge instance there, run your AI application right there, get the insights, and take action very, very quickly. And so that saves you significantly from a latency point of view, and it saves you significantly on data transmission up and down to the cloud. Sometimes you're not supposed to send that data up at all, because there are data residency requirements, and sometimes the cost of just moving it doesn't make sense.

So, do you have any final thoughts?
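The local-inference pattern Sachin describes a moment earlier, keep the video frames at the edge and ship only compact detection events upstream, can be sketched as a toy comparison. Everything here is hypothetical for illustration: the stand-in detector, the event format, and the byte counts are assumptions, not any Google API.

```python
# Illustrative sketch: process frames at the edge, send only small events.


def detect_objects(frame: bytes) -> list:
    """Stand-in for a local AI model; a real edge deployment would run
    an accelerated inference runtime against the actual frame contents."""
    return ["person"] if frame[0] % 2 else []


def process_at_edge(frames: list) -> tuple:
    """Run detection locally and return (events, raw_bytes, uplink_bytes)."""
    raw_bytes = sum(len(f) for f in frames)  # data that stays on-site
    events = []
    for i, frame in enumerate(frames):
        for label in detect_objects(frame):
            events.append({"frame": i, "label": label})
    # Only the compact events would leave the site over the WAN.
    uplink_bytes = sum(len(str(e)) for e in events)
    return events, raw_bytes, uplink_bytes


# Ten simulated frames of roughly 1 MB each.
frames = [bytes([i]) * 1_000_000 for i in range(10)]
events, raw, uplink = process_at_edge(frames)
print(raw, uplink)  # uplink is orders of magnitude smaller than raw
```

The design point is the ratio: megabytes of video stay local, while only a few hundred bytes of insight cross the WAN, which is what removes both the latency and the data-residency pressure described above.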
What else should we know about this? Anything that we didn't touch on?

I think we touched on a lot of great things. I'll just reiterate: you started with the definition of cloud itself, and our mission, once again, is to really understand what customers are trying to do and meet them where they are. And we're finding that they're looking for cloud solutions in public regions; we've announced a lot more regions and continue to grow our footprint globally. But in addition, they want to be able to get the power of Google Cloud infrastructure, and all the benefits it provides, in many different edge locations, all the way onto their premises. And I think one of the things we perhaps spent less time on is that we're also very unique in our strategy: we're bringing in underlying third-party hardware, but it's a fully managed solution that can operate in that connected edge mode as well as a disconnected hosted mode, which enables pretty much all the use cases we've heard about from customers. So, one portfolio that can address any kind of need they have.

Fantastic. Well, as I said at the outset, Sachin, before we got started, you and I could talk for hours on this subject. Sadly, we don't have hours. I'd like to thank you for joining us in the CUBE. And I'd like to thank everyone for joining us for this CUBE Conversation, covering the events at Google Cloud Next 2021. I'm Dave Nicholson. Thanks for joining.