Live from San Francisco, it's theCUBE, covering Google Cloud Next 19. Brought to you by Google Cloud and its ecosystem partners. Welcome back to theCUBE's live Google Next 19 coverage. I'm John Furrier, here with Dave Vellante. We're here for three days of wall-to-wall coverage, breaking down all the content from Google Cloud's big conference here, Google Next 2019. Our next guest, Joe Kava, Vice President of Google Data Centers, oversees all the data centers that Google and Google Cloud deploy. He's the man in charge of thousands of full-time employees, thousands of contractors, tens of thousands of construction workers. He's building out the infrastructure and footprint to make the cloud work for us. Joe, welcome to theCUBE. Thank you both very much. So, Sundar Pichai, the CEO of Google, kicked off the keynote. The new CEO of Google Cloud, Thomas Kurian, came on only 10 weeks into the job. Clearly, the investment in Google Cloud, new building separate from campus. So, Google and Google Cloud are two separate groups, as has been reported clearly by us and others. But at the end of the day, you've got to run all this stuff somewhere. So, you know, you guys have deep, deep experience. I know personally, in following Google and covering Google, the excellence in engineering, the excellence in building out data centers. What is the status of, just quickly take a minute to explain how it's organized. You've got Google proper, which is what everyone knows as Google, Google Search, et cetera, Gmail, and Google Cloud. How does that operate? What are some of the data points? Okay. So, as the head of the teams that do everything from procuring land and writing energy contracts and buying renewable energy to designing, building, and operating all the data centers, Cloud is one of my largest customers. But my other customers are Search and Ads and Gmail and G Suite. So, really, our data centers at Google are built for the entire Google enterprise. 
And Cloud happens to be one of our largest internal customers in that enterprise. Talk about some of the stats, countries, regions, data centers. What's the nuance? Because you have regions, you have availability zones. Talk about some of the stats inside the numbers. So, starting at the Google level, we have data centers on four continents. So, we're in North America, South America, Asia, and Europe, of course. We have probably one of the world's largest global private networks, with 13 undersea cables that are our own and hundreds of thousands of miles of dark fiber and lit fiber that we operate. Like I said, probably one of the world's largest networks. In Europe, we're in five countries; we're in two countries in Asia; we're in one country in South America. And in North America, of course, we have many, many, many sites across all of North America. That's at the Google level. Now, Cloud has 19 regions that it operates in, and 58 zones. So, each region, of course, has multiple zones in it. You know, Google has a presence in over 200 countries worldwide. So, really, it is truly a global operation. So, the 200 countries is Google-wide. The 19 Cloud regions and 58 availability zones, that's Google Cloud, is that right? Okay, and so, do you not sort of mix infrastructure for Cloud and things like Gmail and Maps and Search? Is that correct, they're separate infrastructures? It's not really separate infrastructure. So, when my team builds a data center, any one of our internal customers could be in that data center. In addition to the Google-owned and operated data centers, we also have some sites that are leased in certain regions, and Cloud may be occupying those. But regardless of whether it's owned or leased, it's the same hardware in there, it's the same operations staff that are in there, the same expertise, the same deep knowledge about operating Cloud environments. 
And so, regardless of whether we built it or we leased it, it's the same experience. From a CIO's perspective, it's the same SLA, no matter what availability zone it is. I mean, that's what really matters, yeah, okay. Talk about the scale, because one of the things I liked in the keynote, Sundar is awesome, he gives a great keynote. He used scale multiple times. He also had a clever comment around steel, which he's said before publicly, the amount of steel that goes into building this. This gives you guys large scale, you guys are building out massive, I mean, it's like smart cities, almost, you guys have your own country, pretty much, on the infrastructure. What are some of the key learnings that you guys have? Because you have to be very efficient, and Google likes to solve hard problems. You guys have done some things with sustainability, specifically. Talk about some of the learnings as you guys have been building out these data centers for years, with Cloud on a massive expansion. You've got to watch the environment, you've got to do some things. What are some of the learnings, what are some of the notable accomplishments you guys are forging on, and what are some of the goals? So at Google, we've been at this for two decades. For more than 20 years, we've been building and innovating on hyper-efficiency, hyper-scale, basically trying to build infrastructure that was more sustainable than had ever been thought possible. And then as our cloud business started to expand and boom, frankly, we set out to build the world's most sustainable cloud. And really what that means is that, you know, we were the first company to announce that we were buying 100% renewable energy, new renewable PPAs, to match 100% of our consumption. And in 2017, we achieved that. That was after being carbon-neutral for 10 years before that. So going all the way back to 2007, we were a carbon-neutral company, mostly by buying high-quality carbon offsets. 
Then we decided that, no, we want to advance the transition to renewable and sustainable energy. So we started buying direct power purchase agreements for wind and solar. And then in 2017, we announced that we had matched 100%. What that means is that we've acquired over three gigawatts of new solar and wind power purchase agreements. And now we're taking it a step further. We have a very ambitious, arguably moonshot, goal to not only match our consumption, but match it 24 hours a day, seven days a week, 365 days a year. You can imagine the complexity of this, because the wind doesn't always blow and the sun doesn't always shine. And so that's going to take moonshot thinking in order for us to get there. But we feel so strongly about it, we're so committed to this cause, that we've got a dedicated team working on this right now. So it's not just squeezing PUE out of the data center. I'm sure you're doing that, but like you say, it's a moonshot. Absolutely, we've been doing that since the earliest days. I've been at Google for over 11 years. From the very first day I got there, I was completely blown away by the numbers I was seeing about the PUE. And for your audience, PUE is a measure of efficiency in a data center. And at the time, back in like 2008, Google was achieving numbers that the EPA thought wouldn't be achieved until like 2020. And so I started to dig in and look at how, and it was astounding to me, the lengths that the company had gone to to optimize every single step of the way: from the high-voltage transformers in our own dedicated substations, excuse me, which are much more efficient than typical utility transformers, all the way through minimizing the number of transformations going from grid level, like 345,000 volts, down to server voltage level, to reinventing the way people think about cooling. When I got to Google, I was also amazed. 
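To ground the PUE discussion, here's a minimal sketch of how the metric works (the numbers are hypothetical, for illustration only, not Google's actual figures): PUE is total facility energy divided by the energy delivered to IT equipment, so 1.0 is the theoretical ideal and anything above it is overhead.

```python
def pue(total_facility_energy_kwh: float, it_equipment_energy_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy (IT load plus
    cooling, power-distribution losses, lighting, etc.) divided by the
    energy actually delivered to IT equipment. 1.0 is the ideal."""
    return total_facility_energy_kwh / it_equipment_energy_kwh

# Hypothetical numbers for illustration:
print(pue(2000, 1000))  # 2.0 -- half the facility's energy goes to overhead
print(pue(1100, 1000))  # 1.1 -- overhead squeezed to roughly 10%
```

A PUE of 2.0 means the facility burns a full watt of cooling and distribution overhead for every watt of compute; driving that toward 1.1 is where the transformer, voltage-conversion, and cooling optimizations Joe describes show up.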
Our data centers run at roughly 80 degrees Fahrenheit. Most data centers run at like 65 degrees Fahrenheit. Our data centers consume about half the energy of a traditional enterprise data center of the same size. And in addition to that, we're producing about seven times the compute capacity for the same amount of watts as enterprise data centers. This comes from a practice of engineering, really purpose-built engineering, from day one, into the overall holistic plan of the build-out. Frankly, it's a relentless focus on efficiency and innovation right from day one. When I got there, it had already been well in motion, but it's optimizing across the entire stack. It's optimizing software to be efficient, optimizing the server architecture to be more efficient, optimizing the power supplies in the servers, optimizing the racks, designing the racks to work with the cooling equipment specifically. Our cooling systems are unique to Google. They're not the traditional air conditioning units that you would buy for typical data centers. Sometimes we'll site data centers where we can use the natural environment. In Finland, our data center is right on the Gulf of Finland, and we use cold seawater from the Gulf of Finland to cool the data center. So to be clear, you're doing quite a bit of vertical integration, whether it's your own transformers or power supplies and other equipment, right? Fiber optic across the Atlantic, as Sundar pointed out. You guys are doing your own stuff. And the efficiencies you pass on in savings to the customers, and to society with the sustainability piece. That's right. Two angles on that. Really, it's good business, of course, because it's bottom line, but more importantly, it's also the right thing for us to do. We feel very strongly that we need to be responsible for our impact on the environment, and to minimize that impact and be accountable for it. 
And we realize that the only way we can truly be accountable for our impact on the environment and for our energy consumption is to have it matched with renewable energy 24 hours a day, seven days a week. Not to take a sidetrack here, but we've been covering the tech business for many, many decades, and certainly recently tech has kind of gotten a bad name because of some headlines. But I always look for tech stories where, you know, everyone's like, oh, tech's bad for people. There's always a good story. I think this is an example of tech for good. You guys have taken real engineering, building large-scale systems and facilities that have software running on them. It's really a tech-for-good story. So congratulations on that. That's awesome work. Now I want to kind of put you on the spot here, because I think one conversation we're hearing a lot, and I want to get your expert opinion on this, as Google and also as a person in the industry: security in the supply chain has come up a lot, in terms of whether chips have been hacked. We've heard things like that in the news. Some of them have proven to be misinformation and fake news, but you've got to watch security. Google's really hardcore on security, as you know, you live that. How do you look at the supply chain? Because you're not just throwing contractors at this, you could be taking a very holistic, ground-up engineering approach. How do you guys manage the security challenge in the supply chain throughout the facilities, from chips to access, things of that nature? Sure, so there are two aspects. There's always the logical and the physical security aspect. From the physical security aspect, in the warehouses that we manage, of course we apply the same rigorous standards for physical security that we do at our data centers. That's multi-layered, with various different types of security technologies that we apply. 
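The distinction between annual 100% renewable matching and the 24/7 moonshot Joe describes can be made concrete with a toy calculation (all numbers hypothetical): a fleet can be fully matched on an aggregate basis while still drawing non-renewable power in the hours when the wind isn't blowing and the sun isn't shining, and hour-by-hour matching is exactly the gap the 24/7 goal closes.

```python
# Toy 4-hour window: consumption and renewable generation in MWh.
consumption = [100, 100, 100, 100]
wind_solar  = [160, 180, 20, 40]   # the wind doesn't always blow, the sun doesn't always shine

# Aggregate-style matching: total renewable purchases vs. total consumption.
aggregate_match = sum(wind_solar) / sum(consumption)

# 24/7-style matching: only generation in the same hour as consumption counts.
hourly_match = sum(min(c, g) for c, g in zip(consumption, wind_solar)) / sum(consumption)

print(f"aggregate match: {aggregate_match:.0%}")     # 100%
print(f"hour-by-hour match: {hourly_match:.0%}")     # 65%
```

Surplus generation in windy hours can't cover the deficit hours, which is why hitting 100% on the hourly measure requires new thinking about storage, forecasting, and energy contracts rather than simply buying more PPAs.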
But on the logical side, I think you're probably familiar with our Titan chipset that we developed. Those Titan chips are put in all of our servers, and from the time they're built to the time they're in the facility, those chipsets are securing the servers. On the logical side, though, my colleagues on our information security team are truly the experts who could address that. And that's where the software shines. That's right. Again, it's not just one thing, it's not a silo. You've got the physical build, but it's a bigger, holistic, integrated model. It is, and from the data center industry perspective, for as long as there's been IT, there's always been the debate between facilities and IT, right? When I got to Google, I was so relieved to see that it was all one technical infrastructure organization. The IT systems and the software that run in those data centers are all under the same technical infrastructure group. And so, you know, the buck stops with us. Well, for years, there was a discussion in general IT about those groups coming together. And I think the way they come together is the cloud, frankly. Because historically, you haven't seen a lot of IT and facilities organizations really working together. That's right, yeah. Well, Joe, thanks for coming on theCUBE. Thanks for sharing your insight. Final word: any thoughts for folks watching out there who are trying to understand how to bring IT technology into facilities in general? I mean, a lot of people still have data centers, they still have on-premise activity, from light bulbs to whatever. Any learnings or parting wisdom for folks watching, in the facilities and/or physical building space, on how to build out these, whether it's smart cities, whether it's in construction, any experiences you could share with folks out there looking to build a holistic long-term plan? Yeah, there are a few things. 
First of all, we've published all of our energy efficiency best practices, and I encourage everyone to take a look at those, because the best energy savings is the energy not consumed in the first place. So do all the right things to reduce overall energy consumption in the first place. Two, we want to help further the transition to renewable energy, and so we've published a lot about our power purchase agreements, and a lot of the policy work that enables us to do those is also set in place for other large energy consumers that want to do the same thing. So our policy work can help allow others to do the same thing. The third part of our sustainability aspect is really a circular economy. We want to have zero waste to landfill. We've currently achieved 91% diversion across all of our data center operations, so 91% is diverted from landfill, but we have an objective of 100%, no waste to landfill. And that means you have to do smart things like better reuse, better recycling, and better reselling of products that are still good but may be out of date for your use. And then, just to round it off, we've really invested in machine learning and AI. On the data center operations side, we now have ML running some of our cooling systems in fully autonomous mode, doing a much better job of matching the cooling to the workloads at the time. And we took that same learning, partnered with our DeepMind group, and applied it to a wind farm as well, so that they can better predict what the output of the wind farm is going to be 36 hours in advance. That allows the operators of the grid to plan better, bring on more energy, and get higher value out of that wind energy. Great engineering story, at scale, congratulations. I love the societal impact, tech for good, congratulations. Love to have you back to talk about the impact of IoT. Joe, thanks for coming on. It's convergence at the edge, yeah. 
It's all coming together, from wind farms to data centers. The data center's not going away; obviously the cloud needs to run on servers, and it has to be done in an engineered fashion. Google's leading the charge there. This is theCUBE, live coverage, day one of three days of coverage. We'll be right back after this short break.