Live from the Mission Bay Conference Center in San Francisco, California, it's theCUBE at Google Cloud Platform Live. Here are your hosts, John Furrier and Jeff Frick. Okay, welcome back everyone. You're watching SiliconANGLE's theCUBE, live in San Francisco at Google Cloud Platform Live. This is their developer show, their premier show, breaking out from I/O because so much is going on with cloud. A lot of action, and we were reporting all day on all the big news around containers, App Engine, Compute Engine, Container Engine, all kinds of engines on the developer side. Amazing show. I'm John Furrier with SiliconANGLE, my co-host Jeff Frick. Our next guest is going to talk about the secret weapon of the Google Cloud: Morgan Dollar, Product Manager of Cloud Networking. Welcome to theCUBE. Thank you. Okay, so it's pretty obvious what's going on with cloud in the market, everyone loves it. Economics are off the charts, SaaS is now the business standard, still developing, and people are now figuring it out at large scale. There are two parts of the show here at Google: one is the app side, with a lot of goodness there, integrated stacks and all that coolness, great for front-end scale, and then the secret weapon is the backend. Interconnect. You guys announced Google Interconnect, a lot of networking features, and that really is the DevOps way to abstract away the complexities of configuring, managing, getting down and dirty in networking. It's not what app developers want to do, but they want it to work. They want it to be programmable, they want virtualization, they want all that scale. So, tell us in your view, what are the big announcements here around the networking piece of Google Cloud, and what are the key things that you're talking about? Sure. So we announced today the launch of our Google Cloud Interconnect product, and we're really excited about that.
It brings a lot of choice to our customers in terms of how they choose to connect to Google's infrastructure for their cloud usage, whether it's the use of VPN to get encrypted communication to Google, whether it's peering directly with us if they're a company that's used to operating their own network, or working with some of our phenomenal partners to connect and get the level of service that an enterprise might actually require. Earlier this year, we also announced our HTTP load balancing, and I think you can see that the direction we're taking with this really is bringing traditional networking features into the software-defined network stack. So, obviously the shot heard round the world, which we covered live at VMworld: Nicira was bought for $1.26 billion. It's a huge deal, and all of a sudden people were like, oh my god, software-defined networking became the software-defined data center. What's your take on all that, and how do you make that easy for the cloud? What are you guys doing that's innovative within cloud networking, with SDN in particular? With SDN in particular, that's a great question. So, at some of the SDN conferences, we've announced that we were the first web-scale company and the first network to actually run SDN in production for Google services, and that's where we got started. We decided that by taking the functionality that exists in the network today, bringing it back up into software, and having the ability to orchestrate it at a global level across a number of different applications, we actually got a lot more flexibility for Google. You're seeing that now translate into the kind of functionality we're bringing to cloud customers. In the cloud environment, multi-tenancy is key, and SDN actually allows you to partition out networks and provide the isolation and the reliability that people expect.
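The partition-and-isolate idea described here can be sketched in a few lines. This is a hypothetical toy, not Google's Andromeda stack: a controller that tracks which endpoints belong to which tenant's virtual network and only admits flows whose endpoints share a tenant, even when the tenants reuse the same private addresses.

```python
# Toy SDN-style isolation sketch (illustrative only, not a real controller):
# each tenant gets its own virtual network, and a flow is only programmed
# when both endpoints sit inside the same tenant's network.

class SdnController:
    def __init__(self):
        self.networks = {}  # tenant_id -> set of endpoint addresses

    def attach(self, tenant_id, endpoint):
        """Place an endpoint into the tenant's isolated virtual network."""
        self.networks.setdefault(tenant_id, set()).add(endpoint)

    def allow_flow(self, tenant_id, src, dst):
        """Admit a flow only if both ends belong to the same tenant."""
        net = self.networks.get(tenant_id, set())
        return src in net and dst in net

ctl = SdnController()
ctl.attach("tenant-a", "10.0.0.2")
ctl.attach("tenant-a", "10.0.0.3")
ctl.attach("tenant-b", "10.0.0.2")  # same address, different virtual network

print(ctl.allow_flow("tenant-a", "10.0.0.2", "10.0.0.3"))  # True
print(ctl.allow_flow("tenant-b", "10.0.0.2", "10.0.0.3"))  # False: isolated
```

Note that overlapping RFC 1918 addresses across tenants are fine here, because isolation is keyed on the tenant, not the address.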
By developing our own SDN stack, and you saw some of the announcements we made earlier today with the 1.5 release of our Andromeda stack, we're able to continue to radically improve performance, and a 5x increase in performance since the beginning of the year is a pretty phenomenal trend that we expect to continue going forward. So, talk about the relationship with Docker. Obviously containers give developers some freedom, some application performance, some comfort around the dependencies of what's happening at the virtualization layer and then in the infrastructure. That being said, what do you guys do to make that easier with containers in particular, because you guys have been using containers for a while? And what does that mean to the IT guys in the enterprise? Because the enterprise is heavily investing in cloud. They really want cloud, they want it their way, they want their use case, they're going to have on-premise, but they want some cloud. So how does that translate, the work you've done on the networking side and containers? Talk about that, and then how that translates to the guys in the trenches. I think those are two different ways of approaching the same thing. You're getting SDN, you're getting containerization, and what you're really getting is the abstraction away from the hardware-based models that used to exist, whether it was that you had a server and you got a different server for the next task, to run a different application. Same thing with networks. You used to partition them out, you put in switches, you put in different ports, and you isolated networks that way. SDN actually abstracts all of that out: the physical aspect of the network, and for containers, the virtual machines themselves. You start moving a layer higher above it. And so with the announcements for GKE earlier today, you can actually see that translate.
People are going to start doing radically different things when you remove the boundaries that they're used to thinking about. The way we used to do IT 10, 15 years ago, you'd go and build specific aspects of the whole thing, of the whole stack. We're moving up from that, and so at that point it becomes more about orchestration and isolation. Where do you take your applications going forward, and with those boundaries removed, what else can you do that's new? That's what we're really excited about. So, I'm just writing a tweet here: you're part of the secret weapon group at Google. So let's take it down to the next level. Assuming that people are adopting at scale, this peering thing is interesting. You get this interconnect model. We're at the peering 2.0 conference, in a sense. Peering exchanges have been around. YouTube certainly participates at massive scale, and you guys do as well. Netflix is working with all these peering arrangements, and they're kind of taking it on the chin because they're at the beck and call of the people controlling the packets, whether it's throttling on the bandwidth, the last mile, deep packet inspection, the whole net neutrality thing. Now for an enterprise customer, it's about SLAs and workloads. So you guys are offering this ability to connect into the Google backbone. Talk about that product, and how does someone get that done? So I think one of the important things to understand is that net neutrality is typically focused on the consumer aspect of internet access. There are provisions in the net neutrality rules that actually recognize that enterprise and business-grade networking already exists. If you go back to the days of Frame Relay and X.25, people have been doing private networking, on private networks, on enterprise networks, for a very long time, whether it was X.25, Frame Relay, MPLS, or private IP. Yeah, really, man, that's pretty good. That's dusting off the old protocols. But we've been there, right?
That's what people did. We set up different PVCs. We had B2B relationships over them. You had this model where there was a committed rate, and you were allowed to burst above it. So you had some element that was guaranteed and some element that wasn't, that was opportunistic. And that model has just evolved and changed. In a world where a lot more businesses are using the internet to run their businesses, whether it's by doing site-to-site VPNs, IPsec over the internet, or by connecting to cloud providers, there exists a need, and that's fulfilled by the traditional enterprise network service providers, to provide businesses the kind of connectivity they need to predictably run their business. Imagine if a company like NASDAQ or the stock market actually said, well, we're sorry, we can't trade today because there's too much congestion on our network. That probably would not be an acceptable answer. And the same is true for companies, whether they're small or large, that are running on cloud. They have this need for predictable performance. Well, you bring back something we were talking about earlier on theCUBE with the senior management of the company, which is that this looks like and walks like the old enterprise of the '80s. All those Frame Relay and X.25 networks were proprietary network stacks. You had SNA, you had DECnet, and X.400 was the envelope; all that good stuff was happening. But again, it served a purpose. It was purpose-built. It was. More reliability. Now comes open source. Now comes TCP/IP, and it's like the nth generation. And that's now the openness. So we've got an open-model enterprise with all the hardened architectural elements, just completely decentralized. You're at a point that's in between technologies. It's an adoption curve that's ongoing. The traditional model, we've all accepted that: it was X.25, you migrated to Frame Relay, you went to MPLS; that never really shifted.
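The committed-rate-plus-burst model described here (a committed information rate, in Frame Relay terms) is classically implemented with a token bucket: the bucket refills at the guaranteed rate, and its depth is the opportunistic burst allowance. A minimal sketch with made-up numbers:

```python
# Token-bucket sketch of committed rate plus burst (illustrative numbers):
# tokens refill at the committed rate; the bucket depth lets short bursts
# exceed that rate until the banked tokens run out.

class TokenBucket:
    def __init__(self, committed_rate, burst_size):
        self.rate = committed_rate  # tokens added per second (guaranteed)
        self.capacity = burst_size  # max tokens banked for bursting
        self.tokens = burst_size    # start with a full burst allowance

    def tick(self, seconds=1.0):
        """Refill at the committed rate, capped at the burst allowance."""
        self.tokens = min(self.capacity, self.tokens + self.rate * seconds)

    def send(self, size):
        """Admit traffic if tokens are available, else drop or mark it."""
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False

bucket = TokenBucket(committed_rate=100, burst_size=300)
print(bucket.send(300))  # True: a one-time burst above the committed rate
print(bucket.send(50))   # False: the bank is empty until tokens refill
bucket.tick(1.0)         # one second later, the committed rate pays back in
print(bucket.send(50))   # True
```

The same shape shows up today in traffic policers and rate limiters; the guaranteed/opportunistic split the speaker describes is exactly the rate/depth split above.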
The encapsulation aspect, of course: you went from hub-and-spoke topologies to a flatter topology with MPLS VPNs, where it's any-to-any communication. You still had your data centers and your offices inside. That model is now shifting. You've got north-south and east-west traffic, you've got virtualization, and you've got cloud. And now you're starting to bridge that model by going to third-party providers, like Google, hosting your workloads, your applications, in the cloud, and you're seeing that pivot. And so for a period of time, who knows how long it's going to be, whether it's five years or 10 years, that shift is going to continue to happen. And then at some point you're going to reach a level of maturity where a lot of companies will have grown up in this phase, like all the startups today: they're not building infrastructure. They're not buying servers, they're not buying routers. What they're doing is they have their local offices and their connectivity, and they're putting all their production applications in the cloud. And at that point, does the need for connectivity that exists for the people migrating today still exist going forward? So, I've got to ask you, because this is really kind of an awesome conversation, because you're bringing up some really interesting points. We're going through an inflection point and a shift at the same time. And Jeff, in the past year we've been doing theCUBE, one of the benefits of doing theCUBE is we get the high ground and see all the movement in the marketplace. And just in the past 12 months, there's been a massive shift in cloud. If you look at Docker and Kubernetes, in particular Docker first, containers, and now Kubernetes, the open source communities are rallying behind these two elements, mainly because the timing's perfect, right? The timing is interesting, right?
So, I want you to weigh in on that and how it relates to what's going on with the network fabric, because the virtualization foundation's been set. That was a shot of steroids for rebooting IT. And now you've got open source on top, this mobile innovation, but now you've got cloud and you've got Docker containerization, which is not a new concept, by the way; it goes way back in computer science. Well, I'll tell you, it's not new, but the timing of Docker with open source, now Kubernetes: why is that, in your opinion, a really big deal, and how does it affect the network piece? I think it's a fundamental shift that you're seeing. And you're right, you brought up virtualization as the first shift that happened, right? When you went from a dedicated server running one OS to this multi-tenancy within one box. Now you're taking that up an extra layer of abstraction, to a shift that's heading in the direction where an application, whether it's a microservice or a macroservice, a production workload of any kind, is just going to exist as a small package, what we used to refer to as a binary, right? It's just a small package that can rapidly be pushed to the scale that it needs to scale to. And I think that's an equally great challenge on the networking side, because at that point you don't have the predictable provisioning of the legacy model. That's where SDN is going, because as you're orchestrating the deployment of those packages, you're simultaneously orchestrating the network that goes with it, the packet flows and the provisioning that you might need as well. So what's the impact to you guys? Because obviously orchestration, automation; Docker gives that greatness for compatibility. I'm a developer, I Dockerize it, now I don't have to worry about the nuances between different clouds and all that stuff. Magic happens, right? Magic juju of implementation. What's the impact to SDN? Because now, is SDN policy-based at this point?
Is there automation in that orchestration so the developer has that compatibility? What is the impact for the network layer? People really want to know that impact. I think it really becomes a joint orchestration challenge: as you're pushing out containers and binaries, you're orchestrating the network at the same time, as the two bridge. You get into the area of NFV, network function virtualization, which is how you move things like load balancers away from either an appliance, or a binary running on one virtual machine that runs out of capacity so you just spin up a new one, into these ubiquitous sorts of stacks that are capable of fronting traffic. And then, knowing where your binaries are going to be positioned, whether it's on one virtual machine or another, across regions, across data centers, you just have programmatic ways of programming paths and delivering the packets to where they need to be. So I've got to ask you a question. This is kind of a loaded question, so, you know, get ready. I might not answer. So, let me frame it, because there's no wrong answer, because we had two guests on theCUBE. I won't say their names; watch the videos if you want, go to the YouTube channel. One person said, ah, no one really gives a crap about multi-cloud right now. It's just a non-starter. It's just all, you know, headroom. And then we had another guest saying, oh, there's not a customer they talk to that isn't going multi-cloud. So, what is it? Is it a multi-cloud future or a non-multi-cloud future for the IT enterprise that has on-premise? You know, that's a hard question to answer, because I think it's going to vary by customer, right? It's really hard to predict what each individual customer might do. If you're a startup and you're building your stack from scratch, what you're probably going to do is pick the company that has the fastest innovation curve over the next few years.
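The appliance-to-software move described above, where a load balancer that runs out of capacity is handled by spinning up another instance instead of buying a second box, can be sketched as a toy autoscaler. All names and capacity numbers here are hypothetical, not a real cloud API:

```python
# Toy NFV-style scale-out sketch (hypothetical, not a real cloud API):
# when every virtual load-balancer instance is at capacity, provision
# another one in software, the NFV answer to buying a second appliance.

MAX_CONN_PER_INSTANCE = 1000  # made-up capacity of one virtual LB

class VirtualLb:
    def __init__(self):
        self.connections = 0

class LbPool:
    def __init__(self):
        self.instances = [VirtualLb()]  # start with a single instance

    def route(self):
        """Route a new connection, scaling out instead of refusing it."""
        lb = min(self.instances, key=lambda i: i.connections)
        if lb.connections >= MAX_CONN_PER_INSTANCE:
            lb = VirtualLb()            # "just spin up a new one"
            self.instances.append(lb)
        lb.connections += 1
        return lb

pool = LbPool()
for _ in range(2500):                   # 2.5x one instance's capacity
    pool.route()
print(len(pool.instances))              # 3: the pool grew on demand
```

The point of the sketch is the control plane, not the data plane: capacity becomes a provisioning decision made programmatically as traffic arrives, rather than a purchasing decision made up front.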
It's going to enable you to grow your business the way you want to. If you're a highly risk-sensitive enterprise that operates in a space where you want to be more cautious and diligent about your risk, you might choose to have a multi-vendor strategy, right? That's not unusual for people to do. I think at some point, you know, not all cloud providers are going to be providing you with the same set of resources, the same capabilities, the same benefits, or the same economics. And at some point, you make a decision about what's right for your business, whether you're a small startup or a large company or somewhere in the middle, whether you're a SaaS vendor or otherwise: the overall platform has to make sense, from the developer all the way through to production, the economics, and what you can do with it. Let's talk about perimeter-less security. All the networking guys know what perimeter-less security is. The old way was, you know, build the walls up, make sure no one comes in, but if the bad guy gets in, he gets free rein of the castle. How'd that work out? It's not working out. It just doesn't work well. I mean, now's a good time to ask; it's not like we've read anything about breaches lately. Incidents are up, breaches are happening, so I'll say it's not working. APIs are like Swiss cheese. This whole SOA thing is all over the place. So we're in a notification economy, we've got messaging apps, we've got APIs everywhere; the perimeter's dead. It's been breached. What does perimeter-less mean? Is it a stateless benefit, stateful applications? How does someone implement security in the cloud? So, I think the concept of perimeter-less security really is tied to legacy networks. That's the way they were built. You closed your front door and you assumed that just because you locked it, it was safe.
And far be it from anybody to go pick your lock and actually come inside your house and steal your stuff. That's been fundamentally sort of... Proven to be secure? Proven to be a poor security approach. You can say it sucked. Well, I can say it sucked, but really it's just not a sane approach. Ultimately, if you believe that something's unbreachable, then at some point somebody will prove you wrong. So how do you handle that in a cloud environment? I think it's different, right? Within the cloud environment, there are a number of different things that are done to make the perimeter not be a thing, because you're operating in this multi-tenancy. There's no perimeter. There is no perimeter. Inside one data center, we're going to have Google services with our information, and right next to it, sometimes side by side or on the same machine, you're going to have cloud customers. And so you have to manage the security very carefully, and that's, you know, what you do. And access too, right? I mean, that's inside the data center, but we're all accessing it via our mobile devices. You are accessing it via your mobile. Ultimately, the notion that any device is secure, you know, fundamentally leads to bad development. You make mistakes; it happens. Everybody makes mistakes: you ship code, you have a breach that's exposed, you didn't think about it, it wasn't debugged properly, it happens. Firewall rules, right, are another thing where it's very easy to fat-finger. You just change a firewall rule and all of a sudden, whoops, your entire set of rules is useless, because you have one rule that supersedes all of them that says, hey, yes, sure, I'm happy to take all of this traffic.
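That fat-finger hazard is easy to demonstrate with a toy first-match rule evaluator (real cloud firewalls use explicit priorities, but the failure mode is the same): one overly broad allow rule ahead of the others silently turns every careful rule below it into dead code.

```python
# Toy first-match firewall sketch (illustrative only): rules are checked
# in order, so a single broad rule placed first supersedes all the rest.

def evaluate(rules, packet):
    """Return the action of the first rule matching the packet's port."""
    for ports, action in rules:
        if packet["port"] in ports:
            return action
    return "deny"  # default-deny when nothing matches

careful_rules = [
    ({22}, "allow"),            # ssh for admins
    ({443}, "allow"),           # https for everyone
    (range(0, 65536), "deny"),  # everything else stays closed
]
print(evaluate(careful_rules, {"port": 3306}))  # deny: the database is closed

# The fat-finger: an allow-anything rule lands at the top of the list.
fat_fingered = [(range(0, 65536), "allow")] + careful_rules
print(evaluate(fat_fingered, {"port": 3306}))   # allow: every rule below is moot
```

Nothing below the broad rule ever executes, which is why audits of rule sets look for shadowed rules, not just for the presence of a deny.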
And so people are changing, and I think this goes back to how you interconnect. When you're seeing companies interconnect their networks with third-party cloud providers, that has to come with the sense that regardless of how secure you think either side might be, the idea that the perimeter is a trustable thing doesn't really exist. Well, the network has to be self-aware of what's going on, right? So is that done at the network layer, or is that pushed out to the workload, to the app? It depends which side you're talking about. If you look at any company, small to medium or large, I think the recognition that even what they consider their internal perimeter isn't a trust zone, that's fundamental, right? Because at that point, you start making different decisions on how you secure your machines, your hosts, your servers, your backups, your mailboxes. That's one of the things people might overlook. You back that up, and, I mean, somebody just took all of those mails, and you end up with PII breaches. So you need to secure the entire chain, and every touchpoint the data has along the way. Very interesting area. It certainly opens up a Pandora's box, but virtualization could be interesting there; that's where the app itself comes in. So there are a lot of computer science guys working on this. It's a big problem. And I think that's one of the areas where containers can actually help going forward, because you take the host OS out of it, and you don't need to think about OS protection because that's taken care of for you. My vision, and I'm not saying this is going to be accurate, but Jeff and I talk about this all the time and we're riffing on it, is that each app and piece of data will have its own compiled and linked operating environment around it, just watching everything that it does. So you put the security around it, like a container that's locked. It's locked, and you need a key to unlock it.
I mean, that seems to be the only way it'll work. Things are going to float around in the ether; might as well let them go, versus locking them in. So, all right, net-net, what's the bottom line from your group? You're the product guy. What's going on with the product? Give us the update. What are you excited about? I mean, you're really not going to be unexcited about your own product, but what are you working on that we need to hear more about? You're pushing the pedal to the metal. What are you putting up on the board that you're proud of, and what's still a work in progress? I think what you've seen from us this year really is just a start, right? From a networking standpoint, we really think this is a space that's ripe for innovation. You've seen some of it happen over the past year or two, and we think that trend is going to continue accelerating. What we're doing with Andromeda really gives us a huge runway in terms of the types of functionality and performance that we can keep bringing to customers in a virtualized networking environment. I think that's really exciting for us going forward. NFV, people talk about it as a buzzword, but really it's this notion of taking functionality that's been implemented in an appliance, where you buy one, you run out of scale, you buy a second one, and you keep on building these blocks, and moving that into software, into ubiquitously distributed, highly scalable stacks. So that, like with our load balancing offering, you can go from one query per second on your application, on day one because you just started it, you publish it out, to a million queries per second, and it just scales as it happens. And that's a challenge for telcos big time today. Also, they want NFV because of the performance; just managing the log data, for instance, you just think about the ingest capabilities. You guys are like one big telco now, Google.
So that's the question on everyone's mind. You have this large scale, you have things like NFV. What's on the roadmap for you guys? Share what you can publicly, what's not confidential. I think in general we don't comment on future products. But the announcement we're making today, on VPN going alpha and then GA early next year, I think is a good indication. You tie that in, look at load balancing, and look at the set of things we're doing across connectivity, across performance, identity-based functionality. And you expand that out, right? You try to take a wild guess as to where we might be in a year, two years, three years, and see what that innovation curve looks like. And we're really excited about that. And I think the identity-based authentication is interesting, right? You guys do that internally. Two-factor authentication. Yes, though there's some of that I'm not the best person to speak to. We're picking your brain, trying to get the roadmap out of your head. Morgan, thanks for coming on. I'll give you the last word for the folks out there. Give them the bottom line. What's the vibe of your section of the show? What's the net-net of what you guys announced? What's the most exciting thing you guys are talking about today? More choice, more performance, and much more to come. All right, Google's really building out solid scale on the back end of their networking, making it easy for developers. And again, the secret weapon in my opinion is what they're doing on the interconnect side. Peering and interconnect are really critical infrastructure pieces of the puzzle here, and they might not get the fanfare. So we really appreciate you taking the time. We're live here inside theCUBE in San Francisco for Google's developer conference, Google Cloud Platform Live. I'm John Furrier with Jeff Frick. We'll be right back after this short break.