From Copenhagen, Denmark, it's theCUBE, covering KubeCon and CloudNativeCon Europe 2018. Brought to you by the Cloud Native Computing Foundation and its ecosystem partners. Hello everyone, welcome back to the live CUBE coverage here in Copenhagen, Denmark for KubeCon 2018, the Kubernetes European conference. This is theCUBE. I'm John Furrier, with my co-host Lauren Cooney. We're here with Adrian Cockcroft, who's the Vice President of Cloud Architecture and Strategy for Amazon Web Services, AWS. CUBE alumni, great to see you. Legend in the industry, great to have you on board today. Thanks very much. Quick update: Amazon, we were at AWS Summit recently. Obviously, re:Invent last year. Again, it gets bigger and bigger. Just continued growth. Congratulations on the great earnings you guys posted last week. Just continuing to show the scale and leverage that the cloud has. So again, nothing really new here. Cloud is winning as the model of choice. So you guys are doing a great job, so congratulations. Open source, you're handling a lot of that now. This community here is all about driving cloud standards. Your position on that is, standards are great; you do what customers want, as Andy Jassy always says. What's the update? I mean, what's new since Austin last year? Yeah, well, it's great to be back on. Had a great video of us talking at Austin. It's been very helpful to get the message out of what we're doing in containers and what the open source team that I lead has been up to. It's been very nice. Since then, we've done quite a lot. We were talking about doing things then which we've now actually done and delivered on. We're getting closer to getting our Kubernetes service out, EKS. We hired Bob Wise; he started with us in January. He's the general manager of EKS. Some of you may know Bob, he's been working with Kubernetes since the early days. He was on the CNCF board before he joined us. So he's working very hard.
They have a team cranking away on all the things we need to do to get the EKS service out. So that's been the major focus: just get it out. We have a lot of people signed up for the preview, huge interest, and we're onboarding a lot of people every week. And yeah, we're getting good feedback from people. And we have demos of it in the booth here this week. So you guys are very customer-centric, as you know we follow you guys closely. What's the feedback that you're hearing? And what are you guys ingesting from an intelligence standpoint from the field? Obviously a major constituent, not a new one, is the open source community, as well as paying enterprise customers. What's the feedback? What are you hearing? Obviously beyond tire kicking, there's a general interest in what Kubernetes has enabled. What's Amazon's view of that? Yeah, well open source in general is always getting a larger slice of what people want to do. Generally people are trying to get off of their enterprise solutions and evolving into an open source space. And then you kind of evolve from that into buying it as a service. So that's the evolution from on-prem custom or enterprise software, to open source, to as a service. And we're standing up all of these tools as a service to make them easy to consume, and everybody's happy to do that. What I'm hearing from customers is that that's what they're looking for. They want it to be easy to use. They want it to scale. They want it to be reliable and work. And that's what we're good at doing. And then they want to track the latest moves in the industry and run with the latest technologies. And that's what Kubernetes and the CNCF are doing: gathering together a lot of technologies, building a big community around it, able to move faster than we'd move on our own. And we're leveraging all of those things into what we're doing. And the status of EKS right now is in preview? Yeah. And the estimated timetable for GA?
In the next few months. Next few months, okay. We'll get it out. Right now it's running in Oregon, in our Oregon data center, so the previews are all happening there. That gets us our initial thing, and then everyone goes, okay, we'll want it in our other regions. So we have to do that. Another service we have is Fargate, which basically lets you say, here's a container, I want to run it. You don't have to declare a node or an instance to run it first. We launched that at re:Invent. That's already in production, obviously. We just rolled that out to four regions, so that's in Virginia, Oregon, Dublin, and Ohio right now. And there's huge interest in Fargate. It lets you simplify your deployments a little bit. And we just posted a new blog post. We have an open source blog you can find if you want to keep up with what's going on with the open source team at AWS. So we just did a post this morning, and it's a first pass at getting Fargate to work with Kubernetes using a virtual kubelet, which is an experimental project. It's not part of the core Kubernetes system, but it runs on the side. It's something that Microsoft came up with a little while ago. So we're now working with them. We did a pull request, they accepted it. So that team, and AWS, and a few other customers and people in the community are working together to provide a way to use Fargate as the underlying layer for provisioning containers, with Kubernetes as the API for doing the management of that. So who do you work with mostly when you're working in open source? Who do you partner with? What communities are you engaging with in particular? It's all over. Wherever the communities are, we're engaging with them. Okay, any particular ones that stand out?
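The virtual kubelet pattern described above registers a phantom node with the Kubernetes API, and pods the scheduler places on that node get handed off to a provider such as Fargate. A minimal sketch in Python of a pod manifest pinned to such a node; the node-selector label and toleration key below are illustrative assumptions, not necessarily the values the AWS Fargate provider actually uses:

```python
# Sketch: a pod manifest the Kubernetes scheduler would place on a
# virtual-kubelet "phantom" node, which then launches it on a provider
# like Fargate. The nodeSelector label and toleration key are
# illustrative assumptions, not the real AWS provider values.

def fargate_pod_manifest(name: str, image: str) -> dict:
    """Build a pod spec targeted at a hypothetical virtual-kubelet node."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{"name": name, "image": image}],
            # Steer the pod onto the virtual node...
            "nodeSelector": {"type": "virtual-kubelet"},
            # ...and tolerate the taint that keeps ordinary pods off it.
            "tolerations": [{
                "key": "virtual-kubelet.io/provider",
                "operator": "Exists",
            }],
        },
    }

manifest = fargate_pod_manifest("demo", "nginx:alpine")
print(manifest["spec"]["nodeSelector"])
```

The point of the pattern is that nothing else in the manifest changes: the same pod spec that runs on a real node runs on the virtual one, with the provider doing the actual provisioning.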
Well, other than CNCF, we have a lot of engagement with the Apache Hadoop ecosystem, and a lot of work in data science. There are many, many projects in that space, in AI and machine learning. We've spent a lot of time working with Apache MXNet, but we're also working with TensorFlow and PyTorch and Caffe, and those are all open source frameworks. So there are lots of contributions there. In the serverless arena, we have our own SAM, the Serverless Application Model. We've been open sourcing more of that recently ourselves, and we're working with various other people. So across these different groups, there are different conferences you go to, different things we do. My team sponsors and manages most of the open source conference events we go to now. So we just did RailsConf. We're doing a Rust conference soon, I think, and Python conferences; I forget when all the others are. There's a massive calendar of conferences that we're supporting. Well, make sure you email us that list; we're interested in looking at where the news and action is. So the language ones. OSCON is a flagship one; we'll be a top-level sponsor there. And when we get to the US, KubeCon in Seattle is right there. It's two weeks after re:Invent, so it's going to be much easier to manage. One week after re:Invent was like, everyone just wants to take that week off, right? We've got a week for everyone to recover, and then it's in the hometown. You still had that look in your eyes when we interviewed you in Austin; you came down, and we both were pretty exhausted after re:Invent. Yeah, we announced a bunch of things on the Wednesday and Thursday, and I had to turn it into a keynote by Tuesday and get everyone to agree that's what was going on. That was very compressed.
So we have more time, and all of the engineering teams that really want to be at an event like this, we're right in the hometown for a lot of them. What's it like inside Amazon? I've got to ask you, since you brought it up. You guys run hard at Amazon; you're releasing stuff at a pace that's unbelievable. I get blown away every year. It almost seems inhuman that you guys can run at that pace, and the earnings, the business results, obviously speak for themselves. What's it like there? If you put your running shoes on, you're running a marathon every day, what's it like? It's lots of small teams working relatively independently, and that scales. That's something other engineering organizations have trouble with: they build hierarchies that slow them down. We have a really good engineering culture where every time you start a new team, it runs at its own speed, and we've shown that as we add more and more resources, more teams, they are just executing. In fact, they're accelerating; they're building on top of other things. We get to build higher and higher level abstractions to layer into, so it's just getting easier and easier to build things. So we're accelerating our pace of innovation. There's no slowing down. I was telling Andy Jassy they're going to write a Harvard Business School case study on a lot of the management practices, but certainly the impact on the business side with the modeling that you guys do. I've got to ask you on the momentum side: super impressed with SageMaker. I predicted on theCUBE at AWS Summit that that will be the fastest growing service, that it'll overtake Aurora, which I think is currently presented on stage as the fastest growing service. SageMaker is really popular. Updates there, its role in the community? Obviously, Kubernetes is a good fit for orchestrating things. We heard about Kubeflow, an interesting model. What's going on with SageMaker? How is it interplaying with Kubernetes?
If you're running an on-premise cluster of GPU-enabled machines, then Kubeflow is a great way of doing that: you run Kubernetes to manage your cluster, and you run Kubeflow on top. SageMaker is running at very large scale, and like a lot of the things we do at AWS, what you need to run an individual cluster for one customer is different from running a multi-tenant service. So SageMaker sits on top of ECS, and it's now one of the largest generators of traffic to ECS, which is Amazon's horizontally scaled, multi-tenant cluster management system, which is now doing hundreds of millions of container launches a week. So that is continuing to grow. We see Kubernetes as a more portable abstraction. It has different layers of APIs and a big community around it. But for the heavy lifting of running tens of thousands of containers for a single application, we're still at the level where ECS does that every day, and for Kubernetes that's kind of the extreme case where a few people are pushing it. So it'll gradually grow in scale; there's an evolution path for it. So there's an evolution here. Yeah, and the interesting thing is we're starting to get some convergence on some of the interfaces, like the interface of CNI. CNI is the way you do networking on containers, and there is one way of doing that that is shared by everybody through CNI. EKS uses it, ECS uses it, and Kubernetes uses it. And the impact for the customer is what? What's the impact? It means the networking structures you want to set up will be the same, and the capabilities and the interfaces. But what happens on AWS is, because it has a direct plug-in, you can hook it up to our accelerated networking infrastructure. On AWS's instances right now, we've offloaded most of the network traffic processing. You're running 25 gigabits of traffic.
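To make the CNI point above concrete: CNI standardizes container networking around small JSON configuration files that any runtime can hand to any plugin binary. A minimal sketch in Python of building and round-tripping such a config; the plugin `type` name here is an illustrative assumption, not necessarily what the AWS VPC plugin registers itself as:

```python
import json

# Sketch of a CNI network configuration: the shared JSON format that
# lets ECS, EKS, and upstream Kubernetes drive the same network plugins.
# The plugin "type" is an assumed, illustrative name.
cni_conf = {
    "cniVersion": "0.3.1",   # version of the CNI spec the plugin speaks
    "name": "pod-network",   # network name, chosen by the operator
    "type": "aws-vpc-cni",   # plugin binary the runtime will invoke (assumed)
}

# A runtime would write this file out (conventionally under /etc/cni/net.d/)
# and exec the plugin against the container's network namespace; here we
# just serialize and parse it to show the shared format.
serialized = json.dumps(cni_conf)
parsed = json.loads(serialized)
print(parsed["cniVersion"])
```

Because every runtime consumes the same config format, swapping the high-throughput accelerated plugin for something like Calico is a configuration change, not an application change.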
That's quite a lot of work, even for a big CPU, but it's handled by the Nitro plug-in architecture we have in our latest instance types. We talked a bit about that at re:Invent, but what you're getting is an almost complete hypervisor offload at the core machine level. So you get to use that accelerated networking, right? You're plugging into that interface. But if you want to have a huge number of containers on a machine and you're not really trying to drive very high throughput, then you can use Calico, and we support that as well. So there are multiple different ways, but all through the same plug-ins on both. So some portability there. You mentioned some stats. What are the numbers? How many containers are you launching a week? Hundreds of thousands? What's the ballpark? On ECS, our container platform that's been out for a few years, hundreds of millions a week. It's really growing very fast. Containers are taking off everywhere. Microservices growth, again, that's the architecture. Architecture is a big part of the conversation; what's your dialogue with customers? Because the modern software architecture in cloud looks a lot different than the three-tiered approach that used to be the web stack. Yeah, and to add to that, we were just talking to folks about how in large enterprise organizations you're still finding groups that do waterfall development. How are you working to bring these customers and these developers into the future, per se? Yeah. Actually, I spend about half my time managing the open source team and recruiting. The other half is talking to customers about this topic. I spend my time traveling around the world, talking at summits and events like this and meeting with customers. And there are lots of different problems slowing people down. I think you see three phases of adoption of cloud in general. One is just speed: I want to get something done quickly.
I have a business need, I want to do it. I want machines in minutes instead of months, right? And that speeds everything up, so you get something done quickly. The second phase is where you're starting to do stuff at scale, and that's where you need cloud native. You really need elastic services that scale down as well as up; otherwise you just end up with a lot of idle machines that cost you too much, and it's not giving you the flexibility. And the third phase we're getting into is complete data center shutdown. If you look at investing in a new data center, or a data center refresh, versus just opening an AWS account, it really doesn't make sense nowadays. And we're seeing lots of large enterprises either considering it or well into it. Some are a long way into this. When you shut down the data center, all of the backend core infrastructure starts coming out. So we're starting to see mainframe replacement and the really critical business systems being replaced. Those are the interesting conversations; that's one of the areas I'm particularly interested in right now. And it's leading into this other buzzword, if you like, called chaos engineering, which is, think of it as the availability model for cloud native and microservices. And we're just starting a working group at CNCF around chaos engineering; it's being started this week. So you can get a bit involved in how we can build some standard patterns for doing that. Where is that going to be? It's here; I mean, it's a CNCF working group, and they are wherever the people are, right? So what is that conversation? When you talk about that mainframe kind of conversation, or shutting down data centers to move to the cloud, what is the key thing that you promote upfront that needs to get done by the customer? I mean, you have the key pillars, but you think about microservices, it's a global platform; it's not a lift and shift situation.
It kind of is a shutdown, but not at that scale. But security, identity, authentication, there's no perimeter. So microservices potentially at scale. What are the things that you promote upfront that they have to do? What are the upfront, table-stakes decisions? At the management level, the real problem is people problems. The technology problems are somewhere down in the weeds, really. If you don't get the people structures right, then you'll spend forever going through these migrations. So if you bite the bullet and do the reorganization that's needed first, and get the right people in the right place, then you move much faster through it. So a lot of the time I am way upstream of picking a technology. It's much more about understanding DevOps, agile, and the organizational structures for these more cellular-based organizations. AWS is a great example; Amazon's a great example of that. Netflix is another good example of that. Capital One is becoming a good example of that, too. In banking, they're going much faster because they've already gone through that. So they're taking the Amazon model: small teams. Is that your general recommendation? What's your general recommendation? Well, the whole point of microservices is that they're built by these small teams. It's called Conway's Law, which says that the code will end up looking like the org structure that built it. So if you set up lots of small teams, you will end up with microservices. That's just the way it works, right? If you try to take your existing siloed architecture with your long waterfall processes, it's very hard not to build a monolith. So getting the org structure done first is right. And then we get into kind of the landing zone thing.
You could spend years just debating what your architecture should be, and some people have; then every year they come back and it's changing faster than they can decide what to do. So that's another kind of analysis-paralysis mode you see some large enterprises in. Others are just saying, just do it. What's the standard best practice? Lay out my accounts like this, my networks like this, my structures; we call it a landing zone. We get somebody up to speed incredibly quickly, and it's the beaten path. We're starting to build automation around these onboarding things for just getting stuff going. That's great. And then, going back to the chaos engineering idea, one of the first things I think you should put into this infrastructure is the disaster recovery automation. Because if that gets there before the apps do, then the apps learn to live with the chaos monkeys and things like that. And really, one of the first apps we installed at Netflix was Chaos Monkey. It wasn't added later; it was there when you arrived. Your app had to survive the chaos that was in the system. Disaster recovery used to be incredibly expensive, hard to build, custom, and very difficult to test. People very rarely run through their disaster recovery testing; they're scared of the failover. But if you build it in on day one, you can build it automated. And I think Kubernetes is particularly interesting because the APIs to do that automation are there. So we're looking at automating injecting failure at the Kubernetes level, and also injecting it into the underlying machines that are running Kubernetes, like attacking the control plane to make sure that the control plane recovery works. So I think there's a lot we can do there to automate it and make it into a low-cost, productized, safe, reliable thing that you do a lot, rather than something that everyone's scared of doing. You know, disaster recovery.
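The Chaos Monkey idea described above, randomly terminating instances so that apps must survive failure from day one, can be sketched in a few lines of Python. This is a toy in-memory simulation of the pattern, not Netflix's actual tool:

```python
import random

# Toy sketch of the Chaos Monkey pattern: on each round, pick one
# running instance at random and "terminate" it, so any app running
# on the fleet has to tolerate instances disappearing at any time.
# This is an in-memory simulation, not Netflix's actual tool.
def chaos_round(instances: list, rng: random.Random) -> str:
    """Remove one randomly chosen instance and return its ID."""
    victim = rng.choice(instances)
    instances.remove(victim)
    return victim

rng = random.Random(42)  # seeded so the demo run is reproducible
fleet = ["i-01", "i-02", "i-03", "i-04"]
killed = chaos_round(fleet, rng)
print(f"terminated {killed}, {len(fleet)} instances remain")
```

In a real deployment the "terminate" step would call the cloud or Kubernetes API against production resources during business hours, which is exactly why building it in before the apps arrive is so much easier than bolting it on later.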
Or they bolt it on after they make decisions and retrofit pre-existing conditions into disaster recovery, which is chaotic in and of itself. So get the org chart right, and then actually get the disaster recovery patterns in. If you need something highly available, do that first, before the apps turn up. That's it. Thanks for coming on. Chaos engineering, congratulations. And again, we know you know a little bit about Netflix; you know that environment, and Netflix has been a big Amazon customer. Congratulations on your success. Looking forward to keeping in touch. Thanks for coming on and sharing the AWS perspective on theCUBE. I'm John Furrier, with Lauren Cooney, live in Denmark for KubeCon 2018, part of the CNCF, the Cloud Native Computing Foundation. We'll be back with more live coverage. Stay with us. We'll be right back.