From the CUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation. Everyone, welcome to the CUBE special coverage of the AWS Summit, San Francisco, North America, and around the world including Asia Pacific. #AWSSummit is the hashtag. This is part of the CUBE virtual program where we're going to be covering Amazon summits throughout the year. I'm John Furrier, host of the CUBE, and of course we're not at the events. We're here in the Palo Alto studios with our COVID-19 quarantine crew. And we've got a great guest here from AWS, Dave Brown, vice president of EC2, who leads the team on Elastic Compute and its business, where it's evolving and, most importantly, what it means for customers and the industry. Dave, thanks for spending the time to come on the CUBE virtual program. Hey John, it's really great to be here. Thanks for having me. So we've got the summit going down. It's a new format because of the shelter in place: they're going virtual, or digital, the virtualization of events. And I want to have a session with you on EC2 and some of the new things that are going on. I think the story is important, because certainly around the pandemic, and certainly around large-scale SaaS business models, which are turning out to have quite a positive impact with people sheltering in place, is the role of data in all this. And also there's a lot of pressure financially. We've had the payroll loan programs from the government, and companies are really looking at their bottom line. So there are two major highlights going on in the world that are directly impacted, and you have some products and news around this, and I want to do a deep dive on that.
One is AppFlow, which is a new integration service by AWS that really talks about taking the scale and value of AWS services and integrating that with SaaS applications, and the other is the Migration Acceleration Program for Windows, which has a storied history with AWS for many, many years. You guys have been powering most of the Windows workloads; ironically it's you guys, not Microsoft, who have certainly had success there. Let's start with AppFlow. This was recently announced on the 22nd of April. This is a new service. Can you take us through why this is important? What is the service? Why now? What was the main driver behind AppFlow? Yeah, absolutely. So with the launch of AppFlow, what we're really trying to do is make it easy for organizations and enterprises to really control the flow of their data between the number of different applications that they use on-premise and AWS. And so the problem we started to see was enterprises just had this data all over the place and they wanted to do something useful with it. We've seen many organizations running data lakes, large-scale analytics, big machine learning on AWS. But before you can do all of that, you have to have access to the data. And if that data is sitting in an application either on-premise or elsewhere in AWS, it's very difficult to get out of that application and into S3 or Redshift or one of those services before you can manipulate it. That was the challenge. And so the journey kind of started a few years ago. We actually launched a service on the EC2 networking side called PrivateLink. And it provided organizations with a very secure way to transfer network data, both between VPCs and also between a VPC and on-prem networks. And what this highlighted to us is organizations said, that's great, but I don't have the technical ability or the team to actually do the work that's required to transform the data, whether it's from Salesforce or SAP, and actually move it over PrivateLink to AWS.
And so we realized, well, PrivateLink was useful, but we needed another layer of service that actually provided this. And one of the key requirements was that an organization must be able to do this with no code at all. So basically no developer required: I want to be able to take data from Salesforce, my Salesforce database, put that in Redshift together with some other data, and then perform some function on that. And so that's what AppFlow is all about. And so we came up with the idea; a little bit more than a year ago was the first time I sat down and actually reviewed the content for what this was going to be. And the team's been hard at work and launched on the 22nd of April, and we actually launched with 14 partners as well that provide what we call connectors, which allow us to access these various services. And so companies like Salesforce and ServiceNow, Slack, Snowflake, to name a few. Well, certainly you guys have a great ecosystem of SaaS partners, and it's well documented in the industry that you guys are not going to be competing directly with a lot of these big SaaS players, although you do have a few services for customers who want end to end. You know, Jassy continues to pound that home on my CUBE interviews. But I think this is notable and I want to get your thoughts on this, because this seems to be the key unlocking of the value of SaaS and cloud, because there's cost involved in data traversal, data transfer. Also, moving traffic over the internet is insecure and unreliable. So a couple of questions I wanted to just ask you directly. One is, did AppFlow come out of the AWS PrivateLink piece of it? And two, is it one-directional or bi-directional? How is that working? Because I'm guessing that PrivateLink became successful because no one wants to move over the internet; they want direct connects. Was there something inadequate about that service? Was there more headroom there?
And is it bi-directional for the customer? So let me take the second one. It's absolutely bi-directional. So you can transfer that data between an on-premise application and AWS, or AWS and the on-premise application. Really, anything that has a connector can support the data flow in both directions. And with transformations. So, data in one data source may need to be transformed before it's actually useful in a second data source. And so AppFlow takes care of all that transformation as well, in both directions. And again, with no requirement for any code on behalf of the customer, which really unlocks it for a lot of the more business-focused parts of an organization who maybe don't have immediate access to developers. They can use it immediately, literally with a few transformations via the console, and it's working for you. In terms of, you mentioned the flow of data over the internet and the need for security of data: it's critically important. And if we look at ourselves as a company, we have very, very strict requirements around the flow of data, what services we can use internally, and where any of our data is going to be going. And I think it's a good example of how many enterprises are thinking about data today. They don't even want to trust HTTPS and encryption of data on the internet. They'd rather just be in a world where the data never, ever traverses the internet and they just never have to deal with that. And so, the journey all started with PrivateLink there. And PrivateLink was an interesting feature because it really was a change in the way that we asked our customers to think about networking. Nothing like PrivateLink has ever existed in the sort of standard networking that an enterprise would normally have. It's kind of only possible because of what VPC allows you to do and what the software-defined network on AWS gives you. And so, we built PrivateLink, and as I said, customers started to adopt it.
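To make the PrivateLink piece Dave describes a little more concrete, here is a rough sketch of what requesting an interface VPC endpoint looks like with boto3. Every ID, region, and service name below is a hypothetical placeholder, and the actual API call is left commented out because it needs AWS credentials and real resources:

```python
# Hedged sketch: the shape of a PrivateLink interface-endpoint request.
# All IDs and the service name are hypothetical placeholders.
endpoint_params = {
    "VpcEndpointType": "Interface",  # PrivateLink endpoints are interface type
    "VpcId": "vpc-0123456789abcdef0",
    # The endpoint service exposed by the provider (e.g. a SaaS database):
    "ServiceName": "com.amazonaws.vpce.us-east-1.vpce-svc-0example1234",
    "SubnetIds": ["subnet-0aaaa1111bbbb2222"],
    "SecurityGroupIds": ["sg-0cccc3333dddd4444"],
}

# With the AWS SDK for Python installed and credentials configured,
# the actual call would be roughly:
#   import boto3
#   ec2 = boto3.client("ec2", region_name="us-east-1")
#   ec2.create_vpc_endpoint(**endpoint_params)
```

The point of the design is visible in the parameters: traffic flows to the provider's service name through elastic network interfaces inside your own subnets, so nothing ever crosses the public internet.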
They loved the idea of being able to transfer data either between VPCs, or between their own VPC and on-premise, or with a third-party provider. Snowflake has been a very big adopter of PrivateLink, and they have many customers using it to get access to Snowflake databases in a very secure way. And so, that's where it all started. And in those discussions with customers, we started to see that they wanted us to up-level it a little bit. They said, we can use PrivateLink, it's great, but one of the problems we had was just the flow of data: how do we move data in a very secure, highly available way with no sort of bottlenecks in the system? And so, PrivateLink was a great underlying technology that empowered all of this, but we had to build the system on top of that, which is AppFlow, that says, we're going to take care of all the complexity. And then we had to go out to the ecosystem and say to all these providers, can you guys build connectors? Because everybody realizes it's super important that data can be shared so that organizations can really extract the value from that data. And so, 14 of them at launch, and we have many more coming down the road, have come to the party with connectors and full support for what AppFlow provides. Yeah, us DevOps pros are always pounding the fist on the table, now a virtual table: APIs and connectors. This is the model, this is how people are integrating. And I want to get your thoughts on this. I think you said low code or no code on the developer simplicity side. Is it no code or low code? Can you just explain quickly? It's no code for getting started. Literally, for the kind of basic to medium complexity use case, it's no code. And for a lot of customers we spoke to, that was a bottleneck, right? They needed something from data. It might have been the finance organization, or it could have been human resources; somebody else in the organization needed that.
They don't have a developer that helps them typically. And so we find that they would wait many, many months, or maybe never get the project done, just because they never had access to that data or to the developer to actually do the work that was required for the transformation. And so, it's no code for almost all use cases, where it literally is: select your data, so select the connector, and then select the transformations. And there are some basic transformations, renaming of fields, transformation of data in simple ways, that's more than sufficient for the vast majority of use cases, and then obviously through to the destination, with the connector on the other side doing the final transformation to the final data source that you want to migrate the data to. You know, you have an interesting background. I was looking at your history, and you've essentially been a web services kind of guy all your life, from a code standpoint, software-defined environments. And now EC2 is the crown jewel of AWS and doing more and more with S3. But what's interesting is, as you build more of these layers of services in there, there's more flexibility. So, right now in most customer environments there's a debate around, do I build something monolithic and/or decoupled, okay? And I think there's a world where they're not mutually exclusive anyway. You have a mainframe, you have a big monolithic thing if it does something, but generally people would agree that a decoupled environment is more flexible and more agile. So I want to kind of get to the customer use case, because I can really see this being really powerful, AppFlow with PrivateLink, where you mentioned Snowflake. I mean, Snowflake is built on AWS. They're doing extremely, extremely well. Like any other company that builds on AWS, whether it's a theCUBE cloud or a Snowflake, as we tap those services as customers, we might have people who want to build on our platform on top of AWS.
So I know a bunch of startups that are building within the Snowflake ecosystem, a customer of yours. So they're technically a customer of Amazon, but they're also in the ecosystem of, say, Snowflake. So this brings up an interesting kind of computer science problem, which is, architecturally, how do I think about that? Is this something where AppFlow could help me? Because I certainly want to enable people to build on a platform that I build, if I'm doing that, if I'm not going to be a pure SaaS turnkey application, but if I'm going to bring partners in and do integration, use the benefits of the goodness of an API- or connector-driven architecture, I need that. So explain to me how this helps me or doesn't help me. Is this something that makes sense to you? Does that question make sense? How do you react to that? I think so. I think the question is pretty broad, but I think there's an element in which we can help. So firstly, you talk about decoupled applications, and I think that is certainly the way that we've gone at Amazon, and it's been very, very successful for us. I think we started that journey back in 2003, when we decoupled the monolithic application that was Amazon.com, and that's when our service journey started; a lot of that sort of inspired AWS and how we built what we built today. We see a lot of our customers doing that, moving to smaller applications. It just works better, it's easier to debug. There's ownership at a very controlled level, so you can let all your engineering teams have very clear and crisp ownership. And it just drives innovation, right? Because each little component can innovate without the burden of the rest of the ecosystem. And so that's what we really enjoy. In terms of, I think the other thing that's important when you think about design is to see how much of the ecosystem you can leverage.
And so whether you're building on Snowflake, you're building directly on top of AWS, or you're building on top of one of our other customers and partners, if you can use something that solves the problem for you versus building it yourself, well, that just leaves you with more time to actually go and focus on the stuff that you need to be solving, right? The product you need to be building. And so in the case of AppFlow, I think, if there's a need for the transfer of data between, for example, Snowflake and some data warehouse that you as an organization are trying to build on the Snowflake infrastructure, AppFlow is something you could potentially look at. It's not something you could just use for anything; it's very specific and focused on the flow of data between services from a data analytics point of view. It's not really something you could use from an API point of view or messaging between services. It's more really just facilitating that flow of data and the transformation of data, to get it into a place where you can do something useful with it. But like any of our services, there's some level of coding when you use any layer in the stack. Yeah, so it's a level of integration, right? No code to code, depending on how you look at it. Cool, customer use cases. You mentioned large-scale analytics. I thought I heard you say machine learning, data lakes. I mean, basically anyone who's using data is going to want to tap some sort of data repository and figure out how to scale data when appropriate. There's also contextually relevant data that might be specific to, say, an industry vertical or a database. And obviously AI becomes the application for all of this. If I'm a customer, how does AppFlow relate to that? How does that help me, and what's the bottom line? So I think there are two parts to that journey, depending on where customers are. We do have millions of customers today that are running applications on AWS.
So over the last few years, we've seen the emergence of data lakes, really just the storage of a large amount of data, typically in S3, that companies then want to extract value out of and use in certain ways. Obviously we have many, many tools today, from Redshift to Athena, that allow you to utilize these data lakes and run queries against this information, and things like EMR, one of our older services in the space, for doing some sort of large-scale analytics. And more recently, services like SageMaker are allowing customers to do machine learning, being able to run machine learning across the enormous amount of data they have stored in AWS. And there's some stuff in the IoT use case space as well that's emerging. And many customers are using that. There are obviously many customers today that aren't using that on AWS, potential customers for us, that are looking to do something useful with data. And so the one part of the journey is setting up all of that infrastructure, and we have a lot of services that make it really easy to do machine learning and analytics and that sort of thing. And then the other side of the problem, which is what AppFlow is addressing, is: how do I get that data to S3 or to Redshift to actually go and run that machine learning workload? And that's what it's really unlocking for customers. And it's not just the one-time transfer of data. The other thing that AppFlow actually supports is the continuous updating of data. And so if you decide that you want the view of your data in S3, for example in a data lake, to be kept up to date, within a few minutes or within an hour, you can actually configure AppFlow to do that. And so the data source could be Salesforce, it could be Slack, it could be whatever data source you want to pull in, and you continuously have that flow of data between those systems. And so when you go to run your machine learning workload or your analytics, it's all continuously up to date.
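The continuously updating flow described here maps to AppFlow's scheduled trigger type. Below is a hedged sketch of the request shape a CreateFlow call might take for a Salesforce-to-S3 flow; the connector profile name, bucket, and schedule are hypothetical placeholders, and the field names follow the AppFlow API as best I can tell, so treat the whole thing as an approximation rather than a definitive recipe:

```python
# Hedged sketch of a continuously refreshing AppFlow flow (Salesforce -> S3).
# Profile name, bucket, and schedule expression are hypothetical placeholders.
flow_definition = {
    "flowName": "salesforce-accounts-to-datalake",
    "triggerConfig": {
        # "Scheduled" keeps the data lake copy up to date, as Dave describes;
        # "OnDemand" would be the one-time transfer case.
        "triggerType": "Scheduled",
        "triggerProperties": {
            "Scheduled": {"scheduleExpression": "rate(1hours)"}  # format is an assumption
        },
    },
    "sourceFlowConfig": {
        "connectorType": "Salesforce",
        "connectorProfileName": "my-salesforce-profile",  # hypothetical
        "sourceConnectorProperties": {"Salesforce": {"object": "Account"}},
    },
    "destinationFlowConfigList": [{
        "connectorType": "S3",
        "destinationConnectorProperties": {
            "S3": {"bucketName": "my-data-lake-bucket"}  # hypothetical
        },
    }],
    # "Map_all" copies every field as-is; the no-code renames and simple
    # transforms from the console would appear here as additional tasks.
    "tasks": [{"sourceFields": [], "taskType": "Map_all", "connectorOperator": {}}],
}

# With credentials configured, the actual call would be roughly:
#   import boto3
#   boto3.client("appflow").create_flow(**flow_definition)
```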
And you don't have this problem of, let me get the data, right? When I think about some of the data jobs that I ran in my time, back in the day as an engineer on early EC2, a small part of it was actually running the job on the data. A large part of it was, how do I actually get that data, and is it up to date? Yeah, the data is critical. I think that's the big feature there, this idea of having the data connectors really makes the data fresh, because if you go through the modeling and you realize, well, I missed a big patch of data, the machine learning is not accurate. Exactly. I mean, it's only as good as the data. And the other thing is, it's very easy to bring in new data sources, right? You think about how many companies today have an enormous amount of data just stored in silos, and they haven't done anything with it. Often it'll be a conversation somewhere, right around the coffee machine: hey, we could do this and we could do this. But they haven't had the developers to help them, they haven't had access to the data, and they haven't been able to move the data and put it in a useful place. And so I think what we're seeing here is a real unlocking of that, because going from that initial conversation to actually having something running literally requires no code. Log into the AWS console, configure a few connectors, and it's up and running and you're ready to go. And you can do the same thing with SageMaker or any of the other services we have on the other side, that make it really simple to run some of these ideas that historically have just been too complicated. Okay, so take me through that console piece, just walk me through it. I'm in, you sold me on this. I just came out of a meeting with my company and I said, hey, you know what? We're blowing up this siloed approach. We want to kind of create this horizontal data model where we can mix and match connectors based upon our needs. So, okay, what do I do?
I'm using SageMaker, using some data, I've got S3, I've got an application. What do I do? I'm connecting what, S3 to the app? Yeah, so the simplest thing, and the simplest place to find this actually, is Jeff Barr's blog that he did for the release, right? Jeff always does a great job in demonstrating how to use our various products. But it literally is going into the standard AWS console, the console that we use for all of our services. I think we have 200 of them, so it is getting kind of challenging to find them all in that console as we continue to grow. And find AppFlow. AppFlow is a top-level service, and so you'll see it in the console. And the first thing you've got to do is configure your source connector. And so there's a connector that says, where's the data coming from? And as I said, we had 14 partners. You'll be able to see those connectors there and see what's supported. And obviously there's the connectivity: do you have access to that data, or where is that data running? AppFlow runs within AWS, and so you need to have either VPN or Direct Connect back to your organization if the data source is on-premise. If the data source happens to be in AWS, it'll obviously be in a VPC, and you just need to configure some of that connectivity functionality. So no code if the connectors are there, but what if I want to build my own connector? So building your own connector, that is something that we're working on with third parties right now. I could be corrected; I'm not 100% sure whether that's available yet. It's certainly something I think we would allow customers to do, to extend either the existing connectors or to add additional transformations as well. And so you'd be able to do that. But the transformations the vast majority of our customers need are literally just in the console with the basic- You're taking some of the bigger apps that people have and just building those connectors. How does a partner get involved?
You've got 14 partners now. How do you extend the partner base? Contact an Amazon partner manager, or send an email to someone? How does someone get involved? What are you recommending? So there are a couple of ways, right? We have an extensive partner ecosystem that the vast majority of these ISVs are already integrated with. And so we have the 14 we launched with. We also pre-announced SAP, which is going to be a very critical one for the vast majority of our customers. Having deep integration with SAP data and being able to bring that seamlessly into AWS, that'll be launching soon. And then there's a long list of other ones that we're currently working on, and they're currently working on themselves. And then the other one is going to be, like with most things at Amazon, feedback from customers. And so we'll hear from customers, and very often we'll hear from third-party partners as well, who come and say, hey, my customers are asking me to integrate with AppFlow, what do I need to do? And so just reach out to AWS and let them know that you'd be interested in integrating. If you're not part of the partner program, the team would be happy to engage and bring you on board. So it's really very- Classic Amazon playbook: get the top use cases nailed down, listen to customers, and figure it out. Great stuff. Dave, we really appreciate it. I'm looking forward to digging into AppFlow, and I'll check out Jeff Barr's blog; April 22nd was the launch day, so he probably has it up there. One of the things that I want to just jump into now, moving to the next topic, is the cost structure. There's a lot of pressure on costs. This is where I think this Migration Acceleration Program for Windows is interesting. Andy Jassy always likes to boast on stage at re:Invent about the number of Windows workloads running on Amazon Web Services. This has been a big part of the customer base. I think for over 10 years I can think of him talking about this.
What is this about? Are you still seeing uptake on Windows workloads? Obviously Azure has market share, but it doesn't really square in my mind what's going on here. Tell us about this migration service. Yeah, absolutely, on the migration side. So Windows, absolutely, you know, we still believe AWS is the best place to run a Windows workload. And we have many, many happy Windows customers today, and it's a very large, great part of our business today. I was part of the original team back in 2008 that launched, I think it was Windows Server 2008 back then, on EC2. And I remember working out all the details of how to do all the virtualization with Windows. Obviously, back then we'd done Linux, and getting Windows up and running meant working through some of the challenges that Windows had as an operating system in the early days. And it was October 2008 that we actually launched Windows as an operating system. And we've had many, many happy Windows customers since then. Why is Amazon so peaked to run workloads from Windows so effectively? Well, I think, sorry, did you say peaked? Why, you know, why is Amazon so well positioned to run the Windows workloads? Well, firstly, I mean, I think, you know, Windows is really just the operating system, right? So if you think about that as the very last little bit of your virtualization stack and then being able to support your applications, what you really have to think about is everything below that, both in terms of the compute, so the performance you're going to get, the price performance you're going to get. You know, with our Nitro hypervisor and the Nitro system that we developed and launched in 2018, we really are able to provide you with the best price performance and have the very least overhead from a hypervisor point of view.
And then what that means is you're getting more out of your machine for the price that you pay. And then you think about the rest of the ecosystem, right? You think about all the other services and all the features, and just the breadth and extensiveness of AWS. And that's critically important for all of our Windows customers as well. And so you're going to have things like Active Directory and all those sorts of things that are very Windows-specific, and we can absolutely support all of those natively. And then the Windows operating system as well; you know, we have various agents that you can run inside the Windows box to do maintenance and management. And so I think we've done a really good job in bringing Windows into the larger and broader ecosystem of AWS. And really it's just a case of making sure that Windows runs smoothly, and that's just the last little bit on top of that. And so, you know, many enterprises run Windows today. When I started out my career, I was developing software in the banking industry, and it was very much a Windows environment where they were running critical applications. And so we see it as critically important for customers who run Windows today to be able to bring those Windows workloads to AWS. We are seeing a trend. Yeah, so go ahead. Well, that's certainly out there from a market share standpoint, but this is a cost driver. You guys are saying, and I want you to just give an example or illustrate, why it costs less. How is it a cost savings? Is it just services, cycle times on EC2? I mean, what's the cost savings? I'm a customer, like, okay, so I'm going to go to Amazon with my workloads. Why is it a cost savings? I think there are a few things. The one I was referring to in my previous comment was the price performance, right?
And so if I'm running on a system where the hypervisor is using a significant portion of the physical CPU that I want to use as well, well, there's an overhead to that. And so from a price performance point of view, if I go and benchmark a CPU and I look at how much I pay per unit of that benchmark, it's better on AWS, because with our Nitro system we're able to give you a hundred percent of that performance. And so you get a performance gain there. And so that's the first thing, price performance, which is different from a lower price, but there's a saving there as well. The other one, and this gets into the migration program as well, is that a large part of what we do with our customers when they come to AWS is first take a long look at their license strategy. What licenses do they have? And a key part of bringing a Windows workload to AWS is license optimization. What can we do to help you optimize the licenses that you're using today, for Windows, for SQL Server, and really try to find efficiencies in that? And so we're able to secure significant savings for many of our customers by doing that. We have a number of tools that they can use as part of the migration program to do that. And so that helps save there. And then finally, you know, we have a lot of customers doing what we call modernization of their applications, so they really embrace cloud and some of the benefits that you get from cloud, especially elasticity, being able to scale for demand. It's very difficult to do that when you're bound by a license for your operating system, because every box you run, you have to have a license for it. And so, you know, when turning auto scaling on, you've got to make sure you have enough licenses for all these Windows boxes you're spinning up.
And so with that push the cloud's bringing, we've seen a lot of customers move applications from Windows to Linux, or even move SQL Server from SQL Server on Windows to SQL Server on Linux, or to another database platform, and do a modernization there that allows them to benefit from the elasticity that cloud provides without having to constantly worry about licenses. So, final question on this point: migration service implies migration from somewhere else. How do they get involved? What's the onboarding process? Can you just give a quick detail on that? Absolutely. So we've been helping customers with migrations for years. We launched the Migration Acceleration Program, MAP, I think around 2016, 2017, the first part of that. And it was really just a bringing together of the things we'd learned, the tools we'd built, the best strategies to do a migration. And we said, how do we help customers looking to migrate to the cloud? And so that's what MAP is all about. It's a three-phase process: we'll help you assess the migration, we'll help you do a whole lot of planning, and then ultimately we help you actually do the migration. We partner with a number of external partners, ISVs, and GSIs who also work very closely with us to help customers do migrations. And so what we launched in April of this year with the Windows migration program is really just more support for Windows workloads as part of the broader Migration Acceleration Program. And there are benefits to customers: it's a smoother migration, it's a faster migration in almost all cases. We're doing license assessments, and so there's cost reduction in that as well. And ultimately, there are other benefits that we offer them if they partner with us in bringing the workload to AWS.
And so getting involved is really just reaching out to one of our AWS sales folks, or one of your account managers if you have an account manager, and talking to them about workloads that you'd like to bring in. And we even go as far as helping you identify which applications are easiest to migrate, so that you can kind of get going with some of the easier ones while we help you with some of the more difficult ones. And it's really just about removing those roadblocks to bringing your services to AWS. Takes the blockers away. Dave Brown, vice president of EC2, the crown jewel of AWS, breaking down AppFlow and the migration of Windows services. Great insights, appreciate the time. We're here with Dave Brown, VP of EC2, as part of the virtual CUBE coverage. Dave, I want to get your thoughts on an industry topic. Given what you've done with EC2 and its success, and with COVID-19, you're seeing that scale problem play out on the world stage for the entire global population. This is now turning non-believers into believers of DevOps, web services, real time. I mean, this is now a moment in history with the challenges that we have. Even when we come out of this, whether it's six months or 12 months, the world won't be the same. And I believe that there's going to be a Cambrian explosion of applications, in an architecture that's going to look a lot like cloud, cloud native. You've been doing this for many, many years, a key architect of EC2 with your team. How do you see this playing out? Because a lot of people are going to be squirreling in rooms when this comes back. They're going to be video conferencing now, but when they have meetings, they're going to look at the window of the future and they're going to be exposed to what's failed, and saying, we need to double down on that, we have to fix this. So there are going to be winners and losers coming out of this pandemic really quickly.
And I think this is going to be a major opportunity for everyone to rally around this moment to reset. And I think it's going to look a lot like this decoupled, distributed computing environment, leveraging all the things we've talked about in the past. So what's your advice, and how do you see this evolving? Yeah, I completely agree. I mean, just the speed at which it happened, and the way in which organizations, both internally and externally, had to reinvent themselves very, very quickly, right? We've been very fortunate within Amazon, and moving to working from home was relatively simple for the majority of us. Obviously, we have a number of employees working in data centers and fulfillment centers who have been on the front lines and doing a great job. But for the rest of us, it's been virtual video conferencing, right? All of our meetings, and being able to use all of our networking tools securely, either over VPN or the non-VPN infrastructure that we have. And many organizations have had to do that. So I think there are a number of different things that have impacted us right now. Obviously, virtual desktops have been a significant growth point, right? Folks don't have access to their physical machines anymore; they're now all having to work remote. And so a service like WorkSpaces, which runs on EC2 as well, has been a critical service there to support many of our largest customers. Our Client VPN service, which we have within EC2 on the networking side, has also been critical for many large organizations as they see more of their staff working remotely every day, and we've been able to support a lot of customers there. More broadly, with COVID-19 we've seen some industries really struggle. Obviously the travel industry, people just aren't traveling anymore, and so there's been immediate impact to some of those industries. 
There have been other industries, with support functions like video conferencing or the entertainment side of the house, that have seen a bit of growth over the last couple of months. And education's been an interesting one for us as well, where schools have been moving online. Behind the scenes in AWS, and on EC2, we've been working really hard to make sure that our supply chains are not interrupted in any way. The last thing we want is for any of our customers to not be able to get EC2 capacity when they desperately need it. And so we've made sure that capacity has been fully available all the way through the pandemic. We've even been able to support customers with sudden demand; I remember one customer who told me that the next day they were going to have more than 100,000 students coming online, and they suddenly needed to grow their business by some crazy number. And we were able to support them and give them that capacity, which was way outside of any sort of demand signal. I think this is the Cambrian explosion I was referring to, because a whole new set of things has emerged. New gaps in businesses have been exposed; new opportunities are emerging. This is about agility. It's real time now. It's actually happening for everybody, not just the folks on the inside of the industry. This is going to create a reinvention. So it's ironic, I've heard the word reinvent mentioned more times over the past three months than I've heard it in reference to Amazon, and that's your annual conference, re:Invent. But people are resetting and reinventing. It's actually a tactic now. This is going on. So they're going to need some clouds. So what do you say to that? So I mean, the first thing is making sure that we can continue to be highly available and continue to have the capacity. The worst scenario is not being able to have the capacity for our customers, right? We did see that with some providers. 
And that, honestly, on our side is just years and years of experience in managing the supply chain. The second thing is obviously making sure that we remain available, that we don't have issues. And so with all of our staff going remote and working from home, all my teams are working from home, and in terms of being able to support AWS in this environment, we haven't missed a beat there, which has been really good. People are well set up to be able to absorb this. And then obviously remaining secure, which is our highest priority. And then innovating with our customers, and that's both products that we're going to launch over time, but in many cases, like that education scenario I was talking about, it's being able to find capacity in multiple regions around the world, literally on a Sunday night, because they found out that afternoon that on Monday morning all schools would be virtual and would be using their platform. And so being able to respond to that demand. We've also seen a lot more machine learning workload. We've seen an increase there as organizations run more models, within the health sciences area, but also in the financial areas and in just general business, right? Wherever it might be, everybody's trying to respond to: what is the impact of this, and how do we better understand it? And so machine learning is helping there. And so being able to support all those workloads. And so there's been an explosion. I was joking with my son. I said, you know, this world is interesting, but Amazon really wins: stuff's getting delivered to my house, I want to play video games and watch Twitch, and I want to build applications and write software, and I can do all of that from my home. So you win all around. But all kidding aside, this is an opportunity to truly define agility. So I want to get your thoughts, because I've been a big fan of Amazon. 
As everyone knows, I'm kind of a pro-Amazon person, and as other clouds try to level up, they're moving in the same direction, which is good for everybody, good competition and all. But S3 and EC2 have been the crown jewels, and building more services around those, creating these abstraction layers and new sets of services to make things easier, I know has been a top priority for AWS. So can you share your vision on how you're going to make EC2 and all these services easier for me? If I'm a coder, I want literally no code, low code, infrastructure as code. I need Amazon to be more programmable and easier. Can you just share your vision, as we talk about the virtual summits and cover the show, on making Amazon easier to consume and use? You know, it's something we've thought about a lot over the years, right? When we started out, we were very simple. In the early days of EC2, it wasn't that rich a feature set. And it's been an interesting journey for us. We've launched a lot more features, which naturally brings some more complexity to the platform. We have launched things like Lightsail over the years. Lightsail is a hosting environment that gives you that EC2-like experience, but it's a lot simpler. It's also integrated with a number of other services, like RDS, and with ELB as well, to provide basic load balancing functionality. And we've seen some really good growth there. But what we've also learned is that customers enjoy the richness of what EC2 and the full ecosystem provide, and being able to use the pieces they really need to build their application. From an S3 point of view, and from a broader ecosystem point of view, it's about providing customers with the features and functionality they really need to be successful. 
On the compute side of the house, we've done some things. Obviously containers have really taken off, and there are a lot of frameworks, whether it's EKS, the Elastic Kubernetes Service, or the Docker-based ECS, the Elastic Container Service. That's made things a lot simpler for developers. And then obviously, in the serverless space, Lambda is a great way of consuming EC2, right? I know it's serverless, but there's still an EC2 instance under the hood. And being able to just bring a basic function and run it serverless, a lot of customers are enjoying that. The other complexity we're going after is on the networking side of the house. I find that a lot of developers out there are more than happy to write the code, more than happy to bring their application to AWS, but they struggle a little bit more on the networking side. They really do not want to have to worry about whether they have a route to an internet gateway, or whether their subnet is defined correctly, to actually make the application work. And so we have services like App Mesh, and the whole service mesh space is developing a lot, to really make that a lot simpler, where you can just bring your application and call another application using service discovery. And so those higher-level services are definitely helping. In terms of no code, I think AppFlow is one of the examples where we've already given organizations something at that level that says, I can do something with no code. I'm sure there's a lot of work happening in other areas. It's not something I'm actively thinking about right now in my role leading EC2, but as the use cases come from customers, I'm sure you'll see more from us in those areas. 
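The serverless model Dave describes, where you "just bring a basic function" and AWS runs it on EC2 capacity under the hood, can be sketched with a minimal Lambda-style handler. This is an illustrative sketch, not AWS service code; the event fields and response shape here follow the common API Gateway proxy convention, and the "name" field is a hypothetical payload key:

```python
import json

# A minimal Lambda-style handler: the platform invokes handler(event, context)
# on demand, so the developer writes only the function, not the server.
def handler(event, context):
    # "name" is a hypothetical field in the caller's JSON payload
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Because the handler is plain Python, it can be exercised locally before deployment, e.g. `handler({"name": "EC2"}, None)`, which is part of what makes the model approachable.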
They'll likely be more specific, though, because as soon as you take code out of the picture, you're going to have to get pretty specific in the use case to really get the depth and the functionality that customers will need. Well, it's been super awesome to have your valuable time here on the virtual CUBE, covering the Amazon summit, the virtual, digital event that's going on and will be going on throughout the year. Really appreciate the insight. And I think it's right on the money. I think the world is going to have a six-to-12-month surge of resetting, reinventing, and growing. So I think a lot of companies who are smart are going to reset, reinvent, and set a new growth trajectory, because it's a cloud-native world. It's cloud computing. This is now a reality, and I think there are proof points now. The whole world's experiencing it, not just the insiders in the industry. And it's going to be an interesting time. So really appreciate it, Dave. Thanks for coming on. Thank you very much for having me. It's been good. I'm John Furrier here inside the CUBE Virtual, our virtual CUBE coverage of AWS Summit 2020. We're going to have ongoing Amazon summit virtual CUBE coverage. We can't be on the show floor, so we'll be on the virtual show floor, covering and talking to the people behind the stories. And of course, for the most important stories, check out SiliconANGLE and thecube.net. Thanks for watching.