easier, faster, harder, pick any of them, yeah? It's great. Broad services not only drive sort of the interaction with other partners, they also allow us to build very, very elaborate systems, and what we've seen with these elaborate systems is that they have a whole set of properties. If you were here two years ago, I talked about many of those, how 21st century architectures drive the way that you've always wanted to create your applications. You can make them secure, and certainly with the announcement yesterday of our new Key Management Service, you have a whole new range of functionality to actually protect your data and your customers. They can be adaptive, which means that you don't have to make decisions up front about what hardware you have to run on; if you change your mind, or if you make a mistake, it doesn't really matter, because you can always change your mind later. And if you need a larger database, it's just a push of a button to actually get that done, without having to give up the hundreds of thousands of dollars in hardware investments that you had to make before. They're resilient, because of course all of you deploy your applications to at least two availability zones, right? Yes, good. And with 11 regions around the world, if you have developed an application for one region, you will easily be able to deploy it to any other region in the world with just the push of one button. And what we've seen with these very elaborate applications is that they actually start providing APIs, sometimes first internally, but later to their customers. And then you see a very, very exciting dynamic happening on the AWS platform, and that is that these very elaborate applications suddenly start extending the AWS platform with highly specialized new functionality. And so the AWS platform becomes a lot more than just AWS, and even much more than AWS plus partners. 
And one of the best examples of this that I've seen in recent times is a company called Omnifone. The platform that they have built will allow you to build the next generation of music applications with just a click of a button. Please welcome on stage Phil Sant, Chief Engineer and Co-Founder of Omnifone. Yes, and thank you, Werner. It's great to be here. My name's Phil Sant. I'm Co-Founder and Chief Engineer at Omnifone. Omnifone's the world's leading B2B music platform, and we support all the models: streaming, download, radio services and background music. We operate in 48 countries. We're sort of like a big box of Lego in the cloud from which you can build music services. And we're fortunate enough to work with some of the world's leading music services, including Sirius XM, Spotify, Sony and many others. So this is the diagram I used to use many years ago to describe our music platform. Some of you may be thinking that this looks complicated. Well, the music industry is complicated. You couldn't have built it more complicated if you tried. I tend to think that if I'd have known 11 years ago what I know now, I wouldn't have touched the music industry with a barge pole. I should have built a taxi app. But back then, this is the way we used to build software platforms: colossal architectures in self-hosted data centers. We built ours in a thousand square feet in Acton. But it didn't stop there. For disaster recovery, we had to build two. That's $15 million of sunk cost. But with this architecture, we achieved our service levels and high availability, until real volume arrived. When we had partners on the platform with millions of users, it was really challenging and really expensive to keep every single feature in the platform performing with globally consistent performance. You know, they say if you don't rewrite your own software, somebody else will do it for you. So three years ago, we decided to rebuild ourselves on Amazon Web Services. 
For us, AWS was the only choice. It was a no-brainer. But when we rebuilt ourselves, we had to do it whilst achieving super high service levels. It was sort of like rebuilding a plane whilst it was in flight. But we made it, and now the whole of the platform runs in AWS. We're all in. So let's see what it looks like. What you're looking at here is not just a list of features. These are all individually and geographically scalable AWS-hosted services. And now that the entirety of the platform runs in AWS, we can scale near infinitely with globally consistent performance. And it's great for our developers too: they can come up to speed much more quickly and deliver functionality much more efficiently. So we've got a vastly improved platform for our customers. And the way our customers use us is similar to the way we use AWS. We do the music industry's undifferentiated heavy lifting, and they focus on the value-added applications, the stuff that differentiates them. And it's working. Since we moved to the cloud, we've delivered more new projects and signed more customers than we ever dreamed possible. So now we've got a platform for the music industry. A music industry cloud. People like you can build imaginative and innovative music applications without having to worry about all the complicated stuff going on behind the scenes. And since we moved to the cloud, loads of new opportunities have opened up. High resolution audio is one of them. High resolution audio offers the opportunity for users, for you guys, to experience the music as the artist intended. But it's got its challenges. High-res files are 150 times bigger than the stuff we'd send to a mobile phone. And with 38 million tracks at studio quality, adding a million a month, distributed to two regions, it takes a hell of a lot of storage. And we also leverage AWS for its network bandwidth. Despite the size of the files, we need to deliver them with ultra-low latency and global, uninterrupted streaming. 
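To get a rough sense of the scale being described, here is a back-of-envelope storage estimate. Only the 150x multiplier, the 38 million tracks, and the two regions come from the talk; the mobile file size is my own assumption for illustration.

```python
# Back-of-envelope storage estimate for a high-resolution music catalog.
# MOBILE_FILE_MB is an assumed figure, not Omnifone's actual number.
MOBILE_FILE_MB = 5            # assumed size of a typical mobile-quality track
HIRES_MULTIPLIER = 150        # "150 times bigger", per the talk
TRACKS = 38_000_000           # "38 million tracks at studio quality"
REGIONS = 2                   # catalog distributed to two regions

hires_file_mb = MOBILE_FILE_MB * HIRES_MULTIPLIER            # 750 MB per track
total_pb = TRACKS * hires_file_mb * REGIONS / 1_000_000_000  # MB -> PB (decimal)
print(f"~{total_pb:.0f} PB across both regions")             # prints "~57 PB across both regions"
```

Even with a conservative assumption for the mobile file size, the catalog lands in the tens of petabytes, which is why "a hell of a lot of storage" is not an exaggeration.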
So we've got a lot of great partners on the platform doing some fantastic innovation. Pono is one of those customers. They're really breaking new ground in the new world of high resolution audio. I recommend you listen to it. And we're delighted to be working with them. Omnifone's new cloud platform is making all of these dreams a reality. So if any of you fancy being the next Pono, Spotify or SoundCloud, please get in touch and we'd love to help make that happen. It's been great. I hope you enjoy the rest of the conference, and thank you for having me. Thanks, Phil. I find this extremely exciting. If you're a music lover and you've always wondered why we don't have more different music applications, that era has gone. Now you can build your own music application without having to worry about licensing and things like that. That is pretty spectacular. And I thank Phil and Omnifone for delivering that on our platform, so that you can build these things, these new applications, on top of AWS plus Omnifone. There's also something else. If you continue the theme of broad services, it drives the speed of development. You can develop the new products that you want to build much faster. And I think in most organizations, agility these days is the holy grail. The ability to experiment and to deliver new functionality really quickly. Because competition is murderous. There's an abundance of products in the market. There's increasing consumer choice. There's decreasing consumer loyalty. And so you have to work much harder and much faster to get products into the hands of your customers, and then figure out whether those are the right products or not. And so agility is really what most organizations are after today, especially using digital services. And I'm going to make a statement here: the core to agility is dev and test. Think about that. 
Most media outlets will talk about dev and test as not being serious workloads. But they are the core workloads of most CIOs. And so in the past two, three years, I've made it a point to ask every CIO that I met what percentage of their budget was going to dev and test. And the numbers are consistently between 40 and 60%. Imagine what you can do if you actually take care of those workloads. And most organizations are moving those workloads onto AWS, and we help them save significant money. But not only that, we help them to move so much faster. I just met in Australia with a media company whose CMS, if you wanted to instantiate a new copy of it, would take 50 days to do so. If something takes 50 days, you're not going to innovate on it. You're not going to build new products around it. So they moved it over to AWS, and they can now instantiate a new copy of the whole CMS in four hours. That means that, as a company, you are going to innovate. You're going to drive new products, things that you would never have been able to do before. And we're helping many enterprises, and not only enterprises but small and medium businesses as well, to move much faster. And I would like to invite Bryson Koehler of The Weather Company on stage to talk to you about how they are moving much faster. I'm a weather geek, and I think weather is amazing. Weather impacts billions of people every day. In fact, weather impacts over a third of the world's GDP every single day. I think that's phenomenal. And at The Weather Company, we've been reinventing ourselves over the last couple of years to really make sure that we've got the best forecasts built on the best science, so that we have what's really an awesome weather data platform around which we can wrap the best stories, the best services, and the best safety, all tied to weather. 
If you think about the intersection of weather and consumer behavior, whether in your own life or elsewhere, you'll understand and realize just how important weather is to all of the decisions that we all make every day. And so at The Weather Company, we really have moved past where we started. Most of you know us as the Weather Channel, the most distributed cable network in the United States. But today, we're much more than that. Today, we're much more than Weather.com and Weather Underground, Weather.com being a top-15 website. Today, we're more than the fleet of mobile apps that we own and operate, apps that are installed on probably most of your devices today. Today, through a great partnership with AWS, we've built a data platform that powers a whole suite of mobile applications with our partners at Apple, Google, Samsung, Lenovo, Dell, HP; the list goes on. Right? Those partnerships are really important, and they impact over a billion people around the world. We also have an amazing fleet of B2B services, where we power 48 of the world's 100 largest airlines with weather decision-making so that they can operate over 50,000 flights safely every day. We're the world's largest energy and insurance weather provider. We provide weather data and systems and tools to over 500 local media broadcast companies in markets all over the world. Now, this transformation, this journey that we've been on as we try to wrap an amazing weather platform around the amazing science that underpins it all, has not been an easy transformation. We started this journey in a pretty deep hole, a hole that came from a series of acquisitions, normal course of business, and some underinvestment along the way. We ended up with 13 data centers that were all interdependent and interconnected. We had pretty much every data platform that you could imagine running across these environments. Heck, I still had a VAX in production up until about a year ago. 
Now, all of this complexity resulted in a very low rate of successful change, and agility starts with your ability to make quick changes. If your changes are unsuccessful, then even if you make a lot of them, you're not very agile. And so we had to change. We had to change our technology. But more importantly, we had to change our culture, because if you're on a journey to transform your business or your company to be an agile company, one that takes advantage of the full suite of offerings that AWS provides, it does start with your culture. It starts with your team and it starts with your people. And we had to go through that transformation. To do that, we did have to make technology bets. We had to go through and figure out where we were going to place our bets and what was important. And so we chose AWS. We chose AWS because it gave us the confidence that we needed to be able to move forward. We use a fleet of services across the board that gives us the ability to produce and distribute upwards of 15 billion forecasts every day. That's somewhere between 100,000 and 150,000 transactions every second, to consumers and businesses all over the world. We had to be able to rely on that and be confident to do that. And so the partnership with AWS for us has been tremendous. Every time we've hit a roadblock or a challenge, or we've pushed an element further than it was designed to be pushed, they've jumped right in there with us and helped us make it better. And that's what a partnership is all about. And the end result of that is a really great platform. A data platform that enables us to ingest data from over 800 different sources all over the world. Radar data, both satellite and terrestrial. Data from the 35,000 personal weather stations connected into the Weather Underground network. Data from the internet of things, which will continue to grow. 
And we've got a platform that's built to scale and handle that growth as we continue to refine the accuracy of our forecasts and provide better data and better decision-making tools, so that we can help both consumers and businesses make those decisions in real time with more confidence as we move forward. Now, that confidence starts with the forecast. If you're going to be a big data company, if you're going to be the leading forecaster in the world, then you'd better be the most accurate. So this transformation has given us the power to have the world's most accurate forecast. Last year, when we launched this, all running on AWS, we moved from forecasting 2.2 million locations that we would update about every four hours, to forecasting for nearly 3 billion locations updated every 15 minutes. It's a massive step change that provides the world's most accurate forecast. We always roll those forecasts out to Weather Underground first. It's our test bed, if you will, to make sure that we're good with it before we roll it into the Weather Channel. So you'll always see Weather Underground slightly edging out here, as we have a fun internal competition, which is the spirit of what that culture needs to be. And so we're very proud of the accuracy that we've got, and we're going to continue to focus in on that. We're going to continue to make sure that we can scale the platform and leverage the best science, both atmospheric science and computer science, to ensure that we've always got the world's most accurate data to help everybody make the decisions that they need to make. And the world's best data gives us the credibility to be able to power every iOS 8 device, every Google Now-enabled Android device, and over 170 million downloads of our own applications all over the world. And so we're really proud to have the confidence and the partnership with AWS that has given us the scale and the capabilities to be at the intersection of consumer behavior and weather in a really big way. 
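To put that step change in numbers, here is a quick calculation. The location counts and refresh intervals come from the talk; the "location-updates per day" framing is my own way of comparing them.

```python
# Rough scale of the forecasting step change described in the talk:
# location-updates per day, before and after the move to AWS.
before = 2_200_000 * (24 / 4)            # 2.2M locations, refreshed every 4 hours
after = 3_000_000_000 * (24 * 60 / 15)   # ~3B locations, refreshed every 15 minutes
factor = after / before
print(f"{factor:,.0f}x more location-updates per day")  # prints "21,818x more location-updates per day"
```

By this measure the platform went from roughly 13 million forecast updates a day to nearly 300 billion, about a 22,000-fold increase, which is what makes "massive step change" concrete.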
Thank you. Higher fidelity, higher accuracy, at much lower cost, at much greater speed. What more do you want? Yeah? Actually, I want to come back to something that I said before about dev and test and agility. One thing about test in the cloud: if you think about testing and how you used to do it, testers were actually always sort of the forgotten guys in the overall development cycle. They always had older hardware. They could never test at the fidelity that they would want to. And what I've heard from many of our customers is that testing in the cloud not only allows them to actually drive that cost down and makes them go much faster, they can actually test at much higher fidelity than they could otherwise. I think development is changing in support of agility. We see something happening that we were already promoting two years ago here as well: the decomposition into smaller building blocks helps development go much faster. Creating smaller blocks of your application, often even in the form of microservices, allows you to achieve better isolation, to achieve faster updates, and to be able to use things like autoscaling, to actually help you build better, faster services and to help the agility of your company. Before I go any further on this, I would like to invite Patrick Kolencherry on stage, the co-founder and CTO of Pristine, who will explain to you how their use of containers and microservices will help you build better applications. My name is Patrick Kolencherry and I'm the CTO of Pristine. Pristine is an Austin-based startup that connects people and professionals through Google Glass. Our product, EyeSight, allows you to broadcast what you're seeing through your eyes to people around the world, on any device. We've been around since around May of 2013, and in that time we've built a very strong team. 
We've raised more than 5 million in funding and we've built a client base of over 2,000 clients. We founded Pristine to fundamentally change the way that healthcare is delivered. We started in the operating room and spread throughout the health system, helping doctors and nurses improve the patient experience. Well, what do those words mean? What is an improvement in patient experience? To highlight that, I'd like to share with you guys one of our customer use cases. When babies are born prematurely, oftentimes they're separated from their mothers during the first few hours after their birth. Both the mother and child tend to be fairly sick, and they require careful, specialized care from different hospital teams. A NICU in Boston is using Pristine's technology and Google Glass in order to connect mothers with their children and improve patient satisfaction. With our product, mothers can now get a dad's-eye view of their children, no matter how far the distance is. Our technology allows this connection between mother and child to improve patient experience and satisfaction. We quickly realized that if we can do this in healthcare, we can do this anywhere else. So we began to expand across horizontals, going into manufacturing, medical devices, and field service, to allow workers to remotely collaborate through our solution and through Glass. All of this is possible through the power of Docker and AWS. We depend on AWS for security, for availability, and for their partner ecosystem. For us, since AWS offers a HIPAA-compliant infrastructure, it allowed us to build our product on a platform that we knew we could trust to scale. In addition, as a startup, it's difficult to build an enterprise-grade product when you're limited by capital. The AWS Activate program was there to help us build our product. And finally, when you're just a couple of people in a room, it's pretty hard to build an enterprise-grade product rapidly. 
Luckily, we were able to leverage Amazon's partner infrastructure and ecosystem to build our product. We worked with a local AWS partner called Flux7 to build out our Amazon infrastructure and get 10 times more DevOps work done in a tenth of the time. Our infrastructure on Amazon uses a multitude of AWS services. Our end-to-end encrypted video calls require an authentication component to allow our clients to connect to each other. These components run on top of EC2 clusters that are distributed across multiple availability zones. In addition, we use Elastic Load Balancers to handle load balancing of our traffic between our different signaling servers, and CloudFront to serve our static web application. While the majority of our calls are point-to-point, some of these calls do require the use of a relay service in the cloud. We host this relay service on top of EC2 and use ElastiCache to facilitate communication between our signaling servers and our relay servers. Now, all of these services on EC2 are made possible through Docker. Pristine depends on Docker for its growth. Containers are the key to our growth, and we use them everywhere, from our development tools and infrastructure all the way to our production environment. For us, Docker simplifies the workflow from development to production. What this means is that our engineers can build on the same containers, test on the same containers, and deploy the exact same containers across the board, from development to production. In addition, in the process of onboarding and replicating a production environment locally, rather than having to set up multiple virtual machines, we can take advantage of container linking to create this complex multi-node infrastructure locally very quickly. As a result, Docker also simplifies our deployments and reduces their variability. We can use the power of Docker to encapsulate our application and all of its dependencies into a single container. 
In addition, since Docker containers are built upon layers, rollbacks are extremely simple. We don't have to worry about rolling back dependencies or worry about bringing up an old AMI. We can simply check out a tag of that Docker container. It's as easy as checking out a version from a source repository. Finally, Docker and AWS are perfect companions. We're using Docker with AWS to build a streamlined deployment process, a blue-green deployment process. For a bit of context, a blue-green deployment process involves bringing up a parallel environment in production, a parallel infrastructure group, which you launch your new services onto, and then making the switchover at the load balancer layer. In the past, as we explored building out a blue-green deployment strategy, we saw that we'd have to create new AMIs and spin up entirely new instances on new auto-scaling groups to actually build this out. With Docker, we're simply able to perform a docker pull in order to start a new container on our new infrastructure group, which is an order of magnitude faster than the process of building new AMIs for every single one of our microservices. Without Docker and AWS, it would have been extremely difficult, if not impossible, for us to rapidly build an enterprise-grade product. With AWS and AWS's global infrastructure, we hope to increase our availability and decrease our application latencies so that we can provide our clients with the best possible product experience. All in all, Docker and AWS let us be comfortable with the idea of our product scaling and working just as well across hundreds of enterprises as it did in that NICU in Boston. Thank you. So, are you guys ready? So, why do developers love containers? Yeah? Because you can ship them everywhere, and they have a standard format. By the way, there's a book called The Box that you should really read. 
That's about physical shipping containers and how they changed the world. It's an amazing book to read. Why do we love containers for development? Yeah? It's easier to manage development. Portability between environments makes it much easier. There's much lower risk in deployments. It's easier to manage and maintain application components and have them work together. But it's really hard, I find. Yeah? Scheduling containers requires a lot of heavy lifting. Yeah? You have to work on optimizing placement for utilization, placement for high availability, the right resources per container, launching them, rolling them back, cluster management, all these kinds of things. It makes it really hard. It's not simple. And so, what if you could get all the benefits of containers without the overhead? Introducing the Amazon EC2 Container Service: a highly scalable, high-performance container management service. Yeah, you can manage containers at any scale. You can launch them. You can terminate them. On clusters of EC2 instances, you can run tens of thousands of containers, with built-in versioning for deployment and rollback. You get optimized scheduling, so you can schedule your containers for optimal placement. You can specify per-container resource requirements. And you can ensure high availability with what we call isolation policies. Basically, that means you can deploy sets of containers to separate availability zones to ensure that your application has the high availability that it requires. It also improves resource efficiency. You can run a mix of containers over instances, so you can really improve your resource utilization, and you can mix long- and short-running tasks. And, of course, as always, there's a simple API with which you can integrate. You can centralize cluster visibility and control. It integrates with Docker repositories. 
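To make the task model and per-container resource requirements concrete, here is a hedged sketch of a task definition in the shape the ECS API accepts. The image names, family name, and resource numbers are hypothetical, not from the talk.

```python
# Hypothetical ECS task definition: two containers deployed together,
# each with its own CPU (in CPU units) and memory (in MiB) requirements.
# A definition like this would be registered through the ECS API,
# e.g. boto3's ecs.register_task_definition(**task_definition).
task_definition = {
    "family": "frontend",  # hypothetical task family name
    "containerDefinitions": [
        {
            "name": "nginx",
            "image": "example/nginx:latest",     # hypothetical image
            "cpu": 256,
            "memory": 512,
            "portMappings": [{"containerPort": 80, "hostPort": 80}],
            "links": ["app"],                    # co-located with the app container
        },
        {
            "name": "app",
            "image": "example/node-app:latest",  # hypothetical image
            "cpu": 512,
            "memory": 1024,
        },
    ],
}

# The scheduler can use these numbers to pick an instance with enough headroom.
total_cpu = sum(c["cpu"] for c in task_definition["containerDefinitions"])
total_mem = sum(c["memory"] for c in task_definition["containerDefinitions"])
print(total_cpu, total_mem)  # prints "768 1536"
```

The point of the sketch is that the service, not you, turns these declared requirements into placement decisions across the cluster.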
And you can extend it with existing or custom schedulers, such as, for example, the Mesos scheduler, if that's what you want. But, of course, none of this is real unless we can give you a demo of it. So I'd like to invite Paul Duffy, Head of Product Management Marketing, on stage to give you a demo of the new EC2 Container Service. Thank you, Werner. Good morning. So I'm really happy to be here today to give you a demonstration, a short demonstration, of EC2 Container Service. In this demonstration, we're going to show how we can use EC2 Container Service to deploy a reasonably complicated distributed application, using Docker containers, across a cluster of EC2 instances. So I'm going to start off by listing the clusters that we've got defined with the service. Right now we have just one cluster, a default one, so we'll describe that cluster to see the resources that we've got in it. And right now, we have a set of R3 instances. These are standard Amazon EC2 memory-optimized instances. For the application that I'm going to deploy, I want to also have some C3 instances in this cluster. So we're going to launch a bunch of C3 instances, which will also become part of that cluster. It'll take a few moments for those to launch. While that happens, I want to tell you a few other things about the service. We provide you with an AMI that's ready to go, with the Docker daemon, with the EC2 container service agent, and you've got lots of customization options there as well. We're also building a cluster here that is heterogeneous: it has a mixture of both R3 instances and C3 instances, because I need that mix of capabilities to run the different parts of my application. And then a few words about the nature of the application that we're going to show. It's one that lets end users upload an audio clip, perhaps of someone saying hello in English. 
And then it takes that audio clip, stores it in Amazon S3, of course, and then, using queuing and some backend audio processing, with a Redis cluster to store metadata, it ultimately takes that clip and translates it, so the English hello becomes a Mandarin Chinese hello that the user can get back. So we'll start deploying the various bits of that application. Now, the first thing that we're going to do is register a task with the cluster that we've got defined. The instances are ready, so we'll register the task. When we register a task with the service, we're basically saying: these are the resources, in terms of CPU and RAM, and this is the name of the Docker image that we're going to use; and we do that with a JSON file. I'm going to describe that particular task so we can see the CPU and RAM resources that it needs. And then I'm ready to go, so I'll start running that task. So we've now got the RabbitMQ component of our application running on one of those R3 instances. The next thing we're going to do is launch the Redis cluster. We've already registered that with the service, so all we need to do is run the task. And what you see when that Redis cluster is launched, in the visualization that we've got for this demo, is that these containers are bigger. And if we describe the Redis task, we'll see that in the definition of that task, we specified more RAM, so the EC2 Container Service knows the right place that it has to deploy that particular task. The next thing we're going to deploy is the front-end components of our application. So we'll deploy a few of them, and we'll see them get deployed into the cluster in a matter of seconds. It's a little bit difficult to see because the colors are similar, but these components of the application actually consist of two containers. 
If I describe what we've defined for that task, I've told the service not only things like the CPU and the RAM that I need for the NGINX and Node.js components of that application, but also that it's two separate Docker images. The service knows that they're going to be deployed together, so it takes care of making sure that they're in the right place at the right time. Finally, we're going to launch the back-end processes, the audio processors that are going to do the work with the audio files that users upload. It'll take a few seconds for us to get them up, and you'll see that they are deployed across different instances. They're not just on the C3s, they're also on the R3s, because when we describe the task, the service can make decisions about where to place them based on what we've told it about the resource requirements of those tasks. So all the piece parts of our distributed application are there now. To show some things, we can see the load for the front-end on the right. We've got a script we're going to run that will generate some HTTP traffic to increase the load on that. Once we've done that, if we want to better deal with that load, we'll launch a bunch more of the front-end components of the application, and we'll see that the load comes down as we've got more of those instances. All things that, with EC2 Container Service, happen in a matter of seconds. We don't need that load anymore, so we kill that script, and as we do, we're also going to get rid of the extra containers that we don't need anymore. The last thing that we're going to show in this demo: we made some changes to the audio encoding and processing part of our application, so we have a new version of that that we'd like to deploy. 
So we registered the new task, we told the service the resources that we need, and now we're going to deploy V2 of our audio processor. We run that task, and, same thing: the service knows the resource requirements for these containers, for these applications, and it puts them in the right place across our managed cluster of EC2 instances. We're going to get rid of V1 now, and we end up with the evolved version of our application. And that is the end of the short demonstration that we're showing you. It's also very easy for us to say goodbye to all of the containers that we've deployed as part of this demo. Thank you very much for your service, containers. And then we can finish off by terminating our cluster and ending the demonstration. So what we showed here with EC2 Container Service is how straightforward it is for you to take a heterogeneous cluster of EC2 instances that are managed by this service and deploy this somewhat complex distributed application. We told the service the resource requirements we needed. We told the service that certain containers needed to be deployed together, and the service took care of deploying them in the right place very quickly, making it very easy for us. We're really, really excited about this service, and we're also really excited about what you guys are going to do with it. We've spent a lot of time talking about Docker and Docker containers today. Who better to come and tell us some more about Docker, the company, and about Docker containers? I'd like to invite Ben Golub, CEO of Docker, on stage to come and spend some time talking to you about that. Thank you very much. Was that a cool demo or what? You know, Docker just turned 18 months old, and this all feels a little surreal. You know, I'm used to thinking of Docker like most 18-month-olds, you know, something that stumbles and spits up on you and keeps you up at night. 
And to be here at re:Invent and to have AWS, the cloud pioneer that it is, not only launch a Docker-based service, but do it in such a Docker-ethos-friendly way is really incredible. And by Docker-ethos-friendly, I mean they've used native Docker interfaces, they are respecting portability, and perhaps most importantly, through the integration with Docker Hub, they're recognizing that Docker isn't just a container technology, but a huge ecosystem. I also feel really grateful because between Werner and Paul and Patrick, they did such a great job of describing the what of Docker and containers that I can take a step back and talk about the why. So why did Solomon and the rest of the team create Docker? Why are we so passionate about it? It's really because we think we're entering, and enabling, a new world of applications. And it's a future of applications that's predicated on the notion that developers are fundamentally content creators. Developers are fundamentally authors. And if we look throughout history, we see that amazing things happen when you liberate authors from concerns about production and distribution. Of course, there was a time when the only books produced were by monks scribbling away in dark cellars. And then you had the printing press, and then you had the internet. And now, of course, if you want to publish something, you don't have to worry about what the packets are. You don't have to worry about which routers it's going across, whether those are the same routers that were used yesterday. It just happens. The internet is a universal publishing platform, except for that class of authors that we call developers, who today are still largely stuck in the dark ages. And I'm not trying to imply that every developer is celibate and works in a dark cellar. 
But we still spend an inordinate amount of time worrying about things like getting access to servers, or worrying about dependencies, or versions of dependencies, or rewriting work that others have done. And that really shouldn't be the case. The internet should be a universal computing platform in the same way that it's a universal publishing platform. And there are some reasons why it isn't. Most of the infrastructure that's in place now tightly links applications to infrastructure, because it was designed back in an era when applications lived a long time, when they were monolithic and built on a single stack, and when they were deployed to a single server. And all three of those things have changed. So if we think about what the distributed applications of the future are going to be, it's really quite different. The applications are rapidly changing. They're built from multiple loosely coupled components that are themselves rapidly changing. And there's a need to somehow make all of these rapidly changing applications and versions of applications and languages and frameworks work consistently together, and work consistently as a unit as you move from dev to QA to prod, but also as you go into production, as you scale across clusters, as you move from physical to virtual, as you burst to the cloud, et cetera. And in fact, we want to get to the point where we can have multiple different components running on different servers and on different clusters and still interacting consistently. Now, that's a tall order, but fortunately, we've already made a lot of progress towards this kind of model, as you saw through the demos, both Patrick's and Paul's, and we have a good roadmap to follow. And again, I think the past is a good guide here. If you've been to any Docker talks, you know we talk ad nauseam about shipping containers, simply because they're a good model for how you can have a similar separation of concerns. 
You know, in the case of shipping containers, how do you separate the manufacturer's issues from the shipper's issues, and how do you make sure you can put anything inside one and move it from ship to train to truck to crane? The shipping container industry went through five steps, and we're going to go through the same five steps as we try to get to a world of fully Dockerized distributed applications. Step one has sort of been happening for the past 10 years, and that's the great work by the people behind the low-level technology in Linux containers and Solaris Zones and BSD Jails, giving you the ability to isolate a process and run it in a lightweight way on an OS. And that's fantastic. What we've been doing at Docker over the past 18 months is steps two and three. Step two was to take that plain steel box and turn it into a shipping container, if you will: take the notion of a container but make it portable, give it the equivalent of hooks and holes, good APIs, so that it's standard and can work anywhere. And when we did this, step three happened in a remarkably short amount of time. We suddenly saw a profusion where every major operating system and every major cloud provider and every major DevOps tool supported Docker. We now have over 700 non-Docker-Inc. employees contributing to the code. We have over 16,000 projects on GitHub that provide tooling around Docker. And if you go to Docker Hub, you'll find over 50,000 languages, frameworks, and applications that the community has contributed to run on Docker. That ecosystem is so powerful that now we really can take any Linux application, package it up in seconds into a container, test it in seconds, and deploy it without modification or delay to virtually any server. And that's amazing. 
And now the next 18 months are about steps four and five, because what works really well with single containers or small numbers of Docker containers, we want to make work well for large multi-container applications running across different data centers. That's the multi-Docker application model. And as Werner said, there's a lot that we need to do around scheduling and clustering and composition and networking and storage. If you come to the Docker talk later today, you can hear about what's happening. But I think the critical thing is not only the technology, but that we do it in an open way, so that all of that portability and all of that ecosystem isn't lost when you go from single container to multi-container. And that's part of the reason why I'm so thrilled about the EC2 Container Service launch, because it really does respect this notion of being integrated with Docker Hub, using native interfaces, and enabling app portability not just within AWS, but between on-premises and AWS. That's a fantastic thing, and it will result in tremendous benefits not only for developers but for companies. Now, you heard Patrick talk about what Docker and AWS do for Pristine. I'll talk quickly about another company, Gilt, which is a joint AWS and Docker customer. Gilt, of course, is a flash-sale retail site. And as you can imagine, they want to do a lot of experiments, but they also have to deal with big surges in traffic and rapid growth and lots of different products. Before Docker and AWS, they had seven monolithic apps. It took weeks to go from dev to deploy. With Docker, they've moved from monolithic apps to 300 microservices. They go from dev to deploy in minutes, enabling them to do over 100 different experiments a day. That's an amazing result for Gilt. And they're one of thousands of customers that are using Docker. And it's not just web companies; it's now hospitals and banks and government institutions. 
And beyond them, of course, we're really excited because Docker has also just passed an important milestone: our 50 millionth download, which means that there are going to be lots of developers out there helping to drive this revolution. So we don't have time for questions, but I will address a question that I'm frequently asked, which is: which technologies will win and lose in a Docker-based, container-based world? And the answer is, I don't really know. Docker is a disruptive technology, and so some vendors will win and some vendors will lose. But I think the more interesting question is: who wins when we liberate developers from worrying about content... sorry, worrying about issues of production and issues of distribution? What happens when we liberate applications from infrastructure? Who wins when we liberate all of that creativity from the developers? And I think the answer there is pretty clear. We all win. Thank you. We all win because, you know, it's free. Yeah, or, you know, you still have to pay for the EC2 resources, of course, that you're using, but the service by itself is free of charge. Simplification drives reliability and performance. That's one of the key things, I think, in software development that we all know about. The simpler you can make things, the easier it is to make sure that your code paths are reliable, to make sure that you can build highly available applications and that you can really drive performance. And so think about what, originally, the primitives of the whole cloud were. In the earlier days, and still today of course, it started off with the EC2 instance. It started off with storage. Now we have containers. But those are primitives of your execution environment. What if we now take a look at what the primitives of an application are? Forget about the execution environment; just think about applications by themselves. There are, in my eyes, three components that sit there. 
On one hand, you have functions, which encapsulate business logic. You have data, which encapsulates business state. And then you have interactions between those, often driven by events, that connect the business logic and the data it operates on. And applications really are the magic at the intersection between these functions, events, and data. Let me give you an example of what I mean, because this is actually really simple: most of us have been using these kinds of principles for the past 30 years already. To make this simple, think about a spreadsheet. Think about a cell getting updated. What happens there? The data gets updated. It basically sends out an update event to a function that responds to the change in the data and then updates another cell. So when this data changes, this function automatically gets triggered. The cool thing is that you can have many of these functions operate in parallel, because these functions are stateless. They are scalable and they're easily parallelized. So concurrency automatically happens. These functions get executed, results get put back, and you can easily add more functions to this. If you want to add, for example, an average function, you can have it operate in parallel with all the other functions that are running. And we all know what happens next: if you change one cell, all the functions that need to operate on it automatically get triggered. So this is the world of functions, data, and triggers: events, or interactions between them. And if you think about the fact that those are actually the fundamental building blocks of the applications that you're building, imagine what we could build with this, because it would be composable, because these functions are small and you can build them quickly. They're easy to update because they operate independently of each other. 
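The spreadsheet analogy can be sketched in a few lines of Python. This is a toy model: the cell names, the trigger registry, and the functions are all invented for illustration, but they show the pattern of data updates firing stateless functions.

```python
# Toy model of the spreadsheet analogy: data (cells), functions
# (formulas), and events (cell updates) that trigger recalculation.
cells = {"A1": 2, "A2": 3, "SUM": 0, "AVG": 0.0}

# Each function is stateless: it reads the cells and writes its result back.
def update_sum(c):
    c["SUM"] = c["A1"] + c["A2"]

def update_avg(c):
    c["AVG"] = (c["A1"] + c["A2"]) / 2

# Registry mapping a piece of data to the functions triggered when it changes.
triggers = {"A1": [update_sum, update_avg], "A2": [update_sum, update_avg]}

def set_cell(name, value):
    cells[name] = value                  # the data update is the event...
    for fn in triggers.get(name, []):    # ...which fires every dependent function
        fn(cells)

set_cell("A1", 10)
print(cells["SUM"], cells["AVG"])  # 13 6.5
```

Adding the average function was just one more entry in the trigger registry; nothing else had to change, which is exactly the composability being described.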
And they have a really dynamic nature, because data is always kept up to date. So if this is such a wonderful model for building applications, why don't we do that? Because for each of these functions today, we have to run a complete stack. Some functions change very infrequently or run very infrequently, and as such you have to run a whole stack for a long time for something that may actually only run once a month. Let me give you an example of this. There's a company called WeTransfer, a great customer on AWS. They allow customers that have files too large to be sent by email to upload them to the WeTransfer service, which then sends an email to all the recipients, who can then download them. These guys are really successful. Last month they had 65 million transfers, and they transferred well over 11 petabytes of data. But on each of those uploads, they want to do a virus check on the files and they want to zip the files together. So they actually need to run a whole fleet of EC2 instances just to be able to do the virus check and the compression on each file upload. What are we going to do? Today we are going to add these primitives to AWS with a new compute service. I want to introduce to you something that's very exciting. It is AWS Lambda, an event-driven compute service for dynamic applications. You can reduce your development effort by writing no more glue code. You can respond to new and updated data quickly. You can extend applications by writing new code without actually having to change the old code. You can improve performance through concurrency, and you have to run no servers, no instances, nothing. You just write the code and it will run for you. So the focus here is on events, and these events may be driven, for example, by AWS services that will trigger these events. 
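To make the WeTransfer example concrete, here is a toy Python sketch of an event-driven upload handler: the scan-and-compress logic runs only when an upload event arrives, instead of a standing fleet waiting for work. The event shape and helper names are invented, and the "virus check" is a trivial stand-in for a real scanner.

```python
import zlib

# Toy event-driven upload handler. In the event-driven model, this code
# runs only when an upload event fires; no servers run in between events.
# The event dict shape and function names are made up for illustration.
def looks_clean(data: bytes) -> bool:
    """Stand-in for a real virus scan (checks for the EICAR test marker)."""
    return b"EICAR" not in data

def handle_upload(event: dict) -> dict:
    data = event["body"]
    if not looks_clean(data):            # the per-upload virus check
        return {"status": "rejected"}
    compressed = zlib.compress(data)     # the "zip the files" step
    return {
        "status": "ok",
        "original_bytes": len(data),
        "compressed_bytes": len(compressed),
    }

# Simulate a single upload event arriving.
result = handle_upload({"key": "photos.tar", "body": b"A" * 10_000})
print(result["status"])  # ok
```

The point of the model is that this function, not the fleet that hosts it, is the unit you write, deploy, and pay for.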
That could be, for example, an S3 upload notification or a DynamoDB stream update; we will get there in a minute. And you'll write some code. In this case, as we're starting off, we'll just provide JavaScript, and you'll be able to write code in other languages later on as well. And more importantly, this will run for you automatically, without any compute infrastructure that you have to provision for it. This is easy to use and low maintenance. You can run code without managing infrastructure. Let me repeat that: you can write code without having to manage any infrastructure. The cloud functions you write are in JavaScript, and Lambda will take care of managing, scaling, monitoring, and logging for you. It will respond very fast to events: execution of such a function starts within milliseconds after the event is triggered. Each event is then processed by the stateless cloud function that you've written, and thousands of functions can run in parallel. And this is great because your code will only run when needed. For example, if you have a mobile device that sends an update to S3, that will then trigger a cloud function; only then will the function run. You won't have to have this running all the time. There will be a very low fee per request, cost-effective at high rates and low rates, and you know you can run a function once a month, or maybe you can run thousands of these functions a second. And these events will come in many, many different shapes and forms. For example, S3 event notifications when objects are created, changed, or deleted. It could be DynamoDB Streams; remember, this launched a few days ago, where you can actually get update streams of all the changes that happen in DynamoDB. Kinesis events, when events are being added to Kinesis streams. Or they can be custom events driven by you, for example, from your mobile devices. And there are any number of use cases; for example, you want to write server-free backends. 
The example that I just gave for mobile devices, where you just want to have a back-end that can respond to any of the updates that you send. It can be data triggers: for example, you send an image file to Amazon S3 and you immediately want to create some thumbnails for it. Or it can be the world of the Internet of Things, where your sensor, when a certain temperature change happens, will be able to trigger one of the Lambda functions so that you can send an SMS message to the owner. Or it can be stream processing, where, for example, on updates in DynamoDB, you will be able to execute business functions on the data changes that happen. Or, of course, indexing and synchronization, one of the common things that customers do on our platform with S3: you upload data into Amazon S3 and then you want to run a function to extract the metadata from it and put that in DynamoDB. Let's actually go through that last example; that might be a good one. So you have a mobile device. You upload a photo into your S3 bucket, which will trigger the Lambda function that you've written, which will then extract the metadata from that photograph and put it in DynamoDB. And then it will trigger another function, over the DynamoDB stream, to figure out which of the metadata fields, for example place or location or user, are actually trending. And then you can add another function to it that notifies the customer if his or her photo is actually one of the trending photos. All of this you can write without running any infrastructure, any instances. It becomes extremely simple to build highly reliable, highly concurrent applications this way. And I'd like to invite on stage Neil Hunt from Netflix, the Chief Product Officer there, to talk to you about how they have been using Lambda to make their systems way more efficient. I'm excited to be here. 
Last quarter, Netflix delivered about 7 billion hours of video to about 50 million customers in 60 countries, and to do that, we used a lot of complex and dynamic AWS infrastructure: about 30 to 50,000 instances in about 12 zones, 50% of those instances turning over every day, almost all of them every month, petabytes of data, hundreds of thousands of files created and changed daily. Now, computing technology is built with layers of abstraction. Many years ago, we stopped using assembly language and wire protocols in favor of high-level languages. Operating systems abstracted away the management of the hardware. And with AWS APIs, for the first time, we're able to programmatically control whole systems of infrastructure, offering a new layer of abstraction. And now with Lambda, we put another new layer on top of that. With Lambda, we can replace inefficient procedural systems that would poll the infrastructure for updates in order to manage and control it. We can replace them with declarative, rules-based systems, triggered when events happen, that manage and adapt the data-processing fabric to respond to the needs and interests of what we're trying to accomplish. I've got four examples. Let's start with encoding media files. Studios frequently push media files to us for the assets we've licensed. Each time a file arrives, we chop it into five-minute chunks for parallel processing. We distribute them across a set of systems and encode them. When the last piece is encoded, we repackage it and then deploy it for CDN use. With Lambda, we can use rules triggered by the movement of those assets to launch and configure the necessary processing to encode the 60 different parallel streams that we need. And then we can use the rules and the events to aggregate and deploy after all the parts have been processed. Another example, in the space of backup: in our environment, hundreds of different processes save or update data in S3 continuously. 
And with Lambda, we can use rules that trigger on the data updates to decide what needs to be backed up, what needs to be copied to off-site storage, and to check and validate that it arrives safely, and to restart the copies and re-check and re-validate if it didn't, or raise alarms in case of failure. A third example, in the space of security: we have hundreds of different processes that start and stop instances all the time. Now, Lambda allows us to validate that each new instance is constructed and configured in accordance with the rules and situations, and to trigger shutdown of violations or notification of unauthorized instances that appear in our infrastructure. And then for a final example, in the space of operational monitoring: if we use an event-based model to track the operational metrics to build the dashboards, we can build nice models. And since the infrastructure generates the events itself, we can be confident nothing is missed and we're seeing the whole picture. And metric exceptions can trigger more rules that make further changes to the environment to compensate for changing situations. So we're excited to explore these and many other opportunities to use Lambda for rules-based systems to make our computing more efficient and more effective. It's a new abstraction layer that gets above the levels we've used in the past and promises more efficiency and cleaner logic for better control of our systems. So thanks, Werner. I've been pleased to be here today, and I'm excited to see where this product goes. Thank you. So if you think about the cost of this service, there are a number of unit costs that come along. First of all, there's the number of requests, and we'll count them per million. Then there is execution time, in units of 100 milliseconds. And then there is the amount of memory that you use, in blocks of 128 megabytes. And so the pricing will be 20 cents per million requests, and $0.00000021 (a lot of zeros, and then 21) for every 100 milliseconds per 128 megabytes. 
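Those unit prices make a back-of-the-envelope estimate easy. The Python sketch below uses the figures just described, $0.20 per million requests and roughly $0.00000021 per 100 ms per 128 MB, for a hypothetical workload; it ignores any per-invocation rounding of durations, so treat it as an approximation.

```python
# Back-of-the-envelope Lambda cost from the stated unit prices.
PRICE_PER_MILLION_REQUESTS = 0.20      # "20 cents per million requests"
PRICE_PER_100MS_128MB = 0.00000021     # "a lot of zeros, and then 21"

def monthly_cost(requests, avg_ms, memory_mb=128):
    """Approximate monthly bill, ignoring per-invocation duration rounding."""
    request_cost = requests / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    # Duration is billed in 100 ms units, scaled by memory in 128 MB blocks.
    units = requests * (avg_ms / 100) * (memory_mb / 128)
    duration_cost = units * PRICE_PER_100MS_128MB
    return request_cost + duration_cost

# Hypothetical workload: 3 million invocations, 200 ms each, at 128 MB.
cost = monthly_cost(3_000_000, 200)
print(round(cost, 2))  # 1.86  ($0.60 in requests + $1.26 in duration)
```

A dollar or two for three million invocations is the point being made: the per-request fee is low enough that running once a month or thousands of times a second are both economical.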
But more importantly, for all of you who want to try out this service, there is a base free tier for all customers each month, where you get up to 3.2 million seconds of execution and 1 million requests. I think this is a great way to get started, and it will totally revolutionize the way that you will be writing your business applications. It's available for you today in preview, so you can go and sign up and get started. We also continue to innovate on the core building blocks, by the way. And the core building blocks of AWS are, of course, the EC2 instances. I'm proud to announce that we have a totally new compute instance for you, called C4, which is based on the Haswell processor. It's a custom-designed processor, and I'm very fortunate that we have Intel on stage in a minute to talk to you about the specifics of that processor. This has more virtual CPUs than we have ever had before, and this is the fastest, highest-performance EC2 instance that you can get. It will also be EBS-optimized by default, which means that you do not have to pay extra for your EBS connectivity. We have also made great strides in networking. The network jitter that you will see between your EC2 instances has improved dramatically, and as you see at the P50, the P90, and the P99.9 percentile, these numbers are rock solid. So whether you use the R3, the I2, the C3, or the C4 instances, you get this very high-performance network with very consistent performance. Also, I'm proud to announce new EBS volumes. Earlier this year, of course, we already announced the general purpose SSD volumes; you can now have them up to 16 terabytes with 10,000 IOPS, at up to 160 megabytes per second. And of course, we have already had provisioned IOPS for you for a while. 
This allows you to get dedicated, provisioned performance between you and your EBS volumes. They will also go up to 16 terabytes, but will be able to run all the way up to 20,000 IOPS, with double the bandwidth that you get from the general purpose SSDs. Great strides, I think. Now, speaking about performance: yeah, it's time to start talking about what performance we will see tonight. And for that, I would like to invite Diane Bryant on stage from Intel to talk about the great partnership between AWS and Intel, and how they helped us create instances faster than any you've seen before. Diane, welcome on stage. Thank you. Thank you. Thank you so much for inviting us here today. There is nothing we love more than collaborating with Amazon and working with you to continue to expand and support you in the build-out of all of these new instances. The workload diversification is clear, and you have done a tremendous job growing the infrastructure-as-a-service industry through the support of this massive build-out of workloads. We've seen some great examples of that diversity of workloads this morning. So we, Intel and Amazon, are strategically aligned. We believe that the way to win is to deliver the customized, optimized solution for each of those workloads. And as you just mentioned, a great example of doing that together is your new C4 instance, the compute-performance-optimized solution. The Haswell processor that you just talked about is an exclusive processor designed specifically for Amazon, the only user of that product. It's based on our latest technology, and the optimization actually occurred through the tremendous engineer-to-engineer collaboration between our two companies. It's absolutely fabulous. Through that engineering collaboration, we were able to tune the attributes of the processor to take advantage of insights into the actual Amazon environment. 
And the result, of course, is the highest-performing solution for compute-intensive workloads, whether they're engineering workloads or scientific workloads or big-data analytics solutions. So now your customers have the fastest, highest-performing EC2 instance on the planet. So thank you very much for that. I must say, there is only one thing that we love more than collaborating with Amazon on the next generation of AWS instances, and that is throwing a party with you. So with that, I'm going to let you tell them what they have in store for tonight. Well, Diane, thank you very much, first of all for being a great partner, both on the technology side and on the party side. So for you guys, let me introduce the guest performance at the party tonight. This is going to be an awesome party tonight. But before that, we have some other things to do, of course. There's a wide range of great technical sessions this afternoon which you can attend. Also, if you're not yet sick and tired of me, I'm running three startup fireside chats and the startup launches this afternoon, if you want to attend those. With that, I hope we've given you a great view of what the future of computing in the cloud is going to be. So go build, please, now. Thank you. Ladies and gentlemen, this concludes the 2014 AWS re:Invent keynote program. On behalf of Amazon Web Services, we thank you for attending. Please join us later tonight at 8 p.m. right here for the AWS re:Play Party.