I apologize for the very bright light, but that's the best lighting I can arrange. I'm actually not at home at the moment; I'm away talking about open source. But today I'm going to talk about open source at Amazon. First, to set the context: for over 20 years, open source has been helping builders innovate and develop solutions, whether it's the tools we use to develop, the libraries we combine to make our applications, or the runtimes on which we deploy those applications. So I want to share how we think and act with open source at AWS, how we work backwards from our customers to give builders greater choice in how they develop and run their preferred open source technologies, and why I think cloud and open source together are the future of IT.

To understand how we think about open source at Amazon, we first have to look at our culture. At Amazon, we have 16 leadership principles, and these form the basis of everything done within the organization, from hiring builders, to our day-to-day interactions with each other, to how we get our work done. These leadership principles enable us to better understand how we can work together to serve our customers, and they also influence how we interact with open source. So let's take a look at some of them.

The first and most important leadership principle is customer obsession. What does this actually mean? As a company, we are customer obsessed. We don't lead with the coolest tech, and we don't focus on what competitors are doing or what delivers short-term results. We aim to build relationships with customers that outlast any one of us. It means we start with the customer and work backwards to understand their needs, their pain points, and their frustrations. It also means we read between the lines and try to discover opportunities to delight our customers.
Now, in the context of open source, this means a few things. Many of our customers are enthusiastic users and creators of open source software, and we listen to our customers when it comes to what products and services to create. 90% of the AWS roadmap, for example, is driven by customer feedback, and this is also true of open source. When customers tell us that they love open source technologies but would like to benefit from integration with other AWS products and services, we listen, and we work hard to create managed services for those open source technologies. Customers tell us that they love the choice available to them in how they run their open source workloads. For some workloads, they want to take on the responsibility of managing those open source technologies themselves. Other times, they just want to use one of the many hundreds of open source products available in the AWS Marketplace.

Customer obsession also means that we want to understand the pain points customers have when using open source software, and then help them tackle those pain points. One of the common pain points when running open source workloads is how to remove all the undifferentiated heavy lifting involved: the installation, the configuration, the patching of security updates, scaling up and down, performing upgrades, those kinds of things. Customers love the open source project, and they want to focus on using it to help their own customers. I want to take one example of this, using a project that I've been very deep into over the last 12 months. In 2014, the engineering team at Airbnb open sourced a tool that helped them build and scale their data pipelines.
Apache Airflow is a workflow orchestration tool that helps you create and manage complex workflows, and it's been adopted by many businesses and data engineering teams. It became a top-level Apache project back in 2019 and has been growing in popularity ever since. It's something I've been working with over the last 12 months, including making my first contribution to the project a few weeks ago. Now, we have customers who prefer to manage their own Apache Airflow environments and deploy them into AWS, and they want choice and flexibility in how they do that. For some customers, that might mean deploying on virtual machines. For others, it might mean deploying on container services such as ECS or EKS, or even their own Kubernetes environments. Ultimately, customers want the ability to integrate with other AWS services, such as S3 for storage, or data and analytics services such as Amazon Redshift and Athena.

There are a number of challenges around self-managing Apache Airflow. The first is setup: it's typically a very manual process, so some customers end up developing their own environment creation tools, and there are lots of choices where it isn't always easy to understand when and how to make the best one. Scaling can also be a challenge. It can be done with AWS services such as ECS and EC2 Auto Scaling, or with Kubernetes, but that brings its own complexities. For security, supporting role-based authentication and authorization typically involves a process where you authenticate in one place and then go into the Airflow user interface to authorize that particular person for a specific role, like administrator or viewer. Sometimes what we find is that customers make everyone an administrator and don't worry about it.
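To make the idea of workflow orchestration concrete, here is a minimal sketch of what a tool like Airflow does at its core: it takes a DAG of tasks and resolves the dependencies into a valid execution order. This is plain Python, not Airflow's actual code, and the task names are invented for illustration.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# A toy data pipeline expressed as a DAG: each task maps to the
# set of upstream tasks it depends on (names are illustrative).
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "notify": {"load"},
}

# An orchestrator's core job: resolve dependencies into an order
# in which every task runs only after its upstream tasks finish.
order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'transform', 'load', 'notify']
```

In Airflow itself you would express the same structure with operators and `>>` dependency arrows, and the scheduler would then run each task as its upstream tasks complete, retrying and backfilling as configured.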
And it's pretty easy to make mistakes accidentally, like opening up the web server to the world. Upgrades and patches are also challenging, as there are hundreds of Python libraries and other dependencies to keep track of, and knowing which ones are stable, which ones are required, and which ones may have security vulnerabilities can be tough. On top of that, upgrading Airflow itself can be a challenge: sometimes things don't go the way you expect, and it can be painful to roll back.

So in December 2020, Amazon released a managed service for Apache Airflow called Managed Workflows for Apache Airflow, or as I like to call it, MWAA. It provides an upstream version of Apache Airflow that's already integrated with the AWS services I've talked about, and it handles all of those operational concerns for you, allowing you to just benefit from using Apache Airflow. Other customers may want to use one of the managed services that some of our partners have made available via the AWS Marketplace. As we can see here, you have a number of options, including from Astronomer, who are one of the main contributors to and drivers of the Apache Airflow project, so you can easily start using Apache Airflow via Astronomer through the Marketplace. We also contribute new projects that help builders easily develop and test their Apache Airflow workflows locally before committing those workflows to their production environments. So that gives you an idea of the spectrum of how customers might want to operate, from self-managed at one end to fully managed services at the other. Over the past years, we've worked hard with a number of open source ISVs, including Confluent, Databricks, HashiCorp, and many others, to help them build their own cloud services with their open source technology.
Our partnership with Confluent, for example, is really representative of how we obsess over our partners' and our mutual customers' success. AWS also works with customers to help them effectively build and operate their own open source projects, while we contribute to those projects in the process. For example, AWS has partnered with Intuit and Weaveworks to develop Argo Flux, which provides a single toolchain for continuous deployment and fleet-scale autonomous workflows via GitOps. You can see other projects there too, such as Spinnaker from Netflix and Envoy from Lyft, which we've contributed to and which are actually part of some AWS services, in the case of Envoy and AWS App Mesh.

Another leadership principle is Insist on the Highest Standards. AWS services are built to meet the needs of very high-scale, multi-tenanted operations, and when we build a service based on open source, we provide customers with a fully managed, highly scalable, multi-tenanted service. Here's a look at some of the AWS services for open source across just a few of the categories, such as data, analytics, compute, and machine learning. These managed services allow customers to quickly get started using these open source technologies without having to worry about how to run them. And when we launch one of these services based on an open source project, we make a long-term commitment to support our customers. Part of that long-term commitment means contributing bug fixes, security, scalability and performance improvements, and feature enhancements back to the community and the project. For example, the Amazon EMR team has been making contributions to the Hadoop ecosystem for many, many years, and the Amazon Elastic Kubernetes Service (EKS) team has been making both code and broader contributions to Kubernetes. I want to dive into one other example.
Now, we've used Apache Lucene internally for years, and I think it was around 2019 that the Amazon.com search service decided to move 100% to Lucene-powered search. At the scale Amazon runs, we pushed Lucene to its limits, but we felt confident that if we worked together with the community, we could jointly ensure it would meet our needs. In pushing Lucene to those limits, Amazon developers uncovered some rough edges, some bugs, and some other issues, according to Mike McCandless, who's one of our senior engineers and a long-time contributor to Lucene and related projects.

AWS has always aimed to take technology that was traditionally cost-prohibitive, complex, or difficult for organizations to adopt, and make it accessible to a much broader audience. Open source is just one of the ways that AWS makes technology more accessible to everyone, and that's core to Amazon's mission of helping make open source easier to use. Thanks to the four freedoms of software, customers are able to access many thousands of open source projects. But just because you can access the source code, does that mean you can use it? Not really, because for many open source projects there's work required to get it into a state where it can be useful to you. When you think about it, you might have the source code, but you still need to install it, and you need instructions and examples so you can get started quickly.
You might need to spend time learning the project and how it works, especially if there are lots of moving parts. You're going to need to deploy the project onto some kind of infrastructure, and do that in an optimized way. You're going to need to configure security, make sure you can scale up and down to meet your needs, and many other things. On this graph you can see some of the activities typically involved before you get to the top, which is actually being able to use and benefit from the open source software. So when you dive into the details, we can see that operating open source technologies is a lot of work. We run surveys at some of our events from time to time, and 88% of customers surveyed in one particular survey said that open source leadership was important or very important to them in determining who they went to as a technology provider. But when we asked them how they defined leadership, what they said was that making it easy to use and run their preferred open source was what mattered to them.
So at Amazon we hire builders, we hire pioneers: people who like to challenge the status quo, who like to invent, who want to build the future; people who look at different customer experiences, assess what's wrong with them, and iterate and reinvent them entirely; and people who get that launching is just the starting line, not the finishing line. Many of those builders are also users of and contributors to open source software, and this influences how we think about open source, as we aim to build relationships with open source projects that outlast any one of us. We draw upon the expertise and experience of folks who have come from the likes of Red Hat, the Apache Software Foundation, and other open source communities, as well as Amazonians who contribute to a diverse range of open source projects, and this is just a small selection of those contributors and the projects they contribute to.

We have a distinctly Amazonian term for the way we organize people to optimize for innovation and execution: we call it our two-pizza team model, meaning that no team should be so big that you couldn't feed it with two large pizzas. This concept is fundamentally about creating little startups of no more than around eight to ten people. The small team size allows you to minimize matrix communication, unnecessary meetings, and bureaucracy, so it helps accelerate decision making. It also helps increase autonomy and drive innovation, because teams are freer to experiment, create tools, and move rapidly on their customers' behalf. Many of these teams develop open source projects, and this is a small sample of some of them, some of which you may not be familiar with but which I think are really super interesting, so I just want to take a look at a couple of them.
PartiQL is an open source project that provides a single query language across all your data sources, allowing you to easily and efficiently query data regardless of where it is and what format it's stored in, and you'll find that PartiQL is actually integrated into some AWS services as well. s2n, short for signal-to-noise, is an open source TLS implementation that we use, so every time you access an AWS service such as S3 over TLS, s2n is working under the covers. Amplify is an open source development framework for building secure, scalable, modern mobile and web applications, and it makes it really easy to add capabilities such as authentication, machine learning, and more. So here are some of those projects, but I want to dive deep into a couple of them.

Bottlerocket is a Linux-based operating system that we purpose-built for running containers on virtual machines or even on bare metal. Most customers today run containerized applications on general-purpose operating systems that are updated package by package, which makes OS updates difficult to automate. With Bottlerocket, updates are applied in a single step rather than package by package, and this single-step update process helps reduce management overhead by making OS updates easy to automate through your container orchestration service. The single-step update also means you can improve uptime for your containers by minimizing update failures and enabling easy rollbacks. The other benefit of Bottlerocket is that it includes only the essential packages required to run containers, which both improves resource usage and reduces the attack surface.
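The single-step, image-based update idea described for Bottlerocket can be illustrated with a small sketch of the general A/B ("flip-flop") partition pattern that image-based operating systems use. This is plain Python and purely illustrative, not Bottlerocket's actual implementation: a complete new image is staged to the inactive slot, activation is one atomic flip, and rollback is the same flip in reverse.

```python
# Illustrative A/B image-update scheme: two OS image slots, one
# active. An update writes a whole new image to the inactive slot
# and activates it in a single step; rollback just flips back.
# (Sketch of the general pattern only, not Bottlerocket's real code.)

class ImageUpdater:
    def __init__(self, initial_image: str):
        self.slots = {"a": initial_image, "b": None}
        self.active = "a"

    def inactive(self) -> str:
        return "b" if self.active == "a" else "a"

    def stage(self, new_image: str) -> None:
        # Write the complete new image to the inactive slot;
        # the running system is untouched until activation.
        self.slots[self.inactive()] = new_image

    def activate(self) -> None:
        # The single-step update: flip which slot boots next.
        self.active = self.inactive()

    def rollback(self) -> None:
        # Rollback is the same single step in reverse.
        self.active = self.inactive()

updater = ImageUpdater("os-v1")
updater.stage("os-v2")
updater.activate()
print(updater.slots[updater.active])  # os-v2
updater.rollback()
print(updater.slots[updater.active])  # os-v1
```

Contrast this with package-by-package updates, where hundreds of individual changes each have to succeed and there is no single known-good state to flip back to.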
Now, before Firecracker, when you were working with containers, you had to choose between containers, which had fast startup times and high density, and VMs, which had strong hardware-virtualization-based security and workload isolation. With Firecracker, you can have both. Firecracker is a virtualization technology purpose-built for creating and managing secure, multi-tenanted microVMs, and it underpins our serverless computing, but you can use it for other things, such as containers, as well. We've got customers who are using, innovating, and building on top of this open source project. Weaveworks, who I've already mentioned, is a startup software company that makes it fast and simple for developers to build containerized applications, and they used Firecracker when they built Ignite, their GitOps-managed VM solution.

On a different note, last year we open sourced the Open 3D Engine, a real-time, multi-platform 3D engine that allows developers and content creators in that industry to create AAA games without any fees or commercial obligations. AWS has been one of the top contributors to open source for many years, and no other company really has done more to foster the rise and success of open source. Our participation extends beyond our own open source projects to code contributions to other projects, financial support of foundations, and, I think one of the most important things, the force multiplier effect that the cloud provides for open source projects. It's easy to forget (I've been working in open source for 20 years) that while today open source is embraced by most organizations, that wasn't always the case.
In the early days, open source needed validation, and when cloud came along, built on open source technologies, it became an important way to show that open source could scale and was ready for the most demanding workloads. So cloud was a force multiplier. On top of that, we've been working with customers to provide tooling that helps them move away from proprietary software to open source. For example, we have open source tools such as the Porting Assistant for .NET, which helps customers shift their .NET Framework workloads to .NET Core, as well as tools to help move from proprietary databases to open source databases.

To conclude, there are three key points to take away from this presentation about how we think about open source. We're building strong partnerships with open source customers, partners, and communities, and we're increasing our activities, whether that's code contributions or other contributions, that together help grow the open source pie. Across both consuming and contributing to open source, the cloud is that force multiplier, the tide that lifts all boats: it enables customers to focus on and get the benefits of the open source software they love, and it allows open source software to win. By combining the best of open source and cloud to accelerate business outcomes, enable innovation, and power digital transformation, I think this is what the future of IT looks like.

Make sure you keep up to date with everything that's happening with open source at AWS by using these resources: I write a weekly newsletter, we have very active social media, and we have a blog that features some great open source content. I'm always looking to hear from builders about what projects they're working on. And with that, thank you very much for your time. At Amazon, we value your feedback on how we can continue to improve the work we do with open source.
So if you've got the time to complete this very short feedback survey, it would be much appreciated.