Now, next up, I'm very excited to bring on Subbu Allamaraju, who is the Chief Engineer for Cloud at eBay, and he's very active on our OpenStack User Committee. So come on out.

Good morning. I'm extremely thrilled and honored to be in front of you all, an expert audience who created this amazing product called OpenStack. And did I mention my sweaty palms? My team at eBay provides infrastructure and platforms for these two brands and more. These brands have a very simple mission: to connect people to do one of the most basic things, buying and selling. They have been doing this for a number of years, and you may have used them. What keeps me and my team excited at work every day is the infrastructure and platforms that power these massive businesses, moving hundreds of billions of dollars of goods and money around. For example, we have one of the largest Hadoop clusters in our data centers. It hosts over 120 petabytes of data for a variety of workloads, including, for example, a search engine, a small search engine that you may not have heard about, that turns over 1.7 million listings in about two to three minutes, around the clock, to keep the search up to date. We have a metrics system, built on open source technologies, that writes about 2 million metrics a second. Our developers write around 10,000 applications for these businesses, and all of those are deployed, unattended, through automation that our teams build on top of the cloud, some of which is running on OpenStack and some of which is not yet. Collectively, we do about 150 deployments on any given day. Again, these are not great numbers, and these are not where we want to be. But what we're excited about is the opportunity that we have with OpenStack at the foundation of our infrastructure.

For someone doing infrastructure, what this means is that there is a ton of infrastructure: a ton of servers, storage, and network in our data centers across multiple geographies that we have to automate and hand off to the business, so that they can run their applications and keep things going in a lights-out mode. And we are big fans of open source. A lot of what we do at eBay and PayPal, from top to bottom, from the pixels down to the infrastructure, uses open source at many, many layers. And of course, OpenStack is a natural fit in that stack of things.

Our journey into OpenStack started in 2012. We had experimented a few times early on, prior to Essex, but we weren't sure whether OpenStack was ready for us to spend time on it. The summer of 2012 was the first time we had the confidence to say, OK, let's put up an OpenStack deployment and see how it goes; we wanted to learn from it. So we scavenged about 300 nodes across our data centers. There was no funding. It was not a sponsored project with a roadmap and KPIs set up front. But some of us wanted to try it out. I was one of the first engineers on the team who had the opportunity to go through that journey, and I'm excited to share it with you all here. Over the next three years, from the summer of 2012 to the spring of 2015, which is now, we had an amazing journey of putting OpenStack to work at massive scale. We're still running Havana. Our teams are working right now on the next batch of upgrades, and those tests are happening now. Hopefully by the end of next month, we should be running on either Juno or Kilo; we are debating that.
Those 300 hypervisors that we had for our cloud in 2012 grew to what will be around 12,000 hypervisors by the middle of next month, all of which are going to be running OpenStack in our data centers. Last time I checked, we had provisioned over 300,000 virtual cores on the cloud, and I think we have provisioned about 70,000 virtual machines on these hypervisors. More and more is happening every day; I can't keep up with the capacity crunch we face every day. We have distributed OpenStack across multiple availability zones, running in multiple data centers. These are shared-nothing deployments of OpenStack, except for Keystone; there is no other shared component across these availability zones. Each of these availability zones supports one or more virtual private clouds. This is our way of slicing and dicing the compute, network, and storage capacity for different security reasons. One of those VPCs, for example, is a dev VPC, which is the most popular in the company, because anybody can get access to it with their corp credentials. There's nothing between the user and the infrastructure except an API, and that is OpenStack. We have provisioned over 1.6 petabytes of block storage through Cinder. 100% of our compute is running on KVM, which means some of our team members had to cancel their trip at the last minute; they spent the last four days, over the weekend, restarting our cloud. Of course, you must have done that too, I hope, before you headed off to the summit. And we run 100% OVS (Open vSwitch) today on OpenStack.

These are interesting numbers for me to brag about in front of you, but what did it mean for the business? During the holidays of 2014, November and December, we were running 66% of PayPal's front and mid tiers on OpenStack. That number is now 100%; I think we crossed it in March this year. If you have doubts about OpenStack running in production, this is a proof point that OpenStack is doing transactions, financial transactions. If you go on the web, use any e-commerce provider, buy something, sell something, or go to ebay.com, buy something, sell something, and use PayPal as the payment provider, that transaction is now going through OpenStack. My team is extremely proud of pulling that off. Similarly, nearly 20% of ebay.com is running on OpenStack today. We have a lot more workloads to migrate to OpenStack, and that journey is going to continue this year. I would love to come back here some day to talk about the journey from 20% to 100%: what it took, how we scaled, how we improved our operations to do that.

The third and most interesting thing for me and my team is that the developers love OpenStack, the APIs. As I said before, there is nothing between our developers and the infrastructure except a dashboard, a set of APIs, or a CLI. I can log into the dashboard using my corporate credentials and get the compute I need, when I need it. Get a load balancer, make DNS entries, get a database. I can do all of that in minutes and do whatever I want with it. There's nothing between me and the infrastructure; it's completely programmatic. That has meant a lot of agility and innovation in the company. Our developers today cannot live without the cloud being there. It is a given: they expect it to be there, they expect the capacity to be there, they expect it to be running all the time. So, no wonder, all of our dev and test workloads today run on OpenStack.
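To make that self-service flow concrete, here is a minimal sketch using the openstacksdk Python client. The cloud, image, flavor, and network names are hypothetical placeholders; this is not eBay's actual tooling, just the kind of purely programmatic access being described.

```python
# Minimal sketch of the self-service flow described above, using openstacksdk.
# "dev-vpc" and the image/flavor/network names are hypothetical placeholders;
# eBay's actual tooling is not shown in the talk.
import openstack

# Credentials come from a configured clouds.yaml (or OS_* environment
# variables); in the dev VPC described above, these are just corp credentials.
conn = openstack.connect(cloud="dev-vpc")

# Get the compute I need, when I need it.
image = conn.compute.find_image("ubuntu-14.04")
flavor = conn.compute.find_flavor("m1.medium")
network = conn.network.find_network("dev-net")

server = conn.compute.create_server(
    name="my-app-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)

# Block storage through Cinder, in the same programmatic way.
volume = conn.block_storage.create_volume(size=100, name="my-app-data")

print(f"server {server.id} is {server.status}, volume {volume.id} created")
```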
This journey didn't happen overnight. We went through many iterations to figure out how to move from 300 hypervisors to the over 12,000 we will have in a month or so. As you know, OpenStack is not a cloud; it is cloud controller software. We all know that. It's awesome at that, but it's not a cloud. It's not a cloud in a box. We have to put it to work. We have to run a service, using OpenStack as the interface to the infrastructure. So we built our muscle over the last three years. We built a stack of technologies and tools to help us get to that scale and operate it without losing sleep. For example, when I started in 2012, we had no automation to speak of. In 2013, we automated the heck out of it. In 2014, we noticed that automation alone was not going to work at scale, so we had to worry about configuration drift, keeping a fleet of thousands of hypervisors in sync with the desired state; there's a small sketch of that idea below. Same thing for monitoring: zero monitoring in 2012. You could call it an organic cloud. It was handcrafted; there was nothing operational around it. In 2015, we still have alert noise issues, because now we have too much noise, too many alerts. We're still figuring out how to detect problems before our customers find them.

Upgrades were another adventure. We had our first upgrade, from Essex to Folsom, in the spring of 2013, I think. And we had no clue how to do that. There was no clear upgrade path from Essex to Folsom, so we ended up building a separate deployment and moving the workloads over. We built a tool to automate that process; one of my colleagues gave a talk yesterday on how we did it. The second upgrade, which was much more mission critical, was Nova Network to Neutron, which we did twice in the last six months. We had two availability zones running Nova Network, and there were a lot of debates about how to do the upgrades. Our team spent the time to pull off those upgrades and make sure that the business could count on OpenStack. And we did that, and we shared that experience at the last summit; our team members gave a talk about it. Scale-out, same challenges. We were always unsure of what was going to happen next when we added a big batch of capacity. So we went through many iterations on that too. What this meant is that it has been a journey of building the operational muscle to run at scale, getting cuts and bruises along the way. One of my colleagues, Trini, is going to give a talk, I think today or tomorrow, covering all the stuff on this slide and the processes that we have put in place. And we are still iterating on that.

Now, we are not done with OpenStack. We have scaled up, we have moved a lot of workloads, and we're going to keep moving more. But we are asking some fundamental questions. As you know, in most data centers, the cloud runs only a part of the data center. That is a fact for most enterprises. We want to go beyond that. We want to create the choice for our customers so that the cloud can run entire data centers. We are not there yet, and we are working on that. Second, we want to help improve resiliency and efficiency for applications. I don't expect every developer to handcraft an application from scratch and deal with availability and resiliency and design it for failure. That's a challenge, and a lot of developers shouldn't have to worry about it. So we are excited about some of the stuff that I saw in the demos just before my talk, and we are looking forward to adopting some of it.
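Going back to the configuration-drift point for a moment: below is a toy illustration of the idea, comparing each hypervisor's reported state against a desired state and flagging anything out of sync. The field names and version numbers are hypothetical; eBay's actual drift tooling is not described in the talk.

```python
# Toy illustration of configuration-drift detection across a hypervisor
# fleet. Field names and version numbers are hypothetical placeholders.
from typing import Dict, List

DESIRED: Dict[str, str] = {
    "nova_version": "2013.2.4",  # Havana-era package pin (illustrative)
    "ovs_version": "2.1.2",
    "kvm_module": "loaded",
}

def find_drift(fleet: Dict[str, Dict[str, str]]) -> List[str]:
    """Return a drift record for every hypervisor whose reported
    config differs from the desired state."""
    drift = []
    for host, actual in sorted(fleet.items()):
        for key, want in DESIRED.items():
            have = actual.get(key)
            if have != want:
                drift.append(f"{host}: {key} is {have!r}, want {want!r}")
    return drift

# Example: one host in the fleet has slipped out of sync.
fleet = {
    "hv-0001": {"nova_version": "2013.2.4", "ovs_version": "2.1.2", "kvm_module": "loaded"},
    "hv-0002": {"nova_version": "2013.2.1", "ovs_version": "2.1.2", "kvm_module": "loaded"},
}
for record in find_drift(fleet):
    print(record)  # -> hv-0002: nova_version is '2013.2.1', want '2013.2.4'
```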
So let me take a step back and show how we are looking at creating that choice for our customers, from the bottom up. From the bottom, we are looking at how we design the network for massive scale, so that we can control and program the entire data center through our APIs. We are investing in compute flexibility so that more and more of the workloads that run on bare metal today can run on OpenStack. We are investing in compute flavors, and we are investing in Ironic, thanks to the Kilo release of OpenStack. The premise is that every piece of compute we provision in a data center goes through one single API and experience. Same story on the storage side. We are experimenting with more volume types behind Cinder, and spending time on LBaaS and Designate, so that entire compute clusters can be provisioned in minutes using OpenStack APIs. This is about giving choice to our customers. Awesome. But what about the top of the stack, where we want developers to do less, so that certain qualities like resiliency are built into what they do? That's where active cluster management comes into play. We are really excited about Kubernetes, containers, and Mesos being there, providing the building blocks and patterns to enforce efficiency, agility, and reliability for our applications. So it's about giving choice at the bottom of the stack and reducing choice at the top of the stack, to enforce patterns for efficiency. I'm excited about it, and my teams are working hard and committing to these projects to put them in place.
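As a sketch of that "one single API" premise: with Ironic behind Nova, the same boot call can provision either a virtual machine or a bare-metal node, selected purely by flavor. The cloud, flavor, image, and network names below are hypothetical placeholders, not anything from eBay's environment.

```python
# Sketch of the "one API for every compute" premise: the same Nova boot
# call provisions a VM or, via an Ironic-backed flavor, a bare-metal node.
# All names here are hypothetical placeholders.
import openstack

conn = openstack.connect(cloud="prod-az1")

def boot(name: str, flavor_name: str, image_name: str, network_name: str):
    """Provision compute through the single Nova API, virtual or bare metal."""
    flavor = conn.compute.find_flavor(flavor_name)
    image = conn.compute.find_image(image_name)
    network = conn.network.find_network(network_name)
    server = conn.compute.create_server(
        name=name,
        flavor_id=flavor.id,
        image_id=image.id,
        networks=[{"uuid": network.id}],
    )
    return conn.compute.wait_for_server(server)

# A virtual machine...
boot("web-01", "m1.large", "ubuntu-14.04", "prod-net")
# ...and a bare-metal node: same call, same experience, different flavor.
boot("hadoop-01", "baremetal.general", "baremetal-ubuntu-14.04", "prod-net")
```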
But we need help. As an operator, as a large user, we need help from you. As you go into the design summits, every decision you make has a huge impact on how we do our business, on how we use open source technologies. Number one, we need to raise the bar on the core. The number of committers to OpenStack has increased in the last two years, from 950 at the end of Havana to 1,500 right now. Awesome. The number of projects has increased from 9 to 15, which is a bit worrisome as an operator, because if you have attended any operator sessions, you will have heard concerns like: does Heat work at scale? Does Ceilometer work at scale? Has anyone put Zaqar in production? We don't know the answers. So we need an operational bar on what we call the core of OpenStack, whatever that may be. And we need you to think about that. Second, we need to productize operations. For every success of OpenStack, there are failures of OpenStack; we hear about that all the time. The number of attendees at the OpenStack operators summits, for example, has increased from around 60 last year, when we had the first one in San Jose, to about 200 at Comcast three months ago. What this shows is that a lot more operators are participating in the community. But I found an interesting metric when I was looking at the user survey data last week. I had a chance to look at the data before you all did, because I am on the User Committee. I was looking at the number of production deployments, which went up, which is good. But about 89% of operators are deploying or running a code base that is at least six months old. Not bad, because Kilo just came out. If you look further back, 55% of operators are running a code base that is at least 12 months old, and 21% are running a code base that is 18 months old or older. What this means is that while these operators are very likely participating in the design summits, they are coming to the summits, they are perhaps not thinking about Liberty.

They are actually worried about: how do I upgrade? How do I keep my cloud going? These are the day-to-day worries they may be having. I hear the same issues when I go to operator summits. So this is just one example of manageability missing in action. I think we need to think about how you manage a large deployment, and how you make manageability a feature of OpenStack itself. I'm sure I don't have the answers to these questions, but I want to pose them so that we can find some reasonable common ground to move forward. Third, I've seen a lot of vendors here. That participation has increased, which is extremely important for us, because we have a ton of infrastructure that needs to be programmed through standard interfaces. It's awesome to see that commitment, and we want to see more of it. We want vendors innovating in the compute, network, and storage spaces to stand behind open interfaces, and OpenStack is that interface for us, and to innovate behind it. Not fight it, work with it. Thank you. As I said before, every design decision you make has a huge impact on us. May the best decision win. Thank you.

Thank you, Subbu. Amazing story. Another awesome leader in the OpenStack community giving back. And 300,000 cores added to our OpenStack-powered planet; it's pretty amazing what they're doing over there.