So, I found a parking spot. This car is going to be here all week in the Expo hall upstairs, so I hope you will all take the opportunity to go back, take a look at it, and get some pictures. It's a pretty sweet vehicle, I have to say, even though I didn't really get to see how fast it goes here on the stage. So we heard from BBVA, and we heard from BMW; those are both European users. Next up we're going to hear from a user from the United States, and this is a case study that I'm really excited about, because it's from Time Warner Cable, a team that has done some amazing things in a very short period of time. I hope you will find it really interesting. Please help me welcome, from Time Warner Cable, Matt Haines.

Thank you, Jonathan. I am happy to be here today to share our OpenStack stand-up story and how it fits into some of the exciting changes we have happening at Time Warner Cable. Time Warner Cable is the second largest cable provider in the United States, providing television, broadband, phone, and business services to 15 million customers in major markets including Los Angeles and New York City. For the past several years, Time Warner Cable has been on a mission to change the nature of television: moving from a world in which we serve content at very specific times to a few TVs in your home, we're embarking on a journey that serves any content, to any device, any time, anywhere. We think it's pretty exciting. Let's take a look.

[Video] Welcome to a new way to experience TV. Watch what you love, anywhere you want, on your devices, with TV from Time Warner Cable. It's more than just your favorite shows and movies; it's TV the way you want it, with the TWC TV app, the number one rated pay-TV distributor app, which makes it possible to transform every screen in your home into a TV. Watch more of what you love on your devices, only with Time Warner Cable.
So that's an exciting future that we're in the middle of, and bringing it into place requires a significant number of changes in technology, culture, and organization within the company. From a technology perspective, we have a whole new set of set-top boxes and the software that powers them, new architectures for transcoding and delivering content, and software platforms for a wide range of consumer devices. From a cultural standpoint, we're evolving from slow and methodical to rapid, fail-fast delivery, matching the rapid evolution of our device platforms. And from an organizational standpoint, we are starting to evolve into DevOps organizations that can deliver on this culture. For a cable company, these are pretty exciting times.

A year ago, I came to Time Warner Cable to transform our infrastructure into a self-service platform that would be able to support this new vision for television, and one that would help drive the transformation of technology, culture, and organization within the company. Having worked with OpenStack at scale since the Diablo days, I knew it had the potential to deliver what we needed. So after grabbing a couple of folks from inside the company and a couple of folks from outside, we sat down late last year to do a little stand-up planning. And as anybody who has stood up OpenStack at scale knows, there are a number of decisions that need to be made before getting started. Which set of services are you going to deploy? How many data centers or regions will be involved? What are the provider and tenant networks going to look like? Networking, at the end of the day, is everything. What are the system workloads, and, importantly, what kind of operational stability do you need for the environment? Complicating our designs was my desire to have a certain set of operational requirements in place. Because we were going to run in multiple data centers, I wanted global identity to make it seamless for our customers to move between them.
I wanted to be close to trunk to take advantage of fast-moving projects like Neutron, and also to be able to stay vendor-agnostic. I wanted automated deployments to make it easy to stay close to trunk. I wanted an HA and DR control plane to give us enterprise stability. Geo-redundant object storage was very important, so we could give our customers a very safe place to put their data. In short, I really wanted operational maturity for enterprise applications at service-provider scale.

With the holidays fast approaching, I needed a little advice; our little four-person rock band was being stretched a bit thin. I reached out to Lou Tucker and Joe Arnold and asked them for some help on the early designs of components for object storage and networking. And just before the holidays, we ordered some hardware, so that when January came around we'd be ready to rack and stack and get this going. We started the new year, and I added one final requirement: I wanted to be open for business in six months, meeting all the requirements and at the scale that would drive this business. So, needless to say, we were off and running.

Well, not surprisingly, we tripped a few times. Our tenant networking designs changed several times, moving from a flat network architecture similar to Nova networking to one that uses the latest SDN support in Icehouse Neutron, with an ML2 plug-in and a VXLAN overlay. And those changes rippled through our provider network, causing us to go touch all of our servers. Another bump in the road for us was the desire to have live migration with attached volumes, an operational requirement I thought was extremely important. Our initial design for block storage was a SAN Fibre Channel platform, and that was a little too far out in front of where the community was early this year. So once again, we adjusted.
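The tenant-networking design described here can be sketched as a minimal Neutron ML2 configuration. The driver names match Icehouse-era Neutron, but the VNI range and tunnel address below are illustrative assumptions, not Time Warner Cable's actual settings.

```ini
# ml2_conf.ini (sketch): ML2 plug-in with a VXLAN overlay for tenant networks
[ml2]
type_drivers = flat,vlan,vxlan     ; provider networks stay flat/VLAN; tenants get VXLAN
tenant_network_types = vxlan
mechanism_drivers = openvswitch

[ml2_type_vxlan]
vni_ranges = 1:1000                ; illustrative VNI pool for tenant segments

[agent]
tunnel_types = vxlan

[ovs]
local_ip = 10.0.0.11               ; per-hypervisor VXLAN tunnel endpoint (assumed address)
```

Note that settings like `local_ip` are per-hypervisor, which is part of why a change to the overlay design means touching every server.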
We racked a new set of Ceph block storage nodes, retouched all of our servers again, and we were off and running. By the end of Q1, we were well on our way: the team had Havana up in two regions, and we were iterating toward Icehouse. The team started to grow, and, importantly, experienced hands joined the team to iterate on those initial designs and improve them. And by the end of Q2, just six months after we started, we were open for business at Time Warner Cable. With initial capacity for thousands of VMs and hundreds of terabytes of usable object and block storage operating out of two data centers, we started to onboard applications, the same kind of applications that are changing the way we view TV.

Since then, we've continued to grow and mature. The team is now about 15 folks located in three geographic regions. We've invested in hardening and maturing the platform. We've invested in the community by contributing upstream to a number of projects. And we've more than doubled our initial rollout capacity in Q1, with a production-mirror staging environment as well that helps us stay close to trunk. Today, in fact, our automated tooling keeps us as close to trunk as we would like to be. Going forward, we're looking at adding a number of new services to make it easier to build and manage applications: DNS, load balancing, messaging, and the like, and whatever else our customers say they need to build the kind of applications that deliver their value. Underlying it all, however, are three important objectives that I was given when I came to the company: build a scalable infrastructure, make it extremely stable, and make it cost-effective. We're also spending a lot of time helping other teams at Time Warner Cable use OpenStack as a vehicle for changing the way they develop, deploy, and support their fast-changing applications.
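A Ceph-backed Cinder deployment like the one described makes every volume network-attached from every hypervisor, which is what enables live migration with volumes attached. A minimal sketch of such a backend follows; the pool name, Cephx user, and secret UUID are illustrative assumptions, not Time Warner Cable's actual configuration.

```ini
; cinder.conf (sketch): a single Ceph RBD backend
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes                 ; assumed pool name
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder                  ; assumed Cephx user
; libvirt secret holding the Cephx key on each compute node (illustrative UUID)
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
```

Because every compute node reaches the same RBD pool, an instance can be live-migrated (e.g. `nova live-migration <server> <host>`) without detaching its volumes.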
So for anyone out there who's thinking of standing up a mature private cloud capable of supporting enterprise applications, rest assured: today OpenStack is mature and up to the task, and capable partners are out there to help, as these partners have helped us. I want to thank the OpenStack team at Time Warner Cable, who have worked tirelessly over the past several months to deliver on these lofty requirements and a kind of crazy, aggressive timeline. A few of the Stackers are here; they'll be presenting the details and the nitty-gritty of the stand-up in a session Thursday morning at 9 a.m., hosted by SwiftStack over at the Meridian. I encourage you to take a look at that if you want to know the real details; you can sign up at the SwiftStack booth. And finally, I want to thank all of you who continue to deliver great community-developed software. As I said at the beginning, delivering on this ambitious plan of any content, anywhere, on any device requires significant change. OpenStack, and the DevOps team at Time Warner Cable that designs, deploys, and supports it, are examples of change in our technology, culture, and organization. And we continue to use them, and OpenStack, to drive change throughout all of Time Warner Cable. Thank you.