Next, I'm very happy to introduce Weston Jossey from TapJoy, so come on out.

Good morning, everyone. You all are a little hungover. It's okay, you can admit it. You were all out last night. It's fine. Well, thanks for having me. As you said, my name is Weston Jossey. I run operations over at TapJoy, a mobile advertising company. I'm not here to announce that we're pivoting and building our own Large Hadron Collider, unfortunately. It would be a fun pivot, but it's not something we're very good at. So let me tell you a little bit about our story and our first year on OpenStack.

So, some history. What is TapJoy exactly? Well, we are a global app tech startup. So what exactly does that mean? For mobile developers all around the world, we power a couple of things. One, first and foremost, is monetization: how do you monetize the users coming to your app so that you can run your business? Two is analytics: we want to give you a clear and concise way of understanding how your business is performing and how you might need to change it. Three is user acquisition: we help get users onto your platform so that more people are using your app, your game, your newspaper app, whatever the case may be. And then there's user retention: how do we keep the users you already have from leaving the platform they're already on?

So, TapJoy is quite large. We have over 450 million monthly users across 270,000 apps. It's an absolutely massive platform to get to work on on a daily basis. We have a worldwide presence; we're in basically every single country in the world. The thing I like to say is that the sun never sets on the TapJoy empire, because in the middle of the night when I'm sleeping, I still get pages occasionally, because people are still using it in Japan somewhere. So it's definitely worldwide.

So, a little bit of technical detail.
As Mark was talking about earlier, I am that guy who does use AWS. In fact, we use it a lot. We grew predominantly on AWS for our first two years. To date, we have right around 1,100 VMs on AWS running at any given moment. That figure was taken last month, and it will probably keep going up every month from here on out. We have active regions in Asia, Europe, and North America, and annually we now process over one trillion requests, which is a lot to handle.

So, OpenStack. It's not just AWS for us; it's also about OpenStack. In early 2013 (and no, 2015 hasn't happened yet), we began assessing the viability of buying our own hardware and starting to divest ourselves of just using AWS. We analyzed the landscape, we looked at a couple of different vendors and a couple of different options, and we narrowed in on OpenStack very quickly. It was clearly the winner, clearly the front runner. And the reason for that is what you see in this room today. It's all of you. It's the community that's come together around this platform. It's the fact that we have people like CERN and Expedia pushing upstream to the platform. It's clearly going to win, and we wanted to get behind it.

So, what we decided to do was start looking for an OpenStack partner, a hardware vendor, and a colo provider, and we started doing all of that in the summer of 2013. It was important for us to find partners who were willing to work with us on our project, because in-house we didn't have that expertise on OpenStack. We didn't necessarily have wide, expansive expertise in how to build out the infrastructure we wanted to build, because we were very, very good at AWS. I can go in and shoot the shit with the best of them, so to speak, when it comes to AWS. But with OpenStack, I'm a newbie. I'm going to a bunch of the talks here, because I'm learning more and more about OpenStack every day.
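To put that trillion-requests-a-year figure in perspective, it averages out to a little over 31,000 requests per second, before accounting for traffic peaks. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope: average request rate implied by
# one trillion requests per year (ignores traffic peaks).
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000 seconds

annual_requests = 1_000_000_000_000  # one trillion, per the talk
avg_rps = annual_requests / SECONDS_PER_YEAR

print(f"average: {avg_rps:,.0f} requests/second")  # roughly 31,710
```

Real-world peaks typically run several times the average, so a fleet handling this volume has to be sized well above that number.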
So, we really needed to have some good people with us.

So, the plunge. I have a great guy on my team, James Moore. He's unfortunately not here today, but he basically helped me build our OpenStack deployment from day one. It's really his baby. He did so much of the hard work, and honestly, he'd probably be better up here than I am. We decided to build out what we call TapJoy 1, which we built as a unit for our data science department. At a company like TapJoy, we're obviously analyzing a bunch of analytics and a bunch of data on a daily basis, and we need a lot of different applications to run on that cloud, and to run performantly. So we run Hadoop, HBase, and a bunch of complex data modeling, and that really powers our brain. It helps us make decisions in the moment for all the users who come to our platform.

What we did was define our requirements for what we wanted to build from the ratios we were working with. For us, it was really important to think about how many CPUs to disks, and what the appropriate ratio of RAM per CPU was. Those are the sorts of things we started looking at, and we weren't necessarily going to get that on AWS. When we started to build these requirements, we got to work from a blank slate, and it was the first time we'd ever really been able to do that, because we got to define exactly what we wanted to build, exactly how we wanted it to look, and exactly the right ratios.

So, we launched this summer, in June of 2014. We cut over with zero downtime, and since we went live, we've also had zero downtime. If there were some wood up on stage, I'd be knocking on it furiously right now, because if you're an ops guy, you get terrified when you talk about downtime. But so far, so good.

So, what does it look like? Well, we have 348 all-purpose data nodes that we built out.
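That ratio-driven sizing can be sanity-checked in a few lines. The target ratios below (one spindle per core, 8 GB of RAM per core) are inferred from the node specs described in this talk, so treat them as illustrative rather than official numbers:

```python
# Illustrative node-sizing check: compute the ratios the talk calls out
# for a candidate node spec. Targets are inferred from the talk,
# not official TapJoy requirements.
def node_ratios(cores, spindles, ram_gb):
    """Return the two ratios used to evaluate a candidate data node."""
    return {
        "spindles_per_core": spindles / cores,
        "ram_gb_per_core": ram_gb / cores,
    }

# The data node described in this talk: 4 cores, 4 x 1 TB drives, 32 GB RAM.
ratios = node_ratios(cores=4, spindles=4, ram_gb=32)
print(ratios)  # {'spindles_per_core': 1.0, 'ram_gb_per_core': 8.0}
```

Working from a blank slate means you pick these ratios first and then shop for hardware that hits them, rather than bending the workload to fit a cloud provider's fixed instance shapes.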
It's a 3U configuration with 12 nodes per 3U chassis, and in each node we have an E3-1265L v3: a four-core, eight-thread, 2.5 GHz processor. These are the low-power chips, and you'll see why in a minute. We have four one-terabyte drives per node, so that's one spindle per core, and 32 gigabytes of RAM. Each node has dual 1 Gb NICs, for fully non-blocking networking throughout our entire infrastructure. Another reason we chose this particular setup is that it's very flexible. It's very recyclable if we want to use it in the future for our app servers, database servers, or whatever the case may be.

We also have 12 management nodes. These are a little less sexy, a little less interesting, but they're E5-2650 v2s at 2.6 GHz with 128 gigs of RAM. These guys have the SSDs in them, and they have dual 10 Gb NICs coming out.

So for us, it was a density play. We did all of this in only three racks, and each rack can draw upwards of 17 kVA. The reason we were able to do that is that we had good partners. The people who worked with us were willing to go that extra mile to help us design the infrastructure we wanted, and to be creative. Metacloud contributed very heavily to our network design and helps us power our OpenStack deployment; we are running what's basically a Metacloud variant of OpenStack. And Equinix is our colo provider, and they did a fantastic job helping us design the cooling and the power requirements to make this work. We are running this on the East Coast. Our data center is actually in Virginia, right next to the AWS facilities, and getting this sort of power density is not necessarily the easiest thing in the world.

So, some of the tools that we use. We have two open source projects that have recently come out, and I hope all of you go and take a look at them today. We have a tool called Slugforge, which we released, which is basically Capistrano meets containers.
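Before getting to the tools, the density numbers above hang together if you do the arithmetic: 348 nodes at 12 per 3U chassis is 29 chassis, or 87U of compute, which works out to 29U per rack across three racks, leaving headroom in a standard 42U rack for the management nodes and networking. A quick check (treating kVA as roughly kW for the per-node power estimate):

```python
# Rack-density arithmetic from the figures in the talk.
nodes = 348
nodes_per_chassis = 12          # 12 nodes per 3U chassis
u_per_chassis = 3
racks = 3
kva_per_rack = 17

chassis = nodes // nodes_per_chassis     # 29 chassis
compute_u = chassis * u_per_chassis      # 87U of compute total
u_per_rack = compute_u / racks           # 29U per rack

# Rough per-node power ceiling at full rack draw,
# approximating 1 kVA as 1 kW.
watts_per_node = kva_per_rack * 1000 * racks / nodes

print(chassis, compute_u, u_per_rack, round(watts_per_node))
```

That per-node budget of roughly 147 W is why the low-power E3 chips matter: the density play only works if each node stays well inside it.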
Slugforge is a great deployment tool for rolling out your code or your infrastructure in a containerized way. It's really fantastic, and I hope you come take a look at it. We also have Chore, which is our open source, pluggable-backend queuing system. We use it to plug into systems like SQS, file systems, and a couple of internal queuing systems that we're working on at the moment.

So, I want to get a little philosophical with you now and talk about why I think OpenStack is going to win, and why it matters. I don't know if you've seen this campaign yet: it's the new Android L campaign that Google's been running, the "Be together. Not the same." campaign. I think it's really powerful, and I think it's a good metaphor for the entire OpenStack community.

So what is it all about? It's about the fact that fragmentation is okay. Fragmentation was one of the big knocks on Android when it first came out. It was: well, how do I design for it? As a mobile developer, what's the right screen resolution I'm working with? What processor is actually running on the system? How much RAM do I have to work with? Everybody was ragging on it, saying this is why Apple's going to win: I've got three devices to code for, or one, at any given moment, and that's it. But no, Android was right. There were 11,000 distinct Android hardware variants in 2013, according to OpenSignal, and I'm sure that number's gone up over the last year. That's a mind-boggling amount of hardware configurations out there to choose from and play with. It's absolutely amazing. And the way Google did it is that they iterated on it, they expanded on it, and they improved on it, and all the hardware vendors jumped on as well.

So here's my challenge to all of you. I think we can get to 10,000 unique variants of OpenStack over the next couple of years. A few sizes do not fit all.
As somebody who uses AWS on a daily basis, I can definitely tell you there are times when I wish I had a completely different system to work with. A lot of times I want a lot more disks than they're willing to give me, and I don't necessarily want to pay through the nose for it. There are only seven modern instance variants on the AWS platform right now. That's it, seven; that's all you have to work with. You may get them in slightly different sizes, but under the hood it's just seven.

So I think we can create 10,000 different ones. I think we can create an ecosystem that is flexible, uses fantastic hardware, and is all built on top of this great core foundation that is OpenStack. If you need a lot of CPU, throw in a bunch of CPU. If you need a bunch of disks, like we did (we needed four terabytes per node), throw in a bunch of disks. If you want a bunch of RAM, toss in a bunch of RAM.

So I want to know who's got the great reference design for the Xeon E5, because every day James Moore is pitching me on how we need these crazy CPU-driven compute nodes sitting in our architecture, and I want to know what other people are already doing with it and why it works for them. I want to know who out there is already expanding with some of these one-and-a-half-terabyte vertically scaled nodes, the E7 series, specifically the E7-8893, because it's pretty interesting to think about how that would work in a VM-based environment. Backblaze also just came out with their 180-terabyte pods, where they blew Amazon out of the water in terms of cost efficiency on storage; I want to know who's going to figure out how to integrate that into their core OpenStack platform. And honestly, I want to know what your workload looks like, because I want to use it to help influence our designs, and how we start to think about our hardware going forward.

So what will you choose?
What's going to be your variant? How will it look for you?

So how do we get there? Well, with a lot of hard work. It can take a long time to spin up your first OpenStack deployment. If you haven't done it already, it's going to take a little bit of time, but that's okay; you've just got to stick with it. It took 12 months of effort for us at TapJoy, and there were some bumpy points along the road, with some delays in hardware back in January and February. But we made it.

I also want to know: how do we get cheaper? How do we continue to drive down the cost of deploying your very first, or multiple, OpenStack deployments, so that there's no question about the cost savings you can derive by doing it yourself? And how do we make the up-front cost hurt a lot less, so that it feels more like paying a monthly bill? Because honestly, going and asking for a multi-million-dollar check from your board is not exactly the most pleasant process I've ever been through in my entire life. It's like going and asking for a seed round. And how do we win the hearts and minds of the next thousand startups?

So let's be together, not the same. Don't just copy your infrastructure; iterate on your design. Find some great partners. Hire hard workers. And it's okay to make mistakes; ask for help and share your story.

If you want to hear more, I'm giving another talk tomorrow that goes into more detail about exactly how we accomplished what we accomplished; we're in Sal Pasi tomorrow. I'll also be doing a little Q&A on Twitter; I'm @dustywest, or you can email me directly at west@tapjoy.com. And yes, I am hiring, in Boston, Atlanta, San Francisco, and Seoul, South Korea. So if you're interested, come let me know.

All right, thank you, everybody. Thank you. Thanks, Mark. That was great.