So, yeah, I'm from GAP Incorporated, and I'm going to be talking to you today about how we use the cloud, OpenStack in particular. Let's get started. I'm going to tell you a brief story about my history at the GAP, because it's actually a case study in how you can implement and use OpenStack pretty much anywhere. Then I'm going to cover the things everyone's heard before; if you've ever sat in on a case study at one of these summits, a few things will be rehashed. As a matter of fact, watching the keynotes this morning, they said most of what I wanted to say. Then I'll go into how we use OpenStack now, as opposed to my story about how we initially began the journey. I also want to talk about open source and our partnerships with the companies providing the tools we use. And the last slide is a little bit of discussion about public cloud, DevOps, and being an infrastructure-as-a-service provider. Then I'll open it up for Q&A, and considering the keynotes this morning, I'll probably keep some of my comments brief so we'll have plenty of time for questions.

As I mentioned, my name is Eli, and I'm currently the cloud domain architect for the GAP. I did not start out that way. When I first started at the GAP, I was a solutions architect on an infrastructure pod, and that was a great position for me because it allowed me to use my background to good effect. I'm definitely a jack of all trades: in the past I've been a data architect, a database administrator, a network engineer, and a UNIX technical support person. So coming into the GAP as a solutions architect on an infrastructure pod, I had on my team a network engineer, a database administrator, and a system administrator, and my job was to sit down with the project teams as they were incepting a project, figure out what their infrastructure needs were, and get my team to deliver it, and deliver it quickly.

It was about this time that the GAP was building its first cloud. They partnered with Rackspace, deployed Icehouse and then Havana, and immediately moved their isolation testing pipeline into it. And then, what do you do, right? Well, from my perspective, it was a wonderful tool for infrastructure delivery. Instead of having to buy a server and set it up bare metal, or get into VMware, find an image, and update an image, it allowed me to do everything in one place. It literally changed the paradigm, right? We heard from customers that it could take five, six, seven weeks to get the infrastructure for their application, before even starting to work on it; those teams were sitting there doing nothing. And this is one of those stories you've probably heard before: we were able to spin up servers in 90 seconds. We were able to set up the networking on those servers in 90 seconds. We were able to get people working almost on day one (a quick sketch of what that looks like follows below). So that was the first real use case we had, other than our isolation pipeline and our testing.

So now we have a completely different environment. We have five private clouds right now. Liberty is our newest release: we have two clouds on Liberty, and we have three legacy clouds that are still on Havana, with workloads being migrated into the newer clouds. We call them our next-gen clouds, but they're already old.
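To make that 90-second claim concrete, here is roughly what a one-call server build looks like against the Compute API. This is a minimal sketch using the openstacksdk Python library, not our actual tooling; the cloud name, image, flavor, and network names are all placeholders.

```python
import openstack

# Connect using a named cloud from clouds.yaml ("gap-nextgen" is a placeholder).
conn = openstack.connect(cloud="gap-nextgen")

# Look up the pieces a server needs; all three names are illustrative.
image = conn.compute.find_image("ubuntu-16.04")
flavor = conn.compute.find_flavor("m1.medium")
network = conn.network.find_network("tenant-net")

# One API call replaces weeks of procurement: boot the server on demand.
server = conn.compute.create_server(
    name="dev-box-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)

# Block until the server is ACTIVE; the point of the story is that this
# comes back in something on the order of 90 seconds, not seven weeks.
server = conn.compute.wait_for_server(server)
print(server.status, server.addresses)
```

Deleting a server is just as fast as creating one, which is what makes the cattle pattern I'll get to in a moment possible.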
And we're multi-data center, and we're not just a compute cloud anymore, right? Our next-gen clouds are Liberty clouds: we're doing object storage, we're doing database as a service, we're doing some network function virtualization. It's becoming much more of a consumable cloud from a developer's perspective. We do multi-tenancy in our clouds, so we have production and pre-production all together. We have CI/CD pipelines in the clouds; as a matter of fact, when a project spins up, they get their own. Using FedCI Jenkins, they get their own pipeline, begin developing immediately, and can see their code all the way through from development to production.

And the millions served, well, the millions served is twofold. First, in that isolation pipeline I mentioned earlier, we have literally spun up and destroyed millions of VMs over the past four or five years; I think the number is close to 3 million now. And also, our e-commerce stack lives in OpenStack. So if you go to GAP or Banana Republic or Old Navy online and place an order, or just browse, you're hitting OpenStack. That leads into number three there, e-commerce: 90% of our forward-facing applications are in OpenStack. So if you walk into a store today and they don't have the item you want, and your salesperson pulls out an iPad, completes a transaction for you, and has something shipped to your house, all of that happened from start to finish on OpenStack.

The joke I like with this one is: a guy walks into a bar, looks around, and says to the bartender, "I want 10 of what everybody else is having." The bartender says, "Well, that's an order of magnitude." So I like to point out that OpenStack is a great infrastructure tool and a great developer tool with a lot of great tools in it. But if you're looking to get OpenStack into your organization, into your enterprise, especially if you're a retail organization and you don't have a business case or developers screaming for this because they're going to public cloud, well, it's still a really great infrastructure tool. Internal infrastructure isn't going to go away, especially if you have brick-and-mortar stores, if you're supporting cash registers, or just your sales organization or your HR organization. So if you want to speed up infrastructure delivery, you can do that on day one with OpenStack. You can, as I said earlier, instantly spin up a server on demand, and that's a big boon to any organization.

So let's move on. All right, we talked about the pipeline. Continuous integration and isolation testing: a great place to live, right? No matter what you're coding, no matter what you're coding in, no matter what tools you're using, you can use an API, and you can have your pipelines and your cattle behind that API (a rough sketch of that cattle lifecycle follows below). Now, the final two here, the challenges and benefits: I bet no one noticed that they're exactly the same. From my perspective, if your challenges and your wins aren't the same thing, you're probably not trying hard enough. Your challenges should be your big wins. And even if a challenge ends with you failing fast and deciding not to do something, that's a big win too; failing fast and moving on is also a big benefit. That's why those are the same. And a lot of the challenges we hit, like cattle versus pets, you hear about all the time. But there are always going to be pets.
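Here is what I mean by cattle in the isolation pipeline: a throwaway VM that exists only for the length of a test run. This is a hedged sketch in Python with openstacksdk, not our actual pipeline code; run_tests is a hypothetical callback standing in for the pipeline's test stage, and the cloud name is a placeholder.

```python
import openstack

conn = openstack.connect(cloud="gap-iso")  # placeholder cloud name from clouds.yaml


def run_in_throwaway_vm(image_id, flavor_id, network_id, run_tests):
    """Boot a VM, run a caller-supplied test stage against it, then delete it.

    Cattle, not pets: the server is deleted in the finally block no matter
    whether the tests pass, fail, or blow up.
    """
    server = conn.compute.create_server(
        name="iso-test",  # the name barely matters; there will be millions of these
        image_id=image_id,
        flavor_id=flavor_id,
        networks=[{"uuid": network_id}],
    )
    try:
        server = conn.compute.wait_for_server(server)
        run_tests(server)  # hypothetical hook: the pipeline's test stage
    finally:
        conn.compute.delete_server(server, ignore_missing=True)
        conn.compute.wait_for_delete(server)
```

Spin that up and tear it down a few thousand times a day for four or five years and you get to 3 million VMs.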
And you can tell people that pets need to live somewhere where they're getting constantly backed up and getting constant care and feeding, and it doesn't matter: a developer, once they get access to OpenStack, is going to spin it up in OpenStack. So the only way to fix that is to educate your users, right? And we have a policy, well, it's not a policy so much as an approach: I don't want to tell my users how they have to consume this. I just want to give it to them and help them consume it. So part of that is saying, look, what you have here is a pet. You have a special snowflake that needs a lot of care and feeding, and we have every tool imaginable for you to avoid that. We have Chef so you can make it repeatable. We have Heat so you can orchestrate multiples of these servers. So it's a challenge to change the way your developers or your users are thinking, but when you get there, it makes everything great.

Another challenge we had is networking. I mean, we're a big company. We've been around a long time. We have a lot of legacy infrastructure. And I would encourage anybody starting out on this journey to partner with your siloed infrastructure teams: sit down with your networking team and say, look, I went to the OpenStack Summit and half the talks were about network function virtualization and software-defined networking, and we're not there; let's take this journey together. We still use a flat network today inside of OpenStack. We have the capability to do VXLAN. We have the capability to do SDN. We're working very hard on getting load balancer as a service together. But the reality is that our network is still a legacy network. I wouldn't let that discourage you, but it's a challenge. And when you get there, when you get that software-defined networking, it's a big win.

The other challenge is adoption. But as I said, we were immediately able to adopt for infrastructure. So once you start to adopt for infrastructure, especially in an environment where you're working closely with the developers at inception and your team is building for them, they start to see the speed of it. And once you tell them, you can do it yourselves, it's an API, you don't have to do much more than that. As soon as they're aware it's out there and it's easy to use, they're going to use it. And they're going to use it until you go crazy because they're overusing it.

And then lastly, DevOps. DevOps is a direction we're taking. It's a direction a lot of people are taking. And it gives us an interesting set of challenges as well. I'll give a real-world example. We had users who came to us and said, we need a lot of VMs, and we found in our testing that we need five CPUs and 16 gig of memory. That's all we need: five CPUs and 16 gig of memory. What they wanted was a t-shirt size of five CPUs and 16 gig of memory that they could use all they wanted. What they were missing is the infrastructure background to understand the problems that come with that. Our whole compute platform was architected around a ratio of one CPU, one thread, to two gigabytes of RAM. So when they start using an odd number of CPUs with an even amount of RAM, we have problems, because the way physical computing works is that RAM has to be addressed. You have to talk to it.
And if you have a workload on this CPU and a workload on that CPU, and this workload needs to talk to RAM that's attached to the other CPU, you hit high latency and your user's app has a problem (this is non-uniform memory access, NUMA, locality). So DevOps requires some people with infrastructure knowledge to be in the mix, and it requires them to make reasonable rules that don't block users from using your platform the way they want, but keep everyone using it in a consistent way.

All right. So this one I like. As a retail company, we don't participate in or commit to the code base; it's very difficult for us to give back to the community that way. But what we can do is partner with great vendors. So at GAP, we partner with Rackspace and use their OpenStack distribution. We partner with Tesora for their Trove distribution. We partner as much as possible with these great companies that commit a lot of the code. And then what I do is drive them crazy, because I want this and I want that and I want this, and the industry would be really great if it had this. That's how I feel we at GAP give back: by trying to push the envelope with our vendors and making them take that next step.

All right. So last slide, really. I wanted to talk about vertical integration. I added this slide this morning after hearing Boris talk. He mentioned that public cloud adoption is widespread, and that some private clouds fail because you're trying to integrate too much; you need to integrate vertically. What I've found is that with a good approach, you can integrate multiple vendors. You really can. One of the things I try to do is silo those vendors. I mean, when you look at OpenStack, OpenStack has a bunch of endpoints in it. You can run an openstack endpoint list and see your Swift endpoint, your Nova endpoint, your Neutron endpoint, and whatever else you may have. And when you're integrating a new vendor, what you're really doing is adding an endpoint, and that endpoint can have back-end services of its own (there's a rough sketch of this below).

So when we decided to go with Tesora for our database as a service, for Trove, and integrate it with our Rackspace OpenStack, and Rackspace did not support Trove at the time, we needed to make sure there was a clear delineation of responsibility between when we would call Rackspace and when we would call Tesora. So rather than using the same RabbitMQ messaging and the same database, as every installation document would tell you to do, we decided they were going to be siloed and separate. Our Tesora installation uses its own RabbitMQ and its own MySQL database, set up as a highly available cluster with a VIP in front of it. And the way it's consumed, the OpenStack endpoint, is that VIP. We can change what's behind it in any way Tesora tells us to, or any way we deem necessary, and it integrates just fine. So multiple vendors is possible. You just have to approach it from an infrastructure standpoint and do it in an intelligent way, and that's not always the way the vendor would have you install it, because maybe that's not something they do. So, you know, get in front of it.

Why not put everything in the public cloud? I think that's pretty easy. And I think when Jonathan said 75% cost savings by moving out of the public cloud, that's really what you get. That's why we have a private cloud, right? Our private cloud is on our hardware. We're writing it off on our taxes. And if a developer leaves a workload running in there, it really doesn't cost us anything at the end of the day.
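To go back to the endpoint siloing for a moment, here's a sketch of the idea in Python with openstacksdk. The first half is the programmatic equivalent of openstack endpoint list; the second half registers a Trove endpoint whose URL is just the VIP, so everything behind it (its own RabbitMQ, its own MySQL) stays in Tesora's silo. The cloud name, region, and VIP hostname are placeholders, and this is an illustration of the pattern, not our literal runbook.

```python
import openstack

conn = openstack.connect(cloud="gap-nextgen")  # placeholder cloud name

# Equivalent of `openstack endpoint list`: every service is just a catalog entry.
for ep in conn.identity.endpoints():
    print(ep.service_id, ep.interface, ep.url)

# Integrating a new vendor really means adding one more entry. Here the Trove
# endpoint points at a VIP, so the back end behind it can change without
# consumers ever noticing.
trove = conn.identity.find_service("trove")
conn.identity.create_endpoint(
    service_id=trove.id,
    interface="public",
    region_id="RegionOne",  # assumption: a single-region deployment
    url="https://trove-vip.example.internal:8779/v1.0/%(tenant_id)s",  # placeholder VIP
)
```

The delineation of responsibility falls out of this for free: anything behind the Trove VIP is the Tesora conversation, and everything else in the catalog is the Rackspace conversation.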
So I would strongly recommend, if you're going to do both, if you're trying to choose, do a hybrid model. Put that edge workload out there because it's available and it's fast, maybe do your development here, and use containers so it's completely portable and can go either place. I talked way too much about DevOps already, so I think we can skip that.

But infrastructure as a service, as a service. In my case, and I think in a lot of cases, I am doing that. I am a service provider; I'm an ISP for my developers. And it's a good way to look at it because, like I say, I don't try to fit them into channels. I let them consume my resources in whatever way they want to consume them, whether from a pipeline, from the command line, from Chef, from a tool they built themselves, or a tool they found on the Internet. I want them to just be able to consume, and I want to provide a performant service. And that allows me not only to keep an eye on the infrastructure and keep pushing out new ideas, but also to get with them and find out what they want us to provide next. So that kind of covers the "what else can or should I do."

What is the end game? For me, the end game is to take that to the extreme. In a retail environment, we have one weekend a year that's insane: Black Friday through Cyber Monday, when every year a whole bunch of us get in the car, drive to one location, sit down, and babysit everything for the whole weekend. My end game is not to have that anymore. My end game is that everyone at my company can stay home with their family over Thanksgiving weekend. And to do that, I need to provide a service that is very, very similar to what the public cloud provides. I need to provide a service where maybe you don't know how many CPUs and how much memory you need (it's not five), but you can spin something up and I'm going to grow it for you if you need it and shrink it for you if you don't. I'm going to take your disk workload, and if it's not accessed very often, push it onto cheap disk, and if you need it all the time, move it onto flash, and I'm not going to make you choose, because developers don't understand the infrastructure, but they understand performance. So that's my end game. I'm sure everyone else's here is different, but that's what I've got.

So, any questions? We've got microphones so that it gets recorded, but come on up.

Yeah, I appreciate the talk and the hard-fought experiences. Do you bill the consumers, the tenants of your cloud, at all? Is there any sort of charge-back model, or are you guys just paying for it all?

So we have done both. In the past, we basically charged a one-time upfront fee, and we would generally charge it based on the quota rather than on what was actually used. That doesn't work very well when you start looking at a lot of the software out there. Even if you look at ManageIQ, its bill-back is modeled a lot like a public cloud's: 30 cents an hour for a CPU, and so on. That didn't work for us because that's not how we had traditionally billed. So at a certain point, we decided we just weren't going to bill, and we were going to let the businesses decide how they wanted to deal with charge-back. I'll add one more thing while you're walking up.
From a personal perspective, though, I would love to have that ManageIQ public-cloud type of charge-back, because, for exactly what I talked about earlier, I'd like to be able to say: look, you ran this workload in my cloud and it cost you 50 bucks; if you had run it in AWS, it would have been 500.

Michael Elliott here, so we share the last name. One little bit about that journey, because you talked about bare metal provisioning being kind of where you started. Were you a Linux shop? Were you a Windows shop? How did you start that journey to go OpenStack? What was the decision point that led you there?

Yeah, as I said, I was actually a solutions architect at the time, so we had an enterprise architect who saw the potential and did a small pilot for the isolation pipeline. We were a Unix, Linux, and Windows shop, and we still are. Application workload generally doesn't occur on Windows, but yes, hopefully that answers your question.

I love the questions; this is making it so easy. You spoke about having 90% of your customer-facing apps on the private cloud; I believe that's what you were getting at. Are you using the public cloud? And if so, what workloads are you running there?

We are just now beginning the journey to public cloud and a hybrid type of setup. There are some workloads in the public cloud. I'm not entirely sure which ones are there, so I don't want to say anything and get it wrong. But yes, we do plan on heavily leveraging the public cloud and being a hybrid environment.

Twenty-seven subparts. First of all, back to the other gentleman's question about hardware: how do you determine what type of hardware you use, whether blade servers or just standalone bare metal, and how do you choose to expand that? There are certain limitations to blade servers.

So this is another thing I inherited. The decision was made to go with Dell hardware and to use physical servers, and since I've been architecting this, I've wanted to remain as homogeneous as possible. So while we are going with slightly better processors and slightly better memory and CPU configurations, we're trying to stay pretty much the same. There are challenges to changing your hardware too much once you have a large compute workload in place, because, for example, you can't live migrate a workload from a newer CPU to an older CPU. So yes, there was some testing done. I was not a part of it, but we did find that blades did not work for our use case.

The second question is: with so many different vendors, and I'm sure you have issues, how do you keep track of them? You probably have your own internal system for keeping track of problems, but with external vendors, how do you keep track of all of those and make sure they get fixed?

You know, we try to silo off the technology as much as possible, as I mentioned, but in general, we don't have too many issues. My team is sitting back there: there's two of them, and there's me, and there's one guy who was at Red Hat last week. So we run five clouds and all that workload with four people. We do leverage Rackspace; we open tickets when we find a problem, maybe some communication problem in the back end or something where we just need extra hands to track it down. You know, we call Rackspace. But in general, it's pretty easy for us to determine which vendor is responsible. And because we've siloed things off, it's pretty easy for us if they come back and say, no, no, no, this is a Rackspace problem.
We're like, well, Rackspace can't touch your database; they don't even have access. So we genuinely haven't had too many problems, and with a very lean team, we're able to operate very effectively.

So how did you decide on your OpenStack distribution? Is it common across all five clouds, or different?

It is common and different. So it's all Rackspace. Now, several years ago, when the initial clouds were spun up, Rackspace recommended that we use Red Hat or CentOS and install their OpenStack distribution on it. Before Kilo, they came out and said, we're going to follow the community, and we're going to install on Ubuntu. So we were not able to upgrade our clouds; this is actually a fairly common problem, but we weren't able to upgrade. So we decided to build these next-gen clouds and add functionality like object storage, block storage, load balancer as a service, database as a service, and all these things. So we kind of went whole hog with Ubuntu, still using the Rackspace distribution of OpenStack. And now Rackspace is supporting Red Hat OpenStack, so that gives us possibilities again, and we're really not opposed to leveraging that or to changing the way we do this. It's really not that hard. So we're happy to weigh pros and cons and make a change if we need to.

Hi, my name is Ahmed Sudikian from Veritas Technologies. So we attended the Mirantis keynote session in the morning, and they said private cloud is becoming a toxic word in the enterprise domain, right? And then you're saying you've obviously had a great experience building private clouds, and you were able to do that with a lean team. So are you able to expand on why you chose to go that way versus a managed public cloud solution? I know you tried to cover that, but maybe expand a little bit more.

Sure. I need to think about that one for a moment, though. We have a great deal of expertise, and we know our business better than any managed service provider would. And we do have something of a different use case: we're not a telco, we have a lot of brick-and-mortar stores, we sell a lot of pants and shirts. And I think for us it made more sense to continue. Like I was saying, it's great to be able to sit down with Rackspace and say, you know, I need Magnum; when are you guys going to give us Magnum? Why don't we have Magnum yet? Magnum's a bad example, because probably everyone wants container orchestration. But there are a lot of projects out there, maybe Barbican for key management: we're taking credit cards, and we need to make sure that data is secure. So that may not be the number one choice for a private cloud management company to provide. Or maybe it is, right? Public cloud is providing that kind of functionality. But for us, the control and the ability to pivot to our consumers, our developers and our business units, outweighs what a private management company could give us, I believe.

No other questions? As a follow-up, do you see this being a continuing practice for you, or do you see a case where you would, down the line, shift to a managed public cloud solution? In other words, are private clouds always going to be around for use cases like yours?

I think we're very flexible, and I could easily see a world in which we had all three, public, private, and managed, and moved workloads around as necessary. I don't think we would ever rule anything out. All right? Any other questions?