So welcome, folks, to an end user panel. This is where people that actually use OpenStack talk about their thoughts, their trials, their tribulations, and all that fun stuff. I'm Das Kamhout. I'm a principal engineer at Intel. I used to run a massive Intel cloud. My background's in grid computing, and then I helped build all of our OpenStack environment. And now I work in our data center group, and our job is to do engineering and help everybody else in the world do cool stuff with clouds. I'm going to go ahead and have the panel introduce themselves. We won't have all of the audience introduce yourselves. So, guys, just tell us a little bit about yourself and what your company does. Why don't you go ahead and start? Hello, everyone. I'm Francisco Raya. I work for KIO Networks. We are one of the largest data centers in Latin America. I am the DevOps team lead. We have many data centers across South America, Latin America, North America, and Europe. We've been working with OpenStack since 2012, so it's been a long road. And we are currently using OpenStack to deploy all our cloud efforts. And you're in Mexico City? Yeah, we are Mexico City-based. Good mezcal that you can find there. A little. And al pastor. OK, Dion. Hello, my name is Dion Vabafel. I'm responsible for OpenStack solutions within the BMW Group. Within BMW, we've been using OpenStack since the Icehouse release, we're currently running the Juno release, and we're trying to get as many workloads as we can onto the OpenStack solution. And what kind of BMW do you drive? I think I cannot say that in front of the camera. You have an i8, right? Every BMW employee gets an i8, right? No? We're not allowed to say that. That's all right. So he said yes. All right, I'm Justin Deppman. I'm with Thomson Reuters. Thomson Reuters delivers information to businesses and professionals, so the financial industry, legal, tax, that kind of thing. I'm an infrastructure architect with responsibility for our OpenStack deployment.
And I've been working with cloud, with OpenStack, and before that we had a CloudStack deployment, but we're relatively recent to OpenStack and we're running Icehouse. Awesome. I'm Anand Palanisamy. I work for eBay/PayPal, and I manage the cloud networking group that includes SDN, Neutron, DNS, and load balancing as a service. Is your mic on? Now it's on. OK. So my name is Anand Palanisamy. I work for eBay/PayPal, and I manage the cloud networking group. That includes SDN, Neutron, LBaaS, and DNS, and I manage both for eBay and PayPal. I would say we are actually one of the largest OpenStack private clouds in the world, and we take serious production traffic. It's not like we use it only for a developer cloud or QA or whatever; we use it in serious production. And recently we announced that for the PayPal site, 100% of the web, mid-tier, and API traffic is running only on OpenStack. It's a significant achievement. We started in August 2012 with around 16 unused compute nodes in the lab. Now we are on several thousand nodes in the data centers. So it's been quite a journey. I'll be sharing most of it during this panel, but there were a lot of setbacks. So Anand's one of the original gangsters in OpenStack. He's an OG. OK. So we're going to go through a bunch of questions, and again, we can do audience questions too. If you have something you really want to hear from the esteemed panel here, feel free to step up to the mic. OK. So just quickly: what's the version when you first thought about OpenStack, what's the version that you first deployed, and what version are you on now? Let's go ahead. OK. We started to hear about OpenStack in the Cactus release. We started our first deployment, our public cloud solution, with Essex, a lot of problems there, but we are currently using Juno and Icehouse, and we're starting to develop our private cloud solution with Kilo. OK. We started to think about OpenStack, or cloud in general, I think during the Essex release.
But during the Havana release, it was really being discussed that we should start deploying it. Our first lab deployment was with Icehouse, and currently we're running Juno in production. All right, so in 2012 we realized that we were seeing a lot of our business units going out to the public cloud providers really just for compute, which we could deliver on a private cloud. So we looked at what options were available in the market at that time. We ended up going with CloudStack initially, just because OpenStack at that time was much more of an engineering exercise to get up and running, and we realized we'd probably need to re-evaluate that at some point to see what happened. Looking back, we probably first started seriously looking at OpenStack in the Grizzly release, and ultimately ended up deploying Icehouse. Excellent. Are we on? Yeah, so we started with Essex. Of course, I played with Diablo, but we deployed Essex. We started with two racks and a quarter million dollars of investment, like a startup company. A quarter million dollars, two racks. We could buy only two racks for that, by the way. That's a good seed. Yeah, good seed. And we started with Essex, and we went directly to production with the PayPal side. On the eBay side, we were in both production as well as the dev and QA clouds. Today we are running Havana very well, and we are halfway through upgrading to Juno. And I'll share more pain points and why it is getting delayed and delayed. Upgrading a live infrastructure with OpenStack is hard; everybody wants rolling upgrades. So, Justin, you were talking a little bit about how you saw people using public clouds for compute. So one thing I want to look at from each of you is: what are the two top reasons that you deployed OpenStack? Go ahead and start. I heard one of them. I think the other one was just that we were initially seeing some users looking for infrastructure as a service.
I'd say the other piece of it is just the speed of delivery, recognizing that there were some bottlenecks for the development groups that we support to really innovate and be able to get the infrastructure they need to develop, to do testing, and eventually roll out to production. There's just a lot of buzz out there around cloud, and we were seeing requests from our business units as well to really start moving down that path. Dion, why did BMW? What were the two top reasons you guys deployed it? Two top reasons. Also the DevOps teams: basically, they started running away from central IT, going elsewhere or making their own solutions, but we want to get those workloads back in a central manner, and for that, OpenStack is just a very nice infrastructure-as-a-service platform. You can really easily deploy workloads or virtual machines, and people are up and running. Anything either of you guys wants to add about another reason? Yeah, so I would say we don't use OpenStack just because the source is open source. For me, at least, it's a platform to collaborate with our vendors and partners as well as internally. So basically, for example, if someone is managing a larger infrastructure, of course everyone will have scripts and automation and whatnot, and we wanted to bring all this effort into a single place. So the code is in GitHub, private or public or whatever, and if you want to go and enhance some part of the infrastructure, you'd better be part of this. So I see it as a collaboration platform where we clearly say: OK, this is our way of integrating with our vendor products, and as long as you have the drivers and plugins, we'll test them in our lab, and if they qualify for our workloads, they'll get into the infrastructure. I'm more excited about that; even Subbu talked about it in the keynote. So let's keep the APIs open, but innovate everything behind that.
But I feel OpenStack perfectly fits that. Good, so being that API for existing gear and new gear lets vendors push the envelope below, and the DevOps guys can do all kinds of crazy stuff above. Yeah, just don't change anything in the APIs. Exactly, keep them solid. Well, in our case, we had previous clouds with other vendor-based providers. We did an exercise comparing features, and OpenStack is the most adaptable platform that we had, and that's why we chose it to deploy our clouds. So it sounds like most of you had maybe something before that gave self-service to an extent and moved over to OpenStack. Definitely, we see this trend happening. So let's jump to another one. There are lots of distros available. I've heard a lot of people at the summit talking about do-it-yourself, and there are other options too. So just high-level: are you guys using a distro? You don't have to say which one unless you want to. Are you doing it yourself, or is there somewhere you want to be in a year that's different from what you're doing today? Anand, go ahead, sir. So we are using our own. We package it ourselves, and we run it through our CI/CD, fix the bugs, harden it, and then upgrade it into our infrastructure. So did you have to hire a bunch of PhDs to pull this off, or where did you get them from? Yeah, actually, we did have PhDs work on that, and now we are at a point where we've run through that path, where what we identified as our CI/CD infrastructure today is actually bringing in upstream changes and running them through CI/CD. Unless you build the muscle, you can't run the marathon. The PhDs worked on getting ready for the marathon; now they're running the marathon. And now they're working on the features: SDN, storage, compute, and the core services. Of course, before we got there, we needed to build that muscle, and they worked on that.
Okay, so now you're running the marathon. Yes, exactly. Keep going. Got it. All right. So we're running a distro. I think there are really two reasons that we opted for that. One is, as an enterprise, we're always very cautious, right, and so with a distro you've got a vendor that you can lean on for some assistance. The other part of it is that starting from scratch is pretty daunting. You need a lot of people with a lot of skills in order to be able to pull directly from a stable release and get it all up and running. So I think a distro also allows you to really bootstrap it and get something up and running that actually works, so you can demonstrate the value of it without having to figure out all the bits of how to make it work yourself. Exactly. We're currently also using a distro, basically for the same reasons as my neighbor already said. As an enterprise, you're quite anxious about things, especially open source things. We don't have a large development team backing us up, and that's why we want to fall back on a support provider, being our distro in this case. But we're also currently noticing, while we're ramping up, that using a distribution has a chance of limiting your possibilities. A distro cannot support all the features that OpenStack is offering you at the moment, and we're getting to a point where we might have to re-evaluate which parts we are going to follow within the next years. Okay, so you want to be more flexible and you feel you're being squeezed into a specific set of capabilities. Are you seeing this too in what you have, Justin? Oh, absolutely.
So as I mentioned, we're still on the Icehouse release, but there's been stuff that's come along that I'd like to really get integrated into our environment, and we've ended up doing a couple of things ourselves, but I also feel very limited in that I don't want to deviate too much from our distro, because that potentially makes upgrades in the future a lot more difficult. Exactly. And that's also what we notice. If you change things yourself in a distro, as soon as you get an update package, the things you modified are reverted to what the distribution wanted you to do, and then you start backporting and changing and you keep on going. And then you end up with a lot of spaghetti code. What are you guys doing next year? Well, what we do is deploy vanilla OpenStack over a custom Linux distribution, but we wait for a couple of months until everything gets stable, and then we release a new version for the private cloud. We are 20 people, we are not a big team, and what we do is rely on providers to support our environments. In this case, we are even looking for some partners to support our OpenStack jobs. Yeah. Any vendors out there? Yeah. These? Q and F is calling. And they really have great al pastor tacos, right? Yeah. Al pastor: if they haven't had al pastor, they should try it, right? That's for sure. Okay, got it. Okay. So we covered support, we covered distros. If you think back, most of you've been on a journey here for a while, at least a year or two or more. Wow, three, four, whoa. It's been a long, strange journey. So if you think about what you could have done differently, a plus or a minus in your journey to production OpenStack, what would it be? Let's start over here. Well, in our case, we started with a public cloud. And I think one of the mistakes we made was not securing this public cloud environment enough. There were a lot of issues with the networking. We had many troubles with DoS attacks and things like that.
At that time, with Essex, it wasn't that well documented, you know? So we had to limit the instances' networking and put firewalls over these solutions so that we were able to start working with a good level of service. Okay, Dion, anything that you would change in your journey? Do differently? Do differently. It's a good one. I think that, seeing this journey, as we said, we're just starting, and I already said it previously: the thing that I would like to change in our journey is that we would like to be more flexible, especially because we're using the distribution, but that's it. Got it. So I'll talk about something that's not specifically related to the OpenStack technology, but from a capacity planning perspective. As I mentioned, we had a different technology that we used for running our private cloud. We did some calculations based on where we were running there and thought that's what we were going to need for capacity in our OpenStack cloud. And we keep having to add more and more hardware. User demand has been really difficult to predict. We have a self-service interface for users to come in; they can request a project in our OpenStack cloud. There's a little bit of governance that sets some quotas, and the interest has really surprised us; it's even been more than we had in our previous cloud. Yeah, there's a thing we've been talking about publicly: there's a guy named Jevons, he's an 1800s economist, and there's the Jevons paradox, which is that as you make computing, or actually any resource, more efficient, and you give it self-service so it's easier to consume, people just skyrocket their usage. I mean, this is what happened in the public cloud. Oh, absolutely. It's definitely happening in private clouds now. But the developers get to innovate on top, hopefully. Anand, anything you would have done differently?
Actually, one thing that we did right was going directly to production. Directly. Jump right in. Yeah, a lot of people ask why we went to production directly, right? In fact, building a dev and QA cloud is tougher than building the production cloud, because the rate of change in a developer cloud is high, right? The number of API requests that you make from automations and CI and whatnot makes it tougher than the production cloud, because in production you don't change your application pool every day, or maybe every minute. And we have some CI infrastructure that creates, every minute, hundreds and hundreds of VMs, and then attaches and detaches volumes. Recently, we ran into an issue: 100 volumes could not be attached to a single VM. We could scale up to 70 or 80 or whatever, but finally we said we are going to support only 50, right? But it's not the case in production. Got it, got it. So they really pushed the envelope. Well, if you look at most public clouds, I don't think they distinguish their infrastructure between non-prod and prod, right? It's all treated the same. No, no, it's the same cloud now. Yeah, for you guys, it's the same thing. You're happy, you made it all identical. Okay, so, and again, audience questions are good anytime, but you've got to go to the mic. If you want to ask an audience question, just go on up, and then I'll jump right to you. Actually, go ahead. I'd like to know how you convinced your upper management to use, relatively speaking, new open source, OpenStack. I think many upper managers who are not necessarily IT experts most likely think that large vendor-supplied software, support, and all of that may be safer for the corporation. Yeah, do you guys want to take that? I can take that, actually. I've been on the journey from day one.
It was not an easy journey. We started looking at multiple options in terms of infrastructure automation in August 2012 specifically. What happened was we had five different vendor solutions and three different open source ones. I'm writing a blog post on that too. So it was not an easy decision. Of course, you watch your core business: whether it's building the infrastructure and building your own private cloud, or going and taking something from open source, putting your muscle into that, and running it, right? But we went back and forth on that. Of course, we did a POC in the lab, the POC that I talked about; that was the POC money, right? And management decided to invest in that, and we were very successful, and we ran a production workload on it in December 2012. We started in August, and in December 2012 we ran production, and we compared all of our results against the existing bare metal and the existing stack, whatever we were running in the data center: KVM versus the OpenStack VMs we were running. Surprisingly, the results were good. Egor is right here; we compared the results and we got better results, so it was a no-brainer, actually. So every investment after that went in this direction. In fact, I did talk to the OpenStack marketing team and also Jonathan on this, and we are going to be publishing a blog post on that. So it was a very interesting journey, and most of the time I get these questions from community members who are trying to decide on their cloud operating system, and I keep replying with the same answer over and over, and I'm putting the same question to the community also. The journey that we went through, of course, was not an easy decision.
We took the decisions very cautiously, but we continued on that path, and we recently announced that around 100% of web, mid-tier, and API traffic is running on OpenStack for PayPal. It has been a consistent journey. Of course we burnt our fingers in some places, but we got through all of that. Well, thank you for blazing a trail. So you started small, you showed the results, comparisons, good engineering, and shared it up the chain. And we consistently stick to that plan, actually. For the next two years, we know exactly what needs to be done, what kind of resources we need, and what kind of talent we need. Most of the time you need that: look at what you need two years from now and then work back from there. And one good thing that we had: OpenStack may have been new to our infrastructure, but we had experience operating thousands and thousands of bare metal servers and the infrastructure, and we leveraged that in the best practices. So did you guys have a different approach to get your upper management on board? So I'll just add a little bit. Looking at what we do as a company, we do a lot of technology; we deliver information to our customers, and I'd say from the top-level leadership there's a recognition that the world is changing. We don't necessarily want to be a cloud provider, but they recognize that to build the products that we want to sell to our customers, we probably need to provide that infrastructure platform to do it. I think that leads into a little bit of why we're running a distro instead of rolling our own OpenStack, because that's not really our area of expertise. Our area of expertise is the products, but I think there's that recognition from our senior leadership in the company that if we are going to stay competitive, we need to innovate, and having a cloud platform on which to do that is part of that strategy. Dion? Any additional thoughts?
In addition, for us it was not too hard, as the upper management in our case was already aware that OpenStack is quite booming, growing quite fast, and if you look at our landscape, there are already hundreds, thousands of Linux servers running in the end. Linux is also an open source product; it's just about the maturity of the product, and therefore as an enterprise you go back to your vendor. That's why we did the pilot, and with testing and that kind of stuff we found out quite fast that it's a mature product. Although, of course, there were bugs, there's an active community working on them. Things get fixed; at the moment that you find a bug, it's mostly already fixed. So it was not too hard for us to convince the upper management. Good, I think Linux really blazed the trail for open source, right? Yes. Yeah, almost every major public cloud has Linux behind it. So let's jump to a yes or no answer. Woo. So: will OpenStack control most of your data center infrastructure in three years? Yes or no? Yes. Yes. Absolutely. No. Would you say? Yeah, actually, you know, no. Yes or no? Yes. Okay, the next one's not yes or no, but those are always fun. We won't do Jeopardy this time. Next time you guys do Jeopardy. So you said no, but I'm not going to ask you why. Maybe one of my later questions will dig into that. So do we have anybody from the design teams here? Oh. No, not from the design teams. That's fine, you got a question? Yeah. You can ask a question. What was the biggest mental hurdle for getting OpenStack deployed? Was there anything that you spent, like, a week just banging your head against your desk over before you realized what the solution was? Or was there some configuration thing or something like that where, when you got it, you thought: oh, I feel stupid now for not realizing this. Well, at first, when we deployed Essex, there wasn't a guide for installation from zero to production.
We were working for about two or three months, finding solutions, finding guides that didn't quite stick. So we ended up combining three guides, and finally we got the installation done. At that time, I didn't know OpenStack. My guys didn't know OpenStack either. We were just like kids working with a new toy. But now we are very mature. We have a strong team that is really into DevOps, they give impressive support on this, and we are very, very confident in how we can support this kind of solution. You have to keep your hands on the installation and on the deployment, and you will be very confident with it. So you went from kid to adult in, like, three years? Two years? Three. Any thoughts? Mental challenge? The mental challenge is actually debugging, right? So it's debugging. Even in one of our availability zones recently (we have multiple), we couldn't figure out why the Nova boot was timing out. And we debugged as much as possible for maybe more than half a day; some of the boots were going through, some of them were not. As usual, it was all related to RabbitMQ, right? And if you look at it, all the blame falls on OpenStack after that. But it's not really OpenStack, because there was network flapping happening and the messages were consistently getting dropped. But one thing I would say actually is an OpenStack problem: infrastructure will fail, and as a client of Nova or whatever, right, you need to gracefully handle all these use cases, or maybe the corner cases. That is not being handled in every aspect of the core services yet. Something happens, and you are completely relying on RabbitMQ. But if something happens to that, how gracefully are you going to handle it? Maybe collect all these requests and then process them offline once everything is stable, or whatever, right? Or maybe decide what threshold you can actually handle.
Or maybe give meaningful messages to the users, right? The error handling and the messages OpenStack gives back to the user today are not user friendly at all. That creates a problem for the guys who are doing the debugging, right? So we banged our heads a lot most recently, actually. So, Justin? Yeah, so I think the question is: what were the mental obstacles in trying to figure stuff out? I'd say one of the biggest challenges is the shift that we went through; we brought in a lot of new technology, right? Trying to figure out how Open vSwitch works, how KVM works, and all these pieces, really just the introduction of a whole bunch of new technology at once, is challenging. And as you were asking the question, I was thinking about the time that I was banging my head against my desk for, like, a week, trying to figure out how to run these OVS port create commands to get the ports I needed created, because it's just such a dramatic shift from what you're accustomed to doing. But in addition to that, indeed, you also learn the operations while using the product. You know, your customers are going to give you some challenges, and on the fly you have to find out how things really work. It's an amazing technology, but if one bit falls the wrong direction, it can give you some challenges. New technology, yeah, we had lots of fun and entertainment with it in general, but it's also, with OpenStack, you know, if something goes wrong and you need to debug, I could of course hand that to my vendor, but I would also like to understand what's happening, because I would like to understand OpenStack. I would not just like to understand the product of my vendor; I would like to understand OpenStack.
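Anand's point above about clients handling infrastructure failure gracefully (a flapping RabbitMQ, timed-out boots, unhelpful errors) boils down to retry-with-backoff plus a meaningful error message. Here is a minimal generic sketch of that idea in Python; the names are illustrative, not from any OpenStack codebase, and real services would use oslo.messaging's built-in retry handling rather than hand-rolling this:

```python
import time

def call_with_retries(func, retries=5, base_delay=0.1,
                      transient=(ConnectionError, TimeoutError)):
    """Retry a flaky call with exponential backoff instead of failing fast."""
    for attempt in range(retries):
        try:
            return func()
        except transient as exc:
            if attempt == retries - 1:
                # Surface a meaningful error instead of a raw traceback.
                raise RuntimeError(
                    "giving up after %d attempts: %s" % (retries, exc)) from exc
            time.sleep(base_delay * (2 ** attempt))  # back off, then retry

# Simulate a message broker that is unreachable twice, then recovers.
state = {"calls": 0}
def flaky_publish():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("broker unreachable")
    return "published"

print(call_with_retries(flaky_publish))  # prints "published" after two retries
```

The same shape applies whether the transient failure is an AMQP drop or an API timeout: bounded retries, growing delays, and a final error a human can act on.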
So then you go into debugging, and you have to combine so many log files. Normally you set the log level to info or error, but if you set it to debug (I don't know if anyone has ever tried it) you get so much information thrown at you that you're like: help, what's happening here? And just to get back to the start of the question: although we chose a distro, that also took some banging on the desk, trying to implement that generic solution, as the solution by default didn't really fit within the enterprise, and some things needed tweaking. Our vendor was really eager to help us, and within a short period that product was really up and running, but it also took some mindset changing on how cloud meets the enterprise. Excellent. Yeah, I'd say on our journey too, part of our biggest mental obstacle was just getting all the IT people on the path to something new. Usually most IT people want to stay with next-next-finish style solutions versus actually having to look at some code. So, are there any developers in here? Okay, good. So guys, what's the top problem you want solved in Liberty? Start over there. I would like to see something about HA compute capabilities. So if your compute node dies, you want the instances to restart? Yeah, exactly. That's a good idea. We may have some for you soon, okay? I'm already lobbying a little bit; I was in a session about it yesterday, the Ops session was quite entertaining, but indeed that would also be a really nice feature for us to have. Okay. Indeed, it also takes work out of operations, that kind of stuff, in the end. Okay, you can pick another one if you want, or you can double down on that one. Okay, we've got two for that now. All right, so I think our biggest challenge, as I mentioned to the other guys while we were preparing for this talk, is on the networking side.
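On the debug-log flood mentioned above: in Icehouse/Juno-era OpenStack, the verbosity knob lives in each service's own configuration file. A sketch of the relevant nova.conf options follows; the path and values shown are the common defaults, so check your distro's actual layout:

```ini
# /etc/nova/nova.conf -- the same [DEFAULT] options exist in
# neutron.conf, cinder.conf, glance-api.conf, and so on.
[DEFAULT]
# verbose=True raises logging to INFO; debug=True floods the logs with
# DEBUG output from every component (including libraries such as
# oslo and amqp), so enable it only while actively troubleshooting.
verbose = True
debug = False
log_dir = /var/log/nova
```

Flipping `debug = True` on a single misbehaving service, reproducing the failure, and flipping it back keeps the log volume manageable.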
We use provider networks because our users just want to be on our corporate network, but there are some scaling limits as far as how big you can go, and there was a Neutron blueprint that really hasn't gone anywhere that would essentially create some tagging, so that when I do a boot I could say I want to be on the public network, and the scheduler between Nova and Neutron would somehow figure out: okay, I'm going to put it on this physical hypervisor host, and here's the appropriate network to put it on. I would love to see something like that get picked up and worked on. Any Neutron people in here? Okay, we'll give them that one. Maybe I'll do another one. No, I'm just kidding. I still have another one, Das. Okay, go ahead, Das. I found another one, and that's some Horizon topics. Horizon currently is quite stateless, as it cannot store information in a database. It has no database by default, and as an enterprise, we would like to have something like a message of the day or some disclaimer. There's a blueprint out there. It's already quite old and unfortunately nowhere picked up, and I already talked to Matthias Runge about it, and at each summit he's basically reminding the team about it. He wants to have some table space, but it's going nowhere, and that sometimes quite frustrates us. Something like a disclaimer would be really helpful in our case. Okay, well, hey, David Lyle works for us. He's the PTL; maybe we can have a talk with him. Sandra, I saw you in here, you got that? We got one for David. Okay, Anand. Yeah, so I want to have Neutron LBaaS developed properly, specifically. Properly? Be more specific. More specifically, actually, see, we took Juno and then we put a lot of effort into making it, you know, useful for us. Basically, you know, we built our own load balancing solution three years back.
It is managing both eBay and PayPal today, and if I'm moving away from that and I'm going to be using open source, I need to have at least feature parity; and after that I'm talking about what value it is going to bring later. But if you want to move from here to there, we need to put serious effort into that, because the problem is, right, today everything is running on namespaces, and with multiple plugins and scheduling there are a lot of features that need to be done. And there is a lot of debate about whether Neutron will scale if you put all these advanced services within Neutron itself, or whether you basically take LBaaS, all the layer four to layer seven, out and keep it as a separate service, right? So there are a lot of debates around it. Whatever touches IP has to be Neutron. I remember when Quantum started, when we pulled out Nova Network at the San Francisco Summit: anything related to IP, we need to keep it in Quantum. I completely get it, but if you cannot scale beyond a certain limit, can't we split things? If somebody here is a Neutron core or whatever, we talked about that in the last session as well; I'm okay to have it in Neutron, but we need to make it work. So that's what I wanted to see. And you're definitely showing you're an OG by saying Quantum. Yes. That was what Neutron was called before. Of course. Yeah, I understand that. Okay, let's see: Docker, CoreOS, Kubernetes, Mesos, Magnum. Did I miss any? So there are lots of containers and app orchestration tools, much of which Google is conceptually supporting. And by the way, thank you guys for coming to this session; all my buddies from Google are, like, next door. So I'm glad you didn't get sucked over to Google land. Are you guys using any of these technologies, and do you run them on top of, on the side of, or underneath OpenStack?
Well, we are currently not using those kinds of technologies, but at the end of the year we will start using Magnum. Yeah. Any specific ones? You just focused on containers or orchestration? Containers. Okay, yeah. Got it. I know my customers are using CoreOS with Docker within OpenStack. Okay, so they're using CoreOS and Docker on top. So they take instances and then run it on top. Yes. All right, so I definitely see a ton of interest; Docker is really the main one that I see my business units really, really looking at. One of the things that I've observed here is there just doesn't seem to be a consistent approach to what's going to happen with OpenStack. So we're not really sure where we're going to end up going with it, but we recognize that that's definitely something we need to figure out. Do we run it on top of OpenStack, potentially using, you know, Ironic at the bare-metal level? What do we end up doing? And I think we still need to figure that out. Would you guys be comfortable running on bare metal, or do you still like the concept of OpenStack giving you virtual machines to run containers? See, looking at what we do, being in an enterprise in general, most of my tenants trust each other, as opposed to a public cloud provider. I'd like to see us start running containers for multiple tenants on the same bare metal, but I just kind of need to see what's going to happen with all the stuff that's happening right now in OpenStack and whether that's even going to be a possibility, because that doesn't seem to be where anybody's going right now. Right. And you're our trailblazer, what are you guys doing with containers? You have all three. You have all three? Which ones? I said Docker, CoreOS, Kubernetes, Mesos, Magnum, or on top, below, to the side. Okay, hey, we're almost over.
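The "CoreOS with Docker on top of instances" pattern mentioned above usually works by passing user data at boot that brings up the container runtime. A hedged sketch of building such a Nova boot request in Python: the image, flavor, and network IDs and the exact cloud-config contents are illustrative assumptions, but the base64-encoded `user_data` field is how Nova's boot API generally carries this payload.

```python
import base64

# Illustrative cloud-config: start Docker and run one container on first boot.
cloud_config = """#cloud-config
runcmd:
  - [ systemctl, start, docker ]
  - [ docker, run, -d, -p, "80:80", nginx ]
"""

# Nova's boot API takes user_data base64-encoded; names/IDs are placeholders.
boot_request = {"server": {
    "name": "docker-host-01",
    "imageRef": "COREOS_IMAGE_UUID",
    "flavorRef": "FLAVOR_UUID",
    "user_data": base64.b64encode(cloud_config.encode()).decode(),
    "networks": [{"uuid": "NETWORK_UUID"}],
}}

# Round-trip check: the encoded payload decodes back to the original config.
decoded = base64.b64decode(boot_request["server"]["user_data"]).decode()
print(decoded == cloud_config)  # True
```

This is the "on top" option from the question: OpenStack hands out the VM, and everything container-related lives inside the guest, invisible to Nova.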
Yeah, so we recently started with Kubernetes, and we have Mesos; actually, it is getting used for the orchestration layer in our CI infrastructure. And of course we are looking at Docker as well. Okay, all of them, yeah, it's a wild world out there. Okay, so we have five minutes. If anybody in the audience has a question, we can probably take one more while we're going. Okay, this should be a quick one. What's your favorite project in OpenStack? Heat, our customers love it. I second that. Two Heats, it's kind of hot. So, I probably am just going to go with Nova. I mean, that's kind of a boring answer, but that gives you the building block to do everything else. It's one of the reasons you started, right? Yeah. You need the compute. Of course, Nova and Cinder; Neutron, everyone likes it only on the engineering side, because we don't expose that to the users directly. Okay, there are a couple of key problems going around with operators. Let's talk about upgrades, not too much into the details of it, but are you guys doing it once every six months, every day, every hour, or every year? Every six months. Okay, so you're on the integrated release cadence. Exactly. That's going away, sort of, right? The integrated release. Whenever my vendor brings one out. Okay. Yeah, I'd like to be on the six-month cadence, but like I said, we're on Icehouse, so we're a little bit lagging. I want to be on upstream, but it is not really possible. So we are lagging behind at least six months to a year now, because of a lot of infrastructure upgrades, but I explained to you offline what the challenges are. Exactly. So we're all aspirational, but can't quite get there yet. Questions from the audience? Sure. You want to do a yes-or-no question, or a jeopardy question, or a normal question? Normal question, yeah. I don't know if this is too similar to the question about what feature you'd like to see, but I was wondering, what are the current pain points that you see?
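For anyone who hasn't touched the panel's favorite project: Heat takes a declarative template and drives the underlying Nova/Neutron/Cinder calls to realize it. Below is a minimal HOT-style template sketched as a Python dict; in practice this is written as YAML, and the image, flavor, and network values here are placeholders, not a real cloud's IDs.

```python
# Minimal Heat Orchestration Template (HOT) sketched as a Python dict.
# In practice this lives in a YAML file; property values are placeholders.
hot_template = {
    "heat_template_version": "2013-05-23",
    "description": "One web server on a given network",
    "resources": {
        "web_server": {
            "type": "OS::Nova::Server",
            "properties": {
                "image": "IMAGE_UUID",
                "flavor": "m1.small",
                "networks": [{"network": "NETWORK_UUID"}],
            },
        },
    },
}

print(hot_template["resources"]["web_server"]["type"])  # OS::Nova::Server
```

The appeal the panelists allude to is that the template, not a sequence of manual API calls, is the unit users hand over: Heat creates, updates, and deletes the whole stack of resources from it.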
One of the biggest problems you have with your OpenStack deployment. Okay, you guys only get to say one, top pain point. Top pain points are upgrades, of course. Upgrades, yeah. I would say just troubleshooting, as stuff just breaks, and there are no logs to try and figure out what happened. Indeed troubleshooting, and sadly it does break; but a second one, educating users to get their applications cloud-ready. Apps cloud-ready, okay. Explaining Neutron to our users. Explaining Neutron, yeah. Okay, so another big pain point we hear a lot about is scale. I know I see Tim Bell out there, so I know he has a big environment, and I know you guys have a big environment and you're using cells. I know Rackspace has a big environment; they're using cells. Are you guys having issues with scaling, and how big can you get, and how large would you like to be? Well, we scale to hundreds of servers; we haven't seen any scalability problems. At first, when we tried with Havana, there were some issues. Hundreds of servers per region? Per region, yeah, yeah, yeah. Not bad. Yeah, but we are okay with that; as we run private clouds, we don't need to grow that much beyond a hundred. Okay, hey, I just got news that we've got to wrap it up, so thank you guys for doing this panel. Hopefully people heard, and the developers will watch this all later, I am sure. And hey, everybody, thanks to the panelists.