Hi, welcome to Superuser TV. Gentlemen, why don't you introduce yourselves?

So my name is Matt Van Winkle. I run operations for Rackspace's cloud servers.

My name is Jill Priest. I'm an engineer for the Rackspace Public Cloud. I've been there for about four or five years now.

Welcome, Matt and Jill. We appreciate you coming on. We wanted to talk with you because Rackspace runs the largest and oldest OpenStack-powered public cloud. What lessons have you learned running that large an OpenStack cloud?

I would say the biggest thing we've learned from operating, especially at the scale we're working at, is that you have to be very diligent about your implementation of OpenStack: really make sure you're crossing your T's and dotting your I's, and document everything as you go. And especially when we're pushing the envelope as we have, the biggest lesson we've learned is how to fail fast and recover quickly, get those changes made, work with the community, get them pushed upstream whenever possible, and just iterate, iterate, iterate, and continue to improve. As long as you're adaptable, OpenStack gives you the platform you need to build the cloud you want to build.

Yeah, I would just add that we come to things like the Summit and talk to different folks running large clouds like our own. In some cases they have the luxury of being very aware of the application running on top of their cloud, and therefore they have ways to deal with failure by redeploying or orchestrating around it. In our case, we don't know what our customers are running; we just know they're there. So we have to be very focused on how to minimize the impact of any issues, and issues will occur. That's probably one of the biggest challenges we've personally learned running a public cloud.
Great. And even today, what would you say are the biggest challenges in trying to build and run that public cloud on OpenStack?

Clearly, scale is always a challenge. Project interaction, too, especially as you get really large: we constantly have to look at the way Neutron behaves with respect to Nova, because a change on one side can affect behavior on the other. What else would you add?

A component of scale, I would say, is that because we're such a large implementation, there are certain changes coming in from upstream that are just extremely difficult, if not impossible, to test at Rackspace scale. We're doing work now to improve on that with the OSIC project, and that's definitely helping. Changes come in that might work great for a smaller cloud, or that maybe don't leverage all of the features we're using to make our scale work, like cells, for instance. So we have to be really diligent about taking those changes in from upstream and making sure they cohabitate with the things we need to do and our actual implementation. That's a big one for us, along with just being able to test, test, test.

Yeah, one that has surprised us a couple of times is that changes to queries in a trunk pool, multiplied by thousands of compute nodes, actually generate more network traffic than you would normally think. It's typically in weird ways like that that we find these problems: instead of the code simply not working, it's "oh, we're now saturating certain parts of our network because of all this extra traffic generated by this one query being run thousands of times every minute."

Great. OpenStack is obviously a very fast-moving, rapidly evolving project. What do you see coming down the pipe that you think would be helpful to you as you run the public cloud? You first.
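The amplification effect Matt describes can be sketched with simple back-of-envelope arithmetic. The numbers below (fleet size, query payload, frequency) are purely illustrative assumptions, not Rackspace's actual figures; the point is only that a query that looks negligible on one node becomes sustained load when run fleet-wide.

```python
# Illustrative sketch: how a small per-node periodic query multiplies across a
# fleet of compute nodes. All numbers here are hypothetical assumptions.

def aggregate_traffic_mbps(nodes, queries_per_min_per_node, bytes_per_query):
    """Aggregate traffic generated fleet-wide by a periodic query, in Mbit/s."""
    bytes_per_sec = nodes * queries_per_min_per_node * bytes_per_query / 60
    return bytes_per_sec * 8 / 1_000_000  # bytes/s -> Mbit/s

# One 50 KB query per minute is invisible on a single host...
single = aggregate_traffic_mbps(1, 1, 50_000)

# ...but the same query across 10,000 compute nodes is constant background
# load on whatever shared links and database servers sit in the middle.
fleet = aggregate_traffic_mbps(10_000, 1, 50_000)

print(f"1 node:    {single:.4f} Mbit/s")
print(f"10k nodes: {fleet:.1f} Mbit/s")
```

The effect is linear in node count, which is why, as noted above, such problems tend to surface only at large scale rather than in small test clouds.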
Cells V2. We're really excited about that, especially that it's going to become part of the default installation of OpenStack, so everyone will be using cells. One thing I've noticed personally, coming to Summits for the last few years, is watching a lot of companies expand their OpenStack footprint and start to see, "oh, there is a need for this; we do need to take these cells and scaling concerns into consideration," and it's becoming more accepted within the community as a whole. So I'm really excited for those changes. The scheduler updates, too: I'm really excited about some of the work going on in the scheduler, and the continuing refinement of base pieces like Neutron. Also scheduling that's aware of Neutron IP availability, and some of the improvements on the network side, like network segments. Those are the big ones for me, but yeah: cells, cells, cells.

Outside of the technology changes coming, I think the working model has really been refined as well, with the working groups coming out of the user committee. We're both involved in the Large Deployments Team, and the segmentation conversation around Neutron specifically was born out of common needs among a lot of large deployers. So for me, it's the fact that we're actually seeing users, especially users of large clouds, getting engaged and creating change in the process. To me, the more that happens, the better off we're going to be as OpenStack continues to grow.

Great. Well, thank you, Matt and Jill, for joining us, and thank you everyone for watching Superuser TV.

My pleasure.