All right, it's about time. It's 12:05 on the nose, so: my talk is called War Stories. What I've got up here is a graph of my annoyance with OpenStack over time. Not really; this is actually a graph of Google search traffic for OpenStack over time. It's been a crazy five years.

So, war stories. Basically, what does five years in the OpenStack trenches mean? What this is going to be is essentially a fireside chat. I'm going to tell you some funny moments in the history of OpenStack, some mistakes that we made, some things we did well, and tie it up at the end with a little bit of a retrospective: what went wrong and what went right over the past five years.

I think the best place to start, of course, is with me. My name is Vishvananda Ishaya; everybody calls me Vish. Based on the name, you can imagine that my background is both in computers and meditation. But I'm not really here to talk much about myself; I'm here to talk about the beginning and the middle and the end.

So, first things first: 2010. I think it's easiest to tell from my perspective. So I started out... here. Who knows what state this is? Raise your hand. Yes, all right: Iowa. We're known for potatoes. Nope, that's Idaho. We're known for corn. I actually grew up in a little town in southeast Iowa, in the middle of what we refer to as Silicon Valley. Great little spot, doing tech stuff in the Midwest.

Back in 2010, I had this interesting opportunity, and the opportunity was to move to California to go work at NASA. Now, why did I need to go work at NASA? It turns out NASA had a problem.
They had 3,000 websites, and all of these websites were running on servers in different locations. Some of them were in data centers owned by NASA, some of them were in rooms sitting underneath one of the NASA centers, some of them were under people's desks, some of them were with hosting companies. So some folks at NASA had this great plan, which was: what if we took all of these websites and united them and put them on a cloud?

Now, of course, it's NASA; it's government. They weren't totally comfortable using public cloud, so they had to develop an internal platform as a service. It was actually called NASA.net initially, and the goal was to use open-source software and create a cloud that could be a consistent platform for these 3,000 websites that NASA ran.

I'm sure you've seen this stack: infrastructure as a service, platform as a service, software as a service. Our goal was to create platform as a service initially, although, funnily enough, we never actually got there, because in order to build a platform you need the underlying components. You need infrastructure as a service. So the initial focus was to create an infrastructure-as-a-service platform to run NASA websites. We searched through all the open-source software out there, and the biggest one at the time was this thing called Eucalyptus.

We actually bought a container, and we called the cloud NASA Nebula. You'll notice that Cisco and Verari were the people that helped us get the hardware. Verari doesn't really exist anymore; it's been renamed to something I can't remember, but they made blade servers. We got this container because the Ames Research Center in Mountain View, California is right next to MAE-West, so it has incredible connectivity, but it had no data center space. There were a bunch of extremely inefficient rooms being used; it was an old Air Force base with horrible heat dispersion, et cetera. So there was no place to stick servers.
So essentially what we did was buy a container, put a bunch of servers in it sitting next to the connectivity that we had, and plug them in. We put a chiller outside. That's actually a satellite dish that you can see up in the corner there. And that was NASA's cloud, our cloud.

So the next thing that happened: we had this great infrastructure-as-a-service cloud, and we started working on the platform layer. And there was this website, USAspending.gov; does anybody remember this website? It came out around 2010, and it was one of Obama's big things. He wanted to give transparency into how government money was spent. Now, this website was like the healthcare.gov debacle of five years ago: it was behind schedule, it was over budget. They had built this huge platform, but they had spent so much money that they had no money left to host it.

And so Vivek Kundra, the CIO at the White House, came to us and said, hey, I hear you have this cloud; can we host USAspending.gov on your infrastructure cloud? To which all of the engineers said: no. We do not want to be responsible for USAspending.gov going down; this is an alpha-quality product, et cetera. But the problem is that you don't really say no to the White House. So very soon we had people from the team at the White House using our internal cloud in that container, running USAspending.gov. So we had actual users; it was great. And we were smart enough to partition that user group off from our research users, so that if there was a failure, it was isolated between the two. And we started building up our alpha cloud. The problem was that once we got to about 40 users, the thing blew up.
So there were a bunch of problems in the early days of Eucalyptus. One was that it had an in-memory queue, and if that queue crashed, the entire system would blow up. When the main controller blew up, all of the compute nodes would sit there and go: oh, I haven't heard from the controller in a while, I'm going to delete all of my running instances. Hey, it's a cloud, right? Totally fine. Not so fine.

Additionally, the other big problem in the early days of Eucalyptus was that they had adopted, early on, this open-core model. We were trying to submit patches to them, and every time we submitted patches they would go: oh, you know, we have something like this in the enterprise version of our product, why don't you just buy that? And we're like, well, we don't have the budget for it. So eventually we had to just completely throw the whole thing out and build something ourselves. The problem is, we didn't have permission to do that.

So April 9th, 2010 was the day that it happened, and there is one other person in this room who was there on that day, and that's Termie; he's sitting back there. This is not a picture of Termie's house, but it's similar to this. Basically we all sat down in Termie's house. There were about four of us there locally and four of us remote, and we decided that we were going to create OpenStack, only we didn't call it that at the time. We decided we were going to create a Python-based cloud service with which we could replace Eucalyptus in the data center.

Now, interestingly, this was the day that I showed up in San Francisco. I arrive in the Mission, I call up Josh, and I'm like, hey, what are you guys doing? Oh, hacking on stuff. Cool, I'll come over. I come over, and it's: so, today we're going to build cloud software, this weekend.
We're going to demo it on Monday. Nice introduction to my first day in San Francisco. And of course we had Termie over here, who types about 75-plus words a minute while coding, on a big, loud, clacky keyboard. So I'm sitting there on my first day going: what am I doing? I don't even know how this works. And he's typing out code extremely quickly, so I had to catch up real quick.

So we created this thing called Nova. Not really a Chevy Nova, more like a supernova. It was AWS in Python. Four days after we started, we were demoing it to the civil servants above us at NASA, who had not given us permission to create it. This was all on our own time, open source, created on the side. Then we got permission from the higher-ups, and in three weeks we had moved all of the alpha customers over from Eucalyptus to Nova. So we were pretty excited: very fast iteration, we did the minimum viable product, we got it working, and everything was great.

So we had a cloud, and the first thing we did was release the code as open source, because we wanted NASA to be able to use it. And somehow Terry, who is also here, found it on the internet. He was working at Rackspace at the time. Now, Rackspace had just created this initiative to basically redo all of their cloud software. They'd rewritten their object storage software, which they called Swift, and they were about to rewrite their compute software; they had brought in Rick and so on, and a bunch of people from Drizzle, to redesign and rebuild their compute software in Python from scratch, using a lot of the same principles. So they went: this sort of seems like kind of the same thing, let's see if we can work together. And together we created OpenStack. So that's the beginning of the story. That was when OpenStack actually became a thing.
We had this design summit in Austin. But there's a lot that happened between then and now, and some of the moments are quite entertaining. For example, the first moment, which I'm calling "call a lawyer." So we had this design summit, and the idea was we were going to take all of the code that we had written, that Nova we had written in private plus some stuff we added at NASA, and release it so that everybody could build it. We were going to have a demo day on, I think, the Thursday of the first design summit. Unfortunately, we didn't actually have permission from NASA legal to release the code yet. So at 10 p.m. the night before the build-and-run-Nova demo that we were all doing the next day with all the coders involved, we finally got permission, after a number of very heated phone calls between Chris Kemp, the legal department, William Eshagh, and various people there.

The way they ended up doing it is, they said: okay, if you want to do this, you have to assign copyright to NASA, because the government can't create copyright. So we had to assign the copyright to the government so that they could open source it, and that was the only way we could get through the legal process. So this is the day before, going through the entire code base, cleaning out all of the initial copyright notices and reassigning them over to the US government so that we could actually release this thing and do a demo of it.

That's Jesse Andrews, by the way, who was involved very early on in OpenStack, in this little company called Anso Labs. Anso Labs was the consulting company where we did all this; that's Jesse and Sue, the two founders of the company. We did all of the work for NASA on OpenStack through a consulting company, because government's complicated: contracting is hard.
Becoming a civil servant is years of work, so we all worked through this contracting company. There's one very interesting moment that happened here, a few months after OpenStack was created. We kind of went: you know what, this looks like it's going to be a thing; it's growing really fast, people might be interested, maybe we could do consulting not just for NASA but for other companies. So why don't we, instead of all being independent contractors, actually turn this into more of a big consulting business and all become employees?

Now herein lies the problem. Some of you may remember the beard of OpenStack, Todd Willey. Todd and I were two of the original people who worked on the code at NASA. He decided to become an employee of Anso Labs before I did, but I signed the paperwork first. So of course we had to have a battle to decide who was employee number one. And being tech people, as we were, we had two different competitions. One was Rock Paper Scissors Lizard Spock, which is quite entertaining, and the other was something called rock-paper-anything. Now, rock-paper-anything is an improv game where basically you can do anything instead of rock, paper, or scissors, and then the audience decides who wins. And I'm happy to say that I won; rock-paper-anything was the last battle, and I won it. The thing that I threw in rock-paper-anything was Todd's beard. So I actually beat him with his own beard. Pretty impressive. So I became employee number one of Anso Labs.

Anso Labs later joined up with Rackspace; we got bought by Rackspace, which I'll get to in a bit. So, what were some of the early challenges that we had to deal with?
Probably the most interesting one, right off the bat, was that we had to integrate the Xen hypervisor. OpenStack was KVM-only, and Rackspace had a lot of investment in Xen, so they wanted to use Xen as the hypervisor. That meant we had to abstract the back end significantly so we could support multiple hypervisors. That was a challenge. It didn't take a huge amount of time initially, but I feel like it's cost us a lot over time to have done this, and I'll explain the trade-offs later when I get to the retrospective part.

The next thing was, we had this whole other code base out there called Swift, and Swift had made a couple of design decisions that were a little bit different from ours, and we were trying to create a unified product of some sort. One of the big things Swift had done is that all of their asynchronous programming was done using eventlet. We were using Twisted in Nova. So we decided: let's unite, let's use the same libraries and the same frameworks, since asynchronous programming is tricky and we can benefit from sharing the same expertise. That involved rewriting another big component of Nova early on to make it work properly with the eventlet code base.

The next really entertaining thing was the API. We initially had just a clone of the AWS APIs. Rackspace was like: hey, we should create a new shared, open set of APIs called the OpenStack APIs. Version one of the OpenStack APIs was essentially a clone of the Rackspace API, which we then iterated on. And of course the Rackspace API supported XML in addition to JSON, which sparked a whole bunch of debate, because people did not like supporting both. There was one very famous quote by Jay Pipes during, I think, the second summit, which was: "Get your dirty XML off my JSON." So we were opinionated. There were things that we believed in, and we made compromises and trade-offs throughout the whole life of OpenStack. I mean, that's what being a community
is: you start with one thing, you debate it, you make compromises, and you try to do the best you can.

The next big challenge was bzr versus git. A bunch of the principles we inherited from Ubuntu, like the time-based releases and the development process, meant that initially we were using Launchpad and bzr for everything. A bunch of the developers who were used to git, and also GitHub, were very frustrated with the bzr workflow, because it was a little bit slow and it just didn't look as pretty in some respects. Unfortunately, we really liked the project management features that we got on Launchpad for bug tracking and merging of code and things like that, and so for a long time we were kind of stymied; we couldn't move over to git because we needed those features. Until Monty, and I think Jim was involved in this, after a while finally said: okay, we're going to move everything over to Gerrit. And we managed to move everything from bzr to git. So those of you who work on the code base today are happily able to use git all the time and go through the fun code reviews on Gerrit, which I'm sure we all really appreciate.

The next really interesting part of the story, this is after Anso Labs joined Rackspace, was PTLs. Initially there weren't any PTLs; in fact, there wasn't a technical committee, it was called the Project Policy Board. But we needed a way... and unfortunately, I'm sorry to all of you who have been PTLs or will be PTLs: the PTLs are my fault. One day at Rackspace I was sitting with a few people; I don't think Mark was there yet, so it was probably John Purrier and Jim Curry, some of the original higher-ups at Rackspace. And I said: you know what we need? To make some of these technical decisions,
we need a kind of single point of contact. We need someone to go talk to: if one project needs to get something from another project, there's got to be one person there. They could be like the tech lead for that system. It would be really great to have one of these things.

Now, when I suggested it, I had no desire to do this myself. I was actually hoping Jesse was going to come in and be the first PTL for Nova. And basically as soon as I suggested it, everybody said: oh, that sounds great, you should totally do that. No, wait a second! You missed what I was going for here; I didn't want to be this person, I wanted someone else to do it. And, unfortunately, I did become the first PTL for Nova, which was quite a lot of work, and more than I expected at the time.

So the next round of startups happened a little bit after that. We had a couple of them, both founded by people who were originally at NASA. Piston started: Josh and Gretchen were both from NASA, Chris was from Rackspace. At the same time, or a similar time, Nebula started, which was Devin and Chris, who were both at NASA as well. So we had a lot of this original NASA group going in these new directions, trying to turn OpenStack into a product and sell it. That was kind of cool.

The next really interesting thing that happened was Keystone. Initially OpenStack didn't have any identity outside of the AWS access key and secret key, which were embedded in Nova. We needed a generic identity framework that all the services could talk to, and so we decided to build this thing called Keystone. It was loosely modeled on the Rackspace identity API, with a few changes. Unfortunately, the people who wrote it had all written the Java implementation, and so they put in a lot of different layers of abstraction. It was very un-Pythonic, and after about three or four months it was just a total mess: you couldn't add any code to it, because you had to write it in six
different places. So we got this crazy idea to create this thing called Keystone Light, and Termie essentially wrote it by himself; I'm sure he was at least two drinks deep for most of it. He basically rewrote the whole thing in a Pythonic manner, so that we actually had a way to build it along the OpenStack principles a little bit more easily. And we managed to replace the entire code base. We had essentially one commit that replaced the entire old code base with the new one. So we had to have testing to ensure that we didn't break anything between the versions, and it was one of the great switcharoos that we pulled throughout OpenStack. It took a lot of work, on both the coding side and the community outreach side, to get people to buy into the idea that this was needed. And then of course Termie passed off the mantle of Keystone to Joe Heck; since then it's gone to Morgan Fainberg, and I think there was someone else in between who I'm forgetting... but yes, Dolph, right: Dolph, who is a great early developer from Rackspace working on the code base.

I consider that a great success: we actually managed to get Keystone switched over to a better implementation, kind of under the surface, without everybody completely losing their minds. I don't think it's something we could actually do today, because there's a lot more code and there are a lot more people involved.

Soon after that, I realized that one of the problems with being a PTL is that you don't have enough time to do anything else.
I was doing deployments with Rackspace, and I actually went to Paris to help eNovance do a deployment in France. At the same time, we were ramping up for the release right before the Essex summit, so I think it was the Folsom release, perhaps. And so I was doing a full day of work on the deployment, then coming back to my hotel room, and since I was five or eight hours offset from the US, I'd basically do a full day of work back home, merging patches. I think I was doing about 30 reviews a day at that point. Then, immediately after two weeks of doing that, working 16-hour days, I flew back, and the design summit started the day after I arrived. So I got about four hours of sleep and then was there for the whole design summit.

After that three-week period, I realized that I couldn't do PTL duties and another job; it's a full-time job, which is good. It showed that OpenStack had grown to the point where it needed someone doing that full-time. And so I have, of course, a whole lot of respect for everyone who has become a PTL and is working on things.

The next step in the OpenStack evolution was that we realized the teams were having trouble scaling. Nova had gotten up to about 60 or 70 developers at this point, and it was very hard to manage that many people across so many companies, which is another thing I'll get back to in the retrospective. So we decided to create this thing called Cinder. We actually forklifted all of the volume-related code from Nova into a new project that replicated the same APIs, and it was interchangeable. You could, in one release, be running nova-volume, and in the next release be running Cinder, and you wouldn't know the difference; then we iterated from there. It was actually our most successful breakout of a project so far. It worked out really well. John Griffith was highly instrumental in doing that.
He was the original PTL, and now, of course, as you all know, Skrillex, I mean Mike Perez, is the new PTL.

And about this time... so we were getting kind of beat up in the press throughout this whole period, because Rackspace was essentially the main one running the show, and people were like: is it really an open-source project if there's one company behind it running everything and they're in control? We felt like it was causing a lot of other companies to not want to be involved. And so the next thing that we did was we created the Foundation. Nope... no... there we go: the Foundation. This had been being worked on for a while, and I think it's been another great success of OpenStack, although, interestingly, it didn't make the press any happier, because now they complain that there are too many people involved and there's no clear direction, since one company isn't running it. So of course the press changes its mind all the time.

After about two years of being OpenStack Nova PTL, I finally decided that I'd had enough; two years' worth of cat herding was enough for me, and I decided to pass the torch on. So I passed the torch to Russell, and I think we've gotten into a pretty nice cadence now at the different projects where, to prevent burnout, people do it for a year or two and then move on and let someone else take over. So Russell passed on to Michael Still, and Michael Still has now passed on to John Garbutt. So we have a nice rotation going through different companies and different experts, so that each person can bring their own expertise to the project. That's working out quite well.

And then, of course, came the grand project explosion of 2013, where essentially every company wanted to get involved, because now we had this wonderful foundation and everybody could be involved. But it was very hard to contribute to existing projects; you need a lot of existing knowledge, and sometimes the team
was so big, and the review queues were so long, that every company said: well, I'll make a new project. So we had about 10 or 15 projects show up in the space of a few months. In fact, at one point I joked that what I should do is create a form letter for declaring a new project on the mailing list that people could use: you know, insert name here, insert title here, insert goals here. Because I think we really went a little bit overboard in terms of the number of new projects we were creating. I mean, it was great, everybody was excited, but it meant that suddenly it was very hard to manage: all of the shared summits became more difficult, how we ran things on the TC to bring projects into OpenStack became more difficult, and so we had to adapt.

And right in the middle of this addition of all of the new projects, OpenStack started to move from dev/test into production in a lot of environments. Once we hit about Grizzly, Havana, Icehouse, that's when people started saying: okay, I've been testing this for a while, I think I'm going to put it in production now. And we did some things in the code base to really help with that; for example, in Nova we made upgrades a little bit easier. But overall, people had enough familiarity, and the APIs were stable enough, that they said: okay, we're actually going to start sticking this in production workloads. Which is why we had a huge uptick around then in searches and in people getting interested in OpenStack: people were actually starting to use it for real workloads.

And then, unfortunately, we had the great Neutron debacle. If you aren't familiar with the great Neutron debacle, this t-shirt from the last summit
(or it might have been two summits ago) kind of sums it up.

So the issue was, initially all the networking code was in nova-network, and instead of kind of forklifting that code out, like we did with Cinder, we had a bunch of people come in and write a more SDN-focused, from-scratch networking implementation that supported a whole bunch of features, but unfortunately didn't map perfectly to the features that were in nova-network. And so the main problem was that people had a huge amount of difficulty migrating, and people are still dealing with this today. The newer installs are okay, because almost everybody who installs now starts out with Neutron, so they don't have to go through this migration. But there are a bunch of legacy installs that are going through the pain of: how do I make the transition, keep my performance, keep my feature set? And that's something that's still making a lot of consultants money, I'm sure.

But we realized in this process that the best thing we can do, community-wise, is to be inclusive of everyone. If new companies want to come in and create new projects, we need to find a way for them to be a part of OpenStack without necessarily causing a burden on the other projects, and to find a way for them to contribute and fulfill their company guidelines, et cetera. So over the past six months to a year, we've been working on this thing that we call the Big Tent. The Big Tent is essentially finding a way to include more people, and it's a subtle change, in that the focus of OpenStack becomes about the people, as opposed to about the projects. When you say OpenStack is about the people, you can be really inclusive; when you say it's about the projects, then you have to declare whether a project is good enough to be part of OpenStack, which is not something anybody really wants to be involved in from a community perspective. So that was one of the great successes, actually finally pushing that through. It took
about a year and a half, working on the TC, in discussions, trying to get this to happen.

And that brings us to 2015, where, you know, we have the biggest summit ever, again. We might finally break that streak at the next one in Tokyo, because I don't think that venue is big enough to hold more than this one has. And we're working on Liberty, which I think is kind of a fitting name, freedom, for the next release.

So, what can we learn? What did we do wrong throughout this development process, throughout the community-building process? What were the mistakes that held us back, and what were some of the things we did right?

First of all, things we did well. I think, from the beginning, we had a pretty clear vision that people could buy into and understand about what we were trying to build: we wanted to build cloud software, right? We had a great marketing presence, and that, I think, ties in well with the clear vision. Most of the people who got involved with OpenStack early didn't get involved because the code was stellar. It didn't do much at first. I mean, I told you: we wrote it in a weekend, right? Three weeks later we were announcing it, right?
So it's not like there was a lot there initially. But people loved the idea of collaborating on something that could be running all of the infrastructure of the future. That was a pretty powerful idea. People got involved because of the idea and because of the vision. Our marketing presence was excellent: the people at Rackspace, and later at the Foundation, who were responsible for marketing really did a good job of playing up that story, making it accessible to everyone, and getting everyone involved.

The CI framework that we built was awesome. I mean, there's no way we could get to the scale that we're at without that being there. Up to about 50 developers, you can kind of do whatever you want; once you're over 50, you really need some things in place to automate a lot of the testing and merging process. And that actually works quite well. We run into problems, it's slow sometimes, reviews take a while, but we did a good job, in terms of the back end, of making those tests run.

And I think the best thing we did was how we built the community. All of you guys and gals are here because we inspired the community to come and participate: doing open development, doing design summits, doing everything on the mailing list, using Blueprints and Gerrit. All of these things that allow everyone to come in and participate and be a part of the idea of OpenStack are really powerful.

Okay, so where do we fall short?
These are personal opinions. I think one of the earliest mistakes we made is that we made it way too pluggable. It's always a tough trade-off, because you want to be inclusive, you want everybody in the community to participate, which means you want to solve every use case. But the problem is, if you solve every use case, you don't do anything well. And we, early on, started adding too many APIs, too many back ends, too many points of integration and pluggability: 600 different configuration options in Nova. Really, I'm responsible for almost all of those, by the way. So that was a mistake.

I think, if we had known where OpenStack was going, today we probably would have picked a different language to write it in. It turns out scripting languages are really good for small-scale projects that are probably under a hundred thousand lines of code and have smallish development teams working on them, and you get a huge advantage from the agility, from being able to code and prototype stuff very quickly. The problem is that once you get over a certain size, it actually gets in your way to be in a dynamically typed language, where the compiler isn't helping you out. I've had many times where I spent three times as long fixing the tests as I did actually writing the code to solve the problem. So if we had known that it was going to be at the scale we're at today, with the number of different people trying to be involved, I think we could have made some different choices around tooling and languages. But we didn't know, and we wanted to build fast, and that's what happens. It's not something that necessarily dooms the project; it just causes a little bit of slowdown, especially at the size we are today.

Another big problem, which is still being discussed, is that there hasn't been a very strong product focus in OpenStack. We built a toolkit.
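The pluggability point is easiest to see in code. Here's a minimal sketch, with hypothetical names rather than actual Nova code, of the config-driven driver-loading pattern that those hundreds of options fed into: every option like this multiplies the back-end combinations you have to test and support.

```python
import sys

class FakeDriver:
    """Stand-in back end; the real choices were libvirt, XenAPI, VMware..."""
    def spawn(self, name):
        return f"spawned {name}"

class OtherDriver:
    """A second back end: now every code path has two variants to test."""
    def spawn(self, name):
        return f"spawned {name} differently"

# Hypothetical flat config; in Nova this came from nova.conf, alongside
# hundreds of other options selecting schedulers, network managers, etc.
CONF = {"compute_driver": "FakeDriver"}

def load_driver(conf):
    # Nova resolved a dotted import path to a class; here we look the
    # class up in this module to keep the sketch self-contained.
    cls = getattr(sys.modules[__name__], conf["compute_driver"])
    return cls()

driver = load_driver(CONF)
print(driver.spawn("vm-1"))
```

The flexibility is real, but so is the cost: each pluggable point doubles or triples the supported matrix, which is the "solve every use case, do nothing well" trade-off described above.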
We built a framework. And we built that because we have all these different companies, each with their own agendas, and they all need a way to either make money or succeed with OpenStack, to solve business problems with OpenStack. So we needed to support all of those different configuration options. But it means that there isn't a clear product, and a lot of startups and companies have come about because they're trying to opinionate OpenStack into a clear product. Having spent the past few years at a company trying to sell that, and not necessarily succeeding: it's difficult to pick a product that actually meets all of the requirements people are trying to get out of OpenStack. So it's actually a hard problem, but I feel like we could have done it better than we did. At the least, we could have picked a smaller set of things to work on and said, this is what we're going to do, and only this, and then moved into other verticals and areas later.

And the final thing that's been a big challenge for us is that we haven't figured out how to do team scaling yet. Once we get over about 20 people on a team, it starts getting pretty messy: the codebase gets too big, the review queue gets too long, and essentially the only solution we've come up with so far is, okay, let's take this project and split it up into smaller projects. That eases the burden a bit, but it seems like there must be a better way for us to learn how to collaborate and work together successfully in larger teams. Some of that might be tooling; some of that might be better communication. I don't have the answers for what we need there, but that's definitely a place where we could use some better choices.

So I think that's the end of my talk, unless anybody has any questions, because I've got, I think, ten minutes left. Yeah, over here. Ah, it's a tough question. So the question was: what language would I use if we were to start over?
I think probably Java, which is going to scare a lot of people, because it's not very open-source friendly. On the other hand, the libraries work, it's statically typed, and people know how to build large pieces of software in it, so it's got a long track record of doing that. Of course, personally I would love to build it in something new like Go or Rust, but I think you'd run into some of the problems we ran into with Python there, where you don't have good libraries and you're kind of having to build a lot of stuff from scratch, because they're not fully developed yet.

There's a microphone at the back here we could use, because we're recording. Please, sure, if you want to go to the mic, or I can just repeat your question. Go ahead. Oh man, I don't do predictions. So the question is, what happens in the next five years? I don't know. There are obviously a lot of trends happening around this space that are interesting, but I will make one question-mark prediction. There's an assumption people have made that private cloud is a useful thing, and I'm starting to question that. I wouldn't be surprised if private clouds actually start dissipating (I'm sure there will be some here and there) and more and more people go to public clouds, because I think people are becoming more comfortable with the security of public clouds, and the real IT savings come when you don't have to manage your own data center, or you can offload a lot of that data center management to someone who's doing it at a larger scale. So it'll be interesting. But I definitely found selling private clouds to be a challenging bit of work. I mean, we had some sales cycles at Nebula that were over a year, and I think part of the reason is that it's not necessarily solving a real problem people have as it is today. Now, if it moves into more of a private platform as a service,
like solving the original problem that we had at NASA, which is how do we get all these 3,000 websites onto the same framework? That might be something that exists in business. But I wouldn't be surprised if we see the trends going more towards public cloud in the future. So that's the one opinionated view of where we might be going over the next five years, but it's really hard to say. I mean, there are new projects, people are getting super excited about containers, and everybody wonders how they're going to fit in. What's that? Yeah, everything's going to be containers, quoting Rafi.

Okay, one more question. Who's got one? Okay, Tim, over here. So the question is that there seems to be a disconnect between business goals and the marketing focus of OpenStack, and I think that's something that is being addressed; there's a lot more product-development communication happening. In fact, there's a talk this afternoon about product management getting together and figuring out what the actual problems are and how to communicate them to the developers. You know, no one's really done an open-source project in this manner before, so we're kind of figuring a lot of things out about how you manage a team of 500 developers all working for different companies with different goals, right?
I think it's getting better. One of the main problems historically is that our marketing has been directed almost entirely towards operators and developers, developers in the sense of people who are going to work on OpenStack instead of use OpenStack. There's kind of a movement happening now of focusing on end users, on people trying to use OpenStack, and treating that as a valuable thing to focus on from a marketing perspective, which is great for all the companies trying to sell OpenStack, for sure. So I think those things are becoming more aligned over time, but we'll just have to see.

What's OpenStack's stand on Amazon Web Services? I don't know if we have one. They're a cloud. Do you have more? I mean, it would be great; Amazon is welcome to join the community at any point, and we've always been happy to let them, but it hasn't happened yet.

Okay, one more, back here. In the very beginning, why did we choose open source? So initially, obviously, we weren't trying to build open source, but we were trying to do it on the cheap at NASA. It wasn't a program that had any budget initially; at NASA it was an idea with no budget. Now, if you're familiar with government contracting, it often works like that: you have an idea, and you pull budget from various places and kind of make it happen. I don't know where the money for the container actually came from; it was probably budgeted for some rocket at Lockheed Martin, who knows? So, the IT contract, there you go: it was under the NASA Ames IT contract as a subheading somewhere. So we didn't have a lot of money, which is why we couldn't go buy a commercial product or pay for an open-core open source project; we needed to do it cheap. And we had hoped that the existing open-source solutions out there would solve the problems, and they just didn't. It wasn't really... no, it was budgeting. That's it. Okay, thank you so much.
I hope I provided some entertainment for you.