I'm Mark Collier with the OpenStack Foundation, and we're here to talk, of course, about OpenStack. I've got three panelists with me today, so I'm going to let them all introduce themselves and then we'll get started. Anand, do you want to kick it off?

Sure. I'm Anand Palaniswamy, with PayPal. I'm part of the small OpenStack engineering team there, and responsible for some of the high-level architecture as well.

Great. You guys continue with the other introductions; I've got a bit of a conflict in this room, so I'll mute here and come back to you in a little bit.

Okay. I'm Das Kamhout. I work in Intel IT. I'm a principal engineer responsible for all of our cloud strategy and execution, and we've been utilizing OpenStack since probably about 2011, maybe a little earlier. My background is in our design grid and in running our enterprise private cloud environment.

And I'm Guillaume Aubuchon, Chief Technology Officer for DigitalFilm Tree. We're a post-production facility for television and film, and also a media and entertainment technology consultancy, and we're currently using OpenStack to serve the feature and television shows we're working on.

Sure, so let me start again, actually, to give some background about PayPal itself. We started a year back, and we have been quite successful in applying OpenStack at production scale. Of course, we have had our share of production and large-scale issues along the way that we can talk about. But we are very happy: we started a year back, and we are running serious business on it.

Great, great. And I know, Guillaume, you all produce some pretty cool shows that a lot of our OpenStack fans may be familiar with, like Modern Family and quite a few others. What types of shows are you doing with all this OpenStack kit?

So actually we have a software development team that produces software for use on our television and theatrical projects, and we've used that software in conjunction with OpenStack on a new Joaquin Phoenix movie, Spike Jonze's feature called Her, which just debuted; NCIS: Los Angeles; Cougar Town; Modern Family; and a couple of new shows called Surviving Jack, Undateable, and Ground Floor. We're using it for video streaming, and also doing some hybridization between private cloud at individual studios, like Warner Brothers, and public cloud on Rackspace.

That's very cool. Infrastructure-as-a-service clouds get used for a lot of things in our daily lives that people don't realize, and those are some pretty awesome ways we can impact the world and the entertainment industry. So one of the things we wanted to do today is really talk about Havana. We have a new release of the software every six months, and the latest, released last week, is called Havana: over 400 new features, and a lot of new capabilities like orchestration and metering that are really making this solution more mature. So I'm curious what each of you is looking forward to in Havana. Have you taken a look at the capabilities yet, and what are your impressions so far? Anand, what do you think?

Yeah, so we are really excited about the Havana release. In fact, before Havana was even released, we cherry-picked some of the patch sets and backported them into the code base we are running in production, as part of the current release cycle itself.
We are eagerly looking forward to some of the changes, specifically around Neutron, where we had faced performance issues across a large number of ports. We're really excited about that, and also about the enhancements around Heat and a lot of the advanced networking services, like LBaaS and the firewall service. Some of it is not yet fully in line with what we are doing in production today; we are going to be contributing a lot of real-world use cases to the design summit sessions, but it's a good starting point for our basic use cases, and eventually we'll get there in the next release. We are also very much interested in upgrading to Havana in the first quarter. Right now we are very close to our holiday readiness, and we can't go and put a whole new code base into production and risk breaking things at the end of the year. So we are planning on the January timeframe to upgrade to Havana. We are already running Havana through our internal CI/CD to validate it against our internal networking and provisioning, but the upgrade will happen in the January timeframe.

Okay, that's great. And Das, I know you're an engineer at Intel responsible for a lot of big OpenStack clouds running there, and we were talking recently about some of the types of workloads you're running and how those might be unique, or I guess more traditional, in terms of enterprise workloads. Can you tell us a little bit about that and where you see Havana coming into play?

Yeah, totally. So when we first started bringing OpenStack into our environment in 2011, obviously it was still being built. Everybody knows the pets-versus-cattle discussion: pets and cattle, or maybe not, depending on what country you're from or whether you're a vegetarian. When we initially did it, it was all focused on cloud-aware apps, things that didn't need resilience from the infrastructure. They could use things like anti-affinity rules to run the app across many machines, and if we lost a node it didn't really matter. But we knew there would be enough interest from enterprises to take these capabilities and apply them to our traditional applications. Sometimes I call them legacy apps, but these are apps that require the host to not go down; the servers are built to always stay up. So we're pretty excited that things like boot from volume have progressed quite a bit (sketched below), plus live migration, evacuate, basically a number of capabilities that let an enterprise shop that needs its virtual machines to stay up actually keep them up. Another area that's important to us is SSL encryption: all the APIs, all the interactions must have encryption, so we're excited that's moved forward quite a bit. And last but not least, orchestration and billing are huge for us. Metering that works properly is really good, and taking Heat and moving it forward is hugely important for us. So Havana is moving forward well. And maybe just one last point: most enterprises are used to new products about every year, year and a half, and they're not used to fast change. When we first got into OpenStack we knew the six-month cadence would let us see innovation happening quickly, and I think it's proving itself with things like Trove coming in, Savanna coming in.
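As a minimal sketch of the boot-from-volume call Das mentions above, assuming Havana-era python-novaclient; the credentials, endpoint, flavor, and volume ID are placeholders, not anything from Intel's environment:

```python
# Boot an instance whose root disk is a Cinder volume (Havana-era
# python-novaclient v1.1 API). All names and IDs here are placeholders.
from novaclient.v1_1 import client

nova = client.Client('admin', 'secret', 'demo-tenant',
                     'http://keystone.example.com:5000/v2.0')

# block_device_mapping maps a device name to
# "<volume_id>:<type>:<size>:<delete_on_terminate>".
server = nova.servers.create(
    name='db-node-01',
    image=None,  # no image: the root disk comes from the volume
    flavor=nova.flavors.find(name='m1.medium'),
    block_device_mapping={'vda': 'VOLUME_UUID:::0'},  # keep volume on delete
)
print(server.id, server.status)
```

Because the root disk lives on shared Cinder storage rather than the hypervisor's local disk, operations like live migration and evacuate can move the instance without losing its data, which is the property Das is after for traditional apps.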
So we're seeing a lot of fast-paced introduction, which is what we need in this community to keep it innovating.

Great. And Guillaume, I know you're with DigitalFilm Tree, and while in the cloud world we often talk about production and production workloads, you have a different type of production there in LA, actually producing television shows. I think we could hear your walkie-talkie going off, so if we're getting in the way of your Modern Family production, we've got to make the right priority call. But why don't you tell us a little bit about what you think of Havana and how you're using OpenStack?

No, I mean, Havana is a tremendous step forward for us. We were using Grizzly, and Folsom before that. With Havana, especially with its orchestration ability, we're able to package an OpenStack for a studio implementation in a way that makes it a much less daunting task for them to stand up their own private cloud environment. In addition, the additional security capabilities are of tremendous value, and have really enabled us to go to studios, which are obviously very paranoid about their content, and show them how OpenStack, even though it's an open standard and an open source community, can be an absolutely secure environment for their content.

Yeah, there have been a lot of improvements on the Keystone front for authentication, as well as encryption on Cinder from a storage perspective, and quite a few other storage enhancements that I think are interesting. In terms of the big new projects that became integrated services in Havana, specifically orchestration, i.e. Heat, and metering, i.e. Ceilometer, I wonder which of you wants to chime in on that, and whether there's anything you want to say about your plans there and how that might work into your deployment in the future.

Sure, so let me start with what we are doing currently, what our future plan was, and how Havana really aligns with some of the plans we had in mind. We started using Heat with Folsom itself; in fact, we backported some of the Heat features from Grizzly into Folsom and made it work for some of our use cases. Then we upgraded to Grizzly, and now we are at a stage where we need to scale out. Looking at our deployment architecture today, it is active-standby, and we are not very comfortable running active-standby; we want to run an active-active cluster in each and every cell we deploy. And it is not only our OpenStack resources that we manage on top of Heat: we also orchestrate non-OpenStack components, like our code deployment, our firewall automation, and our dependency service checks. So we are using it very extensively right now, though we have not rolled it out across the entire site for all our applications; we are piloting with close to 20 applications today for automatic deployment, without human intervention, and for flexing up or flexing down as part of auto-scaling. The CFN tools are there, and of course there are some issues in terms of performance: we ran some performance testing, and it holds up to around 70 concurrent deployments at any point in time.
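The CPU-threshold auto-scaling Anand describes maps onto Heat's CloudFormation-compatible resources in Havana. A minimal sketch, assuming python-heatclient and placeholder endpoint, token, image, and sizes (not PayPal's actual templates):

```python
# Launch a Heat stack with a CFN-style auto-scaling group, a scale-up
# policy, and a CPU alarm fed by cfn-push-stats on the instances.
# Endpoint, token, image name, and sizes are all placeholders.
from heatclient.client import Client

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "LaunchConfig": {
            "Type": "AWS::AutoScaling::LaunchConfiguration",
            "Properties": {"ImageId": "fedora-cfntools",  # image with CFN tools
                           "InstanceType": "m1.small"},
        },
        "WebGroup": {
            "Type": "AWS::AutoScaling::AutoScalingGroup",
            "Properties": {
                "AvailabilityZones": ["nova"],
                "LaunchConfigurationName": {"Ref": "LaunchConfig"},
                "MinSize": "2", "MaxSize": "10",
            },
        },
        "ScaleUpPolicy": {
            "Type": "AWS::AutoScaling::ScalingPolicy",
            "Properties": {
                "AdjustmentType": "ChangeInCapacity",
                "AutoScalingGroupName": {"Ref": "WebGroup"},
                "ScalingAdjustment": "1", "Cooldown": "60",
            },
        },
        # Instances report CPUUtilization via cfn-push-stats; when the
        # average crosses 70%, the alarm triggers the scale-up policy.
        "CPUAlarmHigh": {
            "Type": "AWS::CloudWatch::Alarm",
            "Properties": {
                "MetricName": "CPUUtilization", "Namespace": "system/linux",
                "Statistic": "Average", "Period": "60",
                "EvaluationPeriods": "1", "Threshold": "70",
                "ComparisonOperator": "GreaterThanThreshold",
                "AlarmActions": [{"Ref": "ScaleUpPolicy"}],
            },
        },
    },
}

heat = Client('1', 'http://heat.example.com:8004/v1/TENANT_ID',
              token='KEYSTONE_TOKEN')
heat.stacks.create(stack_name='web-asg', template=template)
```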
Seventy concurrent deployments at any point in time is more than we expect, so that is a huge win for us. We also brought down the active node and tried to fail over, and it took some time to recover, specifically from the message queues: there were a lot of messages that had not been drained to the standby node. So it took around 10 to 15 minutes to recover, but I'm expecting that a lot of the changes in the Havana release will solve most of the problems we encountered there. And we're already using the CFN tools for auto-scaling. We use only CPU as the threshold, but we want to include other attributes as well, like memory, network consumption, and what is happening at the load balancer: how much traffic is coming in for the application cluster. We want to factor in a lot of other attributes to decide when it's a good time to scale up or scale down. So I'm sure the Ceilometer integration is going to be very useful for that, because we are already collecting so much data as part of our own metrics, and if we can feed all of it into the decision-making process, it's going to be very, very useful for us.

So you've been running auto-scaling in your OpenStack environment, just more simplistic as far as the inputs that drive when to scale up or down, and now with Ceilometer you'll be able to measure quite a bit more and have a more sophisticated auto-scale.

Yes, exactly.

That's great. Das, I know we were talking earlier about Heat and what you guys are thinking about doing there.

Yeah, so any large IT shop needs to fully embrace automation, and if you look at most enterprise IT shops, they don't have a lot of automation. I think with what Anand's doing at PayPal, as a web-scale shop they're probably very familiar with that automated space, and same with the studios. Heat is great because it gives our guys and gals the ability to describe their environment and roll out environments en masse. We're big on standardization, so giving people a template for how to build a proper application is pretty important for us. And the connection with Ceilometer is pretty beneficial as well. Heat actually needs a little more improvement for us, though: since we boot from volume to enable things like live migration, we need Heat to support that as well, which is coming. Another thing an enterprise IT shop needs is capacity management, and I think Ceilometer really helps in that space. Especially as you move to larger and larger scale, you want to make sure your utilization is really strong. We run a huge Linux design grid inside of Intel today, and we've spent a lot of resources on optimizing each percentage point of utilization. We expect to do the same thing with OpenStack and Ceilometer. So the advances are great, and they really help us run a well-orchestrated infrastructure for our tenants, our consumers, and let them build really cool stuff on top. We're excited about the changes there.
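The multi-metric scaling Anand wants rests on Havana's Ceilometer alarm API. A hedged sketch with python-ceilometerclient, alarming on network traffic instead of CPU; the endpoint, token, webhook URL, and threshold are placeholders, and the exact field layout is assumed per the Havana v2 alarm API:

```python
# Create a Ceilometer threshold alarm on a non-CPU meter. When it fires,
# it POSTs to the given webhook, e.g. a Heat scaling-policy URL.
from ceilometerclient.client import get_client

cm = get_client('2',
                ceilometer_url='http://ceilometer.example.com:8777',
                os_auth_token='KEYSTONE_TOKEN')

alarm = cm.alarms.create(
    name='net-out-high',
    type='threshold',
    threshold_rule={
        'meter_name': 'network.outgoing.bytes.rate',  # scale on network load
        'threshold': 10000000.0,                      # bytes/sec, placeholder
        'comparison_operator': 'gt',
        'statistic': 'avg',
        'period': 120,
        'evaluation_periods': 2,
    },
    alarm_actions=['http://heat.example.com/scale-up-webhook'],  # placeholder
)
print(alarm.alarm_id, alarm.state)
```

Several such alarms (CPU, memory, load-balancer traffic) can each point at the same scaling policy, which is one way to get the combined-signal scaling decision Anand describes.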
Good. Guillaume, do you have anything to add on this topic?

Yeah, I would say that we're working very heavily with the Heat team at Rackspace, because we're trying to stand up both private and public environments where projects can move back and forth between private and public as the security requirements of those assets change, or as the distribution requirements change. So Heat's a tremendous advantage for us in being able to stand up public and private clouds that feel, and are managed, identically.

Yeah, that's a good point, and we need that in more areas. Federated Keystone: let's get that in the queue. Anand brought up how Neutron has added additional plugins, and that's actually pretty key, because before, we only had one plugin; now there's the ability to add additional plugins, to get your load balancer behind there. That's pretty key because most of us who run large shops today have existing solutions, and that was really the promise we strongly believe in with OpenStack: give me something that lets me plug in existing things, add to them, and drive forward with this one API to rule them all.

Yeah, and the project structure is very useful for us, too, in leveraging some of our existing systems instead of having to adopt the entire stack. We can mix and match what we want to use now, and put other components on the roadmap to use later. Because we can't rip out our entire infrastructure; it wasn't built just for us. We have to work with what we have in our data centers. We can't go back and rebuild everything; that would take two years.

Yeah. A lot of people don't talk about it, but VMware has invested quite a bit into Havana, with Nova controlling ESX. I know PayPal, you guys have been in the news there; we'll probably do a little chat at the Hong Kong Summit. And I'm just watching my tweet feed: Cloudbase just announced their Hyper-V installer. So, to go back to my point, we're really seeing the industry get serious about Nova controlling all the hypervisors, Nova moving forward with Ironic to control bare metal, and the same thing with Neutron and Cinder across the stack. So we're pretty pleased with the progress.

Yes, exactly. Good. Well, that kind of goes to our next topic, which is the future of OpenStack past Havana. The next release is Icehouse. We'll be planning that out at the Hong Kong Summit in just two weeks, which is pretty exciting. And we already know some of what's coming, because there are incubated projects like Ironic, which is a bare-metal provisioning capability; Trove, which is database as a service and is actually exiting incubation, I believe, going into Icehouse; the Savanna project, which is Hadoop provisioning; and Marconi, which is queuing. So there's a lot of work going on that we know is coming, and I'm curious which of those maturing projects you think are most important, and then what else is out there that we should be thinking about. You mentioned federated identity; I've heard that from quite a few people. So what's your wish list going into 2014, guys?
Yes, so let me start with some of the issues we have currently and how we want to address them over the next few releases. We are not running just a single cloud; we are running multiple clouds within our data centers because of scalability limits. The problem for the upper layers, where they want to orchestrate with Heat or with some of the automation tools we have, is that we don't want them to deal with multiple Keystones, one in each and every small cell we build. We want federated Keystone: within a region, we want only one Keystone. We were already looking at different options, like deploying behind global load balancers instead of connecting to different cell-specific Keystones, but that is not the right way to solve it. Instead, we want a real root-cell-and-leaf-cell deployment, where every region has only one Keystone for end users to connect to, and we want to federate and sync that across multiple regions at the same time, so you don't need to regenerate the token for each and every cell. We looked at some backend storage options, like databases, and finally ended up doing the token sync-up using LDAP itself. LDAP was very useful for us in syncing the tokens, versus putting them into a MySQL database and doing a full sync across the clusters. So that's one thing we're looking at: how to bring together some of the real challenges we face when we deploy to multiple data centers and multiple regions.

And also, related to databases: Trove is very much in line with things our developers are already asking for. Specifically, they are looking for self-service MongoDB clusters and MySQL clusters, and Oracle as well. But we are taking a different approach here: instead of the open-source team developing everything ourselves, we partner with some of our internal teams to make sure we are not forgetting the challenges we've faced over the last 10 to 15 years in real production. You give someone a cluster, and after that, how are you replicating the data, and how are you managing the cluster? We don't want to just give an API to spin up the cluster; we want to provide an API for them to manage the cluster as well. That's why we built some automation around our data services ourselves. But if Trove is going to be an active project with a lot of energy invested from the community, then we might want to reconsider developing data services ourselves. So I'm definitely going to be looking at that. It's very important.
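As a hedged sketch of the self-service provisioning side of what Anand describes, using python-troveclient (the layout assumed here follows the client's v1 module; credentials, flavor, sizes, and names are placeholders):

```python
# Provision a MySQL instance through Trove: a flavor, a Cinder-backed
# volume, and an initial database and user. All values are placeholders.
from troveclient.v1 import client

trove = client.Client('admin', 'secret',
                      project_id='demo-tenant',
                      auth_url='http://keystone.example.com:5000/v2.0')

inst = trove.instances.create(
    name='orders-db',
    flavor_id='7',
    volume={'size': 10},           # GB of Cinder storage for the datastore
    databases=[{'name': 'orders'}],
    users=[{'name': 'svc', 'password': 'PLACEHOLDER',
            'databases': [{'name': 'orders'}]}],
)

# The same API surface covers day-two management, e.g. growing storage:
# trove.instances.resize_volume(inst.id, 20)
print(inst.id, inst.status)
```

The point Anand makes is that provisioning is the easy half; it's the management calls (resize, replication, backups) that decide whether a DBaaS is usable in production.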
Yeah, I agree with both of those. So: we have 68 data centers across the globe, I work with public cloud providers, and obviously we have our internal environment. Federation of all of that, from a Keystone perspective; and yes, we use LDAP too, internally through OpenDJ and then into Keystone, but we haven't solved that for a hybrid model yet. What we also need heavily, and I think everybody will ask for this, is upgrade. We were inspired by Anand's work with CI/CD: we do CI/CD in a small environment, but we want to bring that in so we're doing continuous integration and deployment, because small changes are much easier to pull off than massive changes. So, a good way to upgrade. We've got to make sure the database schema isn't changing drastically underneath us, so we can keep doing those types of upgrades.

Then a lot of the ones we already talked about. Trove is huge for us. We see database as a service in three phases. One: give me a virtual machine that has a database on it, very simplistic. Two: more the Trove model, give me an instance, potentially on a cluster. But what we need to get to is, three: true APIs that actually mask the backend database structure, so that software developers can write once. They can optimize at their code level, but it gives us the freedom to change the solutions underneath.

And probably the third point for me, and Anand brought this up too, is just integration. We need the OpenStack software developers, and we're pretty happy to say that some of our Intel IT guys are contributing now and we're going to ratchet up the number. But the enterprise IT market is estimated at something like a hundred billion dollars, and if you do the math on what public clouds are making today, there's a pretty massive amount of environment out there, and people want to tap into their existing gear. They're not going to go all greenfield for everything. So the more we have integration with existing solutions, to give people a pathway to introduce more and more disruptive technology, the better off we are. So: lots of integration, the ability to upgrade, and massive improvements on database.

And I can't not talk about Savanna. The ability to just let me submit MapReduce jobs: we have guys who spend way too much time configuring Hadoop, and giving them a vehicle to stand up a cluster and focus on their MapReduce rather than on how to configure Hadoop is gigantic. So we really want to see that move forward. It also gives us segmentation of clusters, so we can share data but keep some segmentation, which is pretty important, especially here at Intel, where we do need to keep quite a few things very secure.

I'd like to say that we have a sort of unique challenge, in that what we're trying to do right now is really build a community within the entertainment industry. We do have hundreds of data centers; they're just all owned by different companies, but in reality they all need to be integrated with one another. We have six principal studios and basically four digital content distributors. That is really the core of the entertainment business right now, and just the amount of money that's spent to move content, or grant access to content, between those companies is a huge potential source of revenue for them.
So to standardize around something like OpenStack, with integrated authentication across the board, is what we're really trying to do with the entertainment industry. Standardization.

Yeah, I think that point's pretty important, because it probably goes beyond entertainment. I bet if we looked inside all three of our companies, really deep at what we do from an IT perspective, there's probably a ton of similarities. So the more we turn this into a community discussion and share code and share how we're doing things, the more we really drive change in advancing the IT industry. This is why we care a lot. We're running hackathons internally, or codeathons, where we show our IT teams how to first contribute and get a bug fixed, and then how to do a blueprint, how to get out into the community and share your ideas, because we're all doing almost exactly the same thing. We may use slightly different words, we may have a slightly different focus, but we're all dealing with servers and storage and networks and automation. So I really want to see the community grow, and we are seeing the signs; the data speaks for itself. The community grew massively with Havana, so I expect Icehouse to be even bigger.

Yeah, and one more thing I brought up with my team three or four weeks back, when we were hitting a lot of performance issues after we enabled auto-scaling. These guys created and deleted around 3,500 VMs, which is enormous, doing performance testing here and there. And what we found was that we badly need performance testing as part of the CI/CD process in the community itself, to catch major changes that are going to affect API response times. What was happening was that listing around 100 or 200 ports was itself taking around seven minutes. That's not acceptable, but it wasn't that CPU was maxing out or anything like that; the problem was in how the data was being loaded. We looked at the code and went back and forth the whole day and couldn't figure it out. Finally we looked at the Havana code base, and we checked with some of our vendors who contribute heavily in Neutron. What we found was a simple change: we were lazy loading everything instead of eagerly loading it up front. With that, the seven minutes came down to a few seconds, three or four seconds to list hundreds or thousands of ports (a generic sketch of the lazy- versus eager-loading difference follows below). So things like that are in the code today, and go through code review today; I don't want to assign blame, or say that what we are doing is not right, but we want to enhance the process to add performance tests when major changes come through. For example, if you are moving existing non-Oslo components onto Oslo, of course there's a lot of change in the foundation itself.
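Here is that lazy- versus eager-loading difference as a generic, self-contained SQLAlchemy illustration (not the actual Neutron patch; assumes SQLAlchemy 1.4+, and the Port/FixedIp models are invented for the example):

```python
# The N+1 query pattern behind "7 minutes to list 200 ports", and the
# eager-loading fix. Models and data are illustrative only.
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, joinedload, relationship

Base = declarative_base()

class Port(Base):
    __tablename__ = 'ports'
    id = Column(Integer, primary_key=True)
    name = Column(String(64))
    # lazy='select' (the default) issues one extra query per port the
    # first time .fixed_ips is touched: the classic N+1 pattern.
    fixed_ips = relationship('FixedIp', lazy='select')

class FixedIp(Base):
    __tablename__ = 'fixed_ips'
    id = Column(Integer, primary_key=True)
    address = Column(String(64))
    port_id = Column(Integer, ForeignKey('ports.id'))

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
session = Session(engine)

# Lazy: 1 query for the ports, plus N more as each port's IPs are read.
ports = session.query(Port).all()
for p in ports:
    _ = p.fixed_ips  # each access here fires another SELECT

# Eager: one JOINed query loads ports and their IPs together. This is
# the shape of change that took the listing from minutes to seconds.
ports = session.query(Port).options(joinedload(Port.fixed_ips)).all()
```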
So of course, in the Jenkins pipeline, if reviewers are asking for a performance test run or whatever, we need to account for that as part of the pipeline before we go and merge those patches into the project.

Yeah, that's a good point. I mean, the Jenkins Tempest tests that are running right now are pretty cool, pretty advanced, I think, but we definitely need massive performance analysis tests going on, especially as we start scaling, to help. And that should be a community effort. It'd be good to see, and I know I've heard some proposals to move that forward.

Yeah, because upgrades are also a problem for us now that we have auto-scaling enabled. If the response time itself takes five minutes to come back from Neutron or whatever, then we cannot scale up our production, because capacity is already strained and we can't wait five minutes just to spin up a couple of VMs. So that is not good for us today, and we need to talk about how we are going to improve performance, specifically in that area, and where we need to head; maybe there are design changes we need to make in each individual service where we see these problems.

Yeah, there was actually a discussion on the mailing list just in the past week about potentially incorporating benchmark testing into Tempest, and into the gate down the line. I expect there will be a lot of discussion as a result at the summit in a couple of weeks in Hong Kong about how we would actually put that into practice, but it's definitely a common question about quality. Obviously there are a lot of ways to ensure quality, and there is a lot going on in continuous testing with every commit, but adding performance to the equation, I think, is a great idea, and there's a good community effort just starting to spin up on that.

I would second that, as someone who's actually trying to actively use OpenStack to not only produce and serve data but stream live video, which is what we're currently doing, where everything is dependent on being real-time and frame-accurate.

Yeah. And I did discuss this with a couple of the other big players in the valley, and they also acknowledge that the large-scale users like us will have to team up: if the Foundation needs some kind of infrastructure to run massive-scale testing or whatever, we definitely all have to join together and make progress in that area.

Great. We'll take you up on that.

Hey, Mark, one more future thing, and I think it's important: we've got to make sure fragmentation doesn't keep happening. I remember you guys did a call-out about a couple of public clouds that didn't have as much interoperability as we'd all like, and you can see this across the board. Everybody wants to innovate, and that's awesome; we should have innovation. But wherever we can, let's lock into a similar way of doing things and minimize fragmentation. If we have 20 people doing the exact same thing just slightly differently, it doesn't help many of us as IT shops.
Let's just figure out some of these key areas. I think the community naturally is doing that, but we just want to remind all the software developers: everybody wants to think of a new way to do a wheel, but let's try to act like a community and minimize fragmentation.

Yes. Let's have Mark comment on that too.

Sure. So I think when we talk about community efforts and testing, and where to take the Tempest-style tests that have been done more from a code-quality perspective, and which we just talked about maybe branching out into performance, I think they can also be evolved into conformance or compliance-type testing, or interoperability testing, whatever you want to call it. There have been a lot of discussions, both at the board of directors level and among different folks in the community, about how we could take, let's say, a snapshot of the final set of tests that ran in the Grizzly or Havana cycle and run those against a public cloud endpoint, instead of just running them against the new code as it comes in. I think there's a lot of opportunity to reuse and leverage that. The devil's in the details, but there's definitely a lot of interest, and I think we've turned the corner from a lot of excitement about OpenStack and early proofs of concept to people like yourselves running it, in dozens of data centers, where the real world comes into play, and we need to make sure we're not making minor changes just for the sake of change between implementations. A lot of that, in my opinion, comes down not just to the code but to sharing the details of how the implementations are done among users, sharing that knowledge.

Which brings me to my next, related question: what role do you see users like yourselves having in the OpenStack community today? Do you feel empowered to have an influence over the roadmap? How do you want to plug in, and how do you feel it's going right now?

Sure, let me start with that. As a starting point: before we even started on OpenStack, we developed a cloud ourselves within eBay Inc. as a company. We built a lot of large-scale infrastructure management services, like a load balancer service and DNS and all those things. It's critical for our availability to keep all these services running around the clock, at very high availability. We can't take an outage in the middle of the morning or in the middle of the week, and we can't have our operational tools go down and leave us unable to change our infrastructure. We shared some of our designs around LBaaS and DNS at, I think it was the Folsom Summit, the summit in San Diego. And we brought in all the vendors and made sure we were all aligned on the nomenclature, on what we wanted to call the different things each vendor had, and of course that we were all aligned on the model.
A lot of it is solving the same problems from the end user's perspective: bringing everyone in and informing them that this is what we are trying to do. It's not that we are trying to replace someone. Basically, we want OpenStack to be the only API we use to manipulate our infrastructure, and as long as the products are aligned in terms of providing the plugins and drivers for that, we are ready to go and use them. We are very clear on that in our strategy. But as a large-scale user, we need to bring all of our vendors to the table, to understand all of that and make it part of the OpenStack community itself. And I keep hearing one common complaint from other companies whenever I meet them at dinners or lunches: in OpenStack there are a lot of duplicated projects today. Different vendors are not aligning with each other, and they keep creating new projects. I'm sure the Foundation is addressing this, but in time we may have to weed out the duplicates so that we all focus on one final, quality output. If you have 20 or 30 different projects for the same purpose, the community gets confused about where to contribute, and we cannot scale that for large clouds. So that's the common complaint I keep getting. We're definitely not shy about it; we're ready to take that on wherever needed.

Yeah, I'll add on, from a roadmap perspective. We were a little concerned when we first got involved about whether we'd just run into a philosophical battle with the software developers who were already entrenched in OpenStack, or with the PTLs. And that's basically because I have to support both web-scale, cloud-aware apps and enterprise legacy apps that are not designed for failure. But what we've seen is that if we just jump in and become part of the community, there's nothing keeping us from helping move the roadmap forward. That's what we tell all of our senior tech guys. I know quite a few IT shops that are maybe not as high-tech as some of us, but they're also looking at this; it's a whole new thing to be able to step into a community and contribute, versus just listening to a vendor roadmap, hearing what's coming in a year and a half, and hoping it comes. So I think it's a pretty interesting opportunity and change for IT shops to be able to jump right in. I mean, design summits: that doesn't exist in other industries or other areas. It's much better to go to a design summit and give your feedback than to just listen to marketing slides about what's coming next. So I'm pretty excited, and yeah, the community's open. As long as we don't all wear suits, I think we'll be okay.

I don't think there's too much risk of that.

No, fortunately. I feel, just from our small perspective, that we have a tremendous partnership with Rackspace, and we've been able to give them some of our feedback. And I do feel, as more of an end user, that the maturation of the product has really come a long way. At this point, that's why we feel confident going out into the entertainment industry and really pushing OpenStack adoption across the board.
Good. And people keep complaining, "this is not stable, you are running open source," and all those things. I keep telling them: OpenStack is not the only open source we run. We run Linux itself.

From that perspective, if there is cloud software that's entirely stable 100% of the time, please let me know, because I would really like that software.

That's cool. It doesn't exist.

Yeah. And one other common complaint I keep hearing is from our vendors: whenever they change something in their plugins or drivers, the review takes a long time, and they say, "okay, just take our code, put it into your CI/CD, and merge it in." But I don't want to do that, because if they push it upstream and the code then changes, my merge fails and all those things. So I don't want to do that, but as a community, or at the Foundation, maybe; I don't know where the gap is. Even recently there was a common complaint from my team: they pushed a lot of patches, and they're all waiting for review. And I'm not sure whether it's a lack of core reviewers, or the reviewers are all fully loaded and not approving. I don't know where the gap is.

Yeah. I think very few software projects have the 900 contributors we had in Havana, from 150 different companies, and the systems and the people and the processes are all scaling as fast as possible. But at the end of the day, with feature freeze and that last-minute rush, there's always a massive number of patches coming in. And a lot of it just takes time if you value quality: really reviewing those patches and making sure they're right before incorporating them, because at the end of the day, you and thousands of other users have to live with how all those contributions intermingle, and we have to make sure there are no regressions and things like that. Everybody wants their patch in faster, so we're always trying to balance that.

I'd like to add, for my own small niche industry: what we're doing is branching. As we make small modifications that are really only applicable to our industry, we branch that off into its own slice of OpenStack. Maybe that's not the best course of action, but certainly we feel we're making changes that are very specific to our industry. Maybe those contributions would be valid in the community as a whole, but that's how we're dealing with it: we're branching it off into its own thing. We're not keeping it closed, but we're certainly not contributing it back to the OpenStack community yet.

I would just say that the more open you can be about what you're doing and what problems you're solving, on the mailing list and in other forums, and putting the code out there, whether you push it upstream to the OpenStack repository or not, the more you'll be surprised at just how many other people have the same set of problems to solve. And if it's live streaming or video streaming, there's a whole huge number of industries that rely on that beyond just the folks you work with.
So you may be surprised at how many people would benefit from that. And then at some point, if it makes sense for some of those capabilities to actually be part of OpenStack, that happens organically. As long as there's a lot of sharing of knowledge and of what you're working on, people will chime in and let you know if they think it should go one place or the other.

And certainly, from our perspective, how we've been trying to push the agenda in the entertainment industry is that by contributing back, and by having an open standard, you have a standard that persists for a long time. One of the issues we've encountered in the entertainment industry is longevity, and what we've been saying is: by having this openness, you have longevity.

Yep. That actually brings up my next question, which is about the ecosystem. I think we have a large ecosystem: there are a lot of vendors you can hire, whether for services or packaged products or other types of help you may or may not want to bring outside experts in for. I'm curious how you feel about the state of the OpenStack ecosystem. Do you feel there are a lot of options to choose from, whether distributions, services, training, or what have you? Do you have any feedback on that?

Of course. I'd say we're in the early days. There are lots of options, and I think there's lots of opportunity right now. Mostly we have smaller players; everybody probably knows who Mirantis is, and we're starting to see some of the larger open source guys, like Red Hat, starting to play a pretty massive role. You're seeing this quite a bit. But I'd still say, if you compared it to that $100 billion enterprise market I was talking about, you'd find we're just starting. It's still a niche in regard to the ecosystem. There are options, but it's going to have to scale massively over 2014 to really be a global ecosystem. Everybody's jumping in, taking part; people are asking, hey, can I make money off this or not? And I think what we'll see in 2014 is the ecosystem growing. It may just be that some of the guys who got in really early are going to scale massively. But it needs to grow big before it really scales across the entire opportunity space.

Yeah. So we took a different approach to the ecosystem itself: how we are going to leverage the innovation happening on the vendor side. We have multiple options in our data centers today, multiple devices, multiple vendors, and instead of automating everything ourselves, we wanted a common platform that all of our vendors know we are using, so they don't have to guess what we're going to use in the management plane. As long as they integrate with OpenStack through their drivers and plugins, we are ready to pilot it in the lab. Our qualification process is very stringent, because of the timing in the network path itself, the latency; every millisecond and nanosecond is critical for us. So we have very strict guidelines about what we bring into our data centers and what we don't.
Sometimes what happened was that a product was very stable in the data plane and in the hypervisor itself, but we were not able to bring it into our central management system for months; it used to take a lot of time to integrate with our automation systems. So OpenStack really helps to liberate all this innovation happening on the vendor side: as long as the plugins and drivers scale for us, we can go and use them. But in terms of distributions, we are not planning to use anybody's distribution, because, for example, we use our existing network topology and existing hardware, and a different topology whenever we build a new data center, and I don't think a generic distribution from a particular vendor will solve our problems. For example, Havana introduced RAM filters and a lot of other scheduler filters that we had already implemented to meet some of our needs. We have different security zones, based on our payments platform, where our applications run, and of course we can't build multiple clouds for that; instead, we heavily leverage host aggregates and different filters that we create for availability and things like that (a sketch of the host-aggregate pattern follows after this exchange). Keeping all of that in mind, I don't think a distribution is going to help us, but the plugins and drivers from multiple vendors, mixed and matched based on their performance in the management plane as well as the data plane, are really going to help us, and the ecosystem itself. So I'm really excited about that. I've never seen that option in a project before; within open source technology, today you have that option.

Yeah, that's a good point. This is the first time, and the ecosystem can grow through exactly those vehicles: all the major hardware and software vendors building the plugins. I recall, about 10 years ago, I sat down with some storage guys and said: hey, realize we run multiple storage solutions; I want a single control plane that lets me deal with both of them. They said, no way, we won't do that. And in comes OpenStack. So I think that's a brilliant way to push it. It's good to hear PayPal taking that approach, and I think most people should do that: say, hey, I need these APIs; I need your plugins for the APIs. And the distro point is a good one too. Most of us can't just lock into a single stack. It's very different from Linux: Linux was a single operating system with tendrils that go out, but with OpenStack you're taking your entire data center, which has many different options in it, so it's hard to lock into one single stack. But I still think the ecosystem will have to grow.

Yep, I agree. And honestly, from the perspective of media and entertainment, I still spend 75% of my time just explaining what OpenStack is to people. So education is really key, and I would say that for the vendors that have come about to fill that void of education, or to do integration, it's hard; the ones that have approached the entertainment industry have a message that's heavily skewed. So we're really trying to get out there, and hopefully the ecosystem will provide some more balanced and objective education resources without those attached caveats.

So the last thing I wanted to bring up is related open source projects. As Das just said, the real world of a data center is very complex, very diverse.
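Returning to Anand's security-zone point above: a hedged sketch of the host-aggregate-plus-scheduler-filter pattern he describes, using python-novaclient. The zone name, host, flavor, and credentials are made up, and it assumes AggregateInstanceExtraSpecsFilter is enabled in nova-scheduler:

```python
# Pin a flavor to a host aggregate so instances of that flavor land only
# on a designated set of hosts (e.g. a hardened "payments" zone).
from novaclient.v1_1 import client

nova = client.Client('admin', 'secret', 'admin-tenant',
                     'http://keystone.example.com:5000/v2.0')

# Group the hardened hosts into an aggregate tagged as the payments zone.
agg = nova.aggregates.create('payments-zone', None)
nova.aggregates.add_host(agg, 'compute-101')
nova.aggregates.set_metadata(agg, {'security_zone': 'payments'})

# A matching flavor: with AggregateInstanceExtraSpecsFilter enabled,
# the scheduler only places this flavor on hosts in the tagged aggregate.
flavor = nova.flavors.create('m1.payments', ram=4096, vcpus=2, disk=40)
flavor.set_keys({'security_zone': 'payments'})
```

This is how a single cloud can carve out zones with different placement rules instead of standing up a separate cloud per zone, which is the trade-off Anand is pointing at.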
And so, obviously, you're going to be plugging in lots of different projects. One in particular that's interesting, I think, is the Netflix OSS project, and PayPal, Anand, had done some work, I think, to port it on top of OpenStack. I don't know if that was your team, or if you're familiar with it, but to me it's really interesting to see folks like Netflix opening up what they were doing for arguably the largest cloud application on the planet, taking a lot of the things they've done to make that scale and opening them up, and then PayPal picking it up and working on how to integrate it on top of OpenStack. Can you talk to that at all, Anand?

Sure. So basically we have two different types of major users within our company. One is our admin teams, who are responsible for managing major parts of our production today. They come from an administration background; they know what a hypervisor is and things like that, and they can take the OpenStack dashboard as it is, figure it out, go through the multiple tabs and do their part, because they know the pieces; they can even go to the CLI to perform some tasks. But there are other users who have much less knowledge of the infrastructure itself, or maybe they don't even want to know what's happening underneath. They just want to deploy their application, and they want to deploy it into multiple data centers, multiple regions, for failover reasons. We wanted to address both of those worlds in one single interface. OpenStack has its own way of handling infrastructure as a service, but we wanted a middle ground where we could meet both sets of needs. For example, there was one interesting request when we gave the OpenStack dashboard to our end users: "I'm creating an instance, and if I want to create a volume, I have to go to a different place and then attach it myself." There were a lot of questions going back and forth between the cloud users and the cloud engineering team, so we decided to meet in the middle and develop a simple workflow that both end users and advanced users could understand. We started by looking at the other open source projects out there, rather than doing everything ourselves, and, interestingly, we found how Netflix deploys their applications in one of the largest clouds in the world today, across multiple regions: they had built a separate tool for deploying applications that application teams could easily understand. So we leveraged that framework, and it was very easy for us to build on top of it; it took us, I think, six weeks. And it's not only that. We have multiple regions and multiple data centers, and once a user logs in, they want to manage across all the sites at the same time. For example, take our central monitoring system:
that team works on behalf of all the tenants, all the projects, and they want an interface where they can see all the tenants across all the clouds, and search for a particular instance based on its DNS name or whatever. I've been looking at the OpenStack dashboard, and there are a lot of improvements happening that meet some of our needs, but at least for now we'll be using the Aurora tool that we developed on top of OpenStack.

Hey, thanks for making that. That's pretty cool; we're bringing it into our labs, and maybe we'll contribute back there too. So, obviously, OpenStack doesn't do everything, and it doesn't have to do everything. We use Nagios, we use Shinken. We expect storage vendors, the Ceph or Lustre folks, to build solutions that go behind things: there are people who use the Swift API but put their own stuff behind it, and people using Cinder who put different solutions behind it. OpenStack needs to stick with the concept that we need plugins. Ceilometer should accept plugins that let you add other things to meter, because we have things like Shinken or Nagios, and many of us use some sort of orchestration or configuration management, like Puppet or Chef or CFEngine. I still love the concept of an API that lets us use different technologies underneath. Just like it happened with Nova, I'd love to see that across the board; we call it the watcher, the decider, the actor, and the collector. See that at the manageability layer, without having to solve everything inherently within OpenStack. And the open source community is rich, right? So PayPal doing Aurora, mimicking Asgard, is awesome to see, because we have different types of user experiences to supply: not everybody wants to go to Horizon, and not everybody wants to go to the CLI or the API. So I think it's good to see that level of innovation, and that figuring out of what should be brought in and what should stay on the side. But the plugin concept is huge.

So, like I said, we're trying to bring together a lot of different companies that run their own clouds, if you will, right now. And we're actively working with an open source project called Karma, which came out of USC's Information Sciences Institute. Karma is an automated database ontology translation tool, and we're actively putting it up on OpenStack to have these public and various private clouds talk to one another, especially from a metadata perspective.

Great, great. Well, listen, I really appreciate everybody's time today. I think we're just about out of time, and I want to thank Anand and Das and Guillaume for coming together on the hangout today. We'd like to do more of these, and I hope you guys stay involved, keep contributing back, and keep talking to each other and the whole ecosystem. We think this is what makes OpenStack great: everybody coming together, sharing, and making it what we want it to be in the future. So again, I'm Mark Collier with the OpenStack Foundation. Thanks to our guests and to everybody who tuned in. Take care. Thanks, everybody. Bye.