Good morning everyone. If you could come in and take your seats, we will get started. I know you had a lot to enjoy at the keynote this morning. They let you out a little bit late, so we're starting a little bit late to accommodate that. Apologies. So welcome to Vancouver. It's a pretty exciting time. We have a great lineup today, and I'm pretty excited to share with you briefly that high-level overview so you know what's going on here. We have a number of talks here in this room throughout the day. We're going to hear from folks like Monty Taylor and Bill Franklin this morning. In the afternoon after lunch, we're going to have a customer panel. We're going to look at some very exciting OPNFV and some Swift stuff as well, which is pretty awesome. So if you're not already familiar with our lounge, which is over here, you definitely want to check it out. There are going to be s'mores there for you to enjoy. We're going to have some craft beers starting around three o'clock. There's also going to be an acoustic guitar player there for you to enjoy. And of course, if you come back down to our booth at around 6:10 today, all of the booths are going to be open. Then we're going to do the pub crawl there, so lots of great food and drink for you to enjoy. We're also going to have custom hoodies, and you're going to be able to actually impress on them little badges for the different projects and whatnot to help represent who you are. We also have the huge hospitality event, which is going to take place this Tuesday. So if you haven't already RSVP'd, make sure you do that. That's going to sell out very quickly. And we also have lightning talks in the last session for today. It's going to start at 5:30 and go to 6:10. Lots of great presenters from the community and HP are going to share with you, in five-minute chunks, some really exciting ideas.
And at the end of those lightning talks, we're going to be giving away an HP Slate, a tablet running Android. So you'll definitely want to be here for that. When you come in, you'll get a ticket and we'll do a draw at the end of the lightning talks. You may have noticed when you came in that there is a survey on your seat. I'd ask that at the end of the presentation you fill that out for us; it's much appreciated. There will be two folks at the doors here who will be able to take those from you. That's going to help us understand whether or not the content that we're providing in the sponsored track sessions today is interesting and provides the value that you're looking for. So I have a short video to share with you. I think you may have seen it actually if you were at the keynote, but if you didn't, you'll get to catch up on that now. Then I'll introduce you to our first speaker. So open source is all about collaborating and working together to solve problems. I think there are many who would argue that it's actually better technology, open source technology versus proprietary. Open source is probably the greatest trend of our time when it comes to development. The cloud of tomorrow is based on open source technology, and with our more than 20 years of open source experience at HP, it's clear to us that OpenStack is the premier, leading open source solution. It's important to HP that we have a strong foundation to build all of the components of OpenStack, because we're committed to OpenStack itself. Helion takes what OpenStack does as a generic piece of functionality and builds it into a framework and a product that customers can use for infrastructure and storage cloud. We are here to make sure that OpenStack is good for everybody.
The reason that HP is having such success in OpenStack and in open source is because they truly believe in the open source development model, and they're able to attract people who are interested in that as well. In addition to building this awesome piece of technology, we're also building up thousands of awesome technologists who will go out and do other cool things. And they see the benefits of doing things which maybe other companies don't want to do. Things like infrastructure and QA, and making sure that the things that aren't as sexy are done so that the project can be successful. Alright, thank you so much. So that was a great video where you got to hear from a number of our technologists at HP. I'd like to introduce you to a great gentleman sitting here in front of me, Mr. Bill Franklin. Thanks for that oversell. Great gentleman. I just want to welcome everybody here. HP is proud to be a serious sponsor of OpenStack. But it's not just the work that we do that makes OpenStack great. It's actually the work that everybody does. Wow, serious feedback. That everybody does in OpenStack. So I want to thank all of you, because without the community, the OpenStack ecosystem wouldn't be what it is. So today this is the HP track. It actually runs at the same time as all the rest of the events, so we're not going to keep you locked in here away from anything else. Cody is kind of your master of ceremonies. He was the first speaker in the video and the person who was just up here. Cody Somerville runs a lot of our evangelism and outreach, so Cody is going to sort of keep everybody on time and on schedule today. Our first speaker is going to be Monty Taylor, who I think all of you know, and I'll introduce him in a few seconds. I'm going to chat a little bit.
Tim Lible, who runs our Early Access Forum, is going to talk to you, but instead of just HP talking to you about what we're doing at HP, Tim's going to have a set of HP customers come up and talk to you about what they're actually doing with our HP products. Then we're going to talk to you about some of the other HP technologies, and as Cody said, we'll have some lightning talks at the end. If you weren't at the demo and the keynotes this morning, I just want to make one comment: you got to see our public cloud up and running and doing a lot of stuff. So it is still an active part of everything that we do. We have a bunch of people who are going to be at the summit, so I would encourage you to go out and look at a lot of the other HP talks. There are more than 50 of them here at the design summit and at the conference. Cody was telling you about the other activities that we have going on, but the other thing that's sort of special about HP in our role here at OpenStack is that we do really try to say thank you to everybody in the community. I just want to really reinforce that even though you're sitting here in an HP track, we put a lot of value on all the work that all of you do. So I want to say thank you very much for coming. I hope that you will learn something out of all of these, and without much ado I'm going to introduce the next speaker, who's also a fine gentleman sitting in the front row. But I'm kidding. Not kidding. So if you don't know him, Monty Taylor is one of the few people who sits on both the board of the OpenStack Foundation and on the TC, the Technical Committee. Monty, along with a couple of other people scattered around here, Vish, who I see sitting in a back row, and the lights are pretty bright so it's hard to see all of them, there's a number of people who were involved in the earliest instances of OpenStack between Rackspace and NASA.
Mark Interrante, who was the HP keynote speaker this morning, was actually one of the people who ran that group at Rackspace. Monty Taylor was one of the people involved in it. Monty's been written up in Wired Magazine, and a variety of other places, as one of the top 25 cloud luminaries. He is a Distinguished Technologist at Hewlett-Packard. He currently, I think, resides somewhere on the east coast, but more frequently seems to reside in an airplane traveling somewhere around the world. Monty speaks at a variety of conferences and, when he has spare time, actually leads a lot of infrastructure and other activities inside HP. So without further ado, I'd like to welcome Monty Taylor, both a fine gentleman and a good friend of mine. This is where we get to watch, oh, I have to get to watch, oh gosh that's loud. This is where we get to watch technology happen. This isn't quite as exciting of a demo as this morning, but this is, no, it's the other side. Yep, this is where HP produces laptops. I don't know if you knew that about us, and they work with projectors, and that's basically what today's talk is about: how good my laptop is at accomplishing that. So thanks for the intro, Bill. I'd also like to thank Vish for the lovely shirt that I'm wearing today. It's great when you have sort of old friends in the community and they make lovely "I created OpenStack and all I got is this lousy T-shirt" T-shirts. So anyway, in general, I tend to just like to talk about myself, so if you guys don't mind, I think that's what we'll just do for the next 45 minutes. Okay, maybe not. So I'm assuming, since we're here, that I don't have to spend a lot of time talking about what OpenStack is, and in fact I've transgressed, I've fully transgressed a thing, which is that I believe this is going to be the first presentation I've ever given that doesn't have the OpenStack marketing architecture slide in it.
I'm really sort of confused as to how to start talking about OpenStack without looking at it, but we'll just have to assume that somewhere there's an architecture and it's important. It's cloud software. If you're not aware of that, I have no idea how you found your way to this room or what it is that you think I'm going to talk about. I am going to talk about product management, and to talk about that I wanted to bring up our mission. I don't know how many of you go to our documentation or read mission statements from wikis or whatnot. This is sort of what we set out to do five years, six years, seven years ago, however many years it's been. Eleven summits, I guess I saw this morning; that's terrifying. So we set out to do this: produce a ubiquitous platform that will meet the needs of public and private clouds by being simple to implement and massively scalable. I think there's a flaw in this, and the flaw is that it's very clearly targeted at deployers. Now, I'm not saying that we've achieved this mission for our deployers; in fact, I'm sure that if I said that in a room of deployers they would kill me. But it's very clear just from the verbiage there that we're looking at giving deployers a platform that they can use to implement clouds, and I'll talk about, hopefully, some ways in which it's very clear that that's what we've done in a little bit. As a foreshadowing, I might get to a point where I'm just going to rant a while. In any case, this is sort of what we're doing. When you listened to Jonathan this morning, and you listen to sort of the things we're wanting to do these days, there's this other class of humans that may have been left out of this mission statement that might be sort of important, and that's the humans who might want to use the cloud to do something. In fact, the federated identity demonstration is a really great example of that. Nothing about that technology demonstration was about how anyone had deployed OpenStack.
It wasn't about how anybody was running OpenStack; it was that they were using it to run workloads across multiple clouds, which it turns out is something that I do as well, so it's sort of near and dear to my heart. So to achieve that mission we set up our organization borrowing some things from the Ubuntu project, and I promise it's relevant that I'm mentioning this. We picked the time-based release thing, which has allowed us to never skip a release in the history of OpenStack. We've released on time every time, partially because we define releases by the time that we release them. So it's partially a tautological process, and we've accomplished it. It's pretty cool. And we had these design summits, which are a great excuse for us to get together and drink and work with each other in person, and I think that's also allowed us to grow the community. And then our code names are in alphabetical order, which is less relevant, although I will say that we started with release names in alphabetical order, and if you've been around Ubuntu for a long time you'll know that their first couple weren't, and there's a weird ordering problem if you go too far back in history. Also, you may or may not know that our second release, Bexar, is pronounced "bear" even though there's an X in the middle of it. But anyway, there are a couple of things we did differently from Ubuntu, and these are really relevant to a discussion of product management. We do not have a BDFL, a benevolent dictator for life. When Mr. Shuttleworth started the Ubuntu project, he sort of anointed himself as that, so that there would be a person who, at times when it's needed, could make a decision. We don't have that, by design. We're doing that so that everything is democratic and so that everybody can be included. When you have that sort of strong decision-making power, it is necessarily exclusive. It can be useful, and can be tactical, and can be very pleasant at times, but it also can not be. Unity can happen.
What that's allowed us to do is this, and we like to talk about how many people are involved. That might just sound like it's self-aggrandizing, but I actually think that it's a lot of the point of what we're doing. It's not that we have a set of five smart people over in the corner writing some smart cloud software. We have a collaboration from a whole bunch of different people who have a whole bunch of different points of view, and we are inclusive of those points of view, or at least we want to be. And so we can have numbers like 430 associated companies. We can have numbers like 2,600 cumulative contributors. These are actually very old numbers. I believe that the current cumulative contributor count is over 3,000, but I already had this cut out and put into HTML, so I apologize for not updating the numbers. But they're really not the point. The actual values aren't the point so much as the fact that, as it turns out, there are a lot of people with a lot of different vested interests from a lot of backgrounds. This has led us to this interesting point where we have folks making a call saying that OpenStack needs product management, and I think that, while I agree with some aspects of that, in some cases it's code for people thinking that this is a problem, or for people thinking that no product management happens at all. Because actually we don't really have a lot of developers just sitting around working on whatever they feel like. I mentioned it in the last slide, but I'd like to go back to it again. We have 430 companies. If you think that we don't have any product management involved in OpenStack, you're living in a dream world. We have 430 companies' worth of product management. So it's not that we need product management in OpenStack. I believe what we need is product management coordination. We need for the product managers of these different companies to talk to each other.
We need for them to get together, and we need for them to figure it out. Because right now, how they're doing prioritization is they're all coming to the developers, and they're essentially communicating with each other through their associated development organizations. If you think about how that might work at your own company, where the various product managers on a product only talk to each other via sending messages through the engineering team, one can imagine that things might get a little bit hectic and that the outcome might not be very clear. So to this end there's a group, the product management working group. It's not a product management committee. I'm not sure that I can explain to you the difference between a working group and a committee, but apparently there's a big difference, because I think we talked about it for about a half an hour yesterday at the board meeting and I'm still not really sure what the difference is, and I'm sure that's going to get me in trouble. But today, not that you shouldn't of course hang out in here and enjoy all the wonderful content, but if you feel like you want to talk to people about the product management working group, there's a group of them talking today from 2:00 to 3:30 in room 212. There's actually more than one shout-out to other talks in here, which is also a sort of strange thing. So they'll be getting together, and this is a nascent group, but this is actually what this is: a group of product managers from the companies that are involved getting together and starting to figure out how to have these conversations. It's not exactly like there's a straightforward answer to how this body, or how a body of product managers, would interact with our technical community, because there are a few things that a set of product managers can do and there are some things they can't do. So one of the things they could do, which would be really helpful, is defining problems.
Actually looking at what are the things that we're doing, making clear definitions, what are the problems that need to be solved. Also, between them, coordinating priorities. If you have 450 companies' worth of product managers all thinking that their thing is the absolute most important thing in the history of mankind, then it's sort of the equivalent of getting 450 critical bugs and you must fix them all at the top priority, which means that effectively you have no priority. So them coordinating some priorities amongst themselves and saying, you know what, maybe if we all worked on this, then that will enable us to work on these other things, which would be really spectacular. They could also communicate the problems that they've come up with clearly to the tech community. They can't tell the tech community what to do; that's sort of not how this works. None of us can tell any of the rest of us what to do. If they came and said we have to have this thing in Nova, there's nothing that John can do to force the Nova developers to do that particular thing. But when you clearly articulate something, if you clearly articulate a problem, if you say, hey, listen, this is not just me running around in the corner waving my hands in the air, but I've actually sort of sketched out for you some issues that should be addressed, then that's actually something that can be engaged with. People can latch on to that and potentially start to make steps to make that better. So that's theoretical. I sort of have some problems with OpenStack, and so I thought that as a good product manager I should mention them. As with everything at OpenStack, it's completely inclusive, so if you want to go, no matter who you are, and participate in the product management working group, you don't have to be employed as a product manager to do that. You can go express yourself in whatever way suits you. Well, maybe not whatever way suits you.
I think there are a few that are probably inappropriate, but you can do that. So I figured I'd maybe take a stab at a couple of those things that I think the product management working group could do, and maybe describe some things that are problematic, that could be better, from the perspective of a person trying to use OpenStack. Hopefully that might shed some light, at least on to me, because I like talking about myself. So I decided that I would take a stab at writing use cases. We don't do this in OpenStack very often, but I thought it would be fun. I've been learning to be all corporate and stuff. So: as an application developer, I want to deploy and run an application on the internet so that my customers around the world can consume it. I think that's a reasonably understandable thing that somebody might want to do with the cloud. Also, I want to deploy that application across multiple clouds so that my service is resilient against issues in any one of them. If one of the clouds goes down, I don't want my users to know. In other talks I've put up a nice slide about Netflix, but I decided I would not do that here. But you know, we all love it when you can't watch House of Cards because AWS is down. It's sort of a problem with a single-vendor ecosystem. So I want to do those things, and it turns out this is a thing that you can do. It is totally possible. We are collectively doing this right now as we sit in this room, because there's an automated system that's doing it currently. This actually is not the current graph, because God only knows what the internet would be like at the conference, but this is a snapshot of a live graph from just a little while ago showing a system building nodes. This is a system that does 10 to 20,000 VMs a day on currently three, soon to be six, clouds, and it does it only using the OpenStack API. So I'm not talking about how we're going to make a future state where the world finally works.
It works, it's working great, it's working in production, it works every day, it works all over the place. It's a little harder than it might need to be, and although I've got it working, it might take somebody else a little bit more time than maybe it should to get that done. So to run your application on one or more OpenStack clouds in a resilient way, I believe there are a few steps that you need to go through. You need a base image; you may make it, you may fetch it, you may find it, but there has to be an image that you're going to boot. It needs to be in your clouds. You're going to boot a VM on one or more of the clouds, and you're going to ensure it's on the internet. That might sound obvious or simplistic, but it turns out maybe it's not. So step one, get a base image. There are a few different ways you can do this. OpenStack has a tool called diskimage-builder. It's not the world's most inventive name, and I apologize that it doesn't start with M, because I believe that most of the new projects these days start with M for some reason that I haven't been able to figure out. It unfortunately has a descriptive name, so I apologize for the boringness, but you can use it to make disk images. There are other tools. You don't have to use OpenStack's tool to make disk images. You could use tools like Packer. You could also download images that other people have made for you; the fine folks at both Ubuntu and Fedora and many of the other distros make and upload images that you can use in clouds, and you can consume those. So this isn't trying to pitch a workflow where you have to build images all the time. If you want to build an image, great; if you want to use somebody else's image, great. So you'd think that that's a thing that you could do, but unfortunately you can't just build an image or download an image, because you need to know what hypervisor your cloud is running, and you need to know the file format that hypervisor requires the images to be in.
As some examples: Rackspace uses VHD for their file format. HP uses QCOW2. DreamHost uses raw. You have to know this. You aren't told this anywhere in the Keystone catalog. It's not communicated anywhere technically. It's just a thing you have to know. So if you get an image, you might have to transcode it into a couple of different image formats, or you might have to transcode it into the image format that's appropriate for your cloud. It's a little silly. So okay, great. That's fine. I'm going to give you a couple of examples. The product management at one cloud made one choice. The product management at another cloud made another choice. We enabled them to do that because we're OpenStack and we're inclusive of people's ideas. Now I can just upload it to the cloud. So I'm going to say glance image-create, and I left off a parameter here; I'm sorry this is not functional code in the slide. But you're going to create an image in the cloud. Except you're not. Because the next thing that you need to know is the image API version, and there are two API versions out in the wild, two API versions that are currently running on multiple different public clouds. So you have to know which of those is there. So for instance, HP is currently using v1, and VEXXHOST, which is another public cloud, uses v2, and they both work quite nicely. There's this API endpoint for Glance which will give you the list of versions that the Glance on that cloud is running. For reasons surpassing understanding, that information is not accessible anywhere through the Keystone catalog. So if you go to the Keystone catalog and it gives you a list of the endpoints in the cloud, it gives you one of the versioned URLs without any way to get to the thing that might give you the list of versioned APIs and their associated URLs.
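The transcoding step described above can be sketched as a small helper. This is a hypothetical script, not anything from the slides: the format table simply restates the Rackspace/HP/DreamHost examples, and the qemu-img names for those formats are the standard ones (qemu-img calls VHD "vpc").

```python
# Hypothetical helper illustrating the transcoding chore. The table restates
# the examples from the talk; none of this is discoverable through any API.
# Values are (glance disk_format, qemu-img format name).
CLOUD_IMAGE_FORMAT = {
    "rackspace": ("vhd", "vpc"),
    "hp": ("qcow2", "qcow2"),
    "dreamhost": ("raw", "raw"),
}

def convert_command(source, cloud, source_fmt="qcow2"):
    """Build the qemu-img invocation that transcodes one base image into
    whatever format the target cloud requires, or None if it already fits."""
    glance_fmt, qemu_fmt = CLOUD_IMAGE_FORMAT[cloud]
    if qemu_fmt == source_fmt:
        return None  # already in the right format for this cloud
    out = source.rsplit(".", 1)[0] + "." + glance_fmt
    return ["qemu-img", "convert", "-f", source_fmt, "-O", qemu_fmt,
            source, out]
```

So for a qcow2 base image, the Rackspace entry yields a VHD conversion command while the HP entry yields None, since the image is already what that cloud wants; one build, three possible format-specific copies.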
There is a session, by the way, this week to talk about this topic and to talk about Keystone catalog metadata alignment across things, and I fully intend to be in that session and talk to people about it, because it's one of those things you'd like to be able to figure out. So since that doesn't work, what you can do is grab the URL from the catalog and parse the end of it to see if the string v1 or v2 happens to be at the end of the URL that you've been given. These might or might not be present on the URL that you get back from the service catalog, because it's not actually part of an interface. But you can probably figure out, if it has a v2 on the end of it, that it's a v2 API. This is, of course, bananas. Okay, so that's fine. I figured it out; it's not that bad, it's not that big of a deal. I figured out that this cloud has v1, and I'm going to upload my image. I'm going to do glance image-create, and this time I'm going to give it a file name, which is still not syntactically correct. It's actually --file, and then the file name. So I'm going to do that. Yeah, sorry, that's not going to work. We're going to have our worst cat eat some more lettuce. Because it turns out that even within the v1 and v2 APIs of Glance there are two different ways in which you can upload an image, and your cloud providers have been given the flexibility to turn on only one or only the other. Because there's this concept called policy.json, which allows you to map roles and privileges to individual API calls. This is, of course, also bananas, and not a thing that is useful to a user. So you have to know ahead of time, it's a priori knowledge that you have to have, whether your cloud requires you to upload something to Swift and then import it, or whether it will allow you to use the REST PUT call to put the image into the cloud. This is not possible to discover.
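The "parse the end of the URL and hope" trick is easy to write down. A minimal sketch of the heuristic described above, using made-up example URLs; unversioned catalog entries fall through to None, which is exactly the problem:

```python
def guess_glance_version(endpoint):
    """Guess the image API version by inspecting the tail of a Keystone
    catalog URL, since the catalog itself offers no structured version
    information. Returns 'v1', 'v2', or None for an unversioned URL."""
    tail = endpoint.rstrip("/").rsplit("/", 1)[-1].lower()
    if tail.startswith(("v1", "v2")):
        return tail[:2]
    return None  # no version in the URL: you get to guess
```

For example, "https://image.example.com/v1.0/" comes back as "v1", while a bare "https://image.example.com/" gives you nothing to go on.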
Even though both of the API calls are valid API calls in the Glance v2 API. So the bottom glance image-create on this slide actually is correct, syntactically at least. So you can run the commands on this slide, sadly. That top one is what you have to do if it's an image import, and yes, you do in fact have to pass JSON on the command line as a parameter. In terms of making a better user interface, I would suggest that passing JSON on the command line is not a good user interface, although passing JSON to a REST endpoint is of course fine. So that's great. At this point I've uploaded something, and now it should be easy. I'm going to boot that image into a VM. As you might expect, we're going to see this picture again, because it's not that easy. The thing is that the image that I uploaded needs to be able to get onto the network in the cloud that it's running in. And although there are some standard protocols for doing dynamic host configuration, not everybody uses them. So you could have a cloud that sends configuration to the VMs running inside of it using the dynamic host configuration protocol. You could have ones that put static network config into config drive. Or you could have clouds that want a vendor-specific agent running in your VM that will do file injection using some sort of magic and will overwrite files in the VM that you decided to run. Because you always want somebody else in there. I mean, the NSA is going to do it for you anyway, but other than them, I tend to not like people writing things into my computers. So these are options, because use cases are useful. I'd like to go back to the part where I said I wanted to upload the same thing to more than one cloud. I don't want to have to build a per-cloud-provider image. If I'm going to do that, I might as well have a, oh, that's, hey, a meeting. Look at that. I've got a meeting with Nauranta in just a little bit. Oh no. This is terrible. Why are you doing this to me, Google?
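The two upload paths can be sketched as a tiny wrapper around the two glance CLI invocations the slide shows. This is hypothetical glue code: the Swift container name is made up, and the boolean that picks the branch, whether your provider's policy.json permits the direct PUT or only the task-based import, is exactly the a priori knowledge you cannot discover.

```python
import json

def upload_invocation(name, filename, uses_task_import):
    """Return the glance CLI argv for whichever Glance upload path the
    provider has left enabled. 'uses_task_import' must be known a priori."""
    if uses_task_import:
        # Task-based import: the image bits must already be in Swift, and
        # the task input really is JSON passed on the command line.
        task_input = json.dumps({
            "import_from": "swift://images/" + filename,  # made-up container
            "image_properties": {"name": name},
        })
        return ["glance", "task-create", "--type", "import",
                "--input", task_input]
    # Direct upload: a plain REST PUT of the image data.
    return ["glance", "image-create", "--name", name, "--file", filename]
```

Same image, same cloud API family, two entirely different command lines depending on a policy knob the user cannot see.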
Alright, now you're going to see all sorts of things here, I'm sure. That's exciting. Yeah, they probably should. So in any case, I don't want to have to build a completely separate image for each of my clouds. I kind of want to do that task once, or download it once and convert it three times or whatever, but I would like to be able to know that I've got the same content across the cloud providers that I'm using. And if one of them uses DHCP and one of them uses some sort of vendor-specific host file injection, I'm guessing the vendor-specific host file injection won't work on the other cloud that doesn't support that vendor-specific thing. So once again, we've been very inclusive of people's ideas, and the users have suffered. Okay, great, so I'm going to boot an image now. I've gone back and I've rebuilt my images, and I've done evil things to make sure that the image actually can talk to the network on each of the clouds that I've got. Except, of course, it's not that easy, because it's not just the network there in the cloud I want to talk to. I said in my earlier use case that I wanted to run something on the internet. I don't know if you've heard of it. There's this thing called the internet that allows computers to talk to each other across the entire world, and people have built a commerce system on top of it that allows you to exchange money for goods and services over things called websites. And it's a pretty common use case. It's not just that I'm sort of an esoteric tech person. I want to run my service so that it can connect to the world. So that's, sadly, harder than you might want it to be. My VM may have been given to me with a public IP address. That public IP address might have even come to the VM over DHCP, which is kind of really cool. It may not. It may need a floating IP from Nova to get out onto the internet. These are all fantastic choices, and they make my life as a user that much more enriched with complexity. I really enjoy it.
Just as an example, real quick: if you've ever used the Nova REST API, and I've translated some JSON into YAML here because it's more readable, it's the same thing. This is the addresses field out of a Nova server object on a nova-network cloud. You'll notice that it has two entries in it, private and public, and each of those has an address. I can look for that, and if that's there, I know that this is a nova-network thing, and I can probably tell, because the name is public, that that's a public network. This one is a VM that is on a cloud that is running Neutron. And you'll notice that although it has an addresses field, the content of the addresses field is completely different. So as a user I can go in and I can see if it has a public and a private entry, or I can see if it has a network-name entry that inside of it has some structures with the address and the version. Which is really helpful to me. It's exactly what I've always wanted: to have my code that boots a single VM have to figure out whether the deployer decided to deploy nova-network or Neutron. In case you're not aware, there are going to be some people talking about the nova-network versus Neutron thing all this week. If you have thoughts or opinions on that, I suggest you find them and give them a nice hug, possibly hand them a beer or two, or a case, because there's some work to be done there. So that's great. I figured out that on this cloud I've got to create a floating IP to get my VM on the network, so that it can talk to people who want to give me money for the pictures of worst cats that I'm apparently obsessed with. So now I have to do these things: I have to boot it and create a floating IP, and then I have to associate it with my server. I'm not including the different version of doing that that works with Neutron, but it's similar. You can also use the Nova pass-through if you have Neutron, because that choice is definitely useful.
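Code that has to cope with both shapes of the addresses field ends up looking something like this. A minimal sketch, assuming the two layouts described above: nova-network names the networks literally "public" and "private", while under Neutron the keys are arbitrary network names and floating IPs are tagged via the OS-EXT-IPS:type extended attribute.

```python
def find_public_ip(addresses):
    """Pull a public IPv4 address out of a Nova server's addresses field,
    handling both layouts: nova-network style, where a network is literally
    named 'public', and neutron style, where floating IPs carry the
    OS-EXT-IPS:type attribute on arbitrarily named networks."""
    # nova-network: a network literally named 'public'
    for entry in addresses.get("public", []):
        if entry.get("version", 4) == 4:
            return entry["addr"]
    # neutron: hunt for a floating IP on any network
    for entries in addresses.values():
        for entry in entries:
            if (entry.get("OS-EXT-IPS:type") == "floating"
                    and entry.get("version", 4) == 4):
                return entry["addr"]
    return None  # private addresses only: time to go make a floating IP
```

Every application that wants to boot a single VM portably ends up carrying some variant of this function, which is rather the point of the rant.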
It turns out we're not done figuring out the things the cloud has done to get between me and the internet. There are these things called security groups, because for some reason somebody thinks it's a great idea to put a production server behind a NAT. Now, I don't know about you, but I know very few ops people who have said to themselves, "I want to run my production servers that need to talk to the internet behind a NAT." In fact, if you get, you know, residential DSL, you can usually pay a little extra to get a real non-NATed IP address, because you might want to run a server. It's a kind of common thing. But apparently there's a set of people out there who think this should be the default behavior of all servers that you create in a cloud. I'd like to invite any of those people to ever run a production service, because clearly they haven't. So there are these security group things which block all of my traffic unless I explicitly ask for it to be able to talk to the internet. So not only is it NATed, it's also firewalled. I'm now getting something that is actually less good than my residential DSL service on my production VM that I'm trying to use to run a production application. If I'm not, you know, quite antiquated, I'm probably going to be using things like Ansible and Puppet to actually run that machine, which means that me running iptables on the machine is probably pretty easy. Or ebtables, or whatever they've renamed it to this month. Or systemd, because I'm sure it's probably systemd that's handling that now. But in any case, I don't necessarily need the cloud to firewall me from the thing that I wanted to get to. So now, because I'm going to run a web server on this, I've got to add a rule to the default security group. I could do this in a more complicated way by adding a new security group with its own set of rules, but right here we're just going to add one to the default security group that opens port 80.
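The whole manual dance (open the port, boot, allocate a floating IP, associate it) can be sketched like this. To be clear, the `client` methods here are illustrative stand-ins that loosely mirror the shape of the era's novaclient calls, not the real API:

```python
def boot_with_public_ip(client, name, image, flavor):
    """The manual sequence a user has to perform on a cloud that defaults
    to private networking plus a closed security group.

    'client' is a hypothetical object whose method names are made up for
    illustration; the real client library calls differ.
    """
    # 1. Open port 80 in the default security group so the web server
    #    is actually reachable.
    client.add_security_group_rule("default", protocol="tcp",
                                   from_port=80, to_port=80,
                                   cidr="0.0.0.0/0")
    # 2. Boot the server.
    server = client.boot_server(name, image=image, flavor=flavor)
    # 3. Allocate a floating IP.
    ip = client.create_floating_ip()
    # 4. Associate it with the server.
    client.associate_floating_ip(server, ip)
    return server, ip
```

Four API calls, in the right order, just to get one VM talking to the internet.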
So now I get to add port 80 to be open on the security group, then boot the server, then create an IP address, and then associate it. I'd like to point out, in case you weren't tracking, that's two different ways the cloud defaulted to not having my machine be able to talk to the internet. Not only is it behind a NAT, it's also got a weird firewall. And it's on a private network by default. This is, of course, not very fun for me. So in summation, the things that I had to figure out to boot a VM are: What hypervisor did my cloud provider decide to run, so I know what image format to use? What's the image API version? Which of the image upload mechanisms has the cloud decided to allow me to use? Does this cloud support public networking by default, or does it put me on a private network by default? Is it using nova-network or Neutron? Do I have to use a floating IP to get to the internet? If I have to use a floating IP, do I have to use Nova or Neutron to do it? Is my cloud provider giving me its internal networking information via DHCP or a static config of some sort? And do I need to do something with the security group? This is an insane amount of steps and information that one needs to boot a VM. It is crap, and we need to fix it. I think we can do better than that. So I like to rant at people, but I also like to fix things. So I've been poking at this a little bit, at least for my own sake, because I am narcissistic and I'd like to solve my problems before I solve your problems. So there are a few things that we've spun up over the last cycle. There's a library that we wrote called os-client-config, which makes me sad. It shouldn't have to exist, but it needs to, because there's a bunch of information you need to know about cloud providers. So we have a vendors.py file in os-client-config that lists all of the a priori knowledge that you need to know about each of the public clouds that I am aware of.
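The idea is that all of that per-cloud knowledge lives in a config file plus the vendor profiles shipped in vendors.py. A minimal sketch of what a clouds.yaml might look like; the cloud name and values here are invented examples, and the exact key for referencing a vendor profile has varied across versions of the library:

```yaml
# ~/.config/openstack/clouds.yaml (illustrative values, not real credentials)
clouds:
  my-vexxhost:
    profile: vexxhost        # vendor profile from vendors.py supplies auth_url etc.
    auth:
      username: demo
      password: sekrit
      project_name: demo
    region_name: ca-ymq-1
```

With the vendor profile supplying the endpoint details, the user only has to provide what is genuinely theirs: credentials and a region.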
That I have had to write this just makes me want to cry into my beer. But it exists, and you can use it, and you can reuse it with other things. It's also now being consumed by the Python OpenStack client, so the unified Python client now allows you to reference a named vendor cloud as the thing you're going to point to. It also knows things like the auth URL, so you don't have to put a crap-ton of stuff in your config file. So that's there. It can be used today. This is also being used behind the new version of the Ansible modules that are going out there. So hopefully at some point you can just say I'm using VEXXHOST, or I'm using RunAbove, or I'm using UnitedStack. By the way, I'm not sure if you guys know, but there are a lot of OpenStack public clouds out there. I've sort of discovered that recently. And they all work pretty well. We've also written this library called shade. It's being hosted over in the Infra project. It's a library to wrap the business logic around the client libraries that we've discovered. I kind of think that its existing is also a bit of a failure. But there it is. The goal being that you can make a simple call like "create a server and please give me an IP," and it will do all of the things that you need to do to create a server and give you an IP. This is being rolled out into Infra's nodepool, the thing that spins up and tears down our machines all day long. So you know we're testing the ever-loving mess out of it. It will be pretty solid; if it's not solid, you're not getting test VMs in the OpenStack infrastructure. And also, again, the Ansible modules upstream are starting to consume that as well. One more thing before I go on: currently shade is wrapping the python-*client libraries. The next goal is to port it onto the python-openstacksdk when that's ready. And then potentially we'll talk in the future about whether shade needs to be a simple layer on top of the SDK, or whether it makes sense for them to be two different things.
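The one-call shape being described looks roughly like this. shade does expose `shade.openstack_cloud()` and `create_server(..., auto_ip=True)`; since I can't assume the library is installed here, this sketch uses a tiny stand-in class just to show the ergonomics it aims for:

```python
class FakeCloud:
    """Stand-in for a shade-style cloud object (illustrative only).

    The real library is used roughly as:
        cloud = shade.openstack_cloud(cloud='my-vexxhost')
        server = cloud.create_server('web', image=..., flavor=..., auto_ip=True)
    """
    def create_server(self, name, image, flavor, auto_ip=True):
        server = {"name": name, "image": image, "flavor": flavor}
        if auto_ip:
            # The real library hides the nova-network vs. Neutron
            # floating-IP business logic here; this fake just attaches
            # a made-up address to show the resulting shape.
            server["public_v4"] = "203.0.113.10"
        return server

cloud = FakeCloud()
server = cloud.create_server("web", image="ubuntu", flavor="small")
```

One call, and the image-format, network-layout, floating-IP, and security-group questions are answered inside the library instead of by every user.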
Ultimately, having all of these different choices is something we need to consolidate. So I mentioned we can raise issues, and I've been doing this. There's the product management working group meeting later today; it's got other things going on too. I try to be as involved with them as I can, because I have some opinions. DefCore is also a place where work is being done in this area. They're working on testing clouds for interoperability, and recently we've gotten to the state with that process where the tests are discovering that the clouds are divergent in some ways and putting that into strong relief, so that we know we need to go address things and make them better. They're doing a meeting tomorrow. They're also doing a meeting today at 2, I believe, but tomorrow from 10:30 to 12:30 they'll be doing a working session. And then, as Flavio will be happy, I'm sure, to back me up on, I'm more than happy to go talk to the cores and PTLs directly and tell them just how much I like a particular interface that they've provided me. They really enjoy it, I think. It's love. There it is. I got a thumbs up. That's fantastic. So that's just what I've been doing about it. More largely, I think this is a community effort that we've got to get behind. I think there are some basics that we need to handle. Being able to handle all of the really complicated, crazy things is fantastic, and OpenStack is really good at it. The things you can do with overlay networks and Neutron, and the different ways you can stitch things together, are really neat. But it shouldn't be that hard to do the simple, everyday things. We should do that. And the thing is, that involves making decisions. So I'd like to suggest that the existence of the shade library that I've been putting together is a bug. Every single line of code in it is an indication that there's a bug somewhere else.
And I think it wouldn't be the world's worst thing to look at that and figure out how we can actually make the projects themselves not expose that much divergence to the users. It means we have to make some decisions about how we do that. And I think we need to do that. And I think we can't be very divergent at the very basic level. So in the places where there are two completely diametrically opposed viewpoints, I would suggest that either we need to get rid of one of them, or we need to make it very clearly discoverable at a technical level, with an explanation of why a user would want to use one or the other (an end user, not a deployer). The other thing is, we need to find the ability to take a stand, even when we have a product manager at a particular company that has a strong disagreement. It's okay. We all have differences of opinion. We've got to make sure that as we're coming to consensus in the technical community, that consensus includes the product managers from the different companies involved, and isn't just information that's coming over the wall to us. So anyway, that's me rambling for a period of time. If for any reason you want to do something with the slides, they're posted at that URL. They're also in my GitHub account, so you can clone them. I have no idea what purpose that would serve for you, but they are there, and they're licensed Creative Commons, so go nuts or whatever it is you want to do. Anyway, thank you very much. Thank you so much, Monty. So folks, we went a little over. Apologies about that. Make sure that you get to your survey. Feel free to fill it out and please return it to our assistants at the door. Next up we have Bill Franklin. So I'm just going to give a minute for folks to exchange and get into your seats. For those that are just arriving, you'll find that there are surveys on your seats.
You can fill those out at the end of the session so you can provide us a little bit of feedback on how you feel that the sessions went and whether or not you got any value from it.