Hey folks that have just arrived. If you're interested in winning an HP Slate, a 10-inch tablet that's running Android, what you have to do is come over here and get yourself a ticket. At the end of the first series of lightning talks, we're going to do the draw, and then after the next eight talks we'll do the final draw, and then I'll encourage you all to go down to the booth and get some free champagne and cheese. So if you haven't got a ticket, please come up and I'll help you there. We're going to start right now with a great talk from the very infamous Monty Taylor. Here you go.

Oh, that's exciting. Well, I'm not up at the podium yet, so that's a little bit of a disadvantage. So, hi, I'm Monty, and I'm not going to talk about myself much, because I only have five minutes and it's impossible for me to do even a single slide in less than five minutes. So I'm going to talk about the library os-client-config. It's about making OpenStack more usability-ish. So the problem is that I have a ton of cloud accounts. Infra has five: it has control plane and nodepool in three different Rackspace regions, and control plane and nodepool in HP. I have two personal accounts on HP, one on Rackspace, and two internal HP cloud accounts. That's twelve OpenStack cloud regions. How do I connect to them? It's a giant pain in the ass. So I could do this: if I wanted to get a list of servers on one of them, I could type out all of these things on the command line. openstack, blah, blah, blah, blah, server list. That would be terrible. It's in fact extremely silly to connect to your cloud that way. I can set environment variables. This is an improvement over putting them all on the command line, because I can just source in a file with environment variables in it and then operate things naturally. But then I wind up with twelve shell-script snippet files lying around, and I have to source the right one before I do the right task, and then sometimes I connect to the wrong cloud.
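The environment-variable files he's describing are the standard OpenStack "openrc" snippets. A sketch of one, with placeholder values, looks like this:

```shell
# Illustrative openrc snippet; all values here are placeholders, not real credentials.
export OS_AUTH_URL=https://identity.example.com:5000/v2.0
export OS_USERNAME=demo
export OS_PASSWORD=not-a-real-password
export OS_TENANT_NAME=demo-project
export OS_REGION_NAME=region-b.geo-1
# Source this file, then run e.g.:  openstack server list
```

One of these per cloud region is exactly the "twelve snippet files" problem: you have to remember to source the right one before every task.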
If some of them are my personal ones and some of them are infra production, that might be a problem. The other problem with this is that the environment variable processing is actually implemented in OpenStack in the command-line tools, not in the client libraries. So if you're programming in Python and you want to use a client library, the environment variables aren't actually processed by the client library, and you have to do all that environment variable processing yourself. If you actually go check in the Python client libraries we produce, you will find that we have in fact copied the exact same environment variable processing code into all of our client libraries. Just looking at those, there are thirteen different instances of adding an os-username option or argument, and that is also, in and of itself, kind of stupid. There's also pre-existing knowledge you need about your cloud. You have to know the auth URL. That's fair; it's actually fair for you to have to know the auth URL, and possibly your username and password. That's okay, but there are other things you need to know in advance. You need to know the Glance API version that your cloud is running. It is not possible to find out from your cloud what Glance API version it is using; you just have to know. There are other things, like the fact that Rackspace doesn't put the Swift URL properly in their Keystone catalog, or the fact that HP's DNS service, even though it's Designate, is listed as hpext:dns. These are things you have to know about your cloud in advance to be able to use it. So I wrote a library, because I didn't like any of those things that I just said. It essentially does this: it processes, in this order, a file in a couple of different locations called clouds.yaml, which is a YAML file containing, you guessed it, all of your clouds.
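A clouds.yaml along the lines he's describing might look like this. The cloud names and values are invented, and the keys follow the talk's description rather than guaranteeing the library's exact schema:

```yaml
# Illustrative clouds.yaml; names and values are made up.
clouds:
  mordred:
    auth_url: https://identity.example.com:5000/v2.0
    username: mordred
    password: not-my-real-password
    project_name: infra
    region_name:
      - region-a.geo-1
      - region-b.geo-1
  personal:
    cloud: hp            # pull auth_url etc. from the built-in vendor defaults
    username: monty
    password: also-redacted
```

One file, all twelve clouds, instead of a dozen openrc snippets to source.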
It also processes the standard OpenStack environment variables, and then finally it can filter through an argparse namespace to be able to inject all of the right things. So it will do the thing that you expect it to do, with all of the data that you're going to give it, at the time that you expect it to do all of those things with all that data. And it's a library, so you can use it. It also contains inside of itself some vendor defaults, and it's more than welcome to accept patches for your cloud's vendor defaults that aren't otherwise discoverable from the Keystone catalog. So if you need to know that Rackspace's image API version is two, which there's otherwise no way of knowing, because of insanity, and I'm gonna kill jaypipes for that later, you can do that. I also have the fact that that's Rackspace's auth URL and that's HP's auth URL, so I don't need those in all of my config files; it's just HP's auth URL, and that's fine. So this is my clouds.yaml, or a clouds.yaml that has been redacted. So you can see I can express the... actually, there's a bug in that: that's actually my password, but shh, don't tell jammies in the back of the room. So I can refer to the HP cloud there, and the second one should say auth URL, not cloud. And I can give it various different things. I can also give it a list of region names, so I can tell it these are the regions in this cloud that I'm using, and it can construct the appropriate constructors for me, for a Python thing. This allows me to reference named clouds, so I can say something like openstack --os-cloud mordred server list, and that will do the right things to operate on that cloud. We've also added the one environment variable, OS_CLOUD, so if I want to reference a cloud that way, I can do that. Now, if you've only got one cloud, that's fine.
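The precedence he just walked through (clouds.yaml file, then environment variables, then an argparse namespace) can be sketched as a toy merge function. This is not the library's real implementation, just the layering idea: each later source overrides the earlier ones.

```python
# Toy sketch of os-client-config-style precedence; NOT the library's actual code.
# File values are overridden by OS_* environment variables, which are in turn
# overridden by explicit command-line arguments.

def resolve_cloud_config(file_values, environ, cli_args):
    """Merge three config sources, lowest priority first."""
    merged = dict(file_values)                 # from clouds.yaml
    for key, value in environ.items():         # from the environment
        if key.startswith("OS_"):
            merged[key[3:].lower()] = value    # OS_USERNAME -> username
    for key, value in cli_args.items():        # from an argparse namespace
        if value is not None:                  # unset CLI options don't override
            merged[key] = value
    return merged

config = resolve_cloud_config(
    {"auth_url": "https://example.com:5000", "username": "from-file"},
    {"OS_USERNAME": "from-env"},
    {"username": "from-cli", "region_name": None},
)
print(config["username"])   # prints "from-cli": the CLI value wins
```

The point of putting this in a library is that every client, and your own Python code, gets the same answer from the same inputs, instead of thirteen copies of the same parsing code.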
There's a default cloud called openstack that things will default to if you just have the one OpenStack cloud, so you don't actually have to reference it by name, because it's just the default. So where is it in use? I wrote this library called shade, which I'm not gonna talk about right now, but it's in use in there. That library is in use in Ansible, or there are some patches to land there, so it's sort of in use in Ansible as well, and there's a patch in flight from Dean to use this in python-openstackclient. I kind of would love to land it on all of the python-*client things, because it allows us to delete some code, and it's also a very small library that I'm pretty sure is done. It needs no new features and no new functionality, other than potentially landing vendor defaults. There's another thing that it does that I don't think is on this slide: if you're a vendor who's deploying things, it does allow for dropping vendor clouds.yaml files into /etc on the host, so if you want to define other clouds that wouldn't otherwise be in there, but that you want accessible on all of the machines your customers are using, you can sort of do that in a vendor type of way. And here's where you can get it. You can get it from git.openstack.org, stackforge/os-client-config, and it's also published on PyPI as os-client-config. And that is, amazingly, five minutes.

Say when? Okay. Hi, my name's Michael. I'm here to talk about Storyboard, for those of you who don't know what Storyboard is all about. Storyboard is task tracking for interdependent projects. Why does it exist? It exists to track work, it exists to integrate with infra, and it exists to replace Launchpad. Ooh! Well, why are we replacing Launchpad? Well, have you ever tried to use blueprints? Actually, have you ever tried to use Launchpad? It has poor reporting capabilities. Its API kind of sucks.
It is, in theory, open source, but we have as of yet not had anybody be successful at standing up Launchpad on their own. Jim will confirm that for me. And at the moment, the number of engineers working on Storyboard, well, that got messed up, the number of engineers working on Storyboard is larger than the number of engineers working on Launchpad, which is one. Not anymore. Not anymore? You actually have somebody working on it? Oh, excellent. So it's larger, excellent. So what is Storyboard actually? Storyboard is an API, which is written in Python with Pecan and WSME and Oslo and other OpenStack things. It's also a web client, written in JavaScript with Angular and Bootstrap and lots of other open source things. And what can it do? Well, you can create a story with lots of tasks that can each be assigned to different projects, so you've got that whole multiple-project thing going on, which can be put into project groups, and you can search on all these things. And if you're super ambitious, you can even write plugins for it. Ooh, right? So let's actually take a look at what that looks like. So here I am. I'm gonna go ahead and create a new story, and I'm creating a new story. Hey, look at that. It's a really neat story. You'll see the first task has been auto-filled, but I'm gonna go ahead and fill that in. And I'm gonna say that's part of Storyboard, and this also requires some work on Zuul, I think is what I'm saying. There we go. And this requires some work on puppet-storyboard, I suppose, right? And there you click save, and boom, you have yourself a new story with a little bit of history. Similarly, searching: let's say I wanna go and search for something like things in Storyboard that are merged, that are assigned to me. There you go. So you've got a sort of quick, tag-based way of breaking down and searching various little things. Now, what will it do? It'll manage releases, that's an upcoming feature. It will manage security bugs.
It will report progress on a macro scale, and it will federate, which means that you can run Storyboard inside of your own organization, slurp down the OpenStack stuff, and manage your own projects sort of on top of what's happening upstream. So when is it coming? Well, it's ready, more or less, now for Infra. We're planning on discussing whether or not Infra is ready at the Infra meeting on Thursday at 3:30. For Stackforge, we're planning on being ready by the next summit, and we've already got a whole line of early adopters lined up. For OpenStack, we are hoping to be ready by next year, which means that chances are you will be using Storyboard by this time next year. I know, right? And that goes out to you hidden influencers out there: if you don't want somebody walking in and redefining your software development process for you, now's the time to get involved, because we're making the decisions on prioritization, on workflows, and all that stuff right now. So now's the time to start talking to us. Now, the question is, can you avoid it? Well, it's been approved by the foundation, in particular by Mr. Thierry, who's sitting in the back right there. So it might be kind of difficult to avoid, although you could stick your head in the sand. Or you could collaborate. Well, what do you do about it? Oh my God, you run. No, wait, no, wait, no. You contribute. But how do you contribute? Well, you can sign up for one of my UX sessions on Wednesday. I've got a clipboard right there, so talk to me after and I will sign you up. You can start using it; it lives at storyboard.openstack.org. You can contribute resources. Now, resources can mean you are a hidden influencer and you assign some members of your team to it, or you can code on it yourself. It desperately needs some UX work; I would love some assistance with that. And you could also do some process and product design and actually engage with us on the overall design of Storyboard as a whole. Thank you very much. Excellent. Hi. Hi.
Are there any cameras? Raise your hand if you don't have a ticket. Good, probably after. I did. I saw them. I know, you go to the VMware, when you go to VMware. I know it's one of those. Go back to Explorer, whatever it's called. Explorer? Oh, just saying. Yeah, and they click VMware. Oh, aha. Yeah. All right. Here you go. Push S5. This is not really... Push S5? Now it's on. It just takes a second. Can you guys just go that way a little bit faster? You get it, you get it. Ah, okay.

Hi, everyone. I'm Colette. Oh, good, someone's got a timer for me. My name's Colette. I'm a little bit jet-lagged. And I work for Monty Taylor at Hewlett-Packard, on a team that we call the Flying Circus. So I'm gonna tell you a little bit of a story about us. Our mission: we make OpenStack better, and we make HP better at OpenStack. A long time ago, in a galaxy far away, otherwise known as early 2012, Monty was hired by HP along with two other people, and his entire goal was to work on OpenStack upstream. By 2013, his team had grown to about eight people, and they decided that they were going to have some needs for some sort of management-y kind of style things. And somebody read the Valve Handbook. For those of you who are not familiar with the Valve Handbook, I highly suggest that you go Google it if you have not seen it. Two important points about Valve. One, Valve is a gaming company and it has a completely flat structure, so there are no technical managers at Valve. The second part of that is that, sorry, is that everyone at Valve works on what they want to work on. There's no command-and-control structure for organizing who does what work. People work on what they want to work on, and if they don't want to work on it, they don't work on it. So around about May 2014, we had expanded to about 35 people on the team. And what we were noticing and hearing from people who were joining our team as new members was that they were having difficulty onboarding.
We could get them their laptops, we could get them their auth keys, we could get them the technical things they needed, but they still were having trouble adjusting, and it was taking them quite a lot of time to figure out how they were supposed to do work. We were throwing them, oh, go read the Valve Handbook, and they were like, but we're not Valve, this is HP and we're working on OpenStack. So in keeping with the Valve tradition, I stood up and said, why don't we just make our own handbook, you guys? So that's what I did. We needed to talk about culture, and we also needed to put together some tips and tricks for new people. I wanted to make it practical, but I also wanted to make it something that we as a team could rally around. And a lot of people talk about culture. There was actually a lot of mention of culture earlier this morning in the keynotes. This is my example for all of you of where culture is the road that the rubber of process meets in OpenStack, right? Submit code, review code, disagree, debate, socialize and drink, rinse and repeat. There's an element of this that is not unlike any other open source movement, but culture is this thing that we all agree upon that we never talk about with each other. And it's not until you're kind of inducted into the secret society of the group culture that you're in that you understand what you're supposed to do. And what I was trying to do is help people who'd never experienced that before know that it was okay to disagree and to debate and to bring up questions. So that's just one example. Basically, how this worked was I did interviews with everyone on our team, almost everyone. Most of those interviews were one-on-one. They lasted anywhere between 40 minutes and two hours. Some people have a lot to say, it turns out, on culture. This was the general outline of the questions that I asked every single person on the team.
I want to really focus on the values and behaviors here, and we also came up with our mission, which I spoke about earlier, during this. The guide itself, I'm not gonna take you through it; it's way too long. But the main important points were the values and behaviors that came out of it, and I'm gonna talk a little bit about where you might be able to see that guide when I take you through these values and behaviors. So, uh-oh, yeah. Okay, first one: trust. Without trust, we have nothing else. No other values can exist, and we don't have anything to base our work on. We trust each person on our team to demonstrate our values, to do work that matters, and to support one another as a team. We regularly practice this trust with everyone around us, even those who aren't familiar with our group and the way it works. Openness, and openness relates importantly to communication as well as to software. Bravery: we ask difficult questions. Self-motivation: we're curious and constantly learning, and again, we tackle the projects we're interested in when we're interested in tackling them. And empowerment: we are heavy on mentoring and on championing developer-driven workflows everywhere we go. So, just to wrap it up, our mission again: we make OpenStack better and we make HP better at OpenStack. And what's next? Thanks.

Control the wall. So, what do you think? Do you need a ticket for a Slate 10? It's going to be a very short talk. This is a real shit. Did you send me your presentation? Yeah, so it was there. It's a PDF, so it's in the box. The... Thank you. Does ENTO work on this the same as a Mac? We'll find out, I guess.

Hi, my name's Duncan. I'm CEO of Cloudsoft. Disclaimer: I am a user of cloud, perhaps an abuser of cloud. Anyway, I'm here to talk about Clocker, which is a new open source project we just started, otherwise known as the Docker Cloud Maker. So, it's probably not the final frontier, but if you're familiar with Docker... anybody not know what Docker is?
Good, okay, I can skip the Docker thing. So, one of the things that the Docker guys said around June this year was that orchestration was maybe not the final, but certainly the next, frontier or challenge for the Docker community. And so, as a member of that community, we've tackled that. We've tackled it with an open source project called Clocker. I'll give you all the links at the end, and obviously you can take copies of these slides, whatever. So, what does Clocker do? It helps you spin up and manage Docker clouds. Now, it does this on any cloud or, in fact, any infrastructure, so it's not specific to OpenStack. We'll get into how it does that in a minute. Having spun up a cloud, it then serves up containers, so containers become the lingua franca once you have one of these clouds in place. So, what is Clocker? Well, it's both an Apache Brooklyn application, or blueprint, and a Brooklyn location. So, here we can actually say: I would like a new cloud, please. I'll specify a location; I'm gonna actually land it on SoftLayer Amsterdam. Give it a name and then give it a location. So, it's a Brooklyn application, and it's also a Brooklyn location. It's an application because it creates a cloud, but then once I've got it, I want to use it. Well, I can use it with any blueprint that I would normally use within Brooklyn itself. So, here I just say location: my Amsterdam Docker cloud. Who started it? I didn't start it. Disclaimer: I am the CEO. I have some very bright guys on my team. This is Andrew Kennedy. He started the project and is still the lead on this project. He'll be at ApacheCon and DockerCon in Europe talking about it. What is it made of? Well, I've already hinted that it's made of a number of other open source projects, notably Apache Brooklyn. Apache Brooklyn is something we created a few years ago, and it is now an incubator project within the Apache Software Foundation. So, that lets you model, deploy, and then manage applications.
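The "new cloud, please" step he describes is a Brooklyn YAML blueprint. A rough sketch might look like the following; the service type and location id here are illustrative placeholders, not Clocker's exact identifiers:

```yaml
# Illustrative Brooklyn blueprint; type and location are placeholders.
name: my-amsterdam-docker-cloud
location: jclouds:softlayer:ams01     # "land it on SoftLayer Amsterdam"
services:
  - type: docker-infrastructure       # placeholder for Clocker's Docker infrastructure entity
    name: My Docker Cloud
```

Once deployed, other blueprints can then reference the resulting cloud by name as their `location`, which is the application-and-location duality he mentions.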
So, as I said before, a Docker cloud is just a glorified application. We use jclouds. I'm sure you're all familiar with jclouds. jclouds is not only used generically by Brooklyn in order to deploy to specific clouds; it's also got a Docker driver, so we can talk to a Docker engine using it. It uses Docker. Well, it would be kind of dumb if it didn't. And finally, it uses a new project called Weave, which I'd recommend you take a look at. This is by the founders of RabbitMQ, Alexis Richardson and Matthias Radestock. They've started a new project, and a new company around that project, called Weave; the company is called Zettio. So, what does it look like? Well, here's one I made earlier. This is just a snapshot, I'm afraid. This is looking at a cloud that we set up in Amsterdam. This is me now deploying an elastic web app, a simple three-tier app. And normally I would be deploying that to a public cloud or some other piece of infrastructure, but here you can see I'm actually deploying it to my Docker cloud. If you don't believe me, here's another screenshot. Obviously, screenshots can be fabricated. And finally, I look at it from the point of view of the elastic web app, and it's up and running. So it's up and running in a cloud that didn't exist until I actually created the cloud itself. Small print: Clocker is open source, Apache-licensed, still in beta. You can find it under brooklyncentral on GitHub. Apache Brooklyn is open source, donated to the Apache Software Foundation, started by another member of our team, Alex Heneveld. jclouds: I'm sure you'll know Adrian Cole; if you don't, you should. But the guy I'd like to call out here is Andrea Turli, who's also on our team, and who wrote the Docker driver. Finally, Weave; links there. So in summary, Brooklyn allows you to spin up a Docker cloud on any infrastructure and then use it: target it with any blueprints you've already got, or ones you want to create, and deploy them to it. Web resources are there. This is me.
And all work and no play makes Jack a dull boy. So let's go and party, or listen to more talks.

Marcello, did you send it to me? Yes. OK, let's go play. Hello, my name is Marcello. I am an OpenStack ambassador, and I am also one of the leaders of the OpenStack Brazil user group. I would like to speak about OpenStack in Brazil: a vision from Brazil of the use of OpenStack to create and compete in clouds. So, Brazil has over 200 million people and has the seventh-largest economy in the world, so there are great opportunities for growth in local data centers. Research from CapraMind with IT executive organizations in Brazil showed growth of cloud services in the next two years, great opportunities for cloud services, and a preference for private and hybrid clouds. Companies in Brazil want information security, so it's a good opportunity for private clouds. So how is OpenStack doing in Brazil? Two large providers are offering clouds based on OpenStack. The government of Brazil has backed the creation of services using OpenStack as well. And universities are helping the development of OpenStack, highlighting the University of Campina Grande, with research projects; the university has contributed over 8,000 lines of code across various OpenStack projects. So, we have our group. Our group was created in 2011, and it has more than 400 people; in this year alone, it grew more than 50%. And this year we had four meetups, with another scheduled to occur after the summit. In just the last two years we grew about 75%, which for us is very good. Some photos of events in Brazil: this is a picture from OpenStack at the International Free Software Forum in Brazil. This is a picture from OpenStack's fourth birthday; it's a very nice OpenStack cake. This is another picture, another picture of the birthday. This is a picture from an OpenStack tech meeting; there were more than 500 people at the meeting, which is very good. And this is a picture from FutureCom.
FutureCom is a congress and business trade show in São Paulo, Brazil, with business companies. So what are the key challenges and opportunities for the next two years in Brazil? More use cases of OpenStack in Brazil; we need more use cases. We need more telcos offering cloud services based on OpenStack. We need to increase our group membership by 100%; we want to grow the group. We need more partnerships with universities, and more developers. And also, we need more speakers and sponsors for meetups in Brazil. So, merci. My name is Marcelo, and thank you very much. Thank you. Thank you very much.

So, the folks that are over here that are probably standing, and starting to get fatigued from standing, because standing is difficult: there are lots of great chairs over here. If you guys want to take a seat, your choice, but there are tons of seats over there. Tons of seats. And for the folks that just joined us, if you're interested in having a chance to win an HP Slate, a 10-inch tablet that runs Android, raise your hand and we'll bring you a ticket to enter. We're going to do the draw here in the middle of the lightning talks, so just raise your hand and keep it up until you get a ticket, and that's going to enter you into a chance to win the tablet. So next we have Yolanda, who's going to come up and speak with us.

Hello. My name is Yolanda. I work for HP as a software engineer, on a team that is called Closer, and in our team we are using the OpenStack CI project to run internal HP projects. For example, we are running HP Helion with it. So first of all, you may not know what OpenStack CI is; let me show you that a bit. It's an integrated collection of developer tooling and automation. It's used by OpenStack, but you can run it for your own projects. It's based on three premises.
Everything in the code must be reviewed, every change that you make. If a change is not tested, it's just broken; so every change you make needs to be tested, all the time. And you need to automate everything: any build, any mirror, any publish, you need to create tools to automate it. It's supported by the OpenStack community, it's used there, and it allows all the developers there to have their code tested, reviewed, and merged automatically. So let me show you a bit of our workflow. We are using Git to make all the changes. We are using a tool that is called git-review, which is a wrapper for Git that, instead of pushing the code to the repo, sends the changes to Gerrit. Gerrit is our code review system; it exposes your changes there and allows your peers to review them, flag them, and finally approve them. We are using Jenkins to run all the tests and all the jobs. And in the middle, we have Zuul. Zuul is the connector between Gerrit and Jenkins: every event that happens in Gerrit triggers an event that tells Jenkins to run tests, and Jenkins provides feedback to Gerrit to allow the code to be reviewed. Then Zuul also handles the code merges and pushes the merged changes out to our mirrors. That's, pretty basically, our workflow. So what are OpenStack CI's main advantages? It's a quite reliable system; we have been using it in production for over two years, and we have been running thousands of tests. It's a very flexible system: we are using it for OpenStack projects, but you can use it for everything, for different languages, for example Java, and you can run different types of platforms and different types of tests. It scales: depending on our demand, we can add or remove nodes based on the load on the servers. And it handles parallel testing, so Zuul is able to run tests in parallel and you get your test results faster.
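The Gerrit-to-Jenkins wiring she describes lives in Zuul's layout file. A minimal sketch, with invented project and job names, might look like this:

```yaml
# Minimal sketch of a Zuul layout.yaml; project and job names are invented.
pipelines:
  - name: check
    manager: IndependentPipelineManager
    trigger:
      gerrit:
        - event: patchset-created   # every new patchset gets tested
    success:
      gerrit:
        verified: 1                 # feedback posted back on the review
    failure:
      gerrit:
        verified: -1

projects:
  - name: example/my-project
    check:
      - my-project-unit-tests
```

Each Gerrit event enters a pipeline, Zuul asks Jenkins to run the listed jobs, and the result is reported back to the review, which is the loop she just walked through.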
And it can be managed by a small group of people, because it's a fairly automated tool. So, well, it's a heavily used tool, and we rely on it, but it still has areas of improvement that you can work on. First of all, the isolation of projects needs to be improved. Initially it was used just by OpenStack, but it's used now for other platforms, so there is continuous work to split the project configuration from the CI system itself. The initial deployment and learning curve can be a bit complicated, so we need to keep working on improving that. Also, for example, as we are using it in HP, we need to have synchronization capabilities between upstream and downstream, and we need to automate all the relationships between all the projects. And HP is working actively on it. We are providing people, full-time employees, to the OpenStack CI project. The OpenStack Infra PTL belongs to HP as well, and, well, there is Jim there. And we are encouraging people to use the OpenStack CI system, embrace all the capabilities that it has, start using it, and realize the power that it has. So, well, my name is Yolanda Robla. You can contact me at this email if you want. You can also take a look at the documentation there, and we also have a channel on Freenode where all our people are, and we all want to help. And if you want to know more, we'll be later at the OpenStack expert bar, so if you can go there and join us, I will be glad to answer any questions. Thank you so much, Yolanda. Thank you. Thank you, Yolanda.

So, does anybody need a ticket? They mute it in there. So, once they realize I'm talking, because they see the meter, they'll turn it on, which is just nice of them, because I probably am over here bumping it and making lots of noise. So, next we have Ricardo; he's gonna come up. And just raise your hand if you haven't got a ticket to enter for the Slate, which is a 10-inch tablet.
So, if you don't have a ticket yet and you'd like to have a chance to win: after Ricardo, there's one more lightning talk before we do the draw for the first one, and then we're gonna have a few more lightning talks and draw for a second one. If you already have a ticket, the draw is good for both. By the way, you have to be present to win the draw.

Hello, everyone. My name is Ricardo Carrillo Cruz. I work as a cloud automation engineer at HP. I work with Yolanda. Our team is responsible for managing a CI and CD infrastructure that is based on OpenStack CI. This is the same infrastructure that is used to build Helion and other HP cloud products, like CSA or Cloud System. OpenStack CI is not just about building OpenStack itself, or just for building Python-based projects; it can also be used to build complex Java projects. We were approached at some point by other HP cloud teams that were running their own CI systems, and they asked us if they could migrate their Java builds into our systems. We didn't have any Java capabilities at the time, so we gathered the requirements and we came up with a list. We wanted to have an open source artifact repository with strong community support, and for this we chose Sonatype Nexus. We also needed multi-tenant isolation for artifacts, repositories, and groups through URLs. So what we did is we gave dedicated folders in our Nexus storage system to each team, and we required them to prepend their team names or their project names to their URLs, to prevent name clashing. Due to the complexity of building those products, the system also needed to support Maven multi-module projects, to support multiple artifact generation per project, and to support different settings.xml values per project. After working on some prototypes, we came up with a final implementation that follows this workflow: a Java developer pushes a change to our system, and it gets into the check queue.
It goes through the check jobs, it gets a plus one, and eventually a core reviewer approves the change; then the change gets into the gate queue. It runs the gate jobs and, if everything is fine, merges into the Git repository, and then the change enters the post queue. Here we run two different jobs. A post Maven build job, which builds the project with Maven and uploads the artifacts to our tarballs server. Chained to this job, we have the post Nexus deploy job, which downloads the previously uploaded artifacts into the slave workspace and then uploads those artifacts into Nexus. For our implementation, we based our work on what they had upstream in the Maven plugin job templates. So the idea was to split the build and Nexus deploy into steps. In the first step, you build the project with Maven and deploy it to a local repo in the slave workspace, and then you upload that local repo folder structure to our tarballs server. Then, in the second step, we download that local repo folder structure into the privileged slave workspace, we iterate through that folder structure, uploading every single artifact we find for every pom.xml, and then we trigger a Maven metadata rebuild API call to our Nexus. And that was it. I want to emphasize the fact that OpenStack CI can be used for pretty much every project. It's not just OpenStack, not just Python; also Java, you name it. These slides are available on my GitHub, and you can also find there the link to the upstream change for this work. You can contact me via email, Twitter, or IRC; I'm also in OpenStack Infra. And nevertheless, I'll be around all this week, so I will be more than happy to chat with you about this or anything that is related to OpenStack CI. Thank you. All right, thank you very much, Ricardo. So, we're gonna have one more lightning talk before we do the draw for the Slate.
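The first of the two chained post jobs he describes, build to a local repo and stash it, could be sketched as a Jenkins Job Builder template along these lines. The template name, repository id, and paths are invented for illustration; `altDeploymentRepository` is the standard Maven deploy plugin property for redirecting a deploy to an alternate repository:

```yaml
# Sketch of a Jenkins Job Builder template for the build step; names are invented.
- job-template:
    name: '{name}-post-maven-build'
    builders:
      - shell: |
          # Build and "deploy" into a local repo inside the workspace, then
          # that folder structure is archived for the chained deploy job,
          # which later walks it, pom.xml by pom.xml, pushing into Nexus.
          mvn clean deploy \
            -DaltDeploymentRepository=staging::default::file://$WORKSPACE/local-repo
```

Splitting build from upload this way keeps the privileged Nexus credentials out of the job that compiles arbitrary code, which matches the privileged-slave second step he mentions.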
So just a quick reminder for the Slate: you have to get a ticket, you keep the ticket that we give you, and then you have to be here while we do the draw. We're gonna do a second draw at the end of all the lightning talks before we go for the public roll, and the ticket you have now is good for that opportunity as well. So next we're gonna welcome up James, and he's gonna give his quick five minute talk, and then we'll get to the giveaway. All right, hi. So my name is James Polley. I'm a developer for HP. I'm part of Monty's Flying Circus, which you've already heard a bit about. I'm based in Sydney, Australia. I theoretically work from home, but actually I like to change my environment a fair bit, so often you'll find me working from a cafe or from a library or from a park, tethered to my mobile. So when I started working on TripleO, I started running through the TripleO tools, and I noticed it was spending a lot of time building disk images, and I noticed a lot of the time building disk images was spent downloading stuff from the internet. And this worried me, because I thought, if it's gonna be doing all of this downloading every time I run, I'm gonna be stuck to my desk, because mobile data is expensive and slow. So I was looking for ways to try to make sure that my image builds were as low bandwidth as possible. So this lightning talk is a very, very fast, quick overview of some stuff that I've done to make stuff a bit less network intensive on my laptop. Why do you care? You probably aren't crazy enough to be doing these builds on your laptop. You probably have a machine in a data center, which probably has a decent bandwidth connection to it. So why do you care about low bandwidth image builds? In short, speed. 
You probably want to build your images as fast as possible, especially if you're doing the builds as part of a CI chain where you want to give feedback to developers, or if you're doing the builds in response to a customer request where they want their instance up and running as fast as possible. No one likes sitting in a data center staring at a progress bar. Data centers are pretty boring as well. So what are the things this image builder downloads? Well, it needs a base operating system to start with, so it's gonna be grabbing a cloud image from your operating system provider. Those are around 250, 300 megs usually, maybe bigger. It's gonna probably need to install some OpenStack tools. If it's, say, doing a git clone of Nova to get Nova, that's around 200 meg, again. Neutron's about 100 meg. Usually those tools will have some Python dependencies as well, so you're installing a bunch of Python packages. Usually you want, say, Apache or MySQL or something, so you'll be installing operating system packages. It's all a bunch of stuff that needs to be downloaded. So what can you do to minimize how much disk-image-builder is pulling down from the cloud? Well, the good news is it's mostly done by disk-image-builder itself. Most of the elements in disk-image-builder that pull these things down will cache them on disk, and obviously the first time you run it, there's a lot of downloading. Second time, mostly it can find what's in the cache, check the internet to see if there's a fresher copy to pull down, but if not, it will just use what it has already got cached. You can even pass the --offline flag, which makes disk-image-builder run almost entirely offline. It doesn't even look for freshness. It just uses what you've already got cached. So that's the end of my talk, right? Well, no. For one thing, in the real world, in a data center, you probably don't just have one machine running disk-image-builder in one place. 
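The caching behaviour James describes, use the cache if present, optionally check upstream freshness, skip all checks under --offline, can be sketched like this (a simplified illustration, not disk-image-builder's actual code; the helper names are made up):

```python
import os

def fetch(url, cache_dir, download, is_fresh, offline=False):
    """Sketch of disk-image-builder-style caching: reuse the cached
    copy when we have it, check upstream for a fresher copy unless
    offline mode is on, and only download when we must."""
    path = os.path.join(cache_dir, os.path.basename(url))
    if os.path.exists(path):
        if offline or is_fresh(url, path):
            return path            # reuse the cache, no download
    data = download(url)           # first run, or cache is stale
    with open(path, "wb") as f:
        f.write(data)
    return path
```

The first run downloads everything; later runs hit the cache, and with `offline=True` no network check happens at all.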
You've probably got multiple machines. If each of them has their own cache, they're probably gonna get out of sync over time, and so the same config can build a different image depending on which machine it gets run on. That's not something you want. Even in my case, running it on a laptop, there's no point building an image unless you use it. So at some point you're going to be running a VM. That VM is going to be doing stuff, and some of that stuff might involve downloading more Python packages or operating system packages. So what can you do to get better caching, with a cache that's consistent across machines? Well, the obvious answer is use Squid. Squid actually does a decent job, because most of this stuff is downloaded over HTTP, but there's a few things Squid can't handle. By default, it doesn't handle very large files. We've got a config as part of our TripleO docs that helps to fix that. It can't handle stuff that is downloaded over HTTPS, because it can't see what it's downloading. So it's a good start. It doesn't do everything. I've also been running mirrors of the operating system package repos on my laptop. They're large, about 70 to 100 gig. But it means that if you've got a VM up and running and it needs to install a package, you've already got it on your laptop. I also tried to mirror the entire PyPI index. Again, this is about 100 gig. So it's a lot of disk space that I'm chewing up here, but it means I don't need much bandwidth when I'm building images. That is time up. You can find me on Hashtaglow on Freenode, or email me there. And then we're gonna have some more great lightning talks. So we've got John Dickinson actually is gonna be up next, and he's gonna talk about some Swift things, maybe. Who knows? Might be typical of the Swift PTL, as I put Swift, but it could be about anything. I don't know. I didn't actually take time to review any of the talks before I accepted them. That's actually not true. 
A lot of folks did a lot of preparation here. So big round of applause for all of the presenters so far, please. All right, so Monty's gonna pick a number. We're gonna read it out. And if you have the number, you get to win this. Oh God. I'm pretty sure you could hear me before this, but: zero eight, three, five, seven, nine, seven. I'm Jim Baker and I know some of you guys here. Yeah. I am not HPE. I work here. Thanks. So next we have John Dickinson, then we're gonna have Clint Byrum, then we're gonna have Marie-Paule. We're gonna talk about some network virtualization, functional virtualization. We're gonna talk about some Swift stuff. And so stick around. For folks that didn't win a tablet, there's gonna be a second draw at the end of this session. So if you get a ticket, feel free to stick around. You're gonna have another opportunity to win a tablet. 
Oh, I've got the timer. Cool, thanks. All right. Thanks, everybody, for showing up for the lightning talks. We just had a great round of lightning talks from Monty, Michael, Colette, and a bunch of other folks. We're going to continue with a number of other folks that have great things to say. And so, before we really get into it, though, if you're interested, you can have a chance to win an HP Slate. It's a 10-inch tablet that runs the Android operating system. And all you have to do to win is be here at the end of the lightning sessions and also, very importantly, get one of these tickets. So if you do not have a ticket yet, please raise your hand and somebody will be nice enough to bring you some tickets. Here, Lisa. Raise your hand if you want a ticket. This is the tablet right here. 10-inch tablet, runs Android. 
So just a quick review. You have to be here. You have to get a ticket. There's two tickets over here. Three. All right. Take it away, John. Hi. My name is John. I'm the Swift PTL. And I want to talk to you about temporary URLs. This being only five minutes long, I figured temporary URLs, a short-term kind of URL, are a good thing to talk about. So why do we need these things? The problem is that auth is a bear. And why is it so painful? Well, because, from the docs, here's an example of a "short" token from Keystone. Just type it in. It works with curl. So here's the problem with that. And another problem with using auth systems is that you've got to talk to an external system that's on the other side of the network, which leads to latency and lots of congestion and cache issues, like what happens when you expire a token and how do you propagate that sort of thing. It's painful. And you combine that with what Swift is. Swift stores a lot of data. Actually, it stores a whole lot of data. And if you start trying to manage this with remote connections, and they're hard to cache and everything, you get problems. And it's worse than that, because every piece of data can be accessed at any time by anyone. And even more than that, oftentimes the data is going to be accessed by someone who's not even really the owner of the data. An example of that is Wikipedia. Any image that comes off of Wikipedia comes out of their own Swift cluster, and you're certainly not the owner of that image. So, I know, let's use ACLs. Those are container-level in Swift, and they're complicated to manage, because every single user needs to be put in some kind of group and given permission, and it just adds a whole lot of complexity, and you don't know which way you're going, and it's crazy. So in other words, to sum up, auth is a bear. So we need to solve this. I know, let's not use auth. But that just puts the bear in the cage. 
And that's sad, and it doesn't really solve the problem, and you don't want to put the bear in a cage. You want to let him roam free. So we could just make the auth system faster. And this is a joke: we could have Keystone Lite Lite, and that's a joke for people who've been around OpenStack long enough. So I think we need to do something different. And what we need is something that allows you to sign your requests, and that's what tempURLs do. It gives you an HMAC-SHA1 of your request into Swift, and you can do it on a per-object basis. And the really cool thing is that you can do it locally. You don't have to have any sort of net connection at all to either generate or validate it. And the URLs are time limited. So that means you can say this is going to be something that is only valid for the next so many seconds or minutes or hours or days or whatever you want. And the great thing about that is you can do common things like preventing hotlinking: maybe say, when I download a web page with the content served out of Swift, the content's only going to be good for a few seconds, so they can load it on the page but nobody's going to be able to remote link to it. You can hand them out like candy, because they're free, they're very, very cheap to create. So you can give them out to anybody who wants one, or not, maybe you don't like candy. And then another thing is that you can have multiple keys that you are signing with, which means that you don't have to stress about swapping them out. You can rotate your keys over time, which means that you don't have to worry about breaking existing URLs. You can rotate the keys and keep the ones that are in the wild out there for good without breaking them. 
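The signing John describes follows the scheme used by Swift's tempurl middleware: an HMAC-SHA1 over the request method, the expiry timestamp, and the object path, computed entirely locally. A rough sketch (the account/container path and the key here are placeholders, not real credentials):

```python
import hmac
import time
from hashlib import sha1

def temp_url(path, key, method="GET", ttl=60):
    """Generate a Swift temporary-URL query string: an HMAC-SHA1
    over "METHOD\\nexpires\\npath", signed with the account's
    tempurl key. No round trip to the cluster or to Keystone."""
    expires = int(time.time() + ttl)
    body = "%s\n%d\n%s" % (method, expires, path)
    sig = hmac.new(key.encode(), body.encode(), sha1).hexdigest()
    return "%s?temp_url_sig=%s&temp_url_expires=%d" % (path, sig, expires)
```

The proxy server validating the URL does the same cheap HMAC computation with the stored key, which is why the check scales horizontally with the proxies.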
And also they're very, very fast, because they are checked locally on the Swift proxy servers, which means they are horizontally scalable, and you can keep just adding more proxy servers, just like normal, and it's simple, just HMAC math on the CPU. It's not some expensive, complicated thing. So, to sum up: use tempURLs. They're awesome. Thank you very much. So, I've said this a lot today, but if you haven't got a ticket yet, please raise your hand. You have an opportunity to win a 10-inch Slate. So for folks that are joining, if you'd like to win a Slate, raise your hand and we'll bring you a ticket. All right, next we have Clint Byrum. So give it up for Clint. All right, how's everybody doing? All right, that's Monday. I'm going to ask you guys again on Friday. We'll see how that goes. All right, so I'm here to talk about change, because change is hard. It's funny, I actually heard the exact opposite assertion in one of the keynotes this morning, and I cackled. So change is one of the hardest things that we do in OpenStack. My example here is the Jenga blocks. If you've ever stacked up Jenga, to start is obvious. We all know how Jenga looks when it's done; anyone can get Jenga started. What's hard is changing it, because every change that you introduce introduces entropy. And this is difficult. OpenStack has real problems with change, and we've all seen it, right? Upgrades. Everybody is asserting, I've got this Havana, I've got this Grizzly, I've got something more ancient than that, and it's hard. And we discovered that while we were developing Helion: we want to be able to upgrade it, even just patches, just to be able to deliver new software that might change the database or need new objects in the message queue. This is hard stuff. Another thing is timing matters. If you just try to run everything whenever you want to, now, really fantastic software handles this great, but, you know, sometimes the squirrel is in the way. You need to wait. 
So that's my paradigm for locking, and we tried to do this with Heat, and we failed. Heat has the ability to do this, but expressing it becomes very verbose. And so we thought of a new way to do it, because the problem was this: Heat allows you to express this graph. This is a nice representation of what you do in Heat. You say, oh, I'm going to give you some user config and some database details, and you're going to create me servers, and then from those servers come more servers and monitoring, and then I'm going to feed that all into some software deployment that's going to go and do stuff inside those servers. It's really fantastic for building the Jenga blocks, because you know exactly the end state and you know each step to get there. And what does Heat do with that? It turns it into a workflow. So inside Heat it goes, oh, okay, first I need to resolve the user configs, then the results feed in, everything's ready, I'm going to make some more servers. This is all very simple workflow stuff, and I can forget about how this workflow works when I'm using Heat, which is fantastic. All right. What if I just want to change that server? I just need to upgrade that server. What kind of server it is matters. The state of the other servers matters. The timing matters. And the change is difficult, and Heat has some great hooks for doing stuff like this, but it gets more and more and more complex, and in fact what I can do is just instead use Ansible and assert a workflow. So, Ansible is a sequential workflow tool. While Heat is describing the end state and finding a way to get there, Ansible is actually defining all the steps that we want to take, and it does it in a very nice, succinct way. It does it agentless. It does it over SSH. And it's actually turned into a very pleasurable experience, instead of trying to express this all in Heat without actual workflow control. So this workflow on the left, there's a few pieces of it here, expressed in YAML in Ansible. 
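The sequential, introspectable workflow Clint contrasts with Heat's declarative model can be sketched as a plain ordered runner (a hypothetical illustration of the idea, not Ansible itself): each step runs in order, failures are visible at a specific step, and steps can be retried individually.

```python
def run_workflow(steps, max_retries=1):
    """Run (name, action) steps strictly in order, retrying each
    failed step up to max_retries times, and stop at the first
    step that keeps failing so you can see exactly where it broke."""
    log = []
    for name, action in steps:
        for attempt in range(max_retries + 1):
            try:
                action()
                log.append((name, "ok"))
                break
            except Exception:
                if attempt == max_retries:
                    log.append((name, "failed"))
                    return log   # stop at the broken step
    return log
```

The point of the contrast: a declarative engine hides this loop behind a generic graph solver, while a sequential tool leaves the order, the failure point, and the retry in plain sight.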
So the result of that was, instead of having to write probably about 15 times more Heat, we were able to boil that down into just a single Ansible playbook that takes you from Helion 1.0 to Helion 2.0 really soon. And going from there, the workflow becomes introspectable, and you can understand it, and when it breaks, which is probably most of the time, sorry, you can actually debug it, and you can know, and you can retry pieces of it. All these things are very, very difficult with declarative systems, because they try to genericize problems, whereas Ansible is actually just trying to let you get past the problems and work around them. So where do we go from here? Now that we've brought Ansible in, we put that up on Stackforge, and I'm inviting more TripleO contributors to take a look and participate in that. We're also trying to invite more Ansible people in to take a look. I know there's several vendors out there that build their clouds using Ansible, and so the next thing is just to make the party grow a little bigger. Right now it's just a couple of us at HP and inside the TripleO project looking at this, but I'd like it to grow. I'd like to see if we could stand up, build the Jenga blocks, without even necessarily doing as much in Heat, and do all the software stuff in Ansible, just so that we can reuse all that stuff. There's other things that we've been thinking about. We don't have a REST API for Ansible; that's one real difficult thing, so Tuskar in TripleO is a UI and REST API for it. That's all I have. Please come to our session at 4:30 tomorrow, or Wednesday, and go again if you want to talk about it more. Thank you. Thank you very much. Hello. Thank you very much. Thanks, Clint. So next we have Marie-Paule. So, five minutes, it's short, especially to talk about NFV. So, who knows already about NFV? That's easier. So a few of you don't know about it, so I hope I can stick to the five minutes. So first of all, NFV stands for network function virtualization, and it's about telecom 
networks. So everybody knows about telecom networks: you have mobile phones, you have internet at home, you know that enterprises, they also need to have connections and so on. And so the big thing about the telecom networks is that until very recently they were very static. I mean, they are asking HP to support hardware for tens of years and so on, not to change the software for 20 years, and it's been a very static and conservative world. Except that these years, with all of the over-the-top services and so on, they are losing money and they are being challenged very much, and so there is a revolution happening, and it's virtualization. They need more agility and so on. So you've heard about NFV in the keynotes, which was a big surprise to me, because we were starting to try to talk to the open source community for, you know, a few months, almost a year now, coming from the telecom world and the standards and so on. But, you know, we are two different worlds. I mean, in the telecom world everything is standardized, it's regulated, it takes, you know, two years to set up a standard and so on, so it's quite rigid. In the open source community, somebody codes something, it's reviewed, it, you know, goes in and so on; it's very agile. And so we are trying to learn how to talk to each other. So just to give you a brief overview of what's going on: this is what has been specified in the ETSI NFV, so network function virtualization, to represent a kind of functional architecture of what a telecom network could become, you know, if it was virtualized. And when we talk about virtualization of telecom networks, it's not just cloud, and it's not just using OpenStack in the cloud. It's really virtualizing everything, from the set-top box in your house, to the firewall, NAT and routers in the enterprise, the core network, the base stations. It's everything. And so it's spread across, you know, countries, across different operators, and it's to deliver services, you know, across all these different geographies and service providers. So 
what you see here, I'm trying to maybe clarify things a bit here, is your network function virtualization infrastructure. So that's your hardware: compute, storage, networking. That's the virtualization layer, and it could be anything, but for the time being we talk a lot about hypervisors. And then you have the virtual resources. This virtualized infrastructure is being managed, and you have something that we call the VIM, the virtualized infrastructure manager. For you, it's like OpenStack; this is where we put OpenStack. And then on top of that we have the virtual network functions. That's all the functions that the telecom operators need to run the network. So it can be a set-top box, it can be a base station, it can be an IMS core, it can be, you know, anything that is in the network. And these functions, if they are virtualized, they become virtualized network functions. They are being managed by the VNF manager. And so these things normally come from the equipment providers, so the Ericsson that you heard this morning, the Huaweis and so on. It happens that HP provides some of that: in our organization, which is CMS, Communication and Media Solutions, we provide some of these VNF functions, like home location registers and so on. And then on top of that, because telecom operators not only need to deploy these functions on the network, but they need to chain these functions to build services, and then chain these functions with other functions from other service providers and so on to build end-to-end services, there is an orchestrator. So in this design, what we define is an orchestrator that would be on each network operator's network, to deploy the functions and then chain the services, and then, you know, define these end-to-end services, but also to talk to the orchestrators from the other service providers and so on. And then on top of that, you have all the support systems from the operators, to create their own services that they need to sell and so on, but also to allow enterprises to build their own services 
on demand, to allow a user, you know, residential customers like yourself, to, you know, also subscribe to new services, define new services, change services and so on. But these systems need to be updated kind of real-time sometimes, and there's, you know, a whole bunch of requirements which appear as we dig into, you know, this virtualization of the telecom networks. So let me show how this works. So what we have done is we have defined a number of use cases, and there are like nine of them, and then we looked into, you know, what was specific in terms of requirements for OpenStack, for instance, and we have a number of these requirements which have started to be listed. You will recognize some of them; some of them are already showing up in Juno and in Kilo projects and so on, and HP is also very actively working on that. So you know about Helion, I mean, some of you are working on Helion, and what we are doing is that we have announced... thank you... we have announced today, you read it in the news... thank you so much. Thank you so much, Marie-Paule, that was awesome. Network function virtualization is very neat. So are there any more opportunities to hear about it this week? 
Yeah, not much, actually. We cannot miss the opportunity to get speaking slots at the OpenStack Summit; next year we will do better. So I will be at the bar if you have questions. All right, next. All right, we have... I don't want to butcher your name, Surim, or I think we have another name for you: Cloud Don. So Cloud Don is going to come up and he is going to give his presentation, and while he is doing that, I want to tell you really quickly that once we are done here, we are going to have this draw real quick, and then we are going to head down to the booth, and we have eight PTLs that are going to be staffing as experts, from the technical committee, and one member of the board of directors. So lots of folks that are really involved with the OpenStack community. The experts are going to be there to answer any questions that you might have, or just, if you want to chat and have champagne at the HP booth, it is right outside of the big auditorium place. Thanks, Cori. Thanks, everybody. I just realized that I didn't put my name on that: Sri Ramasbraman, aka Cloud Don. This is a man with a black hat, Cloud Don. I'm here to present Staxillator, an accelerator for OpenStack startups. Where do we stand with startups now? We are at an inflection point. We are more than four years old now. We have seen scores of startups; some of them had good exits. We have seen a lot of funding activity in the past three years, and we are also seeing a lot of customer adoption these days, and more startups coming in selling OpenStack, because one of the studies you can see is projecting the entire OpenStack market to be more than 3 billion by 2018, and we see larger and larger adoptions and more and more coming. The first wave of startups kind of solved the infrastructure problem, and they have been working mostly around distributions or different service models, but primarily around support. But what is lacking here is, not just for the startups, getting their code into this open source, into this 
humongous code base is difficult. We have heard scary stories of code reviews waiting for more than two weeks... more than two months, I'm sorry. And we also have had very little success with different kinds of business models; maybe we have had one business model, around support, not more than that. We need to fix that. We need to be able to support more startups to navigate this system, navigate this open source, try to get their product out better. Also, this is the right time for more innovation coming in. We had the first wave of startups focusing on infrastructure, but a lot of stuff needs to be done: for instance, automation, more DevOps. Try ambitious projects, try disruptive things, put in containers as first-class citizens in OpenStack. Some of it, which bigger vendors might be scared to do, startups will be able to do, but they need help. So that's where Staxillator comes in. Staxillator is a network of mentors who are very strong OpenStack code contributors and also open source business experts. They'll be able to provide mentorship and guidance in getting your code out, getting the startup's code out. They'll be able to fine-tune your business model, and, all the more important, Staxillator will provide investor access for startups. How are we doing that? Staxillator operates in three different models: on-prem, hosted, and distributed. Hosted is like on-site at Staxillator's or an incubator's workspace, a shared workspace. Select startups will be working for a period of three months on their respective code development. Staxillator's mentors will be able to provide hands-on guidance and mentoring over the period of three months. Staxillator will also provide 20k of seed funding in exchange for 6% of equity. On-prem is where a large player like HP would be hosting these startups on their premises. These hosts will not get any equity in exchange, but they'll be able to have a say in the product itself. They will also have first access, or first right to acquire or invest in the graduating startups. 
Startups will get resources both from the hosting company and also Staxillator's list of mentors; that also comes in exchange for 6% of equity. The distributed model, finally, is a global model: startups will stay in their respective workplaces for three months, and Staxillator will be able to provide guidance remotely, in exchange for 3%. So essentially, Staxillator will be able to provide mentoring and guidance for accelerating your product development, by guidance and mentoring on code development and business models. If you are interested in either being a mentor or being an investor, please sign up at signup.staxillator.com. If you are a startup, please apply at signup.staxillator.com. Thank you. All right, great, thanks so much. So next we have Rick coming up. And does anybody need some tickets to win a tablet? Raise your hand. HP doesn't qualify. HP people, if you want a tablet, you're gonna have to go and buy it yourself. So, I'm Rick, and you can also reach me at rev at hp.com. I'm here to talk about containers, and the definition I found about containers is: containers are an implementation of operating-system-level virtualization. I started this talk several months ago because I noticed a certain bigotry against containers by the virtualization establishment, so I'm here to put containers in context, and maybe alleviate some of that bigotry, or at least explain it, put it in context as it were. So let's talk a little bit about containers. Containers started off, as far as I can tell, with something called jails. Jails were introduced in about 2000 for FreeBSD. Basically they used chroot to assign processes access to the file system and isolate the processes that way. Sun followed on with their Zones implementation in about 2005, again a derivation on a different operating system. And then likewise Linux developed containers, called Linux Containers or LXC, and roughly released that in 2008. Along came another company; they made something called lmctfy, which 
stands for "let me contain that for you", and this company uses containers to deploy, if not all of their software, a large section of their software on this cloud thing. This company is called Google, and they open sourced lmctfy in around 2013. You're probably more familiar with Docker, and, by the way, Google's lmctfy is based on LXC, as is Docker. Docker is basically a derivation of... actually it's not a derivation, it's an improvement on... I'm sorry, LXC. Basically they made it easier to use and manage containers; that's really their secret sauce. So let's talk a little bit of history about virtualization. Why do some people not think containers are virtualization? Well, to understand that, we have to understand what virtualization is, and again, this is kind of a naive history, but I only have five minutes. But before we talk about virtualization, we need to take a brief history of multi-processing. So, multi-processing: back in the day, some of us would take a deck of program cards that we typed in, hand them to a guy behind the door, he would then load them into the computer, and our program would run. The next person would hand in their cards, and their program would run. That's one program at a time. Then Moore's law happened, and it turns out we can load many programs into memory at once. So what we needed was a system to suspend the running program and then unsuspend one of the suspended programs, and this is what multi-processing basically is. Multi-processing uses the scheduler, which is based in the kernel, and one of the main things it does is schedule the processes to run in time, and then your applications or your services run in user space. Kernel space is protected; it's the very basis of the operating system, and user space is where all your applications run. Now we can run many processes all at once, and it appears, like magic, like we have many CPUs. So we're done with talking about multi-processing; let's return back to virtualization. So 
virtualization is basically simulating the hardware in software. This is the true definition of virtualization: virtual means not real, so virtualization is "not real." The hardware is simulated in software and firmware on the host, by a system called the hypervisor, sometimes called the virtual machine manager. Paravirtualization is basically where the hardware is not fully simulated, and it is therefore less virtual; most virtualization systems you're aware of are actually paravirtualization systems. But as long as you use the same hardware, virtualization works. Containers are an implementation of operating-system-level virtualization, which is user-space virtualization. Containers contain the application processes and sub-processes but not the operating system, and that's key: the reason containers are so fast is that they're not swapping the operating system in and out. As long as you agree that you're going to use the same operating system, containerization works. But note that containers do not use hypervisors, and that might be the gist of the bigotry against containers, because most people think of hypervisors as virtualization; I think I've shown otherwise. Great, thank you so much.

So the next person I'm going to invite up to speak is actually a celebrity; she's been in the media recently as one of the top five women in tech helping drive social change. She's done a really great job there, so please give her a round of applause. I didn't say her name, but it's Elizabeth.

My name is Elizabeth Joseph, and I'm here to talk to you about a tool we use in the infrastructure team called Elastic Recheck. As you may know if you've contributed to OpenStack, your change has to pass a bunch of automated tests to get into the repository. This is really good because it ensures code consistency and code quality, and presumably, when someone downloads DevStack, the development version of OpenStack, it's been tested and it actually works. So
that's great, yay. Unfortunately, it doesn't always work quite like that. Sometimes there are failures that have nothing to do with the code you're actually proposing. There could be an upstream service outage: maybe the cloud can't contact the Ubuntu repository or something. There may be an infrastructure problem (those never happen). Or bugs: OpenStack may actually have a bug that is only encountered every once in a while, and your test run just happens to bump into it. The test itself may have a bug, so your code should be passing but something's wrong with the test. There are also sometimes dependency issues.

Now, for a long time we didn't have a great view into this. Developers certainly knew when our tests were failing a lot, and we knew when the failure numbers were high, but we decided we needed a better view into why these things were failing. For a while we were depending on humans to tell us what the bugs were. That didn't work so well, because they'd just do "recheck 12345" with a bug number, which is not helpful for the robots either. So Elastic Recheck is a project that is meant to collect, organize, and detect failures that have been defined. When there's a failure in the gate and you think there shouldn't be, you have to wonder what happened. Elastic Recheck notices that there's been a failure and adds it to a website (I should have had a screenshot, but I don't): a webpage listing a bunch of types of tests and their failures. A developer comes along, looks at this uncategorized list, and says, "hey, this test is failing a lot, and these are all the reasons it's failing." What they do is dig into those reviews and find out which of them are failing in the exact same way. And by "developers" I mean you, because we need more people working on this, and that's why I'm here talking about it. Once a developer finds a pattern that all
these reviews have been hitting, they create a bug report and link it to the appropriate reviews. We then have a Logstash cluster that we put together, using Elasticsearch and Kibana, the whole ELK stack, to collate all the logs we send to it. At that point you know what the issue is and which logs reference it, so you go to Logstash and create a query. That query is then put into the Elastic Recheck repository under a queries directory, referencing the bug number and title. This allows Elastic Recheck to go and find the bugs and report them back to the people submitting the reviews: it monitors the logs in Logstash, and when it finds a bug that has been recorded in the Elastic Recheck repository, it leaves a comment in Gerrit, the code review system, for the developer. That's nice because it gives the developer a chance to rerun the test knowing it wasn't their change that caused the problem, since it's a known bug. It was really, really frustrating for developers to have a failed test, not know why it failed, and wonder, "does anyone know about this problem?" A bunch of times we didn't know about it. Now they get feedback when the Elastic Recheck bot comes in and says, "hey, you hit this bug, sorry about that." Also, in a perfect world, this gives them the opportunity to help us fix that bug, which would be really nice if more people did: it says you hit this bug, you can run recheck to rerun the test, or, if you have expertise in that area, you can dig in and help us fix the bug. And that's all I have about Elastic Recheck.

Thank you very much, Elizabeth. So we have one more lightning talk presenter before we do the final draw for the last Slate that I have here, and then we're all going to head down to enjoy the booth; they're going to have lots of great food. And for the folks that just joined, be sure to check out our booth.
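As a footnote to Elizabeth's talk: a query in the elastic-recheck queries directory is a small YAML file, conventionally named after the bug number, containing the Elasticsearch query string that matches the failure's log signature. A rough sketch, where the bug number and query string are invented purely for illustration:

```yaml
# queries/1234567.yaml -- file name is the bug number (illustrative)
# The query uses Elasticsearch/Lucene syntax against fields that
# Logstash indexes from the CI logs.
query: >
  message:"Timed out waiting for resource to become active" AND
  tags:"console" AND
  build_status:"FAILURE"
```

Once a file like this is merged, the bot can match new gate failures against the query and comment on the affected reviews automatically.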
We're going to have a bunch of OpenStack and open-source experts, some free food, and some free champagne, I think, so it should be a good time. The next person I'm going to ask to come up and speak, and I know it's been a long day, so I appreciate you all sticking around, is Spencer Krum.

Yeah, it was like last week. Seriously, you don't have one? Wow. We're going to do this just for fun. So what I want to talk about today is called synthesis. There was this guy in the 1800s in Germany called Hegel; I was told about him in the 11th grade. He had this idea in philosophy about synthesis: we come up with an idea, that's the thesis, and then in direct response to that idea the antithesis is generated, and these two things fight with each other to generate something called the synthesis, a new idea that is neither of the original two. Then the process repeats itself, with the synthesis becoming the new thesis. Okay, so what does that mean for us?

My name is Spencer Krum, and I work on something called Gozer, which is what happens when HP decides to bring the OpenStack infrastructure project internally into HP. So we have this problem where we have a thesis, which is the upstream situation, and an antithesis, which is us trying to bring it downstream, and we have to generate a synthesis. There's this file called layout.yaml, which is essentially a list of all the projects in OpenStack and which tests to run against them. Traditionally it's in git upstream and then in git downstream, and so we manage a git diff and merge against it, and that becomes a real pain in the butt when the file is a thousand lines long and the diff itself is over 500. So what we'll do is say: there are a few things we know about this file. We know that it has structure, we know that it's YAML, and it actually is just data. Furthermore, we can make some
reasonable inferences. When someone pushes a signed commit up to, say, Nova, there's a job defined that will package that up and send it to PyPI. We know that downstream it never makes sense to run that. So what we can do is build a tool that processes the upstream file and generates a YAML file for ourselves. Instead of simply taking a thesis and an antithesis and generating a synthesis, we can pop up a layer of meta and generate what we need with tooling. Where I think that comes into play in OpenStack: say you're running a downstream Heat, packaging up Heat and shipping it off to your customers. What you might want to do, instead of simply managing a diff, is make a tool that grabs the Heat source code, applies function decorators to all of those functions, and generates a new file, and you ship that along with your own library containing those decorators. Then you can open up those functions and classes and change them to your needs without maintaining a giant diff against upstream. That's just an idea, and I don't really recommend you do that, but it's a way to think about synthesizing a new situation. Thank you.

Thank you. Can I have Bdale Garbee come up and pick from the box? Oh, and does everybody have a ticket already? I know some folks have come in and left; you need a ticket to win, so if you don't have one already, just raise your hands like you just don't care and we'll bring you a ticket. It's okay. Did you pick that? Did you want to read it out? Yeah, Ken, where's the number? Oh, you're at the top; they're all in the box already. Okay, so 0835790. Yes? No? Maybe? 0835790... suckers. Take that one: 0837563. Is that true? Cool, cool. Thank you so much, Bdale. What's your name? Jan? What do you do?
I'm doing product management for OpenStack at a small company in Burnin. Congratulations, you got yourself a tablet. Thank you very much.

All right, thank you everybody for coming to the lightning talks today. Be sure to head down to floor number one and all the different booths; they have food and drink, and definitely be sure to check out the HP booth, where we're going to have our HP experts, and get some cheese and champagne. I just want to give Cody Somerville a huge round of applause. Can you guys give it up for Holly Batter and Cody Somerville, who emceed this entire day for you? Thank you very much, Cody, and thank you, Holly.
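As a closing sketch of the layout.yaml idea from Spencer's talk: rather than carrying a downstream git diff, a small tool can read the upstream layout and emit a downstream copy with upstream-only jobs (such as PyPI upload jobs) filtered out. The file names and the job-name heuristic below are assumptions for illustration, not the actual Gozer tooling:

```python
# Sketch: derive a downstream Zuul layout from an upstream layout.yaml
# by dropping jobs that only make sense upstream (e.g. PyPI uploads).
# The marker list and file names are illustrative assumptions.

UPSTREAM_ONLY_MARKERS = ("pypi", "tarball", "release")


def is_downstream_job(job_name):
    """Keep a job unless its name suggests an upstream-only publish step."""
    return not any(marker in job_name for marker in UPSTREAM_ONLY_MARKERS)


def filter_layout(upstream):
    """Return a copy of the parsed layout with upstream-only jobs removed."""
    downstream = dict(upstream)
    downstream["projects"] = []
    for project in upstream.get("projects", []):
        project = dict(project)  # shallow copy; leave the input untouched
        for pipeline in ("check", "gate", "post"):
            if pipeline in project:
                project[pipeline] = [
                    job for job in project[pipeline] if is_downstream_job(job)
                ]
        downstream["projects"].append(project)
    return downstream


if __name__ == "__main__":
    import yaml  # PyYAML; the layout is plain data, as noted in the talk

    with open("layout.yaml") as src:
        layout = yaml.safe_load(src)
    with open("layout-downstream.yaml", "w") as dst:
        yaml.safe_dump(filter_layout(layout), dst, default_flow_style=False)
```

Because the layout is just data, the downstream file can be regenerated automatically on every upstream change instead of resolving a 500-line diff by hand.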