Yay, it works. Hey, everyone, welcome to the upstream development lightning talks. You all made it through, so we're going to have some fun, exciting talks. I'm sure you've seen the lineup; it's looking pretty good. So let's go ahead and start with Lana.

Thank you. Okay, so the story begins, as so many do, with an argument on a mailing list. In the olden days, around Mitaka, we only wrote and maintained install documentation for what we used to call the DefCore projects: Nova, Glance, Keystone, Neutron, Cinder, Horizon, plus a couple of others. Every so often some other project would ask to be included, and we'd sort of go, "Do you mind, could you just go off and do that in your own repo or something, please?" because we kind of had to draw a line somewhere. Anyway, eventually somebody worked out that this wasn't a thing that was really cool, and so of course the inevitable happened and there was a bit of a rant on a mailing list. So, being the good docs PTL that I am, I put on my big girl panties and I waded right on into that mailing list thread. I didn't come away entirely unscathed, but, you know, I did have a few battle scars. But we did actually come away with a plan, and that was a good thing. So the problem is that the big tent is not just big, it's huge. We have the best big tent. It's not getting any smaller, either; that big tent keeps getting bigger, and pretty much all these projects within the big tent want an install guide. And having an install guide is also part of the Project Navigator, so it's a way of being a grown-up project in OpenStack land: you need to have an install guide. So at this point, I would like to acknowledge my co-conspirators.
You know, I can see one. So while I was making sort of wild-eyed assumptions and throwing the proverbial at walls, Tomoyuki Kato and Andreas Jaeger were behind the scenes making sure it was all actually going to work, and that the things I was saying were actually things we could do. I appreciate that those guys did that, and I didn't even feel like hitting them, not even once. So, enter these wonderful guys: we got things set up in a way that allows each and every project to write and maintain their own install guide in their own repo, and we publish it up to the docs website, and it looks exactly the same as all the others, and they sit alongside the core install guide. This is what we started referring to as treating each project as a first-class citizen, and it became a core principle behind the changes we implemented. We also wanted to make it as easy as possible for projects to do this. So Andreas set up this really clever little tool. It's actually built on a little GitHub project by a woman called Audrey Roy Greenfeld, called Cookiecutter, and it lets us set up a basic structure. That means we get a consistent structure in each individual document, and it helps everything look uniform across the docs site. So, in summary: as of the Newton release a couple of weeks ago, newcomers to OpenStack have a much clearer path to getting started. There's the basic core install guide, which gets all the underlying services installed, things like your networking, your database, and your compute services. From there, additional project-specific guides pick up from that base first step and build on those services. So what next? There's always more to be done. We need to keep supporting the various projects in writing their guides. We also want to redesign the index page, because at the moment it's butt ugly, so that's next. And we also want to make it more user-friendly.
We also want to get some other install methods documented. We know from the user survey that most people install OpenStack using a tool such as Puppet or Ansible, and we'd love to see some guides on those tools get up there as well. I'm going to redesign the site to make it really user-friendly for people to get at. It's also why we changed the name from "install guide" to "install tutorial": because we understand that if you're installing things by hand, you're probably not doing that for a production system; you're doing it because you're trying to learn what OpenStack is. In summary: if you work on a project and you want an install guide, that is now a thing you can do. If you are a user looking for install information, that is now something that's much easier for you to find, and everybody wins. Before I go, just real quick before I get kicked off the stage, I'd also like to mention there is a Superuser article up on the website right now that explains all this in much more detail. It's not quite so quick, and it has all the links and everything, so you can go and check it all out. Thank you.

Speed demon; let me stop the timer. Okay, so we're going to jump over now. Did I say it right? I apologize, everyone, I'm horrible with names. And then on deck we have Spencer.

Hello, everyone. I'm Hu Pin, and I contribute to the OpenStack Cinder project.
So this talk is about troubleshooting Nova and Cinder API interaction failures by tracing volume state transitions from log files. Nova calls Cinder APIs to perform various operations, and these operations result in changes to volume state; for example, the attach API changes the volume state from "attaching" to "in-use". There are bugs in the interaction, especially around cleanup, and because of those bugs, if something goes wrong, the volume state can become inconsistent between Nova and Cinder. This inconsistency can result in different kinds of bugs. For example, a volume cannot be deleted because it's attached to no corresponding instance; in another case, we cannot detach a volume because it's not attached at all. In typical troubleshooting, we try to identify from the log files the volume operation which failed, then identify the successful volume operations leading up to the one which failed, and then figure out what happened in Nova during those successful operations. This is time-consuming. Since these bugs are due to volume state inconsistency, can we use volume state transitions to speed up the troubleshooting? In my experience, we can. The problem which motivated me to look into this approach was a bug report which says "Invalid volume: volume doesn't have any attachments to detach." Someone tried to delete an instance, Nova called the detach API on Cinder to detach any attached volumes, and that detach call was failing. So I have a script which generates volume state transition diagrams from log files, given a volume ID as input. Here we can see the volume state transition diagram of the volume in the bug report.
For each transition, we have the request ID and the operation responsible for that transition. We can see that transition number eight is marked red because it's an invalid transition: they tried to detach a volume which is already in the "available" state. We can see that transition number seven was a detach call, and it was a successful one. So in Cinder the detach call was successful, but Nova still thinks the volume is attached, so something must have happened in Nova during transition number seven. We can get more details about that transition using the same script: we give the request ID and the volume ID as input, and it prints the details of the requested transition and the previous one, for more debugging context. We get various info such as the request ID, the instance ID, and the timestamp. Using this info, we can quickly identify the relevant lines in the Nova log files. This is what I found in the Nova log files using the debugging info from the transitions: we have the attach call, and it was successful; then, in the third step, the Nova instance spawn failed, and Nova tried to clean up the instance. As part of the cleanup, Nova issued the detach call, and that detach was also successful. So then why did Nova try to detach the volume again, even though the detach had succeeded? One possibility is that Nova did not clean up the volume-related metadata after the detach. So I looked into the code responsible for this cleanup, and I couldn't find any destroy call on the BDM entry, which is the block device mapping, the volume metadata in Nova, and I think that's the root cause of this bug. That's it. Thank you.

Thank you, Hu Pin. So let's go ahead; we're going to be hearing from Spencer.
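The tracing approach described above can be sketched in a few lines of Python. This is a minimal illustration, not the speaker's actual script: the log-line format, the regex, and the valid-transition table are all assumptions invented for the example.

```python
import re

# Hypothetical log-line format for illustration only; a real deployment's
# Nova/Cinder logs look different, and the real script is not shown here.
LOG_RE = re.compile(
    r"req-(?P<request_id>\S+) .*volume (?P<volume_id>\S+) "
    r"state: (?P<old>\S+) -> (?P<new>\S+)"
)

# A tiny subset of plausible Cinder volume state transitions.
VALID = {
    ("available", "attaching"),
    ("attaching", "in-use"),
    ("in-use", "detaching"),
    ("detaching", "available"),
}

def trace_volume(log_lines, volume_id):
    """Return [(request_id, old_state, new_state, is_valid), ...] for one volume."""
    transitions = []
    for line in log_lines:
        m = LOG_RE.search(line)
        if not m or m.group("volume_id") != volume_id:
            continue
        old, new = m.group("old"), m.group("new")
        transitions.append(
            (m.group("request_id"), old, new, (old, new) in VALID)
        )
    return transitions
```

An invalid entry in the returned list (like a detach attempted from "available") is exactly the kind of red-flagged transition the speaker's diagram highlights.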
I'm going to cancel that timer, and we have on deck Jeremy and Matthew.

Okay, hello, everyone; the logo changed. My name is Spencer Krum, I work on the infrastructure team, and, if there were no other hints, I work at IBM. This is a talk about Vinz. So, Gerrit is this thing that the infrastructure team runs that hopefully most of you have interacted with, and it's very famous for being pretty ugly and terrible. Gerrit is probably a really good example of having great data structures (the internal data structures are really solid) but a UI that is just bad. Over the years we've done a lot of things to try to make this better. A lot of people might not know this, but the little grid there showing you your test results? That's not in Gerrit; that's actually something that infra injects with JavaScript to make Gerrit look less bad. So there was this idea that came out of that: we would write something called Vinz, which was basically saying, well, Gerrit has an API and good data structures; we'll use it as just an API service, and we'll write a UI that doesn't suck. This idea has come and gone several times, but eventually I got tired and decided I would try. So I tried. You do the simple thing first, where you just have a web page that has the client make requests against Gerrit, and that immediately fails because of cross-origin request restrictions, CORS, the same-origin policy. So you get to the point where you need to make requests from the Gerrit service itself. I don't actually know how to program JavaScript, so what I did is I used Greasemonkey to inject some JavaScript into Gerrit whenever it would load a specific change, and I started deleting things and playing with the Gerrit API until I could give a +1 vote and a -1 vote with some little events.
I mean, at that point it was kind of pokeable, and I showed Greg, and Greg was like, "This is really, really bad." Then we clicked around some more and we realized there was a big security vulnerability in our Gerrit that had been revealed by me screwing with Greasemonkey. So I'm totally calling it: that's always a good outcome from now on. But then we go back to: I don't actually know JavaScript. And then Diana Whitman shows up and says, "Hey, I know how to use this React.js stuff; let's make a better Vinz." And so we actually made a thing that looks kind of not like poop. You can't really scroll up and down in this because it's a screenshot, but it's a view of a change that does not look super cluttered and is not all table-driven, and you can look at it on your iPhone or your tablet. The title of this talk used the word "viable," which is not even remotely close, but we did write it, and it's got a whole thing. There are other people working on this, right? Everyone hates this problem. The Gerrit developers themselves are using the Polymer framework from Google to rewrite the entire UI, and that will basically be forced upon us the next time we upgrade Gerrit; one of the future Gerrit upgrades will be like, "Here's the new Polymer-based UI."
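Vinz leans on Gerrit's REST API rather than the web UI. As a rough illustration of the kind of call those +1/-1 buttons boil down to, here is a sketch that builds the request for Gerrit's documented "set review" endpoint. The endpoint path shape follows Gerrit's REST API; the helper function itself and its arguments are made up for this example, and nothing is actually sent over the network.

```python
import json

def build_review(change_id, revision_id, vote, message=""):
    """Build the path and JSON body for Gerrit's 'set review' REST call.

    Gerrit accepts an authenticated POST to
    /a/changes/{change-id}/revisions/{revision-id}/review
    with a JSON body carrying a cover message and label votes.
    """
    path = "/a/changes/%s/revisions/%s/review" % (change_id, revision_id)
    body = json.dumps({
        "message": message,
        "labels": {"Code-Review": vote},   # e.g. +1 or -1
    })
    return path, body
```

A real client would POST `body` to `path` on the Gerrit server with HTTP auth; Vinz-style tools wrap calls like this behind a nicer UI.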
I hope you hate this less. And then there's a group called OctoGerrit, which is basically doing Vinz, except they make it look a lot like GitHub. If you'd like to get involved in the project: honestly, Diana left OpenStack (she was a Horizon dev at HP and then moved on to do other things), and I don't really know JavaScript that well, but if anybody was super excited to work on it, I would be happy to help. We have a #vinz channel on Freenode, and it's kind of cool; you feel pretty sweet when you're like, "I made this a little bit better." And we're actually pretty close: if we wanted to, we could deploy it under review-dev and have it be live enough for people to click on. So anyway, that's the project. Thanks for your attention and have a great summit. My name is Spencer Krum. Thank you.

All right, we're going to hear from Just Jeremy, otherwise known as fungi. And then we're going to be hearing from Kirill, Stan, and Dmitri. So here we go.

Sorry. As Mike says, I'm Just Jeremy, but also known as fungi. Matt Treinish is not here, obviously, but he's here in spirit, and we originally proposed this talk together.
So I'm going to try to represent his tenacity for this project as much as I can. We want to talk about an experimental service that we've introduced into the community infrastructure recently, something that was sort of a brainchild of Matt's. With the assistance of some of the rest of our developer community, we now have the firehose running in our community infrastructure. I always like to start out with a big, complicated diagram of our community infrastructure, but I'm tired of adding them to slideshows, so imagine our massively complicated community infrastructure: lots of services with lots of transitions going on within them continuously, as we perform testing of OpenStack changes and provide wikis and mailing lists and so on. The idea was to take an MQTT broker and use it as a carrier for events within various pieces of the community infrastructure. We have anonymous read-only access to this MQTT service on port 1883. You can also do TLS/SSL on port 8883, and there is some rudimentary WebSocket support. We had it running, but there's a libwebsockets bug that has caused us to temporarily disable it until we can rebuild against a trunk version of libwebsockets, so we need to work through a little bit of that. Basically, for those of you unfamiliar with the protocol: MQTT is a publish/subscribe messaging protocol, originally known as MQ Telemetry Transport.
It's now an ISO/IEC standard. The protocol itself dates back to 1999, so it's got a fair amount of history. It's basically a lightweight message queue design, built for low bandwidth and low resource utilization. We are using Mosquitto as the broker on the firehose at the moment, and there's an example there of using the Mosquitto utilities, which are basically just a command-line subscription to the wildcard topic on the firehose, if you want to get the full effect of seeing all the events we're publishing in there right now, which is a lot, but from a limited number of services so far. As you can see here, the services we are publishing into the firehose are represented by various topic patterns. We've got a publisher called germqtt that Matt wrote, which publishes Gerrit stream events, the same sort of stream events that Zuul and gerritbot watch in our environment. We've got another publisher he wrote called lpmqtt, which is actually a mailbox subscribed to Launchpad projects, receiving bug status changes and translating those into MQTT events. And we've got a Logstash outputter that is temporarily disabled because it's still a little rough around the edges, but for people who want to work on translating Logstash entries into MQTT, there's that. They're all free software projects within the infrastructure family of repos for OpenStack. As for our future plans here: we want to get gerritbot using something other than a direct connection to the Gerrit stream, and the firehose is a good option for that.
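Subscriptions on the firehose use MQTT topic filters, where `+` matches exactly one topic level and `#` matches everything below. Here is a minimal pure-Python sketch of that matching rule, as an illustration of the protocol's semantics, not code from the firehose or Mosquitto themselves:

```python
def topic_matches(topic_filter, topic):
    """Return True if an MQTT topic filter matches a concrete topic.

    '+' matches exactly one topic level; '#' matches all remaining levels.
    """
    f_parts = topic_filter.split("/")
    t_parts = topic.split("/")
    for i, f in enumerate(f_parts):
        if f == "#":
            return True              # multi-level wildcard: matches the rest
        if i >= len(t_parts):
            return False             # filter is longer than the topic
        if f != "+" and f != t_parts[i]:
            return False             # literal level mismatch
    return len(f_parts) == len(t_parts)
```

So a client subscribed to `gerrit/#` would receive every Gerrit event the firehose publishes, while `+/bug` would match a single-level prefix followed by a literal `bug` level. (The topic names used here are illustrative, not the firehose's actual topic scheme.)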
There are plans in Zuul v3, for those of you familiar with Zuul's current development status: after the v3 release, we would like Zuul to be able to consume MQTT, or some other message bus, as a stand-in for the Gerrit event stream. And then some more publishers: we would like our Ansible log entries, from the changes that continually happen in our configuration management, to get dumped in there, and state changes in nodepool, so we can see node creation and deletion show up. And whatever you'd like to suggest; we're pretty much open to options here. This is an experimental service, as I said, so we really just want people to play with it and tell us what they think would be cool to add. If you want more information, you can reach us on the infra mailing list or in the infra channel on Freenode, and I've got links there to the system-config documentation for how we manage the service, the spec that introduced it, and also links to the home pages for the MQTT protocol and the Mosquitto suite of applications. And that is it. Thank you very much.

Thank you, Just Jeremy. Okay, so, just Kirill, that's it. And then on deck we have Travis and Matt for Searchlight and Horizon.

So, hi, everyone. My name is Kirill. I'm the PTL of Murano for this cycle and the previous one. I also have here Stan, the actual author of YAQL, the thing I'm going to talk about today, and Dmitri, one of the best Murano app authors. Today I'm going to talk about YAQL. YAQL stands for "yet another query language"; I know we're not good at naming things. It's a small, neat OpenStack library, and it's actually in the requirements.
So you can use it in your project, and you might want to. Or, if you're a user of a project that already incorporates YAQL, say Heat or Mistral, you might want to know how you can enhance your experience with that project by knowing more about YAQL. YAQL is basically a query language that lets you make queries on arbitrary data: you supply some data, and then... let me start afresh. Most projects sooner or later operate on some kind of data, be it a Heat template in Heat, the object model that Murano operates on, or the task graph in Fuel. In many cases, if not all, you would like to extract some data from that object model, transform it somehow, or make some calculations or aggregations over it. YAQL gives you the language and the tooling to embed an expression language into your DSL and operate on that data. Here's what YAQL expressions actually look like. You can have simple arithmetic, you can have filters, and you can have aggregation functions. Imaginable examples of YAQL expressions would be, say, alarm conditions for monitoring systems, like "fire this alarm if half of the servers are in some state"; data mining, like "give me all the VMs whose names start with a certain prefix," or "give me the most used flavors or the most used images," or "give me the names of the users who have Heat stacks spawned two-plus weeks ago," and so on. For example, the fourth one here operates on an imaginary object model, but it should give you the top five most used images. YAQL comes with batteries included: it has a large standard library with a lot of things, so you get string operations, basic math, queries, grouping, aggregation. You
can also extend it: the thing about YAQL is that it's simple, and it's also extendable; I'll get to that in just a moment. You can try out YAQL from the CLI; it comes with a REPL. Basically, just pip install it and run yaql. Then you need some data model to work on, so just load some JSON, and then you can fire your queries. And here's how you use it from Python: just create a YAQL factory, parse an expression, and there you go; then you can operate on the data that your project uses. YAQL is already in the requirements, and several projects are already using it. The first one is Murano; it was originally designed for Murano. In Murano we use it as the basis for MuranoPL, so actually every single line of MuranoPL is a YAQL expression that operates on the object model. Then there's Mistral: Mistral uses it for data transformation and data flow between the workflows. And then there's Heat: in Heat you can insert YAQL expressions to make transformations on the data in its templates, on the input parameters and on the outputs. And finally, Fuel joined us in using YAQL, and uses it for data flow and for deployment predicates: if you're a Fuel plugin developer and you have a graph of tasks, with YAQL you can set the prerequisites for a task to be run or not be run. And that's it; that's basically what I wanted to say about YAQL. Thanks.

If that was my carbon, it would have shattered. Okay, so we're going to hear from Travis and... okay, we actually have both people. Wow. And then on deck we have... oh, man, I'm going to butcher this so badly: Henrique and Raildo. Horrible. I'm going to refresh this, and, off the mic...

All right, thanks. This is the first time I've felt like I was on The Price Is Right. Who's up next? So let's see.
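To make the "top five most used images" example concrete without the YAQL runtime, here is the same kind of query written in plain Python. The object model (a list of server records with an `image` key) is invented for illustration; in YAQL you would express the equivalent as a single chained expression over `$.servers`.

```python
from collections import Counter

def top_used_images(servers, n=5):
    """Return the n most-used image IDs among a list of server records.

    `servers` is a list of dicts with an 'image' key; this object model
    is an assumption made up for the example, not Murano's or Heat's.
    """
    counts = Counter(s["image"] for s in servers)
    return [image for image, _ in counts.most_common(n)]
```

The point of YAQL is that queries like this become one-line expressions you can embed in a template or DSL, instead of imperative code in each consuming project.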
Yeah, I'm Travis, and that's Matt, and we're here to talk a little bit about Searchlight. If you look at the CLI today, what do you have? You basically have a predefined list for all the things. I can go and ask for availability zones; I can ask for instances or networks or something like that. You go out into Horizon; what do you get? You get a predefined list for all the things that somebody decided is how you should look at them, and there's one way to do it. You can go to your hypervisors, get a list of hypervisors, details of one of them; maybe go to your volumes, a list of them, details of one of them; kind of the same pattern. But then what about my things? Where's the list of my things? And that's where we came up with: cloud really requires search. So here, this is actually a view of the Horizon plugin for Searchlight, and the first thing you'll notice is that we're not filtering anything here, and you have some instances and images. But if I want to find my things, so "thing*", I'm going to find volumes, images, DNS records, instances, everything that has "thing" in it. And if I want to say "thing or python," I just add "python," and now I've got volumes and images. But you know what, I really just want "python and web" things, and here, very quickly, in a quick list, I've got instances and images matching those, and I can act on them: I can do rebooting, I can do various things. And the same thing works in the OpenStack client. If I just want to do an OpenStack search query for "python and web," there you go, it's the same thing, and you can see your types, my servers or my images, and you can limit it to whatever resource types you actually care about, or you can query across all of them. So when we came in looking at OpenStack, we said, well, it's a set of distributed services, and that means you have distinct responsibilities.
You have different project teams, many layers of code, different SQL databases, very little consistency in the querying, and you can't really search across these services. And so we said, you know, we're going to bring in Searchlight. What does it do? It gives you unified search, and it's based on Elasticsearch. Under the covers we have Elasticsearch, and you can use the entire Elasticsearch API to find your things, which gives you a consistent search API across the cloud. You have full-text search on any resource, and search term discovery. One of the cool things in the Horizon plugin, which I didn't show, is that you can click and say "I want an instance," and it will show you all the availability zones you might want to filter on; it finds them for you. We have auto-completion and fuzzy search, so if you mistype "security," like somebody did earlier on purpose, it'll fix that for you. Conceptually it's pretty simple, really. For listing and querying, you query against Searchlight, which has a set of plugins. When you make action requests, you go against your services, and resources get indexed either on demand, as an initial start-up phase, or via notifications, from which we load the incremental updates as you go. So if you make a change in Nova, we receive those Nova events and populate them into Elasticsearch behind the scenes, and we take care of all of the RBAC for that. In the core engine we have zero-downtime bulk indexing, meaning if we have to reindex something from scratch, you don't notice, because we make use of aliases and all the fun stuff Elasticsearch gives you. We take care of the incremental indexing, and we have policy-based access controls. There's per-user, field-level data security.
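As a rough idea of what "the entire Elasticsearch API" buys you, a free-text search like the "python and web" example boils down to a standard Elasticsearch query body. The sketch below builds such a body as a Python dict; the resource type names, field choices, and result size are assumptions for illustration, not Searchlight's actual internal query.

```python
import json

# A query_string search of the kind that can be passed through to
# Elasticsearch, filtered to a couple of resource types. The type
# filter field and values here are illustrative assumptions.
query = {
    "query": {
        "bool": {
            "must": {"query_string": {"query": "python AND web"}},
            "filter": {
                "terms": {"_type": ["OS::Nova::Server", "OS::Glance::Image"]}
            },
        }
    },
    "size": 10,
}
body = json.dumps(query)
```

The advantage Searchlight adds on top is that it enforces per-user RBAC and field-level visibility before results come back, so you can expose this query power safely.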
So if you're an admin, you can see the admin fields: on a Nova instance you'll see what host it's on, and you can search on what host it's on, but if you're not, you can't even see that. We take care of that transparently. And the resource plugins are very simple to write: a simple one is generally only a couple hundred lines of Python, maybe 150 to 200 lines, and it just takes care of your data mapping and any extra resource mappings. So I'm going to turn it over to Matt to talk more about the UI.

Sure, I'm Matt Borland. I contribute to both Horizon and the Searchlight UI plugin, and I'm going to talk a little bit about how that plugin works. Horizon, in the Newton cycle, introduced the concept of a registry of resource types. What that means is that throughout the UI you can register information about all the different resource types, like OS::Nova::Server or OS::Glance::Image, and that influences how they are displayed within Horizon. In this case, the resource types correspond to the Heat types. Let me just pop over to what you can see inside Searchlight itself. All the data you see in the results is garnered from combining the result set coming back from Searchlight and Elasticsearch with what we know in Horizon from the registry, and there are three basic things the registry knows. It knows some basic metadata, like what the names of the resources are,
so the fact that it's called "server" or "instance" or whatever you want. The second thing is views: the various ways in which you present the data. If you look at this little drawer popping down from the server line, that's a view; it's basically referring to an AngularJS template, and that tells it how to compose all the data coming from your result. And the third type of thing you register is actions: these things on the right where it says "create image," and down below where it says "pause." Every resource type has a set of actions it can use, and we actually have three different kinds. We've got ones that can be done on a per-item basis, so on a server we can pause, suspend, et cetera. We also have what we call global actions, which are the things shown at the top here: you can create an image without having already selected one, and you could also launch an instance, for example, without having selected something first. What was really nice is that Horizon, thanks to this registry, allowed us to just plug everything in and get everything to work. And I suppose our time is up, but thank you very much. Let me show you one last slide: we have more information about Searchlight and the Searchlight UI at this URL. Thank you.

I couldn't stop the timer on my phone. Okay. All right, so... okay, I thought... oh, there we go. Henrique, there we go, I can say that, and then on deck we have Roman. So, there you go.
Take it away. So I'd just like to tell you why you should not be using the v2 API in 2016. Just to be clear, we are talking about the v3 API of Keystone. That's the brand-new, not-so-new version of the Identity API that was released in Grizzly, so it's been there for about three years, and many people in 2016 still use the v2 API, which was deprecated in Mitaka. In v2 we have a large set of problems, related mainly to the global-admin issue across the cloud: if you are an admin in v2, you can access the whole cloud. You cannot have good multi-tenancy using v2. And although it's pretty old and deprecated, we still see operators on IRC, new users, and people on the mailing list trying to use v2, and we try to tell them not to, but they really insist sometimes. People also use more recent OpenStack versions, for example upgrading their deployment to Liberty or Mitaka, and, without knowing it, they're still using v2 internally or for service-to-service communication. And if deprecation and eventual removal still haven't convinced you to move off v2, we have here some cool features that only v3 has. For example, domains, which give you a new way to group projects and users, so you can have better control of the resources in your cloud. We also have federation, only available in the v3 API of Keystone, with which you can connect to other clouds, to other identity and service providers. And we have better role assignment and policy management: inherited roles, domain-specific roles (oops, that's how you restart), and other cool stuff you can do with the v3 policy sample. And Raildo is going to talk about other cool stuff in v3. Another thing we have in Keystone v3 is hierarchical projects, which provide the ability to create sub-projects in Keystone.
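One concrete difference from v2 worth seeing: a v3 token request is domain-aware, with both the user and the scoped project qualified by a domain. Below is a sketch of the JSON body sent to Keystone's `POST /v3/auth/tokens` endpoint. The structure follows the v3 Identity API; the user, project, and password values are placeholders for this example.

```python
import json

# Body for POST /v3/auth/tokens. Unlike v2, both the user and the
# project are qualified by a domain; all names here are placeholders.
auth_request = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "demo",
                    "domain": {"name": "Default"},
                    "password": "secret",
                }
            },
        },
        "scope": {
            "project": {"name": "demo", "domain": {"name": "Default"}}
        },
    }
}
payload = json.dumps(auth_request)
```

The explicit `domain` blocks are what make the multi-tenancy and domain features the speakers describe possible: two users named "demo" in different domains are distinct principals in v3.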
We have domain-specific backends, which provide the ability to use a different LDAP, for example, for each domain you have in your cloud. We have Fernet tokens, which are non-persistent and fix a lot of the problems with the UUID and PKI tokens, so that's a really good thing in Keystone v3. And we have other features, like trusts, for example. We have v3-only gates now running across services; on these gates, we run the functional tests for the various servers in an environment where v2 is disabled, so it's v3 only, and these tests run on the core services of OpenStack. To be short: you have to migrate now. V2 is deprecated and will probably be removed in an upcoming release. So thank you.

Next we have Roman. Hi, my name is Roman, and I work on Nova and oslo.db. Today I want to talk a bit about the new enginefacade and why you, as a consumer project using oslo.db, should care. The new enginefacade is essentially a new set of APIs, implemented by Mike Bayer back in Liberty, which obsoletes the old enginefacade. The enginefacade is basically the primary interface to oslo.db, from which you obtain connections and ORM sessions, and this new set of APIs was meant to fix the existing problems and to be more concise and less error-prone. The key improvements are, first of all, that you get thread-safe initialization for free, which you had to do manually with the old enginefacade. The imperative interface for obtaining sessions is replaced by a declarative one, so you can declaratively define the scope of sessions and transactions. And we have different decorators for marking reader and writer transactions, so that we can, for example, offload read-only transactions to asynchronous replicas, or retry read-only transactions on DB connection errors. So, to give you a couple of examples of how the new
APIs are better: this is how the pattern for initializing the enginefacade instance used to look. Basically, you need to create an enginefacade instance, and you have to configure it, and you have to do it lazily, because by the time you create it you might not have the config options parsed yet, so you need to write boilerplate something like this. And you needed to know that this can be initialized concurrently, so you needed to use locks; some people did, some people didn't, so in some cases it was broken, not horribly, but still. In the new enginefacade this logic is encapsulated in the facade itself, and the interface is much simpler and cleaner: you just import the decorators of a pre-created facade instance, and then you use them to decorate the DB API methods that talk to the database, and the session and connection are injected into your context. This doesn't mean you only get one pre-created facade; you can create as many instances as you want, say for complex cases like Nova, where you have more than one database. By default it will use options from the database config group automatically, and if you need to override them, say to enable foreign-key support for SQLite,
you can always do that by calling configure() manually, or use the existing hooks, for example to execute them on creation of an engine, to allow for integration with things like osprofiler. Another pattern that was not really good in the old facade was that you had to create sessions manually and define the transaction scope with a context manager. The main problem was that you needed to pass the created session along to other DB API methods to make sure they participated in the same DB transaction. Say you create a snapshot, and then you call another method, snapshot_get: if you forgot to pass the session object, you could easily get the result that the get method would not find the row in the database, because it would run in another transaction. So it was error-prone. And we had this notion of private DB API methods, whose names started with an underscore to denote that they do not create transactions on their own but participate in existing ones, so we had to duplicate every method into a public interface and a private one. Now it is done declaratively, so you don't need private methods at all, and you do not create sessions explicitly: you just declare the transaction scope by decorating the DB API methods, and the session or connection is injected into the context object. So it's not new; it's been there since Liberty, and we have major, complex projects using it, like Nova and Neutron, and for some projects, like Ironic, the migration was really simple. So take a look at these examples and use them to switch to the new enginefacade in your project. If you need any help, just ping me or Mike Bayer on the OpenStack Oslo channel, or post to the mailing list. File bugs if you find any. Switch to the new enginefacade.
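The declarative transaction-scope idea can be illustrated with a tiny stand-in decorator. This is not oslo.db code: the `_Session`, `Context`, and `writer` below are toy versions invented to show the mechanism, namely that the decorator opens a "transaction" only at the outermost call and nested DB API methods reuse it through the context, with no session passed explicitly.

```python
import functools

class _Session:
    """Stand-in for a SQLAlchemy session; it just records operations."""
    def __init__(self):
        self.queries = []

class Context:
    """Stand-in for a request context carrying the injected session."""
    def __init__(self):
        self.session = None
        self.finished = []   # one list of queries per completed transaction

def writer(func):
    """Toy analogue of a declarative writer-transaction decorator.

    Opens a session on the context if none is active, so nested
    decorated calls share the same transaction scope automatically.
    """
    @functools.wraps(func)
    def wrapper(context, *args, **kwargs):
        outermost = context.session is None
        if outermost:
            context.session = _Session()
        try:
            return func(context, *args, **kwargs)
        finally:
            if outermost:
                # "Commit": record the batch and close the scope.
                context.finished.append(context.session.queries)
                context.session = None
    return wrapper

@writer
def snapshot_get(context, snap_id):
    context.session.queries.append(("get", snap_id))
    return snap_id

@writer
def snapshot_create(context, snap_id):
    context.session.queries.append(("insert", snap_id))
    # The nested call reuses the same session, so both statements land
    # in one transaction without passing the session around by hand.
    return snapshot_get(context, snap_id)
```

This mirrors the failure mode Roman describes: with the old imperative style, forgetting to pass the session would put the `get` in a separate transaction; here the decorator makes sharing the scope the default.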
Thank you. Got switched off. Okay, I believe that's it for the list of presentations that was given to me. So thanks, everybody, for coming. That's Nyan Cat. Thank you.