Hello. Can anybody hear me? Yes. Cool. Thank you. So, hi, I'm Rico, Rico Lin. I work at EasyStack. And you want to introduce yourself? Hi, I'm Zane Bitter. I'm a member of the Technical Committee for OpenStack and a long-time Heat developer. And I'm the Heat PTL, by the way. So today we would like to share some improvements in OpenStack integration for application developers. We're going to first define what an application developer means for us, and after this session we hope everybody gets a clear view of what new-generation technology OpenStack has for you, what is not there for you yet, and what crazy ideas we have. So, what does an application mean? For OpenStack, everything built on top of OpenStack in some way should be considered an application. Nowadays you hear about IoT, about 5G networks; you hear a lot of awesome, crazy stuff in the keynotes, and those are all applications for us. So who is an application developer? Whoever writes code for their own product or service, and keeps getting pinged every time the service crashes, is by definition a developer. You have to build your application infrastructure on top of OpenStack infrastructure. That means you start to think about what OpenStack, or in the new world open infrastructure, can do for you and for your application. Even when you just want a database service, you don't want it to crash, you don't want it to be unstable. You want it to be well maintained over time, and you want to be able to keep innovating on the application, moving from version 1 to version 2.
And you don't want to get bothered every time your application crashes because something beneath it, the infrastructure, has crashed. So, Rico, would you consider something like Kubernetes an application? In a certain way, no. But, you see, Kubernetes is one of the ways you build your applications, so for us that part is considered part of the application. I think it depends where you look from, right? Because if you're at the application level looking down, Kubernetes looks like part of the infrastructure. It is. But if you're at the OpenStack level looking up, then it looks like part of the application. Indeed. But you see, as OpenStack developers we have to define what is OpenStack and what is application. Kubernetes is something built on top of it, and sometimes Kubernetes goes beneath the OpenStack infrastructure. But we consider Kubernetes part of the application, in the sense that it is something you require to build your application. Why? Because the application we mentioned might be a MySQL server running in a container, inside a pod. That's why we might consider Kubernetes part of the application; for me, personally, treating it as part of the application is fine. So, spoiler alert: a lot of the things we're going to talk about today are equally applicable to running Kubernetes on top of OpenStack. Even purely the Kubernetes layer dealing with OpenStack requires some of the same things as applications running directly on it. Exactly. If you look at the new user survey, Kubernetes is, wow. I don't know exactly how many people are using Kubernetes for their applications, but you can see from the percentages that a lot of services built on top of OpenStack are using Kubernetes. So let's talk about where things need to be improved, at a higher level of concept.
One is cross-community integration. What do we mean? We just gave a very good example: Kubernetes is another community, as huge as OpenStack. How can we integrate the two communities' efforts to meet your daily requirements, to achieve the best performance and quality for your service? As an application developer, you really don't care about the infrastructure details, but you have no choice about integrating with it. And our responsibility as infrastructure developers is to help you integrate OpenStack and Kubernetes without just telling you to go to the Kubernetes community to get one thing, or go to the OpenStack community to get another, and arguing about whose responsibility it is. That's not the kind of thing we're going to tell you. The other thing is cross-project integration. If you have been in OpenStack for a long time, or you have tried to build your product on top of OpenStack, every time you have an issue you have to feed back to the OpenStack community, and the scenario is almost always cross-project. Sometimes you have a good experience getting your feature into the community, but sometimes not. You might want a feature in Nova, but it happens not to be a Nova-only feature; it requires cross-community work. That's why we think cross-project improvement is important, and it still needs improving today, because from time to time, as upstream developers, we hear from a new user, maybe a new developer or a new operator, complaining that they had a very brilliant idea but found it very painful to get in touch with the community. But that is a brilliant idea we need to hear, right? And on the other hand, there are very brilliant operators.
They have a very small architecture, but they just don't know what is already there, what has already been implemented cross-project or is on the way, because who can have the knowledge to know that many projects? That is something we think definitely needs to be improved. The third thing is cross-site and cross-platform integration. You might have a small OpenStack running, though usually not, because OpenStack users are usually heavy users, big clouds or telecom clouds, or you might want your own data center. So with multi-region, multi-site: if I have two OpenStacks, what can I do? Do I, as an application developer on top of this infrastructure, just do everything by myself? Then you become an infrastructure developer, you become us, which is not what you need. Thinking about cross-site and cross-platform integration in the community today, we would say it's still on the way, in certain respects. What do you think? The cross-site and cross-platform ability of the entire OpenStack community is still under development. Yeah, it's a mixed bag, I think you would say. Some parts are there and some parts are still on the way. Exactly. Sometimes you need a feature, you look into it and it's there; and sometimes you have cross-site, cross-platform OpenStack running, you get new requirements, and this time the new feature is not there, which is something we definitely think needs to be improved. And community improvement is about the example we just gave you.
I think a lot of people here might have this experience, since you might be a real application developer: you have engaged with upstream, but that experience, as we mentioned, in some cases might not be that good, and in certain cases he or she feels bad about trying to join the community because they got turned off. As a community, how can we help improve that kind of experience? Unfortunately, we are still trying to find more ways to improve the community, so that's definitely on the list of things that need to be improved. And finally, there's upgrade. Upgrade is a well-known issue, right? You need to upgrade from one version to another, you need rolling upgrades the whole way, you need to upgrade your applications from certain versions. The infrastructure upgrade story, from my perspective, is getting solid, but you need to think about the application: it's not just about upgrading your infrastructure, it's about moving your infrastructure to the new version with your application still running, all the way through, with nothing broken. That is actually a heavy discussion in the entire community. So those are the items we think need to be improved, and those are things we have actually spent a lot of time trying to help with, and that's why we're having this session: it's about how we can help. So, are we there yet? Are we there yet? Are we there yet? I don't know. But how about now? Well, if you're a father you'll know this kind of experience. Annoying. Not yet, but we're getting somewhere; we've started working on something, right, Papa? How about now? Huh?
Yeah, don't worry, my little one. So then, let's start. At the top left of each slide we'll show which of the areas we just discussed it belongs to. So this is one that is actually ready and available now, in Queens: application credentials. The problem this is solving: you have an application running in OpenStack and you need it to call OpenStack APIs, and the very last thing you want to do is give it your password, which is not only the password to your OpenStack but probably the password to your email and everything else, because your Keystone is backed by LDAP or Active Directory or something, and you really, really don't want to give your password to a virtual machine running in the cloud. Application credentials are the first step in solving that problem. You can generate a credential which is tied to your user account in Keystone, so by authenticating with this application credential the application can do anything that you can do, but only in OpenStack, because you're not giving it your password. There's probably a lot more to be said about exactly how this works, but the documentation is available online, and once again, this is available right now in Queens, so you can go try it out, and I highly recommend it. If you're running Kubernetes on top of your OpenStack and its cloud provider needs credentials, I would recommend using application credentials. The alternative would be to find your IT department and get them to create a service account in your Active Directory or LDAP back end so that you can run an application. Clouds shouldn't be about that: you shouldn't have to call up the IT department to get permission to deploy an application. This is something that can help short-circuit that and give your applications a more cloudy experience. For us, in the new things we design, we already take this into account; we're already doing some things with application credentials, which we'll mention later. And this is the next step for application credentials. As I said, right now an application credential that you create can do anything that your user in Keystone can do, subject to what roles you give it, so you can lock it down to a smaller set of roles. But what you'd really like is to give it permission to do only the things in OpenStack that you know your application needs to do, so that if you lose control of the credential, because somebody hacks your VM or whatever, you don't give away access to everything you can do in OpenStack. You only say: this application needs to do this thing, this thing and this thing, and those are the only things that can be done by someone obtaining the credential. This does not exist yet, by the way. I was talking to Colleen yesterday, who wrote the application credential support in Keystone in Queens, and she's going to be working on this in Stein, I guess; I don't know if it will land in Stein or Train, but it is on the way, and it's the next step. You'll be able to express exactly which APIs, right down to which data within the API, so you'll be able to grant access to a specific server, or a specific container in Swift, or something like that, and make sure it's really locked down to only the things your application needs to do. And I remember, years ago, there was a forum session in OpenStack where all the best developers were talking about the most important thing to improve for application developers, and the one thing that was singled out as a brilliant idea was exactly this: application credentials. Right?
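As a concrete sketch of how an application consumes one of these credentials today: you create it with the openstack client, then point keystoneauth/openstacksdk at the resulting ID and secret from clouds.yaml. All IDs, URLs and secrets below are made-up placeholders.

```yaml
# Created beforehand with something like:
#   openstack application credential create my-app
# which prints an ID and a secret (the secret is only shown once).
clouds:
  my-app:
    auth_type: v3applicationcredential
    auth:
      auth_url: https://keystone.example.com:5000/v3   # placeholder
      application_credential_id: "21dced0fd20347869b93710d2b98aae0"  # placeholder
      application_credential_secret: "example-secret"  # placeholder
    region_name: RegionOne
```

The application then authenticates with `OS_CLOUD=my-app` and never sees your LDAP or Active Directory password, and deleting the application credential revokes only that one application's access.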
I don't remember that forum, but if only the most brilliant developers were there, it would have been something else. Don't worry about this guy. Check the record. Okay, so the next step that we'd love to take after this, and it's probably not going to happen until we have those capabilities on application credentials, is to have Nova actually provision credentials for you. So when you spin up a server you could say: the application running on this Nova server needs to be able to access these APIs, and then Nova will somehow provide credentials to the server. The reason we want Nova involved here is that it's pretty much the only thing that has a way to actually change the credentials. If you put some credentials on your server, eventually they are probably going to get compromised; you should probably plan on that. So what you want to do is rotate those credentials regularly, so that any compromise is limited in time as well as limited in scope, because the credential is locked down to the APIs that server needs. By getting Nova involved, it will be able to automatically rotate those credentials, so your application will always be able to get the latest ones that work, and if it loses them, that's kind of okay, because they were going away soon anyway. So that's the next step we'd like to see happen after capabilities are added to application credentials. So this is still on the plan, right?
Yes, it's forward-looking, you know: past performance may not reflect future results. And if you really like the feature, you definitely need to reply and give feedback to the upstream community to say you like the idea. We're trying to help accelerate this kind of plan, but it takes users, in this case application developers, to actually tell upstream what you need, which from our perspective is very good stuff that should have been done a long time ago. So, the other thing you can use right now, in lieu of fully locked-down application credentials with capabilities, is pre-signed URLs. The problem with these is they only exist where some project has specifically implemented them: Swift is one example, with its temp URL middleware, and Zaqar is another. Where a project has implemented this, it will let you generate a URL which encodes the things you're allowed to do. For Zaqar, for example, you might have a URL that can only receive messages from a queue: it can't put any messages into the queue, but it can receive messages from it. So it will be a signed URL with the action, GET only, and the name of the queue, and it's also signed with an expiration date, so you can make URLs expire automatically, so that they're limited in time if that's what you want. This is a URL you can give to your application; it doesn't need a Keystone token then, because it authenticates with the signature on the URL. This is a good interim solution until we get the full-on application credentials; this is probably how I would recommend doing things. But again, it only works where specific services have implemented it for specific operations. In the future we'll have a much more generalized framework where we can do this for everything, but right now, if this does what you need, that's what I'd recommend. So you're saying this feature is already ready, there, in the OpenStack community? Yes, again, in
the projects where it's been implemented. I think it's a very cool feature, and as Zane mentioned, it's actually very interesting. We do a lot of things in Heat, and for us it's about how we can best protect the user, your information and your credentials, and pre-signed URLs are something you should consider using even outside internal OpenStack services. Okay, so this is a fun one. Moving on from the low-level things available to applications: we put together this demo a couple of cycles back, in Ocata, let's say Ocata. This is basically integrating a whole bunch of OpenStack services to do something you probably all want, which is a self-healing service: to actually notice when a server has died and replace it. It takes advantage of the fact that the Heat template knows how to create the server in the first place, and therefore how to recreate it. And as you can see on this slide, some of these bits are pluggable: you can have different alarms, different signals. I think the next slide has things plugged into these boxes, am I right? You mean the details?
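The temp URL scheme Zane just mentioned can be sketched in a few lines of Python. This follows the HMAC-SHA1 recipe from Swift's tempurl middleware documentation; the account path and signing key below are made-up placeholders, and a real deployment uses the key set on your Swift account.

```python
# Sketch of Swift-style pre-signed (temp) URLs, assuming the documented
# tempurl signature format. Path and key here are placeholders.
import hmac
import time
from hashlib import sha1

def make_temp_url(path, key, method="GET", ttl=3600, now=None):
    """Return a pre-signed, Swift-style URL path valid for `ttl` seconds."""
    expires = int(time.time() if now is None else now) + ttl
    # The signature covers the HTTP method, the expiry time and the object
    # path, so the resulting URL works for exactly one operation on exactly
    # one object, and stops working once it expires.
    body = "{}\n{}\n{}".format(method, expires, path)
    sig = hmac.new(key, body.encode(), sha1).hexdigest()
    return "{}?temp_url_sig={}&temp_url_expires={}".format(path, sig, expires)

# A download-only link to one object, valid for ten minutes:
print(make_temp_url("/v1/AUTH_demo/container/object", b"my-secret-key", ttl=600))
```

The receiving middleware recomputes the same HMAC from its stored key and rejects the request if the signature or expiry doesn't match, which is what makes the URL safe to hand to an application without any Keystone token.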
Yeah, I might not have put that in. Oh yes, here it is. So basically, how this demo works is we're combining Heat, Zaqar, Aodh and Mistral. Aodh is doing the alarming: it's watching the oslo.messaging bus for Nova to say the server has been deleted, or gone to an error state, or that something has happened to it. It puts the alarms from that onto a Zaqar queue, and Zaqar uses a subscription to post those messages to Mistral, which triggers Mistral to call back to Heat with the workflow definition on the right there, which marks that resource unhealthy in Heat. So it's telling Heat: hey, I got a message from an external thing saying this server isn't good anymore; you need to replace it next time you're doing something. So it marks it unhealthy, and then it initiates a stack update, which causes Heat to go through, and when it gets to that resource it says: okay, this one's not good, it's been marked unhealthy, I'm going to create a replacement. The template hooks everything back up, so the new server is connected to the alarms and all that kind of thing, and then it deletes the original that was unhealthy. You can do this multiple times, and thanks to the convergence architecture in Heat you can now do multiple stack updates in parallel. So you mean, if I'm an application developer and I have other things running on top of my Nova instance, is it possible to monitor those application services and trigger the alarm back into Aodh here? Wow. I mean, you could. If you have your own application monitoring, you can just plug that in in place of Aodh here: you could post those straight to the Zaqar queue, or honestly you could post them straight to Mistral. The advantage of the Zaqar queue is that you can use a pre-signed URL, give that to your application monitoring and have it hit that, and then it will authenticate to Mistral with a trust, which you can set up at the time you create
the subscription. So you can have your own application-level stuff triggering this as well. Cool. This happens to be one of the examples where we're trying to use pre-signed URLs for protection, and that's why I drew the diagram like this: whatever your service is, it can be an application, it can be a low-level instance, it can be as high-level as you like; you decide how you want to handle a signal and how you want to trigger a fix job, and those features are already there in OpenStack. So you can use OpenStack services for all of them, or replace any one of them with your own service; just make sure they can communicate with each other through this kind of workflow. Yeah, and going back to your other point, Rico, about integration between projects: in trying to implement this, I found a whole bunch of really subtle bugs. Each project did everything it should, but when you come to build this feedback loop there were little mismatches, so I went through and fixed a bunch of those, and this now works very smoothly, starting, as we said, in Ocata. There may be other things like that in OpenStack, and we're kind of relying on you to tell us what they are, but those are the kinds of things we can improve and fix to make the application development experience better. Yeah, cool. And the vision: oh, this just merged today, which is exciting. One of the things I've been working on in the Technical Committee is putting together a vision for what we think an OpenStack cloud should look like in the future. We're not limiting ourselves to what OpenStack does right now; we're trying to set a goal for where we're headed and make sure everyone in the community is going in the same direction. One of the things we tried to do in this exercise is actually to define what a cloud is, because not everybody has the same definition. I mean, everybody's different; I have a different definition of
what you guys on the TC should do. So, the two things we said that clouds must be able to do. Number one, which we all know: clouds must be self-service, right? It goes back to what I said before: you shouldn't have to call up your IT department and ask for permission to deploy an application, whether that's asking for a service account in LDAP or anything else. Clouds are self-service, and that means they need some other features, like multi-tenancy, to make it safe to be self-service. That's the first thing, and the one that everyone kind of agrees on. But the second thing is that clouds have to be a resource for the application, not just for the user. Your applications are running 24/7; you want them to be able to access and use the cloud when you're not sitting in front of your desk clicking buttons. That feeds into a lot of the things we've been talking about in terms of requirements: application credentials, locking down the capabilities on application credentials, having a secure way, basically, for applications to authenticate to the cloud. So this is something that is becoming more visible in the community, I hope, through initiatives like this, and I think we're starting to get on the same page in terms of figuring out what we need to do in the future to make sure that OpenStack is a first-class platform for developing applications on. And that's exactly why we put it here: we want to send the message that OpenStack is really trying to help applications get a better infrastructure, not just to do things downstream. That's very cool stuff. And our next step is to propose the SIG and working group idea, which is still on the plan; we have forum sessions this week as well. What we're thinking about is this: you are a user or operator on top of us, but when you want to get a feature into OpenStack, you don't always get good feedback or get to the right person, so
how about we have a follow-up channel for you: you can put your story, your blog, your report in one place, and we have people with an interest in helping you, like a SIG, like a working group, who are there to make sure your requirements get to the right person. They can help direct those tasks into the different projects. People in the SIG might be a project PTL or a very experienced developer; they have the knowledge about which request should go to which project, which you don't necessarily have, and sometimes don't want to have, because you just want to build your application. That's why this idea came up. And you can absolutely do this today: we have SIGs. If you have a self-healing scenario, or a Kubernetes scenario, find those SIGs and working groups and they're going to help you. What we want to do here is push the TC, the UC and everybody in the community onto the same page, to actually define the flow and the guidelines to help you. And the next one, I think, is very exciting stuff as well, which already has patches in Heat: multi-cloud orchestration. You can run Heat in one of your OpenStack clouds and ask it to create resources in another cloud, using exactly the same kind of request you use to create a stack locally. If you know Heat, it's structured like a nested stack, a stack containing another stack, but in this case the child stack is actually a stack in another OpenStack cloud. That means you can control all of your OpenStack clouds from a single cloud. And what is the connection? We use application credentials to make sure it's secure, and in the implementation we propose we also use Barbican to make sure that whatever authentication information you store in OpenStack is actually securely stored. So I like how you worked application credentials in there. You can use application credentials with it; the amazing part is that you can use whatever authentication Keystone currently supports, it's not pinned at all. You just create a stack as usual, but this time you give it a credential secret. The development is already there upstream; it's not merged yet, but we're looking forward to getting it merged. It'll merge as soon as I review your patch. Now there's Zane telling me I'm behind on my bribes. Okay, so there's a video record of that for you; I will be very happy. And the next thing we're talking about is autoscaling. We have a forum session today as well to talk about how we can make better autoscaling improvements for you. By autoscaling we're not just talking about Heat: Kubernetes has an autoscaler as well, a project called the Cluster Autoscaler, and it supports Amazon, it supports Google Cloud, but it doesn't support OpenStack yet, which is exactly a missing piece we're trying to work on. We have a dev page there already, but what's needed is for us to keep pushing ourselves to implement it. We're also going to try to support more: in this case we're talking about resource groups and scaling, and we're also considering, on the Kubernetes side, when a pod needs to scale, talking directly to Magnum and having Magnum do the scaling job. But anyway, it's still on the plan, sorry about that; we always wish we could tell you it's ready, but usually when we mention something it's not. Keep mentioning it, though, and we will get there, I hope. We're talking about improvements to Senlin and to Heat, to make sure the user gets a single entry point and the correct documentation and the correct way to do it. That's also something we promised: this week we're going to have documents for you. And the other thing we're trying to promise as
well, but let's see: we're going to trigger a discussion. If you're interested, or if you have ideas, join us on the mailing list, and we will figure out what is best for people on top of OpenStack. You have one minute to talk about Heat and Kubernetes. It's like, we've got one minute; you're still going to stare at this diagram, but we're going to figure out what is the best mascot for the new SIG, don't worry about it. So we've kind of run out of time, but let me briefly describe some of the features. One of them is to have Heat control Kubernetes: in Heat you can use heat resources to define your own custom resource types, and Kubernetes is definitely one of them. How about defining all of your infrastructure in Heat, all the way up to Kubernetes? You don't have to create Kubernetes with Nova, jump out, and use some other API to create your pods. We are considering this idea, but it's just a plan yet. Once your Barbican code for multi-cloud merges, it's possible; we're on the way. And so is Ansible as well, right? I could see us doing the same thing for AWX, which is the upstream project for Ansible Tower. So these new kinds of tools can be merged into the current OpenStack infrastructure; we just have to figure out how. That's what we're trying to do, what we're already doing, and we are still on the way. So we didn't leave you any time for questions, but we will stick around after this. And people who are watching on the web, you're out of luck. Okay, thank you. Thanks.
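As a footnote to the multi-cloud orchestration discussion earlier: the proposed usage is roughly an ordinary Heat nested stack whose context points at another cloud, with the authentication details kept in a Barbican secret. This is only a sketch of the design as described in the talk; the property name `credential_secret_id` comes from the in-review patches and could change before merging, and the IDs, file names and region here are placeholders.

```yaml
heat_template_version: rocky

resources:
  remote_stack:
    type: OS::Heat::Stack
    properties:
      # Auth details for the second OpenStack cloud (for example an
      # application credential) live in a Barbican secret, not in the
      # template itself.
      context:
        credential_secret_id: 00000000-0000-0000-0000-000000000000  # placeholder
        region_name: RegionOne
      # The child stack that Heat will create in the remote cloud.
      template: {get_file: child.yaml}
```

Deleting the parent stack would then clean up the remote child stack as well, which is what lets a single cloud act as the control point for several.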