Testing. Hi, everybody. Hello, hello. How are you? You're on? I'm on. You want to go first? Sure. All right. Hello, I'm Scott Moser, and this is... Josh Harlow. Josh Harlow. He used to put Josh "the bomb" Harlow in the name, but he decided to tone it down a bit. Yeah, I didn't want to piss off my mom. So this is "Evil user, super user: how to launch instances to do your bidding." So, for people that don't know what cloud-init is: we'll go into a little more depth, but it's available on all the operating systems that you've used — various things are in progress, Windows is in progress — and it's something you've always been using, for most of the time you've launched instances in OpenStack or EC2 or elsewhere. So we'll go quickly and give an idea of how instance initialization works. After you launch your instance — I don't know if any of you know or care how the underpinnings go. Lots of people here are obviously involved in OpenStack, so they have an idea, but maybe others do not. Let me ask a question: how many people have launched an instance via EC2 or OpenStack? Everybody, right? Good. So you've seen what we're talking about. So here's nova boot. Essentially, to launch an instance on OpenStack, you're going to pass it your key name for what key you want to be able to get in with, the server name, the image, the flavor, and then this thing called user data. And this is very similar across most clouds — private clouds, OpenStack, Azure, or Amazon. They all basically have similar steps when you go to launch an instance: you select the image, choose a flavor and a type, maybe attach some networks, say who can get in, and then click go.
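That nova boot invocation looks roughly like this — a sketch only, where the server name, image, flavor, and key name are placeholders for whatever exists in your cloud:

```shell
# Rough shape of the command described above; every name here is a
# placeholder, and user-data.txt is whatever initialization payload
# you want handed to the instance.
nova boot my-server \
    --image ubuntu-14.04 \
    --flavor m1.small \
    --key-name my-keypair \
    --user-data ./user-data.txt
```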
So, under the covers on OpenStack, what happens is: OpenStack provisions the instance, and then uses either config drive or the metadata service to provide some information to it. Right? In OpenStack those are the two mechanisms. Config drive is where it actually attaches a disk to the VM with, basically, metadata, user data, and vendor data on it, and cloud-init — or some other instance initialization software — will load that up and respect it. The other path is through the metadata service, which lives at a well-known address; cloud-init or some other software basically knows to DHCP on eth0, or somehow gets networking up, and then addresses that. Let me go back — let's show the metadata, just as a thing. All right, let's see. This is one example that I put up from a real VM. Let's see if we can get it to load. Okay, got it. So essentially, these are the kinds of things that are in the metadata: OpenStack provides the instance with information on the availability zone, what your name is, some network configuration, things like that, and the instance ID. And cloud-init, or another instance initialization service, is going to use those bits of information. For example, there's some special stuff in here — the hostname, lots of information about your servers — plus other things you can put in for vendor-specific information, too, if you want. So let's go back here. I'll show you the config drive, which is pretty similar. If you can think of a web service having a file system layout, that's basically what it becomes. And in OpenStack, the config drive and the metadata service basically render the same data, so they're consistent.
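For a feel of what that metadata looks like, here's a small sketch. The JSON fields are the usual ones from OpenStack's `meta_data.json`, but every value below is invented for illustration:

```python
import json

# Roughly what GET http://169.254.169.254/openstack/latest/meta_data.json
# returns; all values here are made up for illustration.
sample = json.loads("""
{
  "uuid": "83679162-1378-4288-a2d4-70e13ec132aa",
  "hostname": "my-server.novalocal",
  "availability_zone": "nova",
  "launch_index": 0,
  "public_keys": {"mykey": "ssh-rsa AAAA... user@host"}
}
""")

# cloud-init reads fields like these to set the hostname, install
# SSH keys, and decide whether this is a brand-new instance.
print(sample["hostname"], sample["uuid"])
```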
Now, clearly, the config drive is a one-shot thing, where the metadata service could be more dynamic in the future — and in other clouds it is more dynamic; Amazon populates it with some new information. And you can't write to it. Sure, go ahead with your question. Oh, yeah — okay, we'll get rid of that. That was from Sochi. I'm not sure why we were discussing it; that was an earlier topic today. Thank you. You can look it up later if you want to know: Sochi, toilet security. That's all you need to know. Google it. Or use Yahoo. Please, use Yahoo. So, generally speaking, that's how the instance gets access to data about itself, in one of those two ways on OpenStack. And it varies in different clouds: sometimes you end up going through a serial device; on Azure, you mount a CD-ROM and take some data off of it. One way or another, the basic concept is there. And Ironic, I think, has a different way — there's a talk on that later this week, too. It's another way to get data there; I think it may be a partition or something like that. But there are various ways of getting some kind of metadata or user data onto the instance so something else can take advantage of it. As for what it's used for — why you'd want this kind of metadata or user data available at all is a common question. Usually you want your instance to perform some kind of boot-time initialization: not necessarily using cloud-init to do everything, but using standard software you already have, like Puppet or Chef or Ansible or SaltStack — there are a bunch of others now. Or you want to allow yourself to SSH in and run various commands. Or you want to install some packages — maybe you need to get Chef installed before you start using Chef.
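The simplest user data is just a script. As a sketch — the package name and run-list below are invented, and on an apt-based distro you'd swap in apt-get — bootstrapping Chef might look like:

```shell
#!/bin/bash
# User-data script: cloud-init runs anything that starts with "#!"
# once, on the instance's first boot. Everything below is
# illustrative; the package name and run list are placeholders.
yum install -y chef
chef-client --runlist 'role[webserver]'
```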
So you need to get Chef connected and then start running Chef. Yahoo — and probably other companies — have stuff that sets up various other packages, at least for the Yahoo use case: we install users via another package, and there are some Yahoo-specific things people can run. And that's all extensible via cloud-init: you can pass it in via user data, or have it happen automatically via a pre-configured cloud-init configuration. So "how does all this happen" is a good question. If you've ever messed around with the user data section when launching an instance — in Horizon there's usually a tab or a box that's not necessarily known for what the heck it is — at our company, at least, we try to make it pretty declarative what that box is and how you can use it. Anyway, you can do various things, actually a lot of things. Some people use it for just dropping shell scripts in: instead of having cloud-init do a bunch of package management, they just want to run rpm or yum themselves, or whatever. So the first, simple example allows for pretty powerful stuff. But the other thing that's natively supported is that cloud-init tries to provide added functionality on top, using various Python modules that it ships, and the standard format those are configured in is YAML. So, for example, this one here will activate a cloud-init module that tries to install packages in a distro-agnostic way. With the top example, when you're running bash, you basically have to do all the distro-specific checks yourself — look for the Red Hat release file and then do this or that.
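That distro-agnostic package module is driven by a cloud-config document — YAML with a `#cloud-config` header. A minimal sketch (the package names are just examples):

```yaml
#cloud-config
# Instead of a bash script that branches on the distro, let
# cloud-init's package module pick apt or yum for you.
package_update: true
packages:
  - git
  - htop
```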
So cloud-init tries to provide modules that do that automatically for you, along with some other modules that do various neat things, like mounting drives you may have attached — and there are a couple more. Here's how we're using it at Yahoo, as a good example, too. A lot of people have made git repos, or standard things they can share with their team, to configure — well, mostly Chef at Yahoo. They can configure Chef, install packages, pass around how to get their user set up; maybe the whole team wants to be on a certain package version and have it pre-installed, so they can all share this same cloud-config — the YAML file, or bash script, or whatever. They can put it somewhere where it can be version controlled in a standard manner, and that leaves it out of the image. The other way they could do it is to make a custom image — snapshot it — which, at least at Yahoo, is not what we've been trying to encourage, because then the rest of the whole system is frozen, including all the packages. You lose all the security updates; basically the whole thing is frozen in time. Versus having this user-data file, which you can version, you can look at the history, and you're not really tying yourself to any specific image too tightly — as long as you stay away from things like bash scripts that tie you to an image, or at least to a version of a distribution. The other thing is that it makes a clear boundary: you can decouple the image from how it's being run, how it's starting up, and what it's installing.

Yeah, so the other thing, as I mentioned before: it's cross-platform. It's not just Red Hat or Ubuntu, as was shown before; various images of different operating system types all understand the same YAML format, and they can basically run the same commands on additional distributions without any difficulty, which is nice. So, let's see here — some of the stages. There are various stages cloud-init goes through to pull this off, and there are basically three. One is the concept of a data source in cloud-init. The metadata that was mentioned before is one source of information; it's an OpenStack- and EC2-specific kind of concept, and other ones, such as CoreOS, get it from different places. You may get it from a config drive, from a web service, from a serial port; Azure has a different one. So there are all these various places you can get it from. This is called the initialization phase, and the data source is where that information is coming from. Basically, cloud-init comes up and starts looking around: there are a number of data sources configured in cloud-init, and it goes through them looking for one that finds the information it's looking for. If you look at /etc/cloud/cloud.cfg on your instance, you can see what the order is and what it's looking for — there's a datasource list section that says EC2, OpenStack, and so on. It will try to find the first one there and use that as the source of information. They're meant to get out early when they don't find something. If you've used it a lot, you may well have seen the Amazon one polling around; that's kind of a legacy thing. Oftentimes, early on, Amazon Ubuntu instances would boot faster than the Amazon metadata service would be there, so you just kind of had to poll and ask again — do you have it now, do you have it now. That's legacy; I don't think it's necessarily a problem anymore, but it pops its head up. So, after it's found the data source — the data source usually contains all of your information — it's going to consume that and write it to disk. If you look under /var/lib/cloud in your instance, I
think that's the standard location where all this is written — you'll see a bunch of the stuff that it has saved. It will basically write that out, and various things will happen to the user data; I think there's a section on that in the next slide. What happens to the user data is that it may get bigger, it may change itself by downloading additional user data, and it may be composed of various formats that affect how it gets used later. So, after that big initialization phase, the next two stages are mostly just module running. After all that data has been consumed and written somewhere, cloud-init runs the various modules listed in your configuration. If you look at /etc/cloud/cloud.cfg, you'll see all the modules that are potentially going to run — they don't all have to run; based on configuration you pass in, they may be turned off and not run at all. So here's a little visual diagram of the stages. It may be a little hard to read, but these are some of the modules included by default. In the initialization one, for example — you may not be able to see it, but there's SSH keys down here, package installing up at the top, and some basic ones like writing files and doing first-boot initialization: stuff you may not want to happen at the later stages because of ordering dependencies or whatever. In the last stage you'll see one that's pretty useful: a phone-home one, which is neat — not a lot of people know about it. Once your system is done booting, you can call out to some other web service and say, I'm done, and here's my information. That allows for some pretty powerful concepts that people are starting to take advantage of, at least at Yahoo: once they know that cloud-init is done, maybe Chef is done at that point too — there's a Chef module that's not listed in here, but there would be one — and then they

can say, okay, now I can start the second stage of my stuff, and continue on doing, I don't know, more advanced things — maybe start my performance analysis on the VM, or start serving traffic, or whatever. So that's a neat one. One of the things I usually add is that cloud-init is actually much more powerful than I think many people realize. It has some concepts that are somewhat well documented, somewhat not, but they're there, and if you know about them you can do some really neat stuff. For example, you can combine — I think maybe Scott knows the historical reason, but there was a reason we wanted this archive format — multiple cloud-config files together into one file. I think a size limit was originally part of the reason. Yeah — Amazon has a 16k blob associated with the instance. On OpenStack, there is a limit on the size too — I think it's 65k; it's whatever the MySQL field where this is all stored allows. And it can be gzipped — it can be compressed, and cloud-init will transparently uncompress it. But yeah, there's a limit. And there's the include support, so you can say: get this from your GitHub account, or these other URLs — go get more information. It allows you to basically create as much or as little as you want, and if you can't fit it all in your user data because of your cloud provider's limitations, you can put it somewhere else and then fetch it. Some of this stuff I already talked about: there are these modular plugins — there are like 30 of them now, I think — and they do various things. You combine those in your configuration: if you want to run Chef, put it in; if you don't want to run Chef, take it out. There's another concept that's even more advanced — I don't know how many people know it — called part handlers, where a part is a piece of the multipart
message that's actually used as input. You can specify your own Python code that gets used to process that message, and do various things with it. It's pretty advanced — I don't know how many people actually use it; some know about it — and if you have any interesting use cases for it, I'd like to hear them afterwards, because I've always wondered. Maybe Scott knows. So, since the input to cloud-init is multiple parts, cloud-init will just ignore things it doesn't understand — and if it comes to a part that is declared as a part handler, it'll load it as Python and call a method that registers: I handle these part types. Then, subsequently, as it goes through the list, it will call you and say, hey, handle this part. So it allows you to shove code into the instance early in boot, and then act like it was there originally. If you know people that have been using it in interesting ways, we can talk about it afterwards; we'll skip that for now.

So, some of the formats it understands. There's the gzipped one by default, so you can save space: if you only have 16k and you need 17k of data, you can compress it down and hopefully fit it. The multipart MIME one is sort of interesting — if you've ever looked at email, it's that kind of format. User scripts, which are basically bash scripts and such, which I showed an example of previously. The include URLs: you can basically paste a bunch of URLs after the special #include line or whatever, and it will download those and combine them together as different pieces for you — that's even more space-saving. Then you can, of course, just use the YAML. You can use upstart jobs — I haven't ever tried that one, because we're using Red Hat. And then you can do a cloud-boothook, which is another one, and the part-handler one, which

Scott was talking about. So I brought a bunch of examples; some of these are from Yahoo. Maybe this one will be interesting: there's a big push for Chef at Yahoo, and I had to do some reworking of the Chef module so they could use it up to whatever standard they want — that's all in; I think it was released recently, or a little while ago. This is an example from one of them. Let's see — I had to modify it a little bit, not too much: take out some of the keys, obviously. You can see there's a Chef server at Yahoo, some names of some validation stuff; they set up some keys. I'm not 100% aware of all the Chef concepts, but these are the main ones, I think — the PEM keys and such — and it makes sure Chef is actually installed, so it has the Chef RPMs. So it's nothing too special, but this has been a pretty big thing the last six months at Yahoo: there's been a lot of Chef work going on, trying to move away from internal deployment tools to Chef. They've taken advantage of this; I think it's being used quite a bit on all the VMs that we run on OpenStack, and on bare metal after that. So that's pretty neat. There are some other ones — the multipart one you can look at later when these slides are online. Another one that's sort of neat: you can make it benchmark stuff. Say you wanted to fire up a VM and test how your VM is doing, or whatever kind of performance stuff — you can automatically put in, for example, a script that says: I need the curl package, I need this file to run — this will be the script that runs, with the executable permissions — and after it's done executing, it's going to turn the whole machine off, the VM off. And then — I think on some clouds; I think on Amazon, though I'm not 100% sure on this — they won't charge you for VMs that are powered off. So you're basically just running it as a performance thing and then turning it right off
and you're probably only spending however long the test runs for. You can basically gather the data you want, get the heck out of the VM, and delete it later. That's one use case from a co-worker who was doing that kind of analysis for internal VM comparisons or whatever. Most of the rest of them are pretty basic. So what else does it do? These are some of the things you can do with it — and remember, these are also sort of distro-agnostic. It's trying to provide an abstraction layer so you don't have to care if you're running on Ubuntu or Red Hat or CoreOS, or whatever — and FreeBSD now, too. It's trying to provide the abstraction over all of that. So you can set the hostname, it will set up your mount points, it will do distro package installations for you — that's the main one you're probably used to — and seeding an entropy source is one of the newer ones, so your VM doesn't end up with bad entropy, which it may have if you're not doing it correctly. Some other examples are online in Bazaar, and there are more examples on the cloud-init Read the Docs page. So, as was mentioned before, there's a whole bunch of data sources — this is the listing, and it's interesting to check out. The Google GCE one is recent; Azure, I think, was somewhat recent too, I forget; SmartOS was somewhat recent as well. This shows that cloud-init is being used in a lot of places — it's being used to provide a standard initialization format and usage for many different clouds, and I'm pretty proud that this list has been growing and growing. When I joined, maybe three years ago, it was OpenStack, maybe CloudStack, EC2, and I think MAAS too — and then Azure popped in, GCE popped in, Joyent popped in, and a lot of these have come from the cloud vendors themselves. So it indicates we're

getting acceptance: people are interested in making instances on their clouds work, so that's good — and we're not having to write all the code, so that's great. The NoCloud one is an interesting one. It's explicitly designed for you to be able to launch a VM directly with KVM or libvirt on your local system, feed it some data, have it do its thing, and be done — without setting up a fancy metadata service or a cloud or anything. You can launch an instance, do some stuff, tear it down, and be done with it. Question? Sure. So, by default, in something like an Ubuntu image, each of these is enabled in the config. You can disable them if you want, or say which explicit ones should be looked for; otherwise it basically goes through the list — with EC2 at the end, because of the polling — and walks down and looks. Most of them are declarative and safe, in that they can get out early. The ones that are on disk — NoCloud looks for an attached file system with a specific label — will just look through the disks, look at their file systems, and find that; if it's not there, it says, oh, not there. Most of them behave basically like that: they look for indications. A lot of them now try to avoid retrying, though. Let's see — some of them were looking at /dev/ttyS0; that's a device from the 1980s, and you can kind of get yourself into trouble poking around at it. So a lot of those now — that's the Joyent one; I think Joyent, and maybe one other, look at DMI information. When Joyent launches an instance, they expose it as a Joyent cloud there, so cloud-init looks, and if that's not present, it doesn't even bother going further. So they're intended to basically be safe. Does that answer your question?
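The datasource ordering just described is plain configuration. A sketch of what shrinking the list down might look like — the file name is arbitrary, and the three names match the ones mentioned here:

```yaml
# Dropped into /etc/cloud/cloud.cfg.d/ (the exact file name is up
# to you): only probe these datasources, in this order, instead of
# the full default list.
datasource_list: [ OpenStack, Ec2, NoCloud ]
```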
That's the goal, at least, for cloud-init in general: we want you to be able to take an image and just run it wherever there is a cloud — make one image that's going to work for you everywhere. The Ubuntu images are almost bit-for-bit identical across all of our builds, and the Fedora images are similar, I think — I don't know which data sources they enable; I haven't checked. For Yahoo, at least, I know we shrunk that list down, since we only run one type: I think it's OpenStack, EC2 — I forget why I left that one in — and then NoCloud at the end, just so if nothing really works, or something's really wrong, we have at least some fallback. But the idea is you don't have to build your own: cloud-init should be somewhat smart in figuring it out, and if you find issues, please open bugs. We want one image that works in all these places, so that's a serious issue.

So — that's a good question. I've talked to the Keystone folks; I had a spec open that still hasn't happened, but I think the recommendation is that you can use user data to do that, no problem, but you'd want a more native interaction where it's automatic with OpenStack. With the Chef issue at Yahoo we had that kind of question, and it's still ongoing. There's been a delegation service made that will fetch your keys, and the keys only last for, like, five minutes, so they expire after the Chef job is done. If people still pass keys via user data, they're told to basically rotate them pretty often, so if you expose yourself, you're only exposed until the next rotation. I think there's ongoing involvement with the Chef team, at least from my perspective, but for Keystone I'm not 100% sure. I know a similar question came up with Chef around PEM keys — those profile keys can

install whatever those things can install at that point. Amazon has a solution-ish for that — I don't know if you're aware: in their metadata service there's basically a set of keys that are for that instance, the user can delegate whatever power they want to those keys, and the keys rotate. That's something OpenStack could have and tie into the metadata service; it would be a useful thing. Maybe it's something we need to bring up with Keystone first — I think I've talked a little bit about it before, but I haven't followed up too much. Something we can do. So, yeah — as we were talking about here: all those different companies doing this that we mentioned before, some just running images from various places, like Rackspace or different ones, and OpenStack in general. Where is it useful? Pretty much everywhere — almost every image you've ever launched, it's been somewhat involved. The container stuff is an interesting question; maybe Scott can talk about that later. But here's where he starts. This is a bit of a reworking of cloud-init, and I'll let him get into more details, but I'll give an overview, a general introduction. There's some stuff we wanted to do with Windows — I don't know if they're in the room, but there are some Windows folks doing this thing called Cloudbase-Init, which is similar to cloud-init in a way, but handles Windows booting. We also wanted to do some more advanced stuff with integration when things are hot-plugged into your instance — say a network interface, or a volume — and we'd like cloud-init to be the thing that actually handles what should be done with that network once it's been plugged in or unplugged. Neutron, I think, supports that stuff; OpenStack sort of supports it; Amazon supports it. So there's a question: does the user basically have to go into that VM, or bare metal, or whatever, and mount the drive themselves? Do they have to configure the network again? There are some limitations in the
current cloud-init, which isn't ready for that, so we're hoping to address some of it in this new version we're calling cloud-init 2.0 — and Scott, who was not on Star Trek, will take it from here. So anyway, go ahead. Cloud-init 2.0 is really just starting in its development, and we're really hoping to address a lot of the issues that I've seen — basically the short-sightedness that accumulates when developing something over time — that I think we can now address more cleanly. One of the big things: I know for a lot of contributors, especially those that come from large companies, the previous license of GPLv3 was not necessarily the favorite of enterprise lawyers. So we are re-licensing under Apache — it's a dual license of Apache 2.0 and GPLv3 — and essentially, most of the time I've heard that Apache 2.0 is a good license, so it should be acceptable. If that's been annoying to you before, please reconsider, and reconsider contributing. Also, we are a StackForge project now: the code's hosted there, with downstream mirrors at GitHub, just like most of the OpenStack projects. We've got Gerrit reviews, and we'll have continuous integration set up — the Cloudbase people are setting some up, and Canonical also has some — and if third parties want to do voting feedback on reviews, we'll be open to that too. Gerrit gives us a nice workflow that you're probably already familiar with, that sort of thing. One of the things that was mentioned — all those various Fedora or Ubuntu images — we want more automated testing on cloud-init, so that we're not busting everybody else; having the ability to do that automatically would be pretty nice. And then, yeah — Cloudbase is the company that did Cloudbase-Init, and they're focused on the Windows support; then there's Canonical and Yahoo, and we've got some other people contributing, and hopefully we'll have more. Also, we've been struggling with documentation on cloud-init — I will absolutely accept that the doc is lacking, so we're

hoping to do a better job of that. It's gotten better. And these are the Pythons: the goal is to attempt to support RHEL 6, so Python 2.6 is the big thing there, and then 2.7 and 3.4, in a single codebase. And then Windows — Vista and going forward — and also FreeBSD. There are some other operating systems that have been suggested that we'll have, or would like to have, cloud-init support for too: I know some folks from IBM are interested in having AIX support, so we want to support that, and give people the ability to use cloud-init wherever we can. We'll do backwards compatibility unless it's just completely not sane — but I'm willing to be pushed on that too. I don't want people to have to know what's inside an image, as much as possible. So, as Josh suggested, cloud-init 0.7 has been very much about init, and many times people have asked about a persistent agent, or event-based responses, and I've always kind of said that cloud-init is the thing that gets you to something more intelligent. That's how a lot of people have used it: to tie into Chef or Puppet or Juju or some other management system that is doing further management. But there are a lot of things it would make sense for cloud-init to do after boot, and hot-plug of devices is the big one. If you're familiar with Amazon: when they attach a device, you get a hot-plugged NIC, and if you look in the metadata service there's information about what that NIC should look like — what it should be configured as, the MAC address, and then this IP address and these routes. Amazon Linux has some things that respond to that and configure your NIC automatically for you. We'd like cloud-init to have that sort of thing built in, so that any images using cloud-init on Amazon get that. And then there are some specs in review for
OpenStack that will do something similar: provide a more — gosh — config-oriented description of what your network should look like. Right now, if you're familiar with it, the information you get about networking configuration on OpenStack is a file formatted in /etc/network/interfaces style. Hopefully, going forward, that will be a more declarative network description; cloud-init will then consume it and render the network. There are a couple of specs for that — I've seen at least one just yesterday, and another one ongoing. And I guess Cinder has the same kind of question: how do you determine what the volume device that popped up should be? Maybe Manila has the same kind of question in the end. So we'd like cloud-init to be around for all sorts of things. Also, as a block device gets added, if the metadata service provides information on what should be done with it, cloud-init could then render a RAID array across a series of them, put a file system on it, and mount it for the user. So to the user, that's all magic and transparent, because clearly that's not stuff anybody really wants to care about. And then there are life-cycle events. Cloud-init has always had this idea around capture: it goes and looks for a new instance ID, and acts as a new instance if it finds that it's new. But a life-cycle event might come from the cloud provider that says, hey, I'm going to take you down — or the user requested you go down for capture, or go down for a file system sync — and cloud-init could fs-freeze the file system, or bring things down cleanly to make itself more ready for capture. And, yeah, we'll have ways that you can tie into those hooks. The Windows one will be interesting; I haven't mentioned it. Sure, question. Of hooking it?
Yeah, so either an event, maybe something coming up through that. Shutdown is easy, an ACPI event or something like that, but the purpose of that is different: if the metadata service did provide me, after I got an ACPI event, something that said, hey, you're going down for capture, or why you're going down. Or also some way that you could run and say, hey cloud-init, you're going down one way or another, even if the user had to poke it, to have an interface to do that; and then going forward things can be built on top of that. I think we want to try not to make it a polling thing, so you don't have to poke it too much, but I don't know how Amazon is pulling it off; hopefully something that works in a scalable and not user-centric manner. Yeah, cloud-init is not just going to sit there and poll a metadata service unless that's necessary. Right. Cool. Yeah, I think we already covered both of these things, so similar kinds of things. Yeah, so I guess the one thing: networking has been a pain for cloud-init, largely because cloud-init has kind of pretended that networking didn't exist, that the operating system you got was already configured to do the right thing on networking, and usually that meant DHCP on eth0. But it's becoming the standard case that networking is more complex than that. Right. So going forward we'll have hooks into the operating system, into systemd or upstart or SysV init, that basically ensure cloud-init is able to find a network configuration source prior to networking coming up, so we don't have to deal with "networking came up, now I've got to take it down and bring it back up"; we'll block and set all those things correctly. We mentioned the block device one, so we can move past that one. Oh yeah, I guess we're at the question stage. So yeah, sure, go ahead in the front.
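To illustrate the ordering described here (consult datasources that need no networking, like config drive, before falling back to a network-dependent path like DHCP plus the metadata service), here is a minimal Python sketch. The source functions and config shape are invented for illustration; this is not cloud-init's actual API:

```python
# Hypothetical sketch: check "local" network-config sources (ones that work
# before networking is up, like a mounted config drive) in order, and only
# fall back to DHCP when none provides a configuration. Names are invented.

def find_network_config(local_sources):
    """Return the first network config available without networking, else None."""
    for source in local_sources:
        config = source()  # each source returns a config dict or None
        if config is not None:
            return config
    return None  # caller would fall back to DHCP on the primary NIC


def config_drive():
    # Pretend we mounted a config drive and found network metadata on it.
    return {"eth0": {"type": "static", "address": "192.0.2.10/24"}}


def no_cloud_seed():
    return None  # nothing seeded on this image


print(find_network_config([no_cloud_seed, config_drive]))
# -> {'eth0': {'type': 'static', 'address': '192.0.2.10/24'}}
```

The point of the ordering is the blocking behaviour mentioned in the talk: the init system waits for this search before bringing interfaces up, so there is never a "configured wrong, tear down, reconfigure" cycle.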
What's the interaction with the No-Zero? So, the No-Zero... you may know, I know a little bit about the issue with the 169 address, but it's implemented, I believe, in Neutron... I can't say exactly. Neutron has some logic to connect it; there's an iptables rule in Nova Network that made sure that address was available to the instance. There's a metadata service, part of Nova, that runs and answers it, and then Nova sets up, yeah, Nova and Neutron work together to set up routes to this metadata service. So yeah, the No-Zero, I've heard of that for some reason, I'm not sure why... yeah, I think I've seen that too; I forget exactly what the reason is, but we can look into it and come back on that. Cool. If you're figuring out networking configuration from the network source, that's interesting. In the OpenStack case, where you would have config drive and a metadata service, potentially config drive gives you initial network config information and then you just use that to get to the metadata service. We have to solve these things for sure, and also indicating "oh, this is the first time I'm up, I should kill all networking from the previous time" versus "the user configured this stuff and they don't want me to do that"; and then snapshotting is, again... So there are definitely some interesting problems to solve there. I've talked to the CoreOS team at one point and they're open to doing that sort of thing where it makes sense; I think they've actually used the config information that cloud-init did, just in order to do that. And I'm open to that as we go forward with cloud-init, too. I really want the modules to do a better job of declaring what they're expecting: to have a JSON schema and things, and be able to check their input. There are lots of fixes. Cool, so it looks like we're running out of time, but maybe one more, sure.
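The first-boot idea that keeps coming up (compare a cached instance ID with what the datasource reports, and treat a mismatch as a new or recaptured instance) can be sketched like this. The cache path and function names are assumptions for illustration, not cloud-init internals:

```python
# Illustrative sketch of first-boot detection: compare the instance ID the
# datasource reports against one cached from a previous boot. A missing cache
# or a mismatch means "act as a new instance" (e.g. reset old network config).
import os
import tempfile


def is_new_instance(reported_id, cache_path):
    """Return True if this boot is the first on a new (or recaptured) instance."""
    if not os.path.exists(cache_path):
        return True
    with open(cache_path) as f:
        return f.read().strip() != reported_id


def record_instance(reported_id, cache_path):
    """Remember the current instance ID for comparison on the next boot."""
    with open(cache_path, "w") as f:
        f.write(reported_id + "\n")


cache = os.path.join(tempfile.mkdtemp(), "instance-id")
print(is_new_instance("i-abc123", cache))   # first boot: True
record_instance("i-abc123", cache)
print(is_new_instance("i-abc123", cache))   # same instance rebooted: False
print(is_new_instance("i-def456", cache))   # captured and relaunched: True
```

This is the decision point where "kill all networking from the previous boot" would be safe, versus a plain reboot where user changes should be left alone.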
So I know that you can... for user scripts that are bash scripts, that one's more outside of cloud-init. You can turn up the log level and start seeing a lot of things that we've found useful, but if it's a bash script, cloud-init basically at some point just execs that bash script, and then it's up to the bash script to echo whatever it needs to echo. Do you have any other recommendations? /var/log/cloud-init.log is extremely verbose; I mean, most of the time when somebody asks me what's going wrong, if they're able to paste that, I can figure it out. So that's there, and then also by default /var/log/cloud-init-output.log contains the output of any programs that were run along any path of cloud-init; their standard output is getting teed into that file. So lots of times you'll see, you know, a program's error output in that file that otherwise went to the console, which is often /dev/null. Thank you.
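The teeing behaviour described for /var/log/cloud-init-output.log can be sketched roughly as follows; the helper and paths are illustrative, not cloud-init's real code:

```python
# Rough sketch of the behaviour described: run a program with its stdout and
# stderr appended into an output log, so output that would otherwise go to
# the console (or be lost) is kept for debugging. Paths here are illustrative.
import os
import subprocess
import tempfile


def run_logged(cmd, log_path):
    """Run cmd, appending its stdout and stderr to log_path; return exit code."""
    with open(log_path, "a") as log:
        return subprocess.call(cmd, stdout=log, stderr=subprocess.STDOUT)


log = os.path.join(tempfile.mkdtemp(), "cloud-init-output.log")
run_logged(["echo", "hello from user script"], log)
print(open(log).read().strip())  # -> hello from user script
```

Because the log file is opened in append mode, output from every program run this way accumulates in one place, which is why that file is usually the first thing to paste when a user script misbehaves.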