Good afternoon. I'm Benjamin Gilbert. I'm a software engineer working on Fedora CoreOS. I also still work a little bit on CoreOS Container Linux. Test. Test. Can you hear me? Oh, you can hear me. I'm Jonathan. I work with Benjamin, also on Fedora CoreOS. Cool. Let's get to it. Try this one. Interesting. We've got two simultaneous copies of this going. There you go. Sorry, let me play with this for one sec. I can see what's going on. Oh, I see. It's only moving. Sorry about that.

Okay, so what is Fedora CoreOS? It is a new edition of Fedora that we're building specifically for running containerized workloads at scale. If you've heard of CoreOS Container Linux, this dates from CoreOS, Inc., which Red Hat acquired, what, a couple of years ago now. Container Linux has been pursuing a similar model for several years: a small, integrated OS specifically for running containers. So we're continuing that sort of model, plus integrating technology from Fedora Atomic Host. Here's the mission statement, but I think there are a couple of elements that are important here. Fedora CoreOS is targeting several different workloads, or use cases, in this space. It is targeted at clusters, but it does not necessarily require running in a cluster. And obviously we're interested in running in the Kubernetes ecosystem, but we're not going to require that either. It's also possible to use Fedora CoreOS to run containers standalone.

You may have heard of RHEL CoreOS. So what is the relationship between these operating systems? RHEL CoreOS is specifically intended as a component of OpenShift. It updates along with OpenShift. It's versioned along with OpenShift. It is based on the Red Hat Enterprise Linux package set. There's not actually a separate SKU, as we say: you can't download and use RHEL CoreOS as a standalone operating system. It's something that's just integrated into OpenShift, and you shouldn't have to think about it. Fedora CoreOS is targeting a slightly broader set of use cases. It shares some of the components and tooling. It shares a lot of the same people working on it. But it is standalone. As I said, it's targeting a slightly broader set of use cases, and it's based on the Fedora package set.

So there are a few elements that are important to understand about how we think about both of these operating systems, Fedora CoreOS and RHEL CoreOS. The key one is immutable infrastructure. These philosophical elements are not things that are embodied in code exactly, but they affect how we design the operating system and how we want it to be used. So, immutable infrastructure. The idea here is that you'll need to make some customizations to your operating system, right? You might need to set the hostname, configure static IP addresses, configure your container runtime. And the idea is that all of those customizations should be encoded in a single file, which is the provisioning config. Then that provisioning config is given to the node when it first starts, and it applies all of the configuration. After that, we think you shouldn't go in further to configure the node. You can SSH to the node and change things. You can use configuration management tools if you'd like. We don't stop you. But the problem is that then your provisioning config gets out of sync with the actual state of the node. So what we think you should do instead is change the provisioning config, spin up a new node using it, and then tear down the old node.
So essentially, once a node exists, you treat it as immutable. And the reason for that is that if you want to scale out due to changes in demand, you can just launch new nodes with your current config and not have to think about configuration per se.

So another philosophical component is that software should run in containers. The operating system is for supporting hardware. It's for providing the container runtimes. But if you're running your own software on the node, it should always be in a container. Toward that end, we don't ship interpreters. We have bash, obviously, and awk and sed, if you count those. We don't ship Python, we don't ship Perl, we don't ship Ruby. If you want those things, run them in a container. Similarly, ABI compatibility for libraries within the host is not something we worry about too much. The operating system is self-consistent, but if you copy some random binary onto the node, it's not guaranteed to keep working over an OS upgrade. And speaking of OS upgrades, OS versions themselves are an implementation detail. Think of the node as something like an appliance. It should auto-update itself. I'll talk about that more later. But really, you shouldn't have to think about it. And in particular, when a Fedora major version upgrade occurs, if you go from Fedora 30 to 31 to 32, that should just be a regular upgrade. You shouldn't have to think about that at all.

Okay, so what does this operating system look like in a little bit more detail? It's targeted at cloud instances and bare-metal servers. We aim to have it available on a wide variety of clouds, all of the usual suspects, really. Workloads, as I said, should run in containers, which means that the operating system image itself can be pretty minimal. There's not a lot in it. It's an image-based distro. So you're not running DNF. You're not installing RPMs. You get essentially a monolithic file system image, and then it is updated atomically. It's built on rpm-ostree, which you can think of a little bit like Git for the operating system. You have some revision, and then when an update comes along, you pull that down, apply it atomically to the disk, and reboot into it. So you're never running in sort of a half-upgraded state the way you might be with DNF. On top of rpm-ostree, we add automatic updates. rpm-ostree, by default, will upgrade when you tell it to, same as DNF. We are adding a system on top of that which automatically fetches and installs updates, and again, we'll talk about that a little bit more later.

So specifically, what clouds? The targets that we're focused on right now are listed there. That also includes virtualization systems like QEMU. One note, though. On some clouds, if you launch in, let's say, GCP or Azure, many Linux operating systems will include that cloud's agent. The agent does different things depending on the cloud. On some platforms, it has to check in with the cloud before the cloud believes that the OS is booted successfully, for example. Sometimes it provides additional management functionality, so that from the cloud's web interface you can add users to the node or maybe even run code on the node. In general, we are going to try to avoid shipping those platform agents. A lot of the functionality is not all that meaningful for a more specialized operating system like Fedora CoreOS, and not all of that code is uniformly well-advised.
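Going back to the image-based update model for a second: from a shell, it looks roughly like the sketch below. The output is abridged and illustrative, not captured from a real node, and the ref name is just an example.

```bash
# Show the deployment that is currently booted (output abridged, illustrative).
$ rpm-ostree status
Deployments:
* fedora/x86_64/coreos/testing
    Version: 30.20190725.0

# Fetch and stage the next OS image as a single atomic step; nothing changes
# until the reboot. On Fedora CoreOS the automatic-update service normally
# drives this for you, as described later in the talk.
$ sudo rpm-ostree upgrade
$ sudo systemctl reboot
```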
So instead of shipping those platform agents, what we generally do is ship a piece of code called Afterburn, which is sort of a generic, minimal cloud agent. On those platforms where you have to do something special to get network configuration or the hostname, or to tell the cloud that you are ready, Afterburn will do that, and we will use that instead of the cloud agents.

On the bare metal side, it is pretty much what you would expect. You install the operating system to disk and run it, except for one thing. In some sense, Fedora CoreOS is a cloud-native or cloud-first operating system, and in the cloud you don't have an installer. There is nothing like Anaconda. You are just launching a prepackaged image. And so bare metal for Fedora CoreOS doesn't have an installer either. What we do instead is essentially a shell script. You run a thing, and it fetches a monolithic bag of bits and effectively just dd's them to disk. The consequences of that we'll get to in a minute. But the idea is that the install process is as simple as possible. One other note. Container Linux supports live PXE booting, so you can essentially just run your entire production OS from RAM and never install to disk at all. We are going to have similar functionality for Fedora CoreOS. This is actually fairly widely used on Container Linux as a way to even further minimize the footprint of the operating system on disk.

So what is in Fedora CoreOS? Of course it has all the latest Fedora bits, systemd, the kernel. It's not based on Gentoo. We have all the basic hardware enablement software that we need. There are basic administration tools. Like Benjamin was saying, we don't expect you to log into Fedora CoreOS that much to play around with it, because everything should be set up right from the beginning with your Ignition config from the provisioning step. We'll talk about Ignition in a moment. But we have SSH, of course. There's rsync, tar, all the basic stuff. You can check the journal to do some basic debugging. And of course there are container engines. We have Podman, Moby, and systemd-nspawn. And then we're still discussing how we'll provide the kubelet and CRI-O to nodes, so watch out for more discussions about that.

Okay, so how do you actually provision a Fedora CoreOS node? Fedora CoreOS uses this tool called Ignition. Ignition is similar in idea to cloud-init, for those of you who are familiar with how Fedora Atomic Host was provisioned. Ignition takes in a configuration file. This file is a JSON document and is provided using whatever the user data mechanism is for the platform you're targeting. So in most clouds, you can pass in some user data. So what can Ignition do? You can write files. You can write systemd units. You can create users and groups. But then you can also do fancier stuff like partition disks, create RAID arrays, and format file systems. So for those of you coming from the RHEL or Fedora side, it's sort of a mix between cloud-init and Kickstart. And that's part of the reason why, as Benjamin was saying, the bare metal image and the cloud image are actually the same thing: Ignition is sort of taking the place of Kickstart and cloud-init. The reason we can do those kinds of powerful manipulations is that Ignition runs in the initramfs. So it runs even before the system is really booting. It runs exactly once. And most importantly, if anything fails during the provisioning steps, if anything in the config can't be executed, it'll fail the boot.
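To make that concrete, here is roughly what a small Ignition config looks like. This is a hand-written sketch with a placeholder SSH key and a made-up unit, not a file from the talk.

```bash
$ cat example.ign
{
  "ignition": { "version": "3.0.0" },
  "passwd": {
    "users": [
      { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA...your-key-here"] }
    ]
  },
  "systemd": {
    "units": [
      {
        "name": "hello.service",
        "enabled": true,
        "contents": "[Unit]\nDescription=Say hello\n[Service]\nType=oneshot\nExecStart=/usr/bin/echo hello\n[Install]\nWantedBy=multi-user.target\n"
      }
    ]
  }
}
```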
So because a failed config fails the boot, if your node came up, you can be pretty confident that your configuration has been applied.

Okay, so how do you write an Ignition config? Like I said, the configs are in JSON. It's super mechanical. It's not that pretty. It's unsugared. So we have a sort of layer on top called the Fedora CoreOS Config language. This one is meant for humans. It's in YAML. It mostly maps onto the Ignition config, but it has some additional sugar for things like setting your time zone, or setting update windows for when your node should update: things that the Fedora CoreOS Config Transpiler will convert into things that Ignition understands. I sort of jumped the gun, because the next bullet actually explains what the transpiler is. The transpiler is the thing that converts the YAML into JSON. So it converts the Fedora CoreOS Config file into the Ignition config JSON that Ignition actually understands. It gives you a chance to be stopped by the transpiler, before you actually bring up the node, if there's a really obvious error in your config, so that you don't find out while booting your node in AWS that, oh, I missed a closing bracket here, so I have to reprovision the node.

Okay, automatic updates. This is basically one of the key features of Fedora CoreOS. It's inherited from the Container Linux philosophy of automatic updates. And the basic idea is that users should not have to think about updates at all. The machine should just take care of updating itself. It should be able to just pull in the latest bug fixes, the latest security patches. A consequence of that, if we really want this to work, is that we need automatic updates to be rock solid. We need them to be super reliable. Because if they're not reliable, users will just turn them off. And if users turn them off, they don't get the latest fixes. They don't get the latest security patches. So that basically means we can't have any breaking changes, and if we do envision any sort of breaking change, we need to have a really long deprecation window that we publicize widely. So how do we make sure that we don't introduce breaking changes? A lot of CI, obviously; if the build doesn't pass CI, it's never going to make it to users. But then we also have some process-level mechanisms. Benjamin, later, is going to talk about managed update rollouts and the different release streams that we have, with different semantics around breaking changes. And then finally we have automatic rollbacks. So the node is capable of, let's say, pulling down an update and rebooting into it, and if there's something wrong with that new update, for example some service doesn't come up, it'll detect that and roll back to the previous version it was on. This also includes the ability for users to specify additional checks that the boot has to pass before it's considered a successful boot. So you can imagine that in your scenario you might have a service that you really need to be up, and if that service is not up, even if everything else is working, you don't want that update to propagate across your fleet.

Okay, so I'm going to do a demo now of the provisioning workflow and also the automatic updates. Oh, this is going to be interesting actually, because I can't see. Let me switch into mirror mode. I guess we'll revert the settings. Is it not? It's just dead now, isn't it? Okay, fun. Let me try something here. We're trying to make it mirror.
Oh, there we go. You can go ahead though. So you talked about the immutability of the nodes, but then you also talked about kind of bringing up a parallel system. Yeah, sure. You talked about... you can't change the nodes once they're up, but we also talked about having updates and the system rebooting itself, and those were kind of... I didn't quite understand how that worked. Right, we think of the system as immutable from the perspective of the config. So the user, again, is free to change configs and things, but probably shouldn't. The node itself will continue to update. So it's not truly immutable in the sense that it's a fixed bag of bits that never changes. It's just that we don't want the user poking around at the system after it's provisioned.

Can everyone read the font? Okay. Okay. So this is a Fedora CoreOS Config. It's YAML. I won't go through every field, but basically we're saying we want our config to create a user, the core user, and we want it to have a specific SSH key as an authorized key. And then we also want it to write out a systemd unit file. So now we have this YAML file, and we'll convert it to... actually, let me bring that up. We'll convert it to the Ignition JSON. To do that, you use this tool called the Fedora CoreOS Config Transpiler, shortened to FCCT. And actually, I'll just use my bash history here. So you give it the YAML file, this is the file that we had just opened, and then we want it to output JSON. Enter. And I'll just show you quickly what that looks like. So essentially the same information is there, just in JSON, but in the future you can imagine more complex things in the YAML that get translated into many more things in the Ignition config.

Okay, so let's boot up a machine with this config. Actually, I'm going to boot up two machines. One is going to be on the previous version of Fedora CoreOS, and the other one will be on the latest version. And the reason I do that is that the one booted on the previous version will be for demoing automatic updates. But for the purposes of provisioning with an Ignition config, I'm just going to use the latest AMI, because you should normally always be using the latest one. You wouldn't have a reason to use the previous one. So, okay, this is a lot of goop, but essentially the key part here is we do aws ec2 run-instances, and right here, the user data, that's where we're passing the Ignition config to the cloud. So let's run that; hopefully conference Wi-Fi is on our side. Okay, cool.

Okay, so I provisioned two instances. The second one is on the latest AMI, so I'm just going to fetch the IP for that, actually. So remember, the Ignition config basically had the authorized key that we wanted to add and a systemd unit file, just a dummy foobar unit. Okay, that's fun. Might have to wait a little bit more. Let me just double-check here. 380, 140, 187. Okay, so we're in. So the fact that we logged in means that the SSH config went in. And then let's check our systemd service. Okay, our bar has been foo'd. Very cool.

So now let's look at automatic updates. I'm going to log into the second instance that I brought up, which was actually on the older version. So it's this one. Okay, so this is interesting, actually. Okay, so right now it's on the previous version. I didn't actually show you the rpm-ostree status of the other node that I booted up, but it was on 0801, and this one is on 0725.
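For anyone following along at home, the config described in the demo would look roughly like the following. This is reconstructed from the narration, so the SSH key, file names, and unit contents are placeholders, and the exact fcct flags have varied between releases.

```bash
# Roughly the Fedora CoreOS Config from the demo (placeholder key and unit).
cat > example.yaml <<'EOF'
variant: fcos
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...your-key-here
systemd:
  units:
    - name: foobar.service
      enabled: true
      contents: |
        [Unit]
        Description=Foo the bar
        [Service]
        Type=oneshot
        ExecStart=/usr/bin/echo "bar has been foo'd"
        RemainAfterExit=yes
        [Install]
        WantedBy=multi-user.target
EOF

# Transpile the YAML into the Ignition JSON that the node actually consumes.
fcct --input example.yaml --output example.ign

# Pass the JSON as user data at launch; most real arguments (AMI, subnet,
# security group, and so on) are omitted here.
aws ec2 run-instances --image-id ami-XXXXXXXX --user-data file://example.ign
```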
So that older node should be in the process of updating, and we can check that by looking at the journal entries for Zincati and rpm-ostree. Zincati is the service that actually checks for updates, and when it finds an update, it tells rpm-ostree, okay, upgrade to this latest version. That's what happened. Okay, let me filter the journal for Zincati. Okay, so Zincati will retry in five minutes. Do we have that luxury? Well, what I can do is... Okay, so now Zincati detected a new update, and then it told rpm-ostree to deploy this new commit, and now rpm-ostree has deployed it, and now the node is rebooting. Yeah, so what just happened there is that we start the update client without regard to whether networking is already up, because if the first attempt to check in fails, it just tries again in five minutes, and that's fine. But of course, for a demo, it's not so fine. Yeah, let's see how quickly AWS nodes reboot. What's funny is that when I tested this earlier, it rebooted fine within the first two or three minutes and didn't hit that race condition, but the one time you do it live...

Okay, so there's already a good sign here, which is that the login prompt tells you the version you're on, and this is the latest version. So we know the upgrade went well, and if you do an rpm-ostree status, you can see that it's on the new deployment. If you're not familiar with rpm-ostree, this is basically saying: I was on this previous version, 0725, and now I'm on this version, 0801. And let's go back to the presentation. Cool. It's mirrored now, so... Yeah, cool.

Okay, we do a couple of interesting things with respect to how we publish install images and upgrades. As I mentioned before, when you install Fedora CoreOS, you're essentially just fetching a monolithic set of bits and copying them directly onto your disk. In many Linux distros, you can go to some FTP or web server somewhere and browse around and see all of the release images. We intentionally do not enable that for Fedora CoreOS. The older release artifacts are still there, but we think it's important to exercise closer control over which images people are pointed at. So the starting point, if you want some Fedora CoreOS bits, is a JSON document at a public endpoint, the stream metadata. It's highly nested, so I didn't put it in the slides, but if you look at that JSON document, you index by the CPU architecture you want, the platform you want, such as AWS or GCP, and then perhaps even the region of AWS that you want. And when you get down through all those levels of nesting, what you get at the end is, okay, here's the Fedora CoreOS version, here's the cloud image ID, or here's the URL of the download you want. And the idea is that if you have scripts, for example, that fetch a QEMU image and deploy it to your machines, those scripts start with the stream metadata and always know what our recommendation is for the OS version to run. In fact, the download site, which is just a traditional download web page on getfedora.org, reads the same JSON document. So the idea here is that if we put out a bad release for some reason, and then we find out about it, someone files a bug or whatever, we can stop new deployments from using that release, because we'll point the stream metadata back to a known good release. So this always represents our recommendation for the safe release to run.

On the update side, we have something similar. You saw this in Zincati just now.
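As a rough sketch of the commands behind what the demo just showed, assuming the unit names haven't changed since this talk:

```bash
# Follow the automatic-update agent's decision making; Zincati runs as an
# ordinary systemd service on the node.
journalctl -b -u zincati.service -f

# If a freshly deployed version misbehaves, flip back to the previous
# deployment and reboot into it.
sudo rpm-ostree rollback -r
```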
Before performing an update, Zincati checks in with a service run in Fedora infrastructure, which gives it permission to update and tells it, essentially, here are the versions you can update to. That lets us roll updates out gradually. Container Linux did the same thing. When Zincati checks in, it picks a number between 0 and 1, and it says, I'm interested in being this aggressive for this rollout. And if I'm a client and I say my aggressiveness is 0.4, and the server says I'm only rolling out this release to 0.3 and below, then I won't get that update yet. I won't even be told it exists. So what that lets us do is roll out releases gradually over time, and that means that if someone reports a regression to us, or we find out about it in some way, we can stop the rollout without having blasted the update out to the entire fleet of nodes.

And there are other things Zincati can do. That rollout mechanism is a distro-wide piece of functionality, but individual nodes and individual clusters can also be configured with their own services that they check in with. So if you're running a cluster, you probably want to make sure that every node in that cluster doesn't update at the same time. And the way you can do that is have Zincati call a service in your cluster and say, may I update now? And that service can make sure that it gives permission to exactly one node at a time, or two, or maybe it only gives out permission in the middle of the night, whatever it is you want to do. We will provide at least one example implementation of such a service, or you can provide your own.

Release streams. Jonathan mentioned automated CI before. CI is good, but it can't catch everything. We are shipping the Linux kernel, we are shipping systemd, we are shipping multiple container runtimes. That's a lot of code being written by a lot of people, and CI is just not going to catch all of the interesting bugs. So when we roll out a release, we want to be able to do it in a way that users can test it with their workloads, with their network configurations, on their hardware, and let us know if there are problems before it hits the entire fleet. The way we do that is we start with Fedora 30, right now, plus the set of update packages, and every two weeks we snapshot that and make a release on the testing stream. The idea is that we encourage users to run a few percent of their nodes on testing and report problems to us. After two weeks, we take that testing release, make whatever fixes are necessary, add security fixes, whatever, and roll it out to the stable stream, and that will be rolled out over time to everybody. We also have the next stream, which is intended to give extra baking time for larger changes. For certain types of things, a two-week test period is probably not enough. That's things like substantial new kernel releases, or Fedora 31 as a whole, and so those sorts of things will be on the next stream for longer to get more time for feedback. As it says, we hope users run a little bit of testing and a little bit of next in production, next to their stable nodes, and in order to make that work we will apply critical bug fixes and security fixes to all three streams. So you're not in a situation where, I'm running next and that's fine, but there's this unfixed security vulnerability.
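Circling back to the per-node update knobs mentioned a moment ago: Zincati is configured with small TOML drop-ins, and a sketch of the rollout-wariness and cluster-lock settings might look like this. At the time of this talk some of this was still being built, so treat the file path, key names, and URL as illustrative assumptions rather than the final interface.

```bash
# Hypothetical drop-in telling Zincati to be conservative about rollouts and
# to ask a lock service in the cluster for permission before rebooting.
cat > /etc/zincati/config.d/55-update-strategy.toml <<'EOF'
[identity]
# 0.0 means update as early as possible; 1.0 means wait until a rollout
# has reached everyone.
rollout_wariness = 0.5

[updates]
strategy = "fleet_lock"

[updates.fleet_lock]
# A coordination service you run yourself; it hands out one reboot slot
# at a time so a whole cluster never updates simultaneously.
base_url = "http://updatelock.internal.example:9999"
EOF
```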
One other thing that's interesting that we're doing is enabling machine counting by default. There's a trade-off here. One of the things we've found with Container Linux, with Fedora Atomic Host, with Fedora as a whole, in fact, is that it's very difficult to focus development time if you don't know how many users you have and what they're doing. So for Fedora CoreOS, do we spend more time on AWS? Do we spend more time on DigitalOcean or Packet, making those platforms better, that kind of thing? On the other hand, privacy is important. This is free software. People don't want their machine spying on them, and we understand that; it is very important to us as well. So we're trying to strike a balance here. By default, Fedora CoreOS will report some generic information about nodes that are running, and by that we really mean non-identifying information. So this is things like: I'm running on AWS. I'm running an m4.large instance type. My OS version is X. The originally installed version of the OS was Y. These are things that hopefully apply to enough people that they shouldn't fingerprint a node. If you want, you'll be able to opt into reporting additional information, like, if you're on bare metal, what type of machine you have. There are probably fewer of those, so it might theoretically fingerprint you. Or, of course, it will always be possible to completely opt out of this reporting. The key point here is that we will only look at this information in aggregate. We will not look at individual records, and no unique identifiers will be reported at all. The way this effectively works is that once a day the node says, oh, I need to report in and give this information. But there are no unique IDs. We feel it's important to have this on by default because otherwise we won't get an accurate sample. But hopefully we're preserving the privacy properties that we want to be preserving here. And we will carefully document in the Getting Started guide and everywhere else that this is going on.

Okay, so how do we actually build Fedora CoreOS? This is actually shared with Red Hat CoreOS. The main tool is CoreOS Assembler. It's sort of a collection of capabilities that together make it really, really easy to build Fedora CoreOS locally. It's used both by developers, for development purposes, and for production. Essentially, with just three steps you can build Fedora CoreOS locally. You do cosa init, and you give it the config repo; the fedora-coreos-config repo is where all the definition files for what goes into Fedora CoreOS live, you know, all the packages. And soon we'll have lock files too, so specifically which version of each package we want in there. Then cosa fetch fetches the packages, and cosa build builds it, of course. A cool thing about CoreOS Assembler is that it can run fully unprivileged. Under the hood it uses rpm-ostree, of course, to convert the RPMs into an OSTree, and then supermin. Supermin is used both for doing the unprivileged stuff and for actually creating the disk images that become the cloud or bare metal images. A big difference between Fedora CoreOS and Red Hat CoreOS compared to, let's say, traditional Fedora is that they do not use Anaconda to build images. There are various reasons for this, but the idea here is, like I said earlier, that Ignition is the only tool we want you to need for specifying how your machine is provisioned. So how do we actually run CoreOS Assembler in production to build the production images? That's the Fedora CoreOS pipeline. It's simply a Jenkins pipeline that runs CoreOS Assembler in OpenShift.
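The three local build steps described above look roughly like this; the usual way to get the cosa command is a small shell wrapper around the coreos-assembler container image, which is omitted here.

```bash
# Point coreos-assembler at the config repo that defines what goes into the OS.
cosa init https://github.com/coreos/fedora-coreos-config

# Download the RPMs named by those definition files.
cosa fetch

# Compose the OSTree commit and assemble the disk images.
cosa build
```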
So in production, everything happens in OpenShift, and then we push the results out to S3.

Okay, so where are we now? There is a preview release of Fedora CoreOS available today. It's not ready to run in production, so please don't. What we'd like people to do is try it out, report bugs to us, report missing features. Be aware that we're reserving the right to make backward-incompatible changes during the preview period in order to fix things or improve the design. In more or less five months from now, we're planning to have a stable release, at which point we will recommend that Fedora CoreOS is ready to run in production.

So specifically, what's next? We need to finish implementing all three of the streams that I mentioned. We're working on adding additional cloud and virtualization platforms, support for architectures other than x86, the live PXE support that I mentioned earlier, also live CDs, some work around network configuration, and additional sugar for the config transpiler. Machine counting right now is actually only a stub: it reads the config file and makes sure that it's valid, so that you can configure your nodes, but it doesn't do anything yet. We will have much more documentation, and we are also working on some details around integration with OKD and Kubernetes generally. This is worth calling out specifically because the OKD 4 effort is getting spun up at the same time. In the short term, the plan there is to essentially branch Fedora CoreOS and bolt in whatever needs to be bolted in just to get OKD to a minimally working state to have a demo. After that, we will start working on integrating that OKD work back into Fedora CoreOS, so there's one distro that's used for OKD, for other Kubernetes distributions, as well as for non-Kubernetes use cases. There are a couple of open issues listed there in order to make that happen.

Finally, a note on the distros that we came from. Fedora Atomic Host has not been updated to Fedora 30 and will go end-of-life in late November or so, depending on Fedora schedules. Container Linux will continue to be maintained for about six months after the stable release of Fedora CoreOS, and we will announce exact timing when we get closer to that point. For Container Linux users, we'll provide migration tools and docs and as much help as we can to help people migrate their existing workloads from Container Linux over to Fedora CoreOS. Here's the usual list of places you can go. The website is the download site; it also links to the Getting Started guide. The second link is essentially the focus of development for Fedora CoreOS, and then there are some other places where you can discuss Fedora CoreOS with us as well. Thank you very much. Looks like we have about two minutes left. We can maybe take a question or two, and I think both of us will be available outside after the talk. Thanks.

My question is about what updating and releasing Fedora CoreOS will look like. So, do you plan on releasing, for the Amazon case, regular AMIs, and then, if I want to update, how often will you release those AMIs? I'm hearing nods, so that means often. Sure. So, the exact timing is subject to some change, but all three streams will release at least every two weeks. In addition to those scheduled releases, we can have out-of-cycle releases. So if there's some major security update that needs to go out immediately, we'll do special releases on all three streams for that. And that will be an update payload, and it will also be new AMIs, new QEMU images, everything.
Okay, so you plan to release the OSTrees and the image builds at the same time? Yes. Perfect.

So, you mentioned towards the end about OKD and the integration of this. When you say branching FCOS for this, do you mean specifically that there's going to be a separate OSTree stream and a separate image stream for this, or is there going to be some kind of overlay, a package overlay or container overlay? What do you mean by that? Because that's kind of confusing, especially since I came here from the Silverblue talk, and he was talking about all kinds of weird things you can do on top of rpm-ostree. We don't actually know right now. It may be a separate OSTree; that was, I think, what we were talking about at first. Then we started talking about just doing package overlays. The latter approach is cleaner because it requires less extra stuff. We want to go for the smallest possible branch. It's not clear what that's going to be yet.

Two quick questions. One, automatic updates: is that going to destroy custom things? Second question, if you add all the bug fixes to stable, how does that keep it stable? Right. So, the first question: if you have customizations in /etc, or you have something in /opt, for example, that will not be destroyed by an upgrade. The parts of the system which are modified by an upgrade are read-only, so you can't accidentally get yourself into trouble there. And the second question, sorry, remind me? How do you keep it stable if you add all the bug fixes to stable? Carefully. It's a tough call; we've been dealing with this on the Container Linux side for several years. If there's something which is a large change and probably not that urgent, we would roll it through testing first. If there's a change which is small and urgent, we would probably send it directly to stable. If there's a change like fixing Spectre and Meltdown, which is important and also a giant patch series, we may have to hold our nose and send it directly to stable, and that's not ideal, but it's a judgment call. Yeah, we'll do that when we feel we have to, but with security updates and really critical bugs, there's a judgment call sometimes. Anything else? Cool, thank you very much.