So, good morning, everybody, and welcome to the second day of DevConf. Just a gentle reminder for the rest of the day: if you come into a conference room, please close the doors gently so as not to distract the speaker, and please do go and rate and comment on our website about the talks. And so, let's start today with our first speaker, Daniel Riek, and he'll tell us some interesting stuff. Let's welcome him. Thank you. Thanks for getting up so early. I'm relieved that I have the grayest beard in the room. Just a question: who in the room knows what this background picture is taken from? Okay. I've only had one presentation where people knew the movie. That was with some DNS folks at Verisign. It's from Dark Star, probably the best movie ever made, so yeah. All right. I'll be talking, well, I have to warn you: I had the brilliant idea to redo all my slides yesterday. So we'll see if that works. They are not all really new. I picked them from a bunch of older decks, but I changed the whole story, so it means I'll probably have to talk very fast at some point because I run out of time, and there will be some logical breaks, because it really sounded great at 3 a.m. last night. So bear with me. You can leave feedback on the website and tell me what you think. So, greybeards think they're really wise. They hate systemd and they fork distributions. If they haven't forked Debian over systemd yet, I'll probably get them to fork again whenever Debian starts believing what I say, which isn't my first concern because I work at Red Hat, so who cares. Let's go and take a philosophical view on what the role of the Linux operating system is. Whenever people at Red Hat draw an abstract vision of our stack, right now they put Linux in infrastructure. And there's some logic to it. It has all the device drivers and stuff. It feels like infrastructure. But you can take an alternative view, and if you look at it from an application point of view, you realize the core function of Linux is not to make hardware work or provide infrastructure. The core function is to provide a user space runtime. The only point of running IT is to run something on it, not to run the hardware for its own sake. So really the core function of Linux is to provide an application runtime. The application might be infrastructure management, in which case it runs in the infrastructure, but most of what we do is actually in the user space. And most of our users are people who really care about the applications running on top. And if you take a historic view: where did we come from? Why are we doing things the way we are doing them? This started very, very long ago. Early on, the mainframe model was basically a completely proprietary, integrated experience. The vendor controlled what software you could run. The machines usually were leased; you often didn't even own the hardware you had. And that got replaced by what then was called open systems, Unix. In the PC world, Windows wasn't on servers yet, and I'm going to leave NetWare out. But open systems came along, Unix came along, and there you had a vertically integrated hardware-to-operating-system stack and a vendor-controlled toolchain, but you started to see an open ecosystem on the ISV side. You could run your own software easily. You could buy software from a third party. The problem there was still the vertical integration. That was the history of Unix.
They had all these standards that didn't mean anything in reality, because they didn't actually provide compatibility on a binary level. They had all the open labels, and then you were totally locked into one vendor's ecosystem whenever you decided to go with that vendor. There was some level of source code compatibility, but if you ever wonder how a behemoth like Autoconf was created and why, look at what it had to deal with to make things work across all these different versions. And I know we are getting there too with Linux. It's getting hard to keep things compatible, but for very different reasons: because we're moving faster, not because we have proprietary differentiation on the technology level. And then when PC hardware got good enough, and people started wanting to use PC hardware and you had multiple hardware vendors, there was the need to break up this vertical integration. And that's the role of Linux. Linux provided the neutral user space runtime across different hardware options. That's the historic role of RHEL in the enterprise. RHEL became the neutral runtime that the open ISV ecosystem could standardize on, and then you had choice on the ecosystem side and on the hardware vendor side when you bought into it. And if you look at it today, where is Linux being used? Today Linux provides you an application runtime independent of whether you're running on your laptop, on a bare metal server, on virtualization like VMware, or on the public cloud. It's the standard for getting this level of compatibility, so my binary can work in all these options. So today, and that's my proof point, probably most Linux deployments are either on VMware or on public cloud, which proves that Linux is not just infrastructure. It proves it's an application runtime, because the reason why VMware never came up with an operating system is that customers would not have wanted a vertically integrated infrastructure stack where the application runtime is owned by the people who own their infrastructure management. Very simple. It never even happened. And yeah, we'll get back to that philosophical side later, but the next question is what that means in practical terms for how we do software management. Because this is an evolution over time, right? We went from something very integrated to something semi-open to something open to something even more flexible, because basically what happened with virtualization is that we got rid of the hardware. Now, along the same evolutionary track, we changed how we deal with software, how we deal with the user space. Early on, servers were not just pets, they were prize exhibition dogs, shown in the center ring of the dog show. You had multiple admins per Unix host and a lot of terminals, a lot of people working on it. You compiled stuff locally on the machine, and because you wanted to map that to the binary stack, you used things like Stow or you mounted binaries over NFS. It was high maintenance. The problem is it was very fragile, right? The behavior of your binary depended on the build environment on the production host, and you were compiling stuff on the production host, which kind of leads to disaster if something goes wrong. So that was too fragile, and it didn't scale in the PC environment. When Linux took over, there were too many machines. It wasn't efficient anymore to go into each machine, compile software, and use /usr/local. It just couldn't be done, right?
Couldn't hire that many admins, and the value wasn't there. Plus, the software stack started moving faster, and it was just too hard to keep that stable and working. So the next thing that happened was that we got to binary, reproducible packaging. We introduced RPM. That happened around '94, '95, perhaps. Probably when Red Hat Linux 2.1 came to Germany, at least; that's where I was at the time. Actually, it came as part of the Caldera Network Desktop, which is kind of funny in retrospect, because Caldera turned itself into SCO later, but anyhow. So the first Linux distributions I used were actually just bunches of tar and zip files. They were already pre-built binaries, because at the time compiling everything was just crazy. So you had tar files, but they didn't have any artifact tracking. And RPM started to change that. With RPM, we had a clean way to describe the build, build it, package the results in a way that can be distributed easily, and then install it in a reproducible manner, managing dependencies. And then actually remove it again. That was important, too. You could actually remove stuff without leaving artifacts behind, or having some file you touch when you install so you can do a find based on the timestamp later, and all the other stuff we did to deal with that back in the day. There is an important thing, though, that RPM did, and I think it was an unintended consequence: it turned the whole system into a pure single-instance, single-version model. RPM has no viable mechanism to let you install multiple versions of the same software stack, or run multiple versions easily. An example: when we want multiple versions, we have to rename the RPM, right? python3. That's straightforward, but painful and not very flexible. But the multi-instancing is actually more interesting. We have a lot of software in our distribution that you can install as one binary, but then you want to run multiple instances, like in the web server space, some content management systems, and you do weird things with rsyncing stuff into /var/www/html and so on, because RPM doesn't easily let you install the code multiple times. This has been a big point of contention with the JBoss community since we've had anything to do with JBoss. I was in product management for a long time, and we always had this fight trying to get the JBoss people to package EAP in RPMs, and they said no. And we said, well, do it anyway. And they said, but no one is using it. Do it anyway. It has to be done. It needs to be done. It's crazy to ship zip files. But at the end, if you look at it, why didn't they want it? And why didn't customers use it? It's very simple. Because in the Java world, you're running multiple instances and multiple versions at the same time, always. So they actually needed multiple versions of EAP on the machine at any time, and RPM just sucks at that. Our RPM package didn't allow that. So that's an unintended consequence of what we did with RPM: we forced everything to be a single-version, single-instance system. Another aspect is that it's component-level packaging. So it's a late-binding model. We declare all the dependencies, and they're at different levels of specificity. I can't say that in English. Specificity. So sometimes you have a specific version dependency, sometimes just a very abstract package name. And that gets resolved when you install it, which is fine. It's much better than compiling in /usr/local.
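Just to make the mechanics concrete, the lifecycle RPM gave us looks roughly like this; a minimal sketch, and the package name is made up:

```bash
# Describe the build in a spec file, then build and package it reproducibly
rpmbuild -ba hello.spec

# Install the resulting binary package, with dependency checking
rpm -ivh hello-1.0-1.x86_64.rpm

# Show the declared dependencies, at varying levels of specificity
# (a specific library soname versus just an abstract package name)
rpm -q --requires hello

# Verify the installed files against the package metadata,
# and remove the package again without leaving artifacts behind
rpm -V hello
rpm -e hello
```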
But it still means that when you build something, you have one version of the stack. Then you move to test, and a security errata came along, and something slightly changed in your stack. And you test that. And then you move to production, and then the expectation is that OpenSSL has yet another branded bug, which isn't really good, so you have to apply that. And that means that in production, you're expecting that you can apply a security fix to your software while it runs, restart your software, and it keeps running. Which means that what you're running is not what you developed on and not what you tested. That's an important side effect of component-level packaging. Past RPM, there were a couple of things that were added on: Kickstart, up2date for errata, Satellite, cfengine, and things like that came along, which were all about the scale-out world where you have all these machines. You don't have a few big Unix hosts anymore, you have a lot of servers, and you need to centralize how you manage that. Basically, this is still the stack we use today. Yes, cfengine was replaced by Puppet; that's just cfengine on steroids. It has most of the same problems, if you ask me. If you don't believe me, go to James Shubin's talk later about config management. And now Ansible is a different approach, a different animal. But at the end, it's all the same kind of core principle. We still use Kickstart. That was probably the most long-lived thing that Red Hat ever introduced. And I blame it for us winning over SUSE, because they didn't have something like that. They had a much nicer installer, but the automation capability of Kickstart was awesome. You could do what you needed to do. So, by and large, this is still the kind of stack we use today. So what does that mean for application deployment? If we take this a level up, how am I using this stack? A core model in this is that we have a single user space shared between all applications. That's the traditional model. We have an application runtime that is defined by the lifecycle of the operating system. And applications end up isolated basically at the hardware level. That's kind of something we inherited from Windows, because in Linux, there's no problem running multiple applications on the same host. But in Windows, you could never really do that. So in the enterprise world, they always buy hardware per application, and you end up with one application per host. It has to do with resource control and managing all of that. And the whole thing is very much about long-term binary stability, moving along with the hardware cycles, updating, enabling hardware. It has some issues. It has limited flexibility. The lifecycle management on the component level gets fragile. And I forgot to put a slide in here. I wanted to put in the Fedora dependency chart, which blows your mind, because you see that everything is interdependent. Because we have to make every option work, and we have to make sure that when we install something, all functions work at the same time. It's basically impossible to get freaking CUPS out of your server install, which is annoying if you don't want to print on your server, which most servers never want to do. It's really hard to manage the side effects of all this. You add a new version of something over here, and then you have to test every application, because it might break the API over there, and then things blow up.
The current model ties the user space monolithically to the hardware lifecycle. It's not so much on the Fedora side; on the RHEL side, definitely. Our lifecycle is driven by when the kernel needs to be updated, because we can't enable new hardware anymore, and that's when we do major changes in the user space stack. So it's tied together. That's definitely a downside, because it's not necessarily how you want to drive your user space. If you consider Linux infrastructure, it totally makes sense. But if you consider it the user space runtime, then this has some downsides. The next thing that came along were VMs. And I'm just throwing VMs in there, and I'm throwing cloud and VMs together; it's all the same thing from my point of view. Because essentially, it just works around the hardware tie-in that I just described. All it does is let you virtualize your hardware so you can basically run your operating system application-centric. So when you now update your hardware, you update the hypervisor layer. Whether that is an open source stack running RHEL or RHEV or OpenStack, or whether that's a proprietary stack, is secondary from the application point of view, because you don't even care: the Linux OS you're using inside the VM abstracts you from all of that. So from an application-centric view, it's all the same. It gives you higher flexibility. It lets you get independent of the hardware lifecycle underneath whatever you're running, and you can focus on the application. It also lets you have application-specific stacks. In my VM, I install what I need for the application I'm running. I don't have to install things for other applications that might be needed. So I can optimize things a bit more. I can have different versions. I can have the same hardware setup and run RHEL 5, RHEL 6, RHEL 7. I don't care anymore. I don't get into this either-or. So it does get rid of the hardware tie-in. The problem is that it's too much overhead per application. And the management is too complex, because it's a black box. It's essentially a new piece of hardware, just shared hardware. But you can't easily introspect into it. There's a nice example. We are really bad at this; VMware is much better at this kind of stuff. So when you want to do a backup of a database on a Windows machine in a VMware cluster, what do you do? You basically tell VMware, hey, I want to back up that VM. Backing up nowadays is just taking a snapshot and managing the artifacts that are in there. So you tell VMware, then VMware has an agent inside the guest that tells the VSS service in Windows, hey, I want to back up this machine. And VSS tells the database, you need to flush your buffers. And it flushes the buffers. And it quiesces the file system and stops everything. VMware then takes a snapshot. And then VSS tells everything to start again. It's great. It works great. A, we can't do that, because we don't have that level of integration. B, it's a lot of moving parts that have to work together. And the only alternative is to put a backup agent in each VM, which basically undermines the whole reason why I put it in a VM in the first place. Because that means that now I have to, A, share the ownership between the application owner and the ops people, and B, put an agent into the VM that has to be compatible with the user space runtime in the VM. So I'm recreating shared dependencies as soon as I try to actually operate this. So the lifecycle management is still too hard.
Another important thing: in theory, it's all cattle. But in reality, and that's one of the reasons why private cloud isn't going where we want it to go, every VM you see in reality is still a pet. So what people do is they might deploy a virtual image, they might deploy a virtual appliance, but when they update it, they log in and run yum update at this point. So it's really just getting rid of the hardware dependency itself. So, another philosophical view. At this point, we got rid of Unix and vertical integration. We have a nice open ecosystem running on Linux. You have choice on the hardware side. Linux even manages quite well to give you at least source code level compatibility across different hardware architectures, and that works fairly well. It's nice. My home automation is all based on Raspberry Pis, and most of the same stuff works there. It's a different distribution at this point, although I'm working on getting a Raspberry Pi 2 running Fedora, so even that will be the same experience. So that's really nice. And we got rid of the hardware dependency through virtualization for enterprise production. And we have a certain level of automation and efficiency. Now there are some other trends in the market that are changing behaviors right now. And we are in a paradigm shift that goes very deep, because it's driving control outside of the traditional software and IT space. There are techniques and tools like DevOps, tools like containers, that give you finer-grained control over things. But the biggest trend is that everything is software. Software is eating the world. Traditional business used to be about something physical. That was what mattered. And software was the thing for the IT people in the basement. Who watches The IT Crowd? Very good. It's the IT people in the basement who do the software, and you don't have to deal with that. That's changing. Every business now is defined by software. Software is strategic for everything. There are more developers on the business side than in the traditional IT side or the traditional software industry. If you look at car manufacturing today, the hard part about car manufacturing is more and more getting the software right rather than actually building the car, because everyone knows how to build a car. Not much changes there. The engines get more efficient, but you have choice there. And Tesla just proved that you can, from scratch, create an industry-leading car business. It was started by a guy who was a software guy, right? And he just had enough money to get that going and decided, let's build this car, let's hire some people who know how to do that, and let's do it. If you look at the issues that cars had recently, say the Volkswagen compliance issue with the environmental limits, how they cheated: it was software. How did they fix it? They changed the software. If cars get stopped on the highway because someone hacked the Bluetooth thing, or a couple of years ago, they hacked, I think, a Dodge or a Chrysler by playing an infected song through the MP3 player. It's all software. So software is eating the world. That literally means not only that software is in each device; it means that every business is doing software. And that changes the dynamic in the software industry, because it means that business differentiation, not only for software companies, is driven by software. We know that for us, business differentiation is all driven by software.
But now for a car manufacturer, a good part of the business differentiation is driven by their software. And it's an important change. It changes power in companies. It changes who gets to decide what technology to use, what software to use. Traditionally, you had IT experts decide what stack to use, and they could standardize a stack for the company and say, OK, this is what we're going to use. Nowadays, that's decided by marketing executives in the line of business of a car manufacturer. They set the requirements. They set the pressure. And that fuels a lot of the other changes we see. The move to cloud: that's partly a CapEx versus OpEx and flexibility thing, but it's partly also because the requirements change so fast that buying hardware just doesn't work; it just takes too long to buy hardware. So I have to go to public cloud. I need to use elasticity, because otherwise I can't keep up with the business demands. That's why I always need the current version of the code. I'm not going to wait for a release cycle of RHEL. The line of business is telling me I need this new UI integrated with this new feature, and it supports this new protocol on that cloud. And it shifts from a broadcast model to an on-demand model. Everything is on demand in the current world. I have to speak faster now because I'm running out of time. I warned you about that. So this leads us to a two-faced world. On one hand, we have an ops-centric environment, which is traditional IT, and that's not going away. People are still running operations. It might be outsourced, in which case you don't run it yourself, but someone is operating it. It all runs on hardware. It all runs on infrastructure. That's the world where the Linux distribution is at home. We are trying to provide stability. At the end, what Red Hat provides is an insurance policy for people who need to download, install, and update binary components in place. We make sure you can do that. And our community basically does that without a commercial assurance. What we do is give you a pre-packaged software stack so you don't have to compile anything yourself. You don't have to find out how to put it together. You don't have to find out which versions work together, which versions work with your hardware. And it's amazing what we do there. It's really great. And it really works. I can install and update my Fedora laptop, which is really a Fedora laptop, even if it doesn't look like one. I can update that across Fedora releases now with not much problem, other than the freaking proprietary wireless driver I need from freshrpms. And it just works. I don't have major issues, at least nothing that would block me from going to the newest Fedora version whenever it comes out. There's another side now, which is the app development side, and that's growing and growing and growing. That's what drives the industry now. And there you are in an area where you download to build. These people do not install binaries. They compile software. They're like us: they compile software, then test it, and then deploy it. So that's a download-to-build use case. And that's very important, because right now everything we do, the whole structure of the Fedora ecosystem, is optimized for the install-and-update-in-place model. It's not optimized for the download-to-build model. And I'll explain a bit more about that. There's another aspect: it's just sheer complexity. This is from modulecounts.org. I found it on the internet, so it must be true.
It counts modules across higher-level languages in the most popular repositories. I don't know why it says Debian unstable down there. The bottom line is the count of packages in Fedora Rawhide, and above it is Debian unstable. And then .NET has double the number of components right now. And then it goes up to 600,000 individual modules. And they might all be forks of each other, and 90% of them are never used. But the message is: we cannot package all of that in a Linux distribution. It's just not possible. Just too much stuff. And that's the stuff these download-to-build people are pulling from. They're downloading .NET stuff, Maven Central, or RubyGems. And they have the choice of 200,000 Node.js packages. And that's putting pressure on a Linux distribution; it's a big challenge for a traditional Linux distribution. It's just too complex, and there is no critical mass in this user space, and we're just too slow to give them new versions. Even Fedora is too slow to give them new versions, even of the packages we have. That's why they go directly to upstream. And I only have 10 minutes left, so I'll just skip every other word now, so you get the abridged version. And the solution to that is a containerized stack. In the containerized stack, you separate each runtime: you separate the system runtime from the application runtime, and each application gets its own runtime. The host is deployed as an immutable image. Ideally; you don't have to do that, but it really helps. And then each service gets its own container, although you can have multi-service containers. That turns Linux immediately back into a multi-instance, multi-version environment, because you can simply run whatever you want in each container, and they don't affect each other. You completely get rid of interdependencies. The dependency chart basically gets reduced to my container's dependency chart, and the next container can do something else. It gives you maximum flexibility. It also lets me delegate at the container level, because with VMs, one of the problems is that it's still the full operating system, including the system runtime, including the hardware piece, the networking stack. I don't have that in a container. So I can easily give the application people control over the container, still control it from the outside, and they don't get to break out of it. And I can still introspect it. I don't need to do these things I described about backups in VMware, because it's not a black box. A host, or even a privileged container, can inspect other containers. I can just see it. We'll publish the slides, so don't worry about that. So, for the use cases for containers, there are basically three levels. And the first one will make some heads explode, and I think I'll get a call afterwards from Brent Baude, who just did a blog post telling people not to do this. But it's the pet container. And this is the first thing you should try if you're doing containers. I do that on my laptop all the time. I just instantiate a Docker base image, or the RHEL tools image, or a Fedora image, and then I basically treat that like a VM. That's perfect if you want to try something out. You want to install random npm stuff. Or, for my basement, I'm working on a little PXE boot server, because I cannot upload anything through, you know, some Java iLO 32-bit thing. So I need a PXE server. And I don't have a machine for it, and I don't want to install all of that on my host, because I have to find all the pieces.
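Just to show how low the bar is, the pet-container pattern is really nothing more than this; a rough sketch, and the image and package names are only what I might reach for in that PXE experiment:

```bash
# Spin up a throwaway pet container and treat it like a small VM
docker run -it --name pxe-sandbox fedora bash

# ...inside the container, install and fiddle without polluting the host
dnf -y install dnsmasq syslinux tftp-server

# ...and when the experiment is over, throw the whole thing away
exit
docker rm pxe-sandbox
```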
Even on the host, you know, it's gotten much better with DNF, but still, it's just easier to put it in a container. I do whatever I want. I compile some shit in there. If I don't need it anymore, I just remove the container and it's done, right? So I'm basically partitioning my laptop into multiple runtimes this way, just for convenience, to try things out and do things without polluting my core system. And it's going to get more and more like that. There are things I can just deploy. You know, I wanted to try out Mattermost and GitLab. They give you a Docker container with the integrated solution. So you just run it and you've got it, right? You don't deal with anything else. If it's a trusted source, of course; if you do that from the wrong source, you've just introduced a botnet into your environment. But no, seriously, it's a valid use case to run a pet container. And if you're moving an existing application, you know, a lot of people say containers are mode two and then we have this mode one application. It's a continuum, and you can make basically every mode one application work in a container. The next thing is multi-service containers. This is another thing that makes some heads explode, right? Because containers are supposed to be microservices. Not true. Dan Walsh did a lot of work to get systemd working in containers. So if you need an existing application in a container, just, you know, make the CMD /sbin/init, and then you have systemd running in your container. Depending on which version of Docker you're using, you might need to give it some privileges; with the newest version, you don't need to do that. And then you can just install your software like you're used to, and it just works, right? It just starts the unit files and it just works. And you just need to do the mappings on the outside, which is no harder than dealing with firewalls or anything else. And when you go to multi-container applications, that's really where you need to start dealing with Kubernetes, because that lets you orchestrate them. A real container application looks like this, right? Everything is a cluster. We are always in a scale-out world. The host is immutable and you run applications basically as an orchestrated set of containers. I have Kubernetes as my cluster manager, the definition of my application is this set of containers, and then it starts them. And it's always multi-tenant and multi-source. Taking a step back to the philosophical view, compare that to one of my first slides where I talked about the role of RPM when we came from /usr/local compiles to something that scales. Docker slash OCI, the container format we are standardizing on, basically has the same function that RPM had in that transition. We are at a point where, because of the complexity of the stacks, installing things in production on the component level, resolving component-level dependencies, is so complex and fragile that it's almost equivalent to the problem we had 15, 20 years ago, and I'm not really that old, with /usr/local compiles. Especially because the next thing we have to do is weak dependencies, because we have to get rid of that CUPS server in our server install. So we need weak dependencies. Once you have weak dependencies, the installed stack, the late-binding stack that gets bound when you install it, not when you build it, is going to be so dependent on parameters not known at build time that it's going to be too complex to deal with.
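To tie that to the /sbin/init trick from a minute ago: in Docker terms, the early-binding idea looks roughly like this; just a sketch, assuming a Fedora base image, with httpd as a stand-in service. The dnf dependency resolution happens once, at image build time, and the same image then moves unchanged through test and production:

```bash
# Build an image whose CMD is /sbin/init, so systemd comes up as PID 1
# and starts the enabled unit files, just like on a normal host
cat > Dockerfile <<'EOF'
FROM fedora
RUN dnf -y install systemd httpd && systemctl enable httpd && dnf clean all
EXPOSE 80
CMD ["/sbin/init"]
EOF
docker build -t myapp-image .

# Older Docker versions may want extra privileges or a cgroup mount for systemd;
# newer ones largely handle that, as mentioned earlier
docker run -d --name myapp -p 8080:80 myapp-image
```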
When you're outside of the RPM space, that late-binding complexity is already the case today. It's too complex to deal with. So the only way to deal with it is aggregate packaging. You bind early, when you build: when you're in the download-to-build mode, when you're an application developer in an enterprise, you build the artifact, then you move that same artifact through build, test, and production, you automate that process, and you use the aggregate package to manage the installed artifacts. So we go to an early-binding model. Interestingly, there's a precedent for this. Years ago, I was in the field organization in Europe for Red Hat. And a big thing for us back then was installing Oracle for customers, in RAC clusters and the like. And you never wanted to run the Oracle Java installer on a production system, because it was very fragile, it would not always work, and it wouldn't take care of dependencies because it wasn't an RPM, right? So what we actually did was run it on one machine, then build a binary RPM of the resulting installation, and distribute that through Satellite. So basically we moved to aggregate packaging to deal with the fragility of this Java-driven installer and fragile software stacks that didn't do dependency management. This is basically what Docker does, very conveniently. It packages the stack; it's the next logical step. It doesn't invalidate RPM, because it's a level above it, right? But it moves the consumption of the RPM dependency resolution into a pre-stage, for the application developer. Not for the system people, or maybe for them too, but that's a different discussion. And then you use that to distribute it. And it allows you, in a clean way, to add other packaging artifacts on top of that. One thing that Docker doesn't do: it only gives you container-level packaging. There's another thing I'd like to take a look at, which is Atomic App, or the upstream project called Nulecule. And I think we have some talks about that. So this is basically the next step, where you go beyond packaging individual containers: I described an application as always consisting of multiple containers and being orchestrated through Kubernetes, but neither Kubernetes nor Docker gives you a transport format for this higher-level construct. Today I can do a yum install of FreeIPA and then an ipa-server-install, and I end up with a ready-to-run, orchestrated, parameterized instance of IPA. When I want to do that in containers, I'm back to copying templates around, so vim becomes my ipa-server-install, which is really great, because it's a functional regression. That's what Atomic App solves. It gives you a way to package the aggregate metadata and defines a way to parameterize that on deployment. Putting this all together, this is kind of the enterprise stack. It's a bit of a jump, but I warned you that there would be some conceptual jumps in my presentation. If you put this all together, we basically operate on three layers: the infrastructure, the application runtime, and the lifecycle management of application content. Mapping that to what we do: we have a bunch of different cloud options, we have Atomic Enterprise as the runtime, OpenShift as the lifecycle management, and our frameworks on top. That's a logical view, and I'm out of time, so I'll just skip to the last slide to close the circle; it's only logical.
If we look at what's happening in public cloud right now, we're actually seeing a move back to vertically integrated Unix-style stacks. Every cloud vendor and every proprietary private cloud offering is trying to do vertical integration up to the PaaS level. If you're going to Amazon, they're trying to get you to use all their services. And they tell you, hey, don't worry, it's just a database, don't worry which one it is. Our role in this model is the same role we had when we broke up the vertically integrated Unix stack. We are breaking up the vertically integrated cloud stack, and we're giving people alternatives and the ability to define their applications independent of the vertically integrated stacks. That gives them an open ecosystem to choose from, and then they can run that application, without the application actually knowing the specifics of the underlying stack, across all these options. That's the vision of where we're going with the operating system here. That's it. Thank you. And that's a logo I'm proposing. And I'll be around for questions, and apologies for going over.