OK, so my name is Patrick Quairoli. I'm the director of alliances and embedded technology at SUSE. Joining me today is Tui; Tui is the program manager for our embedded program. I'm going to talk to you a little bit today about what we do in the embedded space and what we do in open source in general. Feel free to ask questions at any point, and I'll try to get them answered for you. OK, so within SUSE, we've set some guidelines for how we want to be perceived in the market, and what we focus on is controlling your infrastructure, optimizing operations, and innovating faster. We do this in a number of different ways, and as we go through the conversation today, we're going to harken back to some of these topics. How do you maintain control over your infrastructure? In this case, it could be either your own infrastructure or the infrastructure that you're providing to your customers through your solutions. How do you optimize operations? How do you really take advantage of not only your support organization but your development organization, so that you can build solutions focused on the industry you're interested in, and not on something like the operating system? And innovating faster: we'll talk a little bit about how we approach innovation at SUSE and what we do that makes us feel we are highly innovative. So people have been coming up and saying, hey, I didn't realize SUSE had an embedded product, or I didn't realize embedded SUSE was a thing. We don't actually have a project, and we don't have a product. What we have is a program. If you were in the keynote session earlier this week when Linus was asked whether there would be an edge kernel, a small kernel just for edge infrastructure devices, his response was pretty much on par with the way we approach things: no, Linux is a general-purpose operating system.
And the more you focus that operating system, the more you focus on the solution you want to provide, and the more planning you put into how you want to present yourself to the market, that's what creates the opportunity, the innovation, and the value. So it's our objective to focus on the operating system so that you can focus on what you want to focus on. At the same time, we don't put out a different kernel for the embedded space. Our Linux kernel is the same Linux kernel that we run on the Raspberry Pi as on IBM System z mainframes, as well as Intel, Power, and everything in between. It's a common code base that we use, maintain, and support. But we do give you opportunities to scale and shrink that code base in a supportable fashion, so that you can take your solution to market knowing that we still run it through the same QA and validation process, and if you ever call for support, we will provide it. At the same time, we've got a program in place because we recognize that the price you pay for an enterprise Linux distribution is going to be very different for something you may want to embed in a very small solution that's going to scale to millions of units versus tens of thousands of units. We've actually been in the embedded space for quite a long time, probably 15 years or so, and our first foray into the embedded space started in retail. We worked in the retail market on cash registers and point-of-sale solutions. We expanded into kiosks, and little by little people came to us and said, hey, could I include Linux or SUSE in this solution? Maybe it's a data warehouse. Can I include SUSE Linux in this key server for encryption? Can I include SUSE Linux in this firewall option for a rack-mount server? Can I use it as a virtual appliance inside of Amazon?
Can I use it inside of this medical device, these imaging devices? Can I use it for mail metering solutions? We've expanded into all of those different areas through relationships and partnerships, and it's not one size fits all. But one kernel has worked in all these spaces for a number of years. Now we're at the point where you say "embedded" to somebody and everybody immediately thinks of a BeagleBoard or a Raspberry Pi. Well, we support ARM and we support x86. But the important thing is that we're going to support you based on what your solution is. And you don't have to hesitate over whether something counts as an embedded solution or not. Well, it depends. From our perspective, if it's a fixed-function system and not a general-purpose system, we typically consider it an embedded solution. At that point, we encourage you to reach out and send us an email at embedded@suse.com, and we'll work with you to make sure our platform can meet your needs. So we've got a number of offerings in this space to really partner with you. We don't want to approach the embedded space as a customer space; we want to approach it as a partnership. We have support offerings, feature capability offerings, beta programs, resale arrangements, all different types of arrangements in the market that we can tailor to fit your needs. And again, very few of the embedded partners we work with are on the same model, and that's one of the most important things to realize. So with that, what I want to do is talk a little bit about how we can offer a secure, a flexible, and most importantly a supported environment for your embedded solution. Building an embedded solution is hard. Actually, building any solution is hard. And most of the work has to start early, in the planning process.
Again, harkening back to the keynotes, one of the key takeaways is that security cannot be an afterthought. You have to start with your attack surface in mind. You have to understand what controls you're going to need, whether through networking, access control lists for users, or access for different devices. You need to think about security as the first step. And one of the major points about planning for security is actually one of the last steps the end-user customer touches: patch management and upgrades. So many times we see solutions hit the market, get updated once or twice while interest and innovation are high on that particular product, and then we don't see that product updated or innovated after 12 months. Well, that's a vulnerability. It's a huge vulnerability. And that's why, even though the number of denial-of-service attacks in the security industry is increasing only moderately, the effects of those attacks are becoming greater and greater. We're talking hundreds of thousands of gigabytes of denial-of-service traffic, but typically from the same attack vectors. So one of the things we have to talk about is why security is so important, and why it's sometimes a thankless job to build and manage a Linux distribution. And that's what we want to talk about. During the planning process, the coding process, the building process, and the development process, you're working in a closed system. You have an idea of your target platform. You have an idea of your target operating system. And obviously, we're all here at the Embedded Linux Conference; everyone knows that the target platform of choice right now is Linux. And you have choices to make. Do you roll your own distribution? Do you take a community distribution?
Do you partner with an enterprise Linux distribution to get that support and service? What we're going to talk about a little bit today is the fact that if you do partner with an enterprise Linux distribution, we have tools to support you during your development process, your build process, and your deployment process, and more importantly, from a security and patch management perspective, we have tools and solutions that support you in that environment as well. This gives you the opportunity to build a custom Linux image that's supportable, flexible, and secure. So we're going to talk about a few different products and solutions that we offer. The foundation, from our perspective, is SUSE Linux Enterprise Server. Again, this is the same operating system that we deploy to all of our customers, and the same OS that we support for all of our customers, and I think that's the most important thing. We have a worldwide support organization. We have engineers and a security team focused on watching critical vulnerabilities and providing those patches back upstream into the kernel and back into our code stream. We also provide a solution we'll talk about in the next slide called Just Enough Operating System. This is typically where people perk their ears up, much like in the robotic videos earlier this morning. They want to learn a little more about what we mean by Just Enough Operating System, or JeOS (pronounced "juice") as we refer to it. We're going to talk about package building and image building, how we can support multiple platforms and, in some cases, multiple distributions. We are an open source company, and we recognize that community and collaboration are extremely important in an open source environment. We'll come back and cover some of these topics as well.
So I highlighted a few of the platforms that we support here. I mentioned them earlier: ARM, x86, Power. We've recently announced support for the Raspberry Pi, so our support is getting broader and broader. The reason we can offer so many products and solutions is that we've been doing this a long time. We started on x86 and quickly moved to the mainframe, and as new technologies were introduced, like Itanium and ARM, we invested in those as well. But the important part is that you are, in most cases, building critical systems, and we can't stress enough that reliability and availability are extremely important in this space. That's one of the reasons you've chosen Linux for your environment: its reliability. We put a lot of additional effort into making sure our solution is highly reliable and highly efficient, and we really focus on making sure you can take advantage of innovations in the Linux kernel as quickly as possible, so that you can support the latest innovations you want to bring to market within your solutions. The other thing we hear over and over in the embedded space is the term "long life." We need a supportable Linux for five years, ten years, eight years, whatever the case may be. So we have what we refer to as a 10-plus-3-year support lifecycle. When we start a code stream, we will support it in a generally supported manner for 10 years. Within each of those service pack releases, we offer long-term service pack support. So even though we provide only a brief overlap period in which you would upgrade from service pack one to service pack two, if you need to stay on a particular service pack, whether for development reasons or for compliance reasons, we offer the ability to get long-term service pack support.
And when general support ends at the end of the 10-year release cycle, we still offer an additional three years of long-term service pack support for that final code stream. So in essence, if you want to run on a common code stream, this is one of the best models you're going to find out there. More importantly, if you really want a very stable and static code stream, you typically want to look for the last service pack release within any given cycle, because that's the one where the code gets locked down. You get fewer and fewer feature enhancements, but you also have fewer and fewer bug fixes and fewer and fewer security vulnerabilities, because, as they talked about in the keynote, when you release a product there are no known vulnerabilities; the vulnerabilities only come after that. So obviously, after three or four years, the vulnerabilities tend to lessen within any given code stream. Now, this is a sample. We may do four service pack releases in a code stream; we may do three. We've varied between five and four in the most recent code streams, but we always support two code streams in our distribution. If anybody's keeping score, that's currently SLES 11 Service Pack 4 and SLES 12 Service Pack 2. But the most important thing when you start to think about a code stream is: what does it provide me, and what's its supportability? So we talked a little bit about rapid innovation at SUSE. It was actually about five or six years ago that we decided that maintaining a common kernel version within a code stream just wasn't worth the effort anymore. We were getting requests from our hardware partners and from our customers to support new hardware and new enhancements in the Linux kernel. It could have been virtualization. It could have been containers. It could have been new chipsets.
And we basically made a determination at one point that it's not effective to backport tens of thousands of lines of code, 20,000 lines in some cases (I'm exaggerating), into an old kernel just to maintain that kernel version, because after a few thousand changes, it's really not part of that kernel tree anymore; it's changed so dramatically. So one of the things we do in order to provide you innovation is focus on API and ABI compatibility. We want to make sure the applications that sit on top of the kernel are highly compatible, because ultimately what we are supporting is the kernel and the packages; what you put on top of that is your responsibility. What I did here is put together a little sampling of how SUSE, in comparable releases against some of our competitors, has constantly been innovating: taking code from the upstream kernel, basing on current kernels, and making sure customers have access to the most recent technologies, whether that's networking technologies, memory protection technologies, or other enhancements on the processor side. We're always trying to stay as close to the mainline kernel as possible and do fewer backports. Minimizing backported patches provides better peer review, because as code goes upstream into the kernel and more people look at it, we've got more eyes on it. If we were to backport all this upstream code into our specific code tree, then it's just us looking at it, and that's not the way we want to work. We are always upstream first, and that's going to be our focus. And I think yesterday Ryan Ware from Intel was talking about critical vulnerabilities and CVEs.
Again, this is one of the reasons you want to focus on a modern, current kernel: as you backport fixes for critical vulnerabilities, there are lots of other things inside the Linux kernel those backports could be touching, and you're just modifying as you go along, stamping out bugs as you go backwards and backwards. So we always try to keep moving forward. That's one of the ways we try to innovate. Now, a recent study found, and I think we actually have a blog post on SUSE.com about this, that about 85% of developers admit they've rushed applications to market. I'm not going to say they've done this maliciously, right? But the moment you consider rushing your application to market, you're not looking at the whole picture. And this is where we think security plays a huge role. We recognize that our enterprise distribution itself is made up of about 3,000 packages. Well, that's obviously not something you want to put into an embedded system. If you install everything SUSE has to offer, you have an extremely large attack surface: every one of those applications potentially has one or more vulnerabilities, including the Linux kernel. So what we've started to do is provide this concept of Just Enough Operating System. It's an ISO image with a few templates associated with it. You can build a template for a container service, a very small application server, a very small web server, whatever the case may be. We started with these templates so that, if you want to, you can deploy this Just Enough Operating System in any of the virtualized environments listed there. Now, this is not a physical distribution. But when we provide Just Enough Operating System in these virtual environments, we also provide something along with it that we call a Kiwi package.
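The attack-surface point above is easy to make concrete: every installed package is something an attacker can probe, so a first hardening pass on a fixed-function image is simply auditing what's installed. Here's a small sketch of that idea. On a real SLES or JeOS system the list would come from `rpm -qa`; the package list and the "doesn't belong" filter below are invented for illustration, not a SUSE recommendation.

```shell
# Stand-in for `rpm -qa` on a real system, so the sketch is self-contained.
rpm_qa() {
  printf '%s\n' kernel-default systemd bash zypper openssh cups libreoffice
}

# Count the footprint: a full enterprise install is ~3,000 packages,
# a trimmed fixed-function image should be a small fraction of that.
total=$(rpm_qa | wc -l | tr -d ' ')
echo "installed packages: $total"

# Flag packages that have no business on a fixed-function device
# (the pattern list here is purely illustrative):
rpm_qa | grep -E 'cups|libreoffice' | while read -r pkg; do
  echo "review: $pkg probably doesn't belong on an embedded image"
done
```

Run against a real image, the same two-step (count, then filter against a blocklist) gives you a quick measure of how far you are from "just enough."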
I'll talk a little bit about Kiwi as we get further into the presentation. Right now, our Just Enough Operating System is a very lean, function-specific operating system where you start with a base image of roughly 300 megabytes, and then you add on what you need. It gives you a starting point. So instead of starting from ground zero, we start you from what we consider our base level of support. This is the kernel that, if you strip away everything else, will run on ARM, Intel x86, and Power. This is the kernel that we support, the kernel we watch for critical vulnerabilities and apply patches and updates to. And we'll talk a little later about how you can take advantage of this with Kiwi in a build process. OK. So: packages, platforms, and repositories, oh my. How many folks have heard of the Open Build Service? It's not a trick question. OK, some hands. Good. The Open Build Service is a generic system to build and distribute binary packages from sources, and here are the key last words: in an automatic, consistent, and reproducible (or repeatable) way. This is key for us. We have a lot of partners and a lot of customers that use it, with SUSE and sometimes with other distributions. The Open Build Service is a tool we've actually been using for close to two decades now. Internally, it started out with a different name; we called it autobuild. When we recognized that people wanted Linux on more than one platform, we realized we needed an automatic, consistent, and reproducible way to build multiple packages for multiple platforms. So when we release a product, we release it on all of our supported platforms on day one, and that's because of this service. It's a toolkit we use in-house, and one that, probably about a decade ago, maybe 15 years, we open sourced.
If you wanted to go build your own build service, you can go out to Git, get the code, and build one of your own. Mileage may vary, but it's something we use, and we constantly update it because it's part of our infrastructure; this is what we use for all of our products. The build service can build packages for multiple distributions. We support RPM and Debian packages, and we support all of the architectures listed there from a build perspective. If an architecture isn't listed there, you can use QEMU to emulate hardware you don't have. In the back end of the build service is another tool called Kiwi that we're going to talk about in detail; I'm going to keep those conversations separate, so we'll get into Kiwi in a few minutes. Most importantly, one of the things the build service offers is a repository server. This gives you the ability, as you're developing your application, to have a toolkit in which you can put a framework around your build environment, your target platforms, and your target packaging format, and maintain a repository for yourselves. You can lock that repository and branch it. You can then use it to do patches and updates, and there are collaborative functions in there for the developers as well. So let's talk a little bit about how the build service works. There's lots of material online, and you'll learn a lot more from that than from me, but I want to give you a couple of highlights. We build from source, and we can also build from binaries if that's the case. We support not just SUSE but well over a dozen distribution versions: if you think there are about eight or nine main distributions out there, we typically support two or three versions of each if you want to build a package.
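The branch-build-commit workflow just described is usually driven through OBS's `osc` command-line client. Here's a dry-run sketch of a typical session; the project and package names (`home:alice`, `myapp`) and the repository targets are made up for illustration, and the `run` wrapper just echoes each command so the sketch can be read (and run) without an OBS account. Swap the wrapper for real execution to use it.

```shell
# Dry-run wrapper: prints each osc command instead of executing it.
# Replace with  run() { "$@"; }  to run against a real OBS instance.
run() { echo "+ $*"; }

run osc branch home:alice myapp          # branch the project to work on it
run osc checkout home:alice:branches:home:alice/myapp
run osc build openSUSE_Leap x86_64       # local test build for one target...
run osc build SLE_12_SP2 x86_64          # ...and for an enterprise target
run osc commit -m "fix startup race"     # push sources; the server rebuilds
run osc results                          # check build state per repo/arch
```

The point of the sketch is the shape of the loop: branch, build locally for each target, commit, and let the server rebuild every configured distribution and architecture automatically.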
So if you want to use the build service to build a SUSE package and an Ubuntu package and a Red Hat package, or a CentOS, Scientific Linux, or Debian package, you can do that. You can have multiple targets built with and through the build service. Now, the package sources and the individual packages are put into projects, and you can obviously link multiple projects together to make larger and larger builds. You can also branch those projects. Say you've got a stable 1.0 release that is considered your GA release. You can take that project, branch it, and start working on your packages, and then submit those packages back through the build service to, let's say, your release engineering team or your QA team, because they'll have access to your repositories. Now, you can maintain these repositories on-site. We support multiple repository types: YaST, RPM-MD, APT, Arch. But we also allow you to pull from public repositories, so SourceForge, Git, and other version control systems can be pulled into this as well. The nice thing is that we have a service running in the background that monitors those repositories. So if something changes on Git, say new code was submitted, then in your nightly builds, if you're doing nightly builds, you can actually force a rebuild when one of those dependencies changes. What we do is use virtual machines or chroots to build out the infrastructure. It happens in four phases: the pre-install phase, the install phase, the build phase, and then the post-build phase. The pre-install phase is where everything is defined for your build environment: you define your file system, you define your toolchain, you define your bootloaders and the packages that you want to include.
Then you've got the install phase. That's where the VM kicks in, where the chroot takes place and actually builds out the machine that's going to do your package build. At that point, it takes your packages and does the actual build. You can define what the compiler switches are, and they can be predefined if you choose. There are recipes out there you can choose from that say, if I want to build this on Power, it's going to use this version of the toolchain and these switches; if I want to build it on ARM, it's going to do this. And that means that as you make code modifications to your repositories and resubmit them, or as those repositories change upstream, the builds happen automatically. If a build fails, it's going to give you an alert and back everything out, and you're going to have to go in and fix it. Sorry, we don't do that for you, right? But if the build succeeds, you get to choose what happens with that repository. You can take that successful build and move it to a repository, and that then becomes what your QA team and your maintenance team test against, or what your maintenance team uses to recreate problems or issues that may happen. That backend repository checking is called Download on Demand. So again: you pick your target distribution, you pick your target platform, you define a package, and now you've got a repository inside your organization for your solution. Remember, we're not talking about building an OS; we're talking about you using these tools to build your solutions, right? The OS is not in question. Now, if you're not that interested in doing all that, we have another solution, and I'm only going to spend a minute on it. We introduced a concept called Package Hub. Not every organization has developers, and in a lot of organizations, despite the fact that we provide over 3,000 packages in our distribution, somebody wants package 3,001.
Sometimes it's an open source package and all they really need is for someone to do the effort and compile it for them. So if you need community packages and you want to run them in a way that's not going to break SUSE's kernel, that's what Package Hub is for: community packages, actually built with the same backend build service. If there's a particular open source package we don't provide, it's likely already been built by the community and is already up on Package Hub. You can just download it and include it as part of your offering. Shortcut. Okay, so you've defined your projects, identified your packages, your sources, your RPMs, and your binaries; you've created your projects, set your targets and your platform, built your repositories, and now you've got a bunch of packages for your application that you want to do something with. This is where Kiwi comes in. We've built the packages; now we actually have to build an image. Now, sorry to say, we don't build images for competing distributions. If you're going to build an image, you're going to build it on SUSE. We kind of think of Kiwi as a soda vending machine: put the quarter in and you get the soda you want, and we think this is the best one. So what is Kiwi? It's a command-line tool, written in Perl, for building Linux images, and it supports a variety of image formats. Everything SUSE builds also goes through Kiwi. So you can build (and my eyes are going) ISO images and live CDs; you can build PXE boot images; hard disk images, if you just want to dd them over; USB keys; virtual machine appliances based on OVF, VMDK, QCOW, and more; and we've added support for containers. You can use Kiwi to build cloud images, whether for OpenStack Cloud, for Azure, or for Amazon.
All these things can be done with Kiwi, and the nice part about it is that you don't really need a whole lot of infrastructure. You can actually run it on a laptop, right? Just be careful with it, because it does run as root. Again, it's another open source project that we provide; it's available on GitHub, and there are multiple documentation sources out there. One of the things we do is provide Kiwi with every version of SUSE Linux Enterprise. It's not installed by default, but the packages are there. I think in version 11 it was part of the SDK, and in version 12 it's just part of the regular distribution, so you can choose to install the Kiwi package. Now, this is where we go back to Just Enough Operating System. We provide you this concept of Just Enough Operating System, and we kind of limit it to virtual machine use because that's the easiest way for us to get it out there. But when we provide those five or six different templates you can choose from for the JeOS image, we also provide you the Kiwi template for it. So it gives you a starting point: you can start with a minimal Kiwi build and include the packages you built with the Open Build Service, which also does dependency resolution and builds the dependent packages as well. So your repository actually has everything you should need inside it to build your solution, and you can then run this through Kiwi. Now, if you're running on the public Open Build Service that's available online, Kiwi is actually one of the targets in the build. I talked about the Open Build Service being able to build for multiple distributions and multiple platforms. Well, you can also build to a Kiwi image, and then you get those choices we just talked about: virtual machines, physical machines, ISOs, whichever type of image you need to deploy out to your customers.
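To make the "Kiwi template" idea above a bit more tangible, here's a sketch of a minimal Kiwi image description. The element names follow the general shape of Kiwi's `config.xml` schema, but the schema version, repository URL, and package set below are placeholders you would replace with your own (point the repository at the OBS repo you built your packages into). The actual build command is shown only as a comment, since it needs a SUSE system with Kiwi installed and root privileges.

```shell
# Write a minimal (illustrative) Kiwi image description to myimage/config.xml.
mkdir -p myimage
cat > myimage/config.xml <<'EOF'
<image schemaversion="6.1" name="MyEmbeddedImage">
  <description type="system">
    <author>you</author>
    <contact>you@example.com</contact>
    <specification>Fixed-function appliance image</specification>
  </description>
  <preferences>
    <version>1.0.0</version>
    <packagemanager>zypper</packagemanager>
    <type image="iso" primary="true"/>
  </preferences>
  <repository type="rpm-md">
    <source path="http://obs.example.com/repo/myapp"/>
  </repository>
  <packages type="image">
    <package name="myapp"/>
  </packages>
</image>
EOF
echo "description ready: myimage/config.xml"
# On a SUSE system, the build would then look something like:
#   kiwi --build myimage -d /tmp/myimage-out
```

The description plus your repository is the whole input: Kiwi resolves the packages, builds the root tree, and packages it into whichever image type the `<type>` element asks for.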
So you can pull from your repositories, including your private repositories, and the Kiwi build actually happens in two steps. Much like the build service, when it prepares the build environment it uses a chroot. Based on your configuration and your repositories, it's gonna go and build an unpackaged root file system that has everything you need to create your image. At that point, it takes the image description, and the image description can contain lots of different things: it can say there are particular additional files you wanna pull in, it can do cleanup, it can do some scripting changes. The image description then takes that unpackaged image and packages it up into the final destination. So if that's an ISO or a live USB key, whatever it is you're providing to your customers as the end solution, that unpackaged image gets packaged up and the chroot is destroyed. This leaves nothing behind in the build process; you can't go back in and see what you wanna tweak. If you do need to tweak something, you're gonna find out about that during the installation process when you go through your QA, and at that point you can go back and update your image descriptors: change your boot image, change your config files, adjust at what point certain files get brought into the root before it gets packaged up. So what happens after you've built your image? QA. This is the thankless job, the job everyone hates. We use an open source tool called OpenQA, and this is actually used by a few different projects out in the world today: SUSE uses it, Fedora uses it, a number of our partners use it. The focus of OpenQA is to test applications and operating systems, primarily via a GUI console.
You have the ability to check various code paths and various installation options, and it basically comes down to two basic concepts: running a job, and this thing called needles, which is comparing expected outputs to actual outputs inside the distribution, or the image I should say. The way OpenQA works is that there are RESTful APIs you can use to automate the entire backend system or make calls into it as you need to, but there is also a web interface. The web interface is responsible for coordinating all the workers and all the questions that come out, but ultimately the worker schedules and the pool of resources you have there are all targeted at managing that os-autoinst that's listed there on the side. Basically, what this does is use QEMU to build an installation framework for the packages or the image that you've just built. It then uses this concept called needles to compare screenshots. You set specific needles for specific screenshots and compare them for certain information. If that information is there, it's gonna continue on with the process. If that information is not there, maybe the needle will sit and wait for a while, maybe it'll ask for user interaction, maybe you've got to snap another needle, update your expectations, and then put that JSON file back into os-autoinst in order for the quality control to continue. Now, it doesn't just do needles; this can be a fully automated QA process. It'll take whatever images you wanna push into the virtual machine, or QEMU, start that up, and run it through a scripted install.
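That JSON file behind a needle is quite small. A needle pairs a reference PNG screenshot with a JSON file describing which screen region must match and which test tags it applies to; here's a sketch of what one can look like. The tag name, coordinates, and filename below are invented for illustration, not taken from a real test suite.

```shell
# Write an illustrative OpenQA needle descriptor. In a real suite this
# would sit next to a matching inst-welcome.png reference screenshot.
cat > inst-welcome.json <<'EOF'
{
  "tags": ["inst-welcome"],
  "area": [
    { "x": 310, "y": 200, "width": 240, "height": 80, "type": "match" }
  ]
}
EOF
echo "needle written: inst-welcome.json"
# os-autoinst compares the live screenshot's region against the PNG;
# only when it matches (within tolerance) does the test keep rolling.
```

When a needle stops matching because the UI changed, you snap a new screenshot, update the JSON, and the automated run continues from there, exactly the loop described above.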
It takes that scripted install, runs additional scripts, captures the output of those scripts, and tells you where the successes and the failures are. You can then report those bugs back to your developers; fixes go back into the Open Build Service, the build service builds the targets, nightly builds come back out, the images are there, everyone sits down the next morning, and it all starts over again. So when we're talking about controlling your infrastructure and optimizing your operations, these are some of the tools that we use. These are tools that you can use; they're all open source, all freely available. The only thing we are actually selling as a product is the operating system. Okay, so patches and updates. When you buy the operating system, or when you work with us to include it as part of your solution, we have a free technology called the Subscription Management Tool. It's a very rudimentary tool. The goal is to give you a way to get the patches from SUSE into your organization and determine what you want to do with them. So it answers the questions: How are you going to handle maintenance and security patches? How are you going to push those maintenance and security patches out to your customers? How are you going to report on the compliance or the vulnerabilities that exist in the solution you've created? The Subscription Management Tool is a really simple system. It basically acts as a proxy server inside of your organization. It reaches out to SUSE Customer Center, gets access to the patch and update channels that we are constantly updating with new features and capabilities, and lets you put them into one of three areas. You can take those packages and put them into, basically, a staging area to test and validate. So you've got your replica, you've got your staging area, and then you can determine which packages you want to put out there.
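The real workflow uses SMT's own mirroring and staging tools, but the three-area idea can be sketched with plain shell. The directory names and package file here are just illustrative stand-ins for SMT's repositories, not actual SMT commands:

```shell
# Illustrative stand-in for the SMT flow: replica -> staging -> production.
mkdir -p mirror staging production

# 1. "Replica": pretend we pulled a patch down from SUSE's update channels.
echo "fixed payload" > mirror/fixed-openssl-1.1.rpm

# 2. Promote the patch into staging, where your own QA validates it.
cp mirror/fixed-openssl-1.1.rpm staging/

# 3. Once validated, publish to production, which your customers pull from.
cp staging/fixed-openssl-1.1.rpm production/

ls production
```

The point of the middle hop is that nothing reaches your customers until you have explicitly promoted it past your own validation step.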
Once you put it out there, all your customers are going to get access to it as long as they can reach your backend system. If your customers are disconnected, maybe operating in an air-gapped fashion, you can put it out there via USB key, put it out there as an OS image, or package up a repository and send them a disk of some sort. So there is a sneaker-net component at the last mile if that's something you do need. If you need greater control and capabilities beyond that, if you actually need to do additional post-scripting where you want to break customers out into specific groups, then we've got a much more robust, higher-level product called SUSE Manager. Okay, so I want to stop here. That was a ton of information; it actually went by pretty quick for me. Hopefully you guys weren't tortured. Any questions? Comments? Snarky comments are welcome. Guy in the back's laughing. Okay, so the rest of this is basically the commercial. We are an open source company. Some other companies may have tried to spread FUD, but we are an open source company; everything we do is out in the open. We don't do a whole lot of acquisitions. We've done a few recently, but those were actually open source projects, and those resources are still working in the open source community. Everything we do is open source first and foremost. We work very closely in the community, we try not to put things on rails, and we are as collaborative as we can be. That's not just in the open source community, it's also with our partners. I can't tell you how many times we go to conferences and people say, God, we love you. Hopefully we're not stretching ourselves too thin, but we do appreciate the customers and the partners that we have. These are some of the projects that we work in.
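For the disconnected case, the last-mile hand-off is ultimately just moving a repository as a file. As a generic sketch of that sneaker-net step (plain shell, not an SMT command; the repo name and package are invented):

```shell
# Generic sketch of the sneaker-net step: bundle a repo for offline transfer.
mkdir -p updates-repo
echo "dummy package payload" > updates-repo/security-patch.rpm

# Bundle the repository into one archive to copy onto a USB key or disk.
tar czf offline-updates.tar.gz updates-repo

# On the air-gapped side, you would unpack this and point the package
# manager at the resulting directory; here we just list the contents.
tar tzf offline-updates.tar.gz
```

A real deployment would carry repository metadata as well so the target system's package manager can consume the directory as a repo, but the transport mechanism is this simple.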
These are not all of the projects, but some of these are actually our own projects, and some of them are shared projects. Obviously, the Linux Foundation. Ceph is an area of high interest for us nowadays with distributed storage, as well as OpenStack. We are sitting on the board in a number of these projects, and we participate with our partners, and sometimes with our competition, in all of these projects as well. This is a little bit about us if you're not familiar with us: we were the first enterprise Linux distribution, 24 years of experience, thousands of certified systems, et cetera, et cetera. And that's really all I have. So if you guys have any other questions, I'm here. We will be at the booth tonight; I think there is a happy hour tonight, so we're looking forward to that. But I do appreciate your time and your attention. And again, if you have any questions, feel free to ask now, or I'll hang out up here for a little while longer. I think we have some time before the next session starts. Thank you.