So most of the talk, that seems really loud to me. Is that loud to you guys? Yeah, don't do that. So most of the talks that we've heard today have been about why containers are great, how we're using containers for new technology, how we're using containers to improve things on the server side, and how OpenShift makes it all fantastic. Vashik will absolutely stand up here and say OpenShift will make it all fantastic. See? But we can also abuse this sort of technology. We can take it and twist it in ways it wasn't quite intended to go, but we're going to do it anyhow, and it actually works out pretty well. So how do I know this? Because I've been a sysadmin since around 1999, and thank you, Nigel, for making me feel old on stage. Fantastic. And I've been a member of the CentOS project since 2004. I decided that I wanted to dabble in a little bit of evil in my career, so I moved from education consulting into oil consulting, with a little bit of other things in there as well. And about two years ago I started getting involved in the 64-bit ARM effort on the server side, development for ARM workstations and ARM servers. Jon Masters was actually the person who got me involved in that, and I've been doing it now for the last two years, give or take. There have been a lot of lessons in getting the ARM folks to stop thinking embedded and start thinking server, and the more we played around with that idea, the more we realized it actually goes the other way as well. You can take a lot of the server mentality and transfer it into the embedded space, and it makes a lot of sense. So the first thing we have to look at here is the idealistic approach for IoT and for how Linux distributions are actually done. And one of the ways we do that is to look at what drives IoT. Because most of the folks here are either ops or engineers, and that's great.
That means that we want to play with this technology, we want to tinker around with it, we want to build and develop it. But we have to come up with a way to pitch this convincingly to management and say, hey, here's why you should let me do this in my spare time. Here's why it's a good thing. And really it comes down to money, because a lot of the money that's projected for IoT over the next five or ten years gets pretty ridiculous when you start talking about the amounts. The low estimate for 2020 was given at $470 billion. That's a pretty significant chunk of change. Now, that does encompass everything IoT related. So you're talking about the low-end RFID tags all the way through gateways, through Amazon with Alexa, through Google with whatever the hell it is Google's plugging in with Nest, a lot of the various things like that. In addition to all of the individual devices, you've also got all of the data that comes out of that. Somebody has to process that data. Somebody has to look at collecting it, figuring out if you're going to track the actual data in real time, if you're going to look at trending data, if the numbers matter, or if all you care about is: are we going up, are we going down, is it cyclical, that sort of stuff. But these are basically the drivers for why people want to get into the IoT market. The other side of this is that, in addition to abusing the data analytics side of it, you actually get a chance to look and see how quickly you can respond to something with your business. And if you are using triggers based on the data, then you don't have to make a lot of the low-end decisions anymore. You can sit there and say, okay, when I hit this amount of widgets in my company, order more. You don't have to have a person who sits there and tracks that inventory and says, okay, we're getting close, I'll go ahead and put that order in. You can just program this automatically and essentially start doing CI on your business as well as on your code.
Everybody stands up and says IoT is horribly insecure. For the most part, they're right. There are ways around this. And this is one of the places where I think that a Linux distribution, be it Fedora, be it RHEL, be it CentOS, actually makes a lot of sense in the IoT landscape. Because right now you've got companies like Samsung doing all of their, well, I won't just single out Samsung, but all of the IoT companies are doing their own stack, their own Linux deployment, their own software deployment, and these are primarily hardware companies. Lots of the hardware vendors don't necessarily know the right way to deal with software. Who does? Ostensibly, software developers. So, I mean, IoT is kind of, ooh. Hey, it doesn't like that slide. Okay, there should be, yeah, all right. It won't do that. That's interesting. It's skipping a thing. So, basically, what happens when you have IoT companies who are developing their software in a silo is that they'll oftentimes, especially the hardware companies, skip putting out updates, because once you've bought their product, you're a customer. Everything after that, you become a cost center. And this is one of the places where having a distribution model actually helps, because they don't have to worry about each of the individual CVEs or each of the individual security problems that come with building a Linux distribution up from scratch. You can farm a lot of that work out onto a stable base platform like Fedora, like CentOS, like RHEL, and then just focus on what you want that piece of hardware to do. So if you're making a camera, you can just focus on the user interface for the camera and let the CentOS team or the RHEL team or the Fedora team worry about that pesky OpenSSL update that just came out, or dealing with Bash, or any of the other things. And everybody looks at IoT and says they're talking about small-scale devices.
We're talking about chips the size of your fingernail. Well, that's not, we'll see if it comes up with the slide that I want it to. Yeah, we're talking about the servers, or the IoT boxes, on the left, not the ones on the right. The one on the bottom is a legitimate product. You actually can go get that. They make Bluetooth heated insoles for your shoes. Why? I don't know, but people buy it. The equipment on the right is an HP Enterprise IoT gateway. It is no different than your typical x86_64 server. The fundamental difference for this piece of hardware as an IoT platform versus anything else is the way in which the software is delivered. These gateways are done as a firmware-style operating system. So there isn't necessarily the login that you would expect. You're not going to sit there and run yum update or dnf update or anything like that. It is a one-shot software piece. It's taking a generic CPU-style system and making it task specific. And this is pretty much what I mean when I say I want to put CentOS on these systems. I'm talking about the industrial gateways. I'm talking about the cash registers that handle your point-of-sale transactions when you go into a store. These are all the IoT-style servers that I want to see CentOS running on, that I want to see RHEL running on, that I want to see, God help me, Fedora running on. So last week at Red Hat Summit, one of the use cases that was brought to our attention was an MRI machine. And that one caught me off guard. I wasn't expecting to see medical devices as something talked about for IoT. And the reason they were brought up as an IoT-style device is that these are one-shot installed systems that ostensibly move around through a lot of countries and don't necessarily have the ability to stop in any one place and get updates. The people who brought this up to us were part of the Doctors Without Borders team, and they were moving an MRI machine around Afghanistan for some of the mobile hospital type work.
They're not going to be able to plug in a system and do a yum update. These are high-end medical devices that have to be certified at each step of the way. And if you're doing an IoT-style update, well, rpm-ostree. Who's familiar with rpm-ostree? Let me ask that question before I start. Okay, about half of you. rpm-ostree is a very good way to let you generate the tree of updates that's going to be pushed out to every single system one time. So you don't have to log into each individual server and run yum update. You do the batch transaction once and you push that out. Primarily we've been using that as a container platform, because it makes a really nice base to drop containers on top of. It also works in the IoT space for exactly this reason: I can have a golden image that gets pushed out, and most people should be familiar with the concept of a golden image. Yes? Hands, heads, yes, okay. So you generate the image. You get that image certified for a medical device, for whatever. And when it's time to push an update for that, you go through that exact same process again. It's a read-only install that doesn't let you drift from one state to another. And so for MRIs it actually made a lot of sense, and the guy who brought this to our attention really wanted to see it done. So this is something that I think we could actually push, along with automotive, the in-dash entertainment systems. Most of you aren't going to connect your car to your home wireless network; tying it in through a Bluetooth cell phone or a 3G or 4G cellular connection is pretty much the way that's going to happen. And the last option is basically the home gateways, the Amazon Alexa-style devices that act as brokers between what you're doing and the cloud as a whole. If you have a bunch of light bulbs in your house that are IoT connected, you don't necessarily want to have to talk to a server in China to tell your lights to turn on in your house.
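The golden-image flow described above maps onto rpm-ostree's actual commands. As a rough sketch, assuming a hypothetical repo path and treefile name, the build side composes the tree once and every device consumes the same read-only deployment:

```shell
# Build side: compose the "golden image" once from a treefile
# (centos-iot.json and /srv/ostree/repo are hypothetical names).
ostree --repo=/srv/ostree/repo init --mode=archive
rpm-ostree compose tree --repo=/srv/ostree/repo centos-iot.json

# Device side: every system pulls the same signed, read-only tree.
rpm-ostree status            # show the booted and pending deployments
rpm-ostree upgrade           # stage the new tree; the current one stays intact
systemctl reboot             # boot into the new deployment

# If the update misbehaves, fall back to the previous deployment.
rpm-ostree rollback
systemctl reboot
```

Because the old deployment is kept on disk until it is garbage-collected, the rollback is just a bootloader switch, which is what makes the "push it, see if it breaks, roll back" model workable for certified devices.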
If your network goes out, you can't turn on the lights. That's kind of a problem. So a home gateway acting as a broker for that makes a lot of sense, and a distribution that has a method for tracking and updating CVEs or other security vulnerabilities almost immediately helps a lot in that plan. And I answered my question before I got to that slide. So the reason that we want to use the enterprise-style, long-lived operating systems is basically that nobody's going to replace the refrigerator every two years simply because somebody stopped producing software updates for the panel on the refrigerator, or because the dryer has an application that notifies you when your laundry is done and they stop pushing updates to it. When they stop pushing the updates, you lose some of the functionality that the hardware manufacturer is using as a selling point. Having a distribution that can carry that ten-year life cycle makes it easier for these appliances to continue to be useful and productive despite what the manufacturers think. At that point you know it's a forcing function of the business rather than, oh, we didn't have the time to get the software right on this in the first place, and it means that it's easier for the user to upgrade and to do things right. And a lot of it comes down to the fact that it's the exact same tool chain that people are already used to using. A lot of businesses are already certified on RHEL or on CentOS or on Fedora. They already have that workflow down. They already understand how it operates. And this allows us to continue that and push it down into the lower-end chips as well, into the IoT space. It's the exact same process. Things behave the same way. It gives everybody that consistent platform that they understand already. Does anybody have any questions so far? Does anybody think these ideas are completely and totally crackpot yet? I have no problem heckling the audience. Feel free to give it back.
So if you have a question, stop me and ask. I have no problem with that. And basically this is what we get out of it: the vendors can focus on making the hardware do what they want the hardware to do. They don't have to care about the operating system. So we don't need to have 18 vendors coming up with 19 different ways to produce an operating system. We can give them the platform and they can focus on making the device do what they want. Within the CentOS project, we've got a few vendors who are already trying to talk to us to get us to do things like this. I'll single out Samsung; they're fresh in my head for IoT. They actually have a few devices that are using, I want to say it was built on Fedora 21, that was their embedded model for this. And they came to talk to us at Red Hat Summit asking for ways in which they could get updates to the system so they didn't have to constantly refresh all the time. And the rpm-ostree style of updating actually seemed to work for what they were doing, because this was basically a DIY TV chip. It was an embedded chip that they were using in a lot of their different technology, and they wanted the customer base to be able to play around with it. This gives everyone a way to do that. So if it's running a standard distribution, you get to play around with it. You get to do what you want with it. You can try to break it if you want to and contribute features back. And again, the atomic updates are basically similar to the typical firmware style. So it makes things very easy to just push the update and see if it breaks. If it does, it rolls back automatically; you don't get stuck partway. Who here is in ops, or has been in ops on an RPM distribution in the last 10 years? A few of you. Anybody ever had a system lock up midway through a yum update and have to roll back through all the transaction tables and everything? Yeah, I know you've had to do that a few times.
The rpm-ostree style prevents a lot of that headache, so you don't have to go through this. And I keep harping on that point, but it's one of the biggest selling points, and that aspect seems to make the most sense to me. So how does this actually work? An example of the current IoT workflow comes from the automotive industry. A large chunk of the automotive industry has standardized on either QNX or Yocto Linux. And I don't want to take away from the Yocto folks too much. It's a great project, but it's really cumbersome to work with. And we've had a few folks come up to us to complain about their workflow process with Yocto, and what they described looks something like this. They do a git clone of the Yocto code. They do a base build of Yocto. They start to add in the changes that they need. They put their application code inside that same directory. They cycle their builds and their configuration files. They copy it off, they test it, and they iterate through that. And every single time they do, they get a fresh build of Yocto, which means they can't go back; they can't reproduce any specific build, because they're already into the next tree by the time somebody files a bug. They don't necessarily have a sane ability to roll back to a previous git snapshot, see what was there, see what was built, test against an issue that somebody's filed, and then produce a patch. They have to send that vehicle an entirely brand-new build of Yocto and of the vehicle software to see if that actually works and fixes the bug. That workflow is problematic for a number of reasons. If you're already well into your software development cycle, you have to roll back a number of things based on one particular issue.
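For reference, the iteration loop the speaker describes looks roughly like this with the stock Yocto tooling; the custom layer name is hypothetical, and the image target is just the standard example, not what any automotive vendor actually ships:

```shell
# Fetch and build a base Yocto/poky tree.
git clone git://git.yoctoproject.org/poky
cd poky
source oe-init-build-env            # sets up build/ and drops you into it

# Add your own layer with application recipes, tweak config files...
bitbake-layers add-layer ../meta-myapp   # meta-myapp is a hypothetical layer

# ...then rebuild the whole image for every iteration.
bitbake core-image-minimal

# Copy the image off, flash it, test, file bugs, repeat. Reproducing an
# older build later means recovering the exact git state it came from.
```

The pain point is the last comment: unless you pin and archive every layer's commit for every build, there is no cheap way back to the environment a bug was filed against.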
If you're using Fedora or RHEL or CentOS, reproducing a build from a known working set of RPMs makes things very reliable, and you can take a snapshot of a system where somebody's filed a bug or an issue, go back, duplicate that exact environment, see what it is, and then figure out what needs to happen from there. It's a lot easier to do that in Fedora or CentOS than it is within Yocto. So on the CentOS side, it's basically: package your application as an RPM or a Docker, can we still call it Docker containers? Do we have to call it Moby containers now if we're talking about community work? Okay, so as long as I use the lowercase d, then we're good. Okay, I'll include the trademark later in the slide or change it to a lowercase d. You expose the package through a repository, either a container registry or a yum repository. You install it and you deploy. That's pretty much it. The only thing that you need to do in between is validate that it works the way you want it to. That's it. So you add a CI pipeline into this, or, God help you, tie it into OpenShift, and everything is fantastic. The other thing that you can do is just publish it as an RPM, expose it through a yum repo, and build a layer on top of rpm-ostree. If people are familiar with rpm-ostree, you can unlock the tree, install an additional application, close the tree up, and then push that layer as just the layer. You can add it in as a second tree and push that. That workflow is still a little problematic, but it does actually work, and we're talking with a developer to make that a bit more of a usable workflow. The reason that we push that is that if you do it as an individual container, you have one container running on one device, one application; it seems like kind of overkill to involve a container at that point.
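A minimal sketch of that RPM path, with hypothetical package and repo names; step 3b uses rpm-ostree's package-layering feature, which is the cleaned-up version of the unlock-install-relock dance mentioned above:

```shell
# 1. Package the application as an RPM (myapp.spec is your spec file).
rpmbuild -ba myapp.spec

# 2. Expose it through a yum repository.
cp ~/rpmbuild/RPMS/x86_64/myapp-*.rpm /srv/repo/
createrepo /srv/repo                # generate the repo metadata

# 3a. On a conventional CentOS install, just install it.
yum install myapp

# 3b. On an rpm-ostree system, layer the package on top of the base tree.
rpm-ostree install myapp
systemctl reboot                    # the layered package appears after reboot
```

The validation step in between is whatever your CI pipeline already does against RPM-based systems; nothing about the IoT target changes that.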
If you're running multiple applications, if it's an in-dash car player, you're probably going to want a thing to play music, a thing to answer phone calls, a thing to pull contacts in, and that's fine. You can have as many in there as you want. Containers in that space make sense. But if you're talking about a camera, your camera is going to take pictures or video, and that's pretty much all it's going to do. There isn't necessarily a need to have multiple things layered on top of it. The other thing that this workflow gets you is the ability to add in anything else that's already packaged up. So if you want to start looking at sharing out storage between multiple containers, if you have a home gateway and you've got six different devices and you want to be able to share things between them, you can put something like Gluster in persistent container storage and share that out, so you can actually have seven or eight things all accessing the same point. And I'll single out some of the Gluster folks in the room, just because Nigel was nice enough to throw out that introduction. I'll see about making this part of his task later on, to do CI for this. I told you my revenge would be swift. The big thing out of this, and the thing that I keep coming back to, is making sure that everything is consistent. The ability to have a workflow where you can reproduce an environment when somebody files a trouble ticket is kind of key. The ability to test against things, reliably roll back, and then create new tweaks is really important. But this also leaves out the smaller devices, the ones that we saw on, yeah, those, the tiny ones. We've talked about the big stuff. We've seen the rebranded servers that now count as IoT. But the tiny stuff we've still left to the side. So we need a way to deal with these. Otherwise we're abandoning an entire chunk of IoT. So how do we deal with the small ones? The Zephyr operating system is actually really nice.
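The shared-storage idea can be sketched like this; the host names, brick paths, volume name, and container images are all made up for illustration, and a real deployment would tune replica counts and networking:

```shell
# On the gateway nodes: create and start a replicated Gluster volume.
gluster volume create shared replica 2 \
    node1:/bricks/shared node2:/bricks/shared
gluster volume start shared

# Mount the volume on the gateway...
mount -t glusterfs node1:/shared /mnt/shared

# ...and hand the same path to several containers as persistent storage.
docker run -d -v /mnt/shared:/data myapp-sensor      # hypothetical images
docker run -d -v /mnt/shared:/data myapp-dashboard
```

Every container sees the same `/data`, so the six or seven devices behind the gateway are all reading and writing one consistent store.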
On the IoT working group side, Red Hat has joined and is part of LITE; I'm trying to remember what LITE stands for. It's the Linaro IoT and Embedded group, or something like that. I can't remember exactly. But Red Hat is involved in this, and one of the demo platforms that the LITE working group is using is the Zephyr operating system. Zephyr is a real-time operating system, so it's more tailored to real-time collection of information. A lot of the smaller chips are sensor-oriented, designed so that you can pull in and collect information for things like temperature. The railroad industry is actually looking at a large portion of this type of thing, because real time actually matters for trains. Who would have thought? Finding out if the signals are working right, if the gate comes down on time as the train is passing through: real time matters in that instance. So the Zephyr operating system is tiny enough that it can be built for a lot of these low-end devices that will never get Fedora on them, that will never get CentOS on them, and that Red Hat just has no hope of running on. But Zephyr is kind of purpose-built. So you can work through the Zephyr builds and duplicate this in QEMU. The QEMU environment allows you to do the build natively on CentOS or on Fedora, run it through the virtual environment to test and make sure that you've got all the functionality that you want, and then deploy the build. So you essentially add it into your current CI chain. You can do the build and test right in the same operating system, right in the same pipeline that you're already used to using, and then crank out the small chipset. If you don't want to do the native compilation, Fedora actually provides all the cross-compile tools, so you can build on your x86_64 laptop for ARM, for ARM64, for a lot of the other chips that they provide the tooling for.
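With current Zephyr tooling (the west meta-tool; the talk predates it and would have used plain make), the build-and-test-in-QEMU loop looks roughly like this, and the last few lines show the separate Fedora-packaged cross-toolchain approach for ordinary C code; the sample and package names are the standard ones, not anything project-specific:

```shell
# Build a Zephyr sample for the QEMU x86 target and run it virtually.
west build -b qemu_x86 samples/hello_world
west build -t run                  # boots the image inside QEMU

# Cross-compiling for ARM64 from an x86_64 Fedora box instead:
sudo dnf install gcc-aarch64-linux-gnu
aarch64-linux-gnu-gcc -o hello hello.c
file hello                         # reports an aarch64 ELF binary
```

Either way, the build and the smoke test both run on the x86_64 host you already have in CI, and only the final artifact targets the small chip.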
So essentially you never have to leave your current environment, even to deal with the stuff that you can't necessarily run CentOS or Fedora on. The point of all this comes back to keeping it simple. It was said earlier today that developers are lazy, and I know this because I am one. I will say that again and I will steal from that guy: because I'm a developer, because I'm lazy, I'll use his words. And it's kind of the same concept. It's basically: use the core that you already know. CentOS is already there, Fedora is already there, RHEL is already there. You just need to apply it slightly differently. It gives the developers a consistent base, so they don't have to learn about debugging Alpine and debugging CentOS and debugging Fedora and debugging Debian. It's all the same platform. It's all the same libraries. So everything works the same way that they're already used to. And that gets us almost to the point. Yeah, a little bit ahead of time. I was told 15 minutes for Q&A; we're at 17. So, questions? Anybody, anything? Yes. One sec, he's going to bring you the mic. And if we can get somebody on the opposite side of the room to ask a question next, it'll be great to see him run back and forth. Have you heard of Linux From Scratch? There's a project out there where you just take the source code and try to build it from scratch. What I had to do was try to do something like CentOS from scratch, because if you are trying to cross-compile the whole of CentOS for another target, it's kind of difficult, because there are no standard instructions for how to build a CentOS. Like, build one: not just install CentOS and run it, but build a CentOS. So if you truly want to do this CentOS-on-IoT idea, some instructions along those lines would really help adoption. And the second thing: is there a definition of what is minimally a CentOS? Like, should it just be the kernel, DNF, and SELinux, or should it be something else?
So is there a definition for that? That definition would really help. You bring up two very good things. The first is directions for building a version of CentOS. And that changes, so it's more difficult to provide than you would think. Initially, when you look at what it takes to build CentOS, you assume, okay, it's just a bunch of RPMs, I'll cycle this through. And that's not necessarily the case. For each series of builds, things get kind of tricky. You have to walk the build platform back and forth. Even sometimes for little things like libnss: the NSS library actually has several checksums in it that break; they're basically code time bombs. MariaDB and MySQL do the same thing, because they validate SSL certificates. And one of the NSS tests actually goes out to PayPal, grabs the PayPal certificate, pulls it in, and then tries to validate it. And if PayPal has updated their SSL certificate, like they do fairly frequently because they do things right, that test fails. So it becomes problematic to say, okay, follow these directions to build a distribution, but know that you're going to have time bombs in the code when you try to build these 12 packages, so you have to do this step, this step, this step, because every time that package gets updated, something new is in there. Take Go: up to about Go 1.4, you could build it against GCC. No problem there. Now Go requires Go to build. So if we push those instructions and we say, okay, here's how to do it, the problem becomes maintaining those instructions every single time, and we'd spend more of our time focusing on building the instructions than building the distribution. It's not that the tooling isn't out there or that it's not discussed. I mean, we're not the only ones out there who rebuild this code. Scientific Linux is out there. There's another one from Princeton University; I want to say it's called Springdale, though I think they've renamed it at some point. Oracle does it.
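For a taste of what one unit of that rebuild work looks like, the standard tool is mock, which rebuilds a source RPM in a clean chroot; the chroot config and SRPM file name below are just examples:

```shell
# Rebuild one source package in a pristine EL7 build root.
sudo dnf install mock
mock -r epel-7-x86_64 --rebuild bash-4.2.46-31.el7.src.rpm

# The result lands under /var/lib/mock/epel-7-x86_64/result/.
# Now imagine sequencing thousands of these in dependency order,
# working around self-hosting compilers and test-suite time bombs.
```

The single rebuild is easy; the hard part the answer describes is the ordering, the bootstrap cycles, and keeping instructions like these current as packages change underneath you.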
I mean, there are a bunch of different ways. The other side of this, when you come down to what's a minimal distribution or a minimal version of CentOS, that gets tricky as well. Do you need a kernel, for example? Well, for a bare-metal install, absolutely yes, and we won't support it if it's not our kernel. If you're talking about a container, you don't need a kernel in a container, because you're using the host kernel. So which one is CentOS and which one isn't? They both meet the definition. We ship them both. So that gets tricky as well. I would say, being self-serving and standing up on stage talking about this, I don't necessarily want you building your own version of it and putting your own tweaks into it. What I want is you contributing to the stuff that we're already doing. I would want to expand the community out and say, hey, we've already got CentOS here. Come help us do what we're doing and tailor it to fit your needs, or add an additional package into an additional repository, instead of building it again from scratch. Building it again from scratch is pretty much what got IoT in the position it's in right now, because Samsung has done it one way, the folks from Dell have done it a different way with the EdgeX application, and there are a number of different examples of IoT products where they've done it their own individual way that they felt met their needs at the time, and it's gone horribly wrong for them. The idea is this. Generally, if you look at other operating systems, in one of your slides earlier you said developers are lazy, and the job of an operating system or a distribution is to give some sort of consistency to a developer. So when he comes and asks, hey, what are you running on this piece of hardware? If I say CentOS is running on this piece of hardware, I know exactly what to expect from there.
So I agree with the points you have mentioned, but any definition which we can give, like minimally, for example: the first thing that comes to my mind is that CentOS has SELinux, while Debian-based or Ubuntu systems will not have that. So I can expect, if I'm going to distribute some application which is going to run on that specific piece of firmware, and it's not necessary that you are always going to be supporting that piece of firmware, like there's some random chip out there which I have got for low cost and have decided to use, can I build some firmware which kind of ascribes to the definition of CentOS, even if it's not certified by CentOS or whatever is the authority? You can absolutely do that. So how do we define, like, these are the most minimal basic requirements of CentOS? Not a list of packages, not that these packages should be there in CentOS, but more like, hey, it has the Linux kernel, it has SELinux. Some definition would really go a long way. I mean, it would be a starting point, but... The container argument is the reason that we haven't put out a firm definition of that yet. Because if we require a kernel, then even the official CentOS Docker container doesn't meet that definition. So if we hang it on the kernel, we can't do it; it won't work. If we say it has to have SELinux, well, openSUSE has SELinux as well, so that doesn't work. The way that the distribution as a whole has decided to deal with this is to say: if you are running code that we have signed with the distribution key, that is code that we will support. If you're running your own kernel, that's great; you're completely welcome to do that. As soon as we hit something where the kernel is singled out as causing a problem, you're on your own at that point. As far as trademark goes, if you want to do something that ascribes to CentOS, the trademark guidelines for the distribution allow you to do that.
So you can sit there and say, hey, this is based on CentOS, or this is powered by CentOS, but I have added X on top of it. There are guidelines on the website for doing that. You absolutely can do that, as long as you're following the trademark guidelines and the tooling around it. As for the rest of it, I want people contributing to the project rather than trying to copy the project and duplicate it. It doesn't seem like there's a lot of effort in building a Linux distribution, but it's not really the compilation that requires the work, although there's a ton in it. It's the long-term maintenance. It's the ability to continue supporting it for that amount of time. That's where a lot of the value actually is. And that's one of the things that gets skipped in the IoT landscape, because the updates for IoT just aren't there in most cases. They're hardware vendors. They're selling the hardware. A lot of the developer boards, even the Android cell phones that you look at today, most of them are running a 3.10 or 3.18 kernel, even though there are known problems in those kernels and Android itself, on the Google side, has moved on to a 4.x-based kernel. They're just not being updated. So that's one of the things that gets missed. It's that long tail of maintenance that's actually the biggest part of the value. But that said, while it's maintenance that makes this thing function, maintenance isn't new. It's not hot. It's not the sexy thing that pulls you into a product. So people get sold on the features. They get sold on the up-front UI, the interface, the flashy stuff. The support comes afterwards, about a year in, when they start saying, hey, I need an update for this because there's a problem. Any other questions? Yes. Hang on, the microphone. So I'm quite interested in the GPIO stuff, which I didn't find with Fedora. So does CentOS support GPIO? On the 32-bit side, we have some limited GPIO support.
It's not fully baked in the way that you would likely want, and we don't necessarily have hot plug for certain devices, for adding and removing while the device is powered up. There are a couple of kernel tweaks that need to be made that aren't there yet. On the 64-bit side, we don't have that at all currently, because most of the servers that we've been aiming for don't have GPIO support, so there hasn't been a need to do it. In the future we probably will turn that on for some of the smaller-scale devices. I know that on the Fedora side they were working to enable some of it, but because Fedora builds specific functions into the kernel, and it's a single kernel built for all the devices per architecture, getting some of those features turned on causes issues for other architectures, and that causes the whole build chain to fail. So there are some reasons why Fedora doesn't have it just blanket enabled that are a little different. You're not making him walk that far. All right, I can just walk over there. So, if you are finished with the answer... Okay, so my question is: CentOS is building atomic images based on OSTree, but it's container oriented. It has Docker inside by default, stuff like that. Do you build, or do you plan to build, other atomic, not atomic, but OSTree-based images or trees that would be as minimal as possible for these IoT devices, without Docker? Because maybe people don't want Docker there; they want to just use OSTree install or something like that. Eventually. We have to have the workflow around that before we can start producing those images. We have to have the developer environment and the workflow for people to consume it. So we have to get that built first and then say, okay, this is the tree. The easy way is to say, here's how you build your own tree from this list of signed CentOS packages, from this repo. Here's how you can do it yourself. That seems to be the first step to me.
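That "build your own tree" step could look something like the following; the treefile content is a minimal illustration with a hypothetical ref, repo name, and package set, not an official CentOS tree definition:

```shell
# A minimal rpm-ostree treefile (all names here are hypothetical).
cat > centos-iot.json <<'EOF'
{
  "ref": "centos-iot/7/x86_64/minimal",
  "repos": ["centos-base"],
  "packages": ["kernel", "systemd", "rpm-ostree", "NetworkManager"]
}
EOF

# Compose the tree from signed CentOS packages into an ostree repo.
ostree --repo=/srv/ostree/repo init --mode=archive
rpm-ostree compose tree --repo=/srv/ostree/repo centos-iot.json
```

Devices would then pull and deploy that ref over the network, and leaving Docker out of the image is just a matter of leaving it out of the package list.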
Whether or not we say, here's a base tree. This is what you need to use as your initial platform and work up. We'll see where it goes from there. I think we're good. Okay. Thank you, everybody.