So today I wanted to talk about alternatives for deploying containers in production without using Kubernetes. My name is Clément Verna; I'm an engineering manager at Red Hat, working with the CoreOS team, so we are involved with Fedora CoreOS and also Red Hat CoreOS and the OpenShift product. So why would we look at alternatives to Kubernetes when we want to deploy containers? Because I think currently we can all agree that the most common solution to the problem of running container workloads in production is to go for Kubernetes. And really, running containers in production serves many different goals, and I tried to sum them up on this slide. Mostly it's about reducing the system administration cost: the infrastructure cost and the time of the people that need to care for the servers and the services. A big factor is also reducing the time needed to make a project available; we hear a lot about MVPs and time to market, so being able to deploy a solution, a service or an application fast and get quick feedback is very important. Along with all this, there's also been a big push for GitOps: having a lot of the operational tasks and operational knowledge stored in version control, stored in Git. So instead of having to log into a server and run some commands, you pretty much just change some files in Git and have some automation, some kind of CI/CD, manage your infrastructure for you. You also want to be able to do clever updates. With Kubernetes and containers you have all those concepts like blue-green deployments or canary testing, so you want to be a bit smarter and be able to do more interesting things with updates.
And a lot of people are also looking for flexibility between running your own servers on your own infrastructure and being able to use the public cloud or another cloud provider. And to me, maybe the biggest reason to go for Kubernetes is scalability. If you think that your service or your application will really need to scale, with a lot of services responding to user demand, Kubernetes is probably a good solution. If you don't have that need, it might be interesting to look at alternatives and not necessarily take on Kubernetes and its added complexity. I tried to picture the two solutions on this slide: Kubernetes is like this big boat where all the workloads and containers are very well managed and very well organized. But sometimes you don't really need such a big infrastructure and such a big investment; you might be just fine with a train, running some smaller workloads and smaller containers. An interesting fact: the CNCF, the Cloud Native Computing Foundation, runs a survey every year, and in the one for last year, 2020, you see that 20% of the respondents have a fleet of 20 machines or fewer. And the very low end, one to five, is actually rising: people that have a very small infrastructure, one to five VMs or bare metal servers, are starting to run containers. So when we have such a small infrastructure, such a small footprint, is Kubernetes really something to consider? In some cases yes, but I think it's also nice to have alternatives and to not necessarily default to Kubernetes. The 21 to 50 servers range is maybe also a case where you can start to think about it and be on the edge.
But yeah, definitely when you start to get into the higher end, if you have more than 5,000 VMs or servers, you probably want something that helps you manage all those machines and schedule your workloads. So one possible alternative is to go with services that are provided for you, and in that regard cloud providers and infrastructure-as-a-service providers give you some possibilities. I listed maybe the three most famous: Azure Container Instances, AWS Fargate and the Google Cloud Run service. And really, this is probably great if you don't need to care about the infrastructure and operating system: you have a project, you have your code, you want to build it and deploy it, and you're really not interested in how your service runs behind the scenes. The trade-off is that the delivery pipeline you build is going to be strongly coupled to your provider. So if you're looking for the flexibility to run on premise and in the cloud at the same time, or to scale across different clouds or change provider later, it's probably going to be quite a lot of work to move out of that service. So again, if you start to think about running containers in production, those are advantages and trade-offs to consider. Going for the public cloud and those types of services is probably great if you want to do some kind of proof of concept, or really get something to your users fast to collect feedback. But longer term, I think it might be more interesting to have a solution that is provider agnostic, that you could pretty much run on any cloud or on your own infrastructure. I think it's interesting to look a bit at what your DevOps or delivery pipeline workflow would be.
So with the cloud provider solution, you pretty much have your code and your tests in your traditional development environment. The build side is an interesting phase: you can have it as part of your CI pipeline and do your container builds, you know, just as normal, using Dockerfiles, Buildah or any other container build mechanism. And once you've got your container artifact, you can use the platform's dedicated services to deploy those containers, and quite often the cloud providers also offer dedicated monitoring services where you can monitor your application and see that it works. One thing to consider is that most of the time the service you're developing is not going to be self-sufficient; you will often need other services like databases or storage. So if you go the cloud way, you will also consume those other services from the cloud provider and from the platform. So what are the other alternatives? Today I will mostly focus on Linux, what's happening in the Linux ecosystem around dedicated solutions for running containerized applications, and in particular on what we do at Fedora with our Fedora CoreOS offering. But pretty much, when you think about it, what you really need to run containers is: a Linux kernel, obviously, to be able to create containers; a way to provision your servers, so it can be automated, quick and fast; security, so you want your server to have the latest updates, and you can also make great use of SELinux to harden your security; and some kind of container manager or container engine, like Podman, Docker or any other. And that's pretty much all you need and all you should be concerned to have in your Linux distribution.
And there are many offerings, and while the philosophy is pretty much the same, there are many different ways of doing it; I listed the main ones here. It's quite interesting that the cloud providers also have their own solutions: for example AWS has Bottlerocket. But there are also more traditional Linux distributions, like Fedora CoreOS or openSUSE MicroOS, and other solutions like these. So if we come back to why you want to run containers in production and why you would go for Kubernetes, we can look at pretty much each criterion and see how a Linux distribution like Fedora CoreOS matches it. About system administration and infrastructure cost: one of the big features of Fedora CoreOS is automatic updates. You kind of want this philosophy of deploy it and forget about it, because the operating system will manage itself and update itself. You want to reduce the time needed to make a project available: it's very easy to deploy and provision, and you can start to think about having your operating system within your development pipeline; we'll see later, I think it's quite an interesting concept to start to think about. So yeah, spinning up a Fedora CoreOS VM takes less than a minute to provision it and have your service running. Version control: all the configuration to provision the operating system is in configuration files, YAML translated to JSON, so it's easy to store and easy to adopt a GitOps philosophy with. Coming back to clever updates: for Fedora CoreOS, one of the main goals of the distribution is to have stable updates, updates that don't break. And finally, being able to run on premises or in the cloud.
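To make that "configuration files in version control" point concrete, here is a minimal sketch of what such a provisioning file can look like in Butane, the human-friendly YAML format that gets transpiled into the Ignition JSON the machine consumes on first boot. The SSH key placeholder, unit name and container image below are illustrative, not from the talk:

```yaml
# Minimal Butane sketch for Fedora CoreOS (transpile with: butane --strict config.bu > config.ign)
variant: fcos
version: 1.4.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...   # placeholder public key
systemd:
  units:
    - name: hello-container.service
      enabled: true
      contents: |
        [Unit]
        Description=Example containerized service started at boot
        After=network-online.target
        Wants=network-online.target

        [Service]
        ExecStart=/usr/bin/podman run --rm --name hello registry.fedoraproject.org/fedora:latest sleep infinity
        Restart=always

        [Install]
        WantedBy=multi-user.target
```

A file like this lives in Git, and the generated Ignition JSON is what you hand to the VM or bare metal machine at first boot, which is how the GitOps workflow described above applies to the operating system itself.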
And I think that's probably also one of the great values you get from projects like these: Fedora CoreOS is currently available on 12 platforms. I listed a few here; most of the main platforms are there, and it's very easy to move from one platform to another, since you have the common base, the common trunk, of Fedora CoreOS. Going into a bit more detail about some of those features: as I was saying, updates are really something we care a lot about, because if we want people to use automated updates, we want them to be rock solid and not break anything. The development of Fedora CoreOS uses CI pipelines extensively, and we're testing on different providers, so for example we're making sure that Fedora CoreOS works well on AWS or Google Cloud, and on more on-premise clouds like OpenStack, or just libvirt virtualization. And we're making use of the rpm-ostree technology, which I'll talk about quickly afterwards, where in the case of an update that would break your workload and break your application, it's very easy to come back to the previous state. So you have this concept of rollback: OK, I got an update, it's not working, let me just go back to the previous state that was working. And in the future we will be working on automating that rollback, so users could specify some health checks, making sure that your service is back up, or that you have database access, things like this; and if that does not happen, you would come back to the previous state. So, talking about the streams that are offered by Fedora CoreOS: there are three streams, next, testing and stable.
You want to have a few machines in your fleet on next and testing; they are kind of your canary VMs or canary nodes that you can use to test what is coming to the stable stream. Fedora CoreOS is releasing every two weeks, and usually the testing stream is promoted to stable after two weeks. So during this two-week period there is time to get feedback, and if you see that something stops working on a machine running the testing stream, it's the perfect time to give that feedback to the project, so we are able to fix it and not break the stable stream. So yeah, really, the stable stream is something we want to be rock solid. About provisioning: that's probably something that is a bit different compared to other Fedora artifacts, other Fedora editions like Workstation or Server. Fedora CoreOS is using Ignition, and it allows us to automatically provision from one VM to thousands of VMs the same way. It uses a declarative configuration, and everything is done from a fresh starting point, so you will always get the same provisioning from the same configuration. In the more traditional way, you might be more familiar with kickstart for bare metal, or cloud-init in the cloud; with Fedora CoreOS you just use the same configuration file and the same mechanism for every platform, be it bare metal or the cloud. Now, about how versioning and security work for Fedora CoreOS: I talked quickly about rpm-ostree, and it's often compared to Git for your operating system. Pretty much the whole image of your operating system is in a commit-like object, and you have a version number and a hash.
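On a running Fedora CoreOS host, that Git-like model is visible directly from the command line. A quick sketch of the commands involved (these run on the CoreOS host itself, so they are shown for illustration rather than as something to run anywhere):

```
# Show the booted deployment and the previous one kept for rollback;
# each entry lists a stream, a version number and an ostree commit hash.
rpm-ostree status

# Show which package versions differ between the two deployments.
rpm-ostree db diff

# If an update broke something, boot back into the previous deployment.
sudo rpm-ostree rollback --reboot
```

The rollback works because the old deployment is still on disk as its own commit; switching back is a matter of changing which commit the next boot uses.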
From a given version and hash, it's very easy to know exactly which package versions you have, and that's a great way to track what you're running in production, but it's also a really good way to do testing, because you know bit by bit what is in your operating system. Another feature is that it is a read-only file system, so it prevents accidental corruption, or the sort of attack where an attacker goes and tries to modify files on the system to gain more privileges. And obviously we have SELinux enforced by default, which also helps a lot security-wise. OK, so if we look back at the workflow with something like Fedora CoreOS or a container OS: I think you suddenly get a level of abstraction that you can start to put into your delivery pipeline. You have your code, your tests, and now, what do you build? Maybe this is where you start to have some dependencies in the way you're building your application, because now I think you don't want to build only a container; you want to build a container-plus-operating-system solution. What you're releasing is not the container anymore, but this combination of container and operating system. So you can start to test your application running against a specific version of your operating system, and what you're testing is exactly what you're going to deploy. So here, really, this container plus provisioning configuration is the artifact that you want to release at the end of your delivery pipeline.
A good example that we've worked on recently is running a Matrix server. Matrix is an open communication protocol, and the project provides an implementation of that protocol called Synapse, with everything shipped as containers; but you don't really have a solution where you can almost press click and deploy and have all the services needed for that service running. A good example here is using Fedora CoreOS and the Ignition config, where you define the provisioning of all the services. You can have a solution where you have your operating system image and the configuration of your services, and those two artifacts give you a service that is ready to deploy anywhere, either on your infrastructure or in the cloud. If you want more information, there is a link to a GitHub repository where you can try that and deploy your own server. And finally, as with a more traditional Linux distribution, you also get the benefit of having a community behind the project: being able to talk to people, get involved, and propose changes or improvements. I think that's also a great aspect when you look at solutions to run your containers: being able to be part of a bigger community and be involved with people. A few links; I will share the link to the presentation in the chat afterwards, but if you want to get started or learn more, the links are there, and tomorrow there is also a workshop, Getting started with Fedora CoreOS, so if you're interested, I encourage you to participate in the workshop. And that's it; I think we should have a few minutes left for questions. Maybe we'll pick one question, because we need time to prepare for the next talk, but we have two actually in the Q&A section, so let me look at it. So: are these processes and best practices unique to Fedora CoreOS?
No, I don't think it's unique to Fedora CoreOS. I think other solutions as well, like the cloud provider solutions or other minimal, container-focused Linux operating systems, are also following that way. I think that's something you see a lot once you start down the path of running containers in production: trying to adopt those GitOps practices.