And welcome back at rC3 to the chaoszone.tv stream. We will be continuing shortly with the next talk, and a short note: this talk is being translated, and repeating that note in English now — it will be translated into English, and you should find the button for switching to the other language below the video. Alright. You can ask questions about this talk via Mastodon, Twitter or in IRC, and you will find those under the chat tab below the video. And the speaker asks that a lot of you join IRC, because he wants to do some live polls there with you. Okay. The title of the talk is DevOps Disasters 3.1, and I feel like I'm recognizing a pattern with these numbers. And that is the case, because: same procedure as last year, now with your own Docker container. So welcome to Stefan, and enjoy the talk. — Alright. Hello, good day. Great transition. I hope I'm visible in the picture. It's all a bit different with this new situation, for everyone — exciting for me as well. Welcome to DevOps Disasters, announced as 3.1, now published as 3.11, because we have chat, we can communicate, and we want to use the options that we have. So I wrote down here how you can participate. There is an IRC channel you can join, and I also have the web chat on www.devopsdisasters.net, below the video. If you go to media.ccc.de, there's the chat below the video as well. And what I also did — I'm not supposed to, but I did it this year — I already published the slides. So if you have a very bad connection and you still want to see the slides, you can follow them live. Well, they're on Google. Alright. So while you're trying to join IRC, I'll tell you a little bit about who I am. I work in an IT collective, and we do the technology for larger NGOs. I'm in the operations department, responsible for shipping the things that our dev department builds and making them run in the real world. Which means that I see a lot of weird things — not because our devs build weird things; they build great things.
And we talk a lot. But out in the world there is a lot of weird stuff, and once a year I collect it all and do this talk, now for the third time. So let's start: what happened so far? Looking back on the previous years — where is this coming from, what do I do? Like I told you, I'm working in this IT collective. We have a chat where we talk with each other, and there's a failed channel in it, which was originally meant for our own error culture: we want to communicate openly if something went wrong, and then everyone can say, oh yeah, great, we broke production. So that's what it was supposed to be, but it has developed into trashing anything weird that happens out there. And the great thing about this is that at the end of the year I just have to go through this channel, and then I can do this great talk with you. One important thing in our failed channel is lessons learned. It's very important to learn from these things: how can you do this better? So we're not just showing you what's weird in the world, but also what you can do so that it gets better. Looking back on what the topics of the last years were — you can find the slides on the website — we looked at logging and what you can do wrong there; process management, so processes forking off; config handling, so how you can misconfigure your software; persistence, so how you make data in distributed systems available. We looked at high availability. We looked at packaging and installers, so how you ship software to the world; continuous integration and continuous delivery systems; and also hype trains. I'm showing you this because I'm going to do nothing else but go through this list and see what happened this year, with a focus this year on the hype train Docker — I already mentioned that. All right. The year is 2020, let's go. What happened? In the area of logging, we had something nice — a nice situation which showed that in this Docker world a lot can change. So the challenge is: we have periodic tasks in Docker. What's running there?
So there's a Docker container out there, and the job is to run something every five minutes, ten minutes, every hour. In the old world, when we didn't have Docker, we had cron for this. So what do you do? One option is to use the worker frameworks that are out there for asynchronous processing — Sidekiq is popular in the Ruby world, Celery in the Python world — and those have extensions for this, such as sidekiq-scheduler for Sidekiq or Beat for Celery. Then you start a container and start these jobs in it. That works more or less well; there can be some problems, but it has worked fairly well so far. The second option is Kubernetes, which can do cron jobs by now — they noticed that Docker containers have to do something like cron jobs, yay — if you use Kubernetes, which we don't. And if nothing else works, you can run a cron daemon in the container. Where else is it supposed to come from? So you put a cron daemon in the container, and it runs the jobs inside the Docker container. The question is: how did we do this? We ran cron daemons in the Docker containers — and then, how do we get the logs? The situation is: Docker logs everything that goes to standard output. And if you know a bit about cron, you know that cron likes to send its logs via email, because nobody looks at standard output. It's a nice example of really old software — 20, 30 years in the field — still doing weird things. Who would have thought that you might want to log to syslog or somewhere else? There's a feature request for this from 2018. It says: when cron runs in the foreground, it would be great if it could log to syslog or to standard out or somewhere else. And it's open, unanswered. Well, we have a problem: how do we get the logs? And the chat is writing: you could use systemd. Of course — and here's the slide. The people who have paid attention in the last few years will have seen this already.
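A common workaround for the cron-in-a-container logging problem — not from the talk, just one pattern that can be sketched — is to redirect each job's output to the stdout of PID 1, so that `docker logs` picks it up. Package names and paths here assume a Debian/Ubuntu base:

```dockerfile
# Sketch: run cron in the foreground and make job output visible
# to `docker logs`. Assumes a Debian/Ubuntu base image; the job
# script /usr/local/bin/job.sh is a placeholder.
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y --no-install-recommends cron \
 && rm -rf /var/lib/apt/lists/*
# Redirect each job's output to PID 1's stdout, since cron itself
# would otherwise try to mail it.
RUN echo '*/5 * * * * root /usr/local/bin/job.sh > /proc/1/fd/1 2>&1' \
      > /etc/cron.d/job
# -f keeps cron in the foreground, so the container stays alive.
CMD ["cron", "-f"]
```

This keeps the cron daemon as the container's main process; it does not fix cron's own logging, only the output of the jobs.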
If you want to do logging, then please use the libraries that exist for that. Logging is a solved problem; you don't have to reinvent it. You don't have to send logs via email or write them into files or anything — we've looked at why you don't do that over the last few years. Use the libraries that your programming language has. You can make it configurable, so that as an operator you can choose what's sensible. And if nothing else works, write to standard out or standard error. But don't invent a new logging solution in every application. We have seen the weirdest shit — like "we are logging into the database of our application and writing a viewer interface for it". We've seen that. Config handling — again something with Docker; it's always interesting what happens there. Docker's answer is that config handling should happen through environment variables, so from outside, from anywhere: set the environment variable, and that's your integration. The problem is: strings, nothing but strings. A lot can go wrong here; in particular, you don't have booleans. We had this in last year's talk already, very quickly, to show the problem: in Ansible you have a true or false, and that's not the string "true" or "false" — it might be written in upper or lower case, and in general it gets parsed and cast around, and what you have in the end in the environment is not what you thought it was going to be. There are some nice examples that we saw: you try to set something to true, and it doesn't become true, because in the end there's a Ruby application that parses it differently. Another problem is that you don't have complex configuration; you only have simple values that you pass in. As soon as you would like to have a dict or something, it's over. What we've seen in practice is that people write JSON into their environment variables and then parse that. And what we saw pretty often in the last year is: hey, we have a database in our application.
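The string-typing problem just described can be sketched in a few lines of shell (the variable names here are made up for illustration):

```shell
# Every environment variable is a string: "False" is a non-empty
# string, so a naive "is it set?" check still treats it as on.
export FEATURE_ENABLED="False"
if [ -n "$FEATURE_ENABLED" ]; then
  echo "feature looks enabled"   # runs, although we meant "off"
fi

# And anything structured gets smuggled in as JSON and parsed
# again inside the container.
export DB_CONFIG='{"host": "db1", "port": 5432}'
python3 -c 'import json, os; print(json.loads(os.environ["DB_CONFIG"])["host"])'
```

The same reasoning applies to numbers and lists: everything arrives as a string, and every application re-parses it in its own way.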
Can't we put the configuration in there? And that's what we had last time. So, who has started writing configuration into their own database? An overview: Mattermost likes to do this recently — that's a chat, a Slack alternative. Sensu Go, a monitoring program, has started to do this. And the highlight of this year: InfluxDB 2 has started writing config into its own database. And if you know InfluxDB, you will say: wait, isn't InfluxDB a time series database? There are time series in this database. And yes, of course: InfluxDB 2 writes configuration into a time series database. I put this here from the documentation. It's about adding a check in InfluxDB, so adding alerts for some metrics, and this is how InfluxDB implements it: we have standard metrics and run a query on them, and the result of that query goes into a new metric. Then we apply a check rule to that, and the result of this check rule goes into a new metric. And at the end, there's an alert somewhere. So this is a nice case of: if all you have is a hammer, everything looks like a nail. Config handling — we had this last year already — is a solved problem. Config is not code. Things like "we are going to load PHP code or Python code at the beginning of our program, and that's our config" — Django likes to do this, PHP likes to do this — no, config is not code. Config is not user data either; it doesn't belong in the database. There are a lot of config parsers out there in the world that can parse any style — YAML, whatever style of config. For Docker, use the environment variables. That doesn't solve the problems we just had; Docker doesn't really have a great answer to this yet. One nice corner where new technologies open up new questions: what's starting to show up is using these key-value stores that exist out there — etcd and Consul — and teaching applications to read their config from them.
And this has the advantage that sooner or later you have modules to write entries into them, with Ansible and so on. Because that's also always a problem: config in databases means it's not automatable, or not well automatable, if we want to manage it with Ansible or something. Another nice example, from the high availability area, we saw this year. I'm not going to name the software just yet, but a software from the high availability area thought it was a good idea to build a state machine — namely a distributed state machine — and not just distributed across different servers, but also across servers and clients. Great idea. Does anyone know which software I'm talking about? Maybe you can write it in IRC, if somebody remembers what it could be. Let's see... nothing yet. It's Apache ZooKeeper. Why did we encounter this problem? We were wondering: do we have to back up our Kafka clusters? There's a ton of data going through there — message queues, cluster state. And we decided that it's not really possible to do a sensible backup of a Kafka cluster, which is quite the interesting conclusion when talking about new technologies. If you tried to back up Apache Kafka, you would have to log all the messages, make the entire system replay-capable, and build everything idempotent. That just raises a ton of new questions, and I'm not sure that anyone in the world has really solved this. So we just use the message queues as they are — and when we figured that out, that's kind of the look on our faces. Also more on high availability: from the last years we learned that high availability always introduces new, complex technologies, and you always have to see which part is actually highly available and which part isn't. The ZooKeeper example is a very good one: of course a server can go down, but then when clients go down, or the state gets lost, you're just in trouble. The CAP theorem keeps popping up again.
It's provably impossible to have all the different properties you want from a system at the same time and guarantee them — so, in the end, you have to ask which part of the system is actually highly available. Packaging and installers: we had a lot of that this year. Somehow everyone decided to write their own installer this year. We already had some of that last year; it didn't get any better this year. So here's a short list. Ansible: Ansible Collections is a new format they introduced with a new Ansible version, and they said the PyPI installation is no longer supported — it was always a bit problematic — so either do it manually or use our RPM package. Great. Yes, Ansible was bought by Red Hat, and this is how we notice. Poetry I encountered about two months ago, and honestly I have to say I don't understand why Poetry exists. Maybe I really just didn't get it; there might be a good reason that I didn't see. Poetry is a new package manager for Python. And I have to say, with pip and the universe around it, Python has a pretty stable package manager. It's well known, it's widely used. So when I saw the project, I thought maybe somebody from the JavaScript/npm world or the Bundler world came around the corner, didn't really understand that the mindset in Python is a bit different, and so decided to rebuild that in Python. And the funniest thing about it: there is a command, poetry export, which exports the lock file to other formats, so you can then use it with pip. So I'm not sure why somebody built yet another installer for Python. And I see the chat just says: why do we need installers, everybody has curl pipe to bash. But yeah, let's not do that. Then Anaconda, a distribution that's widely used in the artificial intelligence field, which mostly just puts together all the different Python packages out there that have something to do with AI. And there as well, I didn't really understand why it exists, because you can just install the stuff with pip.
But the reason they give is that if you install stuff with pip, you sometimes need to link something against C libraries, and so you need the C headers — although these days that mostly happens automatically in Python, with the wheels. So what they do with Anaconda is build the binaries themselves. Then they need to somehow link that into the environment around it, and their solution is to just bring in their own environment and add it to the path before the system libraries. That means stuff like tar, curl, libtool, OpenSSL and other system tools all live in this environment of their own. And so all the security updates that you install via RPM, or whatever your distribution uses, are not used anymore, because they don't even reach that environment. So: applause for that — or for just building yet another package manager. And as I saw in the chat, there are some fans of the Poetry package manager. Please join the chat afterwards, come to the Jitsi and explain it to me; maybe I just missed something. Another example from the packaging and installers area: MinIO — we already talked about it in the last few years. It's an open source S3 alternative, so you can deploy your own S3 in your data center and don't have to use Amazon. A lot of people use it. The issue with MinIO is that they release pretty often. I looked at the last two weeks: they had a release on the 10th of December, on the 12th, on the 16th, on the 18th, another one on the 23rd. And for each of these releases we have to do quality assurance: we roll it out on staging, we check that the software still works, we need to find a release window for production and roll it out there. And it happens quite often that we release something to production and our release monitoring already tells us that the next update is available. So the next question is: do we need to install each release? Can we just update when there's something security relevant? And then: is it security relevant?
And if they don't say anything about it, then we don't know. And if we don't know, then, when in doubt, we treat it as security relevant, because we don't know any better — so we have to update just to be safe. That's the issue with MinIO: they don't flag releases as security relevant or describe the update contents. So our ask of you: if you build or distribute software, please figure out a way for us to tell which of your updates, which of your releases, is actually security relevant, and which are just feature releases. Release early, release often just means a ton of extra work for us, and we spend our days doing this. On the other hand, at least there are MinIO releases. We also had a case with a DokuWiki plugin that we wanted to install, and we just found a repository. We quickly sent a message through GitHub to ask whether there's any way to get releases — it's pretty easy with GitHub to just create a tar archive. And the answer we got was: this is such a simple plugin, I think I updated it only twice in the last 10 years, so stable releases aren't going to be very helpful; everything that's pushed to the master branch on GitHub is a stable release. Which, I think, is a bold statement. There's a high likelihood that this will actually blow up in your face. So again, for packaging and installers, how to do it right: these are solved problems, please don't reinvent the wheel. There are existing packaging formats for all kinds of programming languages and all the distributions. Sometimes they're a little bit weird — we have our issues with some of them as well, Composer for example — but at least they exist, and there's a central place where we can see what issues there are and what people are developing, bundling all the efforts. So please don't build your own, and don't just install stuff randomly into the system, because you will get something wrong. The likelihood that something will go wrong is very, very high.
So this was everything we had from last year in the known categories. Now let's look at what's going on in the world of Docker. The GIF I found there pretty much summarizes our experiences, and we're going to look at it in detail. Why is it relevant? No matter where you look currently, somebody has created a Docker container for it. Sometimes there's also a Go binary, or all the things using Snap and Flatpak. Everybody in the neighborhood has a Docker container and thinks that it solves all the problems. So let's start at the beginning: why do we have containers? I went to the Docker website, the Docker homepage, and tried to figure out the reason why Docker exists. There are a lot of things that say Docker is cool, everybody uses it, and it's really groovy to use. But the reason it exists is only one sentence. It says containers are a way to get a standardized unit of software that allows developers to isolate the app from the environment. Great. There are three important things here: there's an app, so a piece of software; there's an environment; and somehow we want to isolate them from each other. What many are missing — I think before we even start talking about Kubernetes or anything — is that it's not just the app that's inside the Docker container; it's also the runtime environment. So it's not just about putting my Ruby, Rails, Django, whatever application into the container; there's a ton more in there. There's OpenSSL, there's libc. That is also part of the container's contents. The environment means there's some infrastructure around it that we can ignore for now — I'm not going to talk about Kubernetes today, we're just going to look at Docker containers. And yeah, we're going to isolate them from each other. So here's a Docker example, a very simple one. There isn't even a command in it: we just take the Ubuntu 18.04 base image, the official one, create two folders, and then list the folders. Does anyone see an issue here?
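As described, the example might look like this (a reconstruction; the folder names are assumed):

```dockerfile
# Reconstruction of the example on the slide: official base image,
# two folders, a directory listing -- and no CMD at all.
FROM ubuntu:18.04
RUN mkdir -p /data /config
RUN ls /
```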
Let's check IRC... nothing yet. So, the issue is: in the container we have not only the process that we put into it, but also the runtime environment, and this runtime environment can have security issues. Let's play a quick game in IRC. Who of you uses Docker? If you're using Docker, please just send a 1 as a message, and let's see what happens. There's a ton of ones. Great, so you're still here, you're listening — good to know. The second question: who of you rolls out new containers when there is a security fix in the base image? Please send a 2. Okay — that's still a good number of people; I saw ones and twos. I was just told to slow down a little bit because you have some latency. But yeah, there are a few twos, which is interesting. It's a change from last year, because I've asked this question before, and this is the first time we actually have people updating their base images. That's a good development. And now please respond with 3 in IRC: who of you is monitoring the running containers for new vulnerabilities in the base image? I'm going to wait a few seconds for our latency — maybe count to ten slowly. So far I haven't seen any threes, but I'm very happy that there are some twos at least. Oh, and there's a three. Great, we are getting better over the years. We'll just keep doing this talk for another five years, and maybe then everybody's doing all the best practices and we're good. Great, we're getting closer. So, the example we saw before, this very simple container, just Ubuntu 18.04: I tried this yesterday. I built it and uploaded it to a security scanner — one of those where you just upload images and it looks at the package manager's state — and it found 32 vulnerabilities. So even the simple container that I showed you earlier has 32 security vulnerabilities. And the important conclusion is something that many people don't realize yet.
If you distribute Docker containers for your software, you're not just responsible for the software inside, but also for the runtime environment — for the Linux that's running inside there. You are giving people a Linux server, broadly speaking. And just to be very, very clear, I think that's the most important conclusion, one that a ton of people haven't understood yet: if you're giving Docker containers to other people, the runtime environment is your responsibility. In practice, that means you would have to publish a new image every two or three days, because that's the frequency at which distributions release security updates. Last summer we wanted to find out whether we can trust base images. We took a bunch of standard base images and uploaded them to the scanner — images that don't do anything, just upload them and see what comes out. These numbers are a few days old, so if you do it today you'll get slightly different numbers, but it was never completely fine. Ubuntu 18.04 — the container from today is a little bit older — has 58 vulnerabilities, of which 27 have patches. When we did this initially, it didn't look much better. The Debian 10 base image: 81 vulnerabilities, and 20 have patches available. What's interesting is that for Ubuntu there are more patches than for Debian, so Ubuntu seems to be releasing more updates. Then CentOS 8: we had 133 vulnerabilities, but at least all of them are patched, so we could just update the container. And Alpine is relatively widely used — we used it ourselves for a long time. The issue with Alpine is that they publish new containers regularly, but they only publish details about security vulnerabilities once they are fixed. So Alpine images look pretty good when you put them through a scanner, but the scanner is always behind on the patches, because Alpine doesn't publish that there is an issue. And that's the reason we moved away from Alpine, which we had been running for a very long time.
Our Ruby developers always kept saying: there's a vulnerability in Ruby, it's public, it's being exploited — where's the update? And we had to say: well, we don't know when Alpine will finally update. So our devs went to the maintainers and basically kicked them. — And the original audio just cut out. And it's back. — So yeah, obviously, if you don't publish your vulnerabilities, then nothing can happen. And Fedora 31 we weren't able to scan, so we don't know. So these are the base images that people start off with: before you've even done anything, you already have a host of security issues. That was the moment where we asked ourselves: what's going on in the Docker ecosystem? Let's look at one specific container — the Node container, the official Docker container for Node. If you're doing anything with Node.js and JavaScript and you say you use Docker, you're going to use the Node.js container as a base, because that's the easiest way to get the version you want. It starts from a base image called buildpack-deps. There are a bunch of intermediate layers where extra software is installed on top of Debian Stretch, and at the bottom we just get the regular old Debian base image. And along the way, no updates are installed. So before we even get started, we have 50 security issues. Then Node and npm are installed, and they do that by downloading a pre-compiled Node and npm with curl. At least they check the checksum. But of course, what we don't know at this point is: how are these built? How is this Node, this npm built? What is it linked against? Are there security issues in npm? Which version is it exactly? Where was it built? Which libraries are used? We don't know. Which means we cannot systematically monitor Node and npm for security issues — we cannot just upload that to an open source scanner.
Because the scanner doesn't know that this Node and npm are in there. So we could create a custom solution that somehow checks this Node and npm, but Ops don't like creating single-use custom solutions — that doesn't scale. There's also Yarn in there. Somebody said: yeah, we download a Node container and it has Yarn in there. Yarn also comes in via curl, as pre-packaged JavaScript. So again, it's basically impossible to systematically monitor this for security issues. Which version of Yarn is in there? Not sure. It was built at some point, and especially when you download the latest image, you have no idea whatsoever. And of course this holds true for all images that are based on node:stretch. If somebody says "I have a JavaScript project that I put into a Docker container", it will often inherit from the Node container, so all of the above holds for every image that inherits from it. And similarly, this applies to the Ruby container, the Python container, the PHP container. All the base containers basically come out of the box with security issues. — The source audio cut out for a second; we're back. — I see the chat is asking a question: how is it impossible? And no, it's obviously not technically impossible. You can create a custom solution for each of these containers — log in, try to find things out, try to compare versions. But these are all custom or manual processes that we would have to build for each piece of software. We were fighting with our devs about this, because our devs wanted to have a current Ruby, like they have in their development environment, and they said: we're going to build our own container. And we said: no, no, no, we want to check this systematically, we want a systematic solution. Dear developers: you're great. So this is the point where we started to wonder whether this is the right approach at all. But that's not the end of it — obviously this can go to the next level.
And here's some next-level shit with Docker Compose. Compose is a way to put a bunch of images together: if your application needs other pieces of software, you can use Docker Compose to, for example, put the database into a container as well and codify the relations. So let's look at the official Sentry image. Sentry is a solution for receiving stack traces and error logs that you can then analyze with your team — pretty useful. It's software as a service, but there's also a self-hosted, open source variant, called Sentry On-Premise. And that comes as a Docker Compose setup, which contains a bunch of third-party Docker containers. I looked at this yesterday — cloned it and took a look, so this is indeed the official release, from the master branch as of yesterday. It contains tianon/exim4. Sentry of course has to send emails somehow, so they need a mail server — so they get an Exim4, namely one that somebody called tianon built. I don't know who that is. I don't want to suggest anything — he's probably doing a good job — but it's a mail server that I'd be running in my infrastructure, with root permissions. There's a memcached in there, in version 1.5, based on Alpine — with all the problems of Alpine — dated February 6, 2020, so approximately a year old and not updated since. Sentry internally runs messages through a Kafka stack, which means we need the Kafka stack from Confluent. At least that's the official variant, but it's also from April 22nd — so eight months of missing security updates. I tried to figure out how the ZooKeeper container is built internally, and I couldn't: there are a ton of internal intermediate containers, an internal build pipeline, something with pip and make and a bunch of other things. Figuring out exactly whether there's anything fishy in there — I couldn't. But it doesn't really matter, because it's eight months old anyway. Same thing for Kafka; it's also built from April.
Then there's a so-called clickhouse-server from Yandex. Yandex — never heard of it. It's also from May, so half a year without security updates, and so on and so on. On top of that, there's the Sentry Docker container itself. It starts with FROM python:slim-buster, so everything that applies to the Python base containers applies here as well. And every single one of these containers has root rights, at least inside the container — but maybe also when breaking out into the system. So the entire list you see here, all these people who are somehow involved: you trust them implicitly. And yes, this is the official Sentry On-Premise release, which just says: yep, take this, it's Docker, it works. Take the container from February 2020 — what's the problem? The chat tells me that Yandex is a Russian search engine. Yeah, I saw that it points to Russia, but I didn't really look into it, so you can decide for yourself whether to trust them or not. But anyway, even if you trust them, it's still old. What else? With Docker, that was only the security side. If you've ever looked at how a Docker container is built, you wonder if we're back in the 90s: the database image uses wget, then checks a sha256sum that's hard-coded in there, and pipes everything into GCC. And of course they delete GCC afterwards, because they don't want it in a new layer, and so on and so on. PHP is similar: a GPG check of something, and a configure and make as well. These are all the disasters which we had gotten under control with Ansible and SaltStack — we had such nice automation software, and now we're back at the beginning, writing shell scripts again. And you also love to see this: it's just a container, what can happen? The PHP base image, for instance, does a chmod 777 on part of the HTML directory. Of course, this is a Linux environment where things run as root: if you have an exploitable security issue in your app, the app can write something there, just like in any normal Linux environment.
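The 90s-style build pattern just described looks roughly like this (a sketch: the URL, checksum and package names are placeholders, not taken from any real image):

```dockerfile
# Sketch of the build pattern from the talk: fetch a tarball with
# wget, verify a hard-coded checksum, compile with GCC, then remove
# the toolchain again in the same RUN so it stays out of the layer.
FROM debian:stretch
RUN apt-get update && apt-get install -y wget gcc make \
 && wget https://example.org/some-database.tar.gz \
 && echo "0123abc...  some-database.tar.gz" | sha256sum -c - \
 && tar xzf some-database.tar.gz && cd some-database \
 && ./configure && make && make install \
 && apt-get purge -y gcc make && rm -rf /var/lib/apt/lists/*
```

Exactly the kind of hand-rolled shell scripting that configuration management tools were supposed to have replaced.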
And we've also seen a lot of things running as root — where's the problem, it's just a Docker container after all. And there are a lot of crude wrapper scripts translating the environment variables from outside into internal configs; these are even wilder than the shell script battles that build the containers. There are also a lot of assumptions about some internal continuous integration or delivery system, so a lot of containers can't be rebuilt or checked. Like the ZooKeeper and Kafka ones: I had the problem that I couldn't say where something comes from, because some internal config machinery runs there and builds something — no idea where it comes from, you just have to trust it. And then we have a lot of new hypes from the Docker world: the problems that come up because Docker isn't as trivial as you think it is, and the new solutions that just come casually around the corner. So many new HTTP proxies in the last one or two years that thought: it's not a problem. HAProxy, 20 years of development? All of that's obsolete — we'll just write a new proxy in two years, that's no big deal. All right, now for the bad news. What, you thought there would be bad news now? All of the examples that I showed are not exceptions — they're the rule. I would have liked to show you only the disasters, but sadly the Docker world just looks like this. And the next sentence is true exactly as I wrote it: in the last year I did not have a single Docker container, a single container setup, in my hands where I didn't say at first glance: something's wrong here, that's a security issue, we can't say whether we trust this thing or not. So what I just showed you, what we saw with Sentry, is not worse than the rest of the world — that's just what you do in the Docker world. What we saw here is the state of affairs in the Docker hype. All right, let's let the chat catch up while we watch this lady thinking about Docker. All right.
What to do, when we see that the Docker world out there really is at first principles and will take a lot of time to get stable? First thing: if you get a Docker container from someone — and right now everyone gives you Docker containers — look at the Dockerfiles. They can usually be found; on Docker Hub there's often a link to the Dockerfile. Look at that. And what you will usually conclude — we did, every time we looked at a Dockerfile — is: you want to build your own container. That is our guideline. If we want something with Docker, we build the container ourselves, because what's out there is really scary in parts. And if you start to build a container, use trustworthy and minimal base images. So a Debian or Ubuntu base where nothing else has happened, where no one has started compiling something with GCC or copying things in. And probably, as a first step, you want to update the base image. That's what we do: we're based on Ubuntu now, and our Docker build process runs the updates as its very first step. That bloats the container a bit, but at least it doesn't have any known security holes anymore. And use packages from the distribution's package manager. We keep having arguments with the devs about this, because of course they want their nice shiny stuff, and then Ops tells them: no, please use the Ruby or Python that's in the package manager. And yes, it's not the latest one, it's two releases behind. Do not build it yourself — which means, if in doubt, you can't use the newest versions. Why? Because if you use the package manager, you can systematically check whether it has security issues or not. And even if your devs hate you for this — which they will; again, we love our devs. At the core, you have to apply the same security requirements as for any other Linux server.
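Put together, the guideline could look something like this — a minimal sketch, assuming an Ubuntu base and a Django app installed from the distribution's packages:

```dockerfile
# Trusted, minimal base image
FROM ubuntu:20.04

# First step: pull in all pending security updates, then install the
# runtime from the distro's package manager (not a self-built one),
# so security issues can be tracked systematically.
RUN apt-get update && apt-get -y upgrade \
 && apt-get install -y --no-install-recommends python3 python3-django \
 && rm -rf /var/lib/apt/lists/*

# Run the application as an unprivileged user, not as root
RUN useradd --create-home app
USER app
```

The packaged Python and Django will be a release or two behind, but in exchange `apt` can tell you exactly which known vulnerabilities are open and when they are fixed.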
You have software in there, it's reachable from the internet, and you have a Linux in there, which means all the security knowledge we have still holds for Docker containers. And the most important thing is: we need patches. We have to patch everything as fast as possible. Once you have your container, scan it systematically and continuously for security problems. It's not enough to scan at the moment you build the image and say, well, we don't have any security issues there. You have to do this every day. We actually do this. We have a security scanner — we pay some money for it, but it's available as open source as well — and we have monitoring for it too. So we have red lights that start flashing if the security scanner says there's an issue in there, and then we have to rebuild the container. And if you offer containers to others, that means you are responsible for publishing new images continuously when there are security issues in the base images. You can't just say: we have the new version of the program now, of my web app, of my Django app, and everything's fine in the Django app, so that's the release Docker container, and the next release is the next Docker container. That's not enough. All right, so far. Now let's zoom back out a bit. That's the sad truth about Docker in the year 2020. What else did we have? Just looking around: some things that didn't fit anywhere else but that we noticed. Anaconda claims you can install it without root access — and you do this by entering two or three commands with sudo. I really love that. And you can write C-style Python. That's a nice bit of code from Sentry, where they try to find internal network adapters. It's not as if there were a Python library for this — no, you can do these bit masks by hand even in Python, if you want. Indian Standard Time: we had a lot to do with time zones, especially because we work with people around the world, and last year especially with people in India.
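For what it's worth, the standard library has had this covered since Python 3.3 — no hand-rolled bit masks needed. A sketch of checking whether an address is in a private range with the `ipaddress` module:

```python
import ipaddress

def is_internal(addr: str) -> bool:
    """True if addr falls in a private or loopback range (RFC 1918 etc.)."""
    ip = ipaddress.ip_address(addr)
    return ip.is_private

# is_internal("10.0.0.1")  -> True
# is_internal("8.8.8.8")   -> False
```

The module also handles networks, netmasks, and containment checks (`ipaddress.ip_address("10.1.2.3") in ipaddress.ip_network("10.0.0.0/8")`), which covers most of what such C-style code tries to do by hand.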
And does anyone have an idea what the issue is here, apart from time zones being difficult in general? — Sound cut out for a second; I hope it's coming back quickly. I heard that the stream was gone, so the second-to-last sentence again: I wanted to ask you if you know what the problem with Indian Standard Time was. You're still on Docker, you're all scared — we'll meet in the Jitsi and hold a group therapy session. Has anyone worked with UTC plus five hours and thirty minutes? There are time zones that are not aligned to the full hour. You have to be able to deal with that. There's the answer: a 30-minute offset. Yes. SaltStack had some really great issues this year. They had a remote code execution in the Salt master: if the Salt master had its port open to the internet, you could run arbitrary code as root on every Salt host — even though it says everywhere that communication with the Salt master is supposed to be secured by certificates. They published this on May 1st, which is especially nice because none of the ops were working that day, and after that was a weekend. So, just like that, there were three to four days, and when the ops were back at their desks three to four days after publication, several thousand data centers had already been shot during that time. And at the same time, SaltStack didn't publish patches for older versions — again without announcing it, so formerly supported versions were suddenly not supported anymore. That was a lot of fun for people who had to manage whole data centers with it. And because this worked so well, in the fall they had another remote code execution in the Salt master. Oh, I'm learning that Nepal also has a weird time zone: five hours and forty-five minutes. You learn something new every day. "Ansible again" is a sentence we said pretty often this year. So this is what we encountered this year: if you give Ansible a loop, it wants a list, and if it doesn't get a list, it thinks that's a bit stupid. And we've had this a few times now.
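The half-hour offset is easy to handle once you stop assuming whole hours. A minimal sketch with the standard library's fixed-offset timezones (for real code you'd prefer a named zone like `Asia/Kolkata` from `zoneinfo`, which also knows the history of the zone):

```python
from datetime import datetime, timedelta, timezone

# India Standard Time is UTC+05:30; Nepal is even UTC+05:45.
# Offsets are not always aligned to the full hour.
IST = timezone(timedelta(hours=5, minutes=30), "IST")

def to_ist(utc_dt: datetime) -> datetime:
    """Convert a naive UTC datetime to India Standard Time."""
    return utc_dt.replace(tzinfo=timezone.utc).astimezone(IST)

# to_ist(datetime(2020, 12, 28, 12, 0)) -> 2020-12-28 17:30 IST
```

Any code that stores offsets as integer hours, or rounds them, silently breaks for everyone in India, Nepal, Iran, parts of Australia, and a few other places.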
If you give Ansible a generator — which yields a list eventually; after all, it's Python — then Ansible serializes the generator into its string representation and turns that into a list of letters. That's amazing. All right. That was everything else we had. At the end of the day, like every year: everything is awful. I'm going to breed koi fish instead — so contact me if you want to do DevOps Disasters next year or something. And on this page there's also something linked in the rC3 world: a room in the lounge. There's a sushi bar, and you can go there and find me. I'm the one in the stockings with the pattern. And now of course the image isn't working — I had such a nice image — but you'll find me right in the lounge, just go through the cat gate. We can meet at the bar and keep chatting there. And there's also a direct link to the Jitsi on devopsdisasters.net if you're not in the rC3 world. Do we have a few minutes for questions, if there are any, if anyone wants to ask me questions? — All right. Thanks, first of all, for this interesting talk. I would say the most common question is whether you could repeat which scanners you use, or which scanners are available. — All right. There is an open source project from Red Hat, the name of which I unfortunately can't remember right now. So I can make a bit of a product statement instead: we use Quay, I think, the thing from Red Hat, the Docker registry. They do strange things as well sometimes — sometimes scanning something doesn't quite work — but it's something; at least we get some scans. They have an API, you can add your own tests to that, and there's a Sensu plugin that checks against that API. Feel free to write to me — you have my contact data. Someone wrote the name correctly in chat. There's also an open source version, but we never looked at that. And next we have another question.
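The generator pitfall is easy to reproduce in plain Python, outside of Ansible — a minimal sketch of what goes wrong when something stringifies a generator instead of materializing it:

```python
def hosts():
    # a generator, e.g. returned by some helper instead of a plain list
    yield "web1"
    yield "web2"

gen = hosts()
as_string = str(gen)        # the repr: "<generator object hosts at 0x...>"
exploded = list(as_string)  # a list of single characters, not of hosts!
fixed = list(hosts())       # what you actually wanted: materialize first
```

So if a loop suddenly iterates over `<`, `g`, `e`, `n`, `e`, `r`, ... you know what happened. The fix on the caller's side is simply to wrap the generator in `list(...)` before handing it over.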
How relevant are these security issues, really? They are just as relevant as if you ran the software on your Linux server. It's not so much about being able to break out of the Docker container to attack the host machines. But you have an application in your container which talks to your database, and there's personal data in there. So if you can break into the application somehow, then you can start jumping around in there — you have this whole Linux environment. And if your application runs as root and has chmod 777 rights everywhere, then you can use the normal attack vectors on the app: break out of its directory, start writing files, start tampering with the PHP files, open backdoors. And if the backdoor only reaches into the container, that's already enough, because from there you can keep going and find your personal data or something else. So you have to watch out: containers aren't VMs — don't make that mistake — but there is still a Linux in there which you can attack and from which you can keep going. There is also the question: why Ubuntu as the base image? Because of the result of a test we did: we looked at a lot of base images and uploaded them to the scanner. Someone is answering in the chat: Clair is the name of the open source scanner. Thanks for the input. So we looked at these images, uploaded them to the scanner — and what was the status? Everything was terrible. Everything had issues. And then we looked: what can we do against that? And Ubuntu was the image where, if you immediately update the base image — so at the moment you build the image you run the updates — you get the best possible state, the one that closed the most security issues.
And yes, there were still some issues left without patches, but that's the same situation as on a Debian or Ubuntu server: if Ubuntu doesn't offer patches, you're out of luck, or you have to patch yourself. So we said: if we take an Ubuntu image, we can at least bring it, even with Docker, to the state of any other Ubuntu server. And that was in fact better for Ubuntu than for Debian. And that's why. All right, there are a few questions left.