All right, we're going to get started. There are people outside this room that must watch the live stream, so we will begin.

Hello, I am Brandon Philips. I am the CTO and co-founder of a company called CoreOS. We build a lot of open-source software, particularly around server infrastructure, and I'm going to be talking through the motivations behind a lot of what we're building, and what we've learned over the last few years about building this sort of infrastructure.

[Organizer:] And we at FOSDEM have a nice little gift for you.

Excellent, thank you. I'd welcome you all to give them a round of applause. You may just watch me eat biscuits for the next 15 minutes.

So what I'm going to do is talk through some of our motivations for building the things we've been building, and give you a hint at where all of this is going.

All right. There are 3.5 billion internet users today, and that is a pretty overwhelming number. It's a little less than a majority of the world: about 48 percent of the world's population is able to get access to the internet. And there are about 29 million people in the world in software engineering or the IT industry. That's us. So 3.5 billion versus 29 million means that we are extremely outnumbered. There are a lot of people out there who do not speak the language of technology and software that we speak. What they care about is their communications and their commerce, and largely it's our responsibility to take care of those things for them.

And it's not as if this problem of us being outnumbered, as people in the computer industry, is going to improve over time: last year, 238 million new people came online.

All right, so what are these people doing? Largely, they are taking the data from their billions and billions of phones and laptops and putting it onto servers. I know a lot of us in the technology industry don't feel that the client-server model is necessarily fair to people's rights, to the freedoms people should have, including freedom from tracking. But it is the dominant paradigm of the world. The reason we have companies like Google and Facebook and Twitter is that this model works extremely well for the consumer, for those 3.5 billion users out there. So these people, like I said, are putting their documents, commerce, and communications onto these servers, and it's our responsibility to take care of that data as responsibly as we possibly can.

The best estimate is that there are on the order of a hundred million servers in the world: pieces of hardware connected to the internet that are taking in all this data, storing it, securing it, and, hopefully, giving it back to people on request. Which means that with 29 million of us and a hundred million servers, it's about three servers per person in the software and IT industry. How many people here maintain a server themselves? All right. How many people in here maintain over three servers themselves?
All right, and over a hundred servers themselves? All right, vanishingly small. I'm going to guess that these people work at an internet giant of some sort. The internet giants are the people like Google, like Twitter, like Amazon: the people whose businesses have, one way or the other, been transformed by technology. We're seeing this all over the place. Your neighborhood grocery store is now having to compete with every single online retailer who wants to deliver you things by bicycle, by van, by a guy walking your groceries from the grocery store to your house. Every single company in the world, one way or the other, is having to compete in this way.

So how do these companies that are maintaining a hundred or more servers per person do it? We saw when everyone raised their hands that a majority of people were comfortable managing under a hundred servers, and then the hands got vanishingly few as you started to talk about managing over a hundred. Largely, there is a body of best practices and ideas developed over the years at these companies.

This is a book, Site Reliability Engineering. I didn't know that O'Reilly was a sponsor, but you should purchase this book from our sponsor O'Reilly. The book is under Creative Commons, too, so if you don't want to support the sponsor, you can do that as well. What this book describes is the perspective of Google and Google's engineers. They have a set of engineers called site reliability engineers, who are kind of a hybrid between software engineering and system administration. They spend some of their time on call, but they spend a lot of their time thinking about how to organize better, how to create better processes, and how to make the application more reliable, both for the people who are building it and for the people who are using it.

The book is about enabling teams to organize better and to specialize, so that people are able to focus on problems. People work best when they have a handful of things they're responsible for improving, instead of a massive overwhelming wall of "everything is broken." I think we've all been in that situation; it's not very motivating when everything is broken. It's also about taking risks: if you have people who are focused, and you're able to measure, and you're well organized, you're able to take calculated risks. And largely these companies are able to ship software effectively because they have a bunch of technologies they've built as well. It's not just people and process, but also smarter technologies that enable all of this.

The technologies we're going to talk through here are containers, clustering, and monitoring.

So, who here is familiar with the concept of a container, or Docker, or anything like that? Great, it's not half, so I'll do a quick review, and then we'll dive into some of the interesting things that have been built around Kubernetes.

Containers are pretty straightforward if you aren't familiar with them. It begins with you, and you are a software engineer; we'll go through a couple of personas. As a software engineer, you take your source code and turn it into a container image. A container image really is just a file system, like a tarball, with everything inside of it that is required to run your program. If it's a Java application, it might have a JAR; if it's a Python application, it might have a Python file.
And then you give it a name. This name is how you'll tell other people to download it, and it'll be the place you upload it to. Think of something like GitHub, only for container images. At CoreOS we have a thing called Quay.io. It's called "quay" because that's how we pronounce it; a lot of people pronounce it "key." Sorry, we looked it up in a dictionary and didn't know how to pronounce it; it's "quay" now. So you upload and download the containers from Quay. Hopefully you also note the digest; maybe you take that digest, create a signature, and move the signature around as well.

All right, so now you have this little asset that you can move around and host on the internet somewhere. And then you, as an operations engineer, the person who actually wants to run the application, your world looks like this: you have your three servers. (I wanted to make it comfortable for everybody; I didn't want to put too many servers up there.) You have your three servers, and you say, one way or the other, "I want that container running on my three servers," and three copies of the application show up. Maybe you SSH'd in, maybe you ran a Fabric fabfile, maybe you used configuration management. Maybe you said, "I want to run this particular container on this one server," or "I want to run this other container on that other server a couple of times." The neat thing is that you're able to deploy lots of little programs on top of the same servers and not really worry about them conflicting with each other: they don't fight over the same ports, and you don't have to worry about which packages are installed, and so on. So containers are really about application packaging, and they allow you to ship around app-store-style pieces of code.

All right, so we have this source code, and we've transformed it into this container image. What was actually the process that happened there? How did we get from source code to container image? Let's run through a quick example. I download a particular repository; I tell it, "Hey, build this container image for me; I'm going to name it like this, with this version number"; and then I push it off to the Quay hosting service. It feels very similar to Git, or any distributed version control system, only there's a build step in there. And what's happening behind the scenes is that, similar to a Makefile, there's a little DSL that describes how to build the thing, and then you can push it off after it's built.
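Concretely, that flow looks something like the following sketch. It assumes Docker as the build tool and that the repository carries a Dockerfile; the repository here is etcd's, since that's the example on screen, and the image name and tag are illustrative:

    # Grab the source code for the project.
    git clone https://github.com/coreos/etcd
    cd etcd

    # Build a container image from the Dockerfile, the Makefile-like
    # DSL that describes how to assemble the file system.
    docker build -t quay.io/coreos/etcd:v3.1 .

    # Push the finished image up to the Quay hosting service.
    docker push quay.io/coreos/etcd:v3.1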
So you end up with this little container image at the end of that process: you've taken your source code and transformed it. Inside this one, because the program is written in Go, all we have is libc (for some small reasons) and then etcd, which for the most part is statically compiled. We push it off to Quay, and then, similar to GitHub, you can look at it, see it, share it with friends, star it, and so on.

All right, so the next bit is about actually running the container. How does that happen? What is the process there?

Containers are just normal Linux processes that happen to live inside their own file system. So when I do something like this, where I say "run the container," what happens is that a normal Linux process is created, talking to the normal Linux interfaces. Really, the main difference is that it lives in this thing called a namespace, which isolates it from the rest of the system, meaning it has its own root file system. So it only sees the things that were in the container image it was built from.
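A minimal sketch of that run step, assuming Docker and the image we pushed a moment ago:

    # Start the container: the daemon unpacks the image's file system,
    # then launches an ordinary Linux process inside its own namespaces.
    docker run --rm quay.io/coreos/etcd:v3.1

    # From the host, it is still visible as just a normal process.
    ps aux | grep etcd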
Okay, so pretty straightforward: not a lot has changed. Essentially, just like you would apt-get install something, or yum install, or dnf, or whatever they're calling it now, and put it on one big shared file system, instead you create all these little file systems. What that allows you to do is abstract the operating system away from the application, and this is really powerful. Because: who here likes to maintain really large APIs with sprawling dependencies? There's one guy in the back; I just need to point that out. So, nobody likes doing that. Engineers, I feel, all want to maintain something that looks like Unix: I take some bytes, I send some bytes, I don't care. That's my life. I don't want to interpret them when I read them, I don't want to buffer them, I don't want to do anything with them.

But the reality is that our world is very complicated, and we're asking these Linux distros to do something really hard when it comes to server software. They have to maintain the stability of our databases and our web servers and everything else; at the same time, we're asking them to make sure all the latest security patches get applied; and then on top of that, I want the piece of software I need installed without it conflicting with anything else. It absolutely must be Python 2 installed on the box, or it absolutely must be Python 3. It's a lot of interdependencies, and I think we all kind of hate having that job. Can I see a show of hands of who is a distro maintainer in the room? Yeah, it's hard work.

So what containers enable is shipping a lot less code in the actual Linux distro, the thing that's running the kernel, and pushing a lot of that complexity, and the management of interdependencies for particular applications, into the container images.

All right, so now what we have, with all these pieces, is a way of developing software and packaging it into a really nice sealed unit, and then a way of running that on top of regular Linux servers while isolating it from the rest of the software running on that server. So we have a really nice way of organizing people together. You can now start to imagine that I, as an operations person, only really care about the container: I don't have to worry about what's in there; I just have to know that a file system goes in and a Linux process comes out. As somebody building a CI system, I don't really have to think about whether it's Java or Python or whatever; everything is sealed together. And as a software engineer, I don't have to think about the underlying kernel, or what software is going to be available on the server when my application gets there, because I'm sealing my software in together with everything it needs.

All right, so we have this unit that can be shipped around, and what it naturally leads to is this concept of clustering. I like to think of clustering as botnets. We've had the idea of clustering servers together for a long time, but it's usually been, at least in the media and for a lot of us, in the context of somebody maliciously taking over thousands of machines, with an IRC network or something from which they control them. Through clustering and all these technologies, we're essentially creating, I don't know what the right term is, the polite botnet.

So if you're going to manage hundreds of servers per person, as these internet giants do, what do you do? You have way too many servers for manual placement. Are you going to remember that, oh yeah, yesterday I deployed to server 90, today I deployed to server 80, and, shoot, what did I deploy two weeks ago on server 7? It's an intractable problem; we're joking ourselves if we think a human can do this. Maybe you'll get cute: you'll write a while loop, track it in a file, and check it into Git. But then you start to think about statistics and realize that computers actually fail pretty regularly, whether for hardware reasons or because you hired a new intern and he tripped over the cable. Whatever the reason, inevitably you're going to have machines that go away, and now you have to remember: shoot, something was running on that server. What was it running?
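Just to make fun of it properly, that "cute" approach might look something like this; the host names, image, and log file are made up, and this is the anti-pattern, not a recommendation:

    # Hand-rolled placement: push a container to each host and write
    # down where it went, hoping the file stays accurate forever.
    for host in server80 server90; do
        ssh "$host" docker run -d quay.io/example/app:v1.0
        echo "$(date) app:v1.0 -> $host" >> deployments.txt
    done
    git add deployments.txt
    git commit -m "remember where things run, by hand"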
And so the problem is that there's really no monitoring if you're just placing things by hand, and there's no state to recover from when something goes sideways.

So here's the common pattern, and what we're going to do: we're going to create a kind of control network. We'll take a couple of machines out of the hundreds we're having to maintain and set them aside. On those we'll run an API and a couple of little databases, and we'll use that API and those databases to control the other machines in the cluster. With this, we have a centralized place to start monitoring the system, and we're able to entrust the state of the system to a number of computers. Computers are really good at horribly boring work, like looking to see if somebody is doing what they're supposed to be doing, reconciling it if they're doing something they're not supposed to be doing, and just sitting there asking "are you doing what you're supposed to be doing?" every five seconds.

So what we actually end up with is one set of servers telling the other servers what to do, and then actively sitting there checking them every few seconds to see whether they're actually doing it. And maybe the intern trips over another server (you really need to fire that intern), and the software will notice: hey, that instance of the application isn't running; I'll schedule it onto a new machine. It's a pretty simple concept: reserve a couple of machines to control the rest of the machines.

So what's actually running on these control machines? This is where we get to Kubernetes. Kubernetes is an open-source project (I actually see a couple of Kubernetes stickers in the audience, which is great) that was introduced by Google and has now been moved to the Cloud Native Computing Foundation, which is one of our sponsors. I'm really hitting the sponsors; I can't wait to get my kickback. It's been donated to the CNCF, it's about 18 months old since its 1.0 release, and what it is is an API and a set of services for creating this control cluster.

When you dive into what it looks like, you can think of it just as this high-level abstract pretty logo and not think about the details. But when we get into the details, really it's a couple of components. There's a primary data store, and this data store is special in that it's replicated, again because no server is safe from the intern. You have those three special servers running Kubernetes, but what if the intern trips over one of those? You make sure the data is backed up; so it's a replicated database. And then you have an API server, which is what everyone interacts with: it's what the command-line tools talk to, it's where the servers check in, it's where the monitoring happens, and so on. So it's pretty simple: if I were to draw this diagram for something like WordPress, it would look identical. You have a database and you have an API server. A pretty approachable architecture; nothing really fancy.
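To make that checking-every-five-seconds idea concrete, the control machines are essentially running a loop like this; desired_state, observed_state, and reconcile are hypothetical stand-ins for the real components:

    # The essence of a control loop: compare what the user asked for
    # with what is actually running, and correct any drift.
    while true; do
        if [ "$(observed_state)" != "$(desired_state)" ]; then
            reconcile   # e.g. schedule the app onto a healthy machine
        fi
        sleep 5
    done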
All right. As discussed, one of the things we have to do when building these systems is face failure. Failure becomes more and more common as you maintain more and more servers, and with 3.5 billion users and only a handful of people in the software and IT industry, you had better be really effective at facing failure. It's not going to be great for you if you have to care every time a server goes down. And it's not going to be great for the user: think about 3.5 billion users and a hundred million servers. That's a lot of users per server. You're going to make a lot of people unhappy if one of those servers goes down for 24 hours while you sit there desperately trying to get the hard drive back or swap out the power supply. You've taken thousands and thousands of people offline from their data and services.

So what we've built is this thing called etcd. etcd is the special database that Kubernetes uses. It was introduced in 2013 by CoreOS, the company I founded and work for. It's the primary data store, and it does this interesting thing where, without human intervention, it essentially runs a little democratic system, an algorithm called Raft, where if machines go down, the remaining members elect a new leader of the cluster and work can continue with no human involved. We've put thousands and thousands of engineering hours into this fancy trick, where computers run their own voting system and elect new leaders, for one simple reason: getting woken up at night sucks. It really is the worst. If you're on call, you don't want to be woken up just because one server went down, and have to log into the machines and decide which one is the leader today.

So as an introduction to etcd (come on, internet), there's a service we run called play.etcd.io. This is like an MMORPG for computer failures. What inevitably happens is that a few people will pull up this site; if not, I'll run it myself. This is an etcd cluster with five machines in it, and we're able to arbitrarily, at any point, tell any one of them to stop. Right now the little circle in green there is the leader. So I'm going to test this idea that the servers are running their own democratic system and will elect new leaders, by clicking stop. Hopefully within a few seconds, because computers are much faster at voting than human beings are, the votes are counted and a new leader has been elected. And what this means is that I'm able to keep writing data into the database, and you'll notice that all the little hashes here update, until somebody takes too many machines offline. So, as a community, I would ask that somebody turn at least one more server back on; democracy kind of breaks down once you have less than 50 percent of people voting. All right, thank you. I'll restart that one; just give me a second.

So what will happen is that I'm able to actually put data into the database, and you'll notice that the hash will update. Hopefully. "Rate limit exceeded." I love you guys. So, once the rate limit isn't exceeded (I have to back off for three seconds), what will happen is that the write goes into the database, it gets replicated around, and then the database ensures that that data is available for reads later.
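If you'd rather poke at the same idea from a terminal, here's a minimal sketch using the etcdctl client (this is the older v2-style syntax; the key and value are arbitrary):

    # Write a key: the leader commits it once a majority of the five
    # members acknowledge the write.
    etcdctl set /message "hello fosdem"

    # Read it back; the committed value survives leader elections and
    # individual machine restarts.
    etcdctl get /message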
All right, you guys are having a lot of fun with this; I'll let you have at it. You get the basic idea: the computers are able to elect a leader and take care of the database on their own. So what should happen, as long as the database stays up, is that you're able to restart individual machines, you're able to write into the database, and you'll see the value stored.

Okay, and then one final thing I'd like to mention about etcd before we move on: this happens. I hadn't really been aware of FOSDEM until a couple of years ago, and we designed the logo for etcd in 2013. Awkward. It's okay; I think the two logos are just really good friends, or maybe relatives, cousins or something. It's fine.

So that's the data store at the heart of Kubernetes: etcd. Now I want to talk more about what Kubernetes actually does, why it's an interesting project, and why we've seen such rapid adoption of it in such a short period of time. Really, what Kubernetes is doing is creating really consistent infrastructure APIs everywhere. This is what Kubernetes looks like in the abstract: maybe you're running on Amazon, so you talk to the Amazon APIs, you create some virtual machine instances, and on top of that you put Kubernetes, and Kubernetes behaves like Kubernetes and abstracts away the underlying infrastructure. And Kubernetes works just as well on Microsoft Azure, or on top of the Google APIs, or the DigitalOcean APIs, so essentially on top of any cloud infrastructure, and the OpenStack APIs too. It's also able to run perfectly fine on top of bare-metal machines. So what you have is a way of shipping and talking about infrastructure that's really consistent and can be run pretty much anywhere.

And this has a few advantages we've really never had before. One of the problems we've had as an open-source community for a long time is fragmentation. Can I get a Vim? Can I get an Emacs? And fragmentation has hit us hard in cloud environments as well: when you choose a particular cloud, you really have to program against their APIs. OpenStack made a valiant effort to create a single standard API; it didn't work out so well.
It's okay. But Kubernetes moves up the stack, and we're creating a single API to talk to any sort of infrastructure. What Kubernetes is creating consistency for, what it's creating an API for, is essentially all the major components of what we think of as computing, at least back-end server computing. Compute: it's able to run in all these different compute environments. Networking: it's able to be flexible and work inside any networking environment, whether that's smart top-of-rack switches, or VXLAN, and so on. It can talk to different storage systems, so you can mount disks whether they're from Amazon's EBS, or an NFS mount, or GlusterFS. And it can do load balancing: the pieces of software that actually route requests down to your application, down to your container. So across all these foundational pieces of back-end infrastructure, Kubernetes has created a consistent API, where I don't necessarily know, when I'm talking to the Kubernetes API, whether that API is being served on AWS or OpenStack or bare metal. A pretty useful property.

And one of the interesting things that follows from this is the idea of federation, which the Kubernetes community has been marching towards. Imagine that this API can run against any compute, and that all of us are having to maintain lots and lots of servers. You have a single Kubernetes cluster here, and the boss of your company decides: you know what, you've been so effective at managing hundreds of servers with Kubernetes that we're just going to double our capacity; if you could manage a couple hundred more, that'd be great for the company. And then a few months later this is going well, so: just a few hundred more, in another data center.

This is actually possible, and it's something that's been worked on inside Kubernetes. It's called federation, and it's this interesting concept where we take the exact same thing Kubernetes does today with individual machines, and lift it up a level, so that we run the exact same infrastructure against entire clusters. You'll notice the architecture is very similar: we have Kubernetes running by itself inside a San Francisco data center, a New York data center, and a Berlin data center, and at the top we're running Kubernetes again, with an etcd data store, only that API is spread out over all three data centers and controls the individual data centers. So it's thinking about data centers instead of individual hosts. A pretty interesting concept, and something that eventually allows you to run a single application across multiple different cloud providers or physical networks. This is all work in progress; about 40 percent of the Kubernetes API today is able to do this thing where you have a federated API that talks to clusters, and the clusters talk to individual machines.
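Stepping back to the consistency point for a second, the practical upshot is that the client side looks identical wherever the cluster happens to run. A sketch, assuming a kubectl already pointed at a cluster (the image name is made up):

    # These commands behave the same whether the cluster is backed by
    # AWS, Azure, Google, DigitalOcean, OpenStack, or bare metal.
    kubectl get nodes
    kubectl run hostinfo --image=quay.io/example/hostinfo:v1.0
    kubectl get pods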
Okay, so Kubernetes has this API; what does this API actually do? It does a number of things. You can tell it to run a container, and it'll go off and run that container for you. But a really important thing it does is let you connect pieces of the infrastructure together using a concept called labels. In a lot of infrastructure we think in hierarchies: the front end and the back end, the load balancer and then the scale-out tier behind it. Kubernetes has a bit of a different opinion about how service discovery and the overall system work. You may have different objects (the gray boxes here; they might be individual containers, they might be services that represent load balancers, and so on), and what Kubernetes does is let you label and group these things in arbitrary ways, which we'll see in a second. Perhaps you're interested in separating out the parts of the infrastructure into their component parts, front end versus back end. Or perhaps you're interested in figuring out who deployed the infrastructure: was it my colleague Rethu who deployed these containers, or was it me? Or maybe you'd like to look at the infrastructure as a separation between the production side and the dev side. Kubernetes has this really flexible system that admits a lot of different perspectives on how infrastructure is organized. Sometimes we think of it in hierarchical terms, but it is really convenient to have this idea of different groupings and a query over them.

This grouping-and-query system is at the core of what Kubernetes does, and it runs throughout the entire way the API works. One really interesting result is that we end up with really decoupled systems, where we can think of the system as a set of control loops acting on top of these queries: looking at what the user has asked us to do, and then making a query to find out what's actually happening in the cluster.

Here's a really practical example (we'll look at it in a second). You're running an application, and you say the application is going to be labeled this way: app=web and environment=prod. Then the system is constantly looking, the way a thermostat looks at the temperature and talks to your furnace. What it's doing is constantly asking: is the state of the system (in this case, one running container) matching the desired state the user asked for? It's constantly checking that state, and it's using these labels to do it. And if it finds that the actual state doesn't match, it asks the system to schedule new instances of the containers. It's a pretty powerful concept: we decouple "what's running" from "what would I like to have running."
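From the command line, that label-and-query mechanic looks something like this sketch; the label keys and values are the ones from the example, and the pod name is made up:

    # Ask for everything labeled as the production web application.
    kubectl get pods -l app=web,environment=prod

    # The same objects can be sliced along a completely different axis.
    kubectl get pods -l environment=prod

    # Labels are arbitrary and can be added after the fact.
    kubectl label pod web-1 deployer=rethu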
So here's a demonstration of that happening. It happens really fast, because Kubernetes is very responsive. What I do is go into the console (this is a console for Kubernetes), and I say I want to have two copies of this application running, and boom, within about one second I have two copies running, and I can drill in and start to look at metrics of that running copy of the application. This happens because I essentially set the thermostat to two, and the system responds.

Now, that works fine for really simple applications, the scale-out web applications. It gets more interesting when you have to think about databases, or running other sorts of applications on top of Kubernetes. What you'd love to be able to do is say "run my database on top of Kubernetes" and have it be really simple. But databases are harder, because they store state; they need to replicate state, and so on. You have to worry about resizes, upgrades, reconfigurations, backup, healing: what happens when instances fail? These are concerns you really don't have in a horizontally scaled application; if your web back end fails, you just start up a new one.

So late last year we introduced this idea of Kubernetes operators. What they enable you to do is specify, at a high level, really complex applications that require active, specific management. What you'd like to be able to say is: I want to have a Postgres database, I want it in a cluster of three, and maybe I want a couple of read replicas and one write replica. These pieces of software that we call operators represent, in software, the human knowledge about how to scale and back up these systems. We've done this initially for the database that backs Kubernetes, etcd: you can just ask Kubernetes, "give me an etcd cluster of three," and it'll handle all the backup and recovery and healing of that cluster over time. It essentially goes through a constant loop: is the cluster healthy? If not, what should I do to make it healthy? I'll take those actions. Is the cluster healthy? What should I be doing? And so on, in a constant loop.
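As a sketch of what "give me an etcd cluster of three" looks like when you ask for it, here's the kind of resource you hand to the etcd operator; the exact API group and kind have shifted across operator versions, so treat the names as illustrative:

    # Declare the desired database; the operator's control loop handles
    # placement, healing, and backups from then on.
    kubectl create -f - <<EOF
    apiVersion: etcd.database.coreos.com/v1beta2
    kind: EtcdCluster
    metadata:
      name: example-etcd
    spec:
      size: 3
    EOF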
Now, the final bit of this: we have the ability to scale our application, and we have the ability to run the application over lots of different, disparate pieces of back-end server and compute infrastructure. The last bit is monitoring. Without monitoring you really have no idea whether you're serving the users, and really no idea whether the system is working at all. So we built this thing called the Prometheus operator. Prometheus is a monitoring system inspired by Borgmon, the system that comes out of Google. And what we've done is take this idea of labels in Kubernetes and apply it to monitoring systems.

So I'll show you an example of this. I have this little application called hostinfo running; hostinfo is deployed on my cluster, and we have Prometheus scraping a lot of the basic metrics about it. What we can do with Kubernetes and these monitoring systems is go all the way from the load balancer, down to the individual container, down to the server that's running that container. So I'll show you that live here. We go into a service, the service for hostinfo (let me try to make this bigger). Inside the hostinfo service we have this label selector, and the label selector finds that there's one copy of the application running. I drill into that, and I can find out, you know, how much RAM and CPU has been used there. I can drill down again and find out which machine this is running on, what labels are on this machine, what version of software it's running, what kernel version it's running. This is really powerful stuff: we've gone all the way from the load balancer, through the running process, down to the running machine, in a few clicks, and the whole way through we have live, up-to-date statistics on the process and on the machine. Pretty powerful concepts.

And then there's one I would also like you to try out. I'm running this application at host.ifup.org (ifup.org is my personal domain), and the application is essentially just keeping a visitor count. At the same time, I have Prometheus monitoring the application directly, getting application-specific metrics. So what I can do now is come into the application and say I want to scale it up to maybe five copies, and Prometheus will immediately respond as those copies are deployed and start to pick them up. And then we can start to do useful things, like say: I want to find all the HTTP requests that have happened, in five-second intervals, get the rate of that, make a graph, and maybe show it for the last two minutes. And so we start to see all the live statistics of what's happening inside the cluster, for this particular application.
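Those two demo steps, scaling up and then graphing the request rate, look roughly like this from a terminal; the deployment name and the metric name are assumptions about what the demo app exports:

    # Scale the visitor-count app to five copies; Prometheus discovers
    # the new instances as they appear.
    kubectl scale deployment hostinfo --replicas=5

    # Ask Prometheus for the per-second HTTP request rate over
    # five-second windows, via its query API.
    curl -G 'http://localhost:9090/api/v1/query' \
        --data-urlencode 'query=rate(http_requests_total[5s])'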
So what's next for Kubernetes? There's a bunch of different work going on; Kubernetes is one of the most active GitHub repos in the world right now, so there's really huge, healthy growth in the open-source community. Better metrics and monitoring across the entire system. Ever-improving security defaults: we have role-based access control, and we can use internet identities via OpenID Connect. Support for more and more cloud platforms. More pre-packaged applications, so you can just deploy a WordPress, or deploy whatever. And if you're interested, we have an entire tracking repo of features.

Now, the last thing I want to touch on before I go is why I'm here, what drives me to build all this crazy technology. CoreOS has a really clear mission, and it's a pretty straightforward one: secure the internet. I think we'll be done here any day. What CoreOS is trying to do, why we build this stuff and why we think all of it is important for securing the internet, is that if we go back to our 3.5 billion users, these people are pouring their lives into these servers, and it's our responsibility to take care of them in the best way possible, using the best possible technology. Again, it's their commerce, it's their documents, it's their personal photos, it's everything they've communicated. With all these servers, we need to take responsibility. This is a heavy, heavy responsibility.

So we like to think of ourselves as building what is essentially self-driving infrastructure: taking away a lot of the toil and the concerns that all of us have in maintaining infrastructure, and making it more like a set of applications, where you don't have to be an expert, where you don't have to be an expert in patching every single component. These infrastructures are getting dizzyingly complicated, and it's impossible for us to be kernel experts, database experts, and experts in our own applications all at the same time. We need help and automation to make all of this successful.

And the reason there's urgency here, the reason this is important, is that without expertise we will inevitably miss the latest security update. As engineers, across the entire ecosystem, at any given time we have maybe a month, maybe two or three months, before the next internet-wide panic, where some horrible security vulnerability comes out and we all have to respond to it. Who had to respond to Dirty COW? Who had to respond to Heartbleed? These things happen constantly, and we need automation to ensure that our software systems are caught and fixed, and that our users are secured. Because remember: we're responsible for more and more millions and millions of users when we run these servers.

We know how to make really, really secure systems, right? This is the most secure computer in the world. But it's not interesting, and it's not interesting because it's not connected. So we have to worry about these vulnerabilities, because the connected systems are the interesting ones. (And I wanted to make a diagram of the internet; just searching for Creative Commons images of "internet" is great, so I wanted to share a couple of those with you.)

Oh, actually, this: we talked about this time window between when security vulnerabilities are disclosed and when we fix them. This is not something that's in the popular culture of our world, and if there's anything I hope you take away from this talk, it's the importance of patching and keeping our software infrastructure up to date. John Oliver, if you haven't seen him, has a great show, and he did this segment on the safety and security of mobile devices. He has this great moment where a bunch of (supposed) Apple engineers find out about a new zero-day in the iPhone and have to respond to it in a responsible manner. At any given time, all of our computer systems, as they say in this clip, are dancing madly on the edge of this volcano: this bitter edge between systems being secure and insecure. And the only thing that is able to guarantee that security over time, with the idea that the next Heartbleed is just around the corner, is our ability to patch and update these systems. That's what keeps us from falling into the volcano. So when we run software, we automate its updates, no matter what it is. That's what we do at CoreOS.

If you want to get free stuff: get Kubernetes for yourself and run it on your laptop with Minikube. We have a Tectonic free tier; Tectonic is our Kubernetes product.
You can find it at coreos.com/tectonic. You can join us in the community and help us build great code: there's cool stuff under github.com/coreos, whether you like operating systems, databases, identity, or anything else. And there's github.com/kubernetes, the largest and fastest-growing open-source community on GitHub; there are lots of charts showing you should join up with this stuff, it's exploding. We're CoreOS, and we're helping to run the world's servers. We have offices in Berlin, San Francisco, and New York if you want to join us directly, and we have an event in May in San Francisco. And that's all I've got. Thank you for your attention.

So if you have questions, we have five minutes. But it's cool. Yeah. Yeah, I'm good.