Hello. Good morning to you all. It's Sunday morning, which is very early for most of you. I'm going to introduce Steph Rogers, and he's going to be talking about training machines to be team members. So, Steph.

Good morning, everyone. Thanks for coming out so early on a Sunday morning. Good God, what made you do this? Is that on? Good morning. I am not a cyborg, but I am part of a cyborg team, and that's what I'll talk to you about today. Why would anyone do such a thing? What does it mean, and how do you do it on your team? I'm going to show you the proof that causes me to believe that this exists, and you can see how you might bring it into your work and your projects.

So let's talk about why. Why would we want a cyborg team? There's a really cool, short book that's worth reading called The Great Stagnation, by a man called Tyler Cowen. It talks about all of the progress, the advances that we've made over the last 250 or 300 years, and where they've come from. Much of the research and data points to the fact that they come from various pieces of low-hanging fruit in the world: things like far fewer children dying in childbirth, people getting real health care, running water in houses, everyone getting an education, new lands to expand into like the Americas, and then things like electricity being implemented and deployed across society. By consuming that low-hanging fruit, by having it in place, we saw more advances, we saw productivity increases, we saw inventions and innovations just streaming out.

But the book says, and the theory goes, that we're starting to see the end of that, that the low-hanging fruit has been used up. We've eaten all the low-hanging fruit of modern history. And you can see this a bit if you look around. I have to tell my son, Kai, that the fastest civilian aircraft, the Concorde passenger aircraft, no longer flies, and that it stopped before he was born. I have to tell him that humanity went to the moon in the days of his grandfather. There are various cases where we see stagnation coming along. And the theory goes that we need to find new low-hanging fruit in order to push ourselves past that and see the next wave of innovations, the next wave of productivity increases, the next set of awesome advances.

So don't we already have that? Look at what we have here: machines, computers. They could be the next set of innovations, the next set of advances. The problem is, when you actually look at it, as Robert Solow did, one of the fathers of modern economics, he said that you can see the computer age everywhere but in the productivity statistics. We see machines all over the place, doing all sorts of things, but they have yet to have a big effect on the amount of innovation, the amount of progress we see in society. And I believe that's because we use machines as tools, but we don't use them as team members.

Let's look at another analogue for this. Back in the industrial revolution, and we like to pull the industrial revolution into everything, but bear with me here, there was a power source: a steam engine. In a factory like this, it sat at the top, and all the machines were connected to line shafts. A massive steam engine drove those line shafts, and everything revolved around this power source.
The steam engine had to keep going all the time, as long as any one machine in the factory was needed for some task. People's lives revolved around this. They had to come in not because the product was ready for them to work on, but because the power was there, so they had to do it then. Things that needed more power were closer to the steam engine; things that needed less were further away. So when electric motors first came out, people figured: we can make this better. We'll have some significant productivity increases here, some advances. We'll replace that steam engine with an electric motor, and everything will be better. Well, it turns out that wasn't the case. Fewer people died, and there were some moderate boosts to what they were doing. But it wasn't until people actually thought differently, took that electric motor, put it in all the different machines, ran electricity throughout the factory, and reorganized everyone's schedules and work around the fact that they were no longer tied to the power source, that you could move different parts of production through the factory differently and so on. That took 20 or 30 years before the change really produced a massive productivity increase and advance in society. And that's what we're seeing with machines now. So that's why I'm talking to you about this today.

A cyborg team is a team that is part human and part machine. It's a team that doesn't use machines as tools, but uses them as team members. Why is this low-hanging fruit? Why is it easy for us to do this? Because we as engineers, as developers, can speak the language of the machines. In our projects and in our companies and in our open source communities, we can pull this off simply. We can talk to the machines and tell them what to do. We can interact with them as team members. Imagine that you were doing something completely different, mining or something fundamentally different. It would be very hard for you to instruct the machines what to do. You couldn't teach them as team members; you wouldn't have that skill. This is low-hanging fruit for us, and we must take advantage of it.

So, enough waving of hands. I want to show you how this is actually working. This is the proof, the proof that convinced me that this was the case. The proof is Cockpit, a Linux session in a web browser, and later, well, not much later, at 11:30 in the config management devroom, we're going to be diving into this and talking about what it does. But in case you haven't seen it, this is what it looks like. You log into a Linux system and you can check out what's going on, see the state, the load; there are so many different things you can do here. I could talk for hours about this. You can see containers, a cluster of containers, and inspect what's going on in each one and how this thing is put together. You can discover and learn about the system.

Let me talk about how this works, because this is part of the proof. This is JavaScript. This is the Chrome JavaScript console. And in the console, we're typing a Linux command here and we're actually seeing results from it. Cockpit interacts directly with the system from JavaScript. So here we're spawning a process, and here we're accessing a D-Bus API. The interesting thing about this is that the pieces of Cockpit outside of the JavaScript don't know anything about these APIs. So this is the hostname API. That's part of systemd.
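As a rough illustration of the kind of API being relayed here (this is my own sketch, not part of the talk's demo), you could query the same hostname1 interface from Python on any systemd machine using busctl:

```python
# Sketch: query the org.freedesktop.hostname1 D-Bus API that Cockpit relays to the
# browser. Assumes a systemd-based machine with busctl installed.
import subprocess

def hostname1_property(name):
    out = subprocess.run(
        ["busctl", "get-property", "org.freedesktop.hostname1",
         "/org/freedesktop/hostname1", "org.freedesktop.hostname1", name],
        capture_output=True, text=True, check=True)
    return out.stdout.strip()          # e.g. 's "myhost"'

for prop in ("Hostname", "KernelRelease", "OperatingSystemPrettyName"):
    print(prop, "=", hostname1_property(prop))
```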
The hostname API has various properties here that you can see; you get information, and we can call different methods on that API. Cockpit itself doesn't know about these APIs. It just relays them to the system, and there's a component that translates these into actual Linux calls. We can access Unix sockets. We can monitor files. We can call REST APIs on the system. You name it, we can do it.

These folks here put Cockpit together. Actually, some of them have moved on to other things. And you can see that guy in the middle: he's very incognito, he didn't want to show his picture, and you can understand why; showing his picture at an event like this is a big no-no. But these folks put Cockpit together and made it access so many different parts of the system. You can see there are eight people who work, or have worked, on this, but they've managed to talk to this many APIs on the system. This is a quick list that I built of the different things that we talk to on a Linux system. There are obviously big APIs, like the systemd, Docker, and Kubernetes APIs. There are tools like QEMU. There are processes like ssh or ssh-add. There are files in here too, like the password file. You name it; we're interacting with all sorts of different parts of the system.

Take those 90 different APIs, and then consider that Cockpit works on 15 different Linuxes or products: Linuxes like Ubuntu, Debian, Fedora, Fedora Atomic, RHEL, and then of course other things like Kubernetes, OpenShift, or RHEV, each a place where Cockpit interacts with a different version of those APIs. Each of those 90 APIs revs at a different pace on different products and different operating systems. There are different branches with different functionality as well. There are different browsers that we then have to interact with; these are the ones we test with, and of course people use more than that. And then, on top of all that, the project does weekly releases. Actually, now it's bi-weekly, but for the longest time we did weekly releases, getting that functionality to work stably every single release.

So this leads to a real combinatorial explosion. If you do the math there, that number adds up to over a million. You can fudge this in different ways and throw different factors in there, but really, this is insane for a team like you saw earlier to pull off on its own. It just doesn't work. The effort of a solely human team does not scale past a certain complexity point. You will reach a barrier where you can't take your product any further. Some companies, the Googles and the Facebooks among us, can push this point out a little further. They can throw more people at the problem and try to get something more complicated, something more integrated, something more advanced working. But you will reach a barrier where you just can't throw enough people at the problem to make it work. Cockpit is one of those cases: it would be impossible for a purely human team to handle that massive combinatorial complexity in an effective way.

So that's why cyborg teams, machines as team members, matter. Let's look at the proof, the various ways that we can see this in action. And I encourage you not to look too closely at the nitty-gritty details. When you look closely at a fish, when you take it apart and look inside, you'll see goop, you'll see little bones and stuff. But when you take a step back and watch it swim, you see that it's a life form; you watch what it does, how it acts.
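Before looking at how the bots act as team members, here's the back-of-the-envelope version of that combinatorial explosion. Only the 90 APIs and 15 platforms are numbers from the talk; the other counts are purely illustrative assumptions:

```python
apis = 90        # system APIs Cockpit talks to (from the talk)
platforms = 15   # Linuxes and products supported (from the talk)
branches = 4     # assumed: maintained branches with differing functionality
browsers = 4     # assumed: browsers regularly tested
releases = 52    # roughly one release a week over a year

print(apis * platforms * branches * browsers * releases)   # 1,123,200 combinations
```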
So I want to show you how these things act in a project like Cockpit, how machines act as team members. Here are the various ways that they do so: bots owning mundane work, pair programming with bots, humans training the bots, bots learning from the humans, bots shipping Cockpit, bots as committers, and the team stopping without bots. We'll look at each of these.

So, owning mundane work: there are many examples of this. So much of the work that the bots do is trivial stuff that people kind of scoff at. They're like, oh, of course I could do that. Here's an example. A bot came and opened an issue on GitHub on the Cockpit project and said: you need to pull the translations from Zanata. Zanata is a place where translators congregate and translate the project. Now, a human could go and do this task, but it just so happens that a bot decided, I know how to do that, and started working on it. You can see that the title at the top changed to "work in progress" and the bot starts actually performing this task. The task itself is rather trivial. It's a bunch of commands that you or I could run: pull down the translations, build them, upload the new translation template and so on, and make a commit out of it. The bot turns this into a pull request, and a whole little army of bots comes to test it on all sorts of different operating systems, with all sorts of different browsers you can see there, and containers and so on, to make sure that it works. Actually, in this particular case, that very pull request had a bug that broke some of this stuff and it wasn't merged. But typically a simple pull request like this, after testing, after all the bots have verified it in thousands of different ways, gets merged. A human says, okay, this looks good, and merges it.

So this is the kind of mundane task that you might see the bots doing. There are things like pulling security updates from NPM automatically: noticing that a certain NPM dependency has an update, trying it out, bringing it into Cockpit, opening a pull request, making sure that it works, and then of course all these tests run. There's bringing in new versions of operating systems so that we can test against them, with new dependencies, new combinations of those 90 different APIs. There are all of these trivial tasks, and they are things that take up your time every single day. Some people spend 10 or 20% of their time on this. Some people spend 80 or even 100% of their time on mundane, repetitive work that they shouldn't have to do. Of course you use machines to do it, but you should be able to hand it off to machines entirely.

The biggest one is, of course, testing: testing that combinatorial explosion. This is from September of last year, the number of tests and the number of operating systems that were booted in VMs in order to test a project like Cockpit. You can see there were half a million VMs run in the month, and this is that repetitiveness of mundane work. Imagine you tried to hire enough testers to actually do this; it would be crazy. This is really the core of where Cockpit gets its advantage from using machines: repetitively testing all those millions of combinations over time to make sure that the project is still functioning.
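To make the mundane-work idea concrete, here's a minimal sketch of what such a bot boils down to. The build targets, branch name, and base branch are hypothetical stand-ins, not the actual Cockpit translations bot:

```python
# Sketch of a "mundane work" bot: refresh translations, commit, open a pull request
# so the normal tests and human review apply. Commands here are placeholders.
import os, subprocess, requests

branch = "bots/update-translations"
subprocess.run(["git", "checkout", "-b", branch], check=True)

subprocess.run(["make", "upload-pot"], check=True)    # push the new translation template (hypothetical target)
subprocess.run(["make", "download-po"], check=True)   # pull updated translations (hypothetical target)

subprocess.run(["git", "commit", "-am", "po: Update translations"], check=True)
subprocess.run(["git", "push", "origin", branch], check=True)

# Turn it into a pull request so the usual testing and human review take over.
resp = requests.post(
    "https://api.github.com/repos/cockpit-project/cockpit/pulls",
    headers={"Authorization": "token " + os.environ["GITHUB_TOKEN"]},
    json={"title": "po: Update translations", "head": branch, "base": "master"})
resp.raise_for_status()
```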
So, yeah, back to the testing: we can see those numbers again here; this is a more recent run. One of the interesting things about this is the data. I'll show you how we use it later, but it's data about the tests that were run, and in the first line, we can see that we've pulled this data and we're counting the number of lines inside of it. We see that there are more than three quarters of a million different tests run. When we look for the failures, there are far fewer, and that points to a very strange situation where most of the work that these bots are doing is basically useless. You throw it away. It's like: nope, that worked; nope, that worked; it's green. It's repetitively doing the same thing over and over and over again in a very frustrating way. So in some ways we're abusing these machines, just making them do repetitive work over and over again to derive that small amount of value, to find the places where things do fail. These are things that we would never do to humans. Think about it: it would be like a crazy sweatshop. But we do them with bots.

Another way that we see bots interacting on the team is pair programming with them. This happened during the run-up to a RHEL release. Now, you may know about RHEL, Red Hat Enterprise Linux. We support it for 10 years. That means stuff that goes into RHEL has to be relatively stable. The people who put it there, our team, if we put Cockpit there, we need to know that we're not going to be embarrassed about it in six months, much less 10 years, and that we're going to be able to continue to support and update it. So typically when you do that in a team, you have this long stabilization period that leads up to it, or a freeze period, where you say: during this period of time we won't make any big changes, we'll make tiny little fixes, and we hope we stabilize the thing and don't break it. That was not what happened with us. As we were running up to our first release in RHEL, we had big changes landing in the week beforehand, thousands of lines of code changing, and I was asking some of the folks on the team, how in the world is this okay? How can we do this? And their answer was: well, we're actually pair programming with the bots. Marius told me this. He pushes code into the testing so often, with different combinations, sometimes breaking things on purpose so that the bots will find problems, running through 50,000 or 100,000 iterations of checking whether a race or a bug is there, finding all the possible problems, so that you can deliver this.

Another way that we see this in action is humans training the bots. This is actually really critical. When you have a team, the team needs to interact well with each other, and the humans need to be able to teach the machines, teach them constantly about what they need to do: whether it's tedious tasks, whether it's telling them to go and do something differently today, and just today, and tomorrow do it the usual way; teaching them about process that the team doesn't want to have to deal with itself. This is usually things that are broken, things that are awkward, build systems that the team hates but has to push its software through anyway: making sure that the bots can do the work that the humans don't want to have to do. If you look at the Git log, you can see that everyone on the team is contributing to teaching the bots how to work. And this is constantly going on. In fact, if you estimate it, it takes about 25 to 30% of the team's time to go and teach the bots about the stuff that they will then repetitively do, that they will take care of. This is normal, though. In a given team, spending that much time talking with each other, bringing on new folks, mentoring them, discussing what you want to do next, that's a decent amount of time. We're just taking that and applying it to the machines, to make sure that they pull off what the team stands for, that they're working in the same direction.
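Going back to the pair-programming point for a moment: "push it into testing until the race shows up" is conceptually just a loop. A minimal sketch; the test path is a hypothetical stand-in, and the real thing fans out across many VMs rather than running serially:

```python
import subprocess

failures = 0
for i in range(50_000):                      # the 50,000-iteration figure from the talk
    result = subprocess.run(["./test/verify/check-networking"],   # hypothetical test script
                            capture_output=True)
    if result.returncode != 0:
        failures += 1
        with open(f"failure-{i}.log", "wb") as log:
            log.write(result.stdout + result.stderr)

print(failures, "failures in 50000 runs")
```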
Bots learning from humans: we see this too. This is a very recent thing, something that I've been working on and am interested in, and it has to do with machine learning. Bots can actually watch what the members of the team are doing and try to replicate it, or derive some information from it. We do this with test flakes. I'll talk about this more later, but the bottom line is that some tests will fail for no reason whatsoever; it just seems random. These pop up in any testing system you've seen, and people typically refer to them as flakes. When this happens, a human will go and say: well, that's not related to my pull request, that's not related to my change, this is insane, and either retrigger the test or figure out a way to merge it anyway. The bots will watch that behavior and learn from it. We tried two different techniques, a neural network but also unsupervised clustering, so that they can then mirror what the humans did: the humans treated a failure like this in this way before, they merged it anyway, so maybe the bots can now learn from that. You can see they're starting to put these numbers at the bottom to indicate that. Hopefully soon we can actually take action based on it.

Bots ship Cockpit. This is a bunch of routine work that so many of us are involved in across various distros. Every distribution does this completely differently, and it is very awkward. In the Cockpit team, a human signs a tag in Git, with some release notes, some release lines in the git tag, and then bots come along and do all sorts of stuff: making the tarballs and patches for uploading to different distros, updating the spec files or the Debian control files. They release preview builds like COPRs or PPAs. They'll push into the Fedora packaging and start the whole process of getting something released into Fedora. They'll upload packages into Ubuntu, into Debian. They'll upload tarballs. They'll do container builds on Docker Hub. They'll upload new documentation, and so on. There are all these various tasks that they'll go and do. If you take all of these together, and everything goes well, it might take a person about a day and a half to do this stuff, and if you do it every week, that's a phenomenal amount of time spent here. If things break, or you have to repeat something, or you have to wait on some step, it can take even longer than that, and it's just a massive waste of people's time. So having the bots own this part of the work allows the humans to actually do some interesting work, create something, and not have to sit there doing this repetitively as if they were in a factory or a sweatshop.
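To sketch the unsupervised-clustering side of that flake work (my own illustration with scikit-learn, not the actual Cockpit code): group failure logs by similarity, then look at how often humans shrugged off the failures in each cluster, so a new failure landing in a mostly-flaky cluster can be flagged.

```python
from collections import defaultdict
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy inputs; in reality these come from the stored test results shown earlier.
logs = ["Timeout waiting for selector #networking",
        "Timeout waiting for selector #storage",
        "AssertionError: expected 'active' but got 'failed'",
        "Timeout waiting for selector #networking-firewall"]
retried = [True, True, False, True]   # did a human retrigger / merge anyway?

features = TfidfVectorizer().fit_transform(logs)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)

flake_rate = defaultdict(lambda: [0, 0])
for label, was_flake in zip(labels, retried):
    flake_rate[label][0] += was_flake      # times humans treated it as a flake
    flake_rate[label][1] += 1              # total failures in this cluster

print(dict(flake_rate))
```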
Bots are committers on the project. We've given our bots a name: Cockpituous. You can see that the fourth-highest committer to Cockpit of all time is Cockpituous; it's the bots. Now, if you look at the actual commits here, you'll see that they're not that interesting. They're the mundane commits of the project, really the lowest tier of uncreative commits, but the reality is that those would all otherwise have to be done by humans; humans would have to spend their time on this. So by freeing up the humans to do the interesting work, suddenly we're able to make dramatic progress in the project.

And lastly, the team stops without bots. We saw this a while ago. We didn't yet have a distributed architecture for these bots, and one of the key machines that they were running on went down, and we basically stopped. When you lose a significant portion of your team or your community or your contributors, it is a dramatic blow to the project, and this is exactly what happens when the bots go away. The work on the team stops. In this case we actually had to spend two weeks figuring out how to make sure this never happens again, how to implement it so that it's distributed enough that we don't have a failure like that ever again.

So those were the proof, the ways that we've seen this in action. I don't think this is the only way you'll see it happen. What was cool is that we started off in the project not knowing about all these things, not having a master plan of where we wanted to get to, but by implementing some basic rules, some basic fundamentals, it grew, it was sustainable. It grew a life of its own and became something that completely defined how we worked as a team and made the project possible. And so I wanted to share the rules that we discovered with you today. These are the key to making it happen in your project. And hopefully you'll be able to do way cooler stuff; the results will be way cooler than what we have, and that's what's exciting to me.

But let's look at the principles behind these rules. Let's say we want a given behavior: the behavior of having a cyborg team, where the humans don't do the mundane work, the bots do the mundane work, the humans teach the bots, and the whole thing is a real, coherent team. At the beginning of the last century, there was a social psychologist, one of the early social psychologists, and he proposed a theory that behavior is driven by two forces. There are the driving forces that push you in a specific direction, that make you want to go there, make you want to have a certain behavior, and there are the restraining forces that prevent you from behaving in a certain way. These two reach a sort of equilibrium. If you think about, let's say, driving your car to a destination, you have the driving forces: I want to get there faster, let's go faster, right? And then you have the restraining forces: oh my God, I'm going to get a speeding ticket from that camera, so I'm going to drive slower. And at some point you make that decision: okay, this is the right balance between the two. So the way to change behavior, it turns out, paradoxically, is that you first diminish the restraining forces and then you increase the driving forces. Kurt Lewin was the one who proposed this, and it has become one of the foundations of behavioral psychology. So another way to look at this is: why don't we have cyborg teams in a given project today? What is preventing that from happening by itself? There are certain restraining forces preventing it, and we need to reduce those in order to make it happen.
Afterwards, of course, we can encourage it, we can inspire people to do this, but first we need to remove the obstacles. So here's what we came up with as the rules that reduce those obstacles, that make this possible and make it sustainable.

Number one: teaching a machine must be as easy as teaching a human. If you get an intern or a new person on your team, sometimes you make them do the mundane, menial tasks, go and get the coffee, and you should be able to teach them. You should be able to say: well, no, don't get it from here, go get it from that other building because they have way better coffee, right? And you expect that to stick from that point on. You expect to be able to teach and change and instruct people. It's not a super easy process, but it's something that we're comfortable with and we know how to do. We have to make it as easy to teach a machine as it is to teach a human. In our case, we put the code for the bots in the same Git repository as our source code. All the team members know how to contribute to the source code for Cockpit, so now they know how to contribute to the bots. That's not the only way to do this, but it's really important to make sure that you don't set up an artificial barrier there.

The second is that machines must produce feedback into the team's workflow. Even if that workflow is shit, it doesn't matter. The bots, like you saw before, when they're working on an issue, put a work-in-progress tag at the top, just like a human would; that's how the team works. Try to remove the differences between how a bot works and how a human works on the team. Use the same things. Over time, you can change your workflow to be better for both the machines and the humans, but make sure you don't set up an ivory tower, a special place where the bots work while the humans work somewhere different. That does not contribute to building an actual cyborg team.

And lastly, a human should be able to impersonate a machine, and a machine a human, given the right credentials. This is really important. If there's a bot doing a task on your project, if it's doing CI, if it's doing releases, I should be able to take that bot and run it on my machine. Of course, I need the credentials to make sure it works. And I should be able to hack on it, change it, and perhaps throw it away and build a better one. This is a very open source principle. Imagine if you had to call Linus every time you wanted to run a build of the kernel, if someone else did the builds and you didn't get to reproduce them on your machine. In the same way, if there's a bot doing some action and you're not able to do that same thing yourself, that's a big problem. And vice versa: if there are tasks in your workflow that only a human should do, make sure that you don't set up a technical barrier preventing the bots from doing them. It may be your policy; on our project, we have a policy that clicking the merge button, merging an actual pull request, is something that a human does, but there's no technical reason why a bot couldn't do that in the future if we needed to change that or remove that repetitive part of our lives.
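Rules two and three in miniature, as a sketch (the issue number and token handling are illustrative): the bot produces its feedback through the team's normal workflow, here by marking a GitHub issue as work in progress, and any human holding the same credentials can run exactly the same code.

```python
import os
import requests

ISSUE = "https://api.github.com/repos/cockpit-project/cockpit/issues/1234"  # made-up issue number
auth = {"Authorization": "token " + os.environ["GITHUB_TOKEN"]}

issue = requests.get(ISSUE, headers=auth).json()
if not issue["title"].startswith("WIP: "):
    # same feedback a human team member would give: mark the issue as in progress
    requests.patch(ISSUE, headers=auth,
                   json={"title": "WIP: " + issue["title"]}).raise_for_status()
```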
And back to the core of all of this: to communicate with the bots and tell them what your team stands for, what is good and bad, what is right and wrong, where you want the project to go and where you don't, you have to write tests. The tests are what keep a bot from just doing random actions and instead have it working in a given direction. There's a good example of this. There was this cool project done with a Pac-Man video game and machine learning: a camera watching the screen, a microphone listening to the sound output of the machine, and a little controller for the joystick. Without teaching the machine to play Pac-Man, but just giving it the test of what is a good noise and what is a bad noise coming from the machine, the machine is able, with machine learning techniques, to learn how to play Pac-Man. But without that test, you can see it doesn't even start. Without the tests that tell the machine when it did something good and when it did something bad, all of this falls on its face. You basically have machines doing random tasks and random things, and they become a workload for you instead of a benefit to you. Take those trivial pull requests we saw earlier, say the one I was talking about that brings in a new Node.js dependency. If the bots were just randomly updating Node.js dependencies, it would be an incredible amount of work for a human to then go and say: well, does this work? Does it not? Is this valid or not? But by having all those tests in place, we're actually able to have the machine do most of the work, and the human just approves it or rejects it.

So we don't have much time, and there are so many things we could dive into. These are all sorts of things that we did or didn't do in the project. There was a principle about not assuming bots can't. This goes back to one of the basic observations of machine learning: things that we think are easy for a human to do are actually really hard for a machine, such as walking down stairs, while things that we think are hard, such as playing chess, are actually relatively easy for a machine to do effectively. One of the things we did is not rework process. Just because you have bots doesn't mean you have to change everything. Have them do the stupid junk that you don't want to have to do and free up your time, rather than reworking everything to make bots fit your workflow. Containerization is a really awesome way to make bots work well. The bots we have are distributed; they scale out; nothing orchestrates them. They go and look for tasks to do and see what's available. They'll try to do a task, and they use collision avoidance and detection to figure out if someone else is already doing it, just like two humans would. The bots are self-validating, so when someone changes them, you can have the bot check itself and do its task to see if it still works. And we talked about avoidance and detection for collisions, and then machine learning; I had that highlighted from the last time I gave this talk because people were interested.

But I'm happy to go into any of these topics, or we could take questions and dive into things that were unclear or that I skipped over. There's a question here. Yeah, okay, we can do that. Well, actually, here's something that I'd like to show first, because it gets at the core of that question: imagine you wanted to start on this. Where do you start?
And of course, one of the places we say you start is with the tests. If you don't have the tests, it just doesn't work. One way is to add tests to a Fedora or Debian package, to start to define the behavior of that package, not just its content. That's a good place to start. Another place to start is of course with Travis CI. Many of you, most of you I hope, are already using unit tests in some way, and Travis CI is a good way to start there. We have an example here of running a full userland for integration tests. You know, Travis uses Ubuntu; you may want to use something else, and here you can use containers or VMs to do this. There's an examples directory inside of this project, github.com/cockpit-project/cockpituous, showing how to use a full userland to do your tests. You can run your testing and your CI and your bots in Kubernetes and containers. CentOS CI offers this to open source projects, and you don't have to be in CentOS or buy into it to be part of this. This is just something they want to do, one thing they want to contribute to open source, and I applaud them for it. They'll give you a Kubernetes account to run bots and testing and CI. Lars here did a really good example of how to run VMs inside of containers if you need to do that for your bots and testing. We do that a lot. This is actually a clickable link; I'll upload the slides, and it's a good place to start for a clean implementation. And then there's this container, the Cockpit release container, for the bots that do delivery: updating an RPM spec file, updating a Debian control file, doing builds and all of that stuff.

So let's look at that one, actually, and I'll show you the code for it; that's an interesting place to start. Oh, that's a good point, I'll show it in the terminal. So I have this code checked out. It's under the Cockpit project in a repo called cockpituous. And in this project here, you can see these various directories. There's a release one. The release directory contains all of the tools and scripts and services that the bot uses for releasing Cockpit into all of those different venues. When we look really closely, we can see these are all really shell scripts. They're pieces that the bot uses to do its job; they're parts of the various tasks that it does throughout that process. Let's look at where this is defined in the project, though. Here is the cockpit repo, and you can see there's a bots directory, and inside of here are lots of bots. The top level of a bot will always be the team member, and then it uses tools, like these release tools, that hopefully other projects can use too. The top level of the bot, the thing that actually is the actor that goes and does stuff, is specific to the Cockpit project, and so it's in the cockpit repo, not somewhere else. I've forgotten the exact Vim incantation to increase the font size. But if you look at this file, you'll see all the various jobs, and these are the various scripts that this bot runs; it'll go and perform these tasks. You can see there are awkward things in here, like: oh, I need a Kerberos keytab, sorry, ticket, to upload to Fedora. And you'll see that the top level is not very pretty. All the various pieces that it uses, though, are reusable.
So this is one way that it interacts with the project and mirrors the fact that it's a team member. But I'll show you another. Let me look at this here; I'll look at the collision avoidance and detection. Sorry, here we are. At the top of the Cockpit project you'll see this file called .tasks. This is where most of the bots, besides the release bots, go and look for the kind of things that should be done. When you run this file, it runs various scripts and produces output, and the output is just lines of shell commands that are ready to be run. You can see here that many of these are tests run on specific commits; there's the bot called tests-invoke. But there are also other bots here, like image-prune, and various other tasks, like refreshing images: pulling all those 90 dependencies together and making a new image that we can verify Cockpit against. So we print out all of these, and then the bots will choose one from the list. They'll randomly choose one and run it. Now, if two bots choose the same thing, they'll figure out that they're both touching the same GitHub issue, or that they're both updating the same task, and the one that didn't win will back down. But this is a good example, I think, of showing where the bots operate and how they work. I hope that answers the question.

All right, another question. Yep. Okay, that's really simple; I hope that's not disappointing. Self-validating and self-aware bots: one of the first things that a bot does when it's started, if it's operating on a certain pull request or issue, is check out its own version from that issue. It will check out the code for that bot as relevant to the task that it's doing. So let's say you run a bot on a certain issue or pull request and you tell it: I want you to operate on this. It will check out the code for itself from that one, and then run itself, with slightly different behavior than everything else. But more than that, a bot will actually run checks like this. In this case, the first one is checking: I'm running in a container, but do I have access to /dev/kvm? Can I start VMs? Can I perform tasks that need to run VMs? Am I running in such an environment? The second one is: am I in the Red Hat network? Can I access the next version of Red Hat Enterprise Linux, because that's not out in public yet? And if not, then I'll skip those kinds of tasks. I won't do those tasks; they're not valid for me. Another instance of this bot running somewhere else will perform them. And this points back to how the bots are set up. They're organic and distributed. There's nothing orchestrating them. Multiple instances of these things run all over the place, in many cases in Kubernetes, where you can easily scale up the number of containers that you run in a pod. But I run them on my laptop from time to time, on a machine running under my desk; these bots can run all over the place, as long as they have the credentials to do the tasks that they need. And of course they check that they do: am I on this network? Do I have these credentials? Can I perform this task?
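Pulling those pieces together, the loop each bot instance runs is conceptually something like this. This is a sketch of the idea described above, not the actual Cockpituous code; the capability checks and the claim step are simplified:

```python
import os
import random
import subprocess

def capable(command):
    # Skip tasks this instance can't perform, as described above.
    if "tests-invoke" in command and not os.access("/dev/kvm", os.R_OK | os.W_OK):
        return False                        # can't boot VMs here
    return True

def claim(command):
    # Collision avoidance: in the real system this goes through shared GitHub
    # state (statuses, issue updates); stubbed out in this sketch.
    return True

# ".tasks" prints one ready-to-run shell command per line
output = subprocess.run(["./.tasks"], capture_output=True, text=True, check=True).stdout
tasks = [line for line in output.splitlines() if line.strip() and capable(line)]

while tasks:
    task = random.choice(tasks)             # no central orchestrator: pick at random
    if claim(task):
        subprocess.run(task, shell=True)    # do the work; results get posted publicly
        break
    tasks.remove(task)                      # someone else got there first: back off
```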
Then they'll go and ask; in our project we use GitHub, so they'll ask GitHub. They'll query the API, they'll look for outstanding issues, things that are not done yet, things to test, issues that they know how to solve, like the updating of translations we saw earlier. They all do this independently and go and look for this stuff. They post their results publicly. That's really important; in an open source world, you can't have them hiding what they're doing. Even if they're running behind a firewall, or under someone's desk on a machine that's not accessible from the internet, they'll post their results publicly. They then update GitHub with whatever changes are necessary, for example the fact that a task was completed, or a pull request was updated or created. And lastly, they'll try to find each other to share state. Sometimes one of them will do a machine learning task to actually train itself from some data, and once it does that, we don't want them all to have to spend time on it. The bots will exchange that data, or exchange the images that were built for the various operating systems that they're testing on, and so on. We have a mechanism for doing that which is very simple: it's HTTP based and uses hashes to represent state and things like that. I guess that's one of the messages here: this is all very accessible to you in an open source project. By following those basic rules, by enabling that dialogue between the machines and the humans in a productive way, so that they act as team members, you come to all sorts of cool solutions, and you don't have to worry about spending the rest of your life implementing this stuff.

So, any other questions? Yeah? So the question is: as far as training the machines goes, how much is it humans training the machines, and how much is it machine learning techniques? What is the balance between the two? Honestly, most of it is that low-hanging fruit: the low-hanging fruit of making it really trivial for the humans to talk to the machines and removing the barriers there. Most of it is just humans telling the machines: I don't want to have to ever do this again, so I'm going to take what's in my head, how I know how to do this task, code it so I don't have to ever do it again, and let that run. That is the low-hanging fruit. We also started, very recently, doing machine learning with techniques like this. There's a theory I want to prove, and I'm in the process of doing it; it has to do with test flakes. I feel that test flakes, random failures in the tests, are really like fuzzing. They're mutations that we should be taking advantage of; they're the machine finding flaws for us. In our case I've experimented with these kinds of ML techniques to actually take advantage of those flaws.

Time is up, and I'm sorry about that. I love talking with you about this and learning from you about it as well. I'd love to discuss this more; there's another talk about Cockpit at 11:30 in the config management room, so I do have to run there, but if you have anything to contribute, I really want to discuss this further with you. All right.