All right, good. Yeah, good morning. My name is Eric Hemelreich. I work for Red Hat as a principal software engineer and team lead in the OCM group. Today I'm going to talk about how we use GitOps in the OCM organization to help us scale.

I'd like to start with a story, a bit of background. Before I worked for Red Hat, I was a software consultant for a few years, and oftentimes I would be on a three or six month assignment where I was expected to join another team, be productive, and then leave. And I've kind of lost count of the number of times where I showed up on-site at a client on a Monday morning with my computer, ready to get started, and I didn't have access to whatever systems I needed. So I was stuck waiting a couple of days for their IT to set things up.

So I guess I can talk about definitions. What is GitOps? Basically, what we're talking about is putting a lot of YAML files in a Git repository as the source of truth, and then having a reconciliation process that runs often to go apply those files.

So I'd like to start by looking through some examples. This OCM Resources repository we've been using for about three or four years to help onboard and offboard people into the OpenShift Cluster Manager group. To talk a little bit about the structure: there are a ton of files in here. As I mentioned, there's a reconciliation process; it's a Go program. But I think what's more interesting to look at here is the data directory. We have a ton of YAML files in here defining users and organizations and what information they should have access to in OCM. So you see we've got three environments: production, and a couple of pre-prod environments, stage and integration.

And I think maybe the easiest place to start is to look at an onboarding task. How does that work? You show up to a new job and you need access to certain systems. At most places, you file a ticket with IT and wait, or it's a manual process that somebody does. Instead, what we've got here in OCM Resources is a merge request. So this person is being onboarded. There is a Jira ticket linked with even more details. They open the merge request, and we can look at the changes here. I'm going to go kind of fast, and I'll try to leave some time for questions at the end, but there's a lot going on in this file. We've got a schema defined, so we validate the structure of all the YAML files. There's a user ID identifying the person, as well as a second user ID. And then the most important part here is the roles defined: this person needs SREP roles and SREP developer permissions in the staging environment. And then we've got metadata about the user, like what country they're working from, and, as I mentioned, we have the Jira link there for even more context.

And if you just look at the merge request, you can see who opened it, who it was approved by, and who it was merged by. And you can see down here at the bottom, every time someone submits some YAML to get onboarded, we have a validation process to make sure the YAML will apply cleanly in the environment. And then we can see who approved it and who merged it. So what's good about this versus the traditional IT process is that it's auditable, it's self-service, and it's very quick to reconcile, versus the other way of doing things, filing a ticket with IT and waiting. I'd like to go through another example. So that's basically users; we also model organizations in here.
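Before moving on to the organizations example, to make the shape of those user files concrete, here is a minimal sketch in Python of loading and checking one such entry. The field names (user_id, roles, environment) and the path are assumptions that mirror what's described above, not the real OCM Resources schema:

```python
# Hypothetical sketch: load a user file like the one described above and
# check the fields the talk mentions. Field names and the path are
# assumptions, not the real OCM Resources schema.
import yaml  # PyYAML

REQUIRED_FIELDS = {"user_id", "roles", "environment"}

def load_user(path):
    with open(path) as f:
        user = yaml.safe_load(f)
    missing = REQUIRED_FIELDS - user.keys()
    if missing:
        raise ValueError(f"{path}: missing fields {sorted(missing)}")
    return user

user = load_user("data/stage/users/jdoe.yaml")  # hypothetical path
print(user["user_id"], user["roles"])           # e.g. jdoe ['SREP', 'SREP-developer']
```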
And so just thinking about the normal flow of developing a product: maybe you have some feature that's not ready for production yet, or should only be exposed to a few users. That's the merge request we have here: adding a capability to a specific organization. And I'm going to go quick here, but we have a schema where we validate the YAML structure. We have an organization, which is just a group of users. We've got a number of things like access to quota; OpenShift Cluster Manager is about creating and managing clusters, and part of that is getting quota to do so. And the interesting part here is this last line, which says: we're going to define this key and value, install config override, and set it to true. So that's a way, through GitOps, to enable some optional functionality for a subset of your users. You can think of it this way: if you're building features that are pre-production, not quite ready for all customers, you could enable them through GitOps.

OK, so I've gone through two basic examples here about how to define the YAML, self-service: users can submit it, and you can see who approved it and merged it from an auditing perspective. I'd like to briefly go through the reconciliation process. There's a lot here, so I'm just going to go quickly through it. The reconciliation process is just a Go program that parses the YAML and runs many times a day. So we're looking at the YAML data and using it as a source of truth, and it's going to make API calls to the system to make sure that user has SREP roles, or that organization has that specific capability. There's a lot here, but basically it validates the YAML. And then this is a lot of install and setup steps, but you can see we're starting to get to the output: we're going to validate all of the YAML and then move on to the reconciliation phase. So it's checking that everything in the YAML is still in the system via API calls to OCM: make sure that organization has quota capabilities, make sure that user has those two roles assigned. And what's nice as well is we have an audit log of every action it's taking, so you get timestamps: who requested this change, when did the change go into place. All of that information is quite useful to have. If you think about security and compliance certifications, having a formal process like this is required, and GitOps is one way to do that. I'd also like to mention that we've been using this process for about three years. It's grown over that time, and we're starting to automate more and more tasks which would otherwise be manual, which has come in handy as we've hired more and more engineers.

So I'd like to look at another example. Here we've got a person in support who needs access to support OSD and ROSA. And we can look at the change here. Again, it's that user file: here's a user ID that needs access to these roles in this environment. And that's about it. The interesting part is, if we look at the activity on the MR, we've got that validation job that runs to make sure the YAML will apply cleanly. So there are a couple of pushes there, just getting the structure quite right. And then the interesting part is this comment by the OCM Resources bot that says: hey, the YAML looks fine, but you need an approval comment from your manager. This used to be a manual process where we'd ask people to get that approval; we're starting to automate it. So it's very flexible. You can start small and then add more automation as you need it.
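The real reconciler is a Go program, but the core loop described above is small enough to sketch. Here's a hedged Python approximation of the idea: the YAML is the source of truth, the live system is queried, and the difference is applied and logged. Every name here (fetch_roles, assign_role, revoke_role, the api client) is hypothetical:

```python
# Hedged sketch of the reconciliation idea described above, not the
# real Go program: YAML is the source of truth, the API is checked
# against it, and every corrective action is logged with a timestamp.
import datetime

def log(msg):
    # Timestamped audit line, like the audit log shown in the talk.
    print(f"{datetime.datetime.utcnow().isoformat()} {msg}")

def reconcile_user(user, api):
    """Make the live system match one parsed user entry.

    `api` stands in for a hypothetical OCM client with fetch/assign/revoke calls.
    """
    desired = set(user["roles"])
    actual = set(api.fetch_roles(user["user_id"]))
    for role in desired - actual:
        api.assign_role(user["user_id"], role)
        log(f"assigned {role} to {user['user_id']}")
    for role in actual - desired:
        api.revoke_role(user["user_id"], role)
        log(f"revoked {role} from {user['user_id']}")
```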
Another interesting thing that the bot user takes care of: I've talked a little bit about onboarding, hiring new engineers, and so on. Another process is when somebody leaves for another job, or maybe transfers to a different department at Red Hat. In terms of security and compliance, we need to make sure that when somebody leaves, we lock down their access. Very important. So as part of this reconciliation process, we've got it set up to run three times a day, and it checks for any users that have left and revokes their permissions in OCM.

Let's see. So I've got a few more examples and details I can talk about. Somewhere else this GitOps style of project came in handy: I got a request from some of the compliance folks to do quarterly access audits. So here I've added an auditing command. You can configure a list of permissions that are considered sensitive or elevated, and this audit command will produce a list of users who have those permissions. Then the compliance folks can take a look at that, make sure it's the right list of users, remove anybody who shouldn't be there, and so on. So that's the audit command.

Another benefit of taking this GitOps approach: I talked about those three environments, production, stage, integration. We had to move our integration environment to new infrastructure. In terms of GitOps, it was very simple: just point this repository at a different integration environment and run the reconciliation loop, rather than having to think about how to migrate the data or having somebody do that manually. It was quite straightforward to change the URL for integration right there.

The last benefit I'd like to mention of taking this GitOps approach, putting your data in YAML in a repository, is that if you follow this process, you get some other benefits. Think about quota for clusters, which I mentioned. If you have a cluster and you want to install an add-on, you can request quota here through YAML. So instead of having to build another program to get analytics about who's using these add-ons, you can do a lot by searching the human-readable YAML in the Git repository.

That was a lot, and those are some of the benefits. Anybody have any questions? Sure, I think I saw your hand first. Yeah, so the question was about whether this is inventory management in YAML, what the number means. This is defined in the schema files, but those are quantities rather than inventory management. So it's somebody saying: I need to install 100 copies of this add-on. Does that make sense? Oh, yes, the file name is the ID for the organization. It's an internal ID, yeah.

Any other questions? Yeah, go ahead. The question was about how to manage user data, like usernames and emails of the user, because they will be public. Sure, yeah, so we've gone with the username for these files. Sorry, let me repeat the question. The question is that, looking at the user files, there are emails and other maybe sensitive information, and maybe users don't want to have that public. We've gone with usernames here, which, yeah, was a decision, a choice. It's what uniquely identifies the user in our system. You could identify them with another ID, an anonymous ID; that would be fine too, like we've done with organizations, where we're just using an ID in the file name.
[Audience:] And it defeats the purpose, because you see the ID of the user, ABCD, and you don't know who that person is when you approve their access. Yeah, so the question is about using an ID versus a username, and I think what you're saying is, yeah, we're using the username for that reason. In this repository, we don't have a reason to keep anything secret, really.

Sure, go ahead. Yeah, so the question is about how you teach users to use this. How do they know that they have to specify this add-on, and in this format? What I haven't shown here is that a lot of teams have, as part of their onboarding documentation: go to OCM Resources, open a merge request with a YAML file like this. So it's in those teams' onboarding documents. And the other thing you can do is, it's a Git repository, so if you're looking for examples, you can just browse and see what other people have done. All right. And thanks so much, I think we're out of time.

Good morning, everyone. My name is Kirill Satarin and I'm a senior software engineer at Red Hat. I'm developing SAP automation Ansible content, and I would like to present how we used Ansible Molecule for the development and testing of all this.

So first, what I will be presenting: a quick introduction to Ansible Molecule; then I will show what an Ansible Molecule driver is and how to use it; and then how we used our Ansible Molecule driver and what benefits we saw, starting with Ansible Molecule.

Ansible Molecule is a community project. Its main goal is to help automate the testing of Ansible roles, but it can also be used to automate testing of other content. The workflow of Molecule is that you create infrastructure, then you apply the role and validate the result of your application, and then you can tear down the infrastructure. Infrastructure can be a lot of things: it can be containers, it can be instances in the cloud, it can be virtual machines, and so on. We mainly focused on creating instances in the cloud. SAP systems are quite huge ones and the landscapes are relatively big, and Molecule helped us with that, to create and manage infrastructure. For that, we created our own Molecule driver, which enables us to manage infrastructure in clouds such as AWS and IBM Cloud.

Molecule is just a Python package; you install it with pip install. After that you have to install the driver, so in particular I will be talking about the Molecule Azure driver, which we developed. You can run all these commands by yourself. As soon as you install the driver and Molecule, you have the Molecule commands available to you, for instance molecule dependency, create, prepare, and so on. Each command is responsible for a certain step in the development or testing process. First, with dependency, you install all the dependencies you need for this particular scenario. Molecule operates in scenarios; you can have as many scenarios as you like. Then you create your infrastructure, so you actually provision it, or ensure that all the instances are started in the cloud. Then you prepare it: installing the systems, ensuring all the systems are running. Then the main loop starts: this is the molecule converge command, which you can apply many, many times, and this is actually the trial-and-error process during development and testing. Then you verify and destroy. And all together, all the steps can be executed with one single command, molecule test.
Digging deeper into the Molecule scenario structure and how it's organized: you have the role name as a root folder. You don't actually need the role by itself; you can apply Molecule-style testing to modules and other content. You actually don't have to test anything at all; you can use it just to create and manage infrastructure. The main file, which describes the landscape and the complete configuration, is molecule.yaml; I will talk about this a little bit later. Then you have playbooks, these YAML files: create, prepare. These are just playbooks. By default they come from the driver, and you can extend them, change them, or use your own. An important playbook, I would say the coolest one, is the side effect, where you can do basically anything. We use it for backup, restore, and other actions as well.

So how does this molecule.yaml look? Some parts of it. We have the scenario configuration on the left, which describes, for each and every command like molecule create, what the sequence of actions will be. So for molecule create, we just create stuff, and create means we're running the create.yaml playbook. And this is all configurable. This flexibility of Molecule allows us to manage infrastructure. Very cool. Side effect allows us to run basically any playbook, so we can do basically anything. We focused on backing up and restoring the whole landscape, and deallocating it.

These are the particular details of the molecule.yaml file structure for our Azure driver. On the left you can see the configuration of the driver. We have dependencies, which are basically shell commands that ensure all the Ansible collections for Azure are installed; the Azure collections also require Python packages, and those are installed in the dependency part as well. And we have the driver name, which determines the whole platforms configuration. Platforms is a list of the infrastructure you're going to manage. In the middle you see just an example; this is not the complete list of parameters you can set for the instance. So as you can see, Molecule not only creates infrastructure and manages it, it also configures it. For instance, data disks: it will provision data disks, create file systems, mount them, and ensure all this in an idempotent way, so if you run the playbooks many times, nothing will break — unless you break it, yeah. And all the parameters on the right are additional parameters, so you can create instances in different locations, different regions, in different resource groups, virtual networks, and subnets, with different users and so on. These parameters are all quite easily extendable: whatever is available in, in this particular case, the Azure Ansible collection, you can add as a parameter.

A little overview of how it looks architecturally. We start by running a Molecule command, which calls the driver. The driver determines which playbooks are executed, and these playbooks contain links to the molecule.driver collection — molecule here being the namespace and driver just the collection name, yeah. This collection, the Molecule driver, is a wrapper around other collections which actually manage things: the Azure collection, the Amazon collection, community.crypto for managing SSH keys (it will ensure that all SSH keys are uploaded, and if you don't have an SSH key, it will create one, which is very handy in a CI environment where you don't have one), and community.general and ansible.posix for disk and file system management.
In order to extend Molecule, we added this environment variable, action, which basically allows you to manually run any side effect. So we focused on these side effects, but again, this is extendable: with just one command, you can perform major actions on your landscape. You just need to have a playbook for that.

How do we use Molecule, and what do we see as the benefits? First, we used it to create the SAP operations collection, which is SAP automation in Ansible. During the development process, we used it to ensure our landscape is running. You don't have to actually create the landscapes from scratch: you can create a landscape manually and then pass all the information into the molecule.yaml file. This allows you to manage the infrastructure using Molecule, ensuring that it is started: all the systems will be started, instances created, and so on. Then, if you don't need it, you can deallocate all the instances to save costs at the end of the day. This allows us to focus on development, not on managing infrastructure, because the infrastructure is created and deallocated with just one single command. We are confident that we can do whatever we like in the landscape, because we can always go back to a backed-up landscape — the whole landscape, usually several systems. The copy-and-paste nature of this molecule.yaml file allows us to quickly change something: if we had a scenario that was tested on RHEL 8, we can quickly switch to RHEL 9, just by copying the molecule.yaml file to another scenario and rerunning it, and saying, oh, now we have a completely different test for a completely different configuration. And as I said, there is no need to use Molecule from the very beginning: you can create infrastructure manually, then gradually bring it under Molecule's management, and so on. And it's very nice in continuous integration, because you just need one command, molecule test, and it will test everything, assuming you have all the cloud credentials.

So what do we see as benefits? It was not actually that hard to create a simple Molecule driver. We had our specific needs, and anyone with their specific needs can either extend the currently available drivers or create their own. We see a development speed-up: again, we don't care about the landscape, we just develop. We added it to CI/CD. Why would anyone use Molecule to manage infrastructure? First, if they are developing Ansible content, I think it's a great tool for Ansible content. Second, if they just want to manage infrastructure with Ansible, Molecule allows you to do that. And that's all from me, if there are any questions.

[Audience:] I am quite on board, but I don't have the money to run those tests in the cloud. Are there drivers for the... Yeah, so the question was: we have a Podman driver, we're using Podman, we don't want to run tests in the cloud; are there any Molecule drivers available for other virtualization? Yeah, I think there is one for KVM. There are some drivers. You can go to the molecule-plugins repository in the Ansible community organization and check, because some time ago all the drivers were in separate repositories; now they're all in one repository — all the community drivers, I mean. So you can easily check whether it's available or not. Thank you.

Okay. Hello, everyone. Thank you for joining me for this session. It's about Ansible console: how to debug many nodes, in an emergency or not. The idea and all the slides are by my colleague Pierre Blanc, who unfortunately preferred to run a marathon rather than come to the conference.
So you are stuck with me here for today. Let's jump right into the talk. Here is Pierre; you won't see him today, but you can always reach him by email if you have any follow-up questions. Today's agenda is a short history of how Ansible console appeared, how to install it, how to use it, and some benefits you can get out of it.

In short, it's a console for executing Ansible tasks. It's an open source project, and it's another way to consume Ansible modules. You get immediate usage, as with ad hoc commands; it's perfect for occasional or urgent use; and, a really great feature, it gives you all the bash shortcuts, which I will show off in a moment.

A bit of history. Ansible shell was written by Dominique Barton. His first contributions are from 2013, and Ansible appeared in 2012, so the need for something like Ansible shell was there immediately. It became part of Ansible core in 2016, after the acquisition of Ansible by Red Hat, and it has been part of the official Ansible project since version 2.1. Since it's included in Ansible core, installing it is easy: you get it with the RPMs the same way you get ansible-doc, ansible-galaxy, or ansible-playbook. That's really easy, it's all there.

Concerning its place in the Ansible ecosystem: okay, this is really simplified, but what I want to show you on this slide is that it has basically the same access to collections, modules, and plugins as Ansible ad hoc. But it has some additional benefits in comparison to ad hoc, so let's compare them both. Ad hoc allows you to use a module to perform a single task, with no history. Ansible console allows you to execute multiple tasks, and it keeps the state of the previously executed tasks. So it's a more extended usage in comparison with ad hoc.

Let's jump into the execution. You execute it exactly the same way as Ansible: you provide the inventory, and here we have the typical prompt. So what do we see here? We see a username, demo; we see all groups selected; we see 20 nodes; and we see 10 forks — forks are for running parallel execution. What can we do with inventories here? We can list the selected nodes, list the available groups, and select a group or all nodes. We do it like this: here we list all the groups — backend, frontend, and ungrouped. We select frontend and we list the three machines that are in frontend. We can also come back and select all the machines, or we can select only backend and list the two backend machines. We can select just one machine, and you see the number of machines selected displayed right there in the console. That's the first part. So again, we come back to all, and we have all five machines selected, three frontends and two backends. We can also list the available groups, and the rest I already showed you.

So let's jump into the usage. What can we do as a simple thing? First of all, we can just execute an Ansible task. Let's install an HTTP server on every frontend machine. What we do here is first narrow down our group to the frontend, so instead of five machines we keep just the three frontend machines, and then we execute an Ansible task to install the HTTP server on each of them. And here is the result: nothing to do, everything is already installed. Okay. So that's how you execute an Ansible task with the help of the console. Let's go to more extensive usage.
What is good about Ansible console is that you can use facts. Here is how you can gather facts, and here is how you can check out a fact. So again, we are executing an Ansible command, but now we are using the fact that we gathered right before.

We just discussed the Ansible usage of the console; let's now jump to shell usage. It's not only Ansible commands that are available here. The great thing is that you also have the shell available, and I will show you that you can actually combine both. For the moment, let's just check out the shell. Again, we are in the frontend group, and we are just displaying the Linux release that is installed. We can also force shell execution by using the exclamation mark if needed.

We discussed Ansible usage, we discussed shell usage, and now here is the most important slide in this presentation: the combination of both. How can we use Ansible facts within a shell command? Here it's a bit in the middle of the GIF, so let's just wait for the beginning. What we do here is execute the fact gathering and get the facts for all the machines. Okay, now we are going to execute a shell command: we are going to check the content of the /etc/motd file. And here is the best part: we are combining a shell command with the Ansible variables retrieved before. So you see, we are keeping history, we are using Ansible facts that were retrieved by Ansible commands, and we are using them within a shell command. So it's really great; you have huge flexibility. You really have a combination of shell and Ansible, the goodies from both, in one place: shell, Ansible, and you keep the history of the previous commands. It's really great; it's something that I really want to promote. This is the most important part of the presentation; this is what I wanted to pitch here.

Then, about how to start: Ansible console provides interactive help. You can check the syntax of each command, you can check the execution, and you can easily get help and auto-completion for every module. Here is the demo. For example, I want to get help for the copy command, and it displays what can be done; I also want auto-completion, and here is how it goes.

Let's go to some bonuses — some real quick highlights that I personally like. You can exclude a node from the selection, which is really handy if you want, for example, to debug all nodes except one of which you are really sure. You can use the become option — be careful with that, but you can. And you can tune the number of forks used: if you want execution one by one, with no parallelization, just use one fork; if you want to run everything in parallel, you can use a lot of forks. And the history of your execution is right there, in the Ansible console history, so if you want to check what was done, you can.

And let's jump to the conclusion. It's easy to access, fast, and intuitive. It's an improved ad hoc command: what you get on top of ad hoc is history, Ansible variables, and multiple tasks. So it's really convenient for one-time interventions like testing and debugging. It can easily replace tools like ClusterSSH, pssh, and so on, because of its combination of shell and Ansible usage.
So here you have links to the source code and to some resources. And here is a quote from the creator of the console: "Hope no one accidentally nuked the entire infrastructure." And that's it. Thank you so much for your attention, guys. You have five minutes for questions. Let me just go back to the most important slide of the presentation, and here you go.

Okay, yeah, go ahead. [Audience:] Can we use the console with a module that I'm developing? If I write a module, for example, can I use it in the console, or do I have to wire it in somewhere, in some way, to be able to call it? I'm not really sure, to be honest; I don't know the answer to that question. I think once you publish the collection as an official Ansible collection, for sure you can. Whether you can do it during development, that I don't know exactly; I'm not really into the Ansible development lifecycle. You could ask Pierre. I'm sorry for not having an answer for that. [Audience:] No, no problem. Thank you.

Yeah, go ahead. [Audience question about how the console distinguishes Ansible commands from shell commands.] Okay, sorry, I will repeat. The question was how it distinguishes between Ansible commands and shell commands. In some obvious situations, like these variables, you have Ansible syntax right there, and I imagine you could check the code for how it's enforced. But to be honest, I'm not really sure about the tricky situations where the names could be mixed up. From what I've seen in usage, it basically first tries to interpret the input as shell, and then tries to interpret it as Ansible if that's not available. But again, I'm afraid I don't know the exact answer in all the tricky situations. Thank you. Do we have any other questions? We still have some time. No? Okay. There is also Pierre's email on the first slide of the presentation, so if you have a really in-depth question to which, I'm sorry, I did not reply today, you can reach out to him directly. He's a really good guy and he can provide you really in-depth expertise on the subject. No questions? I think we are good then. Thank you so much.

Good. Welcome everyone. These are the two words that you'll remember after this talk: exploration and exploitation. They are the main concepts behind the algorithms that we are going to cover today, which are called multi-armed bandits, or just bandit algorithms. Weird name; you can Google the historical reason. They are part of a bigger family, reinforcement learning, which has become more popular in the last years thanks to projects like Google's DeepMind, or ChatGPT itself, which uses reinforcement learning to improve the quality of its answers. So they are used, let's say, in many scenarios.

Today, most of the applications that we are using, mobile and web, are constantly learning from you, from your habits, your preferences, and they are adapting somehow. For example, Netflix. Netflix is constantly changing the homepage, the images associated with the movies, to select the images that are most attractive to you. They want you to click; they want you to watch the movies. So my homepage is probably different from yours, yours, yours. And everything is publicly documented: on the slides you'll find the study where they explain how they use these algorithms to adapt the homepage. This is public, but I assume that the other major streaming platforms are doing the same.
So everywhere you have a homepage with thumbnails, they are probably using the same approach. Another example is The New York Times, which is constantly changing the titles of articles to select the most clicked one. You know how important clicks are. So they are changing and finding the best titles to make you click; clicks mean money. And again, they are disclosing everything, the technical details, and I assume that most news websites, at least the major ones, are doing the same. Advertisement, in general, is the main industry, one of the main fields of application of these algorithms, so reinforcement learning and bandit algorithms. The goal is to show the right image, the right video, the right text to make you click, and there is plenty of research, plenty of articles. These examples are on the front end, on the UI, but the same approach can be used on the back end too. For example, here there is a use case for network routing, but also clinical trials and financial portfolio optimization. Same approach.

So how does it work? Let's take the advertisement scenario. Suppose that you have three banners: blue, red, and green. You have to show one of them on your website or your mobile application. You know that one of them gets more clicks, and you want clicks, because one click, let's say, brings you one dollar. But you don't know in advance which is the best banner to show, so what do you do? You begin with the exploration phase, the first word. You have to learn something, so you are going to use the basic reinforcement learning approach: you have to learn something about the environment, the world — which banner is the most attractive. You do something, you analyze the feedback, you observe how the environment changes based on your action, and you adapt. A very simple approach; if you think about it, we do it every day, because it's one of our learning processes. It's modeled on biology. These are my kids, my children, different ages, four months and four years. They are both using reinforcement learning to learn a new skill: they try random moves, random actions, to explore and learn, and the next day they will discard the actions, the moves, that didn't work, and keep what they liked. We learn to survive in this world thanks, not only but also, to reinforcement learning that is embedded in our brains, animal brains, and so on.

So let's say that we explore the environment and we randomly show the banners to the users. After a while we have some performance scores, and we see that the blue banner is the most clicked: for every 100 users we get 10 clicks, $10. At this point we can start the exploitation phase. We explored, we gained some knowledge, and now we can exploit that knowledge. The optimal strategy to maximize the profit, if nothing changes in the environment, is to always show the blue banner, right? Is there anyone who thinks that showing the blue banner is not the optimal strategy? To the few people with their hands up: if nothing changes, you are losing money, because it is the optimal strategy. But assume that something changes in the environment. There is a news story, there is a viral video — you know how it goes on social media — something suddenly happens and people are interested in something else. So the environment changes, and now the red and the green are better. At this point, same question: who thinks that showing the blue banner is not the optimal strategy? Good, good, good, yeah.
Good. Now the environment has changed, but there is a problem: I told you that the environment changed, but the algorithm is still in the exploitation phase, so it's still always showing the blue banner, because it was the best action, right? So how do you solve this problem? You solve it, as you can guess, by mixing exploration and exploitation. This is the exploration-exploitation dilemma. You allocate a portion of the traffic to explore, to continuously test the non-optimal choices, because you hope to find something better. It's a risk-opportunity problem: you invest something to find a better opportunity. How much to invest is the challenge in these algorithms. These algorithms have been public for decades, but the parameters are the secret; every company does its own tuning. In many machine learning algorithms, the parameters are the secret. Usually you start by exploring a lot, because you know nothing, like a baby, and then you decrease: as you learn, you decrease the exploration and exploit your knowledge, but you keep some exploration rate in the background. This is what you find in books and papers. In the real world, it works like this: whenever the environment changes, you increase the exploration rate. When I design these systems, I always define, or try to define, a metric, an indicator that tells me whether the environment is changing. When it fires, it's time to increase the exploration until I know the new state. If you cannot define one, you simply, at sampled points over time, increase the exploration rate.

The challenge is that there are not only three banners. There is this matrix, the matrix that was in the description of the session: imagine thousands or millions of possible banners, or actions in general, multiplied by thousands or millions of user profiles. All of us are in a profile. Even if we don't register on the website, from our IP we are at least labeled by country, but they can also guess the age based on patterns, cookies, and so on. So the goal of the algorithm is to choose the best actions for every profile: the Netflix homepage is different for profile one, two, three, and so on. And there are different strategies. You cannot just sample everything randomly, because you don't have enough traffic, enough time, enough resources. Sometimes performing an action, depending on the scenario — for example in clinical trials — is dangerous and expensive; it's not like showing a banner. So there are different strategies, and these are the most popular strategies, algorithms. If you're interested, you can click on the slide; there are some links. If you're interested in seeing some code, you can also take a look at my GitHub repo. There is a project there that contains implementations of those popular algorithms, and there is also a simulator for the advertisement scenario — that's because I worked in that industry for years — that compares the performance of the different algorithms. And if you want to continue and learn more, I can also recommend this book. This one. It's 10 years old, still valid, pretty simple, and it contains, in Python, explanations of the other strategies that are available as well.

And so we are at the end. This is a complex topic to cover in 15 minutes, but what are the main takeaways? You have seen that we are surrounded by these algorithms. Reinforcement learning somehow is everywhere; it's embedded in us.
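To make the explore/exploit mix concrete, here is a minimal epsilon-greedy sketch in Python; epsilon-greedy is one of the standard strategies on that slide. The click-through rates are invented just to drive the simulation, and a fixed epsilon stands in for the decreasing-then-adaptive schedule described above:

```python
import random

# Minimal epsilon-greedy bandit sketch. The arms are banners; the
# click-through rates below are invented, just to drive the simulation.
TRUE_CLICK_RATE = {"blue": 0.10, "red": 0.06, "green": 0.04}
shows = {arm: 0 for arm in TRUE_CLICK_RATE}
clicks = {arm: 0 for arm in TRUE_CLICK_RATE}
EPSILON = 0.1  # fraction of traffic reserved for exploration

def choose_arm():
    if random.random() < EPSILON or not any(shows.values()):
        return random.choice(list(TRUE_CLICK_RATE))  # explore
    # Exploit: pick the best observed click-through rate so far.
    return max(shows, key=lambda a: clicks[a] / max(shows[a], 1))

for _ in range(100_000):
    arm = choose_arm()
    shows[arm] += 1
    clicks[arm] += random.random() < TRUE_CLICK_RATE[arm]

# Most traffic ends up on "blue"; ~EPSILON of it keeps testing the rest.
print({arm: shows[arm] for arm in shows})
```

In a real system, epsilon would shrink as knowledge accumulates and jump back up when the change-detection indicator fires, exactly the schedule described above.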
We are using it unconsciously every day, but also technically, in applications, on the web, it's driving some of the applications. If you're interested in using it, you have seen that you can use it both on the front end, which is the main field, and on the back end. And even if you never implement these algorithms, it's still useful to know the concepts of exploration and exploitation, because if you pay attention, you'll start noticing, while you're using applications, when you are in the exploration or the exploitation phase. While you are scrolling, you'll see some new content, sometimes labeled "suggested for you" or "new for you", that is unrelated. If you click, or you stay for three seconds, it will be considered positive feedback, and you'll start getting more of it. So most of the information that we receive today is influenced by bandits and reinforcement learning. So it's good to know. Is it bad? Is it good? Up to you to decide. The important thing is to be aware, and I hope now you are. Thank you.

Two questions, or if you're fast, more. Yes, so the question is the difference between A/B testing and this approach. One difference is that A/B testing usually has only two options, while here we have multiple options, sometimes almost limitless options. And, not sure, probably A/B testing can also be adapted in some ways, but this approach, as you noticed, can be used continuously. In theory, the mathematical formulas behind it allow you to retrain on the feedback from every click you receive. And yes, I have always used these algorithms in a continuous way, let's say in a learn-as-you-go mode. You start with nothing, and this is also good: there are no assumptions. You can start with assumptions, but these algorithms are pretty good at discovering things with zero knowledge, like the babies. Out of time, thank you. I hope you enjoyed it.

Good to go. Okay. Hi everyone, welcome to my talk. Today I'd like to talk to you about how to unleash your features. Let me dive right into the implementation. So what is a feature toggle? A feature toggle is essentially a branch in your code asking: if this feature is enabled, then do this, otherwise do that. That is essentially what a feature toggle is. But what are the use cases for a feature toggle?

I want to start with a simple example. I work on a project called OpenShift Cluster Manager, where we install a lot of OpenShift clusters. Sometimes these clusters may break, they may get into some weird state, and they end up in an error state. What we wanted was for the system to collect any piece of information it can and then report it as a Jira ticket so we can track it: we would have all the information available on Jira for investigation, with that automated process running. But when I started working on this feature, I thought to myself, well, there's kind of a risk in introducing it. For one, gathering these pieces of information may cause performance issues. We're talking about a service that is now supposed to communicate with other systems: the cluster itself, to get information from the cluster, and Jira, to add all these pieces of information as attachments. It might break, it might fail. So I was kind of concerned with how much risk I was going to introduce to the system just by having this background process running around in my service. So I thought, well, how can I reduce the risk of introducing this?
And one of the things I thought was: let's try introducing this with some activation strategy. First, I want the developers and the QE team, or the QA team, to be able to test the feature with just a small number of clusters, just to see how it behaves: to see that tickets are opened in a timely manner, and that the system is able to collect all these pieces of information from the cluster and attach them to that Jira ticket. And once I have enough confidence that the flow is working, I want to see some feedback. I want to see some customers actually going through that flow, and see: did I actually improve the response time? Do I catch issues earlier than I could have caught them without this solution? And once I gain enough confidence that everything is working and there's no performance penalty with this solution, I can open it up for everyone, and eventually remove the feature toggling from the code itself.

So there are many use cases for feature toggles, and this is just one of them. Testing with just a small number of users is one use case; of course, I'll want to collect some feedback from customers or users, and it doesn't matter whether these are external customers, the QA team, or internal developers. Another use case would be to test with a small amount of capacity. Sometimes I don't really care which users the feature will be enabled for; I care more about how much of the traffic gets exposed to the feature, because I want to monitor the system. I want to see that it can handle the load, and I can gradually open it up for more users and more capacity. And once I've made sure the system can handle it (or not) and done the right tweaking, I can remove the toggle itself. There's also A/B testing: of course, I can test different paths, collect analytics, and decide which path is the best one to proceed with. And a kill switch is a great example. I can test my feature, have the QA team assure me it's working as expected, have some customers using it, or everyone, but at some point there might be a surge, a huge amount of requests coming in, and it may break the system. I should have a way of hitting a switch to kill that feature and protect the system from an incident. And sometimes it's not the system itself; it may affect another system that our system is talking to. So having a kill switch is a very powerful tool. It can also be useful as an alternative to canary deployment. With canary deployment, you take your code, deploy it in a very controlled and safe environment, test it, and once you're sure it's working as expected, you roll it out more widely. Feature toggles are kind of an alternative way of doing that: you deploy the functionality to the entire system, but disabled. You can enable it at runtime, in your own time window, when you're available, when you have everyone on board that you need, then test the feature, and you can switch it off whenever you need to.

I'd like to talk to you about Unleash. Unleash is a management system for feature toggles, or feature flags. It's an open source project, available on GitHub, and we've been using it for a while now. The way it works is that the user is using your software, whether through the CLI, an API call to your service, or maybe the UI itself. Somewhere in your code, there's going to be that feature toggle check.
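As a hedged sketch of what that check can look like in application code, here's the shape of it with Unleash's Python SDK (the UnleashClient package); the URL, token, flag name, and context values are placeholders, not our real configuration:

```python
# Hedged sketch using Unleash's Python SDK (pip install UnleashClient).
# The URL, token, flag name and context values are placeholders.
from UnleashClient import UnleashClient

client = UnleashClient(
    url="https://unleash.example.com/api",            # placeholder URL
    app_name="my-service",
    custom_headers={"Authorization": "<API token>"},  # placeholder token
)
client.initialize_client()

# The context lets Unleash evaluate per-user and custom strategies.
context = {"userId": "user-123", "properties": {"orgId": "org-456"}}

if client.is_enabled("report-broken-clusters", context):
    pass  # new behavior behind the toggle
else:
    pass  # existing behavior
```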
With Unleash, that check actually makes an API call to the Unleash system, asking if the feature is enabled and providing some context, like user information — any kind of context that is important for making that decision. Unleash responds with a yes or no, and your code performs the branching according to the response.

Now, there are several activation strategies in Unleash. The basic one is the one you would probably expect it to have, which is simply on or off: the feature can be disabled or enabled. There's a user identification activation strategy, so you can activate something per user; it can also be per session ID or IP address. There's also the gradual rollout option, so I can say: please open it up for just 30% or 40% of the requests, it doesn't matter for whom, and then open it up gradually as needed. And the interesting part is that you can also extend it by writing your own custom strategy. We are using that capability, and we have implemented a custom strategy which is organization-based. We pick an organization, a specific customer that is very interested in trying out a new feature or functionality, and open it up just for them to play with, get their feedback, and once we've made all the adjustments, we proceed and open it up for more and more customers.

So how do you start with Unleash? You can install and run the binary itself; you can download it, and there's also a hosted option. Once you run it, you log into the system, create a toggle, and pick an activation strategy for it. The other part you need to do is integrate it with your code. You can use one of the available SDKs: Unleash has several SDKs in different languages. If you're using a different language, you can just make an API call to the Unleash system, which will essentially ask Unleash whether the feature is enabled or not.

So what does it look like? This is the admin console of Unleash. When you log into it, you see a list of available feature toggles; these are feature toggles that we've created. There's also the new-feature-toggle button that I mentioned. And if I dive into one of the feature toggles, I see something like this. In the upper part there's the feature name; I can see the feature is active; I see a description of the feature; and I see whether it's enabled or disabled. And in the main part I see the activation strategy. This specific feature is enabled just for a couple of organizations: one of them is the development organization, the other is the QA organization. We are testing how the system behaves if we bypass the limitation on the number of compute nodes. We want to make sure the system can handle it, make sure everything works as expected, and once we've gained confidence, we can open it up for everyone.

If I go into the metrics tab of an Unleash feature, I see more data about the exposure rate. I can see how many requests came into Unleash asking whether this feature is enabled, how many requests were rejected or accepted, and I can see it over different periods of time: 48 hours, 24 hours, et cetera. There's also an event log, which is kind of important if, for example, my monitoring tells me there's some performance issue.
If I go into Unleash, I might see that someone enabled some feature in the last hour, and maybe I can correlate issues with that. So that's also important for auditing.

Let me just summarize this presentation. Using feature toggles is a great way to test your product and keep it stable as a service, of course. Unleash is a great system to manage your feature toggles, and you can also implement your own strategies according to your product needs and your business needs. And I'll take any questions now.

Sorry, the question was whether Unleash is an open source project maintained by me or the team. It's an open source project, but it's not maintained by us; it's not owned by Red Hat at all. Any other questions? Yeah. So you need to consider that — sorry, repeating the question: the question was how much overhead there is in using Unleash. You need to consider that every time your code asks whether a feature is enabled, it's going to make another API call. Of course there's some penalty for that. You may cache the results; you can set that up in the Unleash client itself. And you also need to be aware that when you hit the switch in Unleash, it takes roughly, I think, three to five seconds until it syncs with all the clients. So yeah, there's a performance penalty there that you need to be aware of. Any other questions? Great question — sorry, the question was: what happens if Unleash is not available? So first of all, you should obviously pick a good SLA for the Unleash system. But if it's not available, then you're going to need to set a default for every feature. Whether it's on or off is your decision, but yeah, you need to pick one. One option, yeah. There are no other questions, and I'm done. Thank you.

[Speakers set up the stage: adjusting the monitor and laptop, and a quick mic check.]

Okay, so hello everyone. My name is Matheus and I'm a senior software engineer at Red Hat, nice to meet you all. This is Pythonic Functional Programming: list comprehensions. Unfortunately, time is of the essence, so I won't have time for questions, but feel free to meet me afterwards and I'll be glad to address them.

So, let's get started. More often than not we need to deal with collections, lists of objects, or iterables while coding, and most times we can get by with a for loop, which in Python is an elegant for-each loop, performing one action per iteration. We also know that Python is a multi-paradigm programming language which supports several different structures for this coding pattern of performing one action per collection element. So let's check some of those structures in action and compare them, to see how they look and maybe which one is the best.

We can start with a simple example: computing the squares of all integers from zero to a large number. Let's take an iterative approach first and define our main function. Let's say we want a list of squares: it's going to start empty, and then for x — because we're doing math here — in a range that goes from zero to 10 to the... oh, sorry, my bad, is this better? Thanks. So, for x in range from zero to 10 to the eighth, let's add its square to the list.
So, squares.append(x * x). So far so good, nothing too fancy; it's like CS 101 code. Then we return squares, and we should be good. Doesn't look bad to me — does it look bad to you? I don't know, let's check. So, if __name__ equals "__main__" — because Python thinks that's a good idea somehow — result equals main(). Let's grab the whole squares list, but this is 100 million numbers, we don't want to check all of them; let's just check the last one. So let's print the last one, result[-1], and see what happens when we run this. Let's actually go into the folder and run it. It should take a little while — it's a big number — but eventually it will return an even larger number, because this is 10 to the eighth minus one, squared. So yeah, that looks good to me, it works. So far so good.

But for loops are boring. Maybe there's a more functional way to do this. We can leverage map here. map is a higher-order function that takes another function and a collection as arguments, applies that function over the collection, and returns the collection with the function applied to each of its elements. So let's do that, let's try to get fancy. Let's copy this and call it map. Instead of going through a for loop, we're going to do this: return... As range isn't really a list but a generator-like object, let's cast it to a list. So let's return a list of a map: we pass an anonymous function, lambda x: x times x, over the original range from zero to 10 to the eighth. And this is already looking very Lisp-y; we might be doing something right here. Looks very functional. So let's save it, let's run it, and if I didn't mess up, the result should be the same. I really hope I didn't screw this up. It also takes a little while to run, but fortunately it turns out well. Yes, it did.

So this is a more functional approach, a very Lisp-y approach. But this isn't Pythonic. I'm not a Lisp developer, I'm a Python developer; I want to do this in a more Pythonic way. And there is a more Pythonic way to do this. Can anybody tell me which one it is? Can you guess, maybe from the talk's title? There's the list comprehension way. So, list comp. And to be completely honest, it looks very good, because what we're going to do is declare a list with open and close brackets: x times x for x in range. Looks a lot better — at least to me it looks a lot better, a lot cleaner, much fewer characters. Let's see if it still yields the same result. I think it's going to yield the same result.

But you're probably thinking the same thing as me: which one is faster? Why have more than one, if in the Zen of Python there should be one obvious way of doing something? So let's check that; let's create a benchmark script and compare them. For each script: let's show the script name, so echo the script, then time python3 on the script, and let's break the line at the end. I think it's like this. Done. So, let's run it, let's compare. Who thinks the for loop is going to be faster? Raise your hands. Nobody, okay. Who thinks the map approach is going to be faster? You guys checked the Git repository, you're cheating. Who thinks the list comprehension approach will be faster? Oh, that's nice. Okay, so the iterative approach took 12 seconds and change.
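Before the rest of the numbers, here is a reconstructed sketch of the three variants being compared, with Python's timeit standing in for the shell loop used on stage (so the absolute numbers will differ from the talk's):

```python
import timeit

N = 10**8  # the range used in the talk; reduce it if memory is tight

def main_iterative():
    squares = []
    for x in range(N):
        squares.append(x * x)
    return squares

def main_map():
    # map returns a lazy iterator, so cast it to a list
    return list(map(lambda x: x * x, range(N)))

def main_listcomp():
    return [x * x for x in range(N)]

for fn in (main_iterative, main_map, main_listcomp):
    print(fn.__name__, round(timeit.timeit(fn, number=1), 2), "s")
```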
List comprehension took 10 seconds, so it's already faster than the for loop. So maybe map is faster — or maybe it isn't. Let's find out. Map is actually the slowest one. Okay, so we already know that list comprehensions are faster for this particular use case.

But there are other use cases, because map is one specific function; there's another one, which is filter. Who knows what a filter is? Nice, so many functional programming people, you make me proud. So this is applying a filter effect with a for loop: we can say that now we only want the squares of, let's say, even numbers. So if x mod two equals zero, we append it to the squares list. This is naive, but it works. How do we implement this in a, let's say, purely functional way, a Lisp-y way? We would apply a filter over the range here. filter is another higher-order function that takes another function and applies it over the iterable; if the result of that function is true, it keeps the value in the collection, and if it's false, it removes it. So we're going to pass yet another anonymous function and apply it over the range. But this is becoming even less readable, right? Now we have four parentheses at the end of the statement. This doesn't look like Python at all. So how do we do this with a list comprehension? Can anyone tell me? You guys make me proud, you're making my day here. So: return x times x for x in range, if x mod two equals zero. Much more readable, much more elegant syntax — but is it still the fastest one? Maybe the iterative approach might be faster, because it might benefit from some branch prediction. What do you think? Let's run it. Again, who thinks the iterative approach will be faster? Nobody. And the map approach? Nobody. So I'm assuming everyone thinks the comprehension approach will be faster. The iterative approach is 11.5 seconds; it's okay. Let's see the list comprehension approach: 8.7 seconds, so still faster.

So I think by now you should be thinking: I should always use list comprehensions, because they're faster. That wouldn't be a perfect takeaway, but list comprehensions don't stop there. Let's say you have a list of lists, something like this, and you want to linearize it; in other words, you want to make it a single list of scalars. You can do that with a list comprehension: scalar_list becomes x for y in list_of_lists for x in y. So if we print scalar_list — there it is, linearized. And you might be thinking: what about a list of lists of lists, and so on? You can have as many "for element in iterable" statements as you want, as you need to achieve whatever you need to do.

But it doesn't stop there. What if you want to keep only the lists that have more than one element, for example? You can do that. Let's say I want scalar_list_big, meaning that I only want x for y in list_of_lists for x in y, if the length of y is greater than one — in other words, I don't want single-element lists to be considered. Would that work? Yes, it will. You can see that the six is missing from here, because we excluded that element with the list comprehension. So this is a tiny bit of why list comprehensions are great, and the main takeaways, guys, really are that they look good, they are cool to write — but that's not really what matters here, even though that's pretty awesome.
So, this is a tiny bit of why list comprehensions are great, and the main takeaways, guys, really, are that they look good. They are cool to write, but that's not really what matters here, even though that's pretty awesome. What matters is that they have excellent performance. So begin writing list comprehensions in your Python code, guys; it won't hurt you. And to take it a step further, Python even has dictionary comprehensions, but that's probably a topic for another talk. Whenever you need to iterate over something, applying a single action at a time, remember that list comprehensions exist. All of this talk's code, examples, and much more in-depth information you can check in this GitHub repository here. So, please check it out. The link to it is also on this talk's page in the schedule app. And thank you very much. Good to go. All right, hello everybody, and welcome to this quick session. I'm Mark, engineering manager at SDRV. And this is my story. It's a story for those of you who already are, or aspire to eventually be, leaders. So, this is me, a long, long time ago. When I first wrote a for loop, I was ecstatic about it. And it was the turning point, the moment where I realized that this is the career that I want to pursue, that I really want to be in software engineering. And at first, it was like that — do you know the feeling when you hit the backspace more times than any other key on the keyboard? Well, that was me. But I worked hard. I worked really hard. I had a workout routine: 100 space bars, 100 enters, 100 backspaces per second. Until eventually, I became like this. Sorry, that's a different story and a different workout routine — like this. Where code became a drug to me, and coffee as well. And I liked the dopamine spike when I finished basically any task, any feature that was presented to me. And ideally before the deadline — that was great. And over time, this feeling of expertise was recognized by others as well, and I became a team leader. That was a great moment, because everything that I had worked so hard for was finally done, right? It felt great, the dream come true. But me being me, I didn't stop at that. In fact, I continued to work even harder. And eventually, I became an engineering manager. And I almost went bald for real this time. Why is that? Well, up until now, this story was quite common. Nothing special about it. It's a pretty standard journey. Well, let me first introduce my wonderful personality and the mindset that I had at the time. So, I am competitive, and being a leader who is competitive immediately meant to me that I had to be better than everyone else, basically at whatever they do. Meaning, I had to stay constantly at the edge, ahead of the curve, learning new trends, new technologies. Obsessively hardworking? Well, to be a good leader, I just had to work harder than everybody else, right? Because that's what leaders do. So, I would always strive to go one step further. And as I'm a risk taker, that one step would sometimes be really painful. But I adapt and I overcome, right? That's what leaders do. I'm also a reformer, and I have this constant urge to change things, because if things aren't changing, they aren't improving, right? So, I would catalyze the change. I'm also demanding. Well, that means setting direction and goals. Luckily, I'm also supportive, so it's not a complete dictatorship. It also means motivating others to achieve those goals. And most importantly, I'm ambitious. And, being an ambitious leader, I had to achieve what others find extraordinary, beyond their reach. So, I would have liked to believe all of that, wouldn't I? Well, it turned out to be not just partly, but completely wrong.
Nevertheless, becoming a manager had an unexpected effect on me. And it started when my hard skills, the tech skills, started to plunge like a rock, completely obliterated over time. And in turn, I lost the ability to compete with others, because I would compete with them on the technical level, right? I was no longer able to do that. And my personal feeling of recognition, the one that I had up until then for my technical skills — that was just, you know, gone, zero, nada. And along with that, my motivation was completely annihilated. But I was an engineering manager, so everything was fine. Except it wasn't, because I felt paralyzed by my own need for achievement and recognition. So one day, I was sitting in the kitchen, drinking coffee and watching the rain hitting the windows. And then, nothing happened. And this became a pattern, repeating day after day. And I felt burned out. And that is when it happened. And what happened was nothing, again — except this nothing is now important. I could stand here and tell you a story that is completely made up. I could just, you know, come up with an imaginary figure that entered my life and magically changed the direction of things, changed my perception, changed the meaning. Well, that would be a complete lie, because there is no plot twist, no dynamic point in my story. And likely, there won't be one in yours either, let's be honest. The only one who can add it there is you. And you can do that by changing your mindset. So, you have to realize, you now have multiple playing fields. Inevitably, you're gonna lose on some of them. So, pick wisely. Make peace with the fact that you will no longer be the most technically savvy person in the room. And don't strive to be. You don't have to. Instead, identify the talent in the room and let them outgrow you. Even better, help them outgrow you. Are you competitive, like me? Well, good, but don't compete with others. Compete with yourself. Are you obsessive and hardworking? Well, work smarter, not harder — duh. We've all heard that a million times, but me, perhaps I needed to hear it millions of times, and I still didn't get it until now. Do you know what it is about? It's not about efficiently doing the tasks that you work on. It's about efficiently picking the tasks to work on. And that's a hell of a change. Now, are you a bold risk taker? Well, first of all, stop making bold statements, please. And second, don't risk — strategize. Risk is a calculated leap of faith. Strategy is a carefully considered, well-thought-through long-term plan. And leadership is a long-term game. Are you a reformer, with the constant need for change, the urge? Well, instead of catalyzing change, try to catalyze growth. Are you demanding? Well, don't set direction — establish one, and help others establish goals for themselves. And instead of motivating, inspire. Because motivation is in your head. Inspiration comes from your heart. So find a way to inspire others. And don't seek recognition in that. Don't do that. No external validation, because you're likely not gonna get it. Other people's achievements are yours as well from now on. Think about it this way: their success is your recognition. Because how I see leading now is the responsibility to see to the rise of others around you. And that perception helped me greatly to regain my motivation. And maybe this talk will be the plot twist in your story. Maybe this will be the dynamic point that you need. And maybe not.
But if at least one person in here finds it useful, well, it was worth it for me. Because guess what? Your success is my recognition. Thank you. Time for questions. Oh, there are some, okay. My responsibility as an engineering leader — I tend to view it as I just described it. I really just want, first of all, to identify the talent and get that talent on my team. That's the first thing. Second, help this talent grow alongside you. And you know, they will help you as well. But focus on others. That's like the mantra I've always found very helpful: when in need, just reach out. And I was trying to teach this to the people that I work with, to establish sort of a dynamic team. I call it an environment of trust. And as a leader, one of your responsibilities should be to support these environments — to create an environment of psychological safety, if you wish, in your team. And eventually, I would say that as a leader, you should always strive to be replaceable. It's kind of a paradox, but I believe that you should be aiming for others to be able to replace you. That's the utmost responsibility that you can have. Okay, so you mean my day-to-day activities that I do for my company? Okay, yes. I think there are quite a lot of them. I don't think that it's in the scope of this talk to actually go into day-to-day activities. I wouldn't even call that being a leader at this point, because the things that you do for your customers, and those day-to-day activities that make your company run continuously, are not the things that make you a leader. Anybody can do them with the right amount of training. I don't think these are the responsibilities of a leader. These are the responsibilities of an engineering manager, and they are out of the scope of this talk, if that is okay for you. How do I deal with impostors on the team? Ah, not impostors — was it introverts? Yeah. How do I deal with introverts on the team? Okay. Good, I like that question. All right, so my ten-second answer would be: don't focus on those introverted people, because, well, you can't even do that. You can't push these people to be more outreaching. Instead, try to focus on the rest of the team — the ones who are not introverted — to pull those people in. Because you can't really change who these people are inside, and you can't force them to be more outgoing, to be more easygoing about things like public speaking, stuff like that. You can support it. But the first thing to do is get them to a basic level of communication. So if we're talking about real introverts, who have trouble communicating with the team, then get the team to find a way to communicate with them. It's not your direct responsibility to address that individual. It's — at least as I view it — your responsibility to address the capability of the team to talk among themselves. So address the rest of the team would be my advice. Yes? Multiple phases. First I work with them personally. Then I have what I call the culture core of the team. Those are the individuals who can effectively spread the culture of the team; those are the true core members. And I usually ask them to onboard this person. For example, since I have two teams, one in Brno and one in Prague, it's kind of difficult and tricky to maintain the culture across multiple cities. And the core of the team is here in Brno.
So I ask them to come to Prague often, to the new team member, to sort of dump the culture on them, to get them soaked in the culture just by being around those people. That's enough. I don't really have a specific, targeted approach that would be somewhat methodical. I just fully trust in the capabilities of those core members to basically transfer the culture to them. And then, of course, I pick people that I believe are able to fit in the culture. Is that the last question, or are we out of time? We're out of time. Sorry — thank you for the questions. If you want to talk to me outside, feel free to do so. Can we start? Okay. So hello, everyone. My name is Matheus. I'm a senior software engineer at Red Hat. Time is of the essence, so I won't be able to address any Q&A — just feel free to reach out to me afterwards and I'll be happy to answer your questions. And today, I'm going to show you some ways you can manage technical debt in your teams and projects. So technical debt is akin to broken windows in a building. If you leave a visibly broken window unrepaired, other things will eventually be broken, leading up to misdemeanors, disorderly conduct, and general disarray. This criminal slippery slope is the broken windows theory, and it's an actual thing in criminology. The Pragmatic Programmer compares it with technical debt, telling us that a code base riddled with technical debt is a lot like a building with broken windows. But I personally feel like technical debt is a lot like doing house chores. Have you ever let dirty dishes stack up? All it takes is a single day without dishwashing. Then the next day, you see that overwhelming pile of dirty dishes, glasses, and silverware, and you might feel tempted to just grab a clean one and have your next meal with it — and wash the dirty ones later. And just like that, the stack will keep growing and growing until you run out of dishes, until you run out of silverware, until you run out of glasses, and either you wash them or you face starvation. A very similar thing happens with code. It can all begin with a single unoptimized function that gets left behind, like this bubble sort here, or even some dirt that could be refactored. But the main issue here is that bad code has gravity: it tends to attract even more bad code. And after a while, the application's code base becomes such a mess that simple changes can take weeks to be deployed. Who has been in that situation before? More people than I would like to see, which is unfortunate, but that's the life of software engineering. And this impacted your quality of life, right? Who felt bad? Okay. But not only the quality of engineering gets affected. The business itself gets impacted, because you need to deliver quickly in order to achieve your business goals. A Gartner report tells us that actively managing technical debt leads to a 50% reduction in delivery time. So I guess we have plenty of good reasons to keep technical debt in check, right? So you might be wondering, how do we do it? We first need to define what technical debt is. And believe me or not, the semantics actually matter here. Some people argue against the term itself, citing that it can be deceiving. Some eccentrics even advocate that you should stick a banana peel to a tech debt ticket in order to force yourself to address it as soon as possible — the reasoning being that nobody wants a dirty working environment, right?
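The bubble sort slide mentioned above isn't reproduced in the transcript; here is a hypothetical reconstruction of the kind of "unoptimized function that gets left behind" the speaker means, next to the one-line fix that pays the debt down.

```python
# An O(n^2) bubble sort quietly doing the job of the built-in O(n log n)
# sort: harmless today, a magnet for more bad code tomorrow.
def bubble_sort(items):
    items = list(items)
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

# Paying this particular debt back is a one-liner.
def fixed_sort(items):
    return sorted(items)

assert bubble_sort([3, 1, 2]) == fixed_sort([3, 1, 2]) == [1, 2, 3]
```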
But the important message here is that each team needs to define what constitutes technical debt for them. And the usual suspects are — how can I point at this? Okay. Usually: non-blocker, edge-case, and workaroundable bugs; code refactors; non-blocking infrastructure work, such as CI/CD improvements, database tuning or infrastructure tuning; some optimizations; and general low-priority, non-urgent improvements. So nothing that's urgent — like one or two dirty dishes in the sink — but if they keep stacking up, they will hinder you in the long term. And also notice that all technical debt falls into two types, intentional and unintentional. Unintentional ones are usually related to bugs, or to shortcomings in the original design or architecture. Intentional ones, however, have multiple causes, such as the need to meet a deadline or to quickly prototype something that can be optimized later. And this classification might help you or your organization with your technical debt definition. But please, don't take this as a set of rules. This is also not a comprehensive guideline of what technical debt is. Think about your team's context, domain, and scope, and try to figure out what technical debt means for you. Different teams might require different definitions. We have now discussed the perils of technical debt and how to define it. So how do we keep track of it? Your best bet is to use the same tooling the team already uses. So if you're using JIRA, stick to JIRA; if you use Trello, stick to Trello; if you use a physical Kanban board with sticky notes, keep doing that — no reason to reinvent the wheel here. However, it is good practice to regularly review the tech debt you have in your backlog. You should evaluate whether it still makes sense to deal with, and drop anything that no longer makes sense. Seize the opportunity to refine whatever still needs to be solved, though. And now we've explored the basics of technical debt tracking, definition, and why you should deal with it. But you're probably wondering now, how do we actually manage it and keep it in check? We have to approach this with both strategy and tactics. And while the tactics may vary, my experience shows me that there is but one strategy to effectively get technical debt under control. Can anybody guess it? You need to foster a culture of continuous improvement in your team. Everyone needs to be committed to improving the code base, to not letting technical debt slip. Engineers love a clean, well-maintained code base. Who here likes to work on good code, on a clean architecture, on clean code? Look how many hands are raised. So it's natural. And they will make the effort to keep the code clean. This is literally the broken windows theory. If your house is well-maintained, you keep it well-maintained. If a code base is well-maintained, if it's clean, if it has a clean architecture, engineers will strive and make efforts to keep it that way. And it pays for itself, compounded over time. And in order to achieve this, a team needs to be enabled and empowered. A team needs to really own their systems and services end to end and be proficient with their tools. And management buy-in is also critical. Even though there's one good strategy to handle technical debt, the actual tactics for dealing with it vary and depend on several factors, such as team size, context, delivery process, sometimes even the tech stack itself. So how you actually execute the tactics really depends, but there are a few mainstream ones.
Some teams go by the golden rule, which dictates that each sprint has 20% reserved capacity to address technical debt. And some teams even split that 20% between actual bug fixing and other technical debt issues. This has the collateral benefit of, by default, creating the technical debt management habit in the team. Does anyone here follow a similar approach in their teams? Okay, a couple of people. That's nice. Other teams deal with technical debt as regular issues and have the team's tech lead or architect act as its stakeholder, advocating for the tech debt. And this can lead to some sprints having no technical debt in them, while other ones might be technical debt heavy. But again, this is no set of rules — just a couple of examples from the industry. My suggestion is for you to experiment with them and stick to whatever you feel works. Also, it's worth noting that not all technical debt is bad. Sometimes it is okay to accept some technical debt, especially if you are resource pressured. And resource here can mean money, time, even manpower. Sometimes you might need to quickly hack an MVP in order to beat the competition and be the first to go to market. And sometimes you might need to accept a suboptimal infrastructure if you are operating under a constrained budget. In this case, technical debt can be compared to actual financial debt. Think of those intentional debts as the equivalent of taking a loan to start a business. In other words, not all debt is bad by default. What really matters here is your ability to pay it back later. Keep that in mind. But the main takeaway here is to not lose track of what really matters, which is to embrace a continuous improvement culture, and for the team to be the main advocate for excellence and quality in their own code base. But let's now explore a few cases and examples of technical debt being managed in practice. Let's first explore how to manage technical debt in a legacy application. You don't need to be a wizard to do that, by the way. It is an application that is already in production and therefore has concerns such as uptime, SLAs, not introducing new bugs, and most importantly, not breaking production. Does anyone here maintain a legacy application? Yeah, good luck. But there are two sides to being a legacy application maintainer. It can be hard to change: deployments might be complex, and overall, introducing new features or fixes to the application might be difficult. However, on the other hand — and this is pretty much an upside of being a legacy application developer — you know what's going on. You know its shortcomings. You know that it might be missing automated tests. You know that its CI/CD might be unoptimized, or that it might have a couple of megazord classes that you could break down into smaller classes. In other words, your technical debt is already mapped for you. For example, if this is a monolith that is hard to scale or hard to deploy, we might want to explore a strangler pattern and plan to break it down into more maintainable microservices. If it lacks automated testing, we can create a policy that any new code must always come with its unit tests, as in the sketch below, and also file tickets to implement tests for classes or modules that might be missing them. We can even wrap it in a container image, so it becomes easier to deploy on a public cloud, fully leveraging CI/CD capabilities.
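To make the "new code ships with its tests" policy concrete, here is a hypothetical, minimal illustration; the function and test names are invented for this example, not taken from the talk, and assume pytest-style test discovery.

```python
# A small new function and its test, shipped together in the same PR.
def normalize_email(address: str) -> str:
    """Lower-case and strip whitespace so lookups are consistent."""
    return address.strip().lower()

def test_normalize_email():
    # pytest picks this up automatically because of the test_ prefix.
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"
```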
All of those things are tactics and techniques, and while they vary — it really depends on the legacy application — the strategy is still the same: implementing a continuous improvement culture in the team. This might be easier said than done, and I don't say it lightly, especially if the project has been neglected for a while. But start with some small steps and keep improving things until you get technical debt under control, at least so that it doesn't run wild. The upside is that those small but steady improvements eventually compound with each other, improving the overall developer experience. And this will engage everyone to keep tackling technical debt and continuously improving the code base. And my personal experience leads me to believe that the golden rule approach might work best for legacy applications, because there are always going to be bugs and improvements that you need to address, and it's worth it to always account for that in sprint capacity. But yet again, your mileage may vary, so analyze what works best for you and your team, and exert critical thinking. But now, what if my application is something completely new? You then have a great opportunity to get things right from the start. You begin with no previously existing technical debt, and it's entirely up to you to keep it that way during the development cycle. There's a challenge in identifying it properly and documenting it so the team doesn't lose track. So don't forget to file the proper JIRA tickets. And also, there's no excuse for you to not write unit tests, okay? It's new code; you should be writing tests. Automated tests help you with regressions, refactors, and overall engineering mental health, so there's no excuse to not have them. But the actual main challenge here is to get management sponsorship and buy-in to properly deal with technical debt, especially when we might be in a situation where a hard deadline for a launch or first release exists. What might help you here is to show that not getting technical debt under control in the short term might lead to some bigger technical issues in the future, which might hinder the overall product development. In this case, I strongly recommend having the tech lead or project lead act as the technical debt advocate and argue with the manager or stakeholders so that this can get prioritized, at least for, let's say, the first post-MVP release. Yet again, the main challenge is to foster a continuous improvement culture and to get it right from the start. And those tips and examples here are to make you think about how you've been doing things to fight technical debt in your team and company. But this isn't a cookie-cutter solution, so exert critical thinking and try to figure out what works best for you. My experience leads me to believe that fostering a continuous improvement culture is the best bet, but there might be other solutions out there. But regardless of strategy or tactics, always remember that it takes a single broken window to wreck a building. Don't let your guard down; fix your broken windows. Thank you. Thanks for, you know, switching. Okay, well, we'll just put that, stay there. Okay, and if I do not want, I can leave. Okay, well, thanks for staying. Okay, so thanks for staying and not going to lunch. It was quite disheartening to see people leaving, but to be honest, I also would have preferred to get a burrito rather than see my own talk. I'm here today to talk about: you have a diversity team in your community — and then what?
That's a title that I found in the shower, under pressure. I had another one. And I would appreciate it if you didn't take too many pictures of me. My name is Michel Charin, and I work in the Red Hat open source program office. I'm a system administrator and — shoot, now the pop-up for the Wi-Fi is coming up. And the original title of that talk was about protest for the connected. I'm also French, so I decided to speak about our national sport during protests. And it was planned to be given to my team, but I never managed to organize myself to do that. So I decided to bring it here to Brno. And in case I decide to write a book, that's a real topic I'm going to speak about. I had a lot of complicated titles, just to sound smart. So to explain what it's about, I need to give my own theory on diversity in open source. And my example is based on what I call the MCU — not the Disney one, because then I'm going to get sued. It's the Michel Community Universe: all the communities I usually participate in as part of our open source program office. So when it comes to diversity, you usually just start with phase one. It's mostly just guys and dudes doing stuff. And they see that there is a diversity problem, and a huge one, so we go to phase two. We decide that, yeah, women also exist, and we need to get them to join our community. So for example, things like Debian Women, LinuxChix, et cetera, et cetera. And after that, you go to phase three. And phase three is you get more minorities; we decide that there are more axes of diversity. And usually the team changes its name. So for example, at Fedora, at first, nothing special. I mean, it's not like it was bad; it's just that nothing was organized before 2015. And people decided to say, yeah, we need to do something. So they created the Fedora Women's Day, FWD, and one year after, they started to say, yeah, we need to do something else. So they renamed it the Fedora Week of Diversity. You can see it's a perfect trick to not have to change the acronym anywhere. And then phase four — well, everything is in the title. Phase four is basically civil war. That's the difference with the MCU, where that happens during phase three. And that's when people start fighting. So for that, I have two case studies on how it evolved. The first one is a well-known player-versus-player MMORPG, also called Wikipedia. I decided to focus everything on the LGBTQ community, because that's what I looked at for months and because those are all the examples I have — but just keep in mind that it is very specific. So in 2006, the first group on English Wikipedia related to the editing of LGBTQ-related topics was created. Nothing happened; it's not formalized yet. That's at the start of Wikipedia, after four years. In 2008, they decided to organize Wikimania, which is the big Wikipedia conference, in Egypt. So that was before the Arab Spring. Hosni Mubarak was still a dictator. Now they've changed dictators, from what I know. So the situation was not great for gay folks — torture, this kind of stuff. People started to protest, but not much. Four years later, they decided to do a meetup at another Wikimania, this one in Washington DC, so much less problematic. They noticed that there is not much diversity. The foundation decided to create a diversity team and a conference, which resulted in an LGBT+ user group. So I'm not going to explain everything related to the structure of Wikimedia — it's like you take the way the Doge of Venice was elected five centuries ago, and it's more complicated.
And then again, another controversy appeared with the North Carolina bathroom bill, related to some event. And this time there was more reaction, because more people had joined due to the previous diversity work. And people started to discuss: yeah, what do we do if there is a law, but it's not enforced, and this kind of stuff. Then again, the foundation tried to do a meeting in Tunisia, and that one is pretty interesting, because at least one of their employees decided not to go because he didn't feel safe. He had moved from Iran to Berlin, and he said, yeah, I don't think I should go there, because I like dudes, and that's not cool over there. And the fun fact is, it was organized by a bisexual guy in Tunisia. He knew what it was about, but purely from an optics perspective, it was not good. And again, they decided to organize Wikimania in Singapore just at a time when it was illegal to be homosexual in Singapore. It was not enforced. So the whole group decided to start to organize and ask questions like: what happens if I decide to protest and I get sent to jail? Are you going to pay for my lawyer, and this kind of stuff? And yeah, the law changed before the event, which is this summer. But you can see the shift from "nobody cares when we send an email" to "we start to get organized and we are asking for protection," this kind of thing. The second case study is OpenStreetMap. This one is a much better story. So OpenStreetMap is much smaller than the Wikimedia Foundation. In 2012, the first paper saying "you are not diverse" came out. Usually when you have sociologists looking at it, people start to take it seriously — because we cannot listen to women when they say there is a problem; we need to have a peer-reviewed paper, et cetera, et cetera. Three years later, they said, oh, how can we improve things? They decided to create Geochicas, which is a group of women who decided to again do some organizing for diversity. Later, people started to add tags on the map for LGBT-related venues, which resulted in the creation of a group specifically dedicated to that on Telegram. Later, they managed to get a safety policy. So the safety policy says that if we do an event somewhere and it's not safe from the legal perspective, we need to do it somewhere else. For example, State of the Map, which is the big event of OpenStreetMap — the 2023 edition, this summer, was canceled, because one bid was Paris, and the French did what they do well, which is abandon and go home. Then it was between Kosovo and Cameroon. And I do not remember the reason why they didn't choose Kosovo, but for Cameroon, it was like: yeah, on Wikipedia, it's written that it's not cool for lesbian and gay people, et cetera. Maybe we should apply your policy? And they said, yeah, okay, we are going to cancel, we will do everything online, and we will see what happens later. And so that's the kind of example of people protesting for diversity, but I have plenty of them — like GitHub and Microsoft and the ICE boycott in 2020. Fedora had a Black Lives Matter ticket open for two years, deciding whether there should be a banner or not. And in the end, they said, oh, it's too late, the moment has passed. And it's even been happening this last year — because, again, these are recycled slides. So for example, in March '22, the Linux Foundation decided to organize an event in Texas. It had already been organized before, so they had signed the contract, and then Texas did something shitty with abortion or something like this — there is so much stuff happening that I don't remember. So people started to protest on Twitter.
And then, in 2022, people started to protest against Cloudflare and Kiwi Farms. I kind of worked on that as a sysadmin, to see what would happen when it escalated and people started to say, yeah, we need to stop using Cloudflare, et cetera, et cetera. In the end it was easy, we didn't have to choose, but yeah. Even this week — this morning — I saw on Hacker News people discussing whether we should do a computer science event in Florida. So yeah, you can see that the rate of protest is increasing, and it's all related to diversity. So as a community manager, or someone who cares about diversity, how do you handle that? That's where my talk was supposed to end for my co-workers, to let them debate. It didn't happen, so I have to invent some answers. So the first one is to start by saying: yeah, we are doing free software, but we are not doing politics. That never goes well. For a start, free software is politics. And second, "no politics" is a political act in itself. So just stop that; it's not going to work at all. Instead, you need to have a process in advance. That's much better, usually. The example I gave for Fedora: it was mostly because they had never decided how you decide — whether you put up a banner or not, what the process is. So they had to decide the process, hold meetings, and once they had the process and followed it, it took six months, so it was too late. And it's not the first time it happened; it happens all the time. The Wikipedia community, for SOPA and PIPA, some US laws, had to decide whether the websites needed to go dark or not — something similar with the current writers' strike. So decide in advance, so you are able to do things in time and not just lag behind. For events, try to get feedback as early as possible. Even if you decide to do something in Texas, well, at least you listened to people, and you can say: look, it's either Texas or we do nothing, so what do we do? Well, we go for Texas, and we try to find another solution next time. And also try to decide the difficult questions in advance. You can see that all the examples I gave were kind of related to the US, because I mostly work with US-based communities. But, well, there are other countries than the US — I know it sometimes surprises my co-workers. For example, the Black Lives Matter ticket I spoke about: it was due to the George Floyd murder and the protests. It turns out there is the same exact kind of issue in Switzerland at the moment, with a guy named Mike Ben Peter, who died four years ago. And right now there are protests in Lausanne for that. Nobody ever speaks, for example, about Roma discrimination, while it's a big topic for the European Union. I guess all the issues with caste in India — that's also something we do not speak about. So try to make sure that we are not only discussing the US; there is so much more that we can do and discuss. And also, well, be prepared for bad-faith actors. This is recorded, so I'm not going to give names, but my co-workers can likely see who I'm speaking about. There are plenty of folks who will, for example, co-opt feminist language, or try to appear progressive and just do shit. So you need to be prepared. And that's basically it. I don't know if I have time for a question, but since it's a lightning talk, I think I don't. So if you want to contact me, I'm not on social media except Mastodon.
And I have another talk in 20 minutes on that. But you can send me an email, you can ping me on IRC, or you can see me around and try to discuss — if you have any questions, remarks or anything like this. So thanks for listening, and all right, that's it. So hello everyone, my name is Roberto. I'm a senior software engineer at Red Hat, and today we're going to talk about chocolate cakes. So I like all of them very much, but many people have lots of allergies and intolerances. So how do you make sure that you don't get sick by eating one of these? You usually go to the ingredient list, right? And the thing is, how do you know that you can trust that ingredient list? Well, usually there are lots of food supply chain regulations in every country. There are guidelines on how to handle food — for example, if you touch something that contains nuts, then you should not touch something else in a restaurant. So usually we don't get sick from eating food, thanks to that. So this is actually very similar — a food supply chain is very similar to what a software supply chain looks like. So this is a typical software supply chain, very simply outlined. Everybody consumes artifacts made by these kinds of pipelines. Basically, even if you have one of those smart electric toothbrushes, chances are the software in it is getting built by one of these pipelines. So yeah, if it gets some malicious code, then you're not brushing your teeth today, sorry. And basically every time you use GitHub Actions or some GitLab CI or Jenkins jobs, you're probably using one of these. It can get really, really complex — trust me, it can get super complex — and each item that you put into that pipeline makes it more vulnerable. So yeah, basically each of the triangles here represents a kind of threat to the software pipeline, or software supply chain. No need to read all of them. Basically, no need to read anything in this presentation. Yeah, and imagine if, for example, in one of the builders, someone could put a small script that would detect when a build is triggered and would inject some malicious code into the code that you're building. This is actually a real case. It happened to SolarWinds in 2020, and it was a very famous case of a software supply chain attack. So what SLSA comes to solve is: how do I make sure that the code that the developer wrote is what's actually being run on your computer — what the binary actually contains — and that it was not tampered with in some way. So SLSA is a set of incrementally adoptable guidelines. It is, of course, a guideline: it doesn't make your pipeline or your software supply chain unbreakable, but it should help with hardening it. So basically, what it brings is a common language. Words are kind of easy to play tricks with, and companies can usually do that very well. So it's safer to say, hey, I'm compliant with SLSA level one — because the specification is super clear on what that means — than whatever kind of company talk you can do. It's also a way to evaluate how much you can trust that the code the developer put together is what you're running. I've actually seen, in the Arch Linux User Repository, people trying to get SLSA certification for some of their packages. But actually, the most important thing is that it helps you improve the security of your software supply chain, making it less likely that the artifacts generated by the supply chain contain malicious code.
There are many security standards that you need to comply with in many cases. One of them is the NIST Secure Software Development Framework. There are executive orders in the U.S. being released that make it compulsory to comply with this. But these standards — other standards — usually only tell you what the final state should look like, not how to get there. So to start talking about SLSA: most of it is about provenance. There are three key words here: verifiable, where, when and how. That's actually four words. Yeah, so basically, provenance, in plain words — it can be whatever you want, but it could be a JSON file that states how some artifact was built. For example, if you have a C program that you're building, you're compiling: which command was used with GCC, which flags were used, which dependencies are we using, where did we download those dependencies from, what the digital signatures of those dependencies are, digests — all of this connects, and you get a document that tells you exactly how an artifact was built, whether it is a container or a binary or whatever it is. So yeah, the concept of provenance is very important. Basically, it's a document that explains exactly how the artifact was built, and you can verify that it is actually true. So SLSA is divided into four tracks — sorry, into three tracks, one for each kind of threat: source track, build track, and dependency track. And each track has four levels of certification. The current state: SLSA version 1.0 was just released, I think on April 18th this year. So it's at an early stage, and it only covers build threats and the build track; and actually, for the build track, it only covers up to level three. Level four is probably coming very soon, but yeah, creating a specification of this kind is not easy. Basically, if you've read any RFC spec, you probably saw that the words are super precise: they are intended to mean exactly what they say. The SLSA working group is also prioritizing stability. They don't want to put in requirements for a level that are not achievable, or that they do not agree are secure. And basically, everyone involved in this — and there are many organizations and individuals involved — has to come to an agreement on what it means to be secure. So yeah, it's kind of tricky to reach all of that. For the build track, we basically have here the requirements for each level. For level one, you only need the provenance document to exist. It doesn't have to be verifiable or signed; the provenance just has to exist. That's level one, SLSA level one. Level two also requires you to have a hosted build platform. This is because when you have a build system that is very specific, it's usually easier: the attack surface is reduced, and it's easier to harden. If you have a system that only works for one specific thing — that's why it requires hosted platforms. And it also requires you to prove that the provenance is authentic, which means, probably in most cases, digitally signed by the builder. And level three also requires that all of the builds are isolated from each other, so that credentials or private keys for the digital signatures from one build cannot be accessed by another. The cool thing about SLSA is that you're not on your own. They provide a lot of tooling.
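Before moving on to the tooling, here is a rough sketch of what consuming such a provenance document could look like. The field names are illustrative — loosely modeled on the idea of SLSA provenance as described above, not copied from the actual SLSA schema — and the builder URLs are hypothetical.

```python
# Minimal sketch: a provenance-like document and a digest check against it.
import hashlib

provenance = {
    # What was built, identified by a content digest (placeholder here).
    "subject": [{"name": "hello", "digest": {"sha256": "..."}}],
    "builder": {"id": "https://ci.example.com/builder"},   # hypothetical
    "buildConfig": {"command": ["gcc", "-O2", "-o", "hello", "hello.c"]},
}

def verify_artifact(path: str, prov: dict) -> bool:
    """Check that the artifact on disk matches a digest in the provenance."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return any(s["digest"].get("sha256") == digest for s in prov["subject"])
```

In real SLSA setups this check is not hand-rolled; that is exactly what the verifier tooling mentioned next is for, plus a signature check that the provenance itself is authentic.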
For example, if you have a GitHub Actions job — in my personal opinion, if the workflow is not super complex, it's kind of easy to actually get SLSA certification for your artifacts. They're also doing a proof of concept for Jenkins, and they of course provide the slsa-verifier tool. So yeah, you're not on your own, unlike with other standards that just tell you, hey, you have to look like this in the end, but don't tell you how to get there. As for the community, it's an open community. Anyone can contribute. They're on GitHub; feel free to create comments, issues, merge requests. They have monthly meetings you can attend — I think it's basically a video call, so anyone can join. They have a manual as well. And yeah, it's probably not the golden standard yet, but in my opinion, it has a pretty good chance. And even if it doesn't become this super important standard, the actual measures that you implemented in your workflows will still help you protect your supply chain. And also, being at a very early stage of development, it allows new contributors to actually influence the specification. So if you think something is wrong in the specification, feel free to go to their meetings and say it, or create a GitHub issue. And also, for the tooling, you can create new implementations — maybe for the Jenkins one that we just saw. Or, I don't know. So yeah, that's everything, and thank you very much. If you have any questions, maybe we have some time. Okay, any questions? It's self-certification. Well, you check if you comply with the requirements. You have the slsa-verifier to check if the provenance is generated correctly. And yeah, I guess that at some point we will have certifiers that will certify your pipelines for auditors. Yeah, thank you. Any more questions? Go ahead. As for SBOMs: they are independent things. You can have SLSA without having SBOMs, but you can very easily generate SBOMs from SLSA provenance. And SBOMs are usually generated in a non-verifiable way, right? Having the provenance makes them actually verifiable. So yeah, both things go well together. Thank you. Any more questions? If it's self-generated and automated, isn't it something like a chain-of-trust problem — how would SLSA improve if there were a third party certifying it, or something? Well, basically, the consumer of the software decides whom they trust. Okay? So basically, as with any certification authority that you have in your browser — you're deciding, or Firefox is deciding for you, or Chrome or whatever — who do you trust? It's a very similar structure. Thank you. Any more questions? Okay, if you have any more questions, I'll be outside, probably. And if you want to — okay. All right. Elon Musk was born. In 2022, he decided to buy Twitter. And everybody left for Mastodon. I said it would be very short — this is a lightning talk, so that's the lightning part. So yeah, what is Mastodon? Everybody will say it's a kind of elephant, which is technically right, but that's not what we are discussing today. It's a social network web application that is free software. Obviously, we are at a free software conference, so we speak about free software. It started in 2016. Like a lot of projects, it's not named after an animal. For example, Python is not named after the snake; it's named after Monty Python. So Mastodon is not named after the elephant, but after some US metal band. So go figure. And yeah, a social network web application — every intranet has one.
Like the intranet that we use at Red Hat has some kind of social network. Nobody uses it. The one before it, nobody used it either. Every product on the market has one. So why is this one so special? Well, because it's federated, using a protocol called ActivityPub. The name is not important; I just mention it so you know that it exists. Yeah, but what is federation? So, for people living in Europe: well, Europe is a federation — physically, people being here and discussing. The US is a federation. The Russian Federation is a federation. And in the case of software, what we have is a setup where every node talks to the others. So there is no specific central server. In the case of Twitter, well, if someone rich decides to smoke too much, buy the company and trash it, well, you can do nothing, because it's centralized and he has money and that's it. In the case of Mastodon, worst case, someone can buy one server and trash it, and it turns out there are a ton of others. So that's not a problem. So how does it work when you have no central server to decide who is who and who talks to whom? Well, it relies on well-known technology: DNS — you know, something that always works, but that you cannot live without. And a system called WebFinger, because DNS is not enough, so you need more. Again, I just mention it; it's not important to know. And right now the whole network, which is the Fediverse, is close to 13,000 nodes. That's a lot. So there are some nodes with one person — for example, mine. There is basically me and maybe a bot account that I never used, because it's a social network, but it also works for people who are not very social. And there are bigger nodes, like mastodon.social — that was the number of people on it this morning. That's a lot. I mean, it's like the size of Brno, maybe. It's definitely bigger than my city. And since it's federated, you can also have multiple pieces of software. So Mastodon is one, but there are others. For example, Mobilizon, which is made by French people I know, and which is used to organize events — so, not to organize strikes, but it could work for that. Owncast, which is a server for doing webcasts and streaming. Nextcloud, which I should not have to present, but I will still present: you can use it because there is an ActivityPub component, so you can share and follow from your own server. There is something called Funkwhale, which allows exchanging songs. I got that list this morning from Wikipedia, so just go to Wikipedia — it has more space than my slide and it's more up to date. So, one of the interesting parts about federation is that it impacts the culture of the network a lot. For example, you need to choose an instance, because if you go to one place, you will discuss with some people; if you go to another place, you'll discuss with others. Since everything is federated, you can discuss with each other, but there are different rules. You know, it's kind of like 20 years ago, when you had to choose your email provider and not just go to Gmail or Hotmail — you had a choice. That's basically the same. And the instances are divided either by language — like, if you speak French, maybe you want to speak with other French people instead of going to a Japanese animation instance where you understand nothing — or sometimes by topic: there are instances related to free and open source software, something for academics. There are different policies around the instances. Some people are very strict; some people are more like, we are very free speech and we welcome Nazis because that's free speech, this kind of stuff.
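To make the DNS-plus-WebFinger point above concrete: WebFinger (standardized in RFC 7033) is how one server asks another "who is user@yourdomain?". A minimal sketch, assuming the third-party requests library; the example account is hypothetical.

```python
# Minimal WebFinger lookup: the account-discovery step the Fediverse uses.
import requests

def webfinger(handle: str) -> dict:
    user, _, host = handle.partition("@")
    resp = requests.get(
        f"https://{host}/.well-known/webfinger",
        params={"resource": f"acct:{user}@{host}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Example: find the ActivityPub actor URL for a (hypothetical) account.
# for link in webfinger("someone@mastodon.social")["links"]:
#     if link.get("type") == "application/activity+json":
#         print(link["href"])
```

Everything else — following, posting, federated timelines — is built on top of that discovery step plus ActivityPub message exchange between servers.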
So yeah, you have to pick an instance, and it's okay — it's much easier to move around than in real life, where you also have to pick a city. Here you can just click and move to another place. It does not work like that for your nationality and this kind of thing. So just pick one, and if you do not like it, move somewhere else. And of course, since it's federated, that means nobody has power over the others — or, I mean, no one has ultimate power — which means there are new types of drama, which is fascinating in itself. For example, I gave the example of free speech instances that say, yeah, we do not care about anything as long as it's legal, and if it's legal in the US, that's great. Well, they tend to clash with anarchist instances that, for example, do not like fascists — you know, the whole point of anti-fascists is that they are against fascists. So they tend to clash, and usually what happens is people just blocklist each other and disconnect. And for some reason, that's something people think is bad. But when you do the math with that many nodes, there are millions of potential links. So yeah, obviously some of them will be problematic and will be cut. And I remember discussing that, and I'm like, yeah, sure, there are 20 people who cannot discuss with 20 other people, but out of millions of folks, it's nothing. And yet, sometimes it's seen as a big deal. So far, nobody has blocked me, but I say nothing, so it's easier. So yeah, my thoughts were mostly about community and official presences of projects, because we had this discussion in the Fedora community. Like, people said, oh yeah, we need to be present on Mastodon — okay, sure, why not? We are on other social networks, and it's the one where everybody is going. And then the question is, how do we do it? So we could start by using an existing server. That's easy; you have nothing to do, you just need to create an account. The only problem is you need to find a sustainable server, because if it's run by volunteers, well, volunteers tend to have a life outside of their unpaid volunteering. And they also sometimes tend to use crappy hardware, forget to do backups, this kind of thing. So you need to choose carefully. Yeah, sometimes volunteers take vacations and nobody's there to restart the server. You need to decide about moderation and reputation. Again, it's not great if you have your Fedora account sitting right next to an ad account. So you need to verify that. You also need to match with the local community. For example, Fedora would fit great with something related to hacking, something related to free software — not so great among people discussing French cuisine and its delicacies and interesting stuff. And then there is the question: how do we know that it's you? Because anybody can create an account anywhere, so you cannot just say "it's me". So there is another solution, which is using your own domain. That's my preferred one, because people know that if it's written fedoraproject, then it's Fedora. Then you have two choices. The first one: paid hosting. Paid hosting is cheap — it's 10 euros per month. The only problem is, if you are working at a big free software company based in the US, well, you have to deal with procurement, and it's easily six months of meetings, just to spend the 10 euros. Then there is the solution of self-hosting. And that's why I'm doing this presentation. So for self-hosting, you can decide to go with Mastodon, which is the default software, the one everybody follows because everybody uses it.
And the problem is that it's a standard web application. I've seen worse, but just because you can torture me with something based on Java and Scala doesn't mean I should have to deploy something complicated. So you need Postgres, you need Rails, you need, I guess, Redis and Sidekiq, and this kind of stuff. It's pretty doable, but it still consumes a lot of resources. Or you can use something small. I use a piece of software called GoToSocial, and it's great because it's a static Golang binary. The installation is: curl, start. It can use a database server, but you can use SQLite, so there's nothing to do there. Almost no configuration; everything is done with environment variables. And if you work at Red Hat, for a community, just contact my team and we will be happy to have people test our deployment — because when there are meetings, I do not listen, I do some work and experimental stuff instead, so I already have everything deployed for that. And that's just the part about deploying. Because when you are doing this for your own account, you do not share your account — you know, it's like your toothbrush, you do not share it. When you are in a community, you can share the account. You still do not share the toothbrush, just the account. So we found that there are two issues. The first one is: how do you post? Because you can pass the password around, but it's not secure, and if you do that, I will pretend that I didn't hear you say it. But there are several ways. There are a few services to schedule posts. On Twitter, people use TweetDeck, which is Twitter only. People are using Hootsuite, which does not work for Mastodon; I know that Buffer supports it. There are various tools: one called Mastodon Scheduler, another one called Feed2AP, which takes an RSS feed and posts it on Mastodon, and various bots — or you can write your own. I was supposed to do that for my co-workers, but it turned out it's too complicated to authenticate using OAuth. So I was waiting for simple password authentication, and it's not developed yet, so I did nothing. And then there is the other issue, which is receiving. Because you are going to post and say something, and people will try to interact with you. But if it's a bot, well, computers do not answer. ChatGPT is not really answering; it's just pretending to. So you need to have notifications for that. And again, that's an open problem. It should not really be too hard. It's basically writing a client, but removing the part where you deal with an actual user and UX. So it's mostly getting events and then sending an email. And for some reason, nobody has done it yet. But if you want to find something for an intern, I'm ready to give that intern some work related to that. We have no budget in my team, so we cannot do it ourselves. And that's the end of my presentation. So if you want to contact me: again, I'm not on social networks except Mastodon, but it's my private one, so it's not on the slide. You can contact me at misc at redhat.com, or on IRC, or internal stuff or whatever. But that's it. And that's where you are supposed to applaud. Thank you for being here today. This is Mario Casquero. I am a software quality engineer from Red Hat Spain. And well, this is my first time at DevConf here in Brno, and also my first time delivering a topic at a conference. So I hope you all will enjoy this lightning talk as much as I do. Let's start. Today's agenda includes, first, quality testing: the basic concept and some different types of tests that every quality engineer runs day to day.
Next is how carbon emissions and clean electricity are directly related to a developer's work, and how those emissions can be reduced. Next is: what is carbon-aware software? How does it work? And the two main solutions used by green software companies. Finally, I'm going to present one of those solutions and how it is possible to make use of it in order to make not only quality testing carbon-aware, but basically any other task. So, quality testing is mainly what we do as quality engineers. We keep continuous control over a product with the idea of reducing, or even better, eliminating possible errors before it goes to the market. The way we achieve this is through a bunch of different tests, which include stress and regression testing, smoke testing, negative testing, sanity testing, and so on. So let's go through a little introduction to each one. Stress testing consists in performing any operation supported by the product under conditions of insufficient resources such as memory or disk space, high concurrency, denial-of-service attacks, et cetera. For regression testing, the idea is to run tests over one or more features already existing in the product because they have been modified. Depending on the resources, the priority, or just the time, it is possible to test all the existing cases or to prioritize some of them according to the changes that have been made. Smoke testing is simply the necessary set of preliminary tests that cover the critical functionalities of the product — first of all, starting the application, okay? In case of failures, the software release of the product may be affected, so they are quite important. Negative testing ensures that a system properly handles, for example, unwanted data input and unexpected user actions. And sanity testing is similar to smoke testing, but it goes deeper, because it covers more than the critical functionalities, doing this over a stable build, typically related to new or fixed functionality. It can also be understood as a subset of regression testing. So, as everybody knows, carbon emissions are one of our current and biggest problems. A lot of companies are trying to reduce them, as is the case with Red Hat, which intends to reach net zero emissions by 2030. Nowadays, it is possible to reduce this carbon footprint by selecting a different time or location for running things. Because most electricity is produced by burning fossil fuels, but there is a percentage, getting bigger and bigger, produced by renewable and cleaner sources such as solar and wind. So if we make our software conscious, so it does more when the electricity is clean and does less when the electricity is dirty — that means produced by fossil fuels — then that's called carbon-aware software. If we want our software to be aware of carbon emissions, we need to start thinking about real emissions data and, probably most important, forecasts. Keeping in mind the carbon intensity of electricity, which is measured in grams of carbon dioxide per kilowatt hour, it is possible to select a time at which this amount is the lowest in order to run, for example, some stress testing, which, as I mentioned, usually consumes a high amount of energy. If there is also the possibility to run things at different locations, the real emissions data or the forecast can be checked depending on the location selected. All of this is actually possible with the two following solutions: WattTime and Electricity Maps.
For this lightning talk, as it is only 15 minutes, I will focus on Electricity Maps. And the truth is that it is not totally free, but they provide a free one-month trial, so it is enough at least to give it a try, as in my case. Electricity Maps works through an API, and the following endpoints are provided: carbon intensity, power breakdown, power consumption breakdown, and power production breakdown. The last two are merely forecasts, so they do not offer real data. For today's purpose, the most interesting is the first one, carbon intensity. First of all, we obtain the zone code for the region whose carbon emissions we want to know or whose forecast we want to read. This is a public API, okay? Registration is not needed in this case, so everybody can trigger it. If we take a look at the response, it is composed of a huge list of the supported countries and regions. I've summarized it a bit, and we have to look for the one we are interested in; in this case, we have Czechia here. As a clarification, we are only interested in the zone code, so for Czechia that will be CZ, not the zone name, which is already Czechia. Once we know the region, to read the real emissions data for carbon dioxide we request the following endpoint, okay? You will see "latest" at the end of the endpoint, and the zone code obtained previously. Taking a look at the response for the selected zone, in this case Czechia, we have this carbon intensity value. This number will be lower when the electricity is cleaner, meaning produced by renewable sources, and higher when more fossil fuels have been used. It is actually measured in grams of carbon per kilowatt hour. One more thing: if you want to disable estimations for this kind of endpoint, you can just include the query parameter disableEstimations in the request and set it to true. To obtain a 24-hour forecast, the following endpoint has to be used. You can see "forecast" at the end of the endpoint, and again the zone code of the region we obtained before. A real application of this could be to prepare an automated process that, given a region, as it could be Czechia, selects the desired time based on the API results; for sure, the time when the carbon intensity will be the lowest. Okay, taking again the response covering the whole 24-hour forecast, this has been summarized again; for sure, a real response would be much bigger. We have 337 grams of carbon dioxide per kilowatt hour at this time of day, so that is the ideal time for running something like a stress test. And that's it; that's a proper way we can contribute to making our software aware of carbon emissions. One more thing: they provide code snippets to make it easier for the final user to integrate these kinds of endpoints, okay? So, for example, you can obtain the curl command, JavaScript functions, or a Python script, among many other things. That's all, thank you so much for your time. If you have some questions, that's perfect. Okay, okay. You are asking if the forecast changes a lot. Actually, it shouldn't, because it uses an estimation model that is providing data in real time, okay? So I mean, it could change at maybe 10 or 12 hours out, but in the near term it should be quite confident. The next question: do providers take into account variables like traffic? Because I know that Google, for example, can provide weather data to enrich the whole CO2 estimation process.
Okay, this is something... okay, I don't have as much experience with Electricity Maps in that respect, but yeah, I think not all the variables are included in the estimation procedure. I also saw that it changes between America and Europe: for Europe, there is some extra data that can be taken into account to provide those values; for America, there is not. So it will depend on the region, and I am not sure whether all the variables, like traffic or the way the electricity is produced, are included, sorry; I don't know the depths of Electricity Maps. Okay, any more questions? Yeah? Yeah. Yeah, because that way you are consuming energy when it is produced from something like the sun or the wind, so you are not directly contributing more CO2 to the atmosphere, right? That's the idea. Okay, we are out of time. We can discuss outside, if you want. Okay, thank you so much.

Okay, we are starting with the journey of automation. Thank you. Hello, everyone. My name is Sergey, and this is my colleague Pavel. Today we'll be talking to you about the journey of automation that we had on the project we are working on, Linux system roles. So Linux system roles is a set of Ansible roles for managing Linux subsystems, like networking, storage, and so on. And it is critical to automate low-level, labor-intensive, repetitive tasks in the modern world. And we're lucky to have many tools that allow for this automation, such as GitHub Actions, Packit, and so on. So today we will cover two main topics: the automatic release of the content of a GitHub repository to Ansible Galaxy, and also how we release the content as a Fedora RPM using Packit. To begin with, about how we release content on GitHub itself: we have a script that developers launch when they need a new release, and this script does three things. First of all, it uses the Conventional Commits format to decide what the new semantic version should be and to generate the changelog for the new release. And after this, it pushes a PR with the updated changelog and the new version to GitHub for review. After the developer performs this review and merges the PR, we have a workflow that runs automatically. This workflow does two things: it creates a Git tag and a GitHub release, and it publishes the repository content to Ansible Galaxy. And to begin with, I need to explain first what the Conventional Commits format is. You can see the format on screen: there is the type of the change, then an optional exclamation mark, which marks whether the change is API breaking or not, and the title of the PR. One note here is that we initially used the Conventional Commits format for the commits themselves, but it turned out that commits are aimed more at developers, so they have many technical details that end users don't care about at all. So instead we decided to use this format on PR titles, which may be more high level and better describe the actual feature that the PR introduces. And you can see an example of our new changelog using this format, or rather the titles of the merged PRs. The Conventional Commits format, first of all, allows us to determine the next semantic version, and this is done using the type of the pull request. If it has the exclamation mark, so it introduces a breaking API change, we bump the major version; then for features we bump the minor version, and for other changes we bump the patch version.
And of course you can have a patch-type change with the exclamation mark, so bumping the major version as well. And the second thing that Conventional Commits allows us to do is the release changelog. Again, we use the type to automatically put PR titles into the new features section, the bug fixes section, and the other changes section. And on the right of the screen you can see the updated changelog, with the formatted entries, with the new release and date, and then all the sections that we need. To sum up the process: the developer runs the script; the script collects merged PRs since the last release and processes them to identify the new semantic version and generate the new changelog; a pull request with the updated changelog is pushed; and the developer goes and merges this pull request. And after this we have another layer of automation, which is a GitHub workflow. As you can see on the screen, this is a workflow that runs on pushes to the main branch and is only triggered by changes to the CHANGELOG.md file. It then creates a GitHub release and propagates the new content of the repository, in our case, to Ansible Galaxy. And from here we continue with releasing the Fedora RPM using Packit, which Pavel will cover.

So let me take it from here. So Packit, that's a tool that automates common packaging tasks, and the Packit service in particular automates proposing Fedora releases from GitHub releases. The service is triggered by a release on GitHub. It then updates the RPM spec file, which means it bumps the version field and updates the changelog; it uploads the source tarballs to the lookaside cache; and it opens a pull request with those updates. So it doesn't perform the update itself, which is why I say that it proposes the Fedora release, because now the Fedora maintainer has to review and merge this pull request into Fedora dist-git. This is the only manual step in the process. And then, when the Packit service sees that the pull request is merged, it performs a Koji build and a Bodhi update. Today there was actually a talk specifically by the Packit team about this tool; you can watch the recording, and they also have a booth. So I will not go into details of how to configure this service, because you can check the talk or check the documentation; I will only give a very brief overview. The brief overview is: you create a packit.yaml in the upstream GitHub repository, and you configure the propose_downstream job that proposes the downstream pull request. Or, another option, you can instead create the packit.yaml in the Fedora dist-git and configure a job called pull_from_upstream. This is useful if you don't have commit access to the upstream project, if you are not a member of the project, so you can configure everything on the Fedora side; and you should create a packit.yaml in Fedora dist-git anyway, because you need to define the other two jobs that do the Koji build and the Bodhi update. So this is quite simple, it looks quite simple at least, and quite useful. But during this process we encountered a few problems, and I believe if you are a Fedora maintainer interested in using Packit you would probably encounter them as well, so I thought I would share what we encountered and the resolutions.
So the first problem: where to actually maintain the spec file. Packit by default assumes it can find it in the GitHub repo, but this means that any Fedora changes made in Fedora's dist-git, for example someone proposing changes via a pull request in Pagure, would get overwritten during the next sync by Packit from GitHub. The solution is to keep the spec file in Fedora dist-git as the primary version. But then Packit needs the spec file and it is not available, so the solution is to fetch it from Fedora dist-git when Packit does the update. Fortunately, Packit has actions, which are kind of hooks that can be executed during the process. So we set up a hook, a simple shell command, which downloads the spec file from Fedora dist-git for Packit to use; and we actually need to download all the files that are included from the spec file as well, because we use includes in our spec file. Another option is to configure Packit in Fedora dist-git, as I showed on the previous slide, instead of in the GitHub repo, and to use pull_from_upstream instead of propose_downstream. The reason I haven't used it in our case is that this job was not available yet when I started this automation. So, as you can see, automation, like any other software development, is subject to change, and often you are lagging behind the newest features available in the tools. Anyway, another problem we encountered is the RPM changelog, because by default Packit collects all the Git commit message summaries in the GitHub repo and uses them as the changelog entry for the RPM, or it can optionally use the GitHub release description as the changelog entry. But both of those options produce quite verbose changelogs, which is contrary to the Fedora packaging guidelines, which state that changelogs should be brief and, in particular, must never contain an entire copy of the source changelog entries. We wanted to comply with the packaging guidelines, of course, so the solution was again a custom action that is called by Packit when it needs to collect the changelog entry; it's a simple shell command that echoes a brief message with the version, which Packit provides to us in an environment variable. The next problem was related to multi-source RPMs, because our presentation is actually simplified: it talks about one repository on GitHub, but our RPM is assembled from many individual repositories. This means we have multiple source tags, and we need to update them if any of the source tarballs changes, and Packit needs to upload those tarballs to the lookaside cache. To update them when any source tarball changes, we again use a custom action that regenerates the part of the spec file where we have the source tags. It's marked in the spec file; the spec file is actually used as a template for this generator, and the generator replaces all the source tags with the new source tags that are needed. And for uploading the new sources to the lookaside cache, Packit now supports multi-source RPMs, and if any source is a URL, Packit uploads it to the lookaside cache. This was actually implemented by me, because I needed it in this project, but now everyone who uses multiple source tarballs in an RPM can benefit from it.
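Putting those pieces together, a minimal packit.yaml combining the workarounds above might look roughly like this. It is a sketch under stated assumptions: the package name and URL are placeholders, and while the jobs and action names shown are documented Packit features, check the Packit documentation for the exact schema of your Packit version:

```yaml
# A sketch of a packit.yaml for the setup described above.
# Package names and URLs are placeholders.
specfile_path: my-package.spec
upstream_package_name: my-package
downstream_package_name: my-package

actions:
  # Problem 1: fetch the authoritative spec file from Fedora dist-git so
  # Packit does not overwrite downstream-only changes.
  post-upstream-clone:
    - curl -O https://src.fedoraproject.org/rpms/my-package/raw/rawhide/f/my-package.spec
  # Problem 2: provide a brief RPM changelog entry instead of copying the
  # whole upstream changelog.
  changelog-entry:
    - echo "- New upstream release ${PACKIT_PROJECT_VERSION}"

jobs:
  - job: propose_downstream   # open the dist-git pull request on release
    trigger: release
    dist_git_branches: [fedora-all]
  - job: koji_build           # build once the dist-git PR is merged
    trigger: commit
    dist_git_branches: [fedora-all]
  - job: bodhi_update         # file the Bodhi update after the build
    trigger: commit
    dist_git_branches: [fedora-all]
```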
Yeah, so in conclusion, I'd like to say that it's really critical to automate all the repetitive tasks that you can. And to live up to this standard, we developed this presentation using the tool Marp and a Marp-based GitHub action that converts a Markdown file into a presentation, and then, using GitHub Actions again, publishes it to GitHub Pages. So now let's proceed to the Q&A section. Anyone has any questions? All right then; just for you to know, I have another slide with references on every subject that we covered today, so you can go and click on the links and study everything. And we'll also be here around the corner for some time after the presentation, so please come around if you need to. Thank you. The next talk is "Get in the container and explain yourself."

Hi everyone, we had some technical problems, but we are ready. My name is Wainer. I have worked for Red Hat in the virtualization team for a while now. This presentation was actually proposed, and most of the slides were made, by Treva Williams, who is the technical manager at the OpenInfra Foundation, where the project is hosted, but she could not make it here, so she asked me if I could present it. I have been engaged in the project that I will be talking about, Kata Containers, since more or less 2020. So basically I'll be talking about Kata Containers, and this is the mantra of the project: the speed of containers, the security of VMs. This is what we are seeking in this project. Okay, a little bit of history. Well, the project actually began in 2015, when Intel launched Clear Containers; I don't know if many of you remember that project. Then in 2017 it was merged with other projects, and now it's under the umbrella of the OpenInfra Foundation. And in May of 2018, the first release. So this was the first release; not the perfect release, but it was a workable version of Kata Containers. In 2019, more or less one year later, we had the first adopters using Kata Containers in production, so we had Alibaba and other players. In October of 2020, a major bump, the 2.0 release; I will explain a little bit about each of those releases later. Two years later, another major release, and guess what: in 2024 we will probably have the next major bump. I don't think it's intentional; it's just a coincidence. And basically we sit here, in the next generation of containerized workloads, and so on. Okay. But what exactly is Kata Containers? In a traditional containerized system, you basically use namespaces and cgroups to isolate the process, to create the containers. Of course, there are other layers of software and features being used, but the main ones are namespaces and cgroups. Then you can have other technologies like seccomp for enhancing the security of your containers; seccomp, for example, to filter out syscalls that you don't want your application to execute. And you have capabilities and so on and so forth. The thing is, if your workload, the application, finds a hole in, for example, container B, it can escape and get access to container A, or even access to the host kernel. So this is the worm, basically, going from one container to another. And in order to, not exactly solve that problem, but add a new layer of security for containers, we have the Kata Containers approach, which is basically: your container is now isolated within a VM. So it's running inside a VM.
And you now have this layer between the host kernel and your application. Even if your application is able to escape from the namespace, for example, it's only going to have access to the guest Linux kernel, not the host. So this is basically what Kata Containers is. Moving on: when I joined the project, we used to integrate Kata Containers with Docker, for example, so you would be able to do docker --runtime and spawn a container with Kata Containers. We dropped that support, and nowadays we support only Kubernetes. So this is basically how a container is initialized in Kubernetes. At the top, you have the container runtime interface that drives Kubernetes. In this world, "container runtime" can mean many things, but you have CRI-O and containerd, or nerdctl. And when you need to spawn a container, one of those container runtimes is going to run, for example, runc or crun, depending on how you configure your environment. Basically, runc is in charge of actually running the container, or starting the container, providing the environment for the container, and managing the life cycle of the container. This is the workflow with Kata Containers: it basically sits between the container runtime and the container within the VM. So now CRI-O and containerd call the Kata runtime, and the Kata runtime is in charge of creating the VM, spawning the container process inside the VM, and so on. It's important to say that you don't need to change your application, right? The same application that you have, you don't need to recompile it or something like that. It's seamless; it works. You just need to change a property in your configuration file when you're going to deploy your application on the cluster, and it's going to automatically work for you. Okay, this is an example of a use case for Kata Containers. In this example, you have sensitive workloads. Traditionally, the way you protect or separate those workloads is basically to run the sensitive workload on node three and the other one on node four, so you are physically separating the workloads. With Kata Containers, you can even spawn the two sensitive workloads on the same node. So this is going to allow you to increase the number of pods and containers that you can run on your node. I'll go a little bit fast, sorry. So this is the product that we made out of Kata Containers: this is OpenShift sandboxed containers. Here are some blog posts where we explain more use cases for Kata Containers in OpenShift sandboxed containers. We are in the Red Hat catalog, so it's very easy, very simple, to just click and install it. So, a little bit about the releases. In the second major bump we improved the agent, because there is an agent, let me go back a little bit, there is an agent inside the VM here that communicates with the runtime, right? There is no container runtime inside the VM. And we improved it, we changed the agent, it's now written in Rust, and we changed the protocol, just to lower the amount of memory that we use in the VM, because this overhead that you're going to have on the system is a concern, right? All right, so, virtio-fs is now the standard shared file system, because basically in Kata Containers the container image is not pulled inside the VM; it's not even Kata that is responsible for that, it's CRI-O and the upper layers.
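As an aside, the "property change" mentioned above is the pod's runtime class. A minimal sketch, assuming the cluster's Kata deployment has installed a RuntimeClass named kata (the actual name and image are illustrative and depend on your cluster):

```yaml
# Run an unchanged application under Kata Containers by selecting the
# Kata runtime in the pod spec.
apiVersion: v1
kind: Pod
metadata:
  name: sensitive-workload
spec:
  runtimeClassName: kata          # the only change versus a regular pod
  containers:
    - name: app
      image: registry.example.com/my-app:latest
```

And because the image is pulled by CRI-O or containerd on the host rather than inside the VM, its files have to be shared into the guest, which is where the shared file system discussed next comes in.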
So we share the container file system with the VM, now using virtio-fs; that was the first release where we used it. And then we provided some things for security: we introduced support for Cloud Hypervisor, so Kata supports Cloud Hypervisor, Firecracker, and QEMU as the virtual machine monitors, all of them spawning KVM VMs. A little bit of stability, some integration with Kubernetes. Version three: we have other enhancements in terms of performance. We have this single-binary Dragonball hypervisor, which basically means the runtime, the Kata runtime, and the VMM, the virtual machine monitor, can be combined into a single process. So when we start, you don't need two processes. And we made this mostly because we rewrote the runtime in Rust again; but this was not just a rewrite for the sake of rewriting, right, because Rust is nice. No, we changed the architecture a little bit. And now, yeah, support for cgroup v2; then we migrated to the version of virtiofsd written in Rust as well. We started the Confidential Containers spin-off, which will have a workshop tomorrow, if you're interested in knowing more about Confidential Containers. We bumped versions of QEMU, the kernel, and so on, because of course you need to have the guest kernel, you must have the rootfs and everything; these are maintained and tested in the project. What's coming next week? Sorry, next year. There was a presentation from Fabiano Fidencio at the forum this week where he mentioned some things that we are working on; it's just listed here. Yeah, so the project is getting rusty: we have only, I think, the runtime that is still in Go. And yeah, if you want to contribute... unfortunately, I'm out of time, sorry for that. If you have questions or comments, you can find me outside. Is there any time for at least one question, maybe? Go ahead. So the question is about the isolation through the VM: does it mean that you lose the ability to set limits? Yeah, the limits still work. We use hotplug, for example, for dynamically increasing the limits. That's all. Okay, thank you.

It's a bit of a short time. Okay, please come forward. So, are you ready? Okay, hi. Can someone please close the door? Yeah, so hi. I work at Red Hat in the storage management team, and today I will tell you something about managing your storage with Ansible and Linux system roles. If you have never heard about Linux system roles, it's simply a set of Ansible roles that you can use to manage some common components of your Linux systems. The idea is that instead of trying to figure out what command you need to use on RHEL 7, RHEL 8, RHEL 9, Fedora, whatever, you write a single playbook that will work everywhere. There are multiple subsystems supported within system roles. Each is a separate role; there are roles for SELinux, networking, firewall configuration, and of course storage. If you are interested in networking, there was a separate talk yesterday about the network role. Yeah, and you can get it either as an RPM package, in RHEL as rhel-system-roles or in Fedora as linux-system-roles, or just through Ansible Galaxy. And if you are interested in more details about the roles in general, there's a link to our homepage; I will be talking mostly about the storage role today. So the storage role is, obviously, the Linux system role for storage management. And these are some of the technologies the role supports.
We support LVM, which is the default on RHEL and used to be the default on Fedora a few releases ago; that includes LVM RAID, thin provisioning, LVM cache, and deduplication and compression with LVM VDO. We also support normal software RAID with MD, so you can decide whether you prefer LVM RAID or MD RAID. LUKS encryption; and if you don't like LVM, you can just use standard partitions. And of course, we also support managing file systems on those devices, which includes resizing of the file systems and also fstab management. So if you tell us to mount something somewhere, we can manage the fstab entry for it as well. I don't have a demo, because if I had a demo, it would take the entire time to figure out how to make the demo work, because that never works. But I did all of these steps yesterday on my VM, so you need to trust me that it actually works. So this is how a playbook with the storage role would look, or the part that is concerned with storage. As with everything, or most things, in Ansible, you don't tell us what to do; you tell us what you want to achieve. So you write a description of your storage in the playbook, and then it's up to the role to basically manage the storage to the point where it looks like what you described. This is an example of how a default Fedora installation with LVM would look with the storage role. That's not a very good use case for the storage role, because you don't do that; the installer does that for you. But if you wrote the playbook like this and reran it on your existing system, nothing would happen, because the system is already in this state. We will detect that. Yeah, you asked us to have one volume group called fedora on the disk vda, with two logical volumes, root and home, for your root and home. And yeah, we will detect that the system is already in this state, and we will do nothing. So after you run the role, you get exactly this storage setup, and nothing should change if you run it over and over again. So, something a little bit more realistic. Let's say we have the setup from the previous slide, and we bought a new hard drive and want to start using it: we want to add it to our volume group, and we want to resize our home volume to have more space for our data. So all you need to do is this: under the disks, where previously there was only vda, you add your second disk, vdb; and then under home, where previously there was something like 25 gigabytes, you just put 100 gigabytes, and we will resize it for you. So yeah, after you run that, you will be able to see that vdb is now part of the fedora volume group and that the home logical volume is now 100 gigabytes. By the way, you possibly noticed that I removed the size under the root logical volume. I did it for two reasons. First, it wouldn't fit on the slide; but also, I can do that. If you don't write something in the storage role, then, for some things, we have default values: if you decide to create a new logical volume and you don't put in a file system type, for example, it will default to ext4 on Fedora and XFS on RHEL. But if it's an existing device and you are just asking us to change something, then the things you don't write are taken from the device. So here, for the root logical volume, I didn't specify the size, and for the role that means: yeah, don't touch the size. And by the way, for sizes we also support percentage values, so you don't need to be specific about exact gigabytes. You can write: I want 50% for this and 50% for that.
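As a sketch of what that playbook might look like, assuming the Galaxy role name linux-system-roles.storage and the device names from the slides (the variable names come from the storage role's documented interface, but treat details like the size syntax as illustrative):

```yaml
# Grow the "fedora" volume group with a second disk and resize home.
# Device names and sizes match the example above and are illustrative.
- hosts: all
  vars:
    storage_pools:
      - name: fedora
        type: lvm
        disks: [vda, vdb]        # vdb is the newly added disk
        volumes:
          - name: root
            mount_point: /       # no size given: leave the size untouched
          - name: home
            size: 100 GiB        # grow home; percentages also work
            mount_point: /home
  roles:
    - linux-system-roles.storage
```

Rerunning the same playbook is then a no-op, since the described state already matches the system.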
So another use case: let's say we saved some more money and we bought an NVMe drive. This NVMe drive is very small, because we don't have much money, so we can't use it for our root file system; which is fine, because we don't support migrating logical volumes between PVs yet. It's one of our future goals. But you can use it as a cache for your root file system. So we need to add it to the volume group. Again, we will add our NVMe to the disks; this time it's an NVMe namespace. And then, for the root logical volume, we will say: yeah, we want this cached. So we say cached, set the cache size to five gigabytes, and then we need to tell the role that the NVMe will be used for the cache. And you have an LVM cached logical volume. This output is quite messy, but you can still see that the NVMe is now part of the fedora volume group. And there are a lot of cache pool devices that are quite complicated to understand, but you can see that the cache pool data device is five gigabytes and it actually sits below the root logical volume, meaning that the root logical volume is now cached. So yeah, another use case. I wanted to show how to actually remove something, because, as I said, if you don't write anything, if you don't write the logical volume, that doesn't mean it's removed. Again, for space reasons, I didn't write anything about the home logical volume, but it wasn't removed. So what do you need to do to actually remove something? In this case, remove the cache. You need to write cached: false, because if you don't write anything about the cache, we will say: okay, you didn't write anything about the cache; that means we won't touch anything about the cache. And if you actually ran this without the variable at the top, the role would do nothing. That's the so-called safe mode. It's on by default, and by default we won't remove anything, even if you explicitly ask to remove something, or even if you, for example, change the file system type to XFS here. We wouldn't do that by default, because it means you're deleting data, and it's not a very good idea to do something automatically in storage that could mean you lose your data. So if you want to be safe, just don't disable the safe mode. In this case, removing the cache is actually safe, but it's still removing something, so you need to set storage_safe_mode to false. And then, if you run the playbook with this, we will actually remove the cache from the root logical volume and also remove the NVMe from the volume group. So this is how it would look afterwards; you basically see it's the same as before: you have your root and home logical volumes and no cache. So this is some of the fancy stuff you can do with it. Of course, I didn't show the very basic features here, like adding a new sub-volume, sorry, logical volume. If you want to add a new logical volume, you just put it here under the volumes, with a name like data and whatever size, file system type, and mount point you need. And as I mentioned, the mount points are managed in fstab, if you don't tell us not to do that. So if you put a mount point there, it will mount the device for you and put it in fstab, so it's mounted during boot. So yes, these are some features we already have. We are still missing some; the storage role is relatively new.
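Putting the cache and removal pieces together, here is a sketch under the same assumptions as above (variable names from the role's documented interface; device names illustrative):

```yaml
# Cache the root logical volume on the NVMe drive. Removing the cache
# later means flipping cached to false and, because that is a
# destructive change, disabling safe mode for that run.
- hosts: all
  vars:
    storage_safe_mode: false     # needed only for destructive changes
    storage_pools:
      - name: fedora
        type: lvm
        disks: [vda, vdb, nvme0n1]
        volumes:
          - name: root
            mount_point: /
            cached: true         # set to false to remove the cache again
            cache_size: 5 GiB
            cache_devices: [nvme0n1]
          - name: home
            mount_point: /home
  roles:
    - linux-system-roles.storage
```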
Right now, in the next RHEL release, we added support for file system online resize, which I actually kind of showed here, because I was resizing home while it was mounted; so I was already using the development version of the storage role. And we had some customers request adding options to specify owners and permissions for the mount points, so you can be sure that you can write to a newly created mount point if you create it with the storage role. In the future, we'd like to focus on more configuration options. As I said, we started with the basic stuff: adding new logical volumes, creating something new. And we'd like to add more options like the ones I presented, to modify what you already have; maybe enabling encryption on things that are not encrypted. Yeah, and also, I mentioned the storage technologies we support; we would like to add more. So Stratis, Btrfs for Fedora, and also some basic snapshot support is planned for the future. There are also plans to create a more complex snapshot system role, but people expect that the storage role would also support snapshots. So yeah, that was my short presentation. There are some links for you if you are interested in more, and some contacts. And I think we have two or three minutes for questions. Yes, the question was whether you need to specify drives by name, like vda, sda and whatever, or whether you can use the other links in /dev. Yes, you can use the UUID and ID links in /dev. And the RAID support is for both MD and LVM RAID. Any more questions? Yes, so the question was about system roles more generically: whether there is a system role that supports installing packages. I am actually not sure whether there's a system role for that, because you have the DNF module in Ansible, so you would probably use that. But yeah, I think you can do it with system roles as well; still, there's a DNF module in Ansible, so you would probably use that. Any more questions? Okay, so thank you. Thank you for the presentation.

Hello, everyone. My name is Zor, I work at Red Hat, and I want to share with you the story of how I broke our CLI. But more importantly, I want to share what I learned from it. So I work in a team called OCM, which stands for OpenShift Cluster Manager. It's a service, and that service allows our customers to install OpenShift clusters on AWS or GCP. Apart from that service, we also have a CLI tool, called rosa, which stands for Red Hat OpenShift on AWS. And it covers a subset of the features supported by our service, specific to AWS and users bringing their own cloud account. So our service maintains a cluster life cycle. When a user requests to create a cluster, the cluster initially goes into the waiting state, where it waits for the user to complete some configuration. Once the configuration is ready and everything is set up, the cluster goes to the installing state. And when the installation is finished, the cluster is finally ready.
Now, I wanted to enhance the service and add another state, which would perform some validations inside the user's cloud account and prevent later installation failures. The problem is, our CLI was counting on the waiting state being the initial one. So I went ahead, tested everything on my service, had it reviewed, had it merged; everything was great. Next morning, I found out the CLI was broken. Not so good. So I said, okay, I can fix the CLI. It was a small check on what the initial state is; remove that check, fix the CLI. But then I realized I would have to create a new version and ship it to hundreds of customers. Now, the problem is, customers are not so happy to upgrade their CLI tool, and I definitely can't force them. So at the end of the day, I had to compromise, and I implemented the validating state after the waiting state. This is not the optimal solution; this is not what I wanted to get. But that's the compromise I had to make, which led me to think about this talk. So, how do you build a service-oriented CLI? The first question you want to ask yourself is: why do you even need a CLI? I mean, you have a service, it's RESTful, and you can do anything with your service. Why do you even need another CLI tool? Eventually, it all comes down to user experience, and I'll give you examples. First thing: we have a complex service. Our service allows users to configure cluster creation on Google Cloud, using a Red Hat Google account or the user's Google account, and on AWS, using a Red Hat account or the user's AWS account. We had a growing share of users wanting to bring their own AWS account. So we decided that for those users we want to simplify the problem: we don't want them to go into our service and configure all the parameters; we just want to give them a simplified solution. Another example: whenever you want to use the service, on every API call you need to provide an authorization header with your token, and you need to provide the full service URL. Whereas if you work with a CLI tool that sits on your machine, you can hold your credentials or refresh token locally, with one command, a login command that also supplies the URL. That's it, you're logged in; you don't need to deal with that anymore for the entire session. Okay, another thing: RESTful commands are nice, but human-readable commands are much nicer. You can use names instead of IDs, and describe and edit instead of get and patch. What about the output? You can get a big JSON with all those fields, or you can get a nice YAML, again in a human-readable form. The input: you need to provide a JSON body, or you can just give two nice command-line flags to your create command. Another great thing about CLIs is interactive mode. Interactive mode allows you to validate users' input on every flag they put in; you can have autocomplete; you can also provide a list for your users to choose from. So those are really great things that CLIs allow you. So where does it get complicated? Let's say we want to add new functionality to our product. In our case, I'll take the functionality I shared with you before: users have to configure a few things in their cloud environment for us to continue with the installation, and we wanted to automate that for the users. So should we add it to the service, or should we add it to the CLI? It's very tempting to add it to the CLI, because this is a very specific Rosa use case, so it's not interesting to other clients of the service.
Another thing is that the CLI is a very small piece of code in comparison to the service. So it's much easier for a new developer to get in and put that small change inside the CLI: no need to get into a lot of service logic, no need to worry about getting it merged into this big service. However, we have other clients for the service. So if we go ahead and implement that in the CLI, we also have to implement it in the UI or any other client that we have. Another thing you want to consider is the key difference between a service and a CLI tool. A service has continuous rollouts; it's continuously evolving; it's always there in your cloud. Whatever you want to change in it that doesn't change the API, be it a small enhancement or a bug fix, you just roll out a new version. You can do it weekly, bi-weekly, daily, and it's there for everyone to use. But a CLI tool is not in your environment; it sits in your customers' environments. And every new functionality, every bug fix, means a new version. And every new version can mean a very long release cycle, because it has to go through compliance and other internal company processes before you release it. Another thing is that customers are not so happy to upgrade their CLI, or any software they have, because that also has a process within their company. And another thing to consider: every version has to be supported for a while, and you might find yourself supporting multiple versions. So, to sum this section up: you want to maintain a thin CLI. You want to ask yourself: could other clients benefit from the functionality I want to add? If the answer is yes, you want to put it in the service. What if we need to enhance it? What if we find bugs in it? To sum everything up: you want to keep minimal business logic inside your CLI. Okay, let's go back to the first example I shared with you, about how I broke the CLI, and let's think about how that could have been avoided. The first thing you want to ask yourself, as a CLI developer, is: what can I rely on? What assumptions can I make about the service that I'm using? Because we have a contract; the service exposes a contract, that's the API. But there is so much more beyond the API, like other behaviors, like the initial state that we had. And if you think about it, you might stop yourself from making decisions that will limit your service's future enhancement. Another thing you need to think about is your version support policy. You want to carefully define how often you release a new version. How many versions are you going to support backward? Define, I don't know, two minors, three minors, and have that policy known to your customers. And whenever a version is going to go out of support, you want to give your customers enough time to prepare for it; you want to notify them. Last but not least, you want to have automated CLI tests. We had automated tests for our service, really with big coverage; we didn't have a lot for our CLI. If we had tested our CLI's most common use cases, we would have definitely caught that earlier. So you want to put the CLI inside your CI process. To sum everything up: you definitely want to write a CLI to improve your user experience, but you want to keep it thin and minimize the business logic inside it.
Always, always keep in mind future service enhancements when you go ahead and make assumptions about your service; define a version support policy; and have automated tests for your CLI, running continuously. That's all I have. Any questions? Okay, great. Thank you.

Hello everyone, good afternoon. So I'll start by asking a simple question, which is quite obvious considering we are at an open source conference: how many of you have been contributing to open source? Can you please raise your hand? Awesome, almost everybody, I see. But how many of those contributions were not code related? Anybody? Yeah, only a few, right? So that is what I'm going to talk about. So, about me: who am I, and why am I talking about what I'm talking about today? I am currently a freelance product and UI/UX designer, and last year I interned through Outreachy, which is an open source internship program, and worked on an application called ODK-X. So that's what I'm going to talk about. Okay, so first question: can I even make design contributions in open source? Yes, definitely. And the main reason is that there are usually little to no contributions from designers in open source, right? And if you've ever organized or maintained an open source project, you'll realize that organizations often require more than just code contributions. So, especially for beginners, you can start contributing even as a writer, designer, community manager, and much more. You can also contribute to documentation, make tutorials, participate in community discussions and events, et cetera. But is it really even needed? I would say definitely a big yes, since developers are not trained in design, right? They can make things work, but from a user point of view, to improve the overall UI and UX, to make it simpler for the user, I think it's very important for designers to also start contributing to open source. Also, especially for newbies: in almost every field, right, it is a little difficult to get started, to get your first opportunity. So it can be a great value-add to a newbie's portfolio: if they have contributed to a real-world project, they can add it to their portfolio and gain some valuable experience. Some things that they, or we as non-code contributors, should keep in mind: the target audience, who they are designing for, what kind of users are going to use that particular project or app or whatever; the platform, whether it is a web-based application or a mobile application; and the budget and the timeline. Open source projects can be tight on budget and timeline, so it is very important to keep that in mind as well. The most important thing: what kinds of contributions can be made? There are a lot of types of designers as well. Graphic designers can get started with designing logos and icons; artists and illustrators can do illustrations; UI/UX designers, like me, can design websites and mobile apps. Designers can also help with branding and design systems. Not to confuse design systems with system design: a design system is like a guide, a collection of all the details about a particular project, let's say what font is used in the project, what colors are used, et cetera. One of the most important things, I feel: what challenges can you expect to face, and how do you overcome them? From my experience, I would say lack of communication and ownership can be a hurdle, right?
In an open source project, we are usually expected to communicate via mailing lists or comments on the issues themselves. So in case you are looking for feedback quite early, it can be a bit difficult. How can you overcome it? Just be proactive and patient, right? And the second challenge, I feel, is lack of resources. Again, as I mentioned, open source projects can have a tight budget of sorts, especially for the design part, right? So you just have to be creative and resourceful. You can find free and open source design tools and resources; you will easily get free icons, icon packs, illustrations, et cetera. I have linked some of those here. How should I get started? How I got started is I just searched on GitHub for keywords like design, logo, or UI/UX. Sometimes project maintainers already have issues opened which mention these keywords, like "we are looking for a redesign for our app" or "we need a logo for our app," so you can easily find those using these keywords on GitHub. Another thing: if you are already interested in a certain project, even if they do not have a design issue opened, you can still go ahead and suggest certain changes, or whatever you feel can be improved overall in the design, the UI/UX, or even documentation and things like that. The second is this community called Open Source Design, at opensourcedesign.net. They are a community of open source designers, and they regularly share resources, tips and tricks, and also small issues and jobs related to open source design. Okay, what are the opportunities? Here I've mentioned some of the most well-known open source programs out there, which, when I got started, I did not know you could be a part of even if you don't code, right? So Outreachy is the one I was a part of; they conduct their internships twice a year, and they have around 20% non-code-related projects that you can apply to. GSoC, Google Summer of Code, a very popular one; they also have a lot of non-code-related projects that you can apply to. Hacktoberfest, a very famous one again: it is conducted in October every year, and last year I submitted some design contributions and was part of Hacktoberfest as a designer, not a coder, basically. And the last one is Summer of Bitcoin. This is a pretty new one, and unlike the other three, it has a dedicated design track: you can go through the program either as a developer or as a designer; they have a separate track altogether. The other three just have a certain share of non-code-related or design-related projects that you can contribute to. Yeah, so finally, some bonus tips. Even though you are working on open source and might feel like you're working for free, make sure your work is still high quality. Make sure your work is easy to use, because even after you stop contributing to the project, the designers, developers, and most importantly users after you will be using it. Also, documenting your work and process, I feel, is very important, even for the project you're contributing to, so that in the future, if another person is trying to reference the same thing, they can easily understand it and build upon it. And promote your work: unlike at most companies people work for, you do not have an NDA or anything for open source projects. So feel free to promote your work, add it to your portfolio, and make your work work for you.
So yeah, that was about it. Thank you. The question is: what are the suggestions for design work? Like, if you work in an open source environment, you ideally want to have the same methodology for design work as well. Yeah, so what I have worked on is: I submit a design suggestion. Let's say I feel that a particular screen or a particular flow can be improved; I sketch it out or design it in Figma and submit that file on the issue, saying I feel this can be incorporated, and then that issue can later be assigned to a developer to work on it. So how do I track the design variations or versions? Again, Figma has the version history feature, or we can also post, I think, multiple links of sorts. There is nothing like GitHub for design yet, so that's what we've got to do. Yeah, so, open source alternatives for most of the tools: let's say I was talking about Figma, right? Figma is not open source; it is not free to use. So when I was working with Outreachy as a designer, interning there, that was one of the challenges I faced. I was very comfortable with Figma, but they informed me that this was not something I could use, because it's not open source and we are an open source organization. So then I had to go and look for open source alternatives and learn them, maybe quickly. But yeah, there are a lot of resources and software available. It's just not that popular yet. Yeah, Penpot. Penpot, P-E-N-P-O-T, Penpot.

Hello everyone, I am Parul Singh and I work as a senior software engineer at Red Hat. Today we'll talk about how you can improve the power efficiency of your cluster, make sure that you're consuming less energy, and, if possible, control the carbon footprint of running your workloads on the cloud. So, who we are: we are a group of people who are taking a community-based approach to environmental sustainability. We are also part of the CNCF TAG Environmental Sustainability, and if you want to check out the proposal, the QR code will take you there. In general, our mission is to advocate for, develop, support, and help evaluate environmental sustainability initiatives in cloud-native technologies. And we also aim to identify values, and possibly provide incentives, for service providers so that they can reduce their energy consumption and control their carbon footprint through cloud-native tooling. The great thing is, as of June, Kepler is part of the CNCF as a Sandbox project. So that's a great step for us. And what was the thing that initiated bringing sustainability into computing? In 2021, an ACM technology brief estimated that the ICT sector, the information and communication technology sector, contributed 1.8% to 3.9% of total global carbon emissions. And to give you context, that is more than the carbon emissions of Italy and Germany combined. So we do emit a lot of carbon. And this brought us to ask questions: how can you measure energy consumption indirectly, when you don't have access to the racks in the data center and you cannot install specialized hardware? And how do you measure the energy consumption of workloads? If you are using a cloud, it is possible that you get the total energy consumption as a tenant, and so does the other tenant. But how can you pinpoint the energy consumption of particular workloads?
And again, when you're running in a private cloud, or in a hybrid cloud, or in a public cloud where you are not the only one and there are many other people using resources, how do you attribute power to specific processes, containers, and pods? So with all these thoughts in mind, we designed our cloud-native sustainability stack. It has many projects, but today I'm just going to talk about Kepler and the Kepler model server. Before that, I want to give you an idea of the principles we follow to attribute energy consumption. We did a bunch of experiments, read a lot of papers, and came to the conclusion that power consumption can be attributed to resource usage by the processes, containers, pods, et cetera, that are running. And here is one example; in this picture we only talk about CPU usage, or CPU power consumption. Let's say you have a pod with one container, and it is consuming 10% of the CPU: then you can say it contributed 10% of the total CPU power consumption. If it is consuming 50%, you can say it contributed 50% of the CPU power consumption. So, the first project, obviously, is Kepler, which stands for Kubernetes-based Efficient Power Level Exporter. It uses software counters to measure power consumption by hardware resources and exports the results as Prometheus metrics. Down the line, we are thinking of moving from Prometheus and adopting OTLP, so that any monitoring stack compatible with the OTLP protocol can consume our metrics. So Kepler employs a three-pronged approach: it reports per-pod-level energy consumption, including CPU, GPU, and RAM; it works both for bare metal as well as virtual machines; and it exports the energy metrics as Prometheus metrics. Obviously, the goal of Kepler is to measure power consumption; you don't want Kepler to measure power consumption while itself contributing 20% of the power usage. To minimize that, we made Kepler very lightweight; that's why we're using eBPF to attribute power consumption to a particular process. And, I'll talk about this later, but you don't always have access to hardware power consumption via RAPL or ACPI; for example, in the case of a VM, you cannot access the power meter. In that case, we use machine learning models to estimate energy. So this is the bottom-up approach: for data collection and aggregation, Kepler uses, as I said, software counters and power meters to calculate the power consumption of the hardware. And then, for data modeling and presentation, Kepler converts this power consumption into energy estimations using machine learning. So how are these machine learning models formed? They are formed by the Kepler model server. When the node energy is not provided to you, in the absence of a power meter, Kepler relies on pre-trained models that can estimate energy consumption. Right now, the stack we're using is TensorFlow Keras, Flask, and Prometheus. We use two kinds of models: the CPU core energy consumption model, which is trained on features like CPU architecture, current CPU cycles, current CPU instructions, and CPU time; and the DRAM energy consumption model, which uses CPU architecture, current cache misses, and memory working set.
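Written compactly, the attribution principle described at the start of this section is just a ratio. This is a reconstruction of the idea from the talk, not a formula quoted from Kepler's documentation:

```latex
P_{\text{pod}} = \frac{u_{\text{pod}}}{\sum_{i} u_{i}} \, P_{\text{total}}
```

Here u is the usage of a resource (CPU time, for example), the sum runs over everything using that resource on the node, and P_total is the measured power of that resource; the same split can be applied per component (CPU, DRAM, GPU) and the pieces summed.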
So there are two phases, the training phase and the exporting phase. For training, the Kepler model server has agents sitting on each of the nodes; these agents scrape the node metrics and export them to Prometheus. Then the Kepler model server scrapes Prometheus and forms the data set, both for training as well as testing, and it trains the model and evaluates it, and if it has acceptable accuracy, you can use the model. For using or exporting the model, Kepler uses Flask endpoints. Again, as I said, down the line we are going to go with OpenTelemetry, but you can also load the model in memory to do the estimates. So, how do you decide which models to use and when? Generally, it depends on the available measurements. If you have access to the total power, then you use power ratio modeling, where the power attributed to a process is its share of usage relative to the total. But, as I mentioned, when power metrics cannot be measured, for example in the case of a virtual machine, you can estimate power using usage metrics as input features of the trained model, and you can do three levels of estimation: you can estimate node total power, which includes the fan and power supply as well as internal components such as CPU and memory; then node component power; and then pod power. So this slide shows the various scenarios of when you use which model. Take the first row: when you have bare metal x86 with a power meter, you measure total power using ACPI, you measure node component power with RAPL, and pod power is just a ratio. And the last row, if you see, is a pure VM scenario: when you don't have access to node power or node component power, then even the pod power is a pure estimation. You will have these slides, and the link will give you more information on the various metrics that we use. And as I said, Kepler works not only on bare metal but also in VMs; and now Kepler goes beyond Kubernetes, because we have created an RPM that you can run on a Linux server to estimate the power consumption of individual processes outside the Kubernetes ecosystem. So I have a few screenshots I want to share that show some very interesting Grafana dashboards; at least, they're interesting to me. For this cluster, we have six nodes, and Kepler runs as a DaemonSet, so there are instances of it on each of the nodes. And we have a Grafana route, and this is the first dashboard. We use a third-party API to compute the carbon intensity by region, and this is for the United States. And you can see over here, the various colored graphs show the carbon intensity by region, highest at the top and lowest at the bottom. So BPA, which looks like a step graph, is the lowest, while the purple one, MISO, is the highest. The second dashboard: now that you have the carbon intensity by region, you can use Kepler to translate that carbon intensity to a particular process. You know what the energy consumption is, and you know what the carbon emission per kilowatt hour was, and using these two values you can correlate what the carbon footprint of each of the namespaces, or each of the pods, was. So for example, over here we are using the BPA region, and the carbon intensity on the left-hand side, you can see, is ranging anywhere between 0.6 and 1, but the power consumption of all namespaces is a straight line. And here we have the MISO region: the carbon intensity is ranging between four and five, but the power consumption is constant.
So you can control your carbon emissions if you can schedule your workloads in a region that has less carbon intensity. We have applied Kepler in open source projects. We have worked with IBM to design a container-level energy-efficient VPA recommender on top of Kepler. We also worked with Microsoft and the KEDA team, and we have made carbon-aware scaling with KEDA. And something that I worked on is carbon-aware scheduling: if you have access to the details of what the source of your power is, can you steer your workloads to rely more on renewable energy and less on fossil fuel? So that is all. This QR code will take you to the sustainability stack; this is the link to Kepler and the model server, but you can check out the other projects as well. And now, I guess we have time for a few questions. If you don't have any questions, you can also have suggestions. I have a question. Your work is very interesting. It basically would capture the autoscaling cases when, for example, one workload triggers a machine to be scheduled in the region. It basically changes your total consumption, that one more machine, because one workload particularly needs to scale up, so one more machine is scheduled. And if I'm seeing it right, you would capture that because you have the consumption of the whole... cluster. And then, if you could basically relate it to... the region, how do... Oh, sorry. Could you repeat the question for everyone? Okay. So, if I got it right, what you were asking is: if I control the machines that are in a region, can I somehow control the carbon intensity of the workloads? Is that your question? Yeah, especially the use case where one more machine is created for the... You mean to say machines, or you mean to say workloads? Virtual machines. Virtual machines, okay. It could be OpenShift and we have a... Got it, yeah. Yeah. The machines that are present could create a new machine in this region. Got it. So, the carbon intensity itself Kepler will not give you; that you have to get from a third party. And if you have access to that, then definitely you can do it. If you're adding a machine, you can do the estimate of the total, like for all the nodes that are present in the cluster, and you definitely get the workload estimate from Kepler. Now, tie these together and you can definitely do it. But the challenge we are finding is getting reliable carbon intensity data, because everybody has carbon intensity numbers, but they don't talk about how they produce them. There's no transparency, and we just have to accept what they are giving us. So we are working toward a more open source and transparent way to get this carbon intensity data, but if we have access to reliable carbon intensity data, definitely that can be done. Has anybody used Kepler or heard about it before? I know we have one person who uses the project, but has anybody else heard about Kepler before? Okay, I'm sure you will now. Okay, nice. Okay, how much time do we have? So you asked, how do we use carbon intensity information to do the scheduling? I have some slides in case somebody is interested. So, okay. As I said, we rely on a third-party API to get the carbon intensity. What we did was develop a carbon intensity forecaster that scrapes this third-party API to tell you what the carbon intensity is going to be some steps into the future.
And for this experiment, we were forecasting carbon intensity for a day ahead. So first, you get the carbon intensity data from the forecaster, and there's a cron job that keeps querying the forecaster to get the carbon intensity for each of the nodes. If the carbon intensity is very high, we label the node red; if it is somewhere in the middle, we label it yellow; and if it's the lowest, we label it green. Now, there is something called labels and taints in the Kubernetes ecosystem, and using those features we control this. When a node has a really high carbon intensity, we label it red and we also apply a taint to it. If a node has a lower carbon intensity, it gets labeled yellow or green, as you can see, and is not tainted. Tainting a node means that pods are evicted from it unless they have a toleration for that taint. And this is the pod spec. Here in the pod spec, the pod is explicitly declaring its preference for a node whose carbon-intensity label is green, and it does not tolerate nodes tainted red indefinitely: the tolerationSeconds, for the sake of the experiment, we made five. That means if a pod lands on a node that is tainted red, it will just stay there for five seconds and then be evicted, just to make it clear that the workloads are moving. So as nodes change, the scheduler, the inherent Kubernetes scheduler, will look at the pod spec and at the taints and labels on the nodes; it will evict the pod from node one, which is going from green to red, and schedule it on node two, which is going from red to green. So tainting nodes ensures that pods are evicted from those nodes if the pods do not have tolerations for the taint. Somebody asked me this question yesterday in the workshop: what if I don't care about carbon intensity and I don't want my pod to be evicted? In that case, you don't need to provide this additional information in your pod spec. Just don't provide a node selector and don't provide tolerations, and then it will be treated as a general workload.
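To make that concrete, here is a minimal sketch of the kind of pod spec being described, assuming a made-up `carbon-intensity` label and taint key, since the talk doesn't show the exact names:

```yaml
# Hedged sketch of the carbon-aware pod spec described above;
# the "carbon-intensity" key and its values are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: carbon-aware-workload
spec:
  # Require a node currently labeled as low carbon intensity.
  nodeSelector:
    carbon-intensity: green
  # Tolerate a red NoExecute taint for only five seconds, so the pod
  # is evicted shortly after its node turns red.
  tolerations:
  - key: carbon-intensity
    operator: Equal
    value: red
    effect: NoExecute
    tolerationSeconds: 5
  containers:
  - name: app
    image: quay.io/example/app:latest
```

The cron job would apply the matching taint (something like `carbon-intensity=red` with effect `NoExecute`) to nodes forecast as red; drop the `nodeSelector` and `tolerations` and the pod schedules like any general workload, exactly as just described.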
Yeah, thank you so much. Our next talk is about ChatGPT. Okay, hello everyone. Thanks for joining. I'm Stefano Maestri, senior manager at Red Hat. Long story short, why am I talking about ChatGPT and so on? Not because I'm a senior manager, but because I've been passionate about AI since high school; my thesis at university was about neural networks, and so on. Why I fell to the dark side of management, I don't know, but anyway, here I am. So let's start by talking about what generative AI is. This is a definition I've taken from OpenAI, and there you can see a few points highlighted by OpenAI. I'll leave you to read it; I'm not reading it for you, I'm sure you can. But read my highlighted version, which is a bit different, because I'm highlighting something else here: basically, that it is learning patterns from existing data and from there generating new and unique output, mimicking human creativity. This is the key point I want to stay on in my talk. Because, what is intelligence for Einstein? The story about Einstein says that he wrote this sentence on the board in the first session at Bern University, and when he finished writing, he told the classroom: if you cannot read this, please go out and don't return to my course. It says, basically: the measure of intelligence is the ability to change. And that is exactly what Einstein did all his life, because the innovations we got in physics from Einstein come from intuition, not from applying the patterns we already knew. What he did was write this very famous equation, but the equation is not the point of Einstein's research. The point is that he had an intuition that no one else had before: that mass could bend the plane of space and time. This is the point of Einstein's intuition, and that kind of intuition is not possible based only on what we knew at that point. What I'm trying to say is that if we just apply patterns of what we know, we would still be discussing Newtonian physics now. Another example of generative AI is art. AI art is all over the world at the moment, and it's great. I asked a generative AI to generate art for me. In the first case, I asked for a post-cyberpunk landscape; I also asked it not to include skyscrapers, and it probably didn't understand that very well. But anyway, this is the landscape it generated. In the second case, I asked the AI to generate something that includes galaxies, and this is an original image; it's not a real galaxy, it's an image generated by the AI. This is pretty impressive, pretty cool. But I like art, and there is an artist from the middle of the last century called Lucio Fontana; he was from Argentina and lived in Italy, and he is the father of spatialism, of spatial art. Spatial art is basically, well, let's use his words and not mine; he explains his art better than I can. What Fontana does, if you don't know him, is basically cut the canvas instead of painting on it. But it's not a deliberate act of vandalism: he is introducing the space between and around the canvas, and the light behind and around the canvas. If you look at his art in person, it's pretty impressive, because it's not just a cut; it's really art. And what is the ability to change here, the intuition of the art? In the middle of the last century we got the first images from space, and the first images from space looked like these. If you look at them now, you understand where his cuts come from. And, you know, it's pretty different from the galaxy I showed you before, generated by an AI, which just takes what we know, the data we gave it, and produces new content that is, let me say, standard content; it is not disruptive. In this case art, but not only art: science could and should be disruptive if we want to evolve as human beings. And just to give you an idea of the difference between intuition and applying patterns: everyone is scared of artificial intelligence becoming too intelligent, but again, the point is not intelligence, it's intuition. These are two classics of lateral thinking. Probably you already know them and you already know the answers; or if you don't, we can get to the answers together. The first one says: in the middle of a large forest, the corpse of a man dressed entirely in scuba gear was found. He was wearing a wet suit, mask, fins, oxygen tanks, and even weights. The sea was 15 kilometers away, the forest was not crossed by any road, and there was no means of transportation around the man, only the trees. How can this be explained? That's the first question. The second one: a man is pushing his car while some boys watch him without doing anything to help.
He stops when he reaches the front of a hotel, and at that point he realizes he has gone bankrupt. Some additional information is needed here: the man was pushing the car with one finger; the fact that he went bankrupt didn't bother him too much; and the boys also realized that he had gone bankrupt and started laughing. I asked ChatGPT, and these are the answers. I'll go quickly; I have time, I think. So in the first case, you remember the scuba man in the forest. Basically the answer is: okay, the situation you describe seems to be a riddle or a puzzle, right? However, based on the information provided, there are some possible explanations. One, an airplane accident, and the explanation from ChatGPT, which you can read, is pretty strange: that he was on an airplane in his scuba gear. Two, illegal activity, probably because the data it had is full of illegal activity; we gave it that data, so apparently we do a lot of illegal activity. Three, a prank or an art installation: a dead scuba man in the middle of the forest as an art installation. Anyway. And then it told me: please note that these are only conjectures based on the limited information provided; without further details or context, it is challenging to provide an accurate explanation. Thank you. But it gave us something, and if you didn't know the answer, maybe you are a bit closer now. What it told us is not completely wrong. On the second one it does better; it doesn't get the result, but it does much better for us: in this scenario, the additional information provided helps clarify the situation. Here is a possible interpretation. A man is pushing his car with just one finger, indicating that the car is a toy or a miniature version. The fact that he pushes the car with such ease suggests that it's not a real car but a small-scale model. Maybe, maybe. When the man reaches the front of the hotel, he realizes he has gone bankrupt. This implies that he had been playing at or pretending to be a wealthy person, but the reality of his financial situation dawns on him at that moment. However, it doesn't bother him too much, indicating that he may have been indulging in a fantasy or living beyond his means. Something. The boys who were watching him without helping realize that he has gone bankrupt as well and start laughing. This suggests that they were aware of his pretence of an exaggerated lifestyle, or find humor in his sudden realization and change of fortune. Overall, this appears to be a playful or symbolic scene illustrating a personal or financial downfall in a light-hearted manner. Not bad, not bad. Very close to the answer. Does anyone know the answers to the two questions now? The second one is a bit easier; the first takes a bit more lateral thinking. These are the solutions. The first one: an airplane scooping up water to put out a fire scooped up the scuba diver too. Pretty unlucky. And the second one is the very famous board game, Monopoly. And I'm sure you remember there is also a car token, a small car. But again, we have very little time now. Still, analyzing the answers from ChatGPT, we can get to the solution more easily than starting from nothing. So what? Why use generative AI, and why not overuse it? It's an excellent thinking partner, as I demonstrated to you. It's fast, and it's getting more and more precise. It will help a lot in better understanding complex patterns which may not be clear to us at the moment.
This is a very important point, the first one and the last one, because it's important to understand that there are patterns out there that we have and don't understand. In healthcare, for example, we don't understand the patterns of cancers, but maybe there are patterns. We need help to understand those complex patterns, and this is an excellent help. Why not overuse it? Because disruptive thinking is needed for real progress; there is no progress without disruptive thinking. We cannot base our progress only on what we know so far, otherwise we don't progress. And real creativity needs some craziness. All of these people were cool and a bit mad sometimes, but they were all geniuses. That's it. We don't have time for questions, but if you have any, I'm here. Hello, our next talk is about OpenShift. Hi, everybody. My name is Andrea Fasano, and I work as a principal software engineer at Red Hat in the core installer team. Anyone here familiar with the OpenShift installation process? Cool. So today I'm going to introduce you to the agent-based installer, which is a relatively new project that we have developed for simplifying the installation of OpenShift. I'd like to walk you through its main features, and I'll try to give you a quick glance at how it works under the hood, so you can understand whether it's something that could suit your needs or not. Since it's a lightning talk, I'll try to go as fast as possible without dipping too much into the details, but I'll try to leave a couple of minutes at the end for questions and answers; otherwise you can get in touch with me after the talk. So let's start by defining, first of all, what the agent-based installer is. Essentially, it's our attempt to merge the experience coming from the assisted installer technology with the ability to run in disconnected environments. By a disconnected environment, I mean an environment that does not have a direct connection to the internet. And we try to simplify this experience as much as possible. Looking at it from a very high-level point of view, the overall workflow can be divided into three main steps. First of all, everything starts by building a single ISO image that will contain everything necessary for the installation. Then this ISO is used to boot all the nodes that are part of the cluster. And finally, you simply wait for the cluster to come up. We'll go deeper into each of these three steps to understand how it works, but before that, I'd like to quickly recap some of the main characteristics of this approach. First of all, as I said, the ability to run offline; of course, it can also be used for connected environments. The other important thing is that it doesn't require a provisioning host: one of the nodes will be selected as the bootstrap node, which means you don't need an additional machine for the installation. Probably one of the most relevant parts is the ability to run a high number of pre-flight validations, thanks to the assisted installer technology. These help you understand whether the upcoming installation is going to succeed or not. And last, it's fully automated: once the ISO is ready, you don't have to do anything special, and it was designed to be easily integrated with a third-party orchestrator. Another important thing is the ability to provide static network configuration via NMState. Yesterday there was a nice talk by Montserrat about NMState.
If you don't know it, it's a simple declarative language for providing network configuration. We also support zero touch provisioning; there was an interesting talk about ZTP as well, and it's essentially yet another set of manifests to describe a cluster installation. We support installing single-node OpenShift, or SNO as you may know it, so a cluster composed of a single node. And we fully support the bare metal and vSphere platforms, so everything in the on-premises area. But let's now see how it works by looking at a potential deployment of a compact cluster in a disconnected environment. A compact cluster is a cluster formed by just three masters, without any additional workers. Everything starts from a bunch of configuration files. For those already familiar with the installer, you will recognize install-config.yaml: this is the usual configuration file that you create to describe your desired cluster, where you put the number of workers, the masters, and so on. And we have a new entry, agent-config.yaml. This is an additional piece of configuration that lets you define a couple of interesting things. The one to take away is the rendezvous IP. This is mandatory: here you have to define the IP of the node that will act as the bootstrap during the installation. All the other properties are optional, for example the possibility, as we said, to define a static network configuration for each host involved in the installation. This could be useful, for example, if you want to define static IP addressing. Once you are ready, you can use a new subcommand available in openshift-install, which is agent create image. It performs a number of static checks, and if everything goes fine, it generates the agent ISO. What is the agent ISO? It's nothing other than a live RHCOS image with a customized ignition file prepared by us, containing all the ingredients for carrying out the automatic installation afterwards. I think you can already appreciate the fact that the ISO can be generated in an environment that is completely different from the one where you are going to deploy the cluster, and potentially also by a completely different person.
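To make the two files concrete, here is a hedged sketch of what such an agent-config.yaml can look like; the apiVersion shown and all the addresses, MACs, and interface names are illustrative assumptions rather than values from the talk:

```yaml
# Hedged agent-config.yaml sketch; concrete values are assumptions.
apiVersion: v1beta1
kind: AgentConfig
metadata:
  name: compact-cluster
# Mandatory: the IP of the node that will act as the bootstrap
# (rendezvous) host during the installation.
rendezvousIP: 192.168.111.10
hosts:
- hostname: master-0
  interfaces:
  - name: eno1
    macAddress: "00:ef:44:21:e6:a5"
  # Optional static addressing, expressed in NMState syntax.
  networkConfig:
    interfaces:
    - name: eno1
      type: ethernet
      state: up
      ipv4:
        enabled: true
        dhcp: false
        address:
        - ip: 192.168.111.10
          prefix-length: 24
```

The agent create image subcommand would then consume this file together with install-config.yaml to produce the agent ISO.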
Okay, let's say that we have the ISO. Let's move to the environment that will host the deployment of the newly created cluster. We assume we have an isolated network, and we assume we have enough machines available, in this case three nodes. Master zero has been drawn in a different color just to highlight the fact that later it will be identified as the rendezvous host. And since we are in a disconnected environment, the other assumption is that we have a registry with the release payload already mirrored into it. Given that, we take our ISO, we load it onto each node, and we boot them up. So let's now dig into each node and try to understand what happens after the boot process starts. First of all, since it's a live ISO, everything is loaded in memory. The first thing that happens is that the ignition file embedded in the ISO gets applied, and all the necessary files and services are copied and started; they will be fundamental for the subsequent steps. Then, at a very early stage during the boot process, two important things happen. First, if you provided static network configuration, it gets applied; in this case, we assume the IPs have been statically assigned. Secondly, it starts to check whether the release payload is reachable in the environment. Why this? Because if you cannot get the release payload, you cannot start the installation. This check is performed by the agent TUI, a textual user interface, a sort of interface that is visible on the console of the instance. In case of problems, it gives you some additional information to troubleshoot, and you have the ability to tweak the network configuration for a last-minute fix. That's important, because if there was a small misconfiguration, you can fix it without the need to rebuild the ISO. If everything goes well, at this point the agent service of the assisted installer is launched on every node, and this is the worker that performs most of the actual activity. At this point, a special thing happens on master zero. On master zero, which was identified as the rendezvous host, the assisted service is launched. The assisted service is the main orchestrator of the installation; it's the one that takes care of deciding what the next steps are. Meanwhile, the agents have collected hardware information about each node, and they try to discover and register with the assisted service on master zero. Once all the agents have registered, the assisted service starts the pre-flight validations to ensure, for example, that every node has enough memory, enough CPUs, enough disk space, and so on. And eventually it informs the user, so problems can be fixed before moving on with the installation. Let's assume that everything went fine. At this point, a number of simultaneous things happen. First of all, still on the rendezvous host, the cluster bootstrap is launched, and it will host the temporary control plane for the installation. But remember that we are still working in memory; we still need to write the RHCOS image on every instance, so the assisted service issues the command to write the image on every node, and the image is retrieved from the release payload, again fully compatible with disconnected environments. As soon as the image has been written on the nodes, the assisted service issues a command to the other nodes to reboot themselves. So they reboot, and they try to join the control plane following the normal installation steps. If everything goes fine, so if master one and master two were able to join the control plane, the assisted service issues the command for master zero to reboot itself. And again, if everything goes fine, master zero will join the control plane, throw away everything that was in memory, and become just yet another node like the others. From a user point of view, there is another useful subcommand, agent wait-for install-complete, that lets you monitor the progress of the installation. This is an extracted example where you can see the kind of validations that are performed and how they are printed on the console. If everything goes well, it informs you that the installation is completed and gives you the coordinates to access the cluster. So this brings us to the end of my presentation. What's next? We are still working on supporting OKD; there is an excellent community, also here, that is working on this part, and I hope we'll have it available very soon. We are also working on PXE support, so that you will not be limited just to the ISO but can also use PXE for booting the instances. And I would like to acknowledge the original team that, more than one year ago, started to work on this new project. Thank you.
How long do the certificates stay valid? Okay, you asked how long the certificates stay valid. Well, I think the certificates, if I'm not wrong, should be valid for 10 years, but that should be verified. Yeah, other questions? Yes. You mentioned something about ZTP; I've been involved in that, and I'm interested in how this integrates with the whole flow of ZTP. It may be a long-winded question. Yeah, so the question is, sorry, how this works and how it integrates with ZTP. In reality, we aim to maintain full interoperability with ZTP. We start from the install-config and the agent-config; internally, they are converted into ZTP manifests, and when we talk with the assisted service, we talk in terms of ZTP manifests. The idea is that if you have already deployed a cluster with ZTP, you could ideally take your ZTP manifests and apply them with the agent-based installer with almost zero refactoring. How does it know what the internal registry is? Is that something you configured in the demo? Yes, it's configured in the install-config. For disconnected environments, there is a section, the ICSP section, where you can configure how to reach a mirrored registry. The idea is that the registry holds the complete release payload, so that you don't have to reach outside to the internet for the installation. Can you use the internal cluster registry as well? Yes, yes, of course; it depends on whether you put it in the install-config or not. Ah, sorry, the question was how do we know how to reach the registry. Thank you very much. Thank you. I hope what I said after wasn't recorded; please delete what I said.
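As an aside, the mirrored-registry setting referenced in that answer is the imageContentSources (ICSP) section of install-config.yaml. A minimal sketch, assuming a made-up mirror hostname; the source repositories shown are the usual OpenShift release locations:

```yaml
# Hedged sketch of the ICSP section in install-config.yaml;
# the mirror registry hostname and path are illustrative assumptions.
imageContentSources:
- mirrors:
  - mirror.registry.local:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - mirror.registry.local:5000/ocp4/openshift4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
```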
You can find us on the internet... and we are in collaboration with the JUG in Prague. We actually have a meetup group on meetup.com that we currently... Oh, I should have this? Then you should have told me this before now. Yeah, actually, the previous presentation was a little bit different. That's funny. Okay. Is it on? Okay, so let's continue. So on meetup.com we have over 580 members, I think. But after COVID, we are really struggling to get people to come to these meetups in person, and this is why I decided to give this lightning talk. Why I decided to rebrand this and create a new community is basically a few reasons. The first ones I already mentioned, and the other one is that the original Brno Java meetups were happening in a pub, Zivou Palečka, where there is a really bad projector, so many people came for a beer and didn't get much from the talk. So, I'll wait for photos. We are now holding the first meetup on the 28th, with a bunch of changes. First, we are changing the venue: we are moving to the Faculty of Informatics at Masaryk University, on Botanická Street in Brno, where we will have definitely better everything, because it's university standard. We also have the possibility, because I am now in contact with the people at the university, to potentially get even the big lecture rooms if there would eventually be interest from many people. And we also now have a small budget from Red Hat as an official sponsor, so we will have a small after-party with provided catering. So, there will be beer; I need to put this in, you know, bold. Why did I decide to do something like this? I was actually lucky enough to be invited to the so-called Tour de Suisse JUGs at the beginning of May, where I went to three different JUGs in Switzerland in three days, and I had the opportunity to see what a Java community can become. If you are interested, this is jug.ch, the JUG Switzerland; you can find them there. This was only the German-speaking part of Switzerland, which is, I think, five or six cities, and in every city they have already been running this for more than 26 years. If you know how old Java is, then you can guess this was pretty early. Each talk had around 20 to 30 people, and the talks themselves went great. But the after-party after each event, with the provided catering or something else, that kind of networking opportunity... and this photo is not just to show you how long my hair is in the wind: I met really good people there who are interested in our technologies and our products. I stayed in contact with them, and we are still continuously discussing different possibilities for them to come here, for me to come there again, et cetera. So this kind of networking opportunity is something that I want to create for people here in Brno, to really grow the community of Java developers around the meetup. And if you are wondering why this works so nicely for them: again, they have a lot of years on top of us. If you go to this link, you will get a very nice presentation that I was playing before my talks, where they list around 30 to 40 companies that are supporting them in those five cities, which means they have a really nice budget. In other words, if you are working for a company that would be interested in supporting local meetups, and I see very many Red Hat faces here, so that's you: please ask. I am now, for instance, in talks with Oracle to get them presenting as well, and also to provide some kind of support. This will take time, but these are my goals. So this is what we are trying to achieve. I already mentioned the after-party, and, as I also mentioned, there is already an ongoing collaboration with CZJUG, which is in Prague, by chance also run by a Slovak guy, because we Slovaks are active in the Czech Republic, it seems. So I think we would definitely be able to get some similar type of tour de Czech JUGs in the future, if there is someone able to do this for us. And I already know a few speakers, from my personal contacts, who would be willing to do this kind of thing, like one day Prague, one day Brno, or the other way around. And potentially we will get more JUGs in different cities, who knows? Okay. And yes, what we need right now: I think that corona brought all of us into more cozy home environments, and I think this is a big reason why people are, thank you, no longer willing to come physically to a meetup. But for me, at least personally, it is a different type of experience when you can talk to people and exchange experiences. If you just watch everything online, yes, you will learn the stuff you wanted to learn, but it's the experience of the after-party where you make contacts, where you can learn about new possibilities for taking whatever you are doing somewhere else. Definitely, if we are going to continue this, I will need to work more on the sponsors, because it's expensive if you are paying for everything. But so far so good; like, 15 people is okay.
50 people would be a problem for me right now. Because we are moving to the faculty, we will take a pause during the summer, but starting from September I definitely think I can do one every month until the end of the year, so four talks from September to December. Depending on how many speakers I can get, I would like to have a meetup at least every second month. And because we are now moving to the faculty, there is also the possibility to do this in hybrid mode, so we can actually stream it online and make recordings. And I already mentioned the collaboration with CZJUG, so we can skip that. Okay, and where to find us? This is the important slide. We now have Twitter, Mastodon, and definitely meetup.com, where we will keep scheduling new meetups, and basically that's it. Do you have any questions or ideas? Also, if you would be interested in collaborating on organizing these kinds of events, just reach out to me, because currently it's only me and a few other people working on it, and everything still needs to go through me. So if I can offload something and people are interested, then please just ask. Any questions? Then I think I'm finished. Thank you. You got some sneak peeks. Oh yeah, I forgot to include the sneak peek of the upcoming talks. So I can go into meetup.com... one, yeah, I know, it's the DevConf CZJUG one, and there are no 2023... I had to do it on the second monitor; it's hard when I don't see it, so I will put it here. You will be looking into a very small terminal now. So the next talk, and this is pretty small, the next talk will be given by Michael Winkler: OpenShift Serverless in Action. So basically, how to do Java in serverless applications. If you are interested, here it is. Okay, then I'm finished. Thank you for your attention.