Hello. Checking, checking. Sound test, is it OK? One, two, three. OK, hi. Good morning, everybody. My name is Benny Powers. I'm from Jerusalem, Israel, and in my day job at Red Hat I work on the digital experiences team under the marketing department, specializing in design systems and web components. So today I want to talk to you about our project PatternFly Elements, which is our upstream design system, and we'll talk a little bit about the Red Hat Design System, which is the downstream project. What that means, we'll get into in detail a little later on. So let's take a look.

So what is a web component, if you've never heard of web components before? Already stretching back to the 90s, UI toolkits like Delphi had modeled their UI systems on components. The idea of a component is like a button: a button has text, it maybe has an icon, it has a color, maybe it has a state. So components are where the state and the style and the design and the functionality and the behavior are all encapsulated inside the component, and you can just drop it in, whether you have a drag-and-drop editor or you put it in your XML file, whatever. Reusable objects with consistent UIs and APIs.

The web lacked its own component model until about 2015, when a group of web browser engineers and web developers got together at Google and tried to plan out what was going to be the future of web development. The browser engineers asked the developers, what do you need in order to make better applications? They said, well, we need components. And the browser engineers said, well, how can we do this in a way that works for the web and with the web? That project eventually became the Polymer Project at Google, which was the initial attempt at defining a component model for the web, and it later went on to become the web component standards, which are standardized and shipped by every browser today.

So there are four core technologies in web components. We have custom elements. A custom element is basically an HTML element that is defined by your JavaScript class. The video element has certain behaviors: it can have controls, it can have slots for sources and things like that. You can write your own class in JavaScript to say, what's a fancy video? What's a fancy input? And then you can associate that with a tag name, and any time that tag name appears in your document, the web browser will instantiate that object on the DOM with your class. That's custom elements.

Shadow DOM is sort of the secret sauce of web components. That's what gives you the encapsulation. A shadow root is like an isolated subtree of the document that's visible on the screen. So don't get confused: it's not invisible, but it is isolated from the rest of the DOM, which means that the styles you apply inside your shadow root don't leak out. Maybe you remember what it was like back with jQuery widgets, before they even called them components. You'd put some fancy jQuery button widget on your page, and out of nowhere it would break some other part of your page because the class names conflicted or the IDs conflicted. If you put those class names or IDs in your shadow root, they can't conflict with the rest of the document.

And we have the template element, which allows us to very efficiently copy pieces of DOM from one place to another. I define my template once, and then the browser can very, very efficiently stamp it hundreds of times in the same document without having to parse that HTML again.
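As a rough sketch of how those three APIs fit together — the tag name, class, and template here are invented for illustration, not taken from any Red Hat library:

```html
<!-- Template: parsed once, stamped cheaply as many times as needed. -->
<template id="fancy-greeting-template">
  <style>
    /* Styles inside the shadow root stay inside it. */
    p { font-weight: bold; }
  </style>
  <p>Hello, <slot>friend</slot>!</p>
</template>

<fancy-greeting>web components</fancy-greeting>

<script type="module">
  const template = document.getElementById('fancy-greeting-template');

  class FancyGreeting extends HTMLElement {
    constructor() {
      super();
      // Shadow DOM: an isolated subtree; page CSS and IDs cannot collide with it.
      const root = this.attachShadow({ mode: 'open' });
      root.appendChild(template.content.cloneNode(true));
    }
  }

  // Custom elements: associate the tag name with the class, so the browser
  // upgrades every <fancy-greeting> in the document using this class.
  customElements.define('fancy-greeting', FancyGreeting);
</script>
```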
And the way that web components are distributed today is through JavaScript modules, sometimes called ECMAScript modules, the standard in-built language module system that shipped in 2015; we're just now seeing more widespread adoption of it in the rest of the JavaScript ecosystem.

So what can you build with web components? Well, you can actually build any sort of web project with web components. You can build whole complicated single-page dashboard applications. You can build what they call today multi-page applications, where you leverage the browser's in-built navigation and things like that. You can build reusable widgets, like a video player or maybe a comment form or something like that. You can build design systems: a Red Hat button, a Stripe payment widget. Actually, just two days ago Stripe, the payments company, released the new version of their widget, and it's a web component. And you can progressively enhance your pages: you can put your content in HTML, wrap parts of it in web components, and when JavaScript comes online, those parts will be upgraded by the web component that wraps them.

So, the advantages of web components. One is performance: instead of running a JavaScript framework's component runtime, which your users have to download onto their low-range Android devices — particularly in the global south, where most people have low-powered Android devices — instead of running your JavaScript code or your framework author's JavaScript code, the component model is written in Rust or C++ by browser engineers. Another benefit is interoperability. Because web components are HTML, they work anywhere that HTML works. For example, one time I had to solve a problem with a Leaflet map inside of a Vue application. We needed to add some interactivity to the tooltip inside the Leaflet map, but the problem was the framework only allowed us to send a string of HTML. We couldn't interact with the DOM nodes, we couldn't get a reference, we couldn't add it to our framework. So we wrapped the content in a web component, registered that web component with the browser, and there came our interactivity.

Another benefit is future-proofing. Because web components are standards — they're standardized in the HTML and DOM specifications — your app is going to work years from now. It's like the Space Jam website all over again: it's tables all the way down. Well, your app can be web components all the way down. And instead of having to go through that framework churn every time, you have the option to update your implementing library or framework, but you don't have to. You could implement a new part of the page with a new framework and still have it interact, via the HTML and DOM APIs, with your old components. And for me, especially in a large organization, the main benefit is knowledge transfer. Train up web developers, don't train up framework developers, because that framework might be popular today, but it's not going to be around forever. Teach your developers how to work with the web the way it is.

OK, so sometimes people say, well, web components, nobody really uses them, they're not really a thing. Well, I'm here to tell you that, yes, in fact, web components are a thing today. I feel like I need to get rid of the Reddit logo on this slide and maybe replace it with another one this week — it's not such a thing anymore. But yes, web components are in use all over the web.
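To make the progressive-enhancement point above concrete, here is a minimal sketch (the tag name and behavior are hypothetical): the content is plain HTML and readable immediately, and the wrapping element only adds behavior once its definition loads.

```html
<!-- Ordinary HTML inside: it renders, indexes, and reads aloud even with JavaScript off. -->
<read-more-toggle>
  <p>Red Hat Enterprise Linux is the world's leading enterprise Linux platform …</p>
</read-more-toggle>

<script type="module">
  class ReadMoreToggle extends HTMLElement {
    connectedCallback() {
      // Behavior arrives only when (and if) the script does; until then the
      // element is just an unknown tag wrapping perfectly good content.
      const button = document.createElement('button');
      button.textContent = 'Read more';
      button.addEventListener('click', () =>
        this.querySelector('p')?.classList.toggle('expanded')); // styling elided
      this.append(button);
    }
  }
  customElements.define('read-more-toggle', ReadMoreToggle);
</script>
```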
If you look at NPM download stats, you might get one idea of what web development is all about. But if you look at the page-load usage stats that the Chrome team publishes, you'll see that the use of web component APIs has been consistently skyrocketing for years. More and more companies, more and more small startups are adopting web components, so I encourage you to do the same. Web components are here to stay. And in actual fact, this presentation is made using web components: all these slides are custom elements.

So that brings us to design systems at Red Hat. Let's have a brief overview of the history of how design systems came to be at Red Hat, some of the problems we faced, and how we decided to try and solve them. Back in 2012 there was really a design free-for-all. We had multiple different projects, multiple different properties, and they all had their own designers, design teams, design philosophies, design languages. Nothing was really consistent from a design perspective, and certainly not from a technical perspective. So that's why in 2014 the first version of the PatternFly library was developed. It was a proof of concept built using Bootstrap. I'm glossing over a lot of the history here, so feel free to correct me in the chat, I don't mind. This was the first step at Red Hat toward aligning on a design system, and the name PatternFly was chosen as sort of a non-corporate, or corporate-agnostic, name that a community could develop around.

A few years later, the Red Hat Elements web components project started as the first attempt at a component-based design system at Red Hat. It was a parallel effort to PatternFly. In 2018, the PatternFly team decided to double down on the React framework. That was a sensible decision at the time, because most of the users they were interacting with regularly — by users I mean consumers of the library, like developers of pages and apps — most of those users were using React. So it made sense to focus on the framework that most people were using, even if that did mean, yes, cutting out use cases that were not using React. It was React or nothing: if you wanted PatternFly, it had to be React. So the PatternFly React project elaborated on the Bootstrap MVP. It was mostly focused on application development — dashboards, SPAs, that kind of thing — but, like we said, only for React.

Shortly thereafter, the first edition of PatternFly Elements came out. This was an initiative in the marketing department and digital experience, for several reasons. First of all, we weren't using React, and we weren't going to use React. We weren't building SPAs; we were building landing pages for marketing sites and that kind of thing, where page-loading performance is extremely important. So the first version of PFE was a parallel effort with divergent UI needs: we weren't building toward web app development, we were building toward web page development, so the UI patterns and the UI idioms were slightly different. This created a situation where basically you had PatternFly and PFE, and they were very similar, but different enough to be confusing. That's why in 2022 we released the second version of PatternFly Elements, and our watchword for that version was one-to-one.
We wanted the second version of PFE to be a one-to-one representation of the design principles of the PatternFly library, such that the end user wouldn't necessarily be able to tell whether they were looking at a React app or an HTML page.

So yeah, the history of PatternFly Elements. The original PFE components were inspired by PatternFly Core, but they were different enough to be their own thing, and like we said, the technical needs were different, so it was kind of a different beast. PFE 2.0 brought the best of PatternFly design to more use cases via spec-standard HTML. So instead of importing Card from PatternFly React, you could load up a pf-card and just drop it on your HTML page. Our goal was a one-to-one representation: it's PatternFly design, but standardized for the web, not just for a particular web framework.

So let's take a look at a few examples of PatternFly components. Here's an example of a PatternFly Elements card. You see that we have this optional rounded Boolean attribute, where you can say whether the card is rounded or not. There are slots for the header content. Slot is a web component API: the web component author can say, here's where the header goes, here's where the footer goes, and then the user can specify which slot a particular element goes into, and you can put multiple elements in the same slot. The paragraph here goes into the anonymous slot — it doesn't have a slot attribute, so it goes into the body of the card. And then we have some PatternFly buttons that slot into the footer, like you would have in a confirmation dialog. And this is what that looks like: that's actually a real instance of the PatternFly card loaded up in the browser right there. We have our buttons here; they don't do anything, but same thing.

We also have an accordion element. That's a popular UI pattern where you have multiple disclosures that can open and close and hide and show content. So here we have the four freedoms of the Free Software Foundation, and this is what it looks like when you load it up on the page. There's an optional single attribute you can put there so that it only shows one panel at a time, so you can opt into that behavior or not. Some weird problems with the mouse, but that's fine.

We have a modal element, a modal dialog, which will pop up some UI onto the screen. And this is a cool one, because you can associate the modal with its trigger: see, the trigger attribute points to the ID of a trigger button that lives somewhere else on the page, somewhere else in the DOM. And you, as the developer, don't have to write any JavaScript to hook those two things up together. You just have to say, this is the button that triggers the modal, here's the modal, and away we go. Let's see if I can overcome my input problems and load up the modal. Not so much. Oh, something happened. Well, you're going to have to take my word for it. This is why we don't live code, children. You know what I can do? It looks like that. This is what it looks like when you open and close the modal. Great.

OK, so how do we make PatternFly Elements? Now, in this section, the important thing for you to remember is that what I'm about to tell you about how we make the elements has no effect on how you use the elements. Our internal technical decisions don't limit you in terms of your options.
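Pulling together the card and modal examples described above, the markup looks roughly like this. Tag, attribute, and ID names follow the talk's description and are approximate; check the PatternFly Elements documentation for the exact current API.

```html
<!-- Card with slotted header, anonymous body content, and footer buttons. -->
<pf-card rounded>
  <h2 slot="header">Confirm your changes</h2>
  <p>No slot attribute here, so this paragraph lands in the card body.</p>
  <pf-button slot="footer">Save</pf-button>
  <pf-button slot="footer">Cancel</pf-button>
</pf-card>

<!-- Modal wired to its trigger declaratively: no page-author JavaScript required. -->
<pf-button id="usage-trigger">Open dialog</pf-button>
<pf-modal trigger="usage-trigger">
  <h2 slot="header">Hello from the modal</h2>
  <p>The trigger attribute points at the ID of the button that opens this dialog.</p>
</pf-modal>
```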
So we build our elements using TypeScript, standard HTML, standard CSS, the Lit web components framework — which is sort of the spiritual successor to the original Polymer project — and we use the PatternFly upstream design tokens. But just because we chose to use TypeScript and Lit, you don't have to. You can use PatternFly Elements in a React app, in a Vue app, in a WordPress site, whatever you want — in a Lit application if you want to.

Here's a typical example of what it might look like to author one of our components. You give it a tag name, pf-tile, and you extend from LitElement, which is our framework base class. If you squint — if you've ever written a React component or a Vue component or a Svelte component before — you can see the outline of a typical JavaScript framework component. We have two reactive properties: selected, which is a Boolean, and stacked, which is an enum of strings, where you can say what the size is. This reflect option on the property means that setting the stacked property on the DOM object will reflect that state to the attribute on the HTML element, and that means you can now select with CSS for a tile that's stacked, with larger or smaller or whatever. And then here we have our HTML template, where you can see we're reacting to changes in the selected state and updating an internal class.

Now, this looks like we're doing string concatenation, and so a lot of developers will say, oh, it's string concatenation, it's not performant. But don't make that mistake. This is a function which is called on this template and creates a template object in memory, with references to the exact places in the DOM and to all of the interpolations. So when I update the state of this element, it will only change the contents of this attribute. It doesn't have to re-render the entire DOM tree. There's no diffing, there's no VDOM, there's no memory overhead. It's just direct interaction with the DOM APIs, but with a very nice developer experience. Again, although PatternFly Elements are written using Lit, you don't have to use Lit. You can use them in any framework, or in no framework.

So how do you get PatternFly Elements? If you're using NPM and the Node ecosystem, you can just npm install @patternfly/elements, use your bundlers, and away you go. But you don't have to do that. I've spoken with teams who are building apps in Django or with Java, or just writing HTML pages in Drupal. They don't have the time, the energy, the inclination, or the budget to mess with heavy, complicated JavaScript tooling — and really, who would want to? And I say that as a full-time JavaScript developer. So if you want to use these components, you can just drop in a link to a CDN, load up all the component definitions, write some HTML, and you're off to the races. Of course, the browser will dedupe all of these module references, so if you have isolated components in your CMS, you can do this a hundred times and there's basically no performance penalty. This pairs extremely well with a relatively new browser standard, which just landed in all three major engines this month, called import maps. If you want to import from what's called a bare module specifier, instead of importing from a URL, you can just say: I want this file in this package. And up in the head of your document, you can say that that specifier maps to your CDN, and you're good to go.
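A hedged reconstruction of the authoring example described above: the property names follow the talk, the decorator and template usage follow standard Lit conventions, and the real pf-tile implementation may differ in its details.

```ts
import { LitElement, html } from 'lit';
import { customElement, property } from 'lit/decorators.js';
import { classMap } from 'lit/directives/class-map.js';

@customElement('pf-tile')
export class PfTile extends LitElement {
  /** Boolean reactive property. */
  @property({ type: Boolean }) selected = false;

  /** String enum; `reflect` mirrors the DOM property back onto the HTML
      attribute, so CSS can select on pf-tile[stacked="lg"]. */
  @property({ reflect: true }) stacked?: 'md' | 'lg';

  render() {
    // Tagged template literal: parsed once into a template object, after which
    // updates touch only the interpolated parts — no diffing, no VDOM.
    return html`
      <div id="container" class=${classMap({ selected: this.selected })}>
        <slot></slot>
      </div>
    `;
  }
}
```

And the no-tooling route: an import map in the head pointing the bare specifier at a CDN. The CDN URL below is a placeholder, and the exact module path may differ between releases.

```html
<script type="importmap">
  {
    "imports": {
      "@patternfly/elements/": "https://cdn.example.com/@patternfly/elements/"
    }
  }
</script>

<!-- Bare specifiers now resolve against the CDN; repeated imports are deduped by the browser. -->
<script type="module">
  import '@patternfly/elements/pf-card/pf-card.js';
</script>

<pf-card>
  <h2 slot="header">Hello</h2>
  <p>Works anywhere HTML works.</p>
</pf-card>
```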
Our goal for you is that you should be able to write more HTML and less JavaScript. You should reduce your tooling burden. You should practice standards-first development, for knowledge transfer and future-proofing. And although frameworks are allowed, they aren't required. Anywhere that HTML works, our components work.

So, I promised some performance numbers — this is where you get your money's worth out of the presentation — so let's take a look. I did some comparisons. We're going to have problems with the mouse again, I think. One second. OK, so here's the demo. It's just a card with some labels, a tooltip, and some switches which toggle state on the card. I don't know if you can see that the corners are rounding and unrounding, and you can make a contact card. That's the demo. I implemented this demo using web components and using PatternFly React with CRA, Create React App.

On the left, we have PatternFly React. Just that demo: two and a half megabytes of JavaScript I had to download just to render the page, just to render this text. Until you downloaded and ran those two and a half megabytes of JavaScript, you wouldn't know that we were talking about a React design system. No script? Not allowed. The web components version: 222 kilobytes of JavaScript downloaded, and it all works with no script, because all the content is in HTML. Both of these are unbundled, no optimizations, so if we wanted we could probably go in and optimize both. It's a quick and dirty comparison, but it gives you the idea.

Looking at view source: with Create React App, the only content in this application is div id equals root. That's what the user sees when they load up the page if JavaScript is off, or if JavaScript is on but the script somehow didn't download or was corrupted along the way. Div id root. Over here, with no script, you still get all the text. Search engines can read it, screen readers can read it, all your important content is still available.

What about tooling burden? What about the developer experience? What about onboarding new developers, junior developers, into your project? The package-lock.json for the React version of this application — you can't even read it on the screen — is 30,168 lines, to render a card and a switch and a tooltip. I wanted to print the list of packages instead of the package lock; it didn't fit in the terminal even after I decreased the font size five times. How do we manage the packages in the HTML version? This import map, which I generated by specifying which packages and files I wanted — what's that, like 10 or 15 lines of JSON that you can just plop in your page. And if I didn't want to do that, I could just load up the bundle from the CDN, and one line of HTML would do it.

So what's the future of PatternFly Elements? Well, first of all, we want to build more components. We haven't built out the full PatternFly library yet. Contributions are very much welcome, and I invite you all to come check out our issue tracker, or just go look at the list on patternfly.org and say, hey, we could use this one over there. We want to build React wrappers for these components. I said earlier that web components work with all the frameworks, and that is true. But unfortunately React — how can I put this delicately — does not yet have full support for the HTML standard.
So in order to make it easier for React developers to use web components, what we can do is automatically generate React wrappers that turn a pf-card into a PfCard React object, which is easier to use from a React perspective.

We're also experimenting now with something called declarative shadow DOM. This is a new browser standard. It's supported today in Chrome and WebKit, and it's gaining interest at Mozilla, so we hope they'll implement it soon — go and vote on the Bugzilla page. What declarative shadow DOM lets you do is calculate the state of your web component on the server side and stamp the shadow DOM onto the HTML page. Before declarative shadow DOM, you had to run JavaScript in order to create that shadow root; but now, if I have an SSR situation, I can just print the contents of my shadow root to the page in HTML, and with JavaScript disabled, everything will render the way the developer intended. Incredible technology, so we're working on different projects to get that going. Declarative shadow DOM also unlocks nicer APIs: whereas today, if we had an imaginary pf-link component that was supposed to wrap an anchor tag, we would want the developer to wrap a regular a tag in our pf-link as a decorator, with declarative shadow DOM we could just expose those APIs directly on the element.

Okay, how are we doing for time? Okay, so that brings us to the Red Hat Design System. That was PatternFly Elements — that's sort of the upstream. PatternFly Elements is like the Fedora to the Red Hat Design System's RHEL, okay? For those who have been around Red Hat for a long time, they don't need me to answer the next question — why do you have two design systems? — because that already makes sense: Fedora, you've got a RHEL, it's great, upstream, downstream, it's perfect. But let's dive in a little bit more. Why do we have two design systems?

PatternFly Elements is upstream, which means it's open to the community; it addresses community concerns and not only the concerns of the marketing department. It's unbranded, so you can put it in any app and it will work as an unbranded experience. And it's multi-purpose: it's geared towards both web apps and pages and things like that, with a particular emphasis on app development. The Red Hat Design System is our downstream, which means that it takes the core tools and concepts and shared code from PatternFly Elements and implements a specific use case with them — in particular, use cases related to the marketing department and pages like redhat.com and product trials and the customer portal and things like that. It's branded, so it has all of the Red Hat brand standards built in: you don't have to load up a special PatternFly Red Hat theme in order to make it look like Red Hat stuff; all the Red Hat design tokens are built in. And it's primarily geared towards web pages as opposed to web apps, although you can use our components for web apps as well — there's nothing really stopping you.

So, a couple of case studies. Red Hat product trials: it's a multi-page app that's wrapped in a React SPA, with micro frontends built in, and it touches a lot of different areas, because it has to talk to all these different products in Red Hat while also talking to the marketing side. It's kind of a halfway house. So in Red Hat product trials, we built a web component-based micro frontend system.
That web component part was really super critical for us, because that's what let us build this out at scale, right? Without having to burden the implementers of particular instances with our JavaScript tooling and our framework and keeping their package locks up to date and whatnot, we could just tell them: load up the script and use this tag name, and you're good to go. So that was extremely helpful. As well, the shadow DOM gives us that encapsulation, so different teams can work together on the same page without having to worry about stepping on each other's toes. Red Hat product trials is linked from redhat.com, which uses the Red Hat Design System. It itself uses the Red Hat Design System elements as well as PatternFly Elements together, and the React framework, which manages the entire SPA, can inject its React-side state into the web components via JavaScript DOM properties or HTML attributes, and that's how they communicate with one another.

Another case study we've had is personalization. You land on a marketing page, and the soon-to-be-outlawed third-party cookies tell the Red Hat marketing department a lot about who you are and what kind of things you're interested in. So then the analytics scripts can come in and present personalized content to tell you about something you're most interested in. Well, in order for those teams to do their job — again, we don't ever want to have to say the words "npm install" to the people writing the analytics content. Those are not words that I want to write in a direct message to anyone, really. Lo aleinu — heaven forbid. So instead, we can just tell them: these are the tag names, here are the design standards, here are the docs pages with the guidelines on how to build them up. And then they can go and build those personalized experiences and inject them live onto the page, without having to worry about complicated toolchains or frameworks — just HTML.

So that's our presentation. I want to give a special thanks to all the PatternFly Elements and Red Hat Design System contributors, whom you can go and scroll through and hover over in the slides afterwards, and especially to Yuval for help with the graphics. Thank you very much. And we have a little bit of time for questions.

[Audience question about routing and navigation.] Yeah, so there are a lot of different ways; it's basically up to you. Web components are the UI layer, the component layer, right? So everything else is up to you. My preference is: use the browser. Just put a link. That's the best. It works with screen readers, it works for everyone, everyone knows what to expect. There's a new browser API coming out now called View Transitions, which allows you to do more complicated things like animating between page transitions and whatnot, and when that becomes widespread, it's basically going to kill the single-page application — at least, I hope it'll kill the single-page application. But let's say you're stuck with a classic framework or something like that; then you can use your framework's router and just render some web components in there. If you're a little bit more keen, a little bit more advanced, and you build your SPA using web component libraries — of which there are dozens; you don't have to use Lit; there are functional ones, hooks, a million different things we can talk about afterwards — then you can use that framework's router, or a different framework's router, more declarative or more imperative. I prefer the PWA Helpers router because it's very simple. It just gives you a callback:
location changed? Do this. That's my favorite. More questions? Do we have any questions from the chat? Yes?

So, is there a plan to make web components the standard, or is it still going to be a side technology? I don't know. PatternFly Elements is a downstream project from PatternFly Core, so the address for that question is the PatternFly Core team. I can tell you what I hope and what I dream, but that's less helpful. Sure.

Yeah — how did I build the presentation using web components? Well, I wrote a plugin for Eleventy. Eleventy is a static site generator: you download the plugin, you can put front matter in your slides, you write a deck with this Eleventy component, you say here's the name of the thing, whatever, and then you just write your slides as Markdown files. In fact, like two minutes before the presentation I was doing last-minute changes, like you do. So there are all the slides. No more questions? So sad.

So if you want to get involved, find us on GitHub: github.com/patternfly/patternfly-elements. Or if you want to look at the Red Hat Design System, it's also open source. We work out in the open; all of our roadmaps are available and whatnot. Dive in, give it a try. We have a backlog; there are issues that you can pick up. There are some simple components, there are some more complicated components. I think we have five more minutes. Yeah.

How does mixing PatternFly React components with PatternFly Elements components work? Okay, yeah, so it would work — I'll give the answer in two parts: there's how it works today and there's how we want it to work tomorrow. How it works today is the same way that using any web component in React works. So if you want to use Shoelace, which is a popular open source web component design system, you could just pop an sl-button inside your React app. Because — to put it delicately — React has poor support for the HTML standards, there are some annoying workarounds you have to do with refs and attaching event listeners and setting properties and things like that if you want to do more complicated things. If you use Preact, it's not a problem, okay? Or — and this is what we want to do in the future — you can generate React wrappers for the web components. What that does is take a file called the custom elements manifest, which is a JSON manifest of all of the APIs on your custom element: attributes, slots, CSS properties, methods, events, whatever. It takes all that information and code-generates a React component which can attach the event listeners and set the properties for you, and all that. And if you do that — and we hope to publish those soon — then as far as the React user is concerned, it looks like they're using a React component. It's just that, under the hood, they're using a more performant and standard HTML element.

So it sounds like it will generate React components — so will it be a React component, or will it be...? So it's important to remember that React components are abstracted away from the DOM, okay? What you'll have is a React component which represents the web component, and then when React goes and prints its virtual DOM to the page, it will print your web component there. What the React component will do is abstract away React's poor support for the DOM. Other frameworks don't have this problem.
In Angular, for example, you can just attach an event listener to the web component. In Svelte, you can just set a property on the web component. In Preact, you can just do those things. But React has very strong opinions about standards and about what you should be allowed and not allowed to do with HTML, so we have to implement workarounds for React.

Okay, so if we have like two more minutes: there were some concerns about accessibility in web components, and part of that is being addressed right now with a new standard called form-associated custom elements. If I still have the demo open here — yeah, here. So if we inspect this switch component, you'll notice that the switch is nested inside of a label, the same way that you would nest a native input inside of a label. What a form-associated custom element means is that the browser, when it submits that form, knows that this label is associated with my custom switch. And there are APIs to set the value on that switch, so that when I submit the form over HTTP, it will read the value from my web component, according to my web component's code. So I can write a special color picker and then submit that value with an HTML label and form. That's called form-associated custom elements.

There's also work being done right now on what's called cross-root ARIA, which asks: how do I associate a datalist which lives in one shadow root with an input which lives in another shadow root over there? So work is proceeding on that. In the meantime, there are various workarounds we can do, like copying nodes from one place to another or moving ARIA attributes or something like that. But the idea is that the web component user will not have to do ARIA stuff; the web component author will be the one in charge of implementing the accessibility. We try our best on the design systems team to do that — we have full-time accessibility people. Thank you very much for attending the talk. I hope it was interesting, and I'm available for questions afterwards.

Okay, thank you for coming. I'm really happy that you were able to find the correct route through the dungeon and reach the final room with the final boss. So welcome. I had a crazy idea a few months ago for a session, so here we are. My idea was that DevConf is amazing in that we have these very in-depth, thorough technical sessions, but on Sunday, after the party, especially in the morning, I think we could use something more chill and relaxing, right? So that's what we are going to do today. And unlike the typical talk, where the presenter prepares slides and everything up front and delivers it, today we are going to do the exact opposite. Actually, this is my only slide — there is another one, but it's mostly for me. There is only one slide, and we are going to create the content today. And then after the presentation we'll share it, so hopefully we'll create something amazing today.
And my idea for this session was that I always love these discussions with people who are working in the field — like what tools they are using, or maybe some new fascinating command in Git that you discovered that saves you a lot of time. And yeah, we can read all about this in blog posts, online, or in videos. But since we are here at the conference, maybe it would be nice to do it in person — not just one-on-one, but here as a group. This is going to be interactive. Oh yeah, and I should mention we are going to use an online tool for that. You can use the Wi-Fi — phones, laptops, smartwatches, whatever you want. We are going to do it with a tool called menti.com. I tried Slido, but it was too expensive, so menti.com — it was free. And we are going to do word clouds and these free-text inputs, and it will show up here, and then we can discuss every slide — what you are using and what you like. So you can participate and speak, or you don't need to speak, but I would really appreciate it if you filled in the questions. So let me switch to the other tab. Oh yeah, it's open right away. So please go to menti.com and type in 3675279. I hope you can see it in the back. Maybe I can try to zoom it, if it works. Okay, it doesn't do anything. Yeah, so if you can't see it, I can maybe write it down, but it looks like you can see it.

So yeah, first question, as a starter: what language are you enjoying right now? Okay, thank you. So we have lots of Python lovers here. Okay, amazing. And also, yeah, actually a very diverse audience. That's amazing; I think we can really learn from each other. So, any particular feature you like about the language of your choice? Okay, so you just love them. Okay, I get it. For example, in our team we are using Python as well, and we started using the walrus operator. That was quite amazing — someone introduced it and we all had to learn what it actually does.

Okay, so that was the starter, and I will try to control it from my phone, because that's what they said is the easiest. So let's start with something more challenging. And — okay, come on. Okay, it doesn't work. Oh, perfect. Yeah, so I had to click it a second time. So, I'm actually teaching Git at the university, so Git is my most favorite tool, so I had to make the second question about Git. What's your favorite Git command or Git option? Come on. Okay. Okay, so — are you doing git push --force on the upstream main branch? Who does that? Get out. Oh, lovely, so many. Oh, I can see rebase, cherry-pick. Yeah, very nice. Oh, someone is using worktree. So how are you using worktree? I never found a use case for that, to be honest. So yeah, please.

Like source files. Okay. Okay, so it's like worktree was designed for the kernel. Thank you. So just for the recording, the answer was that the developers are using worktree in the kernel to work on multiple branches in parallel, and also multiple repositories, for the kernel. And for everyone else who doesn't know worktree: worktree allows you to check out a Git branch in a new directory, a separate tree structure, so that you can literally see files from two branches at the same time. So yeah, it's neat, but as I said, I can usually just switch branches, right? But if you need to see a file from multiple branches at the same time, it can be useful. Okay, so what else do we have here? I can see bisect, rebase — rebase -i is the biggest one, so thank you. Yeah, that's also my most favorite command in Git. So, does anything stand out for anyone?
In Git? Yeah. Okay, thank you. Sorry, Martin — just for the recording, real quick: the comment from the audience was a tool called thefuck, or something like that — you use the F-word to correct the previous command. So if Git outputs something very wrong, you just run one command, and it usually guesses correctly what you wanted to do. Martin? Okay, thank you, Martin. Yeah, that's a very good point, and it's true, it's not up there. So: Git has reflog. That's like a log of what you are doing with your Git repo. It tracks all the references, everything. So if you screw something up, you can always look it up there. The output is a little bit complicated, so maybe use ChatGPT to explain it to you. But you can actually really find your commits that were lost or something, and check them out. Thank you, Martin, that's a very good point.

Okay, so lots of people are using Git — perfect. So let's go to the next one. Okay, I'll probably use this, because on the phone it's the next slide. So the next one is — we touched on Git, so let's do shell aliases, or even shell functions, but I don't think those would fit on the slide. Do you have some favorite shell aliases that you are using? Okay, I can see ll — that's even the standard one that should be in Bash by default. But here, maybe also explain what the alias does, because if it's just one or two letters it would be hard to guess. So, anyone using anything fancy? Okay, ls, la. For example, I am trying to use a lot of these: instead of writing git, I write just g; instead of writing oc, I just write o — all the commands I use daily, I just shorten them to one letter. That's very useful for me. Okay, I'll try to hide the interface thing. Okay, so yeah, that long alias looks like it's related to Java. Oh, pip and shell, yeah, that's very nice. So what does the git "work in progress" alias do? Sort? Ah, sorted. Oh yeah, wow, that's brilliant, thank you. So what the git work-in-progress alias does is print out your branches sorted by the ones you worked on most recently. And I agree that that's a big problem, because if we have hundreds of branches and they are sorted alphabetically, that's not very useful — we usually work on a branch once and then forget it and forget to clean it up. Yeah, that's brilliant. Okay, what else? Oh, v equals vim, nice. And vi equals nvim — so is it transitive? Okay, auto-VPN, so probably automatically connecting to the VPN, even filling in the credentials. Okay, no one wants to confess to that one. Okay, so can we scroll this? Oh yeah, we can. Okay, that's the... thank you. Okay, perfect. Okay, any more interesting shell aliases or shell functions? Okay, then maybe we can go to the next one.

So what else? Okay, how about tools — just any command-line tools, graphical tools, services, whatever you are using that makes your life easier and maybe other people should know about. Grafana, nice. Okay, I don't know many of these. So is it some kind of new interface for Git? Okay. Okay, thank you. So just for the recording, it's a Tk-based graphical interface for Git. But I also saw tig in the list — that's actually the Git interface I'm using. It's command-line based, it's very efficient, lots of shortcuts, and it's really perfect. Okay, I can see tmux. Okay, k9s — what's k9s? Oh wow, finally someone did that. Okay, thank you. So just for the recording, it's a terminal user interface for Kubernetes. Okay, thank you for sharing. Okay, what else do we have here? So, anyone want to share about the tool of your choice that you put here?
It allows you to expose your data — it adds API endpoints for it, so you can just get the data and handle everything from there. And you can do data transformations or web calls or whatever you want from the UI, and it's something that really saves us a lot of time. It's one of the tools that we actually use. — Sorry, what was the name of the tool? There it is. So is it in the list? Right, left of the comment. Oh, this one? Yes. Okay, thank you. So just for the recording, the tool is this one: it's Directus, and it's a graphical user interface for databases which allows you to do really fascinating things. And is it also for relational databases? Yes. Wow, fascinating. So maybe we should try that in our project, because we have been struggling with our database recently — like, quite a bit. Okay, thank you. Okay, and Python wins it again — PyCharm is the biggest. Okay, any more tools to share? Anyone?

Okay, thank you. So the tool was fzf, and it's a fuzzy finder: you can make mistakes and it will still find the directories and files efficiently. And does it only work for files and directories, or anything? Oh, for anything. Yeah. Wow, amazing. So can it do anything — for example, even commands, with Ctrl-R, with history? Wow, okay, it's brilliant. Yeah, I'm actually using autojump for finding files, and yeah, it's fuzzy and it works quite well, but it's only for files. So yeah, I'll definitely check it out. Thank you. Okay, interesting. So, lots of interesting tools. Okay, so let's go to the next one.

Okay, and I believe this is the last one. So how are we doing with time? Okay, we are actually burning through it quicker than I thought. So: if this session didn't exist, where are you getting your news, or where do you go to learn something? Okay, Reddit is winning. So how are you guys feeling about what's happening with Reddit recently? Oh, yeah. Yeah. Yeah, that's true. So hopefully we can still access the information in the web archives. But you are right that it's amazing, because if you're searching for something you get some Reddit thread, and you can find context and even people having similar problems, so it's really helpful. And now, with the API changes, we might lose it. Okay, so Reddit, Twitter, Mastodon, nice, Hacker News. Okay, perfect. Oh, even DevConf. Amazing, thank you. Okay, and Root.cz — so we have some Czech and Slovak people here in the audience. Is anyone still using the old-school RSS feeds or something like that? Okay, also that, perfect. Okay, nice.

Yeah, and as I said, this is actually the last slide. I thought that we would spend more time here, so maybe I should have prepared better — I did prepare more questions, but I didn't put them in this deck. So since we still have like 15 minutes, maybe we can go through them. Okay, so... Okay, this question I had prepared, but I tried to filter out the ones that I didn't think would be that interesting. And yeah — text editors and IDEs, so maybe I should have included that in the list, and I already saw some Vims and PyCharms. So, any new feature in any text editor or IDE that you really like that's worth sharing? Super, yeah, nice feature. So I wanted to know if you can — okay, so the comment was that VS Code has an integration with ChatGPT directly, and you need to apply for a beta to get into the program, and the comment was that it works pretty well. Okay, thank you.
Yeah, that saves you a few alt-tabs over to the website. Okay, and I think we pretty much went through it, so we have like 10 minutes left. Okay, so any questions or comments — anything you want to share? My plan is to save the data from the tool, go through it, maybe listen to the recording and explain some of the entries, and share all of it in a little blog post. So if you missed one tool or something, it should be available. I will be on vacation for one week, but when I get back from the vacation — so in two weeks — I'll try to put it on the DevConf blog and share it. So if you missed something, it should be up there. And thank you, everyone, for finding this session and participating. I think it was lovely, and I'm really planning to go through the tools and everything you put in there, because I really love it when you are inspired and share these fascinating tools and features. So thank you so much for joining. Any questions online? Thank you. Thank you.

Hello and welcome. My name is Adam Miller. I am a member of the engineering team at Red Hat focused on Ansible, and today I'm here to talk about the Zen of Ansible. This is a talk originally created by a colleague of mine named Tim Appnel, who was unfortunately not able to be here with us at DevConf this year. Not that long ago I found out that I would be giving it, so I will do my best to present it in his stead.

Really quickly, before we get started: who here does not know what Ansible is? Okay, good. If you did not, I was going to suggest you find another great talk at DevConf, because we are not going to cover what it is or how to use it. We're simply going to talk about some good practices, some guidelines, and the general ethos of the way to create Ansible automation content. Really quickly about me: I've got about a decade of experience in the Ansible community as an upstream contributor before I joined the engineering team officially, and prior to that I've got almost two decades of experience as a Fedora engineering contributor. So I've been in the open source community for a while. It is very important to me, it is part of who I am, and if anybody is curious, the tattoos are real — I do have a Red Hat Shadowman and an Ansible tattoo on my person, which were part of personal milestones for me in the open source community. So this is near and dear to my heart.

About this talk: it was originally given as an Ansible best practices talk, created for an event in 2016, and it has evolved since then. There was a suggestion to adapt the Zen of Python by Tim Peters; however, a direct adoption wasn't perfect, so there are some aphorisms that we mutate a little bit and adapt to Ansible, so they are more idiomatic for the subject matter. And as I said, my colleague Tim Appnel, who is unfortunately not here with us, is the original creator of the Zen of Ansible and the original creator of this talk.

So, the Ansible way. The goal here is to take what we consider a set of aphorisms or idioms and apply them to the way we think about creating Ansible content. We want to follow a set of guidelines or philosophies that are in line with how Ansible has generally, historically functioned. At the core of it, Ansible is a hopefully simple, powerful, agentless automation tool. Everybody in the room who has already disclosed that they know what Ansible is, is hopefully aware of this. Oh, why did that go — all right? What just happened? Oh, haha, okay.
The slide has animations — we're all in for a treat here. When I don't view this in presenter view, it doesn't do the animation; I'm very bad at slides, I apologize. Okay. As I mentioned, the aphorisms are the core of the goal behind creating the Zen of Ansible. We don't necessarily have a hard set of guidelines or recommendations, but we have a set of mentality-changing approaches to problem solving that we'll go through here.

Ansible is not a programming language. First and foremost, when you create an Ansible playbook, think about it in a declarative fashion. Do not attempt to bring programming idioms into your playbooks; abstract that away into a module, plugin, or template. You should keep the composition of a playbook a simple representation of what you're trying to do. YAML is not great for coding. YAML is a markup language: it is something in which you declare your intent or your desired end result. You should not try to code in it. Ansible users — statistically speaking, based on data we've collected — are not traditionally programmers. They're traditionally DevOps practitioners or systems administrators who are trying to evolve their skill set or remove monotony and repetitive tasks from their daily lives. In light of that, those of us who do create Ansible content — we create modules and roles and different methods delivered in collections for our users to consume — should be thinking with that audience or that person in mind: somebody who is not attempting to write code. They're trying to declare a series of in-order tasks to be executed.

One example here: if you're authoring content, you might start with the shell built-in module, and this is fine, but it's difficult to read. We find that every command-line tool call from the shell module is an opportunity to create a more idiomatic plugin or collection. Here we have a set of parameters being constructed, and this is not particularly easy to read or maintain. It's effectively shell one-liners piping to grep, and that's just fine if you're creating a shell script, but it's not necessarily what we recommend in the space of playbooks. Here, what we've done is take all three of those tasks and fold them into a single module, and every component that is required to represent the desired end state is now represented as parameters we pass in. The goal is that it is easier to read and understand: somebody who does not have expertise in the command-line tool that was just used can read this and potentially use it without that added expertise in the topic space.

We can ask ourselves, should people be using things they don't fully understand? Well, I think we use things we don't fully understand all the time. I would wager money that everybody who owns a computer or cell phone does not fully understand the intricacies of the kernel, or the memory allocator, or their I/O scheduler — or elevator, depending on your preferred vocabulary for the topic.

Here's another example. This particular version is objectively nasty. This doesn't use the script module, but is instead overloading the template capability and passing this off to an inline Python interpreter, which is painful. This is difficult to debug, this is difficult for users to reason with, and while it is clever, it's probably not how we want to present ourselves to the world.
This is actually using Jinja2 interpolation, it's using Jinja2 conditionals, and then it's passing it all off to a Python interpreter, which is rough. The next one is a slightly milder form of our previous example, where we find a set of parameters that are very clearly passed to this command-line tool; this could define a user interface for a potential native automation in Ansible. We're still relying on the system path, we're still relying on the role, and here's how we can refactor it. In this example, we've effectively taken what we did and abstracted away all the programming logic — all the things that are required — into proper Python code, where it should be, in a module; and depending on what you're working with, it could also be a plugin, maybe an action plugin or otherwise. But the logic is implemented using a proper programming language, in this instance Python. For those who don't know, you can actually implement Ansible modules in any programming language you like, as long as they comply with the JSON interface that has been defined. We have examples of how to do it in Go and Ruby and Rust and a couple of other things. I will note that if you're going to do that, please be mindful of the processor architecture you're executing against, which can differ between your controller and your remote host. But yeah, that's not particularly central to the concept of the Zen of Ansible. I've been a Python programmer for like 20 years, so saying Zen of Ansible instead of Zen of Python is taking me a moment to adapt to; I apologize.

So these are a handful of interpretations of the aphorisms that come from the Zen of Python. Clear is better than cluttered. Concise is better than verbose. We want to make sure that the playbooks we write are easily readable, so that we remain true to the principle of simplicity defined before. And that's a goal — something we have to always strive for as we refactor or continue to work on the automation that we're creating. Simple is better than complex. Complexity is unavoidable, but it should not be expressed or represented in our playbooks if it can be avoided; the playbooks should, again, maintain that simplicity. And readability counts. I think this is probably a well-known, heartfelt aphorism for those of us who are in the Python development community, because Python also likes to favor readability, and we want to carry that forward in our playbooks. The goal is not necessarily for a playbook to be self-documenting, but for somebody who has some level of subject expertise — even just a little bit — to be able to reason about what is happening in the playbook: follow the tasks, which are executed in order, and understand what it does.

Don't use shorthand. This is valid — this is something you can do in a playbook — but, on purpose, the parameter list scrolls off the screen. This is difficult to read. Your editor will not be happy with you, and depending on who you're collaborating with, they will find this difficult as well. So, you can technically pass all of the parameters for a module and a task on a single line. Don't do that. It makes many things hard, including syntax highlighting. So if you're using the language server — the LSP; for the life of me I forgot the P, language server protocol — if you're using the language server protocol implementation for the Ansible language, which is available for VS Code, Emacs, Vim, and a series of other editors, anything that knows how to properly load an LSP, it will not properly syntax highlight this, because it's very difficult to reason with.
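As a single-task illustration of the shorthand being discouraged here (the module and parameters are arbitrary; the actual slide showed three tasks):

```yaml
# Discouraged: key=value shorthand crammed onto one line. Valid, but hard to read,
# hard to diff, and hard for editors and the Ansible language server to highlight.
- ansible.builtin.copy: src=files/app.conf dest=/etc/app.conf owner=root group=root mode=0644 backup=yes

# Preferred: explicit YAML keys, one parameter per line.
- name: Deploy the application configuration
  ansible.builtin.copy:
    src: files/app.conf
    dest: /etc/app.conf
    owner: root
    group: root
    mode: "0644"
    backup: true
```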
Oh my gosh, I keep hitting the wrong button and jumping more slides than I mean to. So: these are the exact same three tasks, simply reformatted slightly. They are, again, more easily readable and able to be syntax highlighted.

No matter whether you're creating playbooks, roles, or modules, something we want to keep in mind is that Ansible is about getting things done — it's about accomplishing a task at the end. So helping users with their tasks is the topmost objective, and we want to keep in mind that the user experience carries more weight than ideology. Even though we have a set of guidelines and a series of recommendations, there are times when we'll deviate from those in the interest of the target audience. And this is an example: Kubernetes users are used to a certain interface to their API. They typically express it as a YAML specification, and it's a well-known format. And for everybody who's looking at this and saying, I do not want to rewrite all of my Kube YAML in a playbook — don't worry, you don't have to; you can just tell it where to find the Kube YAML and it will utilize it. Now, there is an Ansible best practice that says a module should abstract users from having to know the details of getting things done. Well, there are a lot of details in here, and that seems to go against that best practice. The reason for that is that this is the interface Kubernetes users are used to. We have many examples like this in networking automation as well, because network automation is a different persona and mindset, and the way that a network administrator interfaces with a switch or a router is going to be relatively different from how those of us who know how to interface with Linux servers would interface with our Linux server. The command prompts are very different, the types of workflows are very different, and we want to make sure that, as we try to follow our Ansible guidelines and keep our playbooks and roles idiomatic, we keep in mind who our target audience is.

So, Arthur C. Clarke originally wrote that any sufficiently advanced technology is indistinguishable from magic. Magic conquers the manual. What do we mean here? The idea is that the underlying componentry that allows Ansible to operate on a remote resource should be hidden from the user. That doesn't mean the user can't go and find out how it works — we shouldn't make it secret — but the user interface should not expose the underbelly of how it functions.

This next one is one that Tim came up with, and I believe he has a name for it that I can't remember all of a sudden, but it's basically convention over configuration. There should always be extra knobs that can be turned in configuration; however, if there is a subject-matter audience that expects a certain behavior, default to the behavior that they expect. Don't expose it in a way that forces them to always include all the details just to accomplish the task — and I'll show an example here in a minute. For this one, we have a playbook that has to pass in these four parameters for every task, and for those of us who are familiar with automating infrastructure-as-a-service platforms using Ansible modules, there are a lot of situations where you find yourself having to pass the same set of parameters to every task. Well, the reason the module defaults capability exists is to let you provide those things in only one place.
You can set them as variables or inventory information, and then they will automatically be passed to those modules. Another option is a connection plugin: for module and plugin developers there is a framework called httpapi, and if you use httpapi these types of operations can be handled by providing that data at the inventory level. Therefore it can be parameterized, it can be inferred, and it can also be locked away in your vault. And for those who are using one — please use a vault, something, even if it's not ansible-vault: use HashiCorp Vault, use CyberArk, use something; don't just put all your passwords in plain text, I beg of you. You can put these things in the appropriate places, define them once, and they can be reused over and over throughout, and the goal is to provide that interface to your users so they're not constantly having to supply it. Okay. Ansible is a desired state engine. I think that is not something everybody always considers, but if you look at a particular task, that task defines a state; many of the modules have a state parameter — present, absent, upgraded, enabled. It's a state engine. So when you're creating automation to deliver to users, think about the state transition, the state transaction: inspect state and maintain the idempotency of the operation, such that the user will only inflict change when necessary. There are going to be moments when you don't necessarily want to do this, or you cannot, and that's okay — but document it. Document it well in the module, document it well in the role, document it in big bold letters: this operation is not idempotent — and maybe explain why; provide some documentation about the underlying system that requires it not to be. For example, there is certain storage configuration with mdraid, and there are certain operations that cannot be carried out on an mdraid system without inflicting change, because that's just the way it is and Ansible can't necessarily change that. We would love to go around and patch all the software in all the world to function in a way we think makes sense; I don't think that's realistic or feasible. So sometimes we operate within the parameters of the system we have to work with.
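To make the "inspect state, change only when necessary" idea concrete, here is a small, generic sketch of the pattern a well-behaved module follows — plain C, not Ansible code, and the file path and marker line are made up: check the current state first and report a change only when something actually had to be done.

/* Generic desired-state sketch: ensure a marker line exists in a file,
 * and report whether anything had to change (error handling is minimal). */
#include <stdio.h>
#include <string.h>
#include <stdbool.h>

/* Returns true if the file was modified, false if it was already in the
 * desired state. */
static bool ensure_line_present(const char *path, const char *line)
{
    char buf[1024];
    bool present = false;

    FILE *f = fopen(path, "r");
    if (f != NULL) {
        while (fgets(buf, sizeof(buf), f) != NULL) {
            buf[strcspn(buf, "\n")] = '\0';
            if (strcmp(buf, line) == 0) {
                present = true;      /* desired state already reached */
                break;
            }
        }
        fclose(f);
    }
    if (present)
        return false;                /* no change necessary: idempotent */

    f = fopen(path, "a");            /* only now do we inflict a change */
    if (f == NULL)
        return false;
    fprintf(f, "%s\n", line);
    fclose(f);
    return true;
}

int main(void)
{
    bool changed = ensure_line_present("/tmp/example.conf", "enabled = yes");
    /* Mirror the "changed" result a module would report back to Ansible. */
    printf("{\"changed\": %s}\n", changed ? "true" : "false");
    return 0;
}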
Complexity kills productivity. Complexity is something we want to avoid and something we want to not introduce to the user, so this continues to reinforce the simplicity of what we can do with our automation, and the simplicity of the interface to the complex things we can accomplish with it. If you have a single playbook with a lot of conditionals, consider one of two things: find sets of conditionals that are identical and move those tasks into a block, so you define the conditional one time; or maybe you're trying to do programming in a playbook, and that needs to be taken out — if you have too much logic or control flow in a playbook or in a role, maybe it needs to be moved somewhere else. So strive for simplification. If the implementation is hard to explain, it's a bad idea. I'll say it again: if the implementation is hard to explain, it's probably a bad idea. For many years people have followed what in English we call the KISS methodology — keep it simple, stupid. That's not a nice way to say it, but that's the moniker; I didn't make it up, somebody in the 70s probably did. The idea is that we want to deliver something that is understandable to the widest possible audience, because, again, I would wager that not everybody who uses a computer or a cell phone is aware of how the kernel operates; they don't need to be, they can be productive without that level of detail. Every time you call the shell command, it is an opportunity to automate more idiomatically. If you find yourself often calling the shell module, it is a good moment to ask yourself, or those around you, whether this should be a module or a plugin, so that we can maintain that idiomatic, idempotent interface that allows us to declare state — because running a command is not a state declaration; it is a method to accomplish a task. It is, I would almost say — trying to find a good term for it — a little brute force. It's useful if you're at the command line and you have an idea and you need to accomplish something: you just type it. But if something is going to be repeated over and over, and we need to follow our state transaction model, it's not as nice, so introducing that to the user via the playbook interface is something we would like to avoid. This is an example of installing software — a simple example, chosen because you often see a series of command-line options or operations delivered in documentation. Every time you see a series of command-line operations, it's an opportunity to automate; it's an opportunity to deliver that exact same end result, the desired state, through a repeatable, simplified user interface. We don't have to accept that what is true now will always be true; we can always refactor. Just because something is the way it is does not mean it cannot be changed. It's software: everything is a movable object, nothing is set in stone, we can always refactor. That doesn't mean we should go around breaking everything or necessarily removing backwards compatibility, but we should be open to the possibility of improvement. And one of the things we want to deliver with automation is removing friction. We will find many cases of disjoint systems, or environments constructed from components that don't have any native integration; Ansible can be the glue for that. We can create automation that allows users to accomplish things that are more valuable than simply the sum of the parts. So, in line with "just because something works does not mean it cannot be improved":
You will notice at the top here that we're calling a lookup plugin for a template, passing that through string interpolation, and converting it from YAML to a string. That is not a nice user interface: every time users want to use this functionality or declare this state, they have to type that out; they have to carry that added bit of knowledge. A minor improvement to the module is to simply accept a template and abstract that functionality away, because on the back end, in the code of the module, we can still accomplish the same thing — we can even use the same libraries that implement the lookup plugin — but we make the declaration a nicer experience for the user. For anyone who's been around the IT industry long enough, we know that change is inevitable, change is constant, and the only thing that doesn't change is that everything will always change. Automation is no different. It is a journey that will never end: you will always find another thing to automate, or a new method that is better than the old method for something you have already automated. In light of that, I always joke that your goal should be to automate yourself out of your current job — not because you want to find a new job, or because you want to stop working and being able to pay for things, but because if you automate what currently consumes all your time, you have the opportunity to solve new or different problems, and there is never a shortage of problems to be solved. The journey never ends, and what you automate today will not necessarily be what you automate tomorrow, because the IT industry never stops moving. If you want to dive more deeply into each of the aphorisms we covered today, here is a set of links that I highly recommend you go and read. And we also — oh, I did not do this; I'm going to wait, people are taking pictures — I should have put the Matrix information up here on the next slide. Please come join the community, please come join the conversation, be a part of what we are collectively building, because as people who use Ansible, as people who contribute, as people who evangelize for Ansible, we are all part of the same community. We want to make sure we're talking to one another, helping one another solve problems, and simply sharing our story. So please come join the community, and thank you. Questions, comments — I love it all. Yes, question. Okay, so the question was: I made the assertion that, statistically, Ansible users are not developers; however, how would an Ansible user know how to accomplish something without that experience? How can they jump to the module if they cannot create it? So the question is: if an Ansible user is not a programmer and they are using the shell module, how do they get from that point to creating a module if they're not a programmer? This is where I would draw a distinction between automation content creators and automation users. A content creator is most likely a programmer. I would wager that many people who simply use Ansible to accomplish their tasks come from a different background, or are part of a demographic that doesn't traditionally write code. Now, I do agree that we're seeing more and more people learn to code; I think the expectation is that even those who were traditionally systems administrators become programmers. The DevOps movement has probably subsided slightly from the original peak of its hype, but I do think it is an ongoing and very positive movement, and we see that continuing.
But there is a wide portion of the world that has still not moved on to those newer practices — SRE, platform engineering, DevOps practitioners, those kinds of things. So, statistically speaking, by user base we still see that the majority of users are not programmers. I'm not saying all users are not programmers. I use Ansible: the laptop I'm presenting from right now was configured with Ansible; every computer in my home, every computer I use, is configured with an Ansible playbook, and it's exactly how I want it — it takes me 20 minutes to reinstall an operating system, it's lovely — and I'm a programmer. Being an Ansible user doesn't exclude you from being a programmer. I was simply saying that we have found that the majority of users don't have a traditional programming background, so those of us who are creating content to be consumed or reused should be mindful of that demographic. Question: do I recommend that a traditional sysadmin learn how to write modules? I recommend that everybody learn how to program, at least a little bit. That's a personal opinion; I'm open to the possibility that I'm incorrect, but I think it's a useful tool to have some proficiency in a programming language. You don't have to get down into C or Rust, but Python or Ruby or Perl — something that allows that audience to accomplish their tasks more easily or better than by hand — is advantageous. So yes, I think it's a good idea for everybody to learn at least a little. Question: yes, thank you — ansible.com/community. All of the information that I meant to put on this final slide can be found on that website: ansible.com/community. Questions? All right, thank you so much, thank you.

And again, I should remind you that compliance is literally something a customer running a system works out with their FIPS auditor and the vendors involved; it's not a technical state known in advance. So if we look at SHA-1, the requirement is to get SHA-1 gone by the end of 2030, but the reality is that the guidance is given right now. So if you have a certified module — well, there are none, at least of the ones we care about; they are all implementations under test — the laboratories basically say these guidelines apply now, not in 2030. You have time to prepare, but if we certify modules, we have to do it now, and the laboratories each have somewhat different standards, so we need to move toward this guidance. The most important part for us is that SHA-1 is not allowed. It affects all the crypto, specifically in the Kerberos case, and crypto modules cannot instantiate non-well-known curves — those we removed completely. The whole certification process is literally: throw a module in for investigation, get feedback, and repeat multiple times; sometimes it takes months, sometimes days, to get the feedback, and it's always interesting to find the problems right when your project has to be released, and then you have to do some firefighting around all of this. So the reality we have now is that if you take the strict understanding of 140-3, then you cannot interoperate with Active Directory, period. It is not possible, because there are no overlapping cryptographic primitives that could be used at all. Active Directory only supports the Kerberos ciphers from RFC 3962, which use a Kerberos key derivation function that is not allowed anymore; and then, on the other side, they all use SHA-1, which is being asked to be disallowed.
You could use SHA-1 to verify legacy signatures, but we cannot really put this into the category of legacy signatures, because they were not generated years ago — these are signatures generated as part of connection establishment, for example, right now. Strictly speaking, you cannot apply any exceptions here. Okay, so the game is over; I can end this talk and you can spend the rest of this sunny Sunday somewhere else. Well, we come back to the interesting story that happened two months ago. Microsoft actually submitted their own implementations under test — there is a bunch of crypto modules they have to do — and obviously nothing in this mentions Active Directory, because these are, as in our case, the actual crypto modules that the Kerberos part of Active Directory will use to implement their thing; the Kerberos part is basically an application on top of this. So, like us, they don't have FIPS 140-3 certified crypto modules yet; we are all working in preparation. In addition to that, on the same day — by the way, if you notice, it is April 14th for the majority of these — the leading Kerberos developer at Microsoft wrote a blog with quite interesting content, which boils down to: hey, we need to rework the entire crypto stack in the Kerberos implementation in Windows. And he finally mentions RFC 8009, the encryption types for Kerberos that are allowed under FIPS 140-3. So, great, there is finally something that gives us a bit of hope, and from this perspective we can at least expect that our future will be bright. I really hope so — there is too much darkness around right now. But of course we live in the present, and back in the present we have the funny question of how all of this is enforced. You have crypto modules — in our case, libraries that other applications link against; these are pretty complex libraries themselves, the APIs they provide have certain semantics, and so on — and you can configure these libraries to apply certain things. How do we make this coherent and consistent with what is supposed to be expressed by the regulatory bodies? The easiest answer is that we try to isolate it all in a kind of system-wide configuration. This is not a new topic; it has existed in Red Hat Enterprise Linux and Fedora and other downstream distributions for quite a while. The crypto-policies project effectively defines a nice set of rules that allows you to generate a bunch of configuration files for these crypto libraries, to apply a consistent set of rules. It also allows distributions to have their own consistent sets of rules that are not necessarily the same: for example, what DEFAULT or FIPS means in RHEL is not necessarily the same as DEFAULT in Fedora, at least at this point, because Fedora has some community requirements and RHEL has some business requirements, and they don't always align. That's fine; that's what these policies are for. There are a bunch of them, and they are already used in multiple places. This is just an example of how the test outputs look; these outputs are effectively the generated configurations already — I'm not showing the original configuration, I'm showing what is generated and then loaded into the applications when the library is initialized. These libraries have these policies and a way to tune them, so you can have a main policy and then add or remove certain things within the context of that policy. The names here are just names; behind these names there are small configurations called sub-policies.
For example, AD-SUPPORT in RHEL 9 means enabling the encryption types that Active Directory understands; in RHEL 8 it likewise means enabling all the encryption types Active Directory understands, but the RHEL 9 one doesn't contain the RC4 ciphers, for example. NO-SHA1 is one that Fedora ships: the default configuration in Fedora enables SHA-1, and you can add the NO-SHA1 sub-policy to disable it; that sub-policy would make no difference in RHEL 9, because there SHA-1 is already disabled. And you can combine them — you can apply multiple of those. How a configuration assembled from multiple sub-policies matters, or what it means, is really up to the business, up to the customer, and, if this is a FIPS environment, up to their FIPS auditor to interpret and analyze. We provide the means, but we cannot really say that they are compliant in the end — we just provide the means to be compliant. Just one example: when you take the DEFAULT crypto policy on RHEL 9, you get a Kerberos configuration with permitted encryption types that include the types from RFC 3962 — the SHA-1-HMAC-based ones — and also the ones from RFC 8009, the SHA-2-based ones. If you take the FIPS policy instead, you get only the two encryption types from RFC 8009. You are defining that a well-behaved application will only see these encryption types, only request them, and therefore only operate on them. That is what the system-wide level provides. The application still needs to do something to operate in this environment: if you are handed permitted encryption types that you don't understand, from your perspective there is nothing to work with, so the application needs to do something to live there. How this goes on the application side is that, in most cases, it's roughly transparent: you initialize a crypto library, the library loads these configurations, loads providers or whatever, and applies these defaults. The application might change some of them or explicitly request certain things, but if the crypto module does not have an implementation for a particular crypto primitive, then nothing can be done, of course. So in many cases things are transparent — but failure is also sort of transparent. A new thing in FIPS 140-3, for example, is the indicator API. It is a sort of requirement that crypto modules have to implement; not all of them have implemented it yet. I know that NSS has merged this API; for OpenSSL the discussion is still open about how it should look and how it would be used. There are always concerns that existing applications might fail simply because they are not prepared to query this additional information. It's easy if it's implicit: you call something, it fails, you bail out early, and it works — fine in the sense that you see the problem early enough; you cannot log in over SSH, and that's all you get. But for the explicit one, the application needs to be modified, it needs to have knowledge of the API to call — and if there is no API, of course, that's a different question. In general, we can put all of this modernization that happens at the application level into three broad categories. One of them is not really FIPS-related at all: it's a modernization of how applications handle their crypto operations, what we call algorithm agility. It's the same story for being prepared for post-quantum, the same for FIPS: you have to be able to negotiate certain operations, be prepared for the actual algorithms to change, and be prepared to negotiate these changes between client and server in a way that doesn't break you if the algorithms change.
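As a small aside on how "transparent" this looks from the application's side, here is a rough sketch of my own (not from the talk) of how a program linked against OpenSSL 3 can ask whether a FIPS provider is available and whether FIPS default properties ended up enabled by whatever configuration the library loaded at initialization.

/* Query what the loaded system configuration left us with. */
#include <stdio.h>
#include <openssl/evp.h>
#include <openssl/provider.h>

int main(void)
{
    /* Is a FIPS provider available to the default library context? */
    int fips_available = OSSL_PROVIDER_available(NULL, "fips");

    /* Are the default fetch properties set to prefer FIPS implementations,
     * e.g. via the system-wide OpenSSL configuration? */
    int fips_default = EVP_default_properties_is_fips_enabled(NULL);

    printf("FIPS provider available: %d, FIPS defaults enabled: %d\n",
           fips_available, fips_default);
    return 0;
}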
Ideally, as long as your libraries — your crypto modules — provide these primitives and you are able to discover them, negotiate them, and operate on them, you should be good. The truth is that this is not always where we are. The other category is defaults: it's nice to have self-adjusting defaults, and crypto-policies is one possible mechanism to give you self-adjusting defaults — if the policy profile changes, the application automatically adjusts and no longer accepts what the policy doesn't accept. But then we get to the third category, the trouble of how to migrate data that was created before the policy change happened. All three of these, in broad strokes, have their nightmares, they have their happy days, and they probably all have a business somewhere that failed to operate, and a lot of pressure from the people who upgraded and found that nothing works anymore. So, as usual, planning is the key, and planning in advance is the key as well. I just want to remind you that FIPS 140-2 was accepted 21, 22 years ago, so we have happily forgotten how all this work happened. We — as a computing society, as technology companies, as open source communities, and so on — do have work to do, and we have forgotten how we did that work in the past, because generations of developers have changed, technologies have changed, and so on. Sometimes it's too late when this arrives, hence the urgency to fix things now. We have some time, until December 31, 2030, to get rid of it in all the remaining obscure corners of our software, but of course what we do now is basically fixing the broad strokes — covering whatever we can, as much as we can — and then there are still things missing, or things that will be discovered only when you actually have to do that work. With application modernization and algorithm agility, the main problem is not changing the code; the main problem is agreeing on where we are going and how we will be changing. In most cases these things start with protocols, and protocols are often defined by consensus across the industry, across different bodies — the IETF and W3C and all the others. Changing standards is a slow process, especially in the crypto area: updating an RFC to remove certain things takes a long time, even if everybody agrees that certain things need to be moved on from. So this is part of what you need to do: not just change the code, but also update the specifications and work on them. Sadly, in many cases this part is forgotten, maybe in the hope that somebody else does it — "it's not my job", right? That's a common problem; we always forget that it's us, as a community, with a lot of professionals, who have to do this. The other problem is that adjusting implementations takes a lot of time. For any submission in this kind of crypto area you have to do a lot of analysis of the code, purely so errors don't slip through, and then adjust it in such a way that old deployments can at least be migrated to the new ones, or at least interoperate, maybe with backports to the old code base, and so on. That blog by Steve Syfuhs also had a small note about whether the changes Microsoft is working on will be backported to Windows Server 2012, 2016, and 2022, or whether they will be purely what they call vNext, the next major version. The answer is: they are working on it for the next major version, so how the backports will be done is unclear — and we, meaning developers on something like RHEL 9, have to accept existing working systems that don't have these new implementations. So that's the set of challenges we have to deal with.
There is one example we want to show you today of how we have handled this so far: Kerberos PKINIT, a protocol extension — effectively, using smart cards to authenticate over Kerberos. So, Julien, please take over. PKINIT is basically an extension of the Kerberos protocol for certificate authentication, and it's indeed a good example of the kind of trouble you can run into when you have to deal with the new FIPS restrictions. It is basically an issue of algorithm agility, especially in two areas: the signature types — the hash functions used in signatures — and the parameter groups used for the Diffie-Hellman key exchange process. There are three of them standardized in the RFC — group 2, group 14, and group 16 — the first two being mandatory to implement. Even when you have groups standardized in the RFC that are sufficient to comply with the FIPS restrictions, it may actually not be enough, because there are areas in the RFC that are ambiguous in the way they are defined, and that tends to create different understandings of the RFC depending on the vendor implementing these libraries. This has typically been the case for PKINIT with algorithm agility for signatures: the RFC has been understood differently by the OpenSSL team and by the Heimdal team, which resulted in some cases where interoperability was simply not possible. And even when there are RFCs available that provide a way to comply with a FIPS requirement, the issue is that the implementations don't necessarily implement all of them, because usually there is one original RFC and then newer RFCs on top of it, adding a new algorithm or a new mechanism. This is an issue we face in Kerberos, because we have the Heimdal implementation and the Active Directory one, which also has its own set of specifications that differ from the RFC ones. So we meet all kinds of issues because of that: the fact, for example, that the RFC for signature agility is not fully implemented in MIT Kerberos; that in PKINIT there is still no support for ECC certificates at the moment; and that on the AD side we are still limited to SHA-1 signatures for PKINIT, because that's all they support for RSA keys in the Diffie-Hellman key exchange process. And there is something else we are somewhat concerned about: we currently have two slightly different implementations of the OpenSSL FIPS provider, the upstream one and the downstream RHEL one, and we suspect we might eventually have yet other implementations from other distributions. This worries us, because we might end up in a situation where it's difficult for a system administrator to figure out the proper configuration to apply for their environment, because there might be different recommendations if the certification labs don't come up with the same requirements — basically, different implementations of different understandings of the FIPS standard. And the interesting part here is that if you look at the "implementation under test" crypto modules list, you will find that every single RHEL downstream rebuild — Rocky, Alma — and every other Linux distribution, from Canonical, from SUSE, has submitted its own crypto modules for certification. Which means that even if they have a similar code base, I'm sure they will have differences, because they may use different labs and have a different state of the patches they apply. Hopefully they actually check each other's work, whatever is available, and try to synchronize.
In my experience, there is also a tendency not to look at the application-level problems until you actually get things certified, and then people discover these problems. So I'll go through some practical examples of what went wrong. In the process of complying with the restrictions, we went through a few issues that we have ironed out. One of them was support for the well-known groups for Diffie-Hellman key exchange, which is part of the PKINIT process. The thing is, group 2 is not actually supported by the OpenSSL library, because it's considered too weak cryptographically, but it is basically used as the default by Heimdal for PKINIT. This has caused some interoperability issues, which we fix right now by recommending a configuration change on the Heimdal side. There is also, of course, the SHA-1 signature issue: at the moment, the Windows implementation of Kerberos does not support any newer SHA version for the RSA-based signatures used as part of the Diffie-Hellman exchange. We hope we might be able to fix that by supporting elliptic-curve Diffie-Hellman in the future; it's currently being worked on upstream. And this is an issue we also have with the older implementations of Kerberos on all the versions of RHEL, because, as I mentioned earlier, there is still no proper algorithm agility in MIT Kerberos for signatures — it is basically hard-coded in the code. So far we have upgraded to SHA-2 signatures, but older versions will still produce SHA-1 signatures that you have to verify, in case you still have such hosts in your environment. A few words about how signature algorithm agility is supposed to be implemented as part of PKINIT: there is this supportedCMSTypes attribute that is supposed to provide the list of all the signature algorithms supported by a certain client — by a certain agent, actually. The issue with MIT Kerberos right now is that it will always generate a SHA-2 signature, and it will advertise the fact that it understands SHA-2 signatures, but it will not take the supportedCMSTypes attribute from the other agent into account. This is basically the explanation of why we have to integrate an exception into the FIPS crypto policy for Kerberos if you still want to achieve interoperability with Active Directory, at least until SHA-2 is actually implemented by Microsoft. The issue is that these algorithms, like SHA-1 and others, are not part of the FIPS provider, which is the one available by default when you use OpenSSL in FIPS mode. So the approach is simply to use a library context and load the provider that makes the algorithm you need available. This is, in fact, bypassing the limitation of the provider, but it is still controlled by the crypto library. That's what sub-policies like AD-SUPPORT or AD-SUPPORT-LEGACY are for: they modify the Kerberos configuration and allow this bypass method to be used to access these algorithms. A few words about the incoming version of Kerberos, 1.21 — it's actually already released. It adds a few things; upstream is actually quite cooperative, not in FIPS support as such, but at least in accepting some modifications in the code base that make our life easier in the process of complying with it. For example, they allow us to make the import of some well-known groups optional — typically group 2, which is not provided by OpenSSL and which had been causing issues because the PKINIT module would simply fail to load when that group could not be loaded. They accepted this kind of modification so the plugin can work regardless.
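To make the library-context trick Julien described a moment ago a bit more concrete, here is a rough sketch of the general technique — my own illustration, not the actual krb5 patch: fetch SHA-1 from the default provider inside a private OSSL_LIB_CTX, leaving the process-wide FIPS configuration untouched.

/* Fetch a non-FIPS algorithm through a private library context. */
#include <stdio.h>
#include <openssl/evp.h>
#include <openssl/provider.h>

int main(void)
{
    /* Private library context: providers loaded here do not affect the
     * application's default (possibly FIPS-restricted) context. */
    OSSL_LIB_CTX *libctx = OSSL_LIB_CTX_new();
    if (libctx == NULL)
        return 1;

    /* Explicitly load the (non-FIPS) default provider into this context. */
    OSSL_PROVIDER *deflt = OSSL_PROVIDER_load(libctx, "default");
    if (deflt == NULL)
        return 1;

    /* Fetch SHA-1 from that context; the same fetch may fail or be
     * restricted in the process-wide FIPS context. */
    EVP_MD *sha1 = EVP_MD_fetch(libctx, "SHA1", NULL);
    if (sha1 == NULL) {
        fprintf(stderr, "SHA-1 not available\n");
    } else {
        unsigned char digest[EVP_MAX_MD_SIZE];
        unsigned int len = 0;
        EVP_Digest("abc", 3, digest, &len, sha1, NULL);
        printf("SHA-1 digest length: %u bytes\n", len);
        EVP_MD_free(sha1);
    }

    OSSL_PROVIDER_unload(deflt);
    OSSL_LIB_CTX_free(libctx);
    return 0;
}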
Alexander will now give some details about encryption type compatibility between Active Directory and MIT Kerberos. So far this was just PKINIT, which is admittedly one of the more complex parts of this, but there is another one, specific to Active Directory. Active Directory issues Kerberos tickets that carry additional information, and that information is really crucial in an environment where your Kerberos identity is tied to your system identity. In the past two years there were a number of issues — security problems that specifically caused break-ins in environments where you have Active Directory and Linux systems working with each other. Features of Active Directory allow, for example, each user to enroll machines — to enroll additional machines; there's a quota of around 10 machines or so. Some users found out that if they name a machine like a Unix user, then certain behaviors within Active Directory allow them to create a machine named, let's say, "root", and then use the identity of that machine to log in as root on any other Linux machine. To fix this kind of problem, Microsoft added additional checksums over certain fields in the Kerberos packet and fixed some parts on their side. We worked together — the Samba team, Microsoft, Red Hat, and a few other vendors — for more than a year to fix this, and released it, I think, in November 2021. Then, after that, we found out that some of these checksums — a different checksum — can be attacked with a preimage attack, and that kind of preimage attack had existed for 15 years; nobody noticed it when the specification was written. So Microsoft came again with a new release in November last year, introducing yet another checksum. While working on fixing some of these things, we found that, on the FreeIPA side, we had switched to using the SHA-2 encryption types, which means these checksums are done with SHA-2 signatures — and suddenly Microsoft servers started rejecting these tickets. We reported it to Microsoft, and apparently they don't actually check the checksum: they check the size of the buffer, and if the size of the buffer is different from what they expect for a SHA-1 checksum, it's a failure for them. When we reported it, they were already working on the introduction of the new checksum and some other semantic changes, and after they released those fixes we found out that they had silently fixed this problem as well. So now we can apply this checksum, but by that time we had already added functionality, which is part of the 1.21 release, to allow the KDC to hint that, when you are talking to Active Directory, you can change the encryption type to a different one just for that exchange, so that you can stay interoperable. This will not be needed once they finally support RFC 8009, because then we can use the SHA-2-based encryption types, but for the older systems we have to keep it. Maybe those older systems will not live beyond November this year, when Microsoft has said they want everyone to switch to the new builds, which introduce the new checksum that prevents the preimage attack. But these are the things we have to deal with — this is not strictly FIPS, but it is driven by our investigation of how to make it all work in FIPS mode. And moving forward: we have already worked for something like five years with Microsoft on the remote procedure calls implementing all kinds of operations — creating a user,
setting a password on a user, replacing certain things, mutual authentication between machines and domain controllers, and so on. Most of that was still using RC4 in its operations. Some of these operations changed semantics — well, they introduced new versions of those operations — basically requiring an AES-based session key and then using that AES-based session to operate on whatever is inside, which is a type of activity allowed under FIPS, because you have a secure channel using approved cryptography, and what's inside this channel is considered a plaintext kind of operation. But still, for some of them we need access to RC4, just to perform some of that plaintext operation, and we also don't want to leak encrypted material — the passwords and so on — when creating it. So we had to refactor a bit, to change the code in such a way that it generates cryptographically strong passwords, for example for trusts between different AD domains, and never leaks them to the application: the library handles them and ensures that whatever is handled is always handled under an AES-based session. It's still not enough for FIPS 140-3, but we really hope that the work they are doing in refactoring their Kerberos crypto stack will result in more changes; and of course, on the Samba side we cannot do more until they publish those specs and actually do this work. So we are still in the process of doing that. Now to the defaults. New installations are kind of easy: you change the defaults and new installations simply pick them up. For example, IPA changed the default installation to use the new encryption types for the Kerberos master key — fine, works. The same old encryption types as before are blocked by the default policy, so the KDC simply does not allow their use; but for a new installation you don't have any old keys at all — users are created, machines are enrolled, and new material gets generated. For the migration kind of environment, if users had old keys and old passwords, a password change will use the new scheme and automatically upgrade the hashes. But there are a few things that fell through the cracks here. For example, FreeIPA supports one-time passwords, using HOTP or TOTP tokens, even software tokens, and the standard still says SHA-1 is there — and in this context, using SHA-1 is okay, because it's not the kind of cryptographic operation that matters here — but you cannot load SHA-1 from the FIPS provider, so you cannot really use it; you need to do this dance of loading another provider in a different context to be able to operate. It's not available if the default policy does not allow you to use it by default. And the worst part here is not that we cannot fix our side — we can — it's that other software will not work. For example, Google Authenticator only understands SHA-1-based tokens; if you switch — and people are switching — to different tokens using SHA-256 or SHA-512 and so on, they cannot use Google Authenticator. Strangely enough, everybody else implements a wider spectrum, because it's easy to support different SHA hash functions there. These are the kinds of things that appear from field testing.
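Since HOTP and TOTP keep coming up, here is a rough sketch of the RFC 4226 HOTP computation using OpenSSL 3's EVP_MAC interface — my own illustration, not FreeIPA code. The point is that the digest is literally requested by the name "SHA1", so whether this works depends on which providers the library context in use can fetch it from.

/* HOTP (RFC 4226) with HMAC-SHA1 through EVP_MAC. */
#include <stdio.h>
#include <openssl/evp.h>
#include <openssl/params.h>
#include <openssl/core_names.h>

static long hotp(const unsigned char *key, size_t keylen, unsigned long counter)
{
    unsigned char msg[8], mac[EVP_MAX_MD_SIZE];
    size_t maclen = 0;
    char digest[] = "SHA1";                 /* the hash most tokens still expect */
    long code = -1;

    /* 8-byte big-endian counter, per RFC 4226. */
    for (int i = 7; i >= 0; i--) {
        msg[i] = counter & 0xff;
        counter >>= 8;
    }

    EVP_MAC *mac_algo = EVP_MAC_fetch(NULL, "HMAC", NULL);
    EVP_MAC_CTX *ctx = mac_algo ? EVP_MAC_CTX_new(mac_algo) : NULL;
    OSSL_PARAM params[] = {
        OSSL_PARAM_construct_utf8_string(OSSL_MAC_PARAM_DIGEST, digest, 0),
        OSSL_PARAM_construct_end()
    };

    if (ctx != NULL
        && EVP_MAC_init(ctx, key, keylen, params)
        && EVP_MAC_update(ctx, msg, sizeof(msg))
        && EVP_MAC_final(ctx, mac, &maclen, sizeof(mac))) {
        /* Dynamic truncation, per RFC 4226. */
        int off = mac[maclen - 1] & 0x0f;
        code = ((mac[off] & 0x7f) << 24) | (mac[off + 1] << 16)
             | (mac[off + 2] << 8) | mac[off + 3];
        code %= 1000000;                    /* 6-digit code */
    }

    EVP_MAC_CTX_free(ctx);
    EVP_MAC_free(mac_algo);
    return code;                            /* -1 means HMAC-SHA1 was unavailable */
}

int main(void)
{
    const unsigned char key[] = "12345678901234567890"; /* RFC 4226 test key */
    printf("HOTP(0) = %06ld\n", hotp(key, 20, 0));
    return 0;
}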
Finally, data migration. If you have old systems and you add new systems, you are supposed to have business continuity: the data replicated from the old systems should continue to be useful on the new ones. So take a RHEL 8 system installed in FIPS mode with IPA — we don't ship Samba AD, so I'm using IPA as the example here — and a RHEL 9 system with FIPS 140-3: in general you shouldn't expect things to work, and indeed they kind of don't, because what was allowed in 140-2 is not allowed anymore in 140-3. In the case of Samba, which is more relevant to other distributions and so on, they actually have the plaintext of the password, encrypted, in the database — Samba uses GPG to wrap all these blobs, so they are always stored encrypted, but Samba has access to them — so administrators can regenerate keys for the users with the new material. That is all sort of possible to do, well, once Microsoft introduces support for those new encryption types, because they have to be compatible with the Windows machines, and Windows machines do not support them yet; but in that imaginary case, yes, you can do all of this, because you have all the material. Also, the domain members actually have their plaintext credentials encrypted on the machine, so they can regenerate them and authenticate against the domain controllers. In the FreeIPA case it's more interesting, because we don't have plaintext passwords anywhere: on the Kerberos level we have Kerberos encryption keys, and for the LDAP passwords we actually have hashes, so there is nothing to regenerate from. Of course, there is a mechanism to transparently upgrade via LDAP bind: in an LDAP bind you can take the plaintext password, apply the older scheme, see that the hashes are compatible so you can compare them, and then re-encrypt — this is what we are already doing in the plain Red Hat Directory Server case. So it's possible, but the problem is that FreeIPA supports more than passwords. FreeIPA supports non-password-based authentication, which you cannot do over LDAP bind. OTP and HOTP are supported; smart cards are not supported that way — they are supported only through Kerberos, so you have to solve it there. But luckily, a smart card means you don't have a password: you have a cryptographic device and possibly a PIN, and the PIN applies to the device, not to something stored on the server, so this problem does not exist there at all. We are working on introducing FIDO2 support — WebAuthn authentication — and those kinds of keys will also be supported; they also don't have passwords, so that's maybe the best way: at this point you can just start thinking about migrating all of your users to non-password authentication and solving it that way. But of course, to migrate the old system, or to apply these changes while keeping the old system working, you have to extend the policies, and this is where sub-policies become really crucial. You can extend with sub-policies, do the migration — because this allows accepting the old keys and so on — migrate everything, and change the policy back, and now you are in the new world. Of course, that has to be FIPS from scratch on the old systems and FIPS from scratch on the new systems, but it at least gives you a path forward. On the client side it's also a bit easier, because you have host keys — you can rotate them, you can automate things — and the same system-wide crypto-policies apply there consistently. Well, if you are using distributions where you don't have crypto-policies, then my suggestion is to work with the maintainers of those distributions and raise your needs with them as developers; this is one of the best not-yet-upstreamed extensions that the Red Hat crypto team has made. And the other thing is that you can switch to certificates: FreeIPA supports enrollment using certificates now, so instead of one-time passwords or admin passwords and so on, you can use PKINIT-based enrollment of the systems — re-enroll the system and you get everything in place, and you can use this also to rotate the keys.
There's a pretty flexible way of achieving this; you can get quite far with it. So that's literally all — what we wanted to do here was shed a bit of light into those darkened forests. Any questions? The question is: has there been any discussion about FIPS requirements with Microsoft, or with the rest of the industry, and has that discussion been public or private? Yes and no. We try to discuss it in many places. For Kerberos, for example, there is the Kitten working group at the IETF that handles the protocol evolution, and Microsoft recently came forward there with some of their concerns. They were discussing things not related to FIPS directly, but you can deduce from the discussion that they are concerned about this migration. A major concern for Microsoft is that they still have to support configurations where Kerberos is not used, where NTLM is a fallback, and they cannot easily disable that fallback. There are also discussions that we have — in particular Julien — in the OpenSSL upstream, the Heimdal upstream, and with the MIT Kerberos upstream; this is all public, in the pull requests or issues on the project sites. Julien also works on the updates to the RFCs for these group requirements; hopefully we will move this forward. Clemens adds a comment that there is a cryptographic module users forum — I think it's on GitHub, on some page — which is the place where NIST and the labs and all the other module implementers talk; it's the public one. Yeah — if nobody has died, it will stay in this situation forever — exactly. Dmitry makes the comment that disagreements on interpretation of RFCs should ultimately end up in the working groups, producing fixes and modifications to those RFCs so that they are actually readable, and I fully support that; I should refer to my slide where I wrote that this takes years and we probably should have started yesterday on these clarifications. I should also clarify why we care about Heimdal even though we do not ship Heimdal: for us, Heimdal represents a client — macOS uses Heimdal as its Kerberos implementation — and if you don't have macOS systems working against your server, then you get calls from customers, right? Yep. How agile is that? For me, the next thing we are going to be looking at is post-quantum, but fortunately Kerberos only cares about PKINIT for post-quantum — you don't have any signatures in the main part of the protocol, right? Can you answer? So the question is how much agility there is in the PKINIT RFCs, and the answer is: not much. They establish some level of flexibility among the explicitly specified algorithms, not new ones, so some work on extending them will be needed in the context of post-quantum. This also means that some work will be needed on actually defining what certificates mean in terms of post-quantum crypto, and that is quite unclear at this moment, because the whole story of what certificate chains will be, and the format, and what not — it's all unclear. I guess all the work on specifying that is also waiting for more definitive answers from the regulatory bodies. Yes — and a comment from Bob is that the general guidance is to extend as much as possible, to accept algorithms and whatever is there, even though the exact algorithms for signatures or certificates are not fully set in stone yet; I fully agree with that.
Just to add something: there are some mechanisms in place for agility. The main issue is that there are actually three RFCs: the original one, which includes support for RSA with SHA-1 signatures; a second one adding new hash functions, basically the SHA-2 functions; and a third one adding elliptic-curve Diffie-Hellman support for PKINIT. The thing is that Microsoft basically skipped the implementation of the second one — the one adding SHA-2 for RSA-based signatures — and moved right to the ECDH one. This is why we are in this situation right now, where in MIT we only have support for the RSA-based signatures, so the only thing we have in common with Microsoft at this point is RSA with SHA-1; that is what this whole issue is about. But we are currently working downstream on supporting the ECDH RFC, so hopefully we can fix this issue relatively soon. Okay, other questions? You mentioned on one of the slides that TOTP and HOTP were using SHA-1 and that for that reason you couldn't use them anymore — small correction, that's not entirely correct: you can use SHA-1 for hashing, just not for hashing used in signatures, which is the case in TOTP as far as I know. But by 2030, SHA-1 will be gone completely — I hope we get there. Yes, the question, or rather the correction, was that, strictly speaking, FIPS 140-3 allows SHA-1 to be used the way TOTP and HOTP use it. I think our main trouble is that we do not have access to SHA-1 through the EVP interface — it should still work for hashing and not for signatures, so it should be fine, but it doesn't work right now. Yeah, but that's what we see: customers come back with brave new deployments, they finally find that something isn't working, they start investigating, and it boils down to a certain environment — in this case it wasn't even FIPS, it was SHA-1 being disabled by default in RHEL; that bug is a different one, there are a bunch of them — it works, but if you are in a different mode, it doesn't. Okay — yeah, the comment was that there was a bug submitted just on Friday around a similar topic, where there is a difference between FIPS and non-FIPS, except FIPS works and non-FIPS doesn't. So, as you can see, this is all very fluid, with problems coming and going in a matter of days. Overall I think we are almost done — two minutes — any more questions? Yep. So the question is about my mentioning that OpenSSL has indicators. I actually mentioned that indicators are not fully implemented in upstream OpenSSL — the pull request is still open — but implementation of indicators is a requirement of FIPS 140-3, so I expect them to appear in some form regardless. And the question of which one we prefer: it really depends on the situation, I cannot answer in advance. In the case of Kerberos we have been relying on the implicit ones: if we get a rejection, we try another path, by loading a separate context and a separate provider, in the cases where we know this is the problem. For the explicit one, I have yet to find a specific case where it will be absolutely required; I know there are cases — I think SSH server negotiation is one — but I'm not the person to go into details on that. Yep — and we're done. Thank you very much.

All right, welcome everyone. My name is Simo Sorce, I'm a Senior Principal Engineer at Red Hat, and I work as the lead of the crypto team. Today I have a presentation about building an OpenSSL provider, in this case specifically for PKCS#11. I'll give a kind of high-level overview and then some tips and lessons learned. Feel free to ask questions at any time; I welcome them.
So I'm going to go through what the problem is, what a provider is, what the PKCS#11 provider is, and then what happened when I tried to write one. The problem is very simple, and it's not a new problem: in previous OpenSSL versions there was a thing called engines, and there was an engine that allowed you to use PKCS#11. The problem arose specifically with the new version, OpenSSL 3, which deprecated engines and introduced the concept of providers. And the problem is having an application that uses the OpenSSL API be able to use a hardware token generally — or even a software token — that does some cryptographic operation. So what is a provider, and why was there this change? OpenSSL had these engines, but there were limitations; sometimes they were awkward to use in applications. Providers are kind of better engines, if you want, from an application point of view: they can be transparent to applications, unlike engines, where there was a requirement that the application know how to use them and call specific APIs. There are in fact already multiple providers within OpenSSL 3 that applications don't really see, like the default one, FIPS, legacy, and so on. So an application is not required to explicitly select a provider; it can, for example, be configured in the openssl.cnf file, transparently, and loaded, and you can also set properties for a provider that change how OpenSSL uses it, or change the behavior of the provider itself. It basically gives you good configurability — agility, if you want — in the way cryptography is used by an application once it uses OpenSSL, which was not really available before, I would say. At the code level it's just a loadable module, a shared library: in openssl.cnf you define "I have this provider, this is its name", and OpenSSL can load it, find your functions, and use them. There are various management functions you have to implement — generally at least the key management operations, because when you're using cryptography you're usually using keys (there are some operations OpenSSL can do that don't involve keys; those are not very interesting, to me at least) — and you also need to support at least a way to reference these keys later in other operations through OpenSSL. But it's just a self-contained shared object that implements operations OpenSSL can call into. These diagrams are taken directly from the old OpenSSL 3.0 design, just to show you the difference in terms of where these things live: the engine is some sort of legacy thing that has its own world around it, while the provider today sits really below the core of the library — it's hidden at a very low level, if you want — and all of the providers are basically on the same level, whereas the engine was a special thing alongside many other special things. That's why a provider can be used somewhat transparently, while engines really had to be coded for specifically. So what is the difference between an application that had to use engines before and one that now uses providers? As you can see, on the left side, with engines, you see a lot of this "engine" word: you have to load the engine, you need to know what the engine is, what its name is, what the special operation is to load a key from an engine; then you do normal operations, and then you close and go away. The critical part is that you really have to change your application to use these functions.
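Roughly, that engine-side flow looks like the following — a sketch of my own, using the pkcs11 engine from libp11 as an example; the PKCS#11 URI is a placeholder. Note how much engine-specific knowledge the application has to carry.

/* Engine-style key loading (OpenSSL 1.x era; deprecated in OpenSSL 3). */
#include <stdio.h>
#include <openssl/engine.h>
#include <openssl/evp.h>

int main(void)
{
    ENGINE_load_builtin_engines();

    /* The application must know the engine by name... */
    ENGINE *e = ENGINE_by_id("pkcs11");
    if (e == NULL || !ENGINE_init(e)) {
        fprintf(stderr, "pkcs11 engine not available\n");
        return 1;
    }

    /* ...and use an engine-specific call to load the key. */
    EVP_PKEY *pkey = ENGINE_load_private_key(e,
            "pkcs11:token=mytoken;object=mykey", NULL, NULL);
    if (pkey == NULL) {
        fprintf(stderr, "key not found\n");
    } else {
        printf("loaded key, type %d\n", EVP_PKEY_base_id(pkey));
        EVP_PKEY_free(pkey);        /* normal EVP operations would follow */
    }

    ENGINE_finish(e);
    ENGINE_free(e);
    return 0;
}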
On the provider side, if you use the modern OpenSSL API, you just open a store — which can be anything, even a normal file — so you can have an application that knows nothing about your provider. Again, there is no special knowledge of where this key is coming from or what it is; you do your operation and you close. In terms of the amount of code it's comparable, but the good thing is that the URI, which is the only identifier that differs, usually — if all goes well — doesn't even come from the application: it comes from some configuration file, or maybe from some prompting if it's a utility, so it can be totally transparent to the application itself. If the application is well written, you can test with PEM files for keys, then bring in your provider that has the keys in a completely different place, and the application just keeps working fine without knowing anything about it. So from the point of view of how an application is architected, it makes it much easier to plug in modules and to send patches to many applications. Of course, this doesn't work backwards in time — it's for new applications that use the new API — but that's okay, I think. So what providers are available, besides the internal ones? There are a few good pointers in this GitHub project, and there are two I consider notable, because I used them a lot to learn how external providers should behave. One is the tpm2-openssl provider, which gives you access to a TPM, so you can use the TPM basically directly from OpenSSL; the other is the oqs-provider, which uses liboqs, which implements quantum-safe algorithms. Both of those providers were already quite well written, I would say, in terms of the number of operations they implement — I'm not making a quality statement, just talking about the extent of the APIs they were using — and they were very useful for learning more about providers before I ventured into writing this one. So let's digress a little: what is PKCS#11? Before we go on, you need to understand a bit of what it is to figure out why, and what, I did. PKCS#11 is fundamentally an API — a C API, specifically — that mediates access from a generic application to a generic hardware token: an HSM, a smart card, a YubiKey, whatever you have. It's very simple in concept, the standard is managed by the OASIS group, and it defines how an application talks to a shared module. It doesn't implement or define the protocol for how the driver talks to the actual hardware — that is completely outside its scope — so it basically creates an abstraction layer over potential hardware communication, but it doesn't itself have anything to do with hardware. That's important, because it means you really need yet another piece of software underneath; you're not just calling and talking directly to some hardware. The other thing is that it's not a monolithic thing: there are many APIs defined in PKCS#11, but not all of the drivers implement all of them, which also means you have to be a little careful about what you're going to use, because maybe one hardware token with its driver will support a given API and another will not. So it is not the kind of API collection where you can just write something, test it, and have it work everywhere — it really inherits the hardware limitations, in some sense — and it continues to evolve as new cryptographic primitives, new protocols, and new algorithms are created.
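For contrast with the engine sketch above, the store-based flow described at the start of this passage looks roughly like this — again my own sketch; the pkcs11: URI is a placeholder that would normally come from configuration, and it could equally be a file: URI pointing at a PEM key.

/* Provider-era key loading through OSSL_STORE: no engine-specific calls. */
#include <stdio.h>
#include <openssl/store.h>
#include <openssl/evp.h>

int main(void)
{
    EVP_PKEY *pkey = NULL;

    OSSL_STORE_CTX *store =
        OSSL_STORE_open("pkcs11:token=mytoken;object=mykey",
                        NULL, NULL, NULL, NULL);
    if (store == NULL) {
        fprintf(stderr, "cannot open store\n");
        return 1;
    }

    /* Iterate over the objects behind the URI and keep the first key. */
    while (pkey == NULL && !OSSL_STORE_eof(store)) {
        OSSL_STORE_INFO *info = OSSL_STORE_load(store);
        if (info == NULL) {
            if (OSSL_STORE_error(store))
                break;
            continue;
        }
        if (OSSL_STORE_INFO_get_type(info) == OSSL_STORE_INFO_PKEY)
            pkey = OSSL_STORE_INFO_get1_PKEY(info);
        OSSL_STORE_INFO_free(info);
    }
    OSSL_STORE_close(store);

    if (pkey == NULL) {
        fprintf(stderr, "no key found\n");
        return 1;
    }
    printf("loaded key, type %d\n", EVP_PKEY_base_id(pkey));
    EVP_PKEY_free(pkey);            /* normal EVP operations would follow */
    return 0;
}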
So now we can say what the PKCS#11 provider is: it takes this API and plugs it into OpenSSL with a middle layer — one that implements the provider API and talks to whatever driver you want to use for whatever hardware you have. It's just another abstraction layer in the middle somewhere, and eventually you go and talk to the hardware, or a software token, or whatever it is that sits at that address. So what are the goals for the PKCS#11 provider? The main goal for me was, of course, to make it possible to use PKCS#11 tokens with OpenSSL 3, because the engine interface is rapidly degrading in its ability to work well due to the deprecation, and we wanted to look forward and use the native OpenSSL APIs so that eventually we can stop using it. But the thing that really intrigued and interested me was the fact that, if used correctly, it can be completely transparent to the application. One of the things that in my opinion made it very hard to use hardware tokens was that whenever you wanted to use one, you had to go to the application developer and ask: can you start using this engine API, in this special case, for me, so I can use this token with the application? With providers, there is a good chance that you can actually send patches to change how the application works, but it's not a special case anymore — it's just the common case: depending on how you configure OpenSSL, you will use either the standard cryptography within OpenSSL or the provider. The other thing is that you can use standard PKCS#11 URIs to define how to get access to a key. It's a very well-done standard, I think; it gives you all the tools and tweaks you need to identify a key in a token, so that you can uniquely identify the key, pull it when needed, and use it — and I haven't felt any need to do anything else, so I think it's complete from that point of view. Another thing I wanted was to get to a point where we can configure the PKCS#11 provider in a system like Fedora or RHEL by default, as a standard provider in the default configuration, and yet not have undesirable side effects in applications unless the application is actually using the provider — in which case it's not a side effect, it's wanted. And of course there is also the case where an application wants to explicitly force the use of the provider; in that case it will use a property like provider=pkcs11, and that just needs to work. The final goal was to make it work with as many actual hardware tokens as possible, because, as I said, many of these tokens support only a subset of algorithms and a subset of functions; sometimes they have quirks where they kind of support something, but not in the correct way, and many other strange things — because hardware, you know, is kind of hard sometimes — and the same goes for software tokens. So these are the goals for the project. So, how can you use it? I'm not going to go into detail and give you the exact configuration — that can easily be found online — but fundamentally it's just a little snippet of configuration in the main OpenSSL configuration file. You need to define where the module is, where you compiled it; you need to define which PKCS#11 driver you want to use; and you might want to define a PIN, because normally what happens with hardware tokens is that you have to have a PIN to unlock the token to permit operations.
Setting the PIN in the configuration is not required — there is support for prompting if, for example, you're using the openssl commands in an interactive session, so you can't avoid that — but generally, for services running on some server, you're okay setting the PIN in the configuration so the machine can access the token directly without any dance around entering the PIN and keeping it in memory or whatever. That's up to the users. Then you can run a command like the one you see there, openssl storeutl, to test the configuration, and you should see something like this: it connects to the token, uses the PIN if one is set in the configuration or prompts you for it otherwise, fetches the objects from the token, and prints them out. With the -text option we print — we implemented encoders explicitly for this — a bit more information than OpenSSL normally prints when it pulls keys. For example, we print the URI, because that's a URI you could use if you want to use the key you just found. If you have a token and you generated some key on it, how do you now find the key you want to use in an application? Just use this command: it lists all the keys you have, you find the one you like, and you can put that URI in the configuration to use it. In this case it found 12 keys; I elided the output to fit it on the screen, but it prints this information for every one of them. All right, so in order to get there I had to learn how to write a provider. That was interesting — I like doing that, I really love having to dig into the code — but it's a bit of a daunting proposition, because the documentation is really sparse. If you read the provider man page you get some information, but you won't go very far in terms of what it really means to write a provider; it's mostly about how you use one, and it's very generic. So you end up reading a lot of source, and there are two things to know. There is the OpenSSL source, of course, and reading it gives you a lot of hints, although the internal providers take shortcuts, even just in the initialization code, which is what I was looking at when I started. So you probably end up also reading external provider sources, to figure out the differences and the things you have to do when the module is built outside of OpenSSL. The thing that makes it really hard, more than the amount of source to read, is that OpenSSL has an extreme level of indirection everywhere, and very hard-to-read, multiply nested macros everywhere, so sometimes reading the source code leaves you more confused than not reading it. When that happens I shell out to gdb, set my breakpoints, and just go and see what happens; maybe I take some backtraces, then go back and try to find the functions with an understanding of where things are coming from and where they are going, and then you have a better idea. Sometimes gdb also gets confused because of the extreme macro indirection. It's fun in a way — sometimes you swear a little — but in the end it works. So the first hurdle I had was initialization, but from that point of view, once you know the trick, it's actually easy: the only thing your shared object has to expose is basically just one function, and that makes it easy — it's this function called OSSL_provider_init. That's all you need; done, you can go.
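A minimal sketch of that entry point, assuming a toy provider (all names except the OpenSSL symbols are invented); real providers, including the internal ones, allocate a provider context and fill in more dispatch entries here.

```c
#include <openssl/core.h>
#include <openssl/core_dispatch.h>

/* Answer OpenSSL's "what operations do you implement?" question.
 * A real provider returns OSSL_ALGORITHM tables here (see the next sketch);
 * this toy always answers "nothing". */
static const OSSL_ALGORITHM *toy_query(void *provctx, int operation_id,
                                       int *no_cache)
{
    *no_cache = 0;
    return NULL;
}

static void toy_teardown(void *provctx)
{
    /* free the provider context here, if one was allocated */
}

static const OSSL_DISPATCH toy_dispatch[] = {
    { OSSL_FUNC_PROVIDER_QUERY_OPERATION, (void (*)(void))toy_query },
    { OSSL_FUNC_PROVIDER_TEARDOWN, (void (*)(void))toy_teardown },
    { 0, NULL }
};

/* The one symbol the shared object has to export. */
int OSSL_provider_init(const OSSL_CORE_HANDLE *handle, const OSSL_DISPATCH *in,
                       const OSSL_DISPATCH **out, void **provctx)
{
    *out = toy_dispatch;
    *provctx = NULL;   /* a real provider allocates its own context here */
    return 1;
}
```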
That function returns what is called a dispatch table, which is a function table that points to multiple functions, and one of them — kind of the most important one — is the one that allows OpenSSL to query for operations, which basically tells it what operations the provider can actually do. Each of those queries returns another function table, which OpenSSL will cache and use at some point to do whatever operation you need. So, for example, you can implement the OSSL_OP_SIGNATURE function table — there are a bunch of them — but if you do that, you also need to implement a few more, because OpenSSL kind of expects that for some operations you must have other operations working as well, even though conceptually you could implement just that single operation. So you end up having to build other operations too, but it's okay, it's doable. What are operations, just to be clear? For example, if you want to implement your own version of an algorithm for whatever reason — for fun, because you have a special hardware module or a special CPU instruction, or you just want to run different code — you will use a name that is either recognized or not recognized by OpenSSL. If you use a name that is recognized, like RSA, it means OpenSSL can use your provider even when the application just generically asks for RSA; but if you implement something like a quantum-safe algorithm, you will have different names that OpenSSL doesn't necessarily know ahead of time. Either way, there is a name that identifies the algorithm you are implementing, beyond the operation type. Then you have constructors, destructors, and get and set parameters — almost all of the operations have some variant of those — and then you implement your specific functions. For example, if you want to implement signatures, you will have something like OSSL_FUNC_SIGNATURE_SIGN_INIT defined and a function that implements that operation, and then of course update and final, and verify, verify-init, verify-update, verify-final, and so on. So this is what a function table kind of looks like, at least in my code — and here you can see that I fell into the same trap of using macros, so it's not actually a real C structure in this form, but it makes it more readable, and in the end it was okay. I picked this as an example because it's one of the shortest signature structures, since EdDSA doesn't support various things: you basically have three functions that create, free, or duplicate a context — a context keeps various information about what kind of operation you are going to do — then at the bottom you have a way to get and set parameters, maybe the digest you want to use for the signature or other needed parameters, and then you have the actual sign and verify functions. That is all you need to implement to basically do EdDSA in OpenSSL.
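To give a feel for that, here is a sketch of what the query answer and one signature function table look like, with made-up names and placeholder bodies — the shape of the thing, not the actual pkcs11-provider table; the exact callback prototypes are in provider-signature(7).

```c
#include <stddef.h>
#include <openssl/core.h>
#include <openssl/core_dispatch.h>

/* Placeholder implementations: a real provider would create a context,
 * talk to its backend, and fill sig/siglen with an actual signature. */
static void *sig_newctx(void *provctx, const char *propq) { return provctx; }
static void  sig_freectx(void *ctx) { }
static int   sig_sign_init(void *ctx, void *key, const OSSL_PARAM p[]) { return 1; }
static int   sig_sign(void *ctx, unsigned char *sig, size_t *siglen,
                      size_t sigsize, const unsigned char *tbs, size_t tbslen)
{ return 0; }
static int   sig_verify_init(void *ctx, void *key, const OSSL_PARAM p[]) { return 1; }
static int   sig_verify(void *ctx, const unsigned char *sig, size_t siglen,
                        const unsigned char *tbs, size_t tbslen) { return 0; }

/* The function table for one signature algorithm. */
static const OSSL_DISPATCH sig_functions[] = {
    { OSSL_FUNC_SIGNATURE_NEWCTX,      (void (*)(void))sig_newctx },
    { OSSL_FUNC_SIGNATURE_FREECTX,     (void (*)(void))sig_freectx },
    { OSSL_FUNC_SIGNATURE_SIGN_INIT,   (void (*)(void))sig_sign_init },
    { OSSL_FUNC_SIGNATURE_SIGN,        (void (*)(void))sig_sign },
    { OSSL_FUNC_SIGNATURE_VERIFY_INIT, (void (*)(void))sig_verify_init },
    { OSSL_FUNC_SIGNATURE_VERIFY,      (void (*)(void))sig_verify },
    { 0, NULL }
};

/* Name(s) under which the table is advertised; "ED25519" is just an example. */
static const OSSL_ALGORITHM toy_signatures[] = {
    { "ED25519", "provider=toy", sig_functions, "toy signature example" },
    { NULL, NULL, NULL, NULL }
};

/* The query-operation callback (wired into the provider dispatch table as in
 * the previous sketch) hands back the table for OSSL_OP_SIGNATURE. */
static const OSSL_ALGORITHM *toy_query(void *provctx, int operation_id,
                                       int *no_cache)
{
    *no_cache = 0;
    return operation_id == OSSL_OP_SIGNATURE ? toy_signatures : NULL;
}
```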
All right. Then there is key retrieval and key management, which you usually need to do anything useful — unless you're just re-implementing an existing thing in a slightly different way — and especially in the PKCS#11 provider, where keys are not directly available, because the whole point of hardware modules is that the key is stored safely in hardware and cannot be extracted in software. You need a way to go find those keys, or reference them somehow, so that later, when you do an operation, you use the key you want. The API for finding and loading keys is the store API; it just defines ways to load keys, find keys, and unlock keys through passwords, callbacks, and so on. There is also importing and exporting keys, for which I also needed another set, the encoder and decoder operations, in case you actually need to use ASN.1 to export keys in some format or import them, and so on and so forth. And finally you also need key management — these three always kind of work together somehow — and key management does things like generating keys, preparing keys for operations, and various things like that. I'm not going into the details, because we would never finish; this is just an overview of the individual areas you need to learn over time to have a complete provider. So this is how, once you've built enough pieces, things finally start to work. If you have an application and you want to do a signature operation, there are basically two steps: first you have to get a key, and then you have your data and you say, I want to sign this data. To get a key, in the PKCS#11 case, you have this URI. You call the OpenSSL API and say, I want to find a key with this URI. OpenSSL sees that it starts with pkcs11: and says, oh, I have a provider that can handle this kind of URI, so it goes into the PKCS#11 provider — there are various steps I'm not going into — and it finds what operations the provider can do, and eventually it finds that there is a store facility there and calls it. The provider then takes the URI, goes down into the PKCS#11 driver, and tries to find a key that fulfills all the filters in that URI. If it finds one, it caches something in an in-memory object store and returns a reference to OpenSSL, and OpenSSL finally creates the EVP_PKEY structure that it gives back to the application. That is the structure that lets the application reference a key whether it was stored in a file, in PKCS#11, in a TPM, or whatever. Then the next thing the application does is: I finally have a key, I have my data, I want to sign it. It sends all of that down to OpenSSL, which sends it down to the PKCS#11 provider. The provider says, oh, I've got this key, it's a reference, I know what the handle is at the PKCS#11 layer, I can set up the operation with the hardware — I can tell the hardware: this is the key object I want to use, this is the digest I want to use, whatever. It calls the driver, the driver does its own set of things, and eventually you get back a binary blob, which is the signature, and you send it back up. In some cases the PKCS#11 provider might have to massage this data, because in some corner cases — ECC, I think — PKCS#11 returns the data in one format and OpenSSL expects a slightly different one, so the provider converts it. But eventually it comes back to the application and you have your signature. From the point of view of the application, it just called OpenSSL to do a signature; everything below that first layer is potentially completely unknown to the application. I think that's the nice part.
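From the application side, that whole flow is just the normal EVP calls — a minimal sketch, assuming `pkey` was obtained through OSSL_STORE as in the earlier sketch and that a SHA-256-based scheme is wanted (for EdDSA keys the digest name would be NULL):

```c
#include <openssl/evp.h>
#include <openssl/crypto.h>

/* Sign a buffer with whatever key was loaded; the application cannot tell
 * whether the signature is computed in software or inside a PKCS#11 token. */
int sign_buffer(EVP_PKEY *pkey, const unsigned char *data, size_t datalen,
                unsigned char **sig, size_t *siglen)
{
    EVP_MD_CTX *mctx = EVP_MD_CTX_new();
    int ok = 0;

    if (mctx == NULL)
        return 0;

    if (EVP_DigestSignInit_ex(mctx, NULL, "SHA256", NULL, NULL, pkey, NULL) == 1
        && EVP_DigestSign(mctx, NULL, siglen, data, datalen) == 1   /* size query */
        && (*sig = OPENSSL_malloc(*siglen)) != NULL
        && EVP_DigestSign(mctx, *sig, siglen, data, datalen) == 1)
        ok = 1;

    EVP_MD_CTX_free(mctx);
    return ok;
}
```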
So what are the hard areas? Several times: encoders and decoders. The concept is very simple, but you have to know about ASN.1, for example, and all of the providers — the internal ones and the external ones — use at least five layers of nested macros to implement the functions behind these encoders. When I was trying to understand what was happening it was really, really hard; it was almost useless to follow what was going on, and I just went by trial and error until things worked. Eventually I understood enough to actually correct the first few mistakes I had made, and it kind of works, but it is really difficult code. Then there is name resolution and caching within OpenSSL itself, because whenever OpenSSL tries to do an operation and find which providers can complete it — RSA, for example — it has a whole world where it looks up names, tries to find them, caches things, and finds function pointers. I gave up on that one, because every time I tried to look into it the code was really complicated, and I was also trying to solve another problem that was more important. Maybe it's just me, but I just say: take it on faith, it works most of the time. Sometimes I ended up just looking at what other code was doing when something wasn't working, thinking maybe I was doing something wrong, but you can largely ignore it. It is really complicated code — maybe it would be nice to have a guide on that from the OpenSSL side — but it's not a showstopper. Those are the really hard areas; it's really hard to understand what you're supposed to do. There is still one thing I don't understand — maybe I'll ask the OpenSSL developers — about the query operation: there is a flag in the query operation that says "don't cache this", and if you set it, what happens is sort of the opposite of what I expected, and I was like, what? So I stopped doing that. It was fun. The next steps for me, or for the project in general, are integrating with applications, because as I said, you have to use the OpenSSL 3 API fully to make use of the providers. We did some testing within the team; we believe the SSH stuff mostly works — some bugs were found, but we should be able to get that working — and I want to work on ISC BIND, SSH, and mod_ssl to make them use this instead of engines. That's where we'll go next. A lot of bugs were found in the making of these slides, but nobody was hurt: PRs were opened, there were discussions upstream, the OpenSSL developers were mostly very gracious, helpful, understanding, and accommodating, and we fixed things. There are still some things being discussed, and there are still areas where the provider API could be improved to make things easier or possible at all, but overall it has been a very fun experience for me. So thank you all, and if you have any questions — yep, go ahead. My question is about the FIPS module as an example: the FIPS module configuration has two providers, one named base and one named fips, and I was told that in that case the base is used for decoding and the fips one is used for the encryption, so I wonder how OpenSSL knows which provider is used for the decoding. So the question is: in FIPS mode, when you want to use the FIPS provider, there are two providers you actually have to configure — one is called the base provider and the other is the FIPS provider — and the base provider does the encoding and decoding while the FIPS provider does the cryptographic primitives, so how does that work? Something I didn't say is that, although there is this impression that a provider is monolithic, like it is one thing, from the point of view of OpenSSL each provider fundamentally gives you a palette of implementations to choose from. Each operation is kind of its own provider within the provider, and each one of them has a name and a function table associated with it. So when OpenSSL needs to do an operation — say, I need to decode something — it will query all of the providers (it's more complicated than that, but let's say it queries all of them) and it finds which of them answer, "I can handle this."
Then OpenSSL has some logic to choose which of those providers it will use. So in the case of base versus FIPS: the FIPS provider doesn't provide any encoder or decoder functions, so whenever OpenSSL asks to decode a file the FIPS provider will not respond, but the base provider will say, I can decode this, and that's how OpenSSL knows to use the base provider for that. What is more complicated is what happens when one provider decodes a key, for example, and then you want to use it with another provider — but I won't go there unless you ask for it. Then another question: it seems like the important thing, to port applications, is the switch to using this OSSL_STORE; how do we then get the automatic association with a specific provider? Say I've got a PKCS#11 URI — how does it know to use your provider for that URI versus some other provider on the system? So the question is: if an application switches to the new OSSL_STORE API, which is what is needed to use providers, and passes in a URI, how does OpenSSL know it has to use the PKCS#11 provider rather than another provider? There are multiple mechanisms that influence which provider is used. First of all, providers can register with OpenSSL as a handler for URI schemes, so the PKCS#11 provider registers with OpenSSL saying: I can handle anything that is pkcs11:-something. So whenever OpenSSL is given a URI in the store — or a plain path, because there is a shorthand for files — it asks which provider handles this scheme; for pkcs11: it will find that the PKCS#11 provider handles it, and it will call the store operations in that provider. What happens after that is that the PKCS#11 provider returns an object reference for a key, for example, which gets embedded into an EVP_PKEY structure. When you then do an operation with that EVP_PKEY, OpenSSL will go and look into it and say: oh, this key is owned by this provider; let's see if this specific provider can handle the operation being asked for, and if it does, OpenSSL will prefer to use that provider. If that provider does not support the operation, OpenSSL will ask the provider: can you export this key, so it can be used in another provider? So that is the general mechanism, but then you have things like properties, for example, to tell OpenSSL: you should really only ever use providers that expose this property, and never anything else, even if they claim to support the operation. This is how the FIPS provider is actually used, because the FIPS provider is fundamentally the same code base as the default provider; the difference is that when you set the property fips=yes, OpenSSL will only use a provider that exposes that property, which is the FIPS provider. So there is this selection mechanism within OpenSSL — the application can pass it in, or it can be set in the configuration file — that allows you to select specific providers when multiple of them offer the same functionality. That's how it works internally.
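A small sketch of what that property-based selection looks like from application code, assuming the property names mentioned here ("fips", and a provider named "pkcs11"):

```c
#include <openssl/evp.h>

void selection_examples(EVP_PKEY *pkey)
{
    /* Only accept implementations that advertise the fips property. */
    EVP_MD *md = EVP_MD_fetch(NULL, "SHA2-256", "fips=yes");

    /* Explicitly pin operations on this key to the pkcs11 provider. */
    EVP_PKEY_CTX *pctx = EVP_PKEY_CTX_new_from_pkey(NULL, pkey, "provider=pkcs11");

    /* ... use md / pctx ... */
    EVP_PKEY_CTX_free(pctx);
    EVP_MD_free(md);
}
```

The same kind of property string can also be set as a default for the whole process, for example with EVP_set_default_properties() or in the configuration file, rather than on each call.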
Yes — I wanted a little more on that: can I do that, or will the provider have to know about it, if I'm going to load more than one module? OK, so the question is whether you can load more than one PKCS#11 module — my question back to you: you mean more than one PKCS#11 driver? Yes. Well, the PKCS#11 provider is itself a module, which gets loaded dynamically into OpenSSL if there is configuration for it, and the provider itself currently loads one driver. So if you want to use multiple tokens, there are multiple strategies you can go for. One is that different applications use different tokens: you have a custom openssl.cnf file for each application, and in each one you set the driver. Or you set an environment variable instead of setting the driver in the configuration, and you set that environment variable per application; that's another way we support loading different drivers, and it's what we use, for example, in CI when we test different drivers. Or you use something like p11-kit, which can aggregate multiple drivers underneath — it's basically a proxy. So there isn't a single way to go; it depends on the situation, what you need to use, and how you need to use it. I have never tested whether I can load the same provider into OpenSSL multiple times; I haven't looked at that. With the same provider and multiple tokens, what actually happens is loading through the p11-kit proxy, which basically aggregates all the devices registered with p11-kit: if you do not configure a specific PKCS#11 module, we have a build option where we load the p11-kit proxy by default, which can collate the multiple drivers available on the system by doing some discovery, and so it makes multiple drivers available. This works just fine, because through the PKCS#11 URI you can always select the specific module you want to use, even if multiple modules have keys that are named the same way, so there is no ambiguity unless you forget to set those parts. So yes, it is possible in the end to have multiple PKCS#11 modules loaded at the same time. We are out of time — thank you. Hello everyone, welcome to this very last session of DevConf. My name is Daiki Ueno; I work for Red Hat in the RHEL crypto team, like Simo. The title of this talk is a question: are our systems using up-to-date cryptography? Today I'm going to talk about a tool we are currently developing to give you some answer to that question, that is, to monitor cryptography usage on a system. Should I be a bit louder? I don't have a strong voice, so it's hard, but anyway. Here is the motivation. There is no doubt that cryptography works, but there is a problem: every cryptographic algorithm has its own lifetime, so it will eventually become vulnerable. For example, triple DES and MD5 have been known to be vulnerable for quite some time, and it turned out recently that SHA-1 collisions can be produced with moderately priced computers. It is also said that quantum computers are coming and that they will break existing asymmetric cryptography like ECDSA and RSA. So the best practice is to regularly review your system configuration and update it, if needed, to use newer cryptographic algorithms. But it's not that easy, because if you suddenly stop supporting some particular algorithm, there might be users who are still using it, and they will probably be upset; if you are providing public services, maybe you will lose customers or users, and that is bad. So the first thing you need to do is to identify how much the algorithm is actually used in the real world. To identify cryptography usage on a system, we came up with a simple idea: instrument the system crypto libraries with USDT (user-space statically defined tracepoints), attach a BPF program to each user-space program, monitor and capture algorithm usage, and generate some statistics. We have a couple of case studies from the past.
One was gathering statistics of TLS cipher usage. The next one was the SHA-1 signature tracer tool created by Alex, to check the feasibility of deprecating the SHA-1 algorithm in Fedora; the change itself was eventually rejected, but the tool proved useful. So the idea is to make this kind of tool more generic. There are many challenges in making it generic; the main concerns I've listed here are three things. One is efficiency: it's not zero-cost to attach a BPF program and monitor, but we shouldn't affect the performance of the entire system — it shouldn't become unusable. Also, in theory we can collect any information from any user-space program, but we shouldn't collect too much; we shouldn't collect users' private activity or anything like that. The last thing is maintainability: we are modifying the upstream cryptographic libraries, and there is some burden to maintain those changes when you add tracepoints, so we need to keep the maintenance cost minimal. To address these three challenges we designed an architecture and a logging mechanism. Here is the architecture diagram. Basically it consists of three components: an agent, an event broker, and some attached clients. The agent installs the BPF program into the kernel; this BPF program is attached to each target program and receives events, which it sends back to the agent, and the agent just writes them into a file on disk. There is a separate event broker process that monitors this primary log file and distributes events to the subscribed clients, and finally the resulting statistics can be used by any client. This is the textual description of what I just said. Let's move on to the logging format. Cryptographic events usually have some context attached, because — for example, in this case — sometimes an RSA-PSS signature is used, but where, and for what purpose, is unclear if we just record the bare events. So we structure it in a hierarchical manner; basically we follow the pattern used in distributed tracing. We categorize event types into two kinds. One is the context: a context means a period of time in which events, or other contexts, can occur. The other is the event, which just represents the event itself. A context, as I said, is just a container, but it can have a name associated with it, and it is identified with a 16-byte identifier; that tends to be private information, so it is obfuscated by the agent when it is received. For TLS we defined a set of context names, like TLS handshake for the client, TLS handshake for the server, certificate signing for certificate-based authentication, and also key exchange — a TLS handshake consists of multiple phases, so the contexts correspond to them. An event is just a key-value pair, a single piece of event data. For example, the protocol version is encoded as a uint16 value meaning the negotiated TLS version, and likewise the TLS cipher, the signature algorithm, the key exchange algorithm, and the group — there are more. Conceptually, contexts and events basically look like a tree, but we need to encode them, because otherwise we can't write them into a file. We chose a representation using four primitives: one is "new context", which just introduces another context under a parent, and the other three are event data — just key-value pairs with different payload types: word, string, and blob.
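Purely as an illustration of those four primitives — these type names are invented here and are not the project's actual on-disk format — the records being described look roughly like this:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative only: one record in the hierarchical log described above. */
enum record_kind {
    NEW_CONTEXT,    /* open a child context under a parent context       */
    EVENT_WORD,     /* key/value event with an integer payload           */
    EVENT_STRING,   /* key/value event with a string payload             */
    EVENT_BLOB      /* key/value event with an opaque binary payload     */
};

struct record {
    enum record_kind kind;
    uint8_t context[16];        /* 16-byte context id, obfuscated by the agent */
    uint8_t parent[16];         /* only meaningful for NEW_CONTEXT             */
    const char *key;            /* e.g. a "name" or "protocol version" key     */
    uint64_t word;              /* payload for EVENT_WORD                      */
    const unsigned char *data;  /* payload for EVENT_STRING / EVENT_BLOB       */
    size_t len;
};
```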
So, for example, if we encode these events — a handshake-client context with ID 0001 carrying a protocol-version event of 0304, and also another context, 0002, with two events — that is a kind of tree, and it can be encoded this way: a context is opened, and a string event is emitted that just assigns the name to the context, "handshake client"; then we have the protocol version event; then another context is opened, its name is assigned, and two more events are encoded. As you can see, it's too verbose, so we implemented some optimizations: 16 bytes per identifier is a lot, it basically costs a lot of disk space, and if we just saved the file in this format it would eat up your disk, so we apply a grouping mechanism that merges multiple events into a single event entry, which makes it much smaller. We also implemented a rotation mechanism: if the primary log file reaches its limit, we automatically create a backup file and open a new one. That was the logging format. We also need to modify the crypto libraries to emit this information, and we provide helper macros that exactly match the four primitives. I'm not sure it fully meets the challenges, but we tried: efficiency is addressed by the simple design of the agent — it simply writes a file and does nothing else — and by the grouping of events, so the written data stays small; for privacy we added the mechanism that encrypts and obfuscates the context ID; and for maintainability, as you see, events are described with only four generic primitives, so it should be easy to handle on both the crypto library side and the agent side. So let me show a demo — it works over the network; can you see it well? OK. As I said, the demo is installed and running: this is the agent running on the system, and this is the event broker, also working. Let's start the client and try to use TLS — sorry, I hadn't started it, so again. A more interesting example is just using a normal application, for example Maps — I actually live around here — and as you see there are some interactions: contexts are created and some information is captured. Let's save it into a binary file and capture some events, and then use the log parser: the events are now rendered as a tree, so a context has events, it spawns child contexts, and those have their own events, and it can be rendered like this in a tree format. We can also generate a flame graph with a script, similar to other profiling tools — the data is written to HTML — and you can see there were multiple TLS handshakes, and you can browse all the information. You can also import it into Grafana or any other visualization platform; I created a dashboard for this. This is the same flame graph, and you can also count the actual TLS ciphers used with a simple SQL query — you can just write something like that. OK, that's it for the demo. For the implementation: we recently started a GitHub repository, and it's all public. The core components are written in Rust, and most of them are written in async style for performance reasons; for BPF access we use libbpf-rs — there are multiple BPF libraries, but we chose that one — and the event broker uses the tarpc crate for binary-based RPC. The other scripts are written in Python, and we also provide a couple of utilities to access the logs and the event broker. We also need to modify the crypto libraries: we have experimental packages with this instrumentation in my Copr repository — we already did OpenSSL — and they can be safely installed, but be careful with them. As for future work, we eventually want to move this instrumentation upstream, and to find further use cases.
After that, we also plan to implement something for the higher-level programming languages like Go and Rust, and lastly we want to support more protocols, like SSH, IPsec, DNSSEC, PGP, and other things. The Grafana data source is currently just batch analysis, but we could also create a plugin to support real-time analysis. Also, the event broker is currently not socket-activatable — it's just a restriction in a dependency — but we could make it socket-activatable so that it is not always running on the system. So, I'm finishing a bit early, but the conclusion is that our new project, called crypto-auditing, aims to create the infrastructure needed to monitor cryptography usage on a system, and we are trying to keep it generic. The architecture of the project has been presented, which comprises the agent, the event broker, and clients, and the logging format was also presented. That's it — any questions? Yes: I saw the Copr repository for this program — so the question is that we currently have only Copr, and whether we have a plan to create an official package. That is the goal, but we should probably go upstream first; if upstream accepts it, we can make it official. That is the plan. Yes — I had numbers, but I forgot. So the question was how much performance overhead we would have if this is enabled. I think we had some numbers, but I forgot; it was around 20% if we are actively monitoring everything, but we need to re-evaluate that afterwards. Yes — the question is that if we want to monitor all the crypto primitives, we need to instrument all the crypto libraries and also copied code. That is correct, and it is in particular a problem with ecosystems that use static linking — yes, that's true. We currently focus on shared libraries, but we will probably have to find some way to address the static-linking situation. Yes — so I might not have understood your question, but — sorry, yes — the question is: if an existing user has configured their system for FIPS 140-2 and it eventually migrates to 140-3, do they need to change the configuration? It depends on the components, right, and also on how 140-3 is enforced. I think that kind of enforcement is done by crypto-policies, the system-wide crypto policies in Fedora; this tool is just monitoring and gives you a hint of how much is used. That's correct. Other questions?