Welcome to my talk. Today I'd like to talk to you about why and how you should nuke your AWS account. Let me start with a short survey: by a raise of hands, how many of you ever forgot a running EC2 instance in AWS? Okay, so you can see it's a shared problem. Hopefully, after this presentation, you can buy one of these with the money I'm going to save you; of course I mean the jet, not the car.

So, I work on a project called OpenShift Cluster Manager, OCM for short. We install a lot of OpenShift clusters on GCP and AWS, mainly on AWS because of Red Hat's offering, and because we test a lot, across a lot of versions, sometimes clusters simply fail to delete and we leave them running in AWS.

Try to delete a running OpenShift cluster in AWS and you have to deal with a lot of dependencies. You may have seen it if you ever tried to delete an EC2 instance: you delete it, and you forget to delete the volume, the disk, or something else. Understanding the whole topology of the resources in AWS is hard; you need to understand which resource depends on which other resource, and you need to be aware of the right ordering of deletion. For example, when I started doing this manual work I tried to delete a VPC; deleting the VPC was blocked by deleting the subnet, and then I couldn't delete the subnet because an IP address was still being used. So it became a real problem.

When I first started doing this manual work I figured I should automate it, because I was doing it more and more frequently, and once I had done it over and over again I understood that I also needed to handle multiple regions, be aware of the whole dependency graph, and repeat it again and again. This became an agonizing task, and I thought to myself: I should probably avoid reinventing the wheel here; somebody has probably already solved this problem. So I did some research and found there are actually several tools for handling these kinds of situations, and the best one I found was AWS Nuke, which I'd like to share some thoughts about with you today.

What was interesting, when I went to their GitHub repository (it's an open source project), was that they are dealing with pretty much the same problem we are dealing with, only with Kubernetes clusters: they install Kubernetes clusters and have the same problem cleaning up their AWS account, and we have it with OpenShift clusters. So it was nice to see, and I'd like to share with you how you can use that tool to clean up your AWS account and reduce your spend.

To use it, you start by defining a config file, which is basically a manifest. You set up which regions you want to clean; some AWS resources are tied to a specific region, some are not and are global, so for those you just write "global" as the region. Then you list the accounts you'd like to clean. And then you just run aws-nuke with -c and a path to the config file, and --profile with the AWS profile that aws-nuke will use to connect to your AWS account and clean things up.
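To make that concrete, here is a minimal sketch of such a config and how it is invoked; the account IDs, profile name and file name are placeholders rather than the ones from the talk, and the exact key spellings (for example the blocklist key) should be checked against the aws-nuke version you use:

```yaml
# nuke-config.yaml - a minimal sketch, not the config from the talk
regions:
  - global          # for resources that are not tied to a region
  - us-east-1
  - eu-west-1

account-blocklist:
  - "999999999999"  # an account that must never be touched

accounts:
  "000000000000":   # the dedicated test account to clean
    filters:
      IAMUser:
        - "my-admin-user"   # keep our own admin credentials

resource-types:
  targets:          # only go after these resource types
    - EC2Instance
    - EC2Volume
    - S3Bucket
    - ELB
```

Running `aws-nuke -c nuke-config.yaml --profile my-test-profile` only scans and lists deletion candidates; adding `--no-dry-run` is what actually performs the cleanup after a confirmation prompt, as described next.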
If you download and run this tool, you will see output that looks something like this. What aws-nuke does is connect to your AWS account, scan all the resources that you have, and list whatever it finds as a candidate for deletion. By default it will not clean anything, it will just list all the resources. If you would like it to actually do a cleanup, you need to add a --no-dry-run flag; it will then prompt you to confirm that you really want to run the cleanup, and if you answer yes it will do the actual cleanup. Do keep in mind that some resources require some waiting before they get deleted, so what aws-nuke does is try to delete a resource, and if that fails, or it finds the resource again on another scan, it reattempts the deletion, over and over again, until it has deleted everything you asked it to delete.

There are also all sorts of filtering options in the tool. In my example I'm trying to delete only, for example, ELBs, EC2 instances and S3 buckets, so I can list them all under the targets section of the config file. I can also ask aws-nuke to filter out some resources; in this specific example I'm avoiding cleaning up my own admin credentials for the account, so I can use filtering for those kinds of resources as well. There are other types of filtering options, like by value, by regular expression, or by date; I encourage you to go online and read about them.

So we started integrating with this tool and everything went buttery smooth. We have a nightly test suite that runs overnight; it takes roughly two, sometimes three hours to run, and when developers came in the morning they would see the results, and by that time the cleanup process had cleaned all the stale resources in the AWS account. It worked well for a while, and then we hit a wall. What happened was that a developer came in the morning, saw some environmental failure, sometimes something that had already been fixed, and wanted to rerun the test suite. Now imagine tests running, creating clusters in AWS, while at the same time the cleanup process is deleting them. Obviously that became a problem.

So we started thinking about how we could avoid deleting resources that are actually being used, and we realized we couldn't really know whether a resource in AWS is actually in use. One of the things that came to mind was that we don't want to rely on the test suite: a test can fail, it can crash, it can run from my laptop while I'm offline, so we couldn't really rely on asking "are you using this resource?"; there is no one to ask at that point. But the test suite runs for two, three, sometimes four hours, and never more than five; if it goes over five hours it times out and terminates. So we figured that everything in AWS older than five hours we can safely remove; we have a dedicated AWS account for the tests, so it's safe to remove.

The problem was that some AWS resources have a creation timestamp field, but not all of them. To overcome this we figured: how about we add a first-seen timestamp while we are scanning the resources? If we do it frequently enough, we can add this tag to each AWS resource, and when running frequently enough, the first-seen timestamp will be pretty close to the actual creation time of that resource. So we use the AWS client that aws-nuke already uses to connect to and scan the AWS account, and we use that same client to tag the resources while it scans them. aws-nuke connects to the AWS account and starts scanning all the resources; while doing so, it tags a first-seen timestamp on all of them, and once it finds a resource that is older than five hours, it adds it to the resources that are candidates for deletion.
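This is not the project's actual code, just a small sketch of the first-seen idea under the assumptions above (a taggable resource abstraction and a five-hour cutoff); the tag key and helper names are made up for illustration:

```go
// Sketch of the "first-seen" tagging idea, not the real aws-nuke/OCM code.
package cleanup

import "time"

const firstSeenTag = "first-seen" // hypothetical tag key
const maxAge = 5 * time.Hour      // tests never run longer than this

// Resource stands in for whatever the scanner hands back.
type Resource interface {
	GetTag(key string) (string, bool)
	SetTag(key, value string) error
}

// shouldDelete tags a resource the first time it is seen and reports
// whether its first-seen timestamp is old enough for deletion.
func shouldDelete(r Resource, now time.Time) (bool, error) {
	v, ok := r.GetTag(firstSeenTag)
	if !ok {
		// First scan that sees this resource: record the time, keep the resource.
		return false, r.SetTag(firstSeenTag, now.Format(time.RFC3339))
	}
	firstSeen, err := time.Parse(time.RFC3339, v)
	if err != nil {
		return false, err
	}
	return now.Sub(firstSeen) > maxAge, nil
}
```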
That worked great for us. We have been working with this method for a while, and it reduced our spend from looking like this, where every time we hit a quota issue or got an alert from AWS that we were spending too much money, developers came and started manually deleting stuff, to this: a much lower spend and a much flatter trend, I would say.

So let me summarize. One thing would be: try not to reinvent the wheel. If you're working on a problem that is not directly related to your business, maybe someone else has already solved it. AWS Nuke is a great tool for cleaning up AWS resources, and it's open source, so you can contribute as well; we are contributing to the project, and then everybody benefits from that. So I encourage you to look into it. I'll take any questions now.

Yeah, so we have some filtering. To repeat the question: there are all sorts of filtering options in AWS Nuke, and the question was whether there is an option to filter by text. There is an option to filter by value, for example; in this case, this is a filter that is filtering out my own user, so it's basically a text filter.

Okay, so the question was how to delete resources without tagging. If you don't want to use tags, you can consider one of the other options; for example, you can use a regular expression, or a date if you like. There are many, many options, and if none of them suits you, that's the fun of an open source project: you can contribute to it, and we will benefit from that as well. Any other questions? Okay, I guess not. Thank you.

Okay, good afternoon everyone. My name is Konstantin, I work for IBM Germany, and I'm also a Red Hat partner engineer. For the last two and a half years I have been working on enabling OpenShift Service Mesh on the s390x architecture, and in this presentation I'm going to share my experience with that. First I will make a quick overview of the service mesh architecture, then I will talk about the differences in implementation on the x86 and s390x platforms. I will also talk about the features that were previously enabled only on x86, such as LuaJIT and wasm filters, we'll talk about replacing the BoringSSL library, and we'll talk about future and possible enhancements.

So what is a service mesh? A service mesh is a dedicated infrastructure layer in a microservice architecture that enables you to add capabilities like traffic management, observability and security without adding them directly to your code; the microservice developer can therefore focus on the business logic only. The main component of a service mesh is the Envoy proxy, which runs in a sidecar container alongside each microservice rather than within it, and those sidecar containers form the mesh network.

Currently there are three implementations of Envoy available. The first one is upstream Envoy; it is used as the base for the Istio service mesh and is based on the BoringSSL library. The next one is Maistra Envoy; it is used as the base for the OpenShift Service Mesh product by Red Hat. It is a fork of upstream Envoy, and the main and biggest change in comparison with upstream Envoy is that the BoringSSL library is replaced with OpenSSL; there are also changes that enable builds on platforms other than x86, such as s390x and PowerPC. And the third one is Envoy-OpenSSL.
Envoy-OpenSSL is also based on OpenSSL, but there the BoringSSL library is replaced with OpenSSL using a more advanced technical solution; I will mention it in the next slides. Currently the status of this project is work in progress, and there are no working versions for any platform.

Envoy is written in C++ and Bazel is used as the build system. It has approximately 60 external dependencies, and each of them is a separate open source repository; some of them are pretty big and well known, such as the Google V8 JavaScript engine. From the security perspective, the Envoy build is done in an offline environment, and all code, including Envoy itself and all of these dependencies, is built from source without using any pre-built libraries or binaries, so the build takes more than one hour, depending on the hardware available on the build machine.

So what is the s390x platform? s390x is Linux compiled to run on IBM mainframe computers, and there are situations in which code that is compiled on x86 just works, but if we take the same code and run it on s390x, it crashes. The most frequent reason for such crashes is the difference in endianness, which is the order of bytes in computer memory: s390x is big-endian, x86 is little-endian. Another type of problem that we have on s390x is C compiler errors specific to s390x; the build is specifically configured so that most warnings are treated as errors, and here you can see one real example of such an error. The root cause is that character types may be different, signed or unsigned, on different platforms. And the last class of problems is Bazel build errors, which happen because some of the Bazel toolchains either do not support s390x or do not have binaries available for s390x.

By design, all communication between the microservices is encrypted, and BoringSSL is used as the TLS library in upstream Envoy. BoringSSL is Google's fork of OpenSSL, and the problem is that it is not supported on s390x, its community doesn't accept any s390x changes, and the Envoy community doesn't want to support OpenSSL in addition to BoringSSL. So on s390x we have to use either this implementation or this one, and the last one is not ready. Currently, the difference between Maistra Envoy and Envoy-OpenSSL is how BoringSSL is replaced. In Maistra Envoy, all calls to BoringSSL methods, all headers and everything, are replaced manually by the Red Hat Envoy team, and this is a huge amount of work for each release. To reduce this amount of work, the Envoy-OpenSSL project is being developed. The main idea is to have a so-called compatibility layer from BoringSSL to OpenSSL: it provides an implementation of the BoringSSL APIs sufficient to build Envoy against it, and there is a mapping between BoringSSL and OpenSSL methods, so when Envoy calls some BoringSSL method, the corresponding OpenSSL method is loaded dynamically from the OpenSSL shared library, using the dlopen C function and others. You can read more details by clicking this link.

Initially GCC was used as the main build tool in Envoy, but about two years ago the community changed the build tools and switched to LLVM clang, with LLD as the linker. This caused a lot of problems on s390x, because LLD is not supported on s390x, so we had to switch to ld.gold, and there were more problems; for example, the integrated assembler wasn't supported,
so we had to disable it on s390x. Then we had to manually define missing numeric types, replacing them with the existing double or float types in clang, and we had to fix new s390x-specific warnings by either changing code or disabling those warnings.

Here is the summary of the Envoy dependencies that were changed or replaced to enable the build on s390x. First, BoringSSL is replaced with OpenSSL. Another project is LuaJIT, the compiler for Envoy filters written in the Lua language; it was replaced with LuaJIT2, which is OpenResty's fork of LuaJIT, and it supports s390x. Another problem was the ANTLR parser dependency: we found a bug where, passing a deliberately incorrect string as input, Envoy crashed because the parser crashed. Debugging pointed to some internal functions within the libstdc++ string implementation, and I couldn't find the root cause; I could only find a workaround, which is to disable the string length optimization during the build, and now, if an attacker passes incorrect input, Envoy won't crash on s390x. Another change was to Google QUICHE and its HTTP/2 implementation, fixing an endianness problem by swapping bytes. And the last change was to the Envoy WebAssembly extensions.

WebAssembly, or wasm, is a portable binary format for executing code, and it runs in a memory-safe sandbox, so users can write filters, compile them, and dynamically load them as wasm modules without stopping or rebuilding Envoy. Currently on s390x we support two wasm runtimes, the Google V8 JavaScript engine and wasmtime, and to support them we had to change the proxy-wasm Envoy dependency. Let me tell you a little bit more about that. Initially, when we tried to build wasm modules on s390x, we hit a build problem: we could only build extensions written in the Rust language, and we couldn't build extensions written in C++, because that requires the Emscripten toolkit, which is not supported on s390x. But even if we built them on x86 and copied the wasm modules over to s390x, we experienced a crash, a wasm VM crash. After an investigation, the problem turned out to be that a wasm module, by design, always keeps everything in little-endian byte order on any platform, and the wasm runtimes, for example Google V8, support this on s390x, but the Envoy proxy-wasm code didn't. I had to find all the places that serialize or deserialize data, everything that reads or writes data from or to a wasm module between the proxy-wasm extension and the wasm VM. The proxy-wasm maintainers were s390x friendly, and after several code reviews and several pull requests they finally accepted all the changes, so now wasm extensions are supported in Envoy on s390x. You can read more details by clicking this link.

For now, almost all features that are enabled on x86 are supported on s390x, but we can still improve some things. If we enable Emscripten on s390x, we could build and run all the tests from the master Envoy repository, including building wasm in any language. The next improvement could be providing a real fix instead of a workaround for the ANTLR parser, then also contributing to the Envoy-OpenSSL implementation, and monitoring the development of Ambient Mesh, which is a new proxy in the Istio organization; there is no working version currently, but it is still under active development.

Okay, so that's all. Now almost all features are supported on s390x, but permanent maintenance work is required, because every day new changes are being added to Envoy and to its dependencies, and potentially any of these changes could break the s390x build.
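Most of the crashes described above (the QUICHE fix, the proxy-wasm serialization fixes) come down to the byte-order difference between the platforms. As a minimal illustration, not taken from the talk, the same four bytes decode to different numbers depending on endianness:

```go
// Why byte order matters: the same bytes, read with different endianness.
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	raw := []byte{0x00, 0x00, 0x00, 0x01}
	fmt.Println(binary.LittleEndian.Uint32(raw)) // 16777216, how x86 interprets them
	fmt.Println(binary.BigEndian.Uint32(raw))    // 1, how s390x interprets them
}
```

Code that writes a value assuming one byte order and reads it back assuming the other, as happened between the wasm modules and the s390x host, silently produces garbage until the serialization points are fixed.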
So, thank you for listening. If you have any questions you can find me later, or if you have time now... okay, okay, then feel free to contact me.

Okay, hello everybody, my name is Ori, and I'm excited to be here to share a success story of my team. It all started when we decided to expose our service to new audiences, which presented new challenges that resulted in a lower adoption rate. We looked back into the past, found a method from the aviation industry that is almost 100 years old, and adapted it to our service. Does your service or product depend on the user's cloud? If so, maybe you can take some key takeaways from this session.

In the year 1935 a prototype Boeing B-17 crashed in Ohio, killing two pilots. After that, a group of engineers decided to come up with a new method, called the pre-flight checklist, to ensure flight safety. This is still performed today before each flight; a pre-flight checklist can include checking the fuel quantity, the baggage weight, the air crew documents, and more. Like airplanes, our service is also in the cloud, so we decided to adapt this method.

I'm a developer in the OpenShift Cluster Manager team. OpenShift is a Kubernetes-based platform that enables developers to deploy their applications. You can use our managed service, at the link above, to create clusters with a single API call, to deploy more machines, to schedule upgrades to get the latest version, to get metrics about your cluster such as CPU usage and memory usage, and more.

How does it work at a high level? The user makes a single API call to create a cluster. Our service starts validating the cluster spec and running some business logic, and if everything goes fine we continue to the installation phase, triggering the OpenShift installer; if anything is wrong, we return an error to the user, who can fix it and retry. After about 40 minutes, if the installation is successful, you can deploy your application on the cluster; if it went wrong, you get an error, you have to investigate it, retry, and start the whole process again.

Up to this point everything worked fine. Then we decided to expose our product to new audiences by introducing the customer cloud subscription. On the left-hand side you can see our previous offering: if you wanted a cluster, we had a pool of pre-configured, ready AWS accounts; we would allocate one of them, create a cluster, and give you access to the AWS cloud so you could manage your cluster. Now, the customer cloud subscription has many advantages: first, it costs less, you have your own customized account, you can bring your own VPC, and more. But then, all of a sudden, we started to see a lot of installation failures; sometimes the account is not configured properly, or it is missing quota.

I want you to imagine a user trying to create a cluster. It moves to the installation phase, and they are sitting in front of the monitor waiting 10 minutes, 20 minutes, 30 minutes, just to find out that the cluster is in an error state. How frustrating is that? They now have to delete the cluster, delete resources in the cloud, call support, and start all over again, and there is no guarantee that the next time will be successful; maybe something else will be wrong with the cloud account. So this is not a good user experience, and probably not the way to attract new customers to our service.

So what do we do? We go back to the pre-flight checklist, and we add a new layer in our service. You can see the purple rectangle, the preflight layer, where we make multiple API calls
to the AWS cloud of the user, and we verify that the account is ready for creating a cluster. Just like an airplane needs fuel for a flight, a cluster needs resources: elastic IPs, EC2 instances, load balancers. Just like an air crew needs documents and permissions, our service needs permissions to provision a cluster and manage it in the user's account. From here we have two options. Option number one: if anything is wrong with the account, we return a bad request; the user can adjust the AWS account and start all over again, and we have prevented an installation failure. If everything is fine, we move to the installation phase with a much higher success rate.

Just before that, to be honest, I have to say there is a trade-off here. First, previously we could return a response to the user in one second, but now we are making multiple API calls and it takes much longer; the user can wait up to 10 seconds. The second thing is that this is not 100 percent bulletproof, because we are checking the user's account at a very specific point in time, like a snapshot. So if the account was valid and ready to provision a cluster, and then we moved to the installation phase and something changed, we don't have any protection against that.

Let's see how it looks from the user's perspective. For that we are going to use the UI. We create a cluster and choose the AWS cloud. Here in the wizard we have the option to configure the cluster spec; one option is to choose next and get the default configuration for the cluster. We specify the AWS account, then a set of roles to grant permissions in the AWS account, and here you have the option to click next and get the default spec, so we go quickly through this, and at the last step we are going to make an API call to create the cluster. We choose the cloud region, the version, set the machine pools, the network configuration and so on. At the last stage we see the cluster spec summary, and then, clicking create cluster, we make the API call, and now the preflight checks are running in the background.

In this case we can see: you need at least one available elastic IP address to create your cluster. We have just prevented an installation failure. The user can jump to the AWS console; here, in Service Quotas, they can make a request to increase the quota, that is one option; the other option is to release redundant resources that they don't need. In this case we are going to release one elastic IP, and after that they will have enough available quota to provision a cluster. Going back to the Red Hat console and creating a cluster again, the preflights run in the background once more, and once all of them pass we can move to the installation phase.

A closer look at how we get the quota. We have the applied quota in a specific region, in this case 10; we call GetServiceQuota to get the limit in that region. Then, for the utilization, we call DescribeAddresses on the EC2 service. Once we have the utilization and the limit, we can calculate the available quota and ensure it is enough to provision a cluster. We have multiple preflights: for quota, for authorization in the user's account, and we also support bring-your-own-VPC and validate the configuration for that.
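A sketch of that elastic IP check, not the actual OCM/ROSA code, using the AWS SDK for Go; the quota code for "EC2-VPC Elastic IPs" is an assumption to verify against the AWS Service Quotas documentation:

```go
// Sketch of the elastic IP preflight: compare the regional limit with current usage.
package preflight

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
	"github.com/aws/aws-sdk-go/service/servicequotas"
)

// Assumed Service Quotas code for "EC2-VPC Elastic IPs"; check the AWS docs.
const elasticIPQuotaCode = "L-0263D0A3"

// availableElasticIPs returns the applied limit minus the addresses in use.
func availableElasticIPs(sess *session.Session) (int, error) {
	quota, err := servicequotas.New(sess).GetServiceQuota(&servicequotas.GetServiceQuotaInput{
		ServiceCode: aws.String("ec2"),
		QuotaCode:   aws.String(elasticIPQuotaCode),
	})
	if err != nil {
		return 0, err
	}
	limit := int(aws.Float64Value(quota.Quota.Value))

	addrs, err := ec2.New(sess).DescribeAddresses(&ec2.DescribeAddressesInput{})
	if err != nil {
		return 0, err
	}
	return limit - len(addrs.Addresses), nil
}
```

The preflight then simply fails the request with a bad request error when the returned value is smaller than what the installer will need.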
To summarize: we decided to expose our service to new customers, and it introduced a new challenge. We combined the aviation method with the cloud SDK, and it increased the product adoption and the success rate. You can find our open source repo, ROSA, which is the CLI tool we use to provision clusters, and you are welcome to contribute. And this is it, thank you, and a great time for questions.

Okay, yeah, so the question was whether it works from the console and whether it will work from marketplaces in the cloud. Specifically, when you saw the UI, the preflights are in our back end, so whether it is the UI or our CLI tools, all of them make API calls to the same back end, and when you trigger the cluster creation we run the preflights in the back end, so it works for the CLI and for the UI. Okay, thanks a lot.

Okay, hi everyone, hi dear colleagues. My name is Sergei and I'm working on Linux system roles. In the modern world, diversity, equity and inclusion have become a main topic and focus, because it's important for all of us to feel comfortable on the projects that we work on, the projects that we use, and those we participate in. One part of this is conscious language, which is about using words that are inclusive rather than exclusive, and in Linux system roles we started using a tool called woke to detect language that is not inclusive in our code. So today we'll first cover why this initiative is important, then we'll switch to the technical part about the woke tool itself and its features, about the GitHub Action that exists for woke, and then about some caveats that we had working with woke and how we worked around them, and after this we'll have a Q&A section.

Software often uses words that carry a great deal of emotional and historical baggage, and to be truly inclusive and welcoming we should try to avoid the use of words that can carry these unintended second meanings, just to welcome all participants and not to limit the number of participants we might have. That's the first reason; the second reason is that for those who use English as their second language, it may be problematic to translate those idioms.

Now let's move to the woke tool. Here I describe what it does: it finds non-inclusive terms in your code, in file names and in the content of the files themselves. It also suggests alternatives that you may use, and it has a configuration you can apply, for example to identify the words that you want to mark as warnings or that you want to mark as errors, and you can also choose which paths you want it to scan. Here you can see an example of the GitHub Action that exists for woke, which enables you to run woke on every PR in your GitHub repo, to identify whether the PR introduces any non-inclusive terms. It's pretty simple.
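A minimal workflow along those lines might look like the sketch below; it assumes the get-woke/woke-action published by the project, and the input names should be checked against that action's README:

```yaml
# .github/workflows/woke.yml - a sketch, check inputs against the woke-action README
name: woke
on: [pull_request]

jobs:
  woke:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run woke on the repository
        uses: get-woke/woke-action@v0
        with:
          fail-on-error: true   # fail the check when error-severity terms are found
```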
Here I have a slide demonstrating how this works. It sees that there is a non-inclusive term in your code, and it has this nice feature of printing the exact place where the term occurs; it also prints the terms that you can use instead, if you wish, and in this case it is marked not as an error but as a warning.

As for the caveats: in our project we needed to mark some words as errors, to strictly fail on them, and some words as warnings, and currently the tool fails on both warnings and errors. We needed to change this, so we made the change to the code and submitted a PR, but unfortunately the second issue is that there is a lack of attention from the initial developer of this tool. Hopefully the developers will return and continue the work, but for now we did the following: we kind of forked the project by copying it into our own repos and created a custom action to use instead of the official one. And here is how it looks: you can see that we are pointing to our configuration file, which we also customized for our needs, and instead of the official action we're using just the path where we copied it to. And that's pretty much it from my side; please, if you have any questions, yes, please.

Sorry, could you repeat? I think I didn't catch it. Yeah, so you're probably asking about GitHub Actions. GitHub Actions is an environment that GitHub provides where you can use GitHub resources, virtual machines and containers, to run any tests on your repository, and you can configure it to run anything automatically on specific triggers; in our case, we run this tool on every PR to check the content. Yeah, of course, you can also run woke locally, customize everything, and just use it locally as well. Just a second, and I will get to you.

Yeah, actually the fork... I'm repeating the question: the question was whether we can make the fork available to everyone. The change that we did is currently publicly available in our GitHub, and it's actually pretty small; it's not really a full fork, we just copied the code to our repo, so officially it's not a fork. In the next slide, by the way, I have a references section where I link to this fork that we did, and also to some of the other things I talked about today. Yes, thank you, please.

Yeah, so to repeat the question for the audience: is it possible to ignore woke on some particular lines? Because sometimes, and this is the case with our project as well, we are working with many subsystems that simply use the non-inclusive terms, and we cannot avoid them in our project because we're just automating what is already there. So, first of all, in our configuration file we marked some often-used terms as warnings and made the change so that we don't fail on them; you can also ignore separate files in the config file, or, in a file, you can ignore a single line by appending a comment above it or to the right of it that says, I think it's wokeignore, and the name of the rule that you are ignoring. So yes, it is supported. Yes, please.

Well, yeah, that's how it works. The question is about the case where you have terms that you cannot change because they are part of some other underlying project that you don't have the authority to change, and what the ways around it are. In our case we needed to do this work and run woke on every repository that we support, and we have about 20 of them. Some of the terms could be replaced, because we use them only in our repo, and some of the terms come from the underlying project; for example, the name of the main database in Microsoft SQL Server is the master database, and I doubt that Microsoft is going to change that in the foreseeable future. So that's something we need to live with, and in that case we just need to ignore the word, either in the config file or on the line itself. Any more questions, please? Okay, if not, thanks a lot for your attention. Again, I think you will get a link to the slides, and in the slides I have the references section where you can click through the things I talked about and have a look at them. Thanks a lot.
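For reference, the per-rule severities and the inline ignores mentioned in the Q&A above look roughly like this; the exact keys are a sketch and should be checked against the woke documentation:

```yaml
# .woke.yml - a sketch of per-rule configuration, not our actual file
rules:
  - name: whitelist
    terms: [whitelist, white-list]
    alternatives: [allowlist]
    severity: error      # strictly fail on this one
  - name: master
    terms: [master]
    alternatives: [primary, main]
    severity: warning    # report only, e.g. for terms forced on us by other projects
ignore_files:
  - vendor/*
```

A single line can be exempted with an inline comment such as `db_name = "master"  # wokeignore:rule=master`.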
Hello everyone, my name is Katka. I am a manager of a customer service team in EMEA, working at Red Hat, and I would like to talk to you about the manager swap and about my experience. What is this swap, you might be thinking? Is it something like a wife swap? Well, kind of similar, but I guess with a different, more noble purpose. The manager swap is a disruptive idea to blur the boundaries between regions and teams, and it kind of enhances our Red Hat open and inclusive culture. The customer service department especially is presenting our culture to the outside world, so managers of this very team were the obvious choice when picking the first people to try the first round of the swap.

Our team used to be a small local team only taking care of regional customers, but then we became global. Not only did we have to collaborate with our US, Indian and APAC colleagues and teams, but we also had to work from one big backlog of customer requests, aka the global queue. So you can imagine, being global requires lots of consistency, and suddenly there was a lot to learn. It was me and my colleague Ashley in the US serving as guinea pigs for this first swap, and I got out of my comfort zone when I became a newbie in the US team, and I definitely started seeing things from a different perspective.

The original goal that we agreed on was that we would basically attend each other's team meetings and have one-on-ones with all the team members weekly, and the underlying goal was to identify some opportunities, maybe for better collaboration, and also for improving our processes. In the first week of the swap I immediately saw that the dynamics of my newly adopted team were a bit different, and I detected some really crucial differences between EMEA and US cultures. See, the European way of communication is definitely more direct. Slightly blunt, maybe even harsh, you might say. We don't really do small talk much; we really prefer to address the elephant in the room, and this was a challenge for me. At the beginning I was really trying hard to fit the diplomatic mold, and I was getting lost in long sentences and complex phrases. As a result, I always forgot to smile as well; that's strike two, that's another problem. You have to smile when speaking to Americans; if you don't smile, you might come across as impolite or even rude. We don't really dance around uncomfortable truths, and let's say sugarcoating is not our forte either.

So what to do? I am just wondering now: imagine how much time you would have if everything was addressed directly. I mean, you would say "this meeting is meaningless and we are wasting everyone's time"; instead, we have to say "I would suggest postponing this meeting until we have further details and some structured agenda to cover". So yeah, I don't know about you, but I always prefer the direct way of communication.

Back to my newly adopted team. In the first week, we got to know each other, we kind of established our ways of communicating, and we shared our life stories. We shared our career journeys as well. It was great. It was all virtual, obviously, but I got to travel to Panama, Argentina, Brazil, Portugal, Venezuela, all in my imagination, so it was great. They were all smiley and polite, but I still felt we were a bit shy with each other; there was still some kind of boundary, and it was as late as the second half of the swap when I finally felt people were starting to open up to me.
They started sharing their actual ideas and opinions and constructive feedback, and that was the real gift. By stepping back and simply listening to all of that, I realized some practical implications as well. For example, as we became a global team and wanted to be globally aligned, I realized we had to have globally aligned new hire training as well; before, we were four teams onboarding people separately in four regions. That was one big eureka moment, or aha moment, for me, and we should also not concentrate that much on processes during that new hire training, but also on the inclusive culture. Secondly, I realized we need to encourage more interactions between the regions.

Now, a year after the swap happened, I can tell you a lot of things have changed for the better. We do have consistent and globally aligned new hire training now. We have established stronger collaboration. We also meet not only on work-related topics, but have some regular virtual games and virtual meetings; for example, we had some cooking shows where we got to learn how to make an authentic Italian tiramisu, or our colleague from the US showed us how to bake fantastic banana bread. I honestly believe anyone on the team could benefit from a swap, not just people managers; it could easily be an individual contributor as well. So if you feel that you know better than your colleague across the ocean, you might be a perfect candidate for the swap, because there is nothing better than trying it and walking in someone else's shoes to see their perspective and to understand their ways. I promise it's an eye-opening experience. Thank you. And now we can open it for questions. Any questions? Or experiences?

There's a quick one: do you talk about international swaps, or what about interdepartmental swaps? Even if the managers are doing quite different tasks, can you still apply it? Definitely. That should be the question. Oh yeah, sure. So I was talking about international swaps within the same type of team, but the question is whether it can be applied when the managers are not managing the same kind of teams, development versus production, say. I believe there is some value you can find there, because if you are struggling with some other team, if some collaboration is not as smooth as you would wish, it's definitely great to try to be in their shoes for a certain time. So yeah. Yeah?

How long was the swap? A month, one month, yeah. So... What was it like during the swap? It felt too short for me because, as I said, they started opening up only in the third and fourth one-on-one, which was near the end of the swap; before that it was interesting, but it was all kind of polite and very... not formal, I would say, but... yeah, I feel we were getting to the interesting stuff only too late. Yeah. You need to build trust. Sure.

Oh, performance. You'd think that if you change the manager, it might affect things for a short period of time, but because it was a similar type of team, it did not end up that way. The numbers were pretty much the same, but the effect is that my team is still contacting the other manager, because they know each other, so the collaboration is more efficient and smoother this way. But in terms of how many requests we handled or something like that, no, it did not have any effect. Surely there's a learning curve, right? Yeah, absolutely. I highly recommend it. It's uncomfortable, but great, yeah? And so it's like a one-to-one swap? Yeah, this was a one-to-one swap, yeah.
Rather than just... That's another version, like... Yeah, there is no, how to say... yeah, it can be freestyle. Yeah, Milan? Did the other manager have a similar experience, or different experiences that people don't know about? Yeah, so we actually wrote a blog about it, and because I know you're from Red Hat, you will be able to read the blog. Yeah, of course she loved it as well, and everything was shining and bright. As I said, I think it's a great experience for everyone. I am now encouraging our team leads and other functions as well, because I think it brings value. As long as you work in a global team, I'm sure there are some differences between the regions that could be understood better. So go for it, yes?

Yeah, one month is too short for this, and also, just to explain, we did not have access to the HR stuff, so we were kind of staying on the surface; we would not be able to go deeper into the other team's data. This way it was just our agreement and an experiment. Yeah? Well, I think Ashley was focused more on the personal side, so I was the one who was always trying to talk about work. I think Ashley was just relaxed, and they were focusing on getting to know each other as humans more. But yeah, it was overall positive. Yeah, she did not mention any of this; maybe it's an elephant in the room, let's see. Yeah, one thing I totally admired is how well they can connect; they had really great team-building ideas. For example, they have a game night every Friday after work. They do some Pictionary, which is a game where you draw something and the others guess. It's a simple thing, but we never did such fun things before, so we learned from it; we adopted this, and now we are actually doing the Pictionary with them together, so we have one joint team-building with the American team. It's really nice. Yeah? So, unfortunately, we're out of time. Thank you. We'd probably be happy to discuss anything else with you later. Thank you. Yeah, thank you. Sorry. Okay. Yeah, we have time. Great. Another minute. Can I close the door? Yeah.

We have another lightning talk: we have Sylvain here with the topic of what's new in the world of HTTP caching. Yes. Go on. Okay. Thank you for coming. For the next 15 minutes we'll talk about the new features, and the past and the present, of HTTP caching. First I will introduce myself. I'm a French baguette, so I have a bad accent in English. I'm the creator of the Souin HTTP cache, which is an HTTP cache written in Go, and I'm also the maintainer of the Caddy cache handler, and an active open source contributor; you can find me on GitHub by searching for darkweak, and on Twitter as well. You can ask questions during the talk, but this talk is really short, so you can also ask afterwards outside. I want to say that tech is political, so if you want to fight some governments, directives, or something else, you can do that if you want. And I also mentor some students, so if you want some coaching or anything else in Go, React, or PHP, just send me an email and we'll see about that.

So, let's go back in the past. There were many providers bringing us internet features, and we had HTTP 1.0. This version was the first stable version of the HTTP protocol, but there was a lack of description and a lack of customization. Some providers like Google or Akamai tried to implement new features on their own, but each provider implemented them in its own way, so the behavior could be different from Google to Akamai,
and supporting all providers was impossible. In June 1999, The Matrix was in cinemas, and they released RFC 2616, also called HTTP/1.1. An RFC is a long, fully technical document, dozens of pages. This RFC introduced some grammar for the HTTP request and response, what a request body is, and how to define it more specifically and precisely. It introduced HTTP headers that we are still using today, like Content-Encoding and Content-Type, the Range header, and many others that are still in the HTTP protocol.

In this RFC we have the first part of stable caching in HTTP, and it starts with the following sentence: caching would be useless if it did not significantly improve performance. In the past that was true, the cache was only for performance, but today we also have ecological goals, and if your cache consumes 10 or 20 times more than your upstream, maybe you don't need your caching system. And as it was the first release of the HTTP caching specification, there were some bad sentences, like this one: if a stored response is not fresh enough, the cache may still return the response to the client. That's not good, because if your client doesn't want a cached response, you have to respect its choice. And you can see there is a Warning header here; you can still use it, nobody uses it today, but it exists.

It introduced the Cache-Control directives. There are no-cache and no-store. No-cache does not simply mean the cache can't serve the response; it means that if you find no-cache on a request or a response, you have to revalidate with the server first. A revalidation is like a normal request to your upstream; if the content didn't change, it returns a 304 Not Modified status code in the response. No-store says the returned response must not be stored in your cache. The max-age directive says: I accept the response and consider it fresh if it has been stored for at most x seconds; so if you say max-age=5, you can't serve a cached response that has been cached for more than 5 seconds. And no-transform ensures that your proxy will not modify your request or your response: it can't add any headers and cannot edit the content.

For the directives that exist only on requests, you have max-stale, to say: you can serve me a stale response up to x seconds past freshness; it's like max-age, but for staleness. Min-fresh is not really used in production; it says: serve me the stored response only if it will still be fresh for at least x more seconds, and contact the upstream otherwise. And only-if-cached says: only if you have a cached response, you can send me that.

In the response context we have public, which says your response can be stored in any cache system, private or shared. Private should be stored only in your browser cache, because it's a private cache, or your proxy should implement a private cache system. Must-revalidate means the cache revalidates with the server whenever it wants to serve this response and it is no longer fresh, to check that the content matches expectations. Proxy-revalidate triggers that revalidation only on your proxy, and not on your browser, for example. And s-maxage is like max-age, but for shared caches.
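To make those directives a bit more concrete, here is an illustrative request and response pair (my own example, not from the slides):

```
GET /article.html HTTP/1.1
Host: example.org
Cache-Control: max-stale=60, only-if-cached

HTTP/1.1 200 OK
Cache-Control: public, max-age=5, s-maxage=600, no-transform
Content-Type: text/html
```

The client accepts a response up to 60 seconds past freshness, but only if it is already cached; the server lets browsers keep the response for 5 seconds and shared caches for 600, and forbids intermediaries from altering the body.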
We also have cache extensions to extend these directives; it's like a key-value to implement some custom logic in your cache system. So here, with the community extension key, we define the value UCI, and you can say: if the value is UCI for this key, I will store for 10 seconds, and 5 seconds otherwise.

In August 2001 Google Images was released, and we discovered the ESI language. The ESI language comes with the ESI tags. Who has already written some HTML in their job? Nobody? Okay, so... okay, I won't explain what HTML is. So here you return a web article and you have the full page, which would be cached in your Varnish server for example, but you have a dynamic header with a user.php call. Your client says: send me article.html. The cache requests the whole article from the web server, parses the HTML content, detects an ESI tag, makes another request to user.php, gets the response, and replaces the ESI block in your full page with the user.php response. Here is an example with the ESI include. We have some different kinds of tags: the include defines the src and the alt; if the src fails, it tries the alt, and with onerror you can say fail and it will stop the rendering of the page, or, if it's set to continue, it will just omit the block and replace it with a blank. We have try/catch blocks, and we can interpolate some variables: here I get the type and the logo name from the cookies and replace them with their values, to give me this HTML tag at the end. And here is an example of what your page could look like with several different ESI tags, customizing the TTL for each ESI block.

In 2014, RFC 7234 redefined HTTP caching, providing us a new HTTP header called Age, which cannot be negative, and it defines invalidation: a GET request can be cached, but if you send a PUT, DELETE or POST request on the same resource, you now have to invalidate the cached GET resource. Stale-while-revalidate is a new directive to say: you can send me stale content, but you have to revalidate asynchronously with your upstream server. HTTP/2: better performance, that's good, you just have to enable it on your server; some diagrams illustrate how it works and why it is faster, thanks to the multiplexing, with everything in the same pipe, unlike HTTP/1.

In 2022 they released two new RFCs, Cache-Status and the targeted Cache-Control. The Cache-Status RFC defines some parameters to say: my cache hit for this response, or it forwarded the request, and why it was forwarded. Here, the first one is a hit, and the ttl says it will stay fresh for three seconds; the second one is a uri-miss, so the cache didn't find the URI and the upstream returned a 304 Not Modified status code; and the third is also a uri-miss, with a detail parameter to say that my storage was unreachable. With targeted cache control, your upstream returns new cache-control headers prefixed by a name, so we have the CDN, Varnish and Caddy ones, and each of your services will interpret and focus on the header targeted at it to manage its cache. So Caddy will handle only the Caddy cache-control header and store for one hour, Varnish for five minutes, the CDN for two minutes, and they fall back on the plain Cache-Control if no targeted header matches. And the client will not store it, because of the no-store, and will revalidate each time it makes the request.
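An illustrative response combining the two 2022 RFCs described above (the header values are my own example, not from the slides):

```
HTTP/1.1 200 OK
Cache-Control: no-store
CDN-Cache-Control: max-age=120
Caddy-Cache-Control: max-age=3600
Cache-Status: ExampleCDN; hit; ttl=3
Cache-Status: ExampleProxy; fwd=uri-miss; fwd-status=304; detail=STORAGE_DOWN
```

The browser obeys the plain Cache-Control and stores nothing, the CDN and the Caddy cache handler each follow the header targeted at them, and each cache on the path appends its own Cache-Status entry saying whether it hit or why it forwarded the request.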
And what about the future? Probably in June, because all these RFCs are released in June. The surrogate keys will allow you to group your cache keys and invalidate them by a group key: first group, second group, third group. No images for this one either. There is so much implementation work around this, with Cloudflare, Akamai et cetera, and I think I don't have the images. There was a QR code to a new RFC that is currently being written, but you won't have it. And the resources... obviously, no images. Ah, you do have the images, and that's beautiful, because we have the GIF at the end.

Unfortunately the time was very tight, so if you have any questions I'm sure Sylvain will be able to answer them personally. Yes, with images too. Thank you. I'm going to take the mic from you. Thanks. You have 15 minutes. That's all right of course. Okay. It's fine, thanks. Okay, ready to go? Okay, then come in, close the door, take a seat, also in the first row.

Okay. Hi, welcome to my talk. My name is Thomas Huth, I'm working for Red Hat, and I hope you basically all know what QEMU is about. So who has used QEMU before? Just raise your hand. Okay, and who's just here because you were too lazy to leave? Okay, at least two. That's great. Okay, so QEMU is basically an emulator: it emulates 21 different CPU architectures and has support for a couple of CPU virtualization frameworks like KVM, the Hypervisor framework on macOS, and some others. QEMU does three releases a year, and the major version is bumped at the beginning of each year, so QEMU is not doing any semantic versioning or anything like that. This talk will cover the last year, which means I will talk about QEMU 7.1, 7.2 and 8.0. If you're interested, the schedules are available in the wiki, so if you're curious when the next version will be released, just have a look at the URL that I listed here; the next release will be QEMU 8.1.

So let's jump into it: what's new in QEMU 7.1? I have to say QEMU is a huge project, so each release contains a lot of new features and a lot of new stuff, and of course I cannot mention everything in a 10 or 15 minute talk, so I had to pick a few things, but that doesn't mean that the other features are less important. One of the big things that has been added in QEMU 7.1 is the so-called LoongArch64 support. LoongArch64 is a new RISC CPU by the Loongson Technology company; it's a Chinese company, and the architecture is said to be a little bit of a mixture of RISC-V and MIPS. QEMU now supports both full system emulation, with the qemu-system-loongarch64 command, which emulates a paravirtualized machine there, and running the Linux binaries for this architecture directly on a Linux host, for example on your x86 Linux host, with the user mode emulation binary, that's the qemu-loongarch64 command. I've also got a quick example, but I'd like to show it at the end, depending on how much time we have left.
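For reference, invoking the two LoongArch binaries looks roughly like this; the kernel, initrd and firmware file names are placeholders (the QEMU wiki explains where to download them), so treat the exact flags as a sketch to check against the documentation:

```
# Full-system emulation of the LoongArch64 'virt' machine (file names are placeholders)
qemu-system-loongarch64 -machine virt -cpu la464 -m 2G -smp 2 \
    -bios QEMU_EFI.fd -kernel vmlinuz.efi -initrd initrd.img \
    -append "console=ttyS0" -nographic

# User-mode emulation: run a LoongArch64 Linux binary directly on an x86 host
qemu-loongarch64 -L /usr/loongarch64-linux-gnu ./hello-loongarch64
```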
Another great new feature in QEMU 7.1 is the zero-copy-send migration feature. When you migrate a guest from one QEMU instance to the other, QEMU normally has to copy the guest pages before sending them over, and with this feature you can skip that copy step, so it reduces the CPU usage on the host quite a bit. The disadvantage is that it needs support for locked memory on the host, so it only works with Linux. If you want to use it, you can use this, where is my mouse, this HMP command; HMP is the human monitor protocol, the built-in shell of QEMU, you could say. You can use it with the migrate_set_capability zero-copy-send on command, or with libvirt you'd say virsh migrate and then --zerocopy.

Now, QEMU doesn't only add stuff, we are also removing some older features; at some point in time the codebase would become unmaintainable otherwise. One of the old things that had been in QEMU for quite a while already is the -soundhw command line option. This was used to configure the sound hardware in the guest, but it was considered legacy: it didn't have the possibility to connect it to a backend on the host, for example. So the advice now is to use the -device and -audiodev command line options, or, if you want to do it the short way, there's a new -audio command line option where you can do both in one option. So if you were using -soundhw sb16, for example, in the past, to configure a Sound Blaster 16 device for a guest, you would now do -audio and then pa, for PulseAudio on the host side, so that's the host backend, then comma, model=sb16, and this will configure a Sound Blaster 16 in the guest. This one option now wires up the guest side and the host side.

QEMU 7.2: QEMU has a bunch of network backends that allow you to send the network traffic from the guest to another place, and it has had the possibility to tunnel the network traffic from the guest via a socket for quite a while already, but this only supported internet sockets, and a couple of people were asking about the possibility to use unix sockets too. So QEMU 7.2 introduced two new network backends. There's -netdev stream for connection-based protocols: you would now say -netdev stream, then say the address type is a unix socket, and you can configure the path; for the other QEMU that you want to connect to it, you do basically the same and just flip the server setting to off. And you can also use datagram-based protocols with -netdev dgram, depending on what you prefer.

Another thing that has been changed in 7.2: the user mode emulation binaries for x86 now default to a different CPU type. A couple of Linux distros recently switched to a higher level of the x86 architecture, you could say, x86-64 version 2, so these binaries require, for example, some SSE extensions or stuff like that which might not be available in older CPU models. So that you are able to run these binaries out of the box, QEMU now defaults here to the max CPU model, which gives you all the features that QEMU supports.

And again, cleanups in 7.2: we removed the slirp submodule from the repository. This doesn't mean we removed the feature, it just means we removed the code. For historical reasons QEMU shipped the slirp code in its own repository and in the tarballs, but in recent years all the Linux and other distributions caught up and supply the slirp code as a properly packaged library. The takeaway here is: please install libslirp-devel before you compile new QEMU versions if you still want to use -netdev user for configuring your network backend.
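Put together, the 7.1 audio change and the 7.2 stream netdev look roughly like this on the command line (a sketch; the option spellings are worth double-checking against the QEMU documentation for your version):

```
# QEMU 7.1: instead of the removed "-soundhw sb16", wire up host backend and guest device
qemu-system-x86_64 ... -audio pa,model=sb16
# or the equivalent long form
qemu-system-x86_64 ... -audiodev pa,id=snd0 -device sb16,audiodev=snd0

# QEMU 7.2: tunnel guest networking over a unix socket with the new stream netdev
qemu-system-x86_64 ... -netdev stream,id=net0,server=on,addr.type=unix,addr.path=/tmp/qemu-net.sock \
                       -device virtio-net-pci,netdev=net0
# and on the peer QEMU, the same options with server=off
```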
Alternatively, there's another new option called passt which you can use instead. This is an external program; you basically connect a socket netdev to that program and it will take care of handling the network packets. By the way, there's a talk about pasta, that's the sister project of passt, tomorrow; if you're interested in that, be sure not to miss it.

QEMU 8.0: there were many improvements again, and I just want to list a few. RISC-V gained support for ACPI; on x86 you can now pass a random seed to the Linux kernel when you load the kernel with the -kernel option of QEMU; we've got new support for CompactFlash card emulation; and there have been quite a few performance improvements in TCG, that's the Tiny Code Generator, the JIT engine of QEMU: the backend now supports a couple of CPU extensions if they are available. And as always we did some cleanups here too, and this time we removed the virtiofsd daemon from the repository, since there's now a new implementation that has been rewritten in Rust, and it has become quite mature in the last months, so the decision was made to remove the old implementation that was written in plain C.

Okay, I think I was fast enough, so before we go to the questions I can show you the example. This is my normal laptop, wait, this is useless, this is my normal laptop here, so you can see it's an x86 machine, and I cannot remember the options off the top of my head. If I want to emulate this new system now: I download the kernel, I mentioned in the slides where to get the kernel, the initrd and the firmware for that, and you can start it like this. It runs through the firmware, it boots the system, and it just drops into the initrd shell here, and, can you see it? I hope so, great. As you can see, I've now got an emulated LoongArch system here; it works, full software emulation of course. Okay, and that's pretty much all from my side, so let me go back to the slides. Questions? Yes, please.

I haven't looked at the socket implementation. So the question was whether, when using this new socket implementation for the backends, there's a kind of multiplexing server in between, so that you can connect three or more. I'm not aware of a multiplexing server in between or something like that; at least the internet sockets are able to do multicast, so you could multicast to multiple QEMUs. I haven't looked that up in the new implementation, but I guess they would be able to do that too, hopefully. So yeah, the answer is multicast. Any other questions?
Okay, then thanks for your attention; we have another lightning talk, and here is our next speaker. — Here I am with my little pet project I was working on at recent hackathons, and I'd like to share the news; you can maybe catch me at the Foreman booth if you'd like to talk about it more — we only have 15 minutes. So why am I doing this? The reason is that I think Fedora doesn't have a good image-based unattended provisioning story right now. With the cloud, everyone is doing image provisioning, right? This is the right way to do it in the cloud, mostly, and even other distributions, like Ubuntu — if you install Ubuntu today, it is actually an image-based installation, and there's cloud-init. There are also other tools, like MAAS (Metal as a Service), that actually do image-based provisioning for bare metal. I spent a decade working on Foreman and Satellite, maintaining their provisioning capabilities, and these two solutions, MAAS and Satellite, have in common that they are pretty tough to install: they really want to control all the infrastructure, they want to control the HTTP server, and if you can live with that, great, it works well; if you can't — maybe you already have an HTTP server or other infrastructure — it doesn't really work that way, so then you're fighting against it. So I was thinking, maybe there could be a little service that could provide you bare-metal image-based provisioning, because there's a common secret that Anaconda can actually do image-based provisioning. This feature was added in, I think, Enterprise Linux 7.2; it was meant for virtualization, like oVirt, but these days it's still used for OSTree installations, and you can actually use it for normal Fedora or RHEL installations. So I was thinking, maybe I could write a project — a small little service that you just run and it will provide opinionated, image-based provisioning; that's important, you need to have an image, and then you can deploy it. It's built around Anaconda, so it will not work for other distributions at the moment, because it uses some unique features, and it also only supports EFI HTTP boot, which is basically a simplified, or more modern, version of PXE booting: you don't need a TFTP server, you don't need a really complex setup, it's much easier, it's just HTTP or HTTPS — and of course Secure Boot and HTTPS and IPv6, maybe; these are all the features that customers really ask for, right? So I was thinking, can I do that? And here I am with my project I want to share with you. First of all, if you want to do any kind of image-based provisioning, you need to build an image, right? A couple of options: you can use livemedia-creator, which is in Fedora — it is, I think, not in RHEL. The second option, and I would recommend this one, is to use osbuild-composer; Composer is a program, a daemon, that you can install on Fedora or RHEL and maybe other distributions, I don't know, and it has a command-line client, so you can use it to build what's called a blueprint — basically which packages to install, what users you want to create, SSH keys, stuff like that — and then you create an image. The third option is, if you go to console.redhat.com and you have an account, you can even use a free account and build images there. Now, this service: it is basically a Go application, very small; there's the controller, that is the service that you need to run, and you only need to give it a configuration, in the sense that you need to point it to a directory where it will store images.
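A blueprint, as mentioned above, is a small TOML file; a minimal hedged sketch — with package names, the user and the key picked purely for illustration — could look like this, and it is fed to osbuild-composer with composer-cli (the installer-ISO image type name may vary per release):

    # blueprint.toml -- hypothetical minimal blueprint
    name = "provisioning-base"
    description = "Minimal image for image-based provisioning"
    version = "0.0.1"

    [[packages]]
    name = "openssh-server"
    version = "*"

    [[customizations.user]]
    name = "admin"
    key = "ssh-ed25519 AAAA... admin@example.com"
    groups = ["wheel"]

    # push the blueprint and build an installer ISO from it
    composer-cli blueprints push blueprint.toml
    composer-cli compose start provisioning-base image-installer
    composer-cli compose image <compose-uuid>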
That's basically it. It also needs Postgres and three tables, so it's not really big, and there's a CLI, and that's all you have. You just build it — it's not in Fedora or anything like that, it's a prototype; I have a GitHub repository I'll share with you. The thing you need to do on your infrastructure is to set up your DHCP server so that it just points to this service: in my case it's running on port 8000, and you need to give it the /boot/shim.efi path, and that's all you need to do — it's literally three lines. For libvirt, for testing, you can actually use libvirt, because you can continue using the default network, and for ISC DHCP you would do a similar configuration. Now, the workflow is a little bit different from what you maybe know from Foreman; I call it an à la carte menu, and if you're familiar with Beaker, it's actually very similar. You first register a client. In this workflow, EFI really doesn't work well with the BIOS-style fallback mechanism, where you boot from network and if that doesn't boot, you boot from the hard drive — that doesn't really work in EFI — so this service actually provides out-of-band management capabilities, meaning it needs to be able to power a server on and off and change the boot order. For this I'm planning to add the Redfish API and maybe also IPMI in the future; for now the project only supports libvirt, and that is meant for testing — it's not meant for installing images onto VMs by booting from the network, that's really not the way to work. So the first thing is that you register an appliance, in this case libvirt, which is the only one supported right now, and then you do an enlisting operation — the terminology is a little bit vague, I'm still trying to settle it; this is borrowed from Ubuntu MAAS — but basically it will go and search all your, let's say, blades within your chassis, or in this case libvirt VMs, and it will store some information about them in the inventory, in the database, specifically the UUID, so you can actually power them on and off. Then you can power them on using the API of the service, and it will boot into Anaconda, and because these systems are not yet set up or configured to be installed, Anaconda will actually do what I call discovery: it will basically run dmidecode, find the amount of CPUs and memory, stuff like that, and it will report these facts back to the service. That's exactly what Beaker does, except Beaker actually installs the system; this one is basically just Anaconda. And once you're ready — or before you can be ready — you need to upload an image: you have an ISO image that you've previously built, or downloaded from our portal, or created in osbuild-composer, and you upload it to the service — oh my god, hopefully this works — and the service will actually extract some key files, like the bootloader, GRUB, Anaconda and the image itself. Then finally you're ready to acquire the system — again terminology: acquiring means you take it from your inventory of available machines and you want to put this particular image onto this system, and that's it. If you're ready to release the system back to the pool, you just call release, and this is basically the operational workflow. As I've said, I'm using some unique features of Anaconda, specifically things like liveimg, pre-scripts and HTTP headers, but that is not so important. This is my Fedora Cockpit and I'm running a terminal, and this is how it looks: for the service, you just call its CLI.
It has three command groups: image, system and appliance. So, first step: I have an image here called test one, and the important thing is it is a UEFI image — sorry, a UEFI VM — because UEFI HTTP boot is a special thing, it doesn't work on BIOS, because BIOS PXE doesn't have a full HTTP stack, it only has UDP, a very simple protocol, so you need to have EFI. The idea here is that you have a bunch of servers and you have, let's say, IPMI or Redfish API management, so you can actually do the same workflow with those. If you want to build an image in the console.redhat.com portal, this is how it looks; make sure to pick bare metal here, ISO, that is the type of image you want to build. Or you use Image Builder — Image Builder has a nice user interface; this is my home server, you can actually install the osbuild Cockpit plugin, I think, and it allows you to do all the customizations, like putting the SSH keys there, and first-boot scripts and things like that, so that's great. Once I have an image, I upload it here — this line, image upload, you give it a name and then the ISO — and then I list all the images; in this case there's only one image, called RHEL 9. Now I'm ready to define an appliance; this is basically a machine or a chassis with multiple blades. Again, this project is a really early prototype, so it only supports libvirt, as a testing machine. So I give it a name, and it only supports the local UNIX connection right now. I can do the enlisting operation; what enlisting does is connect to the appliance and list all the — in this case — VMs, these are not blades. I give it a regular expression — I only want everything that starts with "test", because I have multiple VMs there that I don't want — and then I have a list of systems. Every system has — this is important — a MAC address, or multiple MAC addresses, it's a list; every system has a generated name, "cockpock" or whatever this is, just a random name so you can refer to it; and then the most important thing is the UUID, so it can actually do a power operation. Now I'm going to reset it; that basically means boot from network: connect to the appliance and boot it from the network, and now you'll see the system booting up on the network — it tries PXE first and then falls back to HTTP boot over IPv4, Anaconda boots up, and in the background — you can't see it — it will execute a Python script that just collects some data about the system, very similar to Beaker; in the future you could maybe search for systems that have more than 100 gigs of memory or stuff like that, it's not implemented yet — and then it powers it off. But now, if I show the same system again, I see a bunch of new facts, and in this phase the system could actually do a CPU burn or memory testing, or after provisioning it could actually wipe the drive — that's what Ubuntu does — nothing like that is implemented. And now the final and most important thing is system acquire: you give it an image, RHEL 9 in my case, and — I actually forgot to type in the name here, this is the MAC address or the name of the system, so that would not work, but suppose that I've typed it in — Anaconda starts up, it downloads the image tarball, which is actually your file system, prepares the drive, the storage, copies the files over, prepares the bootloader and stuff, and then it reboots and your system is installed. So that is it; that is my project, called Forester.
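For reference, the "three lines" of DHCP configuration mentioned earlier might look roughly like this with ISC dhcpd; the service address and the boot path are placeholders, and the exact path served by the controller may differ:

    class "httpclients" {
      match if substring (option vendor-class-identifier, 0, 10) = "HTTPClient";
      option vendor-class-identifier "HTTPClient";
      filename "http://192.168.1.10:8000/boot/shim.efi";
    }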
Going forward, I'd like to hear your opinion on whether you would like me to continue working on this. Maybe I need to write documentation, of course; until I implement Redfish, it's sort of unusable — why would you implement provisioning of images onto libvirt like that, that's insane, right? You upload the image into libvirt and run it, of course, so that doesn't make sense. So Redfish is the first thing, and maybe a Terraform plugin — there are some bare-metal Terraform plugins these days, but these are really tied to a specific vendor or a specific environment; this is more like: if you want to install RHEL or Fedora and you have a DHCP server and a system that has BMC operations like Redfish or IPMI, you can do that. And maybe an Ansible module, or some integration with the console, or maybe an alternative to Beaker — it could be interesting. So if you have any opinions, I'm eager to hear from you, whether you like it or not. Now we have two minutes for questions, and find me at the Foreman booth if you want to talk about it a little bit more. That is my project, thank you, and here's the URL if you want to test it — it's really easy to just jump in. Thank you. — All right, thank you very much. Yes, test, test, one two three. We have our next speaker here, and apparently CoreOS is dead; let's see. — Hello everybody. Obviously the title was clickbait, but I'm still glad you showed up, and there is some truth to it: it's really a case study of how software sometimes dies, sometimes is buried, sometimes is reborn, sometimes is resurrected, or just comes back or shows up in ways that you didn't expect. My name is Christian Glombek, I'm a senior software engineer in OpenShift at Red Hat, I'm on the OKD Streams team, and we maintain the OKD community releases, which is the community version of OpenShift. I actually started my journey into software engineering with — almost with CoreOS — with one of the predecessors of CoreOS, which was Atomic Host, and I'll get to that now. So, really, the past of CoreOS — and I'm not going to give you a whole history lesson on CoreOS, because I wasn't there for the CoreOS Container Linux time. That was the original CoreOS system; it was then renamed to Container Linux, at least the community variant, and then Red Hat bought the company CoreOS and we created a new operating system that took the name of its predecessor, but really it was a new operating system, and it was based mostly on the Atomic project — some folks may still remember the Fedora Atomic Host releases, and I think there was also a Red Hat product in there somewhere. We took the best of both worlds, or at least we tried to, and I think we did pretty well in taking the best from both to create the new, current CoreOS. So what do we have today, what's happening in CoreOS land? The main Fedora CoreOS release, which is the upstream, leverages all kinds of software. It's built with rpm-ostree, so it's image-based, or hybrid image-based: you can rebase from one image, which is a digest in a repository, to another one, and that really changed how we deploy and provision the systems, it made things much easier for us. There are obviously things like Podman, which is our main container runtime in Fedora CoreOS — we actually still ship Moby, which is Docker — and Ignition is our first-boot provisioning tool, and we took that from the original Container Linux CoreOS. That's a declarative way of configuring the machine you want to provision: upfront, you have a config file and that essentially gets written to disk; it defines file contents and all kinds of things you want written to disk.
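The Ignition config itself is JSON; in practice it is usually written in the friendlier Butane YAML format (Butane is not named in the talk) and transpiled. A minimal sketch, with the SSH key and hostname as placeholders:

    # example.bu -- transpile with: butane --pretty --strict example.bu > example.ign
    variant: fcos
    version: 1.5.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA... user@example.com
    storage:
      files:
        - path: /etc/hostname
          mode: 0644
          contents:
            inline: my-fcos-node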
That gets put into an Ignition config, and then on the first boot the Ignition binary writes it to disk — while it's still in the initramfs stage — and then reboots into that newly configured, pristine operating system, the way you want it. Recently we introduced something that really is a big leap in how you can work with CoreOS, which is CoreOS layering; it's been mentioned a couple of times in other talks today. What it really is: the operating system is like a container image — because the operating system is a container image — so you can import it with a FROM directive in a Containerfile and then rpm-ostree install other things, other RPMs, on top of the base, which could be Fedora CoreOS, RHEL CoreOS, or the new CentOS Stream CoreOS. That really makes it easy to take the Fedora CoreOS that the community provides and make your own version, very specific to your needs. A great example of how a community starts to appear around things like this is Universal Blue, ublue — it's ublue.it, a community project — and they actually use CoreOS layering to build derived images from the standard, stock Fedora CoreOS; I think they put in NVIDIA drivers and things that we can't ship in the Fedora repos, and they configure it for different use cases, gaming and workstation, they have a couple of images there. What we end up with, or what we currently have, are three main versions of CoreOS which are officially released and distributed: the upstream Fedora CoreOS, the downstream RHEL CoreOS (RHCOS), and we recently introduced a midstream distro, CentOS Stream CoreOS. For RHEL CoreOS that's essentially about six months ahead of what will get into RHEL. The way this works is that rpm-ostree builds images by consuming RPMs and creates an image out of these RPMs, so everything in the image is part of an RPM, and for these three we have three different composes: for the CentOS Stream one we use the CentOS Stream repos to pull RPMs, for Fedora CoreOS it's the Fedora repos, and for RHEL CoreOS it's the RHEL repos. But the manifests that define the RPM lists — the RPMs that you want to include — are actually the same, or mostly the same, for RHCOS and CentOS Stream CoreOS (CentOS Stream CoreOS is SCOS, Red Hat Enterprise Linux CoreOS is RHCOS); they're actually exactly the same manifests, it's just a rebuild with a different set of packages, different versions of the packages, with different sources, one coming from the CentOS community, the other from the Fedora community. And we're pretty happy with where we are right now — especially with the layering and all the new goodies we've recently introduced, it feels really nice to work with the stack — but we're not finished, and this is kind of where the next death is around the corner, right? What's it going to be, what's going to go, what's going to come next? The only constant is change. So what's next, what do we want to do, what do you want to do? There are a lot of interesting projects in this realm that we kind of have our finger on the pulse of, or are following, or definitely want to include. There are things like: can we make these layered images — maybe do something like build one common base and then have different layers that share the same common base but do different things, so you could imagine one derived image for the Kubernetes stack, one for extensions and one for IoT devices, for example, all sharing the same base; that would be very easy to reproduce and also to change.
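A minimal sketch of what such a derived image looks like in practice, following the documented CoreOS layering pattern; the image names and packages here are placeholders:

    # Containerfile
    FROM quay.io/fedora/fedora-coreos:stable
    RUN rpm-ostree install htop tmux && \
        ostree container commit

    # build and push it like any other container image
    podman build -t quay.io/example/my-coreos:latest .
    podman push quay.io/example/my-coreos:latest

    # on a running CoreOS machine, switch to the derived image and reboot into it
    sudo rpm-ostree rebase ostree-unverified-registry:quay.io/example/my-coreos:latest
    sudo systemctl reboot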
If you're unhappy and really want to derive again, you can derive from a derived image — it doesn't really matter, because in the end it's just shipped as a container image. And that container image you can use as a container image, but it's a bit special because it also includes a kernel; you can run it as a container image, but when you use rpm-ostree to rebase to that image, the image is actually written to disk and the system is rebooted into that image — then it's not in a container anymore. We really use the container as a distribution mechanism for the images, because container registries are ubiquitous today. So what are these things here? UKI, unified kernel images — that is something I'm really excited about. I think it'll make our operating system safer, because with that we'll be able to do attestation from the start to the end, essentially; it'll be much easier than with the current bootloader stack that we have. Obviously there are different interests here, different teams, different requirements from customers and from community users, so nothing is really set in stone yet — we don't even know how it's going to look when we get there; this is the time to shape this. There are other things like bootc, which could replace rpm-ostree sometime, and composefs, which will enable fs-verity for our use case in rpm-ostree or even bootc-based image distributions. So what's the takeaway, what do I want to say with all of this, especially looking into the future? We can't change the past, but we definitely can do our best to shape the future. So really what I want to say is: be the change you want to see in a software project, and open a PR — by the way, the quote is not by Mahatma Gandhi. And that closes the circle here, back to this conference's motto, which is "define future": you can define your own future and be part of it, and I'd like to invite you to do that and to join any of these projects here, or any other, join the community around them and contribute to open source. Thank you very much. — So, well, there is — oh yeah, the question is: is rpm-ostree within a container still valuable, because you don't really need the package manager inside a container, I guess. That is essentially where things like bootc come in, which doesn't ship a package manager. rpm-ostree is both the side that builds, that composes an image, and also the client-side package manager that installs new images, so there isn't a clear split in the codebase, it's just one binary that does both. With bootc, the build part would be separate, and bootc wouldn't know about RPMs — it would just know how to write an OSTree native container to disk, boot into it, and maybe upgrade to the next one. Going forward, maybe there are use cases where we want the layering to happen in a container build, because you can do these layered builds with anything — with podman build, docker build, on a cluster, anywhere, they don't need to be privileged — which is also different from the base compose: that needs to be privileged, because we need virtualization enabled in the builder and we run some stuff in a nested virtualized world anyway, and maybe that's going to be solved by bootc, some of it at least. So definitely there's a lot going on, and maybe in the future we won't even have a package manager on these immutable systems. Thank you. — I can just press this, so let's do it. We have another topic here; we have Ondra here — you may have seen him, because he's got a booth here as well — and the topic will be the clicks and clacks of the mechanical
keyboard. Thank you. — So hey, my name is Ondra, I'm a software developer at Red Hat, also a student right here at this very faculty, and I'm a huge keyboard nerd, as you might have noticed by now. As a disclaimer, I'm going to use the P word, so watch out: preference. Everything in this hobby is about preference — you may not agree with all this stuff, you may like different things, and that's all right; there might be more interesting flavors, but people still like vanilla, all that kind of stuff. As to why mechanical keyboards, let me start with this: how many of you have a mechanical keyboard, by show of hands? And how many of those have Cherry MX switches? Quite a lot. So why mechanical keyboards? Well, options — more specifically, options and looks, because the keycaps are compatible with each other, you can mix and match, all that kind of stuff; options and feel, which depends both on the switches, because the switches have different mechanisms, and also on the cases, because there are cases designed to be lighter on the touch, all this kind of weird stuff — in this hobby there are plenty of people just obsessed with the "thock", as they call some kind of sound; options in layouts — plenty of layouts — and firmware, because most of the, I'd say, more expensive, but especially the more open source, split, weird kinds of keyboards use the open source QMK firmware, where you can set up anything you want; and open source, because some of the keyboards are just available on GitHub or somewhere and you can just order the PCBs and build them yourself. Switches: I made this amazing graph with emojis combined with each other. These little things I'm going to pass around — you can try and test which switch you like. There's the Cherry, and the hole, because it's a rabbit hole once you go into the enthusiast switches, and then there are silent switches as well; there's a linear one, the tactile ones and the clicky ones — there's no clicky silent one, that's how you tell which way to orient this. So I'm going to pass this along, and good luck. But yeah, to help you with this, the main difference you should basically be looking for: for the linears, the non-Cherry ones are usually smoother, while the tactile ones have a stronger bump, and the clicky ones just use a different click mechanism, so it's not so much of a rattle, it's more like a pure click. Some common misconceptions about switches: there's this saying that red is for gaming, blue is for typing, brown is for everything else or for everyone — most of these things are basically just there to get you to actually buy some keyboard and figure it out for yourself. It's all about preference — I use reds, or linears, for typing and programming and all that stuff; it's not that it's going to help you in any way, it's just preference, it all depends on which one you like. And since not everyone is able to just go to a store and try a bunch of switches, or order a switch tester like this, it's kind of difficult to help anyone, because everyone has a different taste, so this sort of bullshit was made up to help you just go for it and buy a single switch. Then Cherry MX switches: there's this issue that Cherry had a patent until 2014, so there weren't any other brands making these switches to the same standards as they do nowadays, and so Cherry, instead of innovating on newer, smoother, better switches, sort of just went with the marketing that "we're the original ones, we're the ones that everyone is copying" and
"German engineering, so it has to be the best" — and it's sort of falling off slowly, but people still have this mindset that the others are clones that can't be good, and that they're Chinese. But nowadays Gateron and, most importantly, other brands innovate a lot and provide some better switches as well. And also, "mechanical means loud": this is a mechanical keyboard — I don't believe anyone in the back can hear this... never mind, I'm out. But yeah, there's this misconception that mechanical has to be loud, but there are specific silent switches which are even more silent than your usual membrane keyboard. And then the color coding, which is just something that works for the basic red, brown, blue and other switches, but other brands don't really follow it once you get to the more enthusiast switches. Then there are the options in layouts. You might have seen your usual 100% keyboard; I'm going to ask you about the layout you use — raise your hand and then lower it the moment I go too small. So everyone raise your hand. All right, does anyone need a 20%? So a 96%, that's just the editing keys sort of squashed together with the numpad; a TKL, that's the tenkeyless, without the numpad; an FRL — but watch out, this one also has a blocker instead of the Windows key, because some people think that's a very awkward place to have a key, so this one is F-row-less, winkeyless and tenkeyless, actually; then there's a 75%, which doesn't have the usual function keys at all, or the editing keys; a 65%, which is without the function row as well — that's quite a lot of people; now we delve into the 60%, without arrow keys — you have to use function layers for the other stuff; there's the Arisu and such for slightly more ergonomic boards; the ortholinear, which frees more... or something — that guy has a split, that doesn't count; and then there are the splits. Usually what people end up with are smaller sizes, because once you go for ergonomics you also don't want to stretch your fingers as much, so you use layers to do all kinds of weird stuff, and then you can also 3D print some cases, so you get stuff like this — or this is what I use at work, basically, it's like a skatepark — and then you need a manual for your manual, basically, for the layout you're using on your keyboard. But don't tell me you can't use a 40% if you use this and you only use two fingers — now your thumbs can actually do something — and yeah, it's manageable; and it's not a joke, these are some kind of joke layouts, but the 40% actually works. There are plenty of memes to go around, but yeah, that would be mostly it from me, thank you. So, any questions? — Preference, again. — How many keyboards do you have at home? — I think I'm at eight right now. Oh yeah, how many keyboards do I have at home: I have about eight keyboards, so I can have one for each day of the week, obviously, but yeah, you can sort of see the journey, I went smaller and smaller. — What's the best switch? — Preference. — Mixed switches? — As in... some people use heavier springs on, like, the spacebar and stuff, but that's not very common; I would say most people just go for a full keyboard with a single switch. — In the community, what do you think are the best switches? — What do I think are the best switches: I'm a big linear kind of guy, so I would say maybe Gateron Oil Kings are my favorite right now. By the way, from the people that tested the switch tester — I'm not sure how far it got — how many of you prefer the Cherry ones?
I think that answers a lot of your questions about Cherry. — There was a... in what switch? Okay, like a clutch — a Vim clutch. Okay, I've heard of the Vim clutch that people use — a clutch, like the one you use for driving, to switch modes in Vim and stuff — but I have no clue how those work. So you know how to drive... I'm not a driver, but I can use Vim. All right, if you guys have any... okay. — When you switch from keyboard to keyboard, how much time do you need to adjust to the new layout? — If the controls are different enough, my brain just clicks and it works immediately; as long as the layout is sort of similar — like if I switch between the split and a normal keyboard — it just works automatically. But I have an ortholinear as well, and that sort of messes with my brain, because it's similar to a normal keyboard and to a split as well, sort of, so that doesn't work too well. It's usually linear... oh yeah, I should have been repeating the questions, sorry. The question is what's the difference between linear and... yeah. Usually non-mechanical switches are not exactly linear, they're mostly membrane, and membrane is actually more tactile-ish, because you have to pierce through the membrane bump, so they're more tactile-ish. The difference is that mechanical switches overall just don't feel as mushy — which can happen with the silent ones, because there's actually some rubber inside the switch to make it silent, but they're still way different in feel compared to a membrane switch. — I have not tried; my girlfriend did some artisan keycaps, just like a little toast or something — you could do those, but I haven't tried. 3D-printed stuff — I did, like, keychain stuff you can take at my booth, by the way. And all the other questions I can also answer at the booth in E, all the way the other way. — So I think we're out of time... please stop then, okay. Not mine — and the topic will be pytest_container. — Thanks for the numerous attendance given the beautiful weather outside. I'm Dan, and I'm going to tell you a bit about testing container images with Python and pytest. First the boring part: that's me, I work at SUSE, I kind of became the release engineer of our base container images; besides that I also do other stuff, but since we only have 15 minutes left, let's skip the boring part and go to the interesting part, and that's: why on earth would you use Python and pytest if you have shell scripts? We all know shell scripts are portable — they run literally everywhere, on a 20-year-old AIX machine — they are super fast (exceptions and terms and conditions apply), and everyone understands them. And I'd like to use this small opportunity for a quick poll: who of you likes shell scripts, please raise your hand... oh okay, that eliminates the next question, never mind, my point has been made. So, for the recording, there were like 5 people and there are, I guess, 50 in here, so I think I don't need to convince anyone. But let me still give you the short sales pitch for why you should use Python, pytest and especially the plugin that I wrote for testing container images. Because, well, shell scripts, in my opinion, are pretty brittle; they usually tend to break, especially when you least want them to, I don't understand shell quoting, and when I write shell scripts I have to start writing tests for my shell scripts — and if I start writing tests for my tests, I'm doing it wrong, so let's not do that. So what can this plugin do for you? You can define all your container images; it will do all the pulling, all the building, launching and cleanup
for you, so you don't have to do that. You can use the testinfra module, which gives you a very nice abstraction layer around all the common file system operations, checking whether packages are installed — essentially all the stuff that you can also do in a shell, but just in Python. We've been relatively meticulous about support for parallel test execution, so you can run all your tests in parallel, also if you expose ports, also if you do volume mounts, because that's also, as I said, one of the features: it will find you free ports and actually tell you what the port is. So instead of picking some random port via podman or docker and then having fun finding the jq expression to dig the correct port out of the output of podman inspect, pytest_container will do that for you. If you use podman as the backend, and not docker, then you can also use pods, to have a shared network. You can create and clean up container volumes and bind mounts if you need that for some reason; also, if your container somehow declares a volume and creates anonymous volumes, those will also get removed afterwards, so you don't suddenly realize "my container storage suddenly consumes 10 gigabytes — where did that come from?". And you can also use some fancy pytest magic to just execute the same test on multiple container images; that's relatively useful, for instance, for testing the same piece of software on multiple operating systems. And yeah, it supports docker and podman, and you can switch between the two by just setting an environment variable. Now, with the sales hat off — sorry, next one — it actually runs on RHEL 7; I don't think that's a great feature, because it also comes with all the catches, but it runs. It also works on all the architectures — and let me tell you, PowerPC and s390x weren't fun — but it works. So let me give you just a very short example of how this looks in practice: you import all the stuff that you need, you define yourself a container — in this case it's just an openSUSE Tumbleweed container — and then you write yourself a test which does nothing very impressive: it just checks whether the file /etc/os-release exists. What you see here is just the testinfra module that handles that, and this parametrization is just how you hook into pytest_container. Now, if I put this into a Python file, install the required dependencies and execute pytest, the plugin will pull down the container image, launch the container before the test, connect to the container, check whether /etc/os-release exists, and after the test has passed it will just remove it and spin it down again. So what can you use this for? You can use it for system tests of your container images, if you create operating system container images; you can test applications that you deliver via containers — just because it's in a container doesn't mean it actually works; or, what I also think is a relatively nice thing, and where we use it in a few places, is that you run a few tests on multiple different operating systems — if you want to run them on Fedora, on openSUSE, on CentOS, whatever you like, you can do that. So, for the record, the whole plugin uses a few features of pytest — but I guess, since there are so many of you, you probably know that this is a Python test framework on which it executes — and it relies heavily on pytest fixtures, which are just magic functions; these handle all the spinning up and the destruction of containers. And what is also relatively useful is that you can parametrize tests.
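A minimal sketch of the test just described, with the registry URL and fixture usage following the plugin's documentation as best recalled:

    # test_os_release.py -- install dependencies with: pip install pytest-container
    import pytest
    from pytest_container import Container

    # the image under test (example URL)
    TUMBLEWEED = Container(url="registry.opensuse.org/opensuse/tumbleweed:latest")

    # the "container" fixture pulls and launches the image before the test
    # and removes it again afterwards
    @pytest.mark.parametrize("container", [TUMBLEWEED], indirect=True)
    def test_etc_os_release_exists(container):
        # container.connection is a testinfra host object
        assert container.connection.file("/etc/os-release").exists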
In this case your parameters are the containers. So now let me just breeze through various examples of what you can actually do in practice. A common thing is: you have a base container and now you actually want to build stuff in there. In this case you would define yourself a Containerfile and put that into this DerivedContainer class — you don't have to take pictures, the slides are on GitHub, there will be a link at the end — then you just pass it as a parameter, and in this case it's just doing a very dumb check whether Python 3 is installed, because we just did that up here somewhere, if the font rendering weren't broken, sorry about that. Another feature is port exposure — especially nasty if you want to run tests in parallel, because if you try to bind the same port from two containers, oops, that won't work. So in this case you just pass a parameter to the derived container class, and then inside your test function you get access to the port on the host that the container port gets bound to; then you can, for instance, just curl it and find out "does my container respond to this", or if it's running some other service, you can use whatever means necessary to contact this host. You can create pods — as you see, I'm a very creative person when it comes to naming, hence this thing is also called pytest_container and the pod class is called Pod, which makes it kind of obvious, but as I said, not a creative name — and a Pod just expects a set of containers. The main use case for creating pods, at least so far as I have used it, is to just expose some service to the outside, and hence you can also forward ports from the pod, and you use it in the same way: you just pass it as a parameter to your test and then you can do whatever you want in your actual test. One thing that I have so far omitted is that all of these tests reuse the same containers, and, as an optimization, they assume that your containers don't actually mutate things; so if you suddenly start rm-ing things, all your subsequent tests will be influenced by that, and you might not want that. But there's a workaround for that: just use a different fixture, which is creatively called container_per_test, and then you can also do things like testing whether rm -rf will actually work — I haven't run this one because I'm too scared, but maybe you are brave, and in theory things like this should still work. You can also share — if you have a set of tests which should all run on the same container image and you don't want to write this pytest.mark.parametrize for every one of them, you can just define a global variable called CONTAINER_IMAGES and use a fixture called auto_container, and then everything will just be automatically parametrized by some very nasty pytest magic that I'm not proud of. If you build containers, then sometimes it also happens that containers depend on each other, and you can do that as well: you can define yourself a container, build another one on top, build another one on top — please don't add circular dependencies, because that will most certainly break, but then you've been warned — and once you execute the actual test, the plugin will first look at the container that's been requested, figure out "okay, I need the other one, I need the base one, the Tumbleweed one at the top", pull that one, build the subsequent one and so on, launch your test, and destroy everything again. If you want to have some abstraction over podman inspect or docker inspect, there's also one for that.
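A hedged sketch of the derived-container feature described above; the class and attribute names (DerivedContainer, PortForwarding, forwarded_ports) are written from memory and may differ slightly from the current plugin API:

    # test_derived.py
    import pytest
    from pytest_container import Container, DerivedContainer

    TUMBLEWEED = Container(url="registry.opensuse.org/opensuse/tumbleweed:latest")

    # build a throwaway image on top of the base, straight from an inline Containerfile
    PYTHON_TW = DerivedContainer(
        base=TUMBLEWEED,
        containerfile="RUN zypper -n in python3",
    )

    @pytest.mark.parametrize("container", [PYTHON_TW], indirect=True)
    def test_python3_is_installed(container):
        # the very dumb check from the talk: is python3 there?
        assert container.connection.package("python3").is_installed

    # exposing a port would look roughly like this (names from memory):
    #   WEB = DerivedContainer(base=..., forwarded_ports=[PortForwarding(container_port=80)])
    #   ...and inside the test: container.forwarded_ports[0].host_port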
You can just grab the inspect property from the fixture and you'll get a bit of the output that podman inspect would give you — so, for instance, Config.Cmd — but not everything, because there's a lot of info, it definitely differs between releases, and I don't want to support all of that. You can create bind mounts — again, this is very creatively called just bind mount — which can be occasionally useful if you want to bind a specific directory. What you can also do is bind really specific container volumes; you can also achieve that by just putting in a VOLUME instruction, but in case you need something like that, you can make the plugin aware of it and then you'll also get all the automatic destruction that it supports. One thing that I think is very useful is the health check — please use health checks, they are great — because often your services actually need a bit of time to launch, and if you test your service yourself, you'll launch the container and it will take you a bit to type the test command; Python is faster, and usually you wonder "why does the test break", then you check the running container and it's still working, it's actually working, but your test is failing because your test executed far faster. So if your container contains a health check, the plugin will automatically wait for your container to become healthy, and after it's healthy it will start checking. If you don't want that — for instance because you want to check whether your health check fails — then you can also configure that: you just tell it not to wait and then it won't bother you. As I said, you can use docker or podman; it will use podman by default if it's installed, otherwise it will use docker, and you can switch that via an environment variable. You can run tests in parallel by just installing pytest-xdist, or by whatever means you want, and that's also how we use it; so if you run heavy tests you can murder your machine as much as you like, but it will run the tests in parallel and way faster. And the plugin really tries to clean up everything: all launched containers get destroyed, all volumes get removed, all pods get removed afterwards, all temporary directories; what is retained are images and intermediate layers, just because otherwise the tests would take ages to execute. There are a few things using it, for instance the BCI test suite and a bunch of other services — as you see, all kind of tied to openSUSE, because I made them use it, but maybe your project... or you tell me about your project and I'll make you use it. I would like to thank a few people, mostly Jean-Philippe, who built the initial prototype with me, and I would like to ask you to give it a try if you find this useful. As I said, the plugin is on GitHub, the documentation is there; as usual there's room for improvement, but I don't know where currently. The slides are linked at the bottom, and with that I thank you for your time and I'm happy to answer all your questions. — So, what do you mean exactly... if you use the default mode, the containers will be shared. To be frankly honest — I know this is embarrassing because I wrote it — I'm not sure; it might be that if two workers launch the same job in parallel they will actually get different containers, but the sharing is an implementation detail, you shouldn't rely on it, it's meant to speed things up, so how that actually behaves will depend on how pytest-xdist handles that.
Yeah, so the thing is, I'm not sure how xdist will handle this kind of fixture — it's a fixture that's global for the whole test suite, it should be a session-scoped fixture — so yeah, then they will be different. Okay, thanks again. — We have another session here, our next speaker is with us, and the topic will be open source fitness apps. — Hi everyone, can you hear me okay, even in the back? Great, awesome. So I'm here to talk to you for a bit about open source fitness apps, namely wger and Feeel, and how we're creating an exercise wiki. Before I get into that, I'd first like to define what a fitness app actually is. Well, a fitness app — that's a broad term, a broad category of apps that covers a wide range of physical activities, for people with a wide range of expertise and varying demographics, but what brings all of these different apps together is that they all help you improve your physical fitness. Unfortunately, what's also very common with these apps is that they're oftentimes proprietary, which means that you don't really have full control over your data, or over your habits, or over what's running on your computer or phone. And the problem with that is — let's walk through a scenario: let's say you install this app, you build your habits, you use it to track your walking or cycling or running every day, you build up a lot of data, you track your progress — and then the app is shut down. What do you do? Well, you have to find a different app, you have to completely change your habits, because you can't really use the app that you've been used to anymore, you have to hope that the alternative allows you to import data from the old app, and you also have to hope that the old app will let you export your data. So quite a change — but that's not even the worst-case scenario: it can also happen that your app leaks your data, which has happened in multiple cases, and there were real-life risks associated with that, also in multiple cases. So we'd like to change that — and by "we" I mean me, the author of Feeel, and Roland Geider, who was originally supposed to give this presentation with me but couldn't make it today. We want to change this, and we want to make free and open source fitness apps the norm, so that people wouldn't have to compromise on their privacy, or on control over what they run on their device, or on their habits, just to be fit. So this is our goal; here's where we are now: we have these two apps that you can use right now, and there's also an API that anyone can use — it's a public API associated with wger. I'll go into more detail about each of these. My app, Feeel, is a simple guided-workout app; basically, how it works is you open the app, you choose a workout, and then the app guides you through that workout, telling you how many seconds you have left before the break and then before the next exercise. You can choose from some pre-made workouts or you can create your own, and this app runs on Android as well as Linux — it's a Flutter app, so it's easy to port to other platforms — and all your data is stored locally. The problem with this app is that for now there are only 57 exercises that you can build the workouts from; while you do have images and descriptions etc. for these exercises, there are not that many workouts that you can actually build — but this is going to change, and I'll get back to this. Then we have wger, which is an online application that allows you to create your own workout schedules, create your own meal plans, and then helps you follow up on them; you can track all of these things via this web interface, and it's for more expert, more seasoned athletes.
It is a web application, but you can host your own server, so you still have control, and of course it's all fully open source; there are also client applications for Android as well as for Linux via Flathub. There's also the wger API, which wger pulls its information from, and this API gives you access to all of the things that you store in wger, including your account details, meal logs, nutrition plans, workout schedules, etc., but most importantly it also hosts the exercise wiki, which is a very important aspect, not just for the users of wger but also for free and open source app developers, because suddenly it creates a centralized resource that developers can use for exercise data, exercise images, exercise metadata — such as which muscles each exercise targets, or what equipment you need for each exercise — and all of this is licensed under the Creative Commons ShareAlike license. All of this is, just like with Wikipedia, contributed by a range of people, and really it's an invaluable resource that one person couldn't create by themselves; the beauty of crowdsourcing is that everyone contributes a little bit and then you build something huge that no one person could easily create. We see this as a kind of launchpad for various free and open source fitness apps, because suddenly it becomes much easier to create your own app for your own niche, targeted at your demographic of choice. So we're building this: wger right now has 350 exercises in its exercise wiki, there are more than 22,000 accounts on there, and it also has more than 2,300 GitHub stars, so it's relatively popular among developers as well. The wiki is relatively new and there are some bugs being sorted out, so this exercise number is sure to grow in the future — but we still haven't reached our goal of making free and open source fitness apps the norm. So how do we get there? Well, we need you — we need the community to contribute exercises, to help us build apps, and more. There are a number of ways you can contribute. I've mentioned the wiki: you can go on there today and contribute an exercise, or contribute changes to an existing exercise that's on there — you could add more information, you can revise the steps if they're not correct or there's not enough information, and you can upload images. I would hold off on uploading images for now unless you have public-domain images that you can fully share, because — and this is still being worked on — there's missing functionality for specifying the license; so unless you have public-domain images, don't upload images yet, but maybe save them to a folder for uploading at a later date. Speaking of images and photos: in my experience, a lot of people don't tend to like taking photos of themselves and publishing them online for people to use and see, and this is one of the reasons why Feeel uses this low-poly, triangulated look. What you can do, if you're feeling shy about submitting your own photos, is download a triangulator app — it's a Java app, it's on Flathub, but you can also use it on other platforms, of course — add your image and create a low-poly version like this, where you can customize where the points are so that it looks good and is anonymous enough for your liking, and then you can export the SVG and submit that; that way your face is obscured enough, you don't have to feel shy, but you can still contribute to this open source wiki of exercises that we have. And then, besides those ways of contributing, there are also the standard ways.
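As a taste of the public API mentioned above, exercises can be queried anonymously; the endpoint path and parameters here are from memory, so check the wger API documentation for the exact form:

    # list a few English exercises from the public wger instance
    curl -s "https://wger.de/api/v2/exercise/?language=2&limit=3&format=json" | jq '.results[].name'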
You can help with the development of these two apps, or you can develop your own app, perhaps integrating some or all of wger's API, including this exercise wiki; you can help us translate — you can translate via the web, or you can translate exercises via the wger wiki; you can help design, you can help test, and of course you can spread the word, share on any social media that you happen to use. So I hope you're as motivated about making the world of free and open source fitness apps happen; if you are, you can visit these links — there are also links under these to chat with the community, so you can talk to us there. And I think we have a little bit of time for questions. Yes — right, so the question was whether we can integrate with hardware such as smartwatches and fitness trackers of all kinds, and this is maybe on the road map; right now we're focusing on more core stuff, such as this wiki, but I'm sure that eventually there will be support — if you'd like, you can help us implement this — and potentially, since there are some open source apps already that do have this integration, mostly on F-Droid, perhaps we could collaborate with them as well. Yes — right, so the question was how do we get people to use these free and open source applications so that they become the norm, and I think the answer here is: right now, if you look at the landscape of fitness apps in general, there's a ton of fitness apps but no clear ones that people would recommend, and I think that creates an entry point for free and open source software, because apps come and go, apps are being cancelled all the time, there are mergers, there are redesigns, there are all sorts of things, and there's no one clear app to use. I think the advantage that the free and open source world has here is that you can really focus on building apps for a specific use case, and these apps, because they're open source, can then be maintained even if the original maintainer leaves, if they're useful to people; and also we can have joint efforts, like this exercise wiki that everyone can use. So really, I think we can grow communities in a much more efficacious way than closed source projects can, just because we share a lot of things among various open source projects. Any more questions? Yes — so the question was: Garmin currently provides access to its API only to commercial entities, and it's a question of whether it's worth it to create a commercial entity to be able to access this API. I would just say that I personally haven't specifically thought about Garmin; maybe this is something worth visiting in the future. It's also a question of whether it might not be better to just integrate with some resource that already plugs into the Garmin API, so that you'd have your data from Garmin somewhere in some centralized piece of software — for example, Google has its own tracker that amasses all sorts of fitness data from everywhere and you can plug into it — but it's something that we'll have to look into going into the future. Yes — yeah, we can potentially convince the people at Garmin to open it up, yeah. All right, thanks. — ...and if the backend doesn't have it, QEMU cannot send the get-config message to retrieve the backend configuration, so it needs to know which features are supported by the backend. So they need this feature negotiation, which is initiated from the QEMU side: QEMU will send the get-features message to get the bits that are supported by the backend, and the backend will
fill the bit array with the features that it itself understands and then answer the QEMU message; once QEMU receives the response, it will match those features against the ones that it can support. I put here a few examples of feature bits — virtio device features and vhost-user protocol features — there are some generic and some specific ones; I put a few here specific to virtio-gpu, and then this one that allows the backend to send file descriptors back to QEMU, but there are many, many features, both generic and specific. Then the next step in the initialization is the virtqueue configuration — this is for the data plane, right, it's shared memory. What QEMU will do is allocate memory regions as shared huge pages as it boots, and then it will pass these shared huge pages as file descriptors to the backend, which will map them into its own address space, and after that they can both see the same memory regions. The virtio driver will reserve part of this memory for virtqueues; virtqueues are basically logical queues where the sender can write the messages — the commands — that it wants the reader, the receiver, to read, and they usually have different logical meanings, so that you know what kind of messages you are retrieving from specific queues. And again, same as with the features, there are some generic and some specific ones: we have here the control queue or the event queue, which are generic for most — not all, but most — of the devices, but then we have some very specific virtqueues, like the cursor queue or the tx queue, which are for the GPU and socket devices respectively. Okay — so in this part of the presentation I'm going to focus on my experience with virtio-video. As I said, we already have the device accessible from the guest, and we will have an application running in the guest user space that wants to interact with that device; in this case we can assume that it's GStreamer. Let's say it wants to start a decoding stream: it will query some buffers, it will create some buffers, and then it wants to queue them. To do that, it will issue this video queue-buffer ioctl, filled in with some data that is required by the driver, and then the driver will handle the ioctl system call — taking into account that some operations, depending completely on the implementation of the driver, are handled by the driver itself or completely ignored; but in this case this queue-buffer will actually trigger an interaction with the hardware. When that happens, what the driver will do is take the information passed in through the ioctl and write a command into the virtqueue that is specific for this type of communication; for the queue-buffer operation it will send this command and write it into the shared memory, in the specific virtqueue, and then the backend will be able to read it — taking into account that the format is defined in the specification document, and it varies for each virtio device, so this resource-queue command is only for virtio-video devices, a different device will have a different set of commands — and the format is agnostic to the driver or device implementation by definition, so that you can have multiple implementations coexisting; as long as they adhere to the virtio specification, they should be able to understand each other. So we left it where the driver wrote the message into the virtqueue; it will then send a notification, the backend will be able to read the message from the virtqueue, and then it will
reconstruct the queue-buffer command — the information should be the same that GStreamer sent in the user space — and then the device driver in the kernel will handle the request and interact with the hardware, because it will receive the ioctl, and then we have this final interaction here and we have arrived at the hardware device. That would be it in most cases, but in some cases we need a response — that is defined by the virtio-video spec, not all commands do, but in this case it does — and these responses can be synchronous or asynchronous; again, in this specific case it is asynchronous. So what we are doing is queuing this buffer in the hardware device, and once we start streaming, the hardware device will fill the buffers, and once they are filled it will raise an event upwards; the backend will pick it up and send the response back to the virtio-video driver and then to the guest user space, so it is exactly the same path but backwards, and that is the end of our journey. I think we have a few minutes, so I will give you a few hints for debugging, completely based on my experience. The problem with this is that when something fails or the communication misbehaves, it is not easy to debug, because the problem might be in the guest, in the guest user space, in the communication between the frontend and the backend, or in the communication between the backend and the hardware device, so it is not really that easy to debug. So what I would say here are a few points. First: even if you are implementing the backend, or focusing on the backend, it is good to know what the driver is doing. Then — this depends on the backend — but in QEMU most backends come with a --debug option; use it, because it gives a lot of extra information that is useful. Then also these two commands — I assume most of you are familiar with them, but if not, they are very useful to figure out what is going on — and for inspecting the function flow in the kernel there is ftrace; they can be very useful to know what is going on, both in the guest and on the host, so use them as much as you can. Then, what I said we run in the guest user space, you can also run on the host and trace it, to check that you are doing the same thing — the right thing — in the backend. Another good tip is to use the QEMU monitor and the GDB debugger; there is a blog post written by Stefan, and he gives very good hints on using both tools, so if your backend gets stuck somewhere and you don't know where, you can check this blog post and try to figure it out. And last: patience and good luck. Just to wrap it up, these are a few links that helped me when I started with this, and they are all good reads, including the virtio specification — the last one that has been published, which is 1.2; I think 1.3 is due for this next summer — and the last link is the patch series that I posted to QEMU for virtio-video, so if you are curious, please have a look. And that's it from my side — any questions?
Yeah, so the question was — let me see if I can phrase it correctly — whether the virtio driver is a kernel module that interacts with the hardware device on the host. So, yes and no: it does interact with the hardware device, but it does it through the backend, so it needs another virtio piece, which is the backend. You can consider it the frontend; it can send commands to the backend through the shared memory, and the backend interacts with the hardware directly.

Yes — so, the question is about the benefits of externalizing the backend into a different process. That's a difficult question; I mean, not difficult, but it can take long to explain. Long story short, it is about context switches: if you have, for example, the backend embedded in the QEMU process, every time you do a system call you need a context switch to the kernel and back again for the data, and if the guest is writing data into your buffer you are going to have to wake up QEMU, and QEMU is a tough beast. That's why it's better to externalize the backend, to avoid those switches.

But which queues are you referring to, sorry? So the question is whether the queue used in the backend is somehow limiting, or whether a different option could be used. I think it's not limiting; you don't write that many messages — the communication between the frontend and the backend is not that chatty — you just write a bunch of messages there, you have a ring, and you loop through the ring with the messages. So I think it works pretty well for what it has to do. I don't know, I guess I never gave that much thought to why it was done that way.

Hello — yep, so exactly that, that's the topic of the talk. First of all, my introduction: my name is Ricardo Carrillo Cruz, although everybody calls me Ricky. I'm a principal software engineer at Red Hat; I've been there for almost seven years now. I'm a member of the productization team at Ansible: we are responsible for building and delivering the Ansible products, and we're also responsible for the installers and operators and other stuff. Prior to that I was very involved in the OpenStack project as a whole with another company; then I joined Red Hat with Ansible, and then I've traveled around a bit — I've been at OpenShift Engineering, or CDOF — and now I'm back to the mothership at Ansible, because I'm really happy to be back with my folks.

First a little bit of introduction, because I don't like to presume things, and I don't know if you've ever heard of Podman. If not: Podman is a container engine to build, run and manage containers. You can pull images from Docker Hub or Quay, you can build your applications — maybe you have a web app with your source code and you want to create an image and push it — and you can create containers and run them, that kind of thing. It's obviously open source; we're Red Hat, and we're at an open source conference. What we do follows open standards like OCI for container images and runtimes, so it supports things like runc, crun, whatever is standardized. It's just like Docker, but different.

So how is it different? It was designed from the ground up to support rootless. One of the things the Podman folks did is that they didn't want something that, like Docker, runs as a daemon and as root. The cool thing about Podman is that you can run a container as non-root, so if there were some sort of privilege escalation and you could somehow get out of the container, you would still be running as the non-root user on the host.
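As a hedged illustration of that rootless workflow (the image, name and port are placeholders, not from the talk):

```sh
# As a regular, unprivileged user: no daemon, no sudo.
id -u                                                # not 0
podman run -d --name web -p 8080:80 docker.io/library/nginx:latest
podman ps                                            # the container runs in this user's namespace
```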
It's daemonless, unlike Docker, which has a client-server architecture where you connect from a client to the server, the server spins up the container, and then it wires up the connection between them for stdin and stdout. Podman doesn't have any kind of daemon; it's just a fork and an exec: Podman spawns the container runtime, the container runtime spawns the container, and that's it. Podman still aims to be Docker-compatible — it has the same flags as Docker for the most part; you can even set up an alias from docker to podman and it would just work.

Cool things that it has: despite being daemonless, you can still have Podman act like a daemon if you want to access it remotely, maybe because you want a host machine with Podman running and you don't want to run Podman from your laptop. You can still do that, and it's managed by systemd — it's not Podman being a daemon, it's systemd doing that.

Integration with systemd is another thing the Podman folks wanted to put into the project. Maybe you have some legacy application with multiple services and systemd managing all of that; my understanding is that there was some issue with Docker and they didn't want to allow that, and Podman does allow it — you can have a Podman container with systemd inside, managing the services for you.

This is actually another cool one: you can label your containers with a special label, and if you run the podman auto-update command, Podman will check, against your internal or remote registry, whether there is a new image for your container; if there is, it will pull it and re-run it, so it updates the container. You can also have a systemd unit managing that if you want things to be more automatic.

It supports pods. Just in case you're not familiar: in Kubernetes, the single unit of execution is a pod, not the container itself, so when you want containers that share a life cycle, and you want them to share storage and networking, you put them into a pod. That's actually what gives Podman its name, I think — it's the pod manager — because it allows you to create pods: you create a pod and then you put containers in it, so they can share storage and have some communication between them.

And there's generation of Podman pods from Kubernetes manifests: you can take something like a deployment manifest from Kubernetes, run a command, and create containers out of it — and the other way around, you can generate Kubernetes manifests from Podman objects, which is actually pretty cool.
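A few hedged command sketches for the features above; the names, labels and images are illustrative, not from the talk:

```sh
# Auto-update: label the container so Podman knows to check the registry.
podman run -d --name web --label io.containers.autoupdate=registry \
    quay.io/example/web:latest
podman auto-update --dry-run        # report what would be updated

# systemd integration: generate a unit for an existing container.
podman generate systemd --new --name web \
    > ~/.config/systemd/user/container-web.service

# Pods, and the Kubernetes manifest round-trip.
podman pod create --name mypod -p 8080:80
podman run -d --pod mypod docker.io/library/nginx:latest
podman generate kube mypod > mypod.yaml    # Podman objects -> Kubernetes manifest
podman play kube mypod.yaml                # and back again (e.g. on another host)
```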
Now another quick introduction, for Ansible this time. It's an IT automation tool — a tool that allows you to automate things for your infrastructure, whether that's deploying servers, configuring your servers, deploying applications, that kind of thing. It's open source with Red Hat; that's what we do. It's very easy to learn and use; this is actually one of the cool things about Ansible, the learning curve is very low, it's very easy to get going and automate things with it. It's agentless: with similar tools you have to install agents on your servers to run automation, but Ansible is not like that — you run Ansible from a bastion, or control machine as we call it, and you run automation against your target hosts, so you don't have to install anything on them; provided you can connect to the device, with whatever connection plugin fits it, you can just do it.

Ansible is built on plugins: pretty much everything in Ansible is plugin-based, starting with the connections. Maybe you have devices that do not use SSH, say a Windows machine, so there is a connection plugin for WinRM, for example; or maybe you have your hosts in an inventory, which is a static file, but you want to somehow pull your VMs from AWS, so there are inventory plugins. Pretty much everything in Ansible is plugin-based; you can mix and match, and you can extend it.

Use cases for Ansible: configuration management — install packages, set firewalls, create users and groups, that kind of thing; infrastructure provisioning — you can create VMs, containers, clusters on AWS, GCP, Azure, DigitalOcean, OpenStack, VMware, you name it, we have a lot of modules for that kind of thing; application deployment — if you need to deploy applications to your bare-metal servers, or containerized applications, or you can even build VM images with your applications, push them into your cloud and create instances from them. And that's my favorite one: because Ansible is so simple and has such wide integration with different services, it can act as the glue to basically automate all the things — all the clouds out there, networking; you can automate your networking devices and your firewalls, and we're also getting into the edge right now.

With Ansible you can put your content — modules, playbooks, roles and all that — into collections, which contain a particular domain of automation, so you have a collection for AWS or for VMware, that kind of thing, and you can pull them from Ansible Galaxy, which is a repository on the internet where you can browse and look for content to automate your stuff.

This is the collection there is for Podman. It has been largely developed by another fellow Red Hatter, Sagish Naiman, who has done a phenomenal job creating a very feature-rich collection for automating Podman containers. Because it's not part of Ansible core, it's available from Ansible Galaxy, so you have to install it with the ansible-galaxy CLI. It's a resource-based kind of collection, so you have resources for managing Podman containers — you can stop and start containers, you can even generate systemd units for the containers you want to run — modules for building and pulling images, logging in to registries (whether internal or public ones), and creating networks in Podman; you also have a module for integration with Kubernetes, for managing pods and volumes.

As for use cases: you can build and publish container images and push them to Docker Hub or Quay or whatever; you can deploy your containerized applications managed by systemd — you can run a Podman container and also have a systemd unit generated, so you can start and stop it with systemd, which is a really nice feature, because you can do pretty much anything with Ansible. That's another good use case: you can create an instance, then configure it with a role that deploys a containerized application on it, then create an AMI from it, and then you have a blueprint — an AMI which contains your containerized app — and you can create instances from it. And you can manage your volumes and networks, for doing backup and restore of your containerized applications, because you can not only create volumes, you can also gather information about your existing Podman objects.
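A hedged sketch of what using the collection looks like (the host group, image and ports are illustrative, not the demo's actual code); the collection is installed first with `ansible-galaxy collection install containers.podman`:

```yaml
# Minimal playbook sketch using the containers.podman collection.
- name: Run a web container with Podman
  hosts: podman_hosts
  tasks:
    - name: Start an nginx container
      containers.podman.podman_container:
        name: web
        image: docker.io/library/nginx:latest
        state: started
        ports:
          - "8080:80"
```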
There are info modules for all of that. And if you have Podman containers running, you can also generate systemd units for them with it. Also, following that pod feature of Podman, it can help you migrate workloads from Kubernetes to Podman and the other way around.

I'm not sure — this is going to be a hard one, because I don't have my terminal in front of me, so I won't be able to do the demo, sorry. But if you go to the last slide it contains the link — yeah, I can even click here... yikes. So if you go to this link you will find some playbooks and scripts with a very basic role that builds an image based on an nginx image, creates a volume, pushes some HTML files onto it, and then creates the AMI with the AWS modules and then tears it down. Basically it shows how you can have a very basic role for creating your containerized application and reuse it for creating a cloud image, which you can then use to launch your instances. And we're out of time.

Yeah — so for the next one, is it the same setup for presenting? Because I have another big demo on Sunday and I would like to try it out before. Sure, you know what, it's good to try it in the morning. And we're up with this one, and the topic will be the power of GraphQL.

How annoying is it to see a spinner while the application takes forever to load? Performance is crucial to user experience, and slow-loading data can really result in a bad experience. That's why today I'll be exploring one of the tools to improve data fetching and data loading: it's called GraphQL. Do I have a pointer? This works, right? It should work like that... sorry about that, sorry, technical issues. I don't know, I don't have a... I think this should work — where is it?
The shift — I'm just pressing whatever... never mind, I will just use my keyboard, it's okay. Alright, so today I'm presenting the power of GraphQL: we're going to explore the powerful capabilities of this tool and how we can use it for APIs in modern applications. But before I do that, a quick introduction about myself — it's not working again — a quick introduction about myself: hello everyone, my name is Yara Debian, I'm Lebanese, I currently live in Barcelona, and I'm a software engineer at Factorial.

Before I start talking about GraphQL, I'm just going to go over REST APIs real quick. We all know about REST APIs; they're the go-to solution when you're dealing with data fetching and data manipulation. But with REST APIs we can have some problems. One of them is under-fetching: under-fetching is when I'm requesting data and, to get all my data, I need more than one REST API request, because I'm not getting enough from a single request. We also have the problem of over-fetching, which is the opposite: it's when I get more data than I actually need in my request. All of this can result in slow-loading data and the appearance of spinners — and we do hate spinners in our applications — it reduces the responsiveness of the web application and undermines its overall usability. So let's try to solve this pain with GraphQL.

First, what is GraphQL? GraphQL is an API query... ("Okay, I found this on the web, check it out.") Sorry about that, it's my friend Siri. Siri, shut up. Alright, so GraphQL is an API query language, and it has three main pillars. The first one is the single-request property, and what I mean by that is that we can get all the data we need in one single request; we're going to see later how we can do that. The second one is easy data manipulation: GraphQL supports precise and simple data fetching. And the last one is the strongly typed schema; when I say strongly typed schema, we can think about it as a contract — a strict contract between the server and the client.

More about the schema. Here in my diagram I have the schema that's defined, and inside my schema I have all the logic, everything I need to know for my application: I have the objects, I have the models that I want, and I also define the operations that I can do on my data — the queries, the mutations, everything. What happens is that the frontend, or the client, requests some data or wants to manipulate some data, so it talks to the backend, or the server; the server needs to check with the schema, which holds the contract, to see what kinds of operations are allowed and what types are defined. Everything happens through the schema, and the DB is there just to provide the data or to execute the manipulations, the mutations, on the data.

Now, what is the schema and how do we construct it? We need to understand the types. We have four main types in GraphQL. The first one is the scalar; these are the basic types — they can be integers, strings, booleans. The second one is for when the type isn't already predefined in GraphQL: I construct something called a custom scalar, and this is used to express things like date or time values. The third one is the enumerator, which is for when I have a discrete set of choices that I want to express and I know what output I am expecting; for example, to define a role, I can define the role as being an admin, a guest, a default or a manager — I know what kind of output I'm getting.
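A minimal sketch of those last two kinds of types; the names are illustrative, not the speaker's slides:

```graphql
# A custom scalar for date/time values, and an enum for a discrete set of roles.
scalar DateTime

enum Role {
  ADMIN
  MANAGER
  GUEST
  DEFAULT
}
```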
And the last one is the object; the object is actually a combination of all the previous types, and also of other objects — they are the complex data structures that, when we see them, we can actually understand.

The last thing I want to talk about is the query. What is a query? It's a function used to request and fetch data from the GraphQL API. (Siri, shut up.) The cool thing about a query is that it lets me define what I want, and it gets me exactly the data that I need. So here, in this really simple query, I'm defining what I want as a response: I'm trying, for example, to get a post, and the query takes an argument like any function — here I'm taking an ID — and I ask the post to return the ID, the title and the other attributes, for example. Everything is defined by the developers, and I don't have any surprises from the backend about what I'm getting, so it makes the API more efficient and more flexible for the developer.

I'm going to leave that there for a second. Now that you have enough basic knowledge of GraphQL and how things work, I'm going to jump to a quick demo. Suppose I have this web blog application — I know, design is my passion — so I have a user, I have some posts, and I have a friends list. Now, suppose I want to render this page: what I will need is three requests — the first one to get the user, the second one to get the posts, and the last one to get the list of friends. Okay, now suppose I want to take my application to the next level and I want to have this kind of popover or tooltip, and inside it I will also be displaying some information about the author of every post. So I will need three additional requests to get that user information. That's it — more than four requests just to get this simple page. Imagine the amount of spinners I will be having — that's crazy, and we hate that, we hate spinners. So let's try to solve this with GraphQL.

First, I will start with my schema, just to understand how things are done. I have a user; I define the user attributes that I have — ID, name, email, whatever — and I also have posts that belong to the user. What is a post? It's actually another object; inside the post I also define the attributes, and the post has an author — and what is the author? It's actually a user. Lastly, also inside the user, I have a friends list, and what is a friend? Also a user. So now we see how things are connected together.

Now that I've started my schema, I'm going to construct my first query. What is this query doing? It will fetch the data to render the page that I have. Inside my query I have my argument, my ID, and again I'm defining what I want — the client is telling the server what it needs. I'm saying: for this user I need the user attributes, I need the list of posts, and I need the list of friends of this user. All of this is done with just one request, so no more under-fetching here; I don't need more requests just to get one simple piece of data. It's as simple as that. And this is how my output would look: a strict, basic JSON-format output — I have all my information, I have an array of posts, I have an array of friends, as simple as that.
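A hedged sketch of the schema and the single query described above; the field names are illustrative, not the slides' exact code:

```graphql
# Schema (server side)
type User {
  id: ID!
  name: String!
  email: String
  posts: [Post!]!
  friends: [User!]!
}

type Post {
  id: ID!
  title: String!
  author: User!
}

type Query {
  user(id: ID!): User
}

# Query (client side): one request renders the whole page.
query GetUserPage($id: ID!) {
  user(id: $id) {
    id
    name
    email
    posts { id title }
    friends { id name email }
  }
}
```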
Now imagine I decide that in my application I don't want to show the email anymore — I don't care about the email, I don't need it, I don't need to render it. With a REST API I would basically have to accept it, because that's what the backend gives me for my get-user request: I would have to deal with it, parse it and do all the logic there, or I would have to change the logic in the backend. But with GraphQL, all I do is drop it from the query — I don't need it here — and this is how my output would look: no email, no overload of data, no wasted transfer. No more over-fetching here; I'm getting exactly what I need.

So yeah, let me take a quick look at my application again. If we look again, we can see that we have something in common here — we have some common fields — and if I take another look at my query, I can see that inside my query I'm repeating myself: I'm repeating the attributes again and again, and I think most of us are lazy when it comes to writing the same thing over and over again. So what if we had a way to connect everything and group it in one single place, something that, every time I call it, just gives me the information of a user instead of me repeating myself? Well, that's possible — it's possible with something called a fragment.

What is a fragment? The simplest way to think about a fragment is as an abstraction: it extracts part of a query, a sub-query, that I can reuse everywhere like a puzzle piece. I can use it in every query, and I know what that type is going to give me. So for example here, because I have the type User, I can construct a fragment — call it the user fragment — and every time I call this fragment it's going to get me the ID, the name and the email. This is how my query would look: I would just be calling that fragment in both places, for the user and for the friends.
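A hedged sketch of that fragment and the query that reuses it (the names are illustrative):

```graphql
fragment UserFields on User {
  id
  name
  email
}

query GetUserPage($id: ID!) {
  user(id: $id) {
    ...UserFields
    posts { id title }
    friends { ...UserFields }
  }
}
```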
Now, the other cool part about it is that it makes my code more modular. Why? Suppose I decide to make my application even more sophisticated and I want to add a bio attribute here. If I had the old query, I would have to go everywhere I'm requesting the user and add the bio again; but with the fragment I can just add it here, and the bio will be requested everywhere I'm calling the fragment. So yeah, that's pretty cool.

Now, everything I just explained can be possible — or rather, it's a bit complicated to do — without Apollo Client. Apollo Client is a simple library used to integrate GraphQL with our frontend project, with React, Angular or any other framework we're working with; it provides caching capabilities and makes it easy for developers to use GraphQL. Here I'm going to combine everything that we did; it's just a simple JavaScript file — I'm sure if I ran it right now it would fail — but it's pretty basic to understand how things are connected. I'm using the Apollo Client here, and I'm importing the useQuery method, which I'm going to use to call the query that I constructed. So I have my query that I constructed before — I'm not going to put it here — the get-user query; and what I'm doing, for example, is defining a user profile component, and inside it I'm calling useQuery from Apollo, passing it the query I constructed before and also the argument, the variable, which here is the user ID. What it returns is loading, a boolean value to know whether it's actually still loading or not, so I know whether to display the spinner — the only spinner — plus the error field, and also the data. That's how it's going to look: I have the user, for example, and I can use it in my component — user.name, .friends, .posts — everything that I'm actually returning in my query. Pretty cool.

And there is more — I haven't even scratched the surface of GraphQL. We still have mutations (fifteen minutes is not enough to talk about that), we also have pagination, which is used for extensive data, and error handling, which, as you saw before, is easy because GraphQL dedicates a field just for errors in every response, so it's very easy to handle and parse. So yeah, I highly advise you to check out GraphQL; it's a really interesting tool to have, and it can take API management and data management to the next level. To learn it, there are a lot of tutorials and a lot of documentation online — the official documentation for GraphQL and the official documentation for Apollo as well, plus a lot of free resources on the internet.

Last but not least, I just want to go over some real-life cases of GraphQL, starting with Facebook, which — I forgot to mention — is the creator of GraphQL; Facebook is using it for the Instagram application. We also have GitHub, which is using GraphQL for its public API; Twitter, which uses GraphQL for the Twitter Lite application — Twitter Lite is the application for slow-connection environments, so they're trying to make the application more performant and require less rendering, and this is great for keeping over-fetching and under-fetching in check; and lastly Airbnb, which is using it to unify all their APIs into one, which makes it easier for developers to maintain the Airbnb application. So yeah, we can see that GraphQL has made a huge impact in the tech industry. Let's try together to unlock the power of GraphQL. Thank you.
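Going back to the Apollo Client wiring described above, here is a minimal TypeScript/React sketch; the component, query and prop names are illustrative, not the talk's actual file:

```tsx
import { gql, useQuery } from "@apollo/client";

// The query constructed earlier (shortened here).
const GET_USER_PAGE = gql`
  query GetUserPage($id: ID!) {
    user(id: $id) {
      id
      name
      posts { id title }
      friends { id name }
    }
  }
`;

function UserProfile({ userId }: { userId: string }) {
  // useQuery returns loading (drives the one spinner), error, and data.
  const { loading, error, data } = useQuery(GET_USER_PAGE, {
    variables: { id: userId },
  });

  if (loading) return <p>Loading…</p>;
  if (error) return <p>Something went wrong</p>;

  return (
    <div>
      <h1>{data.user.name}</h1>
      <p>
        {data.user.posts.length} posts · {data.user.friends.length} friends
      </p>
    </div>
  );
}
```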
Sorry, I didn't hear the question. ... I don't, but the schema does it by itself — it's Apollo, the integration with Apollo, that validates everything. That's the thing: I only pass my arguments to it, and the schema is how it's defined, because in my schema I define my user type, for example, and the user type says the ID is always an integer, the name is a string, or the date has this format. So every time I send a request with my Apollo Client, it's going to check the query, check what kinds of types I'm passing, for example, and return the result for me — and if anything doesn't match, it's going to throw the error by itself.

Is it possible to get a partial response for the query? It depends on what I'm sending — that's the thing, I define in my query what I need, so if I need a partial response I just tell it I want it partial; I only define what I need, that's the principle. So yeah, you're welcome. Thank you, thank you.