Good morning everyone. I'm happy to be here today with Silvan and Philippe, who will talk about the Ansible Automation Platform. We'll take questions at the end of the session, and hopefully also afterwards.

Alright, thank you very much. So welcome to today's presentation about Ansible Automation Platform as a service based on OpenShift. Today my colleague Silvan and I will show you how we at SIX use AAP together with some customization to provide a service for our internal developers, so that they can bootstrap AAP on demand. First of all, let me show you what you can expect from today's session. We will start with a quick introduction, followed by an explanation of the architecture of AAP2 as well as the basics of Kubernetes operators, since when we speak about OpenShift it's important that you also know the basic concept of operators. We will then cover bootstrap and configuration, that is, how we enhanced the basic operator so that it fits our zero-trust environment, as well as execution environments, which go hand in hand with that. Towards the end, we will show you how we migrated from the old approach, Ansible Tower, to this new AAP on OpenShift. And to conclude, we will show you the challenges and the takeaways we have. But first of all, let me pass the mic over to my colleague Silvan to introduce himself.

Thank you, Philippe. Hello everybody, I'm Silvan Chen. I'm a principal consultant at Red Hat. I joined in 2017, and it's my first DevConf; I'm very happy to be here. I've been working with containers for quite some years, as well as with Ansible, and I'm glad to present today the work done with my two favorite products, Ansible Automation Platform and OpenShift, together. I would like to hand back to Philippe, who will introduce himself.

Thank you very much, Silvan. My name is Philippe Putter. I work as a Kubernetes engineer at SIX. For those who do not know what SIX is: SIX operates and develops infrastructure as well as software for financial services in Switzerland and in Spain. Therefore zero trust is a must in our environment. As for my background, I've been working with container technology for quite a while now, for eight years. I started my automation journey with Puppet, now moving bit by bit to Ansible. But as you may know, it's not that easy to let go of one's first love, right? But that's enough of the non-technical things, so let's move over to why we are here.

When we go back some years, when we introduced Ansible in our company, we thought about deploying the Ansible Automation Platform, or as it was named back then, Ansible Tower, on OpenShift. It was supported to deploy Ansible Tower on OpenShift, but there was never an operator officially supported by Red Hat. So we decided to use the upstream AWX operator, already back in the days of OpenShift 3, to deliver a service to our internal employees, with all the benefits that brings: a self-service approach, infrastructure as code to deploy their instances, and, because it runs on OpenShift, consistent lifecycle automation. But now, with the move to OpenShift 4, Red Hat officially provides this Ansible Automation Platform operator, which is quite cool.
So with the move to OpenShift 4 we decided to take this official operator. But as you may know, it's not easy to just take something, put it in a zero-trust environment, and run it; you also need some customization, and that's what we did. But first things first, I'll pass back to Silvan so that he can explain what benefits AAP2 really brings.

Thank you, Philippe. So I will talk more about AAP2, but first: what does it bring compared to Ansible Tower? Basically, we have the decoupling of the Ansible Automation Platform into two parts. One is the control plane, also called the automation controller, and the second part is the execution plane, where the user playbooks run. This is really interesting on OpenShift, because you can have it running in a microservice way. Then, regarding dynamic cluster capacity, you can rely on OpenShift to run all the different playbooks and job templates as pods. Before, that was not possible: everything ran in the same pod, which was really a monolithic approach. AAP2 breaks things down into a microservice approach. At the bottom you can see the automation mesh; this is what bridges external VMs to OpenShift. Unfortunately, when you deploy AAP2 on OpenShift, at least on version 2.3, this is not yet GA; at the moment it is a technology preview, hopefully GA soon. So this is the difference when you run it on OpenShift today compared to the traditional way. Obviously, you get central management with the automation controller, different teams can use it independently of each other, and you really have it as a service. We'll talk more about that right now, and about how users can provision their own AAP on OpenShift using operators. But first things first: we need to explain what a Kubernetes operator is, and Philippe will do that.

Yeah, you mentioned it: Kubernetes operators. If you come from the Ansible world, maybe operators are not the thing you work with on a daily basis, so let me explain. On this slide we see a normal OpenShift cluster, and on top you see the API and etcd. etcd is where OpenShift stores all the state of the applications running in it. So what is an operator? An operator is a piece of software running in a container which you use to automate tasks. In the middle of this picture you see, as an example, the Ansible Automation Platform operator. This operator has so-called reconciliation loops: it constantly watches the state in etcd, and when something changes there, it applies the changes to the cluster. On the bottom line, you see the customer namespaces. So how does it work for the customer? How can a customer, an internal developer, actually interact with this operator? There are these so-called custom resources: users create custom resources to interact with the operator. As soon as a customer creates a custom resource, the operator notices it and applies the change. In this case, it notices that a customer or employee wants to have an automation controller, so it automatically detects that and bootstraps the automation controller in the customer namespace.
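To make that flow concrete, here is a minimal sketch of what such a custom resource could look like. The apiVersion and kind follow the AAP operator's documented CRDs; the name and namespace are hypothetical, so verify against the CRDs installed in your cluster:

```yaml
# Minimal sketch of an AutomationController custom resource.
# Creating this object is all a user does; the operator's reconciliation
# loop notices it and bootstraps the controller in the same namespace.
apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationController
metadata:
  name: my-controller
  namespace: my-team
spec:
  replicas: 1
  ingress_type: Route   # expose the web UI through an OpenShift route
```

Sizing fields such as per-container resource requirements hang off the same spec; they come up again later in the talk.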
But is it only about this Ansible Automation Platform operator? Is it only about bootstrapping automation controllers, or is there more? Can you explain that for the customers?

Thank you, Philippe. We'll go over bootstrap and configuration; the rest will be explained on the next slide. So basically, the Ansible Automation Platform operator is responsible for bootstrapping the automation controller. We are really talking about the control plane here; the execution side comes later. It can also do the LDAP configuration at bootstrap of the automation controller, and it handles backup and restore of the database as well as upgrades. Every use case in that area is performed by the Ansible Automation Platform operator. However, in an enterprise environment you want even more features, so we pushed the automation further with the SIX AAP operator. This is an internally built Ansible operator that does the following. It injects the subscription needed to run the automation controller, because you don't want every user to have to inject that. It customizes the UI to make it more corporate to SIX. It also injects default settings such as external logging information, so that audit logs are forwarded to an external logger. And it configures things such as the container group defaults with resource management and so on; we'll come back to that. Essentially, users create some custom resources and then have everything ready in minutes.

As I said, we really focused on an Ansible operator to build this, and it was created using the Operator SDK. We could have done it in Golang, but here it makes more sense to use Ansible. Fun fact: the Ansible Automation Platform operator was also developed with Ansible, so it makes sense to use the same technology here. Now let me go further into the development and how we avoid conflicts between the two operators. We need to make sure our operator comes in at a later stage, because it needs the automation controller to be bootstrapped first. So we check whether the status is actually ready, as well as whether the API is up and running. This is important because we inject some configuration, and we communicate with Kubernetes or OpenShift using the kubernetes.core collection. We have different modules there: k8s, k8s_info; we can copy files, we can execute commands, and this helps us inject our configuration. But what about secret management? In an operator you cannot use Ansible Vault, so for that we use the HashiCorp Vault lookup plugin to fetch all the secrets in a secure manner.

Having explained the logic of the operators, let's look at an example of how people can bootstrap this within their own namespace in OpenShift. In the following picture you can see the deployment of the automation controller together with a standalone PostgreSQL database. You have a standalone PostgreSQL pod, which is simply one container, and then we have the automation controller pod, which contains four containers: one is redis, and the others are task, web, and EE. The task container is responsible for scheduling all the playbooks; it's very important in terms of resources to have it properly configured. The web container is for the web interface that you know from the Ansible Automation Platform, and the EE container is for the receptor.
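Going back to the operator internals for a moment, here is a sketch of the readiness gate and Vault lookup just described. The kubernetes.core modules and the community.hashi_vault lookup are real; the task layout, variable names, and secret path are hypothetical:

```yaml
# Sketch: wait for the automation controller before injecting configuration.
- name: Wait until the AutomationController custom resource reports status
  kubernetes.core.k8s_info:
    api_version: automationcontroller.ansible.com/v1beta1
    kind: AutomationController
    name: "{{ controller_name }}"
    namespace: "{{ ansible_operator_meta.namespace }}"
  register: controller
  retries: 30
  delay: 20
  until: >-
    controller.resources | length > 0 and
    controller.resources[0].status is defined

# Sketch: fetch credentials from HashiCorp Vault instead of Ansible Vault.
- name: Read the admin password from HashiCorp Vault
  ansible.builtin.set_fact:
    admin_password: >-
      {{ lookup('community.hashi_vault.hashi_vault',
                'secret=secret/data/aap/admin:password') }}
```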
So what kind of changes can you make to the automation controller? You specify a custom resource called AutomationController, and in it you can set how many replicas you want, and, for each of these containers, the memory requests and limits, because it's important to size it properly. You may be running this on a shared OpenShift cluster, so you will not have infinite resources. You can find more details in my blog post at the bottom, which will also be in the references; there I share everything regarding the LDAP configuration at bootstrap and how you can integrate CA bundles to connect the external services within your company. Then there is something regarding scheduling that I would like to share. For example, if you want to schedule your automation controllers on specific nodes, you can do it using labels. You can also spread them across nodes so that they don't all sit on one specific node; that's quite important. Last but not least, you can schedule them on dedicated nodes using the concept of node taints and pod tolerations. Once again, you can find more details, with a lot of different customization use cases, in the reference architecture. It was published in Q1 this year, so it's quite fresh.

A user will definitely use this AutomationController resource, which is basically a YAML file; as we said, it installs the automation controller within your namespace. But then we have the customization part, where we developed our own Ansible operator. This is the second YAML file, which you create, and it will automatically inject the license. And how do we do the mapping? The mapping takes the name of the automation controller, so that our operator knows which one to configure. Then it injects everything within ten minutes, basically.

All right, let's talk about day-two operations. It's good to have it provisioned, but then you want to tune it accordingly: you want to do upgrades, backup and restore, and you want to monitor the resources. So how do we do that? First of all, upgrades are very simple. Whenever you upgrade Red Hat Ansible Automation Platform to a specific version, for example in the coming months to AAP 2.4, the operator is responsible for upgrading all the automation controllers you have in your cluster. The second part is monitoring. Why is it so important? Because it gives you insight. In OpenShift, if an application doesn't have this kind of monitoring, I will just get the pod information and that's all; I will not know what it does, how many jobs it runs, and so on. So within the custom Ansible operator we also implemented the creation of a monitoring workflow, so that we can scrape the Prometheus metrics in real time. What do we do? We create an auditor user, a read-only user, in the automation controller. Then we create a Kubernetes secret with the required information: user name and password, for example. Then Prometheus needs to scrape this information. How do we do this in OpenShift? First we need to enable user workload monitoring, which lets you monitor your own services. Then we use a ServiceMonitor, in which we can say, for each namespace: I want to monitor this endpoint using that Kubernetes secret.
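A sketch of that wiring might look like this. The ServiceMonitor and its basicAuth stanza are standard Prometheus Operator API; the names, labels, port, and credentials are hypothetical, and /api/v2/metrics is the controller's metrics endpoint:

```yaml
# Read-only credentials for scraping the controller's metrics endpoint.
apiVersion: v1
kind: Secret
metadata:
  name: controller-auditor
  namespace: my-team
stringData:
  username: auditor
  password: not-a-real-password
---
# ServiceMonitor picked up by OpenShift user workload monitoring.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-controller-metrics
  namespace: my-team
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: my-controller   # matches the controller service
  endpoints:
    - port: http
      path: /api/v2/metrics                   # controller Prometheus endpoint
      basicAuth:
        username:
          name: controller-auditor
          key: username
        password:
          name: controller-auditor
          key: password
```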
So this is done automatically for our users at SIX, and then we can display the information in Grafana. All right, let me show you an example. Basically, we have two panels. In the first one you can see, for each container, the resources it is using. Here we are talking about memory, because we had a bottleneck there, and you can use the container_memory_working_set_bytes metric: for each container, you know how much it consumes. This is very important, especially in the case of the automation controller, because it contains four containers; by default the OpenShift UI only shows the pod memory usage, so it's very hard to know which container needs more memory. At the bottom you can see the automation controller metrics; this is the information we scraped with the monitoring workflow. As you can see, it is highly correlated with the first panel and with the number of jobs. I'm displaying here the running jobs in total, so at time t I know how many are running. Same thing for the pending jobs, because there is a queue: you cannot process 10,000 jobs at the same time, so they queue up, and that's very important to see. If you want to run more jobs in parallel, you can simply increase the memory.

All right. Having said that, Philippe will now take over and talk a bit about backup and restore within OpenShift, especially in the case of the automation controller.

Yeah, thank you very much, Silvan. Monitoring is quite important, but what's even more important if something goes wrong is a backup. The official AAP operator from Red Hat offers you the possibility to create a backup. It's not very well documented in the official documentation, but you can always go upstream and check the AWX documentation, where you can find all the possible configuration settings. With the latest releases of AAP they even introduced two cool new things I want to highlight quickly. First and foremost, the cleanup of backups on delete: whenever you delete the backup custom resource, it also deletes the backup data. Second, you can modify the pg_dump, the Postgres dump. For example, if there are events you don't need to keep, you can exclude them and save some space in the backup. Here we have the name; the name is obviously important if you want to restore. The backup is stored on a PVC; that's currently the only way to store a backup, so you can't use an S3 bucket or something like that. Once you have a backup, you can restore it with a similar custom resource called an automation controller restore. There you just reference the backup you want to restore, and you add the name of the deployment; if you use the same name, you need to do some additional steps, which are linked on the slide. Depending on the size of the backup it can take a while, but you can always see the status of the restore in the status section.
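A sketch of the two custom resources just described, with kinds and fields following the AAP operator's documented backup and restore (analogous to the upstream AWXBackup/AWXRestore); the names and PVC are hypothetical:

```yaml
# Back up a controller; the dump and artifacts land on a PVC.
apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationControllerBackup
metadata:
  name: controller-backup-1
spec:
  deployment_name: my-controller    # which controller to back up
  backup_pvc: controller-backups    # a PVC is currently the only backend
---
# Restore from that backup; reference it by name.
apiVersion: automationcontroller.ansible.com/v1beta1
kind: AutomationControllerRestore
metadata:
  name: controller-restore-1
spec:
  deployment_name: my-controller
  backup_name: controller-backup-1  # the backup custom resource to restore
```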
If that's that, back to Silvan for the execution environments.

Thank you, Philippe. We talked about the control plane, the automation controller, but what about the users? They have it, but they want to run their playbooks. How do they do that? For that we have automation execution environments. What is an execution environment? Let's recap. It's a container image based on the Universal Base Image from Red Hat, so you can pull RPMs and so on to have a common base. Then you add everything needed to run your playbooks: all the dependencies, collections, libraries such as RPMs or Python modules, as well as the ansible-core version. We pack everything together, and this is the image that is used to run the playbooks, so we can use it and scale it out. But how does it work in an enterprise environment that is disconnected? Basically we use Ansible Builder for this, but we have a different approach here because we are not connected to the internet, so we need some customization. We already created the Ansible base images within SIX. They contain additional settings: we have the CA bundle from SIX so the images trust our systems, we have the Private Automation Hub running to fetch all the Ansible collections, we have Artifactory for everything related to the Python modules we want to fetch, and we have the UBI mirror where everything else is mirrored. So the users don't need to do anything special: they just use our SIX base images, and all the dependencies are gathered within the company environment.
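As a sketch, an execution environment definition in a disconnected setup like this could look as follows (ansible-builder's version 1 format; the internal registry, base image, and dependency file names are hypothetical):

```yaml
# execution-environment.yml, consumed by ansible-builder.
---
version: 1
build_arg_defaults:
  EE_BASE_IMAGE: registry.example.com/six/ee-base:latest  # internal base image
dependencies:
  galaxy: requirements.yml   # collections, resolved via the Private Automation Hub
  python: requirements.txt   # Python modules, resolved via internal Artifactory
  system: bindep.txt         # RPM packages, resolved via the internal UBI mirror
```

Running ansible-builder against a definition like this produces the image that the execution plane spins up for each job.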
Then what does it look like on OpenShift? We have the control plane as displayed here, where we can see the resource management, the monitoring, how much memory it uses and so on. And then we have the execution plane. As I said at the beginning of the presentation, this is separated: jobs are not running in the same pod. It really spins up new pods using this container image, the execution environment. And here we also leverage container groups. What are container groups? Basically, a pod specification. You may want to mount additional volumes, or you may want more memory allocated to your container; this is how you do that with container groups. So first you use execution environments, and later you can even customize them with a pod specification using container groups. All right, now I'd like to pass back to Philippe, who will talk about the migration from Ansible Tower to AAP2.

All right. So we have a solution now, but you also need to convince the customers, our internal employees, to use this new solution, right? So what does it actually look like, and how fast is it? With our new solution, customers create two custom resources: one for the automation controller, plus one for the customization. They can do that in ten minutes; we already have templates for that. After applying them, of course, they also need to migrate their old Ansible Python environments to execution environments. Once that is done, they have an environment where they can run their first job. And as Silvan said before, we introduced this monitoring stack to give the customers insight into what's going on, because if you really have 500 concurrent jobs, you may hit some limits, memory- or CPU-wise. So it's an iteration where you need to fine-tune your resources. But that's already it: it's quite simple to onboard new customers to this approach.

That said, it looks nice and it's easy to use, but in the background we had some challenges when introducing this new stack, and we want to show you the recent and ongoing ones. First and foremost, there was the Galaxy collection install, which failed with version 2.14.5. It's already fixed with the latest version, which we're quite happy about. Another bug, and this was actually the trigger for introducing the whole monitoring stack, was that Ansible tasks were marked as running but were actually not present in the job queue. The reason was that the automation controller ran out of memory, which is quite hard to detect if you don't have monitoring in place. That said, these issues are solved, since we adjusted the resources and the version with the fix is out. But we are currently hitting some other issues. One of them is the usage of the underlying OpenShift node storage. It's not that obvious, but the automation controller, especially the task container, uses an emptyDir for caching its jobs, and an emptyDir on local disk can't really be limited. So it can happen that your task container fills up the node with its temp directory, and if you are on a shared cluster, that can be quite problematic. It's an open bug, and hopefully it will get fixed at some point. There is also the rsyslog configuration, which is not reloaded when the web container restarts; we have a workaround there that triggers the reload with an API call. It's not fixed upstream; it's just a workaround we implemented.

But even with these challenges, and challenges are quite normal, we have some takeaways. With this new solution we have a self-service for customers, for internal employees, to bootstrap their own environment in under ten minutes if they already have the YAMLs available. They can use the templates, so they don't need to go through documentation; they just apply the YAMLs and bootstrap their own environment. It's fully functional in disconnected mode, since we have the customization operator, which does all this work for the customers. And since it's based on OpenShift, you get the benefits of scalable and reusable containers. If you want more references, we have all the references we used while building this solution on this slide, especially the blog post at the bottom. It's not there just to make the reference list look longer: it's written by Silvan, so if you're interested in how we did it, you can also get some more code snippets out of it.

Good. With that, we come to the point where we open the stage to you for any questions you may have. Last chance for any questions. We will also be outside, so if you have any questions, feel free to ask them afterwards. Thank you very much for attending this session; I know it's quite hard after the event yesterday, but luckily we got some attendees. Thank you very much.

Hello, everyone. Can you hear me? Well, welcome. I am very surprised that there are so many people here after yesterday's party; it was hard for me to come. Thank you to DevConf and all the volunteers for their hard work on this wonderful conference. I am very happy to speak here. Let me start by talking about myself a bit. My name is Ege. I live in Istanbul, Turkey.
For the last two years I have been working on Percona's Kubernetes operators as a software developer. Before that, I worked as a system administrator and a web developer. You can find me on GitHub and Twitter. I don't use Twitter that much, but if you want, you can DM me about anything.

Okay, let's start. Kubernetes has been a hot topic for many years, and we are now seeing a surge of interest in stateful workloads on Kubernetes. The Data on Kubernetes community did a survey, I believe in 2021, and found that about 90% of respondents think Kubernetes is ready for stateful workloads, and about 70% of them are already running databases on Kubernetes. So that makes me ask: why do people want to deploy their database on Kubernetes? You can argue it's hype, and maybe you're not wrong. But I believe the reason is that people like having a standard API for their whole infrastructure, and they like the ease of scalability of Kubernetes, and they want the same for their database.

So this talk is about best practices for deploying your databases on Kubernetes. Honestly, I don't like the term "best practices" at all, because it's very context-dependent: every team, every company has their own best practices, and when people start talking about best practices, it always feels like a cargo cult to me. So you can ask: why did you select this title for your talk? I didn't. I am filling in for another speaker whose talk was already accepted. But it is the way it is. The title says best practices, so I shall deliver some best practices for you. I will give you two best practices for deploying your databases on Kubernetes, and these two, I believe, are generally applicable for everyone.

The first best practice is knowing yourself. I can call this a best practice because it comes from Socrates, so I am sure it's generally applicable. Every product, every company has their own requirements, own expectations, their own way of doing things, own practices. And I see more and more, with our users and customers, that people come to Kubernetes thinking it will fix every problem of theirs: it will fix scalability, it will fix automated failover. But most of the time that is not the case, because for all of this you need to understand your own practices, your own way of doing things. So before doing databases on Kubernetes, you first need to understand your expectations about your database cluster. If you want to perform a failover, for example, how do you expect to do it? How much downtime can you tolerate in case of a failover? If you are troubleshooting something, where do you expect the logs to be? How do you want to do that? I also recommend giving special attention to your assumptions, because assumptions are the blind spots of the human mind; you need to turn them into facts by proving them, or disprove them and find a workaround.

If you are coming to Kubernetes for a standard API and ease of scalability, you need good automation. Of course, you may already have some automation for your databases: you can automate adding a new replica, you can automate failovers. But in Kubernetes, actions are not imperative; they are declarative. You declare a state, and Kubernetes has this never-ending loop that tries to reach that desired state, over and over.
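To illustrate the difference: an imperative workflow runs one-off commands like `kubectl scale statefulset db --replicas=3`, while the declarative model writes the desired state down and lets the control loop converge on it. A minimal sketch, with names and image chosen for illustration:

```yaml
# Declarative desired state: "three database pods should exist."
# The controller keeps reconciling reality toward this spec.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  replicas: 3                 # the desired state, not a one-off command
  serviceName: db
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: mysql
          image: percona/percona-server:8.0
```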
Putting the binaries of MySQL, PostgreSQL, or MongoDB, these open-source databases, onto Kubernetes is the easy part. You can build the images, and in 30 minutes I believe you can have a working cluster. That's the easy part. What's hard is ensuring the state matches your ideal, because especially when we're talking about stateful workloads, you have the Kubernetes state with all these API objects, and you have the database, which is stateful. You need to reconcile the two, and that is the hard part. You can try to do it yourself, but you will become the day-two guy. And life is too short to be the day-two guy; I've been there, and I don't like it either.

So the second best practice is: rather than being the day-two guy, use an operator. It's not surprising that the many database operators coming from companies already experienced with databases have their own opinions. They have their expertise, and what do experts have too much of? Opinions. In this sense, an operator is opinionated software. Companies embody their opinions, their expertise, into their operators. That doesn't mean their opinions are correct and you should submit to them without question. That's why I believe knowing yourself is important: you can vet the operator in question and see if it's a good fit for you. You can argue that every piece of software is opinionated, and I believe that's correct. But as I said, Kubernetes is a declarative environment, and you don't see most of the operator's actions; they happen in the background. It's not like you are running commands and getting output; it's in your cluster, working on your behalf, in some ways trying to mimic a human DBA inside the Kubernetes cluster. So it is very important to understand the opinions of operators, so you can sleep at night trusting your cluster is in good hands.

Okay, enough philosophy; let's talk about Percona Kubernetes operators. At Percona we have four Kubernetes operators. One is the operator for MySQL: regular MySQL with group replication or async replication. The second is the operator for XtraDB Cluster: also MySQL, but with Galera replication built in. Then there's the operator for MongoDB and the operator for PostgreSQL. These are some features of our operators. Now I will read each of them in detail. I'm just kidding; I'm not going to talk through them, just look at them. You get plenty of features, plenty of things you can use to implement your own best practices with Percona operators.

Instead of our features, I want to talk about our opinions, because remember, I said operators are opinionated. I want to present some of our high-level opinions that affect how we design the operators. The first one is availability. For us, database availability is the number one priority. To have a healthy cluster, you need many components running, and we work hard to ensure the database pods are independent: no other component's failure should affect the database pods. Their readiness and liveness probes, their entrypoints, should work even if all else fails. Maybe the front rows can see the marketing trick I tried here: there's an asterisk. We are trying to ensure the database pods are independent.
But there are some things that at least need the operator to be running in the cluster. For example, in our XtraDB Cluster operator, if you're dealing with a full cluster crash recovery, you need the operator if you want it automated. So I said database availability is the number one priority, unless data integrity is at risk. I believe many companies can tolerate some downtime, some loss of availability for their databases, but not many of them can tolerate data loss. Ensuring we are not prone to data loss is very important, and we don't compromise data integrity for the sake of availability. That means if we can automate a failover while ensuring data integrity stays intact, fine; but if we can't, we leave it to a human DBA who is already experienced with that particular cluster.

Performance. Performance is literally inside our name: Percona comes from performance and consulting. The company started as a performance consulting company, then it grew with support and training, and now software, but it's still in our DNA. We aim to provide acceptable performance for every deployment with the operators. I say acceptable because every cluster, every product has its own requirements, its own type of traffic, so it's hard to guarantee that you deploy and get the best performance; saying that would be a lie. But the performance should be acceptable to a DBA. And if you want to fine-tune the performance, we provide some tools. Marco Tusa, our MySQL tech lead, created a tool especially for the operator: you get plenty of MySQL configuration options, the timeouts for liveness probes, delays, and all this stuff; it gives you maybe hundreds of different options according to your configuration. And if that's not enough, you can always contact Percona for consulting. We still do that.

People say Kubernetes is hard, and while I don't have empirical data on it, I believe they are mostly talking about troubleshooting. We can say Kubernetes is very easy, unless something goes wrong. Why is troubleshooting Kubernetes hard? Because there are many moving parts and you don't have solid ground: everything is subject to change at any time because of this never-ending loop. And now you have two loops, or even more, but let's say two: one for Kubernetes and one for the operator, because the operator does the same thing, reconciling toward the ideal state with a never-ending loop. It's also hard because there are many different places to look: multiple containers in pods, multiple pods in the cluster, events, and sometimes multiple logs in each container. And tools you may be familiar with, like strace and tcpdump, don't work as we expect out of the box.

Percona is also known for its top-class support; they are really the rock stars of databases. Making their life easy means we need to think hard about troubleshooting, and making their life easy means making life easier for all of our users. It means we need to provide tools, guides, and documentation on how to troubleshoot things in Kubernetes. Until recently, troubleshooting on Kubernetes was not a clear goal at Percona.
Now we are listening to our support, we are listening to our users, and we understand it's a blocker for many people coming to Kubernetes. So we are committed to improving the situation for everyone, not just Percona; we aim to provide tools for the whole ecosystem.

And reliability. We know we are working on databases, and I believe, again, people care more about the reliability of their databases than about shiny new features. So we can say reliability is a big feature we can provide. If a feature makes it hard to ensure reliability, it's a no-go; we don't compromise reliability just to support some use case. Breaking our operators is one of my hobbies. I really like it; I really like to see them suffer. I also like breaking other companies' operators, but my manager said I should shut up about them, so I can't say anything there. By breaking our operators, we understand their limits. It would be a lie to say we break them and fix every issue; we don't, because we can't. But with every experiment, with every failure scenario, we reduce the uncertainty: with documentation, with tickets, with making it harder to shoot yourself in the foot. We are trying to ensure your databases are reliable.

So in summary, I talked about these two best practices: introspecting yourself, knowing yourself, and using an operator. I also talked about the availability of your databases and how it's the number one priority unless data integrity is at risk; how performance matters to us; how we are committed to making troubleshooting easy for everyone; and how reliability is a feature. I see "no vendor lock-in" is still on the slide; I removed it from this list recently but forgot to update the slide. But yes: ensuring we don't lock in users is not just our opinion on Kubernetes, it's our opinion at Percona. People should be able to leave our products as easily as they came, and we try to ensure that in every product.

We have a new initiative called Percona Kubernetes Squad, for people who want to be close to the development and influence it. You can have AMA sessions with me and my colleagues and get some swag. And notice I say users, not customers. We are an open-source company; every product of ours is open source, and we don't separate our users into paying and not paying. You can influence our roadmap just by creating a ticket, or by opening a pull request, even a draft pull request that says: I want this feature, this is pseudocode, can we do something about it? Because we care about users, not just customers. And contributions are very welcome: if you want, break stuff and create tickets for us. We have a community forum, so if you have a problem or want to discuss something, you can create a topic there. And we have these four repositories, so if you want to get your hands dirty, you can create PRs; they're always welcome.

Oh, thank you. If you have any questions, please. What do you mean by acceptance test? So the question is how autotuning works and whether it can affect acceptance tests when you assess the operator. Let me talk about autotuning. We use the container resources of the database to understand the resource limits.
We select some of the important configuration options that affect performance. For example, in MySQL we calculate the buffer pool size, innodb_buffer_pool_size, according to your limits, and max_connections according to your limits. In MongoDB, the WiredTiger cache is calculated according to your limits, with its own ratio. It's not a continuous process: if you increase the limits, you will get a bigger buffer pool, a bigger WiredTiger cache size. But it's not autotuning itself all the time; it depends on the resources. Yeah. Thank you, thank you.

Another one? Closing in five, four, three, two... Yeah. Well, if I get your question right, and I am not sure I do, you are asking how to abstract the data layer more. There are some new technologies, not in our operators currently; Neon, for example, the serverless PostgreSQL solution, does something like that. They separate the data layer and the compute layer and abstract each of them, so you can have this serverless PostgreSQL experience. At Percona we did some POCs on whether and how we could provide this on Kubernetes. But currently our operators aim to match the regular deployments of these open-source databases: MongoDB, MySQL, PostgreSQL. We aim to provide our own opinions of what we recommend to our users: what their deployments should look like, what disaster recovery should look like. And we aim to provide some automation on Kubernetes for them. So we are not doing novel deployments with these open-source databases, but there are some technologies coming up, I believe.

We don't have community versions at Percona; there is no enterprise version versus community version. Every feature is there, in a single product. If we talk about PostgreSQL, we don't do any development on PostgreSQL itself. We take the upstream version, package it, and test it, and we test the distribution, because the important thing is that we provide distributions, not just PostgreSQL. We provide a distribution in the Linux sense: pgBackRest, pgBouncer, PostgreSQL, and we test them together. So if you use one version of the distribution, you can be sure your pgBackRest, pgBouncer, and pgpool will work as expected, similar to the kind of trust RHEL provides. The operator also deploys the distribution: it installs pgBouncer and configures it, it installs pgBackRest, et cetera. So you will have the distribution on Kubernetes, and all the components, all these versions, will work together flawlessly, I hope. And if we are talking about PostgreSQL versions, we support, I believe, 12, 13, 14, and 15 in our PostgreSQL operator.

Are we doing what, with a default configuration? Yeah. There are some options that need to be there for the operator to work, and some options that need to be there for certain types of replication to work, for example in MySQL. So yes, we do have a default configuration; it's not just about performance, some things simply need to be there. But you have the possibility to provide a custom configuration and overwrite some of them, though not all of them: we will drop parts of your configuration if they would break your cluster. So there are certain possibilities.
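Returning to the autotuning answer for a second, here is a sketch of the idea in a Percona XtraDB Cluster custom resource. The apiVersion and kind are the operator's documented ones; the derived settings in the comments are illustrative, not the operator's actual ratios:

```yaml
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBCluster
metadata:
  name: cluster1
spec:
  pxc:
    size: 3
    image: percona/percona-xtradb-cluster:8.0
    resources:
      limits:
        memory: 4Gi   # autotuning reads this limit and derives settings
                      # such as innodb_buffer_pool_size and max_connections;
                      # raise the limit and the derived values grow on the
                      # next reconcile (exact formulas are operator internals)
```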
The question is: how can I migrate my already existing cluster to a deployment with your operator? Depending on the operator, we have different options. For example, in the MongoDB operator and in the XtraDB Cluster operator we have an option called cross-cluster replication, so you can configure your cluster to replicate from your source database if you want to minimize the downtime when you do the switchover. Or maybe you don't want to switch over at all, but just have a disaster recovery site in Kubernetes. There are also ways, with the MySQL operator and the MongoDB operator, to do some manual steps and restore your physical backup into Kubernetes. And logical dumps are okay too, depending on the data size, of course. Postgres now also supports this cross-cluster replication. Okay. So thank you, guys. Thank you for listening to me.

All right. Thanks, everyone, for coming. Sunday morning is a tough time. My name is Peter, and I work for Red Hat as a member of the OpenShift OTA team, which is a slightly cryptic shortcut for over-the-air updates. We talk about ourselves more as the updates team; I don't really know why we still keep the "over-the-air" part. We are five wonderful people, my colleagues and I, and we basically own the whole update experience in OpenShift. To be able to talk about the improvements we made to updates, I need to do a little introduction on how updates work in general: a bit of OpenShift updates 101.

Updates in OpenShift are built in. It's supposed to be a no-brainer, click-the-button-and-everything-happens-without-oversight kind of feature. And it really is. Before I joined the updates team, I was working in one of our platform teams, and we operated several OpenShift clusters. Updates were one of my favorite features, because you push the version, push a button, and you see the whole thing start moving: the machinery goes, "I'm upgrading myself, and I'm running one of three replicas on the new version." You just watch that for, I don't know, 40 minutes, then it's done, and everything works. One other cool thing about upgrades in OpenShift: OpenShift is a platform, right? It serves no purpose on its own; it's supposed to run your workloads, your stuff, and that's the thing you care about. So the one important thing for platform upgrades is that they shouldn't disrupt the things you care about, the workloads. And because OpenShift also controls and handles all the content, configuration, and operating system on the nodes, updates usually mean you need to update things on the nodes, and the whole thing is designed not to disrupt any workloads, as long as they are properly configured. If you're running one replica of something, that's not highly available enough to avoid disruption; you can't really restart a single replica without disrupting it.

OpenShift is architected using the operator pattern, the 2019 buzzword, when everyone started writing operators. The idea is simple: you encode the operational knowledge of something into software, and then you let that software manage your stuff. And this is everywhere in OpenShift.
Every component in OpenShift has an operator that takes care of it, and there's one operator to rule them all, handling all these operators: the cluster version operator, CVO for short. It encodes the operational knowledge and follows this reconciliation-toward-desired-state idea. There's a custom resource in the cluster with a spec, and the spec says: I want to be on version, say, 4.13.2. The CVO is a control loop that continuously reconciles the cluster state toward this version, all the time. As long as nothing happens, it just keeps the cluster running. An upgrade is nothing more special than setting a new desired version. The CVO notices: I want to be on this version, I'm on that version, and it starts working. Starting to work means it resolves the version number to something we call the payload image. The payload image is an artifact that contains the manifests for the desired version of all of OpenShift, and the CVO starts to reconcile toward that state.

It's not quite that simple, right? If I set just anything as the desired version, it will probably not work; I need to set one of the known, available versions. I can't write "foobar" there; that will not work. I need to write 4.13.3. These available versions are also stored in the custom resource, in the availableUpdates field, which lists the options the user can update to. These options are surfaced to users in all the UIs: this is how it looks in the web console, and this is how it looks in the command line interface, where it says "recommended updates", with four options in this case.

So the question is: how does the CVO know which versions are available for the current state? The answer is the update graph. The update graph is a heap of data that we serve using a service called OpenShift Update Service, OSUS. It's served using the Cincinnati protocol, which is a relevant technical detail, and Red Hat maintains an instance of this service that all the clusters in the connected fleet talk to; they all query it for update information. If you dig a little deeper into the update graph, it contains all the possible update paths. We test updates, and we have some intent about which versions we want to allow people to upgrade between: we generally want people to go from one minor version to the next one, we don't allow them to skip minors, and this is all encoded into one huge directed acyclic graph, a DAG, that contains all the possible options. This huge heap of data is partitioned into so-called channels, which are sub-graphs of the one huge graph, and the channels let us encode some strategies. We have channels for the individual minor versions of OpenShift, and we have stable, fast, and candidate channels. In candidate, we include releases as soon as they are built. In fast, we include releases as soon as they are officially published. And in stable, we include them after they have been released for a sufficient amount of time, some soak time, and we know there's nothing too wrong with them.
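On the cluster side, all of this hangs off the ClusterVersion custom resource. A sketch, where the config.openshift.io/v1 API and field names are the real ones and the versions are illustrative:

```yaml
# The singleton ClusterVersion resource reconciled by the CVO.
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  channel: fast-4.11          # which sub-graph of the update graph to follow
  desiredUpdate:
    version: 4.11.3           # setting this is what triggers an update
status:                       # status shown for illustration; written by the
  desired:                    # CVO, not by the user
    version: 4.11.3
  availableUpdates:           # options the CVO learned from the update service
    - version: 4.11.3
    - version: 4.11.2
```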
So the whole thing works like this: the CVO queries OSUS and asks about the data in the graph for the specification of the cluster. It's very simple. The CVO asks: hey, I'm on 4.11.1 and I follow the fast-4.11 channel, where can I go? And in this case the response would be: you can go to 4.11.2 or 4.11.3. And, wow, there's a mistake on the slides: the orange bubble should say .4, obviously, not to be confusing. So if the cluster follows fast-4.11, it can't go to 4.11.4, because that one is still only in the candidate channel.

So that's upgrades 101. But in reality, things do not always go as planned. I've said we test all this, right? OpenShift is heavily tested, upgrades are heavily tested; if we have an edge in the graph, it's supposed to work. But it's the real world, and bugs still happen; sometimes things slip out. And when we release something that is problematic in some way, we can use the control we have over the update graph to steer people away from the problematic releases. We can say: don't upgrade to this version, it could break you. So we have this power, and the question is how we use it.

One thing, and it's still happening, though not that much anymore, and I think it was the first method, is that we tombstone releases. That is: we discovered a problem while something was still in fast, and that's the purpose of the fast channel, so we simply never promote it into the stable channel. People on the stable channel will never see this release; they just need to wait until the next release comes out and gets included in the stable channel. So that's one thing we can do: we protect the clusters following the downstream channels from ever observing the problematic version. And this has two issues. One, and I will say more about this, is that it makes people wait. We released 4.11.2, the buggy version, with some intent, right? We shipped some features, we shipped some bug fixes, and there may be people who are desperately waiting for the one bug fix that went into 4.11.2, the buggy one, and now they just have to wait. The second problem is that we only protect the clusters that follow the downstream channels this way. If you follow the fast channel: well, that's your problem. You could upgrade there, because we discovered the problem while the release was in the channel. So while we protect the stable folks, the fast and candidate clusters will see the problem.

Another thing we can do is pretend the version was never there: we remove it from the whole graph, or just from a channel. There's no buggy version, we pretend it doesn't exist, we have no problem, nobody will see it. Except the people who already upgraded to this version, because some time may pass before we manage to remove it. Some people may have upgraded, and maybe the bug was not serious, or maybe it was not deterministic and hasn't hit them; they just want to continue upgrading. And the CVO will query OSUS, and OSUS will say: you claim you're on this version and following this channel, but that version doesn't exist. And this is where you see the red error box, and that's not the best user experience, so we don't do this. One other thing we could do is cut all the edges, which is basically the same as the previous option, with slightly better UX, because we don't present an ugly red box to the users.
We just say: you are on this version and you have no path to upgrade, you can't go anywhere, good luck. And the obvious refinement is to remove just the inbound edges: nobody can go in, everybody can go out if they want to. And that's it, that's the solution. Except it isn't, because we still make people wait for the new release. And the problem is more pronounced than we would like, because the real world is complicated. Bugs are not all made the same, right? We could have a typo in the web console, and nobody would really care; we wouldn't block an update edge for that reason. At the other end of the spectrum, if we made a data center explode, we would definitely block the edge. And there's a lot of gray area between these two. Also, people have different sensitivities to problems. If you have a startup full of Kubernetes hackers who can take care of things themselves, they may be able to recover from, I don't know, a bug or something; they may want to upgrade anyway. On the other side, you have people who really care about reliability and don't want to see any kind of disruption. And again, there's a lot of gray area in the middle. And the bugs themselves differ: we can have issues that affect everyone, but also issues that affect only certain configurations, certain sizes, certain cloud platforms. The last one is very common: we can have a problem with, say, Amazon-based clusters, which means everybody else doesn't need to care.

So we have this complicated world, and all we have is one hammer: we block the edge or we don't. And the decisions are really tricky. If the issue affects everyone, we will probably pull the edge. If it affects Amazon clusters: yeah, maybe, if it's serious enough. But if we don't pull it, we endanger the Amazon clusters. That's the problem we wanted to solve; this is the area where we wanted to improve.

We solved it with what we call update recommendations, or conditional updates. It has two principles. First, we want to break this one-size-hammer aspect of what we had: we annotate the update edges with enough information that the cluster itself can evaluate, am I affected by this problem? Am I an AWS cluster? Am I a cluster with 100-plus nodes? Am I endangered? So that's one thing: we have these annotations. And second: by always removing the edge, we had all the power and the cluster administrator had none, right? They just didn't see the update. But the cluster administrators are also the ones who know their situation best. They may be risk-averse or not; they know whether this is a test cluster where they don't need to care; they know whether they use the impacted feature. So they should have some amount of power to decide whether they want to risk it; maybe they don't care, maybe they do. And for them to be able to make that decision, they need information about what's happening, what the bug is. So we did that.
So what we do is monitor the known issues in OpenShift for things that could be problematic: either problems in the upgrade itself, or regressions, where something worked before and now it doesn't, and if you upgrade to the version where it doesn't, you're unhappy. We scour our bugs for these kinds of candidates. When we decide we know enough about a known issue, we encode the known information about it so that it is included as annotations in the update graph. This is an example of such edge metadata: if you go to this version from any version, there's a known issue. We give it a short informational name, we put in a brief message about what's happening, and we put in a PromQL query. The PromQL query encodes the self-assessment: the cluster is supposed to execute this PromQL query against its own monitoring stack to discover whether it is affected by the issue or not. And this is how it looks in the OSUS-provided data; it's basically the same thing, just in a different format. The cluster self-evaluates the PromQL we included, and if it discovers it is not affected, the user will not see anything; it acts as if there were no problem at all. If the CVO discovers the cluster is affected, it surfaces this in the user interface. We add a small additional step for people who still want to update to such a version: we say these updates are still supported, but not recommended, and we make you switch a toggle, or pass one more option when you upgrade, to see these non-recommended paths. So it's hard to mistakenly update to something we don't recommend. And that's basically it; this is how we solve the problem, partially, I guess.
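As a sketch, a conditionally recommended update surfaced in the ClusterVersion status looks roughly like this. The conditionalUpdates structure follows the documented API; the version, risk name, URL, message, and PromQL are invented for illustration:

```yaml
# Fragment of ClusterVersion status (written by the CVO).
status:
  conditionalUpdates:
    - release:
        version: 4.11.2
      risks:
        - name: AWSNodeProvisioningIssue        # short informational name
          url: https://access.redhat.com/solutions/example
          message: >-
            Clusters on AWS may fail to provision new nodes after
            updating to this version.
          matchingRules:
            - type: PromQL
              promql:
                promql: |
                  group(cluster_infrastructure_provider{type="AWS"})
```

If the query returns anything, the CVO marks the cluster as affected and the path shows up as supported but not recommended.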
Now I would like to speak a little about what we discovered. This conditional updates feature has been there for about a year, three releases now, and we have 24 separate issues where we used it, where we ended up saying we don't recommend updating to some versions of OpenShift, so we have some idea of whether we managed to improve the situation. One thing is that the success of this is a little hard to measure, because one large set of people who benefit from it will never notice. The main beneficiaries are the people who would previously have had to wait for the new release even though they were not at risk: previously we made people on GCP wait for a new release because there was an AWS bug, and now they don't wait, but they don't even see that they would have had to wait before. So we have no way to measure how happy those people are; we only have a good idea about the people who do see the non-recommendations. The overall feedback is quite positive, but with some distinct clusters of negative responses.

One thing we discovered is that many people operate in a mode where everything is supposed to work, and if it doesn't, they contact support; that's the world they live in, and they are not used to making these risk decisions themselves. So we get feedback like: "You are informing us about a bug in your software. That means you are not testing your software properly. You should stop releasing buggy software." This is something where we need to manage expectations. There was even a sentiment that it is better not to tell people about bugs, because they cannot handle that in their processes: they click the button, and if it doesn't succeed, they contact support; that's routine, that's how they work. But we are now making it harder for them to click the button, and they may end up contacting support about whether they should click the button at all. So we want to improve the user experience there, by making things clearer and providing better descriptions. That is one thing we took away.

The second thing we discovered is that a lot of people don't just update to the most recent version. They test a certain update path in their lab and plan: we will update to this tested version in two weeks, when we have a window. If in the meantime we discover a known issue and pull the recommendation, they come to us and complain that we removed the edge they wanted to follow. And we found there is a big cliff here: things are mostly fine as long as at least one recommended path remains; if there is at least one path that shows up all the time, that's fine. But we need to be really careful about pulling recommendations in a way that steps into the state where no recommended path remains, because that confuses people, and we definitely need to make the user experience much better than it is now for the case when no recommended path is left.

We also discovered that PromQL works very well for us as the mechanism for cluster self-evaluation, but PromQL itself is not the easiest thing to write, and some intents, when we want to say "these clusters are affected," are hard to express, and in some cases impossible: if we have no metric about the specific aspect of the problem, we need to fall back to the old style and just block the edge for everyone. One thing that is more social than technical is the continuous monitoring of issues, assessing whether they are serious enough, what kinds of clusters they affect, and what edges they affect. It is still a lot of toil that we don't want to do, so we need to make the process from the discovery of a potential blocker to the decision about whether to annotate the edges very short, and in some cases it is a lot of work. And the last thing, which is more our intent for the future: we know that the user experience for this is not the worst, but it is surprising for many people.
People are not used to making decisions about upgrades in this area, so we need to work intentionally with UX experts to better surface what data we have, what options the user has, and so on. And it is made a little confusing by other features in OpenShift that also touch updates and risky updates. There is one feature, core functionality, that prevents people from updating if the cluster is in some problematic state: if one of the cluster operators is, say, not available, OpenShift will set itself into a not-upgradeable state. There is also a feature on the support side that is AI-driven and gives the cluster administrator advice like: your cluster is in a state very similar to other clusters that upgraded and hit some kind of problem. These all have slightly different user experiences, and there are many people operating in this area, so we somehow need to make the experience consistent across these features, so that there is one place you go to upgrade, you get all the information you need, and you make your decision. So those are our plans for the future. That's what I have; I will be happy to answer any questions here, or in the hallway track, anywhere.

Yeah, I'll try to summarize that for the recording. The question, if I got it right, is: with the self-evaluation criteria being based on PromQL, which means reading the metrics published in the cluster, how do we solve the case where no metric describes the possibly problematic area? Yes, this is a weak spot, and right now we don't have a solution for it. If the problem is not describable in PromQL, we can still do the old-style blocked edge, which means pulling the edge for everyone; that's the backup we have. But many things can be made into metrics: if there is a custom resource, there is an operator operating on those custom resources, which means anything expressed in that custom resource can in some way be made into a metric, even if maybe not for alerts, maybe just for us. So we don't have the solution today, but the feature is architected for it; you could see it on one of the earlier listings, I won't go back there, but PromQL is right now the only kind of matching rule, and it's engineered so that we could build a new matching engine, something else for querying this. What was the name of the talk? There is a little discussion in the audience: there was an event-driven automation talk earlier today, and there should be a recording of it, which describes a new mechanism for querying the state of the cluster; I guess we could build something on top of that to query the cluster. Anything else? Yes?
The question is: there are people who see the version they want to update to and decide they will do it tomorrow, but by tomorrow the version is gone. With the number of people running OpenShift clusters, that will happen to someone; we always have to stop recommending at some point in time. One day it's available, the next day we stop recommending it and it sits behind the toggle. This is one thing that isn't the greatest UX, but we also think it will get better as people get used to the fact that there are non-recommended paths: they can look behind the toggle, see that version, see why it was pulled, and decide, "I wanted to do that yesterday; Red Hat stopped recommending it because there was a typo in the web console; I don't care about the web console, I can still click the button." So this is one of the pieces of feedback we are getting; we want to make it better, but we also think it will get better on its own.

The next question is how we stage the upgrades, whether we do them in, say, canary batches, and if they work well, upgrade the rest. There are different answers to this. I think those decisions are more relevant in the world of managed OpenShift, where Red Hat SREs are doing the managing; I don't know their processes that well, and I don't even know whether I'm supposed to share them. In core OpenShift, the experience is that the customers make the decisions: they see the versions and upgrade whenever they have the window, based on their own considerations. There have been a lot of discussions about automated upgrades too, but core OpenShift doesn't have that feature right now; managed OpenShift does.

All right, I think we've run out of time. There's lunch next, so it's not that critical, but there's the recording and so on, so thanks everyone for the questions; if there are more, I will be happy to talk to you outside the talk. Thank you.

Can you hear us now? Test, test. I think you can hear us in the room, but people watching the stream or the recording will not be able to hear the audience, so we will repeat the questions. So, we are here today to talk about teaching at university, and I would like to ask you all: would those of you who attended university as a student recently, or who have some teaching experience to share, put your hands up? Not many people, but I see some. I hope our talk will be interesting, and maybe inspiring for others. In this talk we want to share experience from the courses that each of our panelists conducted recently: what went well, and what would require improvement in order to deliver things better. We have Tomas, who taught his students how to master Git. We have Sharka, who introduced students to the fundamentals of technical writing. We have Maria, who shared her wisdom about the development of intuitive user interfaces. And we have David, who teaches students about software quality.
And so we would like to inspire you with our talk today: if you have a baggage of knowledge, share it in an open source way with students; and if you are a student, maybe you will look forward to interacting with teachers like us. The first question will be for me too. I am Alexandra; I was also conducting a course, together with Sharka, and I see one more teacher in the audience, Jirka. So, the first question for everybody: tell us about your class and about your motivation. Tomas, would you like to start?

Yeah, I can do that. Hey, I'm Tomas. Together with Irina Gulina (hey, Irina), we are teaching, as Alex said, Mastering Git. The reason we picked it is that Git is such a core technology in IT, and if you want to learn how to run, you first need to learn how to walk. We try to teach the collaborative aspect of Git, because that's what we do in our jobs every day, all working together on the software. And every time we had new people starting, we would need to teach them the Git basics, so we realized: why not just go to the university and teach it right there?

Thank you. We were teaching a completely new course at Masaryk University, with a group of colleagues, about the fundamentals of technical writing. The motivation was to address the lack of widely available courses on this topic; we thought it would be beneficial to bridge the gap between how students are normally taught to write during their studies and what the technical writing industry requires. Our course was in English and open to students with different backgrounds, from bachelor's to PhD, and we had attendees not only from the Faculty of Informatics but also from the Faculty of Arts.

OK, I taught a course on the development of intuitive user interfaces with my colleagues. Our motivation was pretty similar, because there are not many courses about user experience and user needs at the Faculty of Informatics, and we wanted to show students how important it is. Even if you are a developer, in the majority of cases you are not developing your apps for yourself but for your users, and we wanted to emphasize that in our course.

Yeah, as was already stated, I was teaching software quality, and my motivation was a little different. I'm a second-year PhD student at Masaryk University, and I would like to say I had no other option; it's only "heavily recommended" to help out with teaching. My supervisor is the guarantor of this class, and she needed help with the course in the last two years. We teach students how to do testing, how to write good code and manage code (with a little overlap with Tomas's material), continuous integration, and Maria helped us with some UX design as well, so we try to show students how to write good stuff, good code.

That's cool, and I know we all had some challenges while conducting or preparing the courses, so would you like to share those challenges? Yeah: not this year, but last year, we had 20 students applying to our seminar group, and more than half of them, I think 13 or 14, were already Red Hatters, either part-time juniors or interns. Jakub, our former teacher, and I were struggling a little with what we should teach them.
They already knew all the things we were supposed to teach, and it was a little challenging to make it interesting for them. On the other hand, they were really useful, in that they could explain things to the other students, so we created mixed groups of one or two Red Hatters with other students, and they actually helped us teach the rest of the people.

Of course we encountered a lot of smaller and bigger challenges during our time, but what I want to mention today is that teachers are people too, so there is a challenge, maybe more for first-time teachers, in getting in front of people: not being super stressed out, being able to speak and actually deliver the lesson. One of my challenges was nervousness and some anxiety, which can of course be reduced by better preparation, which I will have next semester, so there will be a lot less stress. Thank you.

We also had several challenges, but I would like to pick one specific one. As we decided to teach the fundamentals of technical writing as an open source course, in an open source way, that does not mean only using open source tools; it also means collaborating with a bigger group of people to get the best possible result. We had a bunch of people preparing the syllabus, the slides, and the lectures, and also being at the venue. So managing multiple teachers in a classroom, and in a hybrid environment, because some teachers joined online and we had one fully online lesson, was challenging in terms of time management and keeping the flow across teachers and across the course. But I think we succeeded, even though there are improvements we will address in the future.

Our challenge was kind of typical, one everyone faces: we had a capacity of 20 people, and about 60 wanted to join. We tried to put together a homogeneous group, so that everyone would be at the same level; we obviously failed, and it forced us to change the course basically on the fly, updating the slides an hour before the lecture or so. I think that worked out pretty well, especially because we started every lecture with a set of questions about the previous ones, so we could be sure the students were still following and had learned from the previous parts. That worked really well, but at the same time it meant working at night, and mornings five minutes before the lecture talking to Irina about what we actually wanted to do that day. It was crazy.
Thank you, those were interesting challenges. Now, I know that at university, students are sometimes taught a lot of things that are not actually usable later in their careers. So what do you think: how can we bridge the gap between academic knowledge and the real IT world?

Thank you. In my opinion, I would stress teaching the real skills that are often called soft skills, by which I mean collaboration, communication, giving and receiving feedback, and time management, because overall success in your career depends on so much more than just expert-level knowledge; you also need that whole package of skills, and we as teachers can build it into our lectures, as we try to do in the fundamentals of technical writing. I would also mention real-life scenarios: simulating them for the students and showing them a typical workflow, so they have a better overview of what is done and how, and can bring their own input on improvements. And I think that working on something really applicable, something that has an impact, is also very important: imagine all the time and energy that both the students and the teachers put into the lectures; wouldn't it be great if that time and energy didn't go to waste, but could really be used for something practical? I also think universities should focus on specialized workshops and lifelong learning.

Thank you, Sharka, that's a very good point, especially about the soft skills, because we have these courses in our companies, but they are not taught at universities, which I would say is really odd; I would have really appreciated something like that when I was studying. On top of that, I would add doing events like this one: DevConf is free and takes place at the university, so students can come here, learn the basics of some technologies, and reach out to people. And from teaching at the university, one piece of feedback we got was that when we explained what we do in our day-to-day jobs, how we use the technology and how we solve complex problems, the students loved it and said it really helped them understand why they are learning a technology and how it is used in our jobs.

Great, thank you. We will come back to feedback a little later, but first I want to ask Sharka and Maria: what learning experience did the students get from the course?

So, as Sharka said, in our course we gave the students the opportunity to work on real-life projects at the end. We guided them from the very beginning, from the idea through the entire design process, where they gathered feedback and information about the users they were creating their app for, all the way to the implementation itself. In the end they had a project they can showcase in their portfolio, and it really helped them stay focused during the entire course.

In our course, the students gained some basic knowledge and expertise in technical writing, which may benefit them if they want to choose this career path, and even if they don't and become developers instead, because they may also have to write documentation. A particular enhancement in our course, I would say, was that our students were able to create their first small
portfolio, with which they could apply for a job, or which they could reference as developers to see how proper documentation should be written. I also think we gave them very detailed feedback, because all of the reviewers put a lot of effort into giving exact, specific feedback with a lot of suggestions, so the students could improve, through Git and GitHub. When I was a student, I knew I had only one chance to be successful when delivering a paper or assignment. We made it work in a more agile way, the way we are used to at Red Hat or any other company: the students submitted their pull request, the reviewers took a look, gave them feedback, and pushed it back to the students so they could implement it, and this cycle could go on and on until the students reached the level we wanted from them. So it was not "your work sucks, F, goodbye"; it was "it would be better to implement this and this," and I think that was the good part of it.

And getting back to feedback: if you are a student, please fill in this form. We also received feedback from the students, so what was it? Thankfully, the overall feedback for our course was very positive. They enjoyed that our lessons were interactive (of course, everything can always be more interactive) and they learned a lot during those activities. What's more, our course was project-based, so at the end they actually had something they could showcase, which was also a really great thing for them. Some of them also mentioned that giving them detailed feedback, and actually leading or guiding them through the entire process, not just "OK, this could be better, do something else, maybe the next homework will be better," helped them a lot and kind of opened their eyes in some ways. These people were software engineers, not designers; before this, they didn't really think about users or the usability of their apps. During user testing, for example, they might not be convinced when their lecturer said something wasn't good and why, but when their peers tested their app, they realized: OK, that makes sense.

Sorry, go ahead. Yeah, we also conducted a survey asking our students how the lessons were, and after finishing the course we invited them to give even more feedback. We were happy that it was overall positive, that the course was engaging and inspiring. But the important part of the feedback for us was that almost 90 percent of the students wanted to be more involved during the lectures, to have more activity, which means that for the next run we will cut down the content and extend the more meaningful exercises.

Thank you. I know we received more feedback, but we don't have much time left, so I will continue with the questions, sorry. What were the actual lessons learned, and what would we do better next time?

In my opinion, what we learned is to keep it interesting for the students, and again it comes back to the feedback part: read your feedback and learn from it. The other thing is about technology: you shouldn't give your students assignments that can be solved by ChatGPT. I'm throwing technical writers under the bus right now, sorry for that. We need to try these things out: if our assignments are solvable, try ChatGPT or other AI services, and try to make the lessons more interactive
and interesting for the students. Well, actually, ChatGPT was not an issue in our course, because it was not able to solve our homework assignments. But we learned a lot about giving homework assignments: as a group of reviewers, we had to review almost all the assignments every week, which was a huge amount of work, and we know that next time we have to be more specific about what the reviewers should focus on. As technical writers we are used to focusing on detail when conducting peer reviews, so for us it is normal, but for the students it is their first time writing something like technical documentation, so we probably have to lower our expectations, or adjust more to the students' needs, maybe preparing them with a set of small exercises throughout the lecture so that they are able to complete the homework assignment afterwards. We also had to ensure fair grading across the various reviewers and throughout the course, and I think we succeeded in that.

Thank you. Getting back to bridging the gap between academic knowledge and the real IT world: what skills should teachers focus on when teaching students?

We were teaching programming, so I'm going to talk about the programming part; I'm not sure about technical writing, where regarding languages I guess it's English. In our case, I think universities tend to focus more on teaching the language than the logic behind it, and I think that is a bad thing. The way we are hiring people at Red Hat, going through intern reviews, is getting more focused on one language, one specific technology, and I feel like we are teaching students to work with one technology, instead of having a broad perspective, working with multiple technologies, and trying to find the right tools for the right context.

Skills: well, I will repeat myself, but I would definitely stick with giving and receiving feedback, because it is a very valuable skill; and also that the students should not be so fixated on the outcome, but rather on the way of getting there. I would add: also implement that feedback.

And the next question: we talked about how and what we taught, but we did not touch on how we got involved. So how can other IT experts get involved in teaching? I know Tomas would like to answer.
Okay, how can IT experts get involved in teaching? Participate in conferences, and then, when you realize that you know something other people should know, or other people keep coming to you wanting to learn about it, maybe think about approaching universities and start teaching there. And you can really start small: if you are working on some open source software, service, library, or tool, make sure your README is very good and that you have a CONTRIBUTING.md, so that even beginners can come, try to contribute, and use your software. What also really works for us in our project is making sure we have a chat, so that anyone can ask questions or create issues saying "I don't understand this, please explain," and you can even set up video calls with these people. Start there, and then maybe approach the next step, speaking at conferences, and then teaching at universities.

Just a quick thing: right now, if you are a UX designer or a UX researcher, I'm looking for a person to teach the UX part of my course next semester. So, how to get involved: come to me and tell me you want to teach.

Thank you. We don't have much time left, so the last question: what is the future of curriculum design at the university? I know David has a lot to say. I already mentioned the language-agnostic thing, so I would add that here as well. Also, we need to teach students how to learn continuously and never stop learning: there are multiple IT experts in the room, so imagine you stopped learning today; how long could you stay in your current job without losing it? So continuous learning is the second one, and maybe the third is soft skills, as Sharka already mentioned; we need them a lot, even me.

That's roughly what we have for you today; I hope you got inspired. Maybe you have some questions? Yes. So, do I understand correctly that you are asking whether, while we try to inspire students, the students inspire us? Who wants to answer, Sharka? Yeah, we actually had this: for our assignments we had some idea of how the solution could look, and one or two people delivered a solution that was like, wow, that's even better, I would implement it right away. It was a big surprise for us, and I actually think those people were able to apply for a job at the end of the course. So I think both the teachers and the students did a great job, because after this half-semester course they got the fundamentals and were able to expand on them.

Thank you. Any more questions? Yes, please. So the question is basically whether AI will influence the future of computer science in a negative way, right? AI will not take away your jobs, in my belief. If you have a hammer for coding right now, you're going to have a jackhammer for the same job in the future, and you still need the person who operates the jackhammer. You might have a robot in the future, but it's a tool that helps us. Has anybody here tried coding with AI? Any hands? No? OK. How easy was it to debug the code when it was bad: easier than debugging a person's code, or not? In my opinion it was much, much harder, because you can talk to a person, and a person sometimes gives you viable answers; you can have a discussion. AI can explain things to you, but it does not always understand what it is saying. Did I answer your question? Thank you. I don't know, do we have more time? Any more questions? Yes. So the question basically was whether the students are not
really learning, but handing all their homework over to the AI, and whether students will stay motivated. I guess I have a one-word answer for it: money. If you understand these technologies and can look lower into the stack, that is always valued better, in my opinion. If somebody just tries to work around things, they will struggle to find a job, because AI has already taken that part over; but if you actually understand the technologies, you can utilize AI better and supersede it, at least for now. OK, maybe in different words: coding is just one part of the job. When I'm doing my work, coding is maybe 10 percent; I still need to go to meetings, talk to people, write emails, create diagrams, and all these things. Maybe AI will do all of that in 10 years and I can just chill out in Malibu somewhere, but right now we still need people to do all that work; AI can help us with the coding, but it's not replacing us at all. Did that answer the question about the people who are just cutting corners and not doing the job?

I saw one more question, yes. The comment was that MIT created a course consisting of exactly these missing meta-tools. Yes, I also think that's neat. And a last comment: thank you for it. At the faculty where I studied before my PhD, there was a course called IVS, Practical Aspects of Software Development, where they taught exactly these things: UNIX philosophy, Git, and all the sugar around it. I think we agree with you; that course is what made me apply to Red Hat. And I think we are out of time, but we will still be around a little longer, so if you see us, please talk to us. Thank you.

OK, welcome everybody. This is Change the World One Piece at a Time with Trashware. My name is Andrea Perotti; I was born and raised in Milan, and here you can see some of the things that I like and some of the things that I do. To answer the kind lady: I also work for Red Hat, but today I'm here with a different hat (that was not planned as a joke). I'm a volunteer for an association named PCOfficina, and to be completely honest, I'm also the president of PCOfficina since 2021; it's their fault, they voted for me. I want to have a quick chat with you about our experience and what we do. This is our agenda for today: a little about the association, what on earth this trashware thing is, something about open source and about people, and a small wrap-up, because maybe what we are doing in Italy can be done where you live.

So, the association: why that name? It's very easy; it's a compound word. You take "PC", you take "officina", workshop in Italian, and you have PCOfficina. Czech friends told me never to pronounce it the way we do, because here it means something different, so I'm already saying sorry if by mistake I say it the strong way; bear with me. Our idea was to create an experience similar to what bicycle enthusiasts have with a ciclofficina, a bike workshop: a place you can go when you need to fix a little something on your bike, or where you just want to hang out with other people passionate about it. We want to replicate the same experience in the PC world, and we stole from them not only the name but also the idea, because the experience is really great. It's probably no coincidence that many of our members are also heavy cyclists; they love their bikes. Hello,
welcome. So, the formal part: what are we? We are an association for real, not just a bunch of people: we have an official tax ID and we have a statute. What are our goals as an association? We want to promote aggregation and the sharing of knowledge, like you may already do in the open source communities you are part of. We sustain and support free and open source software, of course. We also want to promote the protection of the environment and the reduction of waste, especially electronic waste. We want to increase awareness about the use of technology, we try to help fight the digital divide, and last but not least, we want to support low-budget computing, because computing is costly and we want everybody to have the opportunity to use a computer. Where are we based? In Milan, in the northern part of Italy, and the association was created in December 2011, so we have a good deal of experience and time behind us.

But what do we do, for real? First and foremost, we have our workshop, and we keep it free and open to everybody. We open our doors one evening per week, almost every week, unless we have other activities like the ones I'm going to show. Inside you can find tools, including some lovely iFixit tools, and everything you may need to fix your computer, from very easy changes to adjusting something more complicated. We offer technical support with a very particular point of view: not just for the user, but with the user. We are not a shop, and we are not competing with regular shops; we offer a peer-to-peer experience. You come with a problem, maybe you have already read up on how to fix it but you are not confident, and the problem can be anything from "I want to install Linux for the first time" to "there was a power outage and now the system does not boot anymore." Let's look at it together. The point is together: there is no "this is the problem, see you next week," no way; you have to stay, because if you go away, we are not responsible.

And not only that: we collect computers and devices, restore them, and find them a new home, but we will discuss that a little later. Last but not least, we also do seminars and workshops; we try to share what we have learned. You may imagine there are a lot of geeks here, but it is not only a geek association: we have space for any kind of experience, any kind of level. There are people whose day job is being a director, people who work in fields totally different from IT, and of course also people like me and others who are programmers, and so on. And the great thing about not being only technical is that when a device is really unfixable, we don't just throw it away; we also use it for other ideas, other projects, and we try to give it a third life, maybe after the first and the second.

So, trashware, the core of our talk today. Let's start from a definition, taken from Wikipedia: trashware is a compound word derived from "trash" and "hardware", and it refers to the activity of replacing faulty components in electronic devices, or making obsolete computers operational again. Great. So why do trashware? First, we want to preserve the environment: if you fix something, it will not become waste, and it will not become a problem for us or for our children. Second, to save money, because very often fixing
something is way cheaper than buying something new. Of course we have to go case by case, but usually, if you know what to do, you can save a lot of money and have your system back on track. The third point is that maybe we forget, but computers became so popular because they were adjustable: they can be expanded, they can become what we need them to be. Nowadays with laptops it's slightly harder, but you probably know about the Framework laptop: there are people who still believe computers have to be that way. Doing trashware keeps your hands busy and teaches your mind that what does not fit you now can be fixed. Because by doing trashware you learn: you learn that things can be fixed, and you learn that things sometimes become obsolete in a forceful way. And by doing trashware you also start thinking that maybe you are performing a political act; you are screaming, "I want my right to repair." And also because it's simply great fun.

So the real and complete definition is that with trashware, our main purpose is encouraging eco-sustainability, so as to extend the life of devices and produce less waste. The great thing about trashware is that you can do it for any of the reasons we just mentioned, and they all apply: you can do it because you are in need, or because you love that computer, the old laptop that has been with you for so many years. By using it daily you have come to like it, you don't want to throw it away, and if you find a way to make it usable again so it can stay with you, you are happier.

Is trashware the solution for every problem? Unfortunately not. We have to take into account that the purpose of a device is what leads and defines its fate. In the association we have thought about this a lot, and we have identified these scenarios. When a computer is just a computer, works fine, and with small adjustments can continue to be a general-purpose device: cool, it is still ready to work. But when the device has some problem, like a laptop with a broken monitor where the replacement is not cheap to obtain, or a motherboard with everything on board where the video card does not work anymore, those situations become limits, and when we face a limit, we have a challenge. Maybe that device does not fit the general purpose anymore, but we can find new use cases for it. For example, you may remember the netbook era: very small screens, not so powerful devices. Nowadays they are not that great, but have you ever thought about who is perfectly fine dealing with small keyboards and small screens? Kids. Why not use those small computers for educational purposes? We have plenty of educational software that can run on normal or even not-so-powerful computers, and they can become educational tools for schools. So it is not important that a device no longer works as a general-purpose computer; what is important is to find a purpose for it.

And this is something we have thought a lot about: when is old really old? Year after year, the association has decided on some changes in terms of which computers we can no longer collect. The first few points are fairly
self-explanatory, but in 2020 we realized that we were not facing a hardware problem but a software problem: 32-bit distributions were not available anymore, at least not the most common ones, and especially not the one we had chosen to use. So we decided it was no longer good to take those machines; that doesn't mean we didn't have them anymore, we just stopped collecting them, and we still have all the 32-bit machines collected over the years. In 2022 the story repeated: we realized that 4 GB of RAM was not enough for the general-purpose use case, so we decided it was no longer good for us to collect DDR2-era machines either. This is also because we don't have much space to store all the hardware we receive, so we need to be careful about which devices we accept. Right now our target is: if a computer is dual-core, 64-bit, with an i3-family CPU or something similar, and as a consequence has DDR3 RAM, that's good; we know we can make good use of it.

Why this specification? The point is not that we are picky and don't like old stuff. The point is that our goal is to give the computers away, and in order to give a computer away, we need to be sure it can be a general-purpose computer. We are pretty confident that Linux can probably run everywhere, but our goal is not to demonstrate that Linux runs everywhere; that's more of a retrocomputing goal. Our goal with trashware is to make sure devices can go anywhere. What does the average user do with a computer, when they are not doing it on their smartphone? Probably most or some of the usual things, and not only those: we are all grown-ups, so we have to deal with traffic tickets, garbage taxes, booking medical visits, and so on. All those needs have one point in common: they all need a browser; they can all be addressed with a browser. And what is the most important component for a smooth browser experience? Lots and lots of RAM.

So we have decided on standards to help us set up computers in such a way that they will last at least a couple of years. As an association, we are aware that these are maybe ten-year-old computers, so our goal is very realistic: if one of our computers can still work for two years, our goal is achieved, we are succeeding. To make that happen, we need to make sure those computers will not last two weeks or two months, but two years, so we need to beef them up a little; if we have the possibility, we upgrade the CPU as well, but that's not so common, because the components that tend to be available most easily are, of course, the memory modules. So we decided: if the computer will be a Linux computer, it gets 6 GB of RAM and optionally an SSD. On a Windows machine, 8 GB of RAM and an SSD are mandatory, and the license too, because we don't do shady things: if the computer was donated to us with the license sticker, it's OK; if it's a corporate computer with licenses managed centrally, it becomes a nice and lovely Linux computer, full stop, no question.

The point is that an SSD is definitely great for any use case, so even when we hand over a Linux computer, we suggest to the receiver: do you maybe have some spare money? Because that will definitely change your experience. But it's not always possible. We have learned to be very humble and to pay attention to whoever comes asking for help, because it is not always easy to ask for help if you are in need. So on our side, we
always need to be careful in proposing, but also in listening. So if the person has the possibility, for example, we say: OK, we can take care of buying the disk, you just pay for it; if that's not possible, it's OK, and the generous amount of RAM will also help make up for the spinning disk. But sometimes the disk is really too old, or it breaks while we test it, because we test machines carefully before giving them away. So we get to make creative use of some of the broken hardware, and we have also built a very good friendship with another association that uses computers to create educational laboratories. Their name is Smunting; it's a name game, and the meaning is "disassemble and play together." What they do is use our broken computers for their labs, and that's some circular economy: for us it is simply great, because what we can't use anymore, they can.

So, in all of this, where does open source fit? This is our regular regeneration process. We remove the dust. We change the thermal paste, because one of the reasons old computers slow down is that they no longer dissipate heat, and the CPU throttles so it does not burn itself. We check the RAM, because sometimes a computer acts crazy and it's just a poor 2 GB stick that is faulty. We check the hard drive, and then we proceed with the installation. And everything that is not purely mechanical is possible thanks to open source: we test the RAM with Memtest86+, and we are very happy that it has introduced support for UEFI systems as well; we use badblocks on spinning disks; and then our operating system of choice is Linux.

Which Linux? Well, it's a funny story. Even before I was part of the association, everybody would install their preferred distribution; everybody also took the opportunity to play with a computer that was not their own and had no data on it, so you would test the most stripped-down one, the newest one, whatever: a complete mess. The problem was that people would come to us looking for help, and if no volunteer had ever installed Puppy Linux, nobody had any knowledge about Puppy. So we decided to do things in a more structured way. What do we need? We need a visually pleasant distribution, because an end user who accepts a second-hand Linux computer is more often than not not a techie; it may be a grandma who just needs to check some emails about her health situation. Second, everything must work via a graphical user interface; forget the console, because the end user of PCOfficina is not a techie, more often than not. We need proper hardware support and proprietary driver support, because unfortunately the world is full of various kinds of hardware; and we also need to be able to help our users, so if they need a proprietary application that luckily for them works on Linux, let's make their life easier and not install some super-nerd distribution that no third-party software vendor supports. Of course it needs to be lightweight, because we are talking about ten-year-old computers. And the most important thing, what scares us most, is that more often than not our computers are fire-and-forget: we give them away and we never see them again. So we have a huge responsibility in making sure that the users can do what they need to do, that they install the updates (which is not so common), and that those updates will not break the computer. So we prefer a long-term-support distribution, something that gives us assurance that the changes are not too
disruptive, plus documentation; and we accept opinionated choices, as long as they respect freedom. So the winner is Linux Mint: because it's based on Ubuntu, because it doesn't force snaps on us, and because at the end of the day it has a visually lovely, coherent experience and does not just look like a dump from upstream. It's not my favorite and definitely not my distro of choice, but for our users it works fine, and that's enough.

I have talked a lot about computers, but the core of the association is people. Our goal is to enable people to do what they want to do, and we are lucky that very often people with great ideas need our help, so we like to think of our computers as enablers for changing the world. I'm going to tell you three stories.

The first one comes from Verona, the city of Romeo and Juliet. In January 2022, a psychiatric day-care center asked us for help because they had an idea: recent studies had demonstrated that through certain cognitive games, patients and users of those kinds of services keep their minds active, with positive long-term effects, but they were lacking hardware. They reached out to us, we were absolutely happy to help, and we provided some desktops and some laptops. The great thing is that they came to Milan, 170 kilometers, with their entire community, so it was a great day for them: while the people responsible for the day-care center were dealing with the computers, the rest of the group was visiting Milan. They went back home not only happy, with good memories of the Duomo, but also with computers that worked just fine. We recently received feedback that they are still using them, and they are happy.

The second story is also from the eastern part of Italy, this time Venice; I don't need to introduce the city. A group of lawyers was looking to create a legal help and orientation desk, especially for immigrants but not limited to them. Great idea, zero budget, in need of help. They asked for some laptops, so the machines could be taken away each time, because it was a shared space, and a few months ago they received the hardware through a very funny route: a friend of a friend was going from Milan, not to Venice but to Padua, and then from Padua to Venice, but in the end the laptops reached their final destination.

The third story is the closest one. In the neighborhood next to our headquarters, there are some schools that unfortunately suffered the loss of some laptops, because they were stolen. We heard about it and offered our help to substitute the laptops with desktops, which are heavier and maybe less interesting to thieves. From there a very lovely collaboration started, which ended up with us changing display monitors at a primary and a secondary school, because they were part of the same pool of schools. And the great thing is that after seeing some of our computers working with Linux, one of the teachers, who was very geeky, had an idea. Rather than replacing the computers they were using, which were slow and old but only used for web-based work, she asked us what we would suggest, and we told them: just buy SSDs. So, at a fraction of the cost, they changed the disks and installed Linux Mint, and the funny thing is that the Linux lab is now more loved by the teachers than the more powerful Windows one. It was a great success for us, and also for the teacher who believed in this
story. So, to wrap up: in these 11 or 12 years of our association, what we have definitely learned is that starting small is absolutely right; a good idea doesn't need to start big immediately. If you deal with hardware, make sure you have an inventory, because it sometimes took us too long to make choices without one. If you want to replicate our experience, which is really nothing special and which everybody can start doing, make sure you have procedures: when people learn that there is a possibility to do some volunteering with computers, they may find it interesting, and you want to be able to tell them, not just by word of mouth, "hey, we do things this way," so that they have a reference. Make sure to be legal, because you don't want to be stopped by the law for taking shortcuts; and make sure to be a legal entity, because if you want to receive hardware from companies, they realistically cannot just hand their hardware over to Mr. Nobody; they need something that can track the fact that an item that belonged to the corporation has now been given to another legal entity. Be pragmatic: we absolutely love Linux, but it's not for everybody, and sometimes people need Windows. And keep in mind that if you start something like this, you are doing volunteering, so it's not technology for technology's sake; it's the people who are your focus. Be ready to listen, be ready to focus on the person; technology is just a way to help them. And have fun. This is some of us, and now I'm open to your questions.

Yes, that's a good one. The question is that choosing a distribution is not that interesting; the real topic is which desktop environment we chose. We chose Xfce, because it's sound and it works in a very classic way, so even Windows users may find themselves comfortable with it, and it's lightweight and works fine; the way Linux Mint packages it is good for our end users.

Can you repeat? I didn't hear. OK, the question is whether we see demographic trends in who comes to the association. It depends: usually it's people in their 50s and older who have problems with Windows, and we support that too, but we also have some very curious Linux users, and not only among the younger ones. They start using Linux for various reasons: because they are curious, or as a political stance, "I'm against the system, I use Linux." It's always fun to be open to people, because you meet an entire range of them.

There was another question. Yes: the question is whether the activity of the association can be sustainable in the long term, given that the industry keeps putting everything into one single device, so the possibility of changing parts is shrinking. That's absolutely a good question; we hope to be able to continue. What we can try to do, and what we are doing, is advertise and make people aware that when they buy a computer, the cost and the performance are not the only points; pay attention also to how repairable it is. I already mentioned iFixit, the company that makes repair tools and offers various services; we were given one of their toolkits, full of screwdrivers and so on, and they have created a manifesto, which you can find online, that really fits here. We hope to continue to be able to do this; for sure, even if the industry moves in such a negative direction, we see the changes with ten years of delay, so we always have ten years of old
computers that will reach us before we have to stop. OK, last question, then we have to go. It was not our choice: we were simply informed that the day-care center's use case also included those cognitive games; we really haven't even seen them, we just delivered the computers with an operating system. Thank you very much, guys.

All right, hi everybody, everyone; I'm mixing up "everybody" and "everyone." Anyway, welcome to our talk on scaling new heights with the Ansible community in 2023. We are Don Nero, who is off camera right now but will be coming on shortly, and I'm Carol Chen, and I think next time we team up we'll say "Nero and Carol." All right, so instead of a self-introduction, which I actually did in my talk yesterday (if you didn't see it, please watch the recording), I thought I would do a group introduction, because we really work as a team: the Ansible community team, which enables, helps, and supports the Ansible community. The people you see here are listed in alphabetical order, not in any order of preference, and this is only half the team; the other half is here. You might have seen some of us during DevConf at the booth, in some talks, or in the hallways, and I hope you had a chance to talk to us; if not, take a look at these Matrix IDs and GitHub handles and ping us online at your convenience. We will upload the slides after the talk, so you can get this information there.

As you can see, we are quite a diverse team. I actually just came from a talk by Jamadriaga about DEI (a really great talk; please watch it if you get a chance), which discussed having a team that is diverse in terms of demographics, expertise, location, status, and so on, and I think our team has a lot of that. Of course, it's not just ticking boxes to say we have diversity: we have people from Asia-Pacific, Europe, and the US, and having a diverse team helps us serve the diverse Ansible community we have. One of our main goals is to be inclusive of the Ansible community, which is varied not just in the different parts of the project but also in people's backgrounds, locations, and experiences. So hopefully, as a pretty diverse team, we can support the different challenges you may have, bring everybody together, and make everyone feel included.

Speaking of that, here are some of the things we have been doing in 2023. Yesterday's talk covered how to contribute to the Ansible community, and I mentioned the new website and the new forum as part of it. We are also hoping to get a better sense and understanding of what you think about the Ansible community: the Ansible project has been around for more than 10 years, a lot has changed, and it has grown in many ways. We have had different mission statements through the years, but we would now like something that combines everybody's thoughts, ideas, and feelings. So if you get a chance, take the survey: you can scan the QR code, and I'll share these slides so you can check out the survey later on. Tell us: what does Ansible mean to you, and what does the Ansible community mean to you? Automation is central, but what else is it about: is it about the people, is it about the
I already mentioned the website, and Don will also talk a bit about the whole journey in his part of the talk, but I just wanted to share some URLs. It's a work in progress, and there's a repo you can check out. We are using Nikola as the static site generator, and we are working with the community: it's a completely community-driven, public effort, and you can join the working group on Matrix. We have been working asynchronously, so instead of a weekly meeting we just have the discussions in the working group on Matrix.
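To give a flavor of what the Nikola setup involves, here is a generic, minimal sketch of a Nikola conf.py. To be clear, this is not the actual configuration of the community website; every value below is a placeholder:

```python
# conf.py: a minimal Nikola configuration sketch (placeholder values only;
# the real community-website repo has its own, fuller configuration).
BLOG_AUTHOR = "Ansible Community"        # placeholder
BLOG_TITLE = "Ansible Community"         # placeholder
SITE_URL = "https://example.org/"        # placeholder URL
BLOG_EMAIL = "noreply@example.org"       # placeholder
BLOG_DESCRIPTION = "A community-built static site."
DEFAULT_LANG = "en"
THEME = "bootblog4"                      # one of Nikola's bundled themes

# Nikola maps source files to output as (wildcard, destination, template).
POSTS = (("posts/*.md", "posts", "post.tmpl"),)
PAGES = (("pages/*.md", "", "page.tmpl"),)
```

With a config like this in place, running "nikola build" renders the Markdown sources into static HTML under output/, which is part of what makes the site easy to review and contribute to as plain files in a repo.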
As for the forum, we have been running an internal test instance since the beginning of the year; we wanted to make sure it will do what we planned it for, and I think it has helped me a lot, at least, in working together with my team. We hope this will extend to the community and be a strong tool for the community to come together and collaborate, and hopefully we will be able to welcome you soon to the forum at this URL.

Speaking of events, I just wanted to touch on those. We recently had a Community Day in Boston as part of Red Hat Summit and AnsibleFest, but we do want to be more accessible, so since that Community Day happened in the US in the first half of the year, we will have one in Europe in the second half; stay tuned for details. Similarly for the Contributor Summit, which is more contributor-focused: we had one in Belgium in February, and we will probably have one in the US in the second half of the year. Especially for the Contributor Summit, because we have contributors from all over the world and we know a lot of people can't attend events in person for various reasons, we will make sure a hybrid option is available and accessible, so everyone can participate online from anywhere. With the time differences, we can't pick one time that works for everyone, but we can still increase accessibility by also providing recordings after the event.

Meetups are more focused regional events, at the level of cities and regions, so if you are interested in organizing a meetup, come talk to me. My colleague, who is not here right now, is also working on an organizers' toolkit, which helps people get started with organizing a meetup, especially if you don't have much experience; otherwise, you can contribute your own experience and expertise to the toolkit to help meetups get organized around the world. This list is just the meetups in June, some already past and some upcoming; things are getting active again after the pandemic times, and we'd love to see more meetups happening around the world.

So how do you stay on top of all this news: the website, the meetups, what's going on? Please subscribe to the Bullhorn newsletter if you haven't already; again, there's a QR code you can scan, or the bit.ly short link. The newsletter is not just something you consume and get information from; you can also contribute. If you are working on something you think the community can benefit from (a lot of collections are community supported and maintained, and every time they have updates they share the news on the Bullhorn), or if you have a blog post or a video or something you created that you want to share, we're happy to have your contributions. There's a newsbot in the social channel on Matrix: you mention the newsbot, and your news item gets saved for publication in the next issue. It's a weekly issue; I was supposed to do this week's issue yesterday, but I think I'll do it tonight or tomorrow.

Lastly, this is my last slide: we have been using Matrix for the past two years now, but I think a lot of people are still getting used to it. That's okay: if you're used to IRC, there is a bridge to that, but if you're new to Matrix, we recommend using Element as the client, and there's information on the communications page. Some of you watching are perhaps already on Matrix, because you're watching this through Matrix; you can use the same account, you don't need a separate one for Ansible. You can join our Ansible space and connect to the Ansible rooms in that space. Mastodon is another social network where we have started to be more active. It's not tied to one server or one instance, so whichever instance you're using, you can connect and follow us; we happen to be on the Fosstodon instance, and we share information there. And with that, I'll hand it over to Don, who will take you through the rest of the persona-based content journey.

Thanks, Carol. Is the mic okay? Everything good to go? So, as Carol said, my name is Don. Hey, everybody. I'm part of the Ansible community team, and today we're going to talk quickly about some of the work we've been doing to strengthen the community, support the users, and make things better, because that's something we've identified a real need for, and also touch briefly on the central web presence we're building.

To dive right into my slides, I want to start by disambiguating: you'll probably hear me say the words "docs site" like twenty times, so it's a good idea to say specifically what that means. The docs site, when I say it, is just a set of statically generated HTML files; it's a kind of top-level landing page for the Ansible community docs. And here we are: this is a snapshot from a little over a year ago, about the time I started with Ansible. I'm still learning stuff all the time about Ansible, but when I started, this was my entry point. Like most community users, I went right in: let's go to the docs, help me understand what Ansible is. I have talked to a few people at DevConf over the past couple of days who asked, "hey, what is Ansible?", and it seems to be a common question. But when I got into the docs site, there was a real mix of things, and it was hard to find the answers. I was looking for a quick start, a hello world. I came over from middleware, where I spent most of my career working with different JBoss teams, and I always look for that hello world: get in, get started, a few easy steps, get up and running and doing something, let's go. But when I was looking for something similar for Ansible, I got into this weird loop. I just had the question: how do I automate something with Ansible, how does this work? I found a page in the community docs that took me to a Red Hat site where there was a video that didn't load, and a link that took me back to the community docs where I had already been. I was stuck in this weird loop, and I spent like ten minutes going: what even is this, what am I doing? So that was just this barrier to entry.
You know, a lot of people would just say, "okay, I can't figure this out", give up, and maybe go to Reddit, or who knows where. Another thing I noticed while navigating through all of the community docs (I knew there was a lot to Ansible, and I was trying to navigate through all the different projects and look at their documentation) was this lack of cohesion: Markdown docs that were just in GitHub over here, stuff on Netlify over there, a bunch of stuff on Read the Docs, but all in different namespaces, so I didn't know what was officially an Ansible project. While doing this, I also found third-party mirrors of the entire community docs, so there was a lack of trust: I didn't know where I was, I didn't know whether this was an official Ansible thing or just a project somebody was running out there. Things were spread all over the place, with different looks and feels, completely inconsistent.

So why does that matter? Why is documentation important, and why should the community team even care? Community documentation (I'm kind of reading the slide to you here) enables users to succeed. For an open source project like Ansible, I think what made it really successful is great documentation. When I started writing a playbook to do something, once I found my way in, the underlying documentation was great: everything was there, all my questions were answered. When you go to places like Reddit and see people's questions, you often see that the responses in the comments are links to the docs. So once you know where you are, it's great; that success increases adoption, which expands the project into all these adjacent things. It's a vital part of any project: documentation absolutely matters as a force multiplier.

So, fixing all of these things: where do we start? Where was the point where we said, okay, we've identified these problems, how do we go about fixing them? I think we're all pretty familiar with the idea of personas; it's kind of a UI/UX thing, and some of them can have almost too much detail: this is George, he's a Sagittarius, his favorite color is brown, all this crazy detail. But the thing that really matters with personas is that you identify the users, the people who are your audience, and the things to hone in on are the needs, the attitude and the knowledge; those are the things about personas that really matter. (Hey, there's Anne-Racia.) The needs of a persona are important because they explain the goals, what the user is trying to do; you want to help the user succeed, so you need to understand what they need from this. The attitude is important because it gives you the level of verbosity: say it's a developer coding against an API; they want to know all the programmatic options and their expected behaviors, so they can play around, tweak and tune, and see what works and what doesn't.
But if you're an SRE, you don't want to be playing around: if there's a flashing red light on a dashboard somewhere and the service is down, show me how to remediate it as quickly as possible so I can restore things and we're back to fully operational. Knowledge, I guess, is the obvious one: a hobbyist who has, say, a playbook they run every time they install an Ubuntu distro needs a very different sort of knowledge than a solutions architect at an enterprise. So knowing the needs, the attitude and the knowledge of the different personas really helps you understand your users and their different types.

So, bear with me; I'm kind of the king of protracted pauses, and it's Sunday, it's also Father's Day, I've got to talk to my kids later. But it's okay, let's go, it's good stuff. Once we had those users, the personas, in place, we asked: what do we do with that? That's great, but how do we map out what the user is trying to do? Again, coming from middleware, I had been working on an operator for Kubernetes, and that's how I learned about these. This is actually basically the Kubernetes adoption journey: you become aware of something, maybe you read about it or you're just curious; then you evaluate and start to learn; then you adopt, and you're using it; then you scale up and out. I think those milestones are pretty much applicable to most IT projects; it's a very generic framework for the progression, the evolution, of the adoption of an IT project. So, using those milestones: each one starts with some kind of human motivation, then you describe that, and then there are the specific tasks underneath that you need to complete for that milestone; those are the milestones along your journeys. We started mapping these things out, interviewing people, talking to the community, identifying what these are. And we needed a way to share them; the idea is to meet the community where they work. That's actually a bit of knowledge Robin imparted before she left, and it's a great thing: don't put things behind barriers. At first, we started putting things in a browser-based tool with really fancy, really beautiful graphical representations of the user journeys (there's this task, and then they go on to do that, kind of a tree thing). It looked great, but you couldn't check it into GitHub, you had to register and log in, and there was a limit because it was a free tier. It just didn't work: you couldn't give it to the community, and you didn't want to be behind a login. So we turned to Markdown, because it's ubiquitous, putting things in YAML, and keeping it all in GitHub; plain text, of course, makes it easy to do PRs and all that kind of stuff.
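As a purely illustrative sketch of what one of those plain-text journey definitions could look like (the field names here are invented for this example, not taken from the actual repo), each milestone pairs a human motivation with the tasks underneath it:

```python
from dataclasses import dataclass, field

@dataclass
class Milestone:
    """One journey stage: a human motivation plus the tasks under it."""
    name: str              # e.g. "aware", "evaluate", "adopt", "scale"
    motivation: str        # why the user is at this stage
    tasks: list[str] = field(default_factory=list)

@dataclass
class Journey:
    """A persona's path through the docs, as a sequence of milestones."""
    persona: str           # whose needs, attitude and knowledge this serves
    milestones: list[Milestone] = field(default_factory=list)

# Hypothetical example; the real journeys are kept as YAML and Markdown files.
user_journey = Journey(
    persona="new user",
    milestones=[
        Milestone("aware", "I keep hearing about Ansible; what is it?",
                  ["read the project overview"]),
        Milestone("evaluate", "I want a hello world in a few easy steps",
                  ["follow the getting-started guide", "run a first playbook"]),
    ],
)
```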
So we started mapping out persona journeys and making things available to the community by putting them in the repo; everyone works in GitHub, so we created a repo. As I think we all know, naming is one of the hardest things in tech, and I still kind of hate the name of this repo, but that's okay. I created a really simple bunch of Jinja templates and some styles, and deployed things onto GitHub Pages with an action, so people could look at it, we could start getting things out to the community, and they could see what we were building.
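Here is a rough sketch of that render step, under the same caveat: the structure and names are invented, and it assumes PyYAML and Jinja2 as dependencies. It shows how a plain-text YAML journey can become a static HTML page that an action could then publish to GitHub Pages:

```python
# Sketch: plain-text journey in, static HTML out. Assumes PyYAML and Jinja2
# are installed; the data structure here is invented for illustration.
import yaml
from jinja2 import Template

JOURNEY_YAML = """
persona: new user
milestones:
  - name: aware
    tasks: [read the project overview]
  - name: evaluate
    tasks: [follow the getting-started guide, run a first playbook]
"""

PAGE = Template("""\
<h1>{{ journey.persona }} journey</h1>
{% for m in journey.milestones %}
<h2>{{ m.name }}</h2>
<ul>{% for t in m.tasks %}<li>{{ t }}</li>{% endfor %}</ul>
{% endfor %}
""")

journey = yaml.safe_load(JOURNEY_YAML)

# Write the rendered page somewhere a GitHub Pages action could publish it.
with open("journey.html", "w") as f:
    f.write(PAGE.render(journey=journey))
```

The appeal of this shape is exactly what Don describes: everything is plain text, so changes arrive as ordinary pull requests instead of being locked inside a proprietary diagramming tool.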
And that's when things started to get really fun. When you talk about personas and journeys to the community, sometimes they don't get that engaged: "okay, that's great", but maybe it's a little abstract. But when you show them something and say, "hey, we're thinking about this new docs site, look at this", then the feedback starts rolling in. We went a little bold at first; we decided, hey, let's really mix things up. The feedback ranged from "oh, this is great, we can actually build this new docs site" to other people saying "no, that looks awful". We went through a few iterations, but, as we found, Cunningham's Law was kind of our thing: let's not try to get it right from the beginning, let's just get it out there, let's be bold, and let's get the community's hands on it. As we went, we used the Bullhorn, sent out shout-outs, hit Matrix and IRC, and even went to the folks on Reddit, just asking: "hey, what do you think? We're working on this new docs site." The feedback kept rolling in; we got a lot of great suggestions from the community that we hadn't thought of ("we need links to this", "that doesn't make sense"), and eventually it got a lot more positive. Once we got away from those boxes and people stopped paying so much attention to the colors and the look and feel, things started to come together, and people actually started noticing the journeys: "oh hey, you're actually telling me the complete steps, the progression; I'm a maintainer, and here's the path I can follow." That's literally what we were doing: like the people at the airport with the glowing sticks signalling "this way" on the runway, we're literally signaling to users: this is the path you should follow.

So if you go to docs.ansible.com, you'll see this: we've got a journey-based docs site, and the whole idea is that these are the paths: if you want to get started, you can do these things; if you're a user, you start here and then you build. We've defined those journeys, and of course it's still a work in progress; we're continuing to gather feedback and measure with analytics. We've got quick links, and there's a toggle so you can flip between this and the old site with the cards, if you still want that, and we've also been able to see how many of the users who go to docs.ansible.com are doing that.

Some of the things still to come: we're using the Diátaxis framework, which divides content types into tutorials, concepts, reference, and how-to guides. Tutorials are task-based; they're where you acquire skills. How-tos are larger, more overarching sets of procedures that help you apply the skills you've learned. We're also revamping the getting-started content, which was one of my first contributions to Ansible, so it can lead to different places, build out, and make that path a lot easier. On the ecosystem side, we've moved everything under the same Ansible namespace on Read the Docs: projects that were over on Netlify, Galaxy NG, they're now in the same namespace, so everything is organized, you've got that trust, there are deterministic URLs, and everything is in one place. We're also working on themes: for projects that want to use Markdown, great, use the MkDocs theme, brought to us by Sorin on the DevTools team, who's been working on that one; there's also a Sphinx theme. And we're building a community website as well that will tie all of this in; here's a screen grab of that. There's still a lot more work to come on it, but it's going to give that central place where the community can come and find the docs, and then find the forum that Carol was talking about in her segment.

Finally, I'll leave you with a call to action: if you want to get involved with this, one of the best places is Matrix, in the docs channel; we call it the DaWGs, the Documentation Working Group. It meets every Tuesday, everyone's welcome, and if you're curious to know more, want to contribute, or just want to find out more, please come and join us. We're a friendly bunch; we love to hear feedback, we love to hear criticisms. Thank you.

How are we for time? Do we have any questions from the chat? Do you want to hear me talk for five more minutes? I'm sure I could. Oh, it is Father's Day, so maybe I've got a dad joke in there somewhere, but I'm on the spot and my mind is just going blank. Yeah, please.

So let me make sure I got the question right: the documentation that's on Automation Hub or Galaxy, is it not also on docs.ansible.com? Do you want to take that one? So, docs.ansible.com has mainly community docs, so the Automation Hub content probably won't be there; you have to log in separately to Automation Hub anyway to access those certified collections and things like that. And even on Galaxy, the community Galaxy, I think the docs are also rendered on the website, as far as I know. Are the collection docs rendered on docs.ansible.com? They are; the package docs are. Should we show them or something? Let's see if I can. It's a good question anyway; you've got us a little bit stumped with the answer, but here we go: those Galaxy user guides are a little bit stale, yeah, but I do think they're both pulled from the same source. Any other questions out there? No more questions? Okay, well, thank you, thank you very much. Yeah, I will... no, you have... because, like