Hello. My name is Karel Szymon. I am a software engineer at Red Hat working on a project called OpenShift Virtualization, and I would like to present a cool feature called KubeVirt Tekton Tasks. Long story short, with this project you can automate VM workloads: you can easily start VMs, run some scripts inside them, then tear them down, and much more. I will show you some examples of what you can do; it's pretty cool, you will see for yourself.

First we will talk briefly about OpenShift Virtualization: what it is and what it's used for. Then a few sentences about OpenShift Pipelines, then the KubeVirt Tekton Tasks project, then the Tekton Tasks Operator, and finally some example pipelines showing what you can do with our project.

So what is OpenShift Virtualization? If you have workloads in virtual machines and you run them the traditional way, it might be hard to bring them to new systems like Kubernetes or OpenShift. With OpenShift Virtualization you can bring your old workloads to Kubernetes or OpenShift and run them next to your containers. OpenShift Virtualization supports Linux and Windows, and it has multiple features, e.g. disk importing, live migration, snapshotting, monitoring and many others.

Of course we also have templates. The project is called Common Templates, and it's a set of templates for most modern operating systems. So, for example, you don't have to search the Internet for how many CPU cores or how much memory each operating system needs; you just look inside the OpenShift UI, where you will see templates for Ubuntu, Fedora or Windows, and with just a few clicks you can have a fully working virtual machine. Of course, you have to provide a cloud image for it.
But for that we have another feature called Golden Images: for example, for Ubuntu, RHEL and Fedora we distribute these images inside the cluster, so you don't have to take care of anything; just a few clicks and the VM is working.

Now a few sentences about OpenShift Pipelines. OpenShift Pipelines is a cloud-native CI/CD solution based on Tekton. It uses Tekton building blocks to automate deployments across multiple platforms, and Tekton introduces multiple CRDs for defining CI/CD pipelines.

Before I go deeper into our project, I would like to explain some definitions. I will repeat these words a lot, so let's explain them first. First of all, a Task. It defines a build step; for example, you can imagine building a binary, building a container, or running a test. It's the smallest piece of a pipeline. Then we have a Pipeline, which consists of multiple Tasks. With a Pipeline you can define, for example: prepare a disk for a VM, start the VM, run some scripts inside the VM, then shut the VM down and delete it. Then we have PipelineResources, which define objects that are inputs for the pipeline, for example a container image or a git repository, and outputs, for example again a container image or a change in a git repository. Then we have a PipelineRun. A PipelineRun starts the pipeline: when you have a pipeline definition, you create a PipelineRun object, which starts the pipeline, and Tekton takes care of the rest.

And now about the project. KubeVirt Tekton Tasks provides OpenShift Virtualization-specific tasks, which focus on creating virtual machines, deleting them, starting VMs, and creating data volumes and data sources. Of course, you can do some manipulation with PVCs, and you can run, for example, disk-virt-customize, disk-virt-sysprep and many other actions. You can find the project under the link in the slides.
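A minimal sketch of how these three Tekton CRDs fit together; all names below are illustrative, not from the talk:

```yaml
# A Task: the smallest unit; its steps run inside a single pod.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: run-tests
spec:
  steps:
    - name: test
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        echo "running tests..."
---
# A Pipeline: an ordered graph of Tasks.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-test
spec:
  tasks:
    - name: test
      taskRef:
        name: run-tests
---
# A PipelineRun: instantiates and starts the Pipeline.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: build-and-test-run-1
spec:
  pipelineRef:
    name: build-and-test
```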
And here is the list of all the tasks we currently have and ship with OpenShift Virtualization. There are tasks for creating virtual machines: we allow you to create a VM from a manifest, where you just pass in a YAML manifest, or to create a virtual machine from a template. Then we have tasks for manipulating template metadata; for example, you can copy a template and do some modifications, like changing the number of CPUs, the amount of memory, or adding devices to the VM. Then we have tasks for creating or deleting data volumes and data sources. One small note: in OpenShift Virtualization 4.11 the task was named create-datavolume-from-manifest, but we added much more functionality in 4.12, so its name changed to modify-data-object. As you can see, in 4.11 it only knew how to create data volumes, but in 4.12 it knows how to create and delete both data volumes and data sources. Then we have a task for generating SSH keys: when you run it, it generates a new SSH key pair and stores it in the cluster. Then we have tasks for executing commands in virtual machines: the first task executes commands via SSH, and the second one just stops and deletes the VM. Then we have tasks based on the libguestfs tools; there are two, disk-virt-customize and disk-virt-sysprep. And finally we have a task for waiting on the VM: the VM can be in various states, so you define which state you are waiting for, and the task will wait for it.

And now about the Tekton Tasks Operator. TTO, or the Tekton Tasks Operator, is an operator which takes care of deploying KubeVirt Tekton Tasks. You can think of this operator as a box containing all of the previous tasks; when you install OpenShift Virtualization, TTO takes care of deploying these tasks inside the cluster. TTO is part of OpenShift Virtualization 4.11, where it is marked as a dev preview, so it doesn't have official documentation or test coverage.
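To make the task list concrete, here is a sketch of a tiny pipeline wiring two of these tasks together. The task names, parameter names, and result names follow my reading of the upstream kubevirt-tekton-tasks repo and can differ between versions, so treat them as assumptions:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: start-and-wait
spec:
  tasks:
    # Create a VM from an existing template and start it.
    - name: create-vm
      taskRef:
        name: create-vm-from-template   # assumed task name; verify for your version
        kind: ClusterTask
      params:
        - name: templateName
          value: fedora-server-small    # hypothetical template name
        - name: startVM
          value: "true"
    # Block until the created VMI reaches the desired state.
    - name: wait-for-vm
      runAfter:
        - create-vm
      taskRef:
        name: wait-for-vmi-status       # assumed task name
        kind: ClusterTask
      params:
        - name: vmiName
          value: $(tasks.create-vm.results.name)
        - name: successCondition
          value: status.phase == Running
```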
Of course we have documentation and test coverage upstream, but not downstream. From OpenShift Virtualization 4.11, TTO is deployed by default, but by default it does not deploy anything; it's just a testing feature. When users would like to use it, they have to go to the HCO CR. HCO is the operator which takes care of deploying the other OpenShift Virtualization components. The user then has to enable a special feature gate, which tells TTO to deploy its resources. You can see an example in the presentation, and there is a link to the GitHub repo.

Before I show you some examples of what you can do with these tasks, I would like to talk a little bit about motivation: why we are doing this, why we are creating these tasks and these examples. Imagine you would like to use OpenShift Virtualization: you need some source data, a disk with an operating system. When you are trying to run a Linux VM, it's quite easy to find a cloud image; with just a few clicks you can find an image for Fedora, Ubuntu, CentOS or RHEL. But what about Microsoft Windows? I guarantee you will not find any publicly available cloud image. For that, we prepared example pipelines which take the URL of a Microsoft Windows 10 ISO and then trigger a whole pipeline which takes this ISO, installs the operating system, and stores it in a special data volume; then there is a second pipeline, which I will explain later.

So, as I mentioned, we have two example pipelines. The first one is Windows 10 Installer, the second one is Windows 10 Customize. The first pipeline takes the ISO image, installs it, and stores the result in the cluster, and then you can use it as a golden image in the OpenShift Virtualization OS images namespace. Then there is the second pipeline, Windows 10 Customize. This pipeline takes the result of the first one and does some modifications.
E.g., in our case it installs Microsoft SQL Server, and then the pipeline again takes the image, stores it in a data volume, and you have a fully working Microsoft Windows 10 with SQL Server, ready for use in other VMs.

So, as I mentioned, the first pipeline just needs the ISO, the URL specifically. It will install the operating system, and of course it installs the virtio drivers for better performance, and then it stores the image inside the... well, in this case it's the kubevirt-os-images namespace; that's the upstream name, but downstream it's called something like openshift-virtualization-os-images. At the bottom you can see an image of the tasks it consists of. Then there are parameters. The only required one is the link to an ISO; e.g., you can take one from the official Microsoft sites, or you can run your own server which serves the ISO to the pipelines. Then, of course, you can specify some other parameters, like the autounattend config map name, a special config map where you prepare the sysprep answer file; according to this sysprep, the pipeline will install Windows. Then the source template name and the base DV name.

Here is the list of all the tasks this pipeline consists of. First, we copy a template: we are copying it from the project called Common Templates, specifically windows10-desktop-large. Then there is modify-vm-template, which takes this template and does some modifications, e.g. changing the boot source. Then we create a VM from the template, wait for it, create a new DV from the installed DV, and then do some cleanup.

And here is the demo. First, a note: this demo is cut; the whole pipeline takes about one hour, but it was cut down to only three minutes. As you can see, there are two pipelines, Windows 10 Installer and Windows 10 Customize. You just start one and provide a URL for the ISO, or you can change the pre-filled default parameters. So, start it. It will do some copying and modifying of templates. Then it will start the VM.
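For reference, the YAML equivalent of clicking Start in the UI could look roughly like this. The pipeline and parameter names (windows10-installer, winImageDownloadURL, autounattendConfigMapName) are assumptions based on the upstream examples, so check the definitions shipped with your cluster:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: windows10-installer-run-
spec:
  pipelineRef:
    name: windows10-installer           # assumed pipeline name
  params:
    # The only required parameter: where to download the Windows ISO from.
    - name: winImageDownloadURL
      value: https://my-server.example.com/Win10.iso   # hypothetical URL
    # Optional: a ConfigMap holding the sysprep answer file.
    - name: autounattendConfigMapName
      value: windows10-autounattend     # hypothetical ConfigMap name
```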
And now let's see how the VM is doing. As you can see, the VM was created; currently it's starting. During the pipeline the VM is started multiple times; it's fine that it restarts. So now it's booting, and in a few seconds you should see the installation page. As I mentioned, this installation is driven by sysprep, so everything you define in the sysprep answer file, the installation will just follow. Now, as you can see, it's booted, and now it will do some generalizing and some cleaning, and it's shutting down. Now let's go back to the pipelines. As you can see, it's doing the last steps, like copying the disk. And when you click on virtual machines, you can select the Windows 10 template, and it will immediately show you that there is a boot source available. So you just click a single button, and voilà, you have a working Windows VM.

At this point the automation stops, because to do the extra steps we created another pipeline called Windows 10 Customize. The purpose of this pipeline is to take the result of the first pipeline and do some modifications. In this case we are installing SQL Server, but of course you can install any other software or packages. At the bottom you can see the picture of the tasks this pipeline consists of. Again, we have some parameters, but in this case you don't have to provide anything, because the pipeline will take the results from the first pipeline. You can, for example, define the template name; the config map name, where you again define the sysprep and, e.g., what should be installed inside the Windows VM; and the result name, with which you can then run another VM. Again, it consists of multiple tasks. First we copy and modify the template, then we create the VM, wait for the VM, then again do some modifications, and then copy the resulting disk to a completely new disk. And again, we have an example. So, as you can see, the first pipeline was successful.
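As a sketch of how the sysprep customization is wired up (and of how the PowerShell scripts get invoked, which comes up in the Q&A below): KubeVirt can mount a ConfigMap as a sysprep disk, and Windows setup consumes the answer file from it. Everything below is an illustrative fragment, not the pipeline's actual answer file:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: windows10-customize-sysprep     # hypothetical name
data:
  # KubeVirt expects the answer file under this key when the ConfigMap
  # is attached to the VM as a sysprep volume.
  autounattend.xml: |
    <?xml version="1.0" encoding="utf-8"?>
    <unattend xmlns="urn:schemas-microsoft-com:unattend"
              xmlns:wcm="http://schemas.microsoft.com/WMIConfig/2002/State">
      <settings pass="specialize">
        <component name="Microsoft-Windows-Deployment"
                   processorArchitecture="amd64"
                   publicKeyToken="31bf3856ad364e35"
                   language="neutral" versionScope="nonSxS">
          <RunSynchronous>
            <!-- Illustrative: run an installer script shipped on another disk -->
            <RunSynchronousCommand wcm:action="add">
              <Order>1</Order>
              <Path>powershell.exe -ExecutionPolicy Bypass -File E:\install-sql-server.ps1</Path>
            </RunSynchronousCommand>
          </RunSynchronous>
        </component>
      </settings>
    </unattend>
```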
Now let's trigger the second one. As I mentioned, you don't have to specify anything in this pipeline, just hit start. Again, it starts the VM, so let's look inside. Again, I have to mention that this pipeline takes about one hour. Now it's installing the SQL Server. Now it's doing some preparation and cleanup and shutting down. The pipeline will then take the resulting disk, copy it into a completely new disk, and create a new template for you, so you can use this new template with the new disk and create a new VM from it. As you can see, there is a new template, and with a single click you can launch a new VM with a pre-installed SQL Server.

Okay, that's all from this presentation. In case you have any questions, don't hesitate to ping me by email, or you can ask here.

Q: Do I understand correctly that you're using PowerShell to install all the software inside the virtual machine?

A: Yes, we are using a PowerShell script. In the first pipeline it installs the virtio drivers, and then it installs the MS SQL Server.

Q: And how are the commands invoked from the pipeline, through SSH, I guess?

A: It's part of the sysprep. You define the sysprep, and there is a special command there to run, or to mount, the script; the script sits just below the sysprep. So it's quite easy.

Q: Yeah, and is the second step also done this way? You described two steps: the first for installing Windows and preparation, the second for installing the SQL Server.

A: Yeah, both are done the same way: with a PowerShell script below the sysprep, invoked from the sysprep.

Q: Thank you. And where is this all done on the back-end side, is it built locally on the machine?

A: In the cluster.

Q: In the cluster. And can you connect multiple clusters this way? For example, run the tasks and builds on, let's say, a Fedora s390x machine in IBM Cloud?

A: Can you please repeat the question?

Q: I mean, you built an x86 Windows-based machine.
Is it possible to, for example, use my own IBM Cloud credentials and build the machine in the cloud?

A: I think so. If you have OpenShift Virtualization pre-installed, I don't see any problem with that.

Q: It needs to be on the remote node in the cloud.

A: Yeah, in your remote cluster you have to have OpenShift Virtualization and OpenShift Pipelines, or Tekton, and then it will work for you.

Q: Cool, thank you. Yeah, I just wanted to ask if I can find the source code of all the steps somewhere.

A: Yes, you can find it. The source code of the tasks is under this link: when you search for kubevirt, it will find you the kubevirt organization, and there you just look for the kubevirt-tekton-tasks repository. The source code of the pipelines is in the Tekton Tasks Operator; again, it's under the kubevirt organization, so just find the repos with tekton in the name and you will find all the source code. And of course you can ping me by email and I will send you the links.

Q: Thanks a lot. Hey, hello. So your solution is built on top of Tekton, right?

A: Yeah.

Q: So in case your pipeline fails, how do you debug it? And is there anything missing? Because Tekton kind of builds on jobs, I think?

A: No. When you run a Tekton pipeline, or rather when you create a PipelineRun, it creates pods; each task runs inside a separate pod.

Q: Okay. And in case your pipeline fails, you just directly check the logs of those pods?

A: Yes.

Q: Okay. And is there anything missing in those logs, anything additional on top of those pods, that would help you debug it better?

A: I think not, because when you debug the PipelineRun, it will tell you which task failed and which pod failed, and then you just go to that pod, run oc logs, and you will see the whole log of what failed.
But of course, when you would like to run deep diagnostics, for example when the logs are not enough, you have to build your own tasks and your own images and run the pipeline again. But I think for typical usage it's enough to debug from the pod logs.

Q: Okay, that's all. So then thank you very much.

A: Thank you very much.