Wow, thanks. Great. Thanks for the introduction, Marisa, and I'll step right into the topic I've got for today. Thanks everyone for coming to the session — I already see around 50 people in the crowd, so hopefully what you learn today will be useful and beneficial for you. Today we'll be digging into the DevOps world with a focus on the velocity of DevOps teams and how to achieve it using GitOps and other practices. Before we jump into the presentation, a few words about myself. I've been doing software engineering since I was a kid, and for the last five years I've been working a lot on distributed systems and with distributed teams, mostly in technical leadership roles, and dealing with Kubernetes. Recently I joined Portainer as a developer advocate, so now I'll focus more on helping other people succeed with the technology rather than being the one in the field. You can find me on Twitter — my handle is at the end of the slide — and I'd be happy to chat and answer any questions you've got, whether that's about distributed systems, Kubernetes, Portainer, or my awesome dog. So feel free to connect on the social networks. One more bullet point: I'm very practical. Practical developers always start with the problem they're trying to solve and only then figure out the solution, and that's exactly the pattern we're going to follow today. First we'll identify the problem, then the patterns that resolve it and how other people apply them, and then the tools we'll use to solve it — which basically implement those patterns in a more practical way. First, to get everyone on the same page, I'll describe a very classical CI/CD pipeline for Kubernetes (and not only Kubernetes) that I've encountered multiple times when I did consulting.
You have one, possibly gigantic, pipeline which takes care of fetching the code, building, testing, and producing some artifact out of a successful build, then deploys that artifact straight to the selected environment. Then you go to the environment and see what happens there: maybe click through a few pages, do a smoke test, or in the best case run some automation tests on top of that. While it sounds simple, to me it's complicated and suboptimal. Why? I see a lot of problems here, and I'll go over them — if you've ever encountered these problems, just raise your hands; I think you should be able to do that in Zoom. The first one is the problem of "what is in production now". Imagine two people merging code at the same time: how do we figure out what is on prod now if your pipeline triggers on a merge to a particular branch such as master or main? The next one is exposing access to the cluster by letting people do routine work with cluster resources, such as deploying applications, by hand. That's a very error-prone approach and it tends to produce a lot of mistakes. Simply giving any of your peers admin access to the cluster is not a good idea, and "everyone is an admin" is a very bad idea, especially when it's just for deploying an application. Next, sometimes the cluster simply blows up. I'm sure you've come across the scenario of having to recreate the cluster, whether that's moving to another subscription or recreating all the applications you had. With classical CI/CD pipelines you would simply re-run all of the pipelines for each service, and that would take a lot of time. And rollback requires a rebuild in classical CI/CD pipelines — I've encountered multiple products with build times of up to a few hours.
To me that seems horrible, since it kills the feedback loop and the whole idea of moving quickly — being able to break things, roll back, and see what happens. So what we're going to look at today are the patterns that resolve these issues. The first one is decoupling CI from CD: decoupling your continuous integration from your continuous delivery and deployment. Essentially, that's the exact enabler of GitOps, and GitOps can be considered part of DevOps culture — one of the practices of a culture that reaches a good state when you have everything automated and the team is doing great with delivery. We'll get into all of these patterns as we go along, and I think it will be extremely useful. So, decoupling CI from CD. It's kind of breaking up and not breaking up at the same time: we are decomposing the process so that independent components can take over individual steps. Take the same good old classic CI/CD pipeline: we put build, test, and release into one piece — that's the leftover of our good old gigantic pipeline — and we add the option of having the deployment done by some other component. That's exactly what we're trying to separate here. Now that we have it decoupled, we can plug in that independent, external component, and you'll see what that gives us in a second. It gives us the ability to implement the GitOps approach: keeping the desired state of the cluster based on the repo. There are a lot of really excellent talks out there on GitOps; I'll include them in the materials I'll send out. But the general view of the concept is as I've described: if the Git repo gets a change, the cluster should also change its state. And by state I mean the application deployments, services, load balancers, etc.
The whole idea of GitOps is that your Kubernetes manifests are just sitting in your Git repo, which lets us rapidly collaborate on them. It's much easier, it addresses the problem of not knowing what is in the production environment right now, and it gives you an exact source of truth: you can look at the repo and figure it out. And it's Git-based, which means you have all of the pull requests, reviews, branching, rollbacks, and signatures — and most importantly, it keeps the history of commits. So it's much more deterministic than kubectl commands being run against the cluster, whether by a CI tool or manually. It also helps us better understand what's happening with shared resources — shared load balancers, shared volumes, etc. — because the Git history covers them too. I'll walk you through the diagram of how that works. Imagine we have a developer who merged into the main branch. The CI tool on the left picks it up and does exactly what we described before: building, testing, pushing the image to the container registry. Once that's done, our developer goes into the GitOps repo and changes the image version of his application — just bumping it by one, or something like that. The GitOps tool, which is running inside the Kubernetes cluster, picks up the changes from the repo and starts applying them. It's the job of that GitOps tool to listen for changes in the GitOps repo, continuously polling it, and to apply them onto the Kubernetes API, which is declarative by itself. So what's happening is we're pulling YAMLs out of the GitOps repo and applying them onto Kubernetes — and great, the whole state of the cluster has changed just because we changed something in the GitOps repo.
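To make that concrete, here is a minimal sketch of the poll-and-apply loop a GitOps tool runs — purely illustrative, not how Flux, Argo, or Portainer actually implement it; the function names and the hashing scheme are my own:

```python
import hashlib


def manifests_digest(manifests: dict[str, str]) -> str:
    """Hash the full set of manifests (path -> YAML text) so any change is detected."""
    h = hashlib.sha256()
    for path in sorted(manifests):
        h.update(path.encode())
        h.update(manifests[path].encode())
    return h.hexdigest()


def reconcile(manifests: dict[str, str], last_digest, apply_fn) -> str:
    """Apply the manifests only when the desired state changed since the last poll.

    Returns the digest to remember for the next iteration. In a real tool,
    apply_fn would submit the YAML to the declarative Kubernetes API, and this
    function would run forever: fetch repo, reconcile, sleep(fetch_interval).
    """
    digest = manifests_digest(manifests)
    if digest != last_digest:
        for path in sorted(manifests):
            apply_fn(manifests[path])
    return digest
```

Because the Kubernetes API is declarative, re-applying the same desired state is safe — which is why the loop only needs to know "did the repo change", not "what exactly changed".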
An important thing to notice here is that the GitOps tool has to live within your Kubernetes cluster to be able to apply such changes and for that to be secure. Now that we've covered the general concept, let's see what tools we've got here. The first two, Flux and Argo, are big players in the GitOps space, and I'm sure a lot of you have heard of them. They are great and capable of doing a lot of things: the basic idea of GitOps, progressive delivery, multi-tenancy, etc. I want to tell you about something we've built into Portainer with the latest Community Edition release, 2.9.1 — something I call lightweight GitOps. I've recorded a small video on it, so we'll be able to see it. This is the basic UI of Portainer, and what I have here is my local environment, which is Docker. I run Windows with the Windows Subsystem for Linux, and I have Docker installed on that subsystem. Let's go in: what we've got here is the list of containers I have — simply the containers for the control plane of the kind Kubernetes cluster I'm running. Next, I'm going to deploy an application into my Docker environment. I've already prepared a Docker Compose file which describes the applications I want to deploy, so I'll just specify it in the UI. I don't have to write any kubectl commands or anything — it's simply specifying the repo and where to find and listen for the changes. I'll also set the fetch interval to one minute, so every minute it will pull the changes from the repo. Hitting that deploy button, and it should be deployed in a second, I think. Yep, exactly. Now what we see is a stack. A stack is a logical group of containers deployed in my Docker environment, grouped by the fact that they're in the same Compose file. What we see here is a front end, a Redis cache, and a BusyBox.
What I want you to look at is that the version of Ubuntu here is 20.04. So let's go into my Git repo and change that. As you see it's 20 here, and I just feel like changing it to 18. Now it's 18, and it's committed. Let's wait for the stack to pick it up: we'll go into the containers tab and enable auto-refresh — that should be okay, I think. And eventually we find that Ubuntu was updated to the exact version I specified in the Git repo, which lets me collaborate with my peers on that repo while the changes are replicated into my local Docker environment — and that environment could be literally anything. Now let's briefly jump into the next use case, which is more Kubernetes-related. As I mentioned, I have kind spinning up on my local machine, with the same UI here. I'll deploy another application — actually the same Docker Compose file. I'll pick the Compose format, and Portainer will translate that Docker Compose format into a Kubernetes manifest. We'll set the fetch interval the same as we did before with Docker. This is actually a huge enabler for any migration from Docker to Kubernetes, because you can use the same Docker Compose files and they'll be translated and deployed into the Kubernetes environment. Let's hit that deploy button now and see what happens. It takes a few seconds to spin all of that up — I've just refreshed the page, and it's all happy and running. Notice the version of the image: the same one we changed in Git just a minute ago. Now let's also deploy one from an actual Kubernetes manifest. I have another file in the Git repo, the BusyBox YAML — a Kubernetes manifest for BusyBox. It's all the same process: just specify the path of the resource you want deployed, then specify the interval.
In my case it's one minute, and then hit that deploy button. What happens now is that BusyBox gets deployed — or at least is waiting to be deployed. You can see in the UI that the version of that BusyBox is `stable`. Let's now modify the version of the BusyBox we deployed from Docker Compose, then go back to the BusyBox from the Kubernetes manifest and update the version there as well. What we now expect is that the changes we just made in these two files will be fetched into our Kubernetes cluster and then reflected in the UI. What we have now is BusyBox `stable` and Ubuntu 18.04. Let's enable auto-refresh here, and eventually it will get updated, since we set the update interval to one minute. Let's just give it another second — boom. Now it says our BusyBox got updated to `latest` and our Ubuntu instance got updated to 20.04, which is exactly what we just updated in the Git repo — exactly what we aimed at. The whole idea of the feature is having a reliable workflow enabled for our users. If you're tired of doing all of these things manually and you have to collaborate with your peers somehow, this is a great way to start. After that you can either stay with Portainer and that exact feature, or grow and switch to using Flux or Argo — it doesn't really matter. What actually matters is embracing the correct, more reliable workflow, so we see fewer issues with production clusters. And that's something that really contributes to the DevOps culture. To me it seems that decoupling CI from CD and adding GitOps on top of it all contributes to a healthy DevOps culture.
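The "change the version in the repo" step from the demo is really just editing one line in the Compose file or manifest. A small, hypothetical helper shows the idea of bumping an image tag in place (regex-based; a real tool would parse the YAML properly):

```python
import re


def bump_image_tag(text: str, image: str, new_tag: str) -> str:
    """Rewrite the tag of `image` in a Compose file or manifest given as text.

    Matches lines like `image: ubuntu:20.04` and replaces only the tag part.
    """
    pattern = re.compile(rf"(image:\s*{re.escape(image)}):[\w.\-]+")
    return pattern.sub(rf"\1:{new_tag}", text)


compose = "services:\n  app:\n    image: ubuntu:20.04\n"
print(bump_image_tag(compose, "ubuntu", "18.04"))  # image line becomes ubuntu:18.04
```

Commit the edited file, and the GitOps tool's next poll picks up the new desired state — exactly the Ubuntu 20.04 → 18.04 change from the demo.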
That's the cultural transformation in the organization which embraces collaboration and shared control and responsibility over a particular component or piece of the system. If you have a service in that Kubernetes cluster, then you own that service from the very initial stage through all of the monitoring, feedback sessions, retrospectives, etc. Having that idea of DevOps makes teams very autonomous, and that's why I think they are top performers: there is no throwing things over the fence in their life. They simply take care of everything they've got — whether that's developing, testing, deploying, or monitoring the application, they've got it. And that's actually quite hard in a pure Kubernetes environment. That's why tools like Argo, Flux, or Portainer exist: it's simply too complicated without the correct tooling. It's part of engineering culture to equip your engineers with the right tools, so they don't spend endless hours figuring out Kubernetes but can deliver value with it from day zero, in a way that's reliable and consistent. That's the exact philosophy of what we're trying to do at Portainer — it's simply why we exist. We believe we can make the whole industry much easier for fellow developers: let them self-service that Kubernetes cluster, let them use workflows such as GitOps, and make all of that available without the huge cost of learning Kubernetes for a few years. Personally, I've spent the last five years with Kubernetes and I'm still a newbie in the space — even though I've been resolving really complex issues, sometimes something stumps me and I can do nothing. That's why tools like Portainer exist. It's supposed to be easy, but it's not at the moment, and we're the ones trying to mitigate that complexity, embrace really reliable workflows, and educate our users on the topic.
So let's get back to our problems and try to address them one by one. "What is on prod now?" That's easily addressed by just looking at the Git repo, and that should be the answer in 99% of cases: if something is in the GitOps repo, then it's in Kube. "I can do it manually — I'm an admin." That's also addressed, because this way you enforce a role-based access policy — which Portainer also does — and only the GitOps tool is capable of making changes to your Kubernetes cluster, which is much more reliable. In the case of moving the cluster to another platform or subscription, or the cluster simply blowing up, the recreation process becomes quite simple: you deploy the GitOps tool, repoint it, and say "hey, fetch all of the applications from that repo into my cluster". That's much, much quicker than re-running all of the pipelines, let alone repointing each pipeline at the correct cluster. In the case of rollback, all you need to do is revert the commit — that updated state of the Git repo will be replicated into your Kubernetes cluster — or simply change the image version back. That is the rollback, and it's quite easy. The biggest and most complicated problem I want to mitigate with Portainer is developers just letting ops people do it. I believe we can do the dev and the ops at the same time; it's not that complicated with the right tools. So we can take care of that, remove the siloed environments, and enable that DevOps culture. And here we go: that is the exact magic that happens, and it's quite a long journey into DevOps culture that I'm advising you to take. The patterns we just went over are decoupling CI from CD, GitOps, and DevOps culture.
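Why does "just revert the commit" work as a rollback? Because in GitOps the cluster always converges to the newest commit's desired state, and a revert is a *new* commit that restores the previous content. A toy model (the state strings are made up) makes that explicit:

```python
def desired_state(history: list[str]) -> str:
    """In GitOps, the cluster converges to whatever the newest commit says."""
    return history[-1]


# Two commits: the initial deploy and a version bump.
history = ["image: app:1.0", "image: app:1.1"]
assert desired_state(history) == "image: app:1.1"

# `git revert HEAD` doesn't delete history; it appends a commit
# whose content equals the previous state:
history.append(history[-2])
assert desired_state(history) == "image: app:1.0"  # rolled back, history intact
```

That's the key contrast with pipeline-based rollback: no rebuild, no redeploy job — just one more commit for the GitOps tool to converge on.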
What we actually get out of those patterns is more opportunity to optimize our development process: a collaborative environment with Git history and all the features Git provides us — merging, rebasing, etc. And the end state is delivering at really crazy speed from day zero, which you achieve with DevOps culture. I'll also follow up with some more materials, and if you need the presentation link, just let me know — I'd be happy to provide it, and I think Marisa is sending it out in the chat now. Let me know if you'd like to have the presentation on your side as well. I also want to mention that we are launching a crazy campaign. We have Portainer Community Edition, which is completely free and open source — you can spin it up in a matter of 30 seconds. But we're also starting the Portainer Business Free campaign. Portainer Business is the commercial product we build on the base of Portainer Community Edition, with more enterprise features, more advanced workflows, more advanced role-based policies, etc. We're launching that campaign, and if you want a piece of Portainer Business, just let me know: we're giving away Portainer Business for five nodes of your Kubernetes cluster for free. If you've got more than five, that would be a different story — let me know if you're interested and I'll be happy to help. And if any of this got your interest, feel free to reach out to us on Twitter, Slack, Discord, Reddit, LinkedIn — literally anywhere. We monitor all of these channels and we've got you there; the most active ones, I'd say, are Slack, Discord, and Reddit. And if you have any feedback on the product, the session, the feature, or the approach, just let us know and we'd be happy to chat about that as well. That's it from my side, folks. Let me know if you have any questions.
I'll quickly go over them in the Q&A. Adolfo also wants to answer questions live, so I'll read the questions out loud and let Adolfo answer — he's my wingman on today's presentation. Robert asks: will the Docker Compose to Kubernetes translation create all the needed parts, including Ingress, persistent volume claims, etc.? Adolfo, are you on? Meanwhile, I'll take it over and respond to that question. It's really a simple thing that Portainer does: we use a tool called Kompose, which the Docker and Kubernetes communities recommend for transforming Docker Compose files into Kubernetes manifests. I'm not really sure whether it would create the Ingress with all the required configuration, but I think Adolfo can answer that when he comes back — basically, we rely on that Kompose tool. The next one: "We use Portainer with Swarm as of now, but I want to move to OpenShift Kubernetes. We're very familiar with Portainer and not very familiar with OpenShift. Will Portainer give all the features that OpenShift provides, or not?" As far as I'm concerned, OpenShift is something really similar to Kubernetes, and in that sense Portainer is very agnostic about distributions: if it's Kubernetes-like, then you get all of the Kubernetes features on OpenShift as well. But Adolfo might be a better expert on this, since I recall he did something with OpenShift. The next one is from Rod: how do you manage admin rights on Kubernetes so that only selected individuals have access? That's actually really simple. In Portainer you've got really advanced role-based access policies, so you can configure teams and groups such that only the exact teams, groups, or users have access to particular resources. Imagine you want your development team alpha to be able to access only their own Kubernetes namespace.
You simply enable that access. We have a granular list of permissions a user can have: you could grant operator, read-only, full admin, or developer-level access, and they'd be able to look at the logs, inspect, monitor, etc. "Does Portainer support all of the Kubernetes versions out there, or only MicroK8s?" As you've seen, I was spinning up kind as the local environment, so it supports kind; I use Minikube, Azure Kubernetes Service, the one from Amazon, the one from Google. All of that is supported as long as it's Kubernetes-like — Portainer is very generic in the sense of distributions. Alrighty. Another question, from Tushar: does Portainer support multiple Kubernetes clusters, or only one cluster at a time? At the moment I had, I think, four clusters spun up on my machine — two kind and two Minikube. So you really can manage multiple environments, and not only Kubernetes clusters but also Docker environments and edge environments. Pretty multi-environment, I'd say. Hopefully that answers your question, Tushar. Thank you, Rod. Cool. Would you like to take the storage one, Adolfo? Oh, yeah. Okay. Robert, hi. Portainer is going to work with whatever storage technology you have deployed on your Kubernetes environment, be that OpenEBS, be that Longhorn. It's just going to detect whatever storage you have on your Kubernetes and support it, because Portainer is not an infrastructure tool; Portainer is a container management tool. We don't do, let's say, base installation of Kubernetes clusters — we just manage what is relevant to the containers running on your orchestration environment. Hope that answers your question; let us know, Rob. And I think — yeah, perfect, he says it's great. Next: OpenShift versus Portainer, pros and cons.
Do you want to take that one? Go ahead. Okay. So, again, OpenShift is, in my view, an enabler for Kubernetes clusters, right? Tell me if I'm wrong — my concept might be wrong — but Kubernetes runs on OpenShift, right? So it's a platform that enables you to run, amongst other things, Kubernetes clusters. Portainer is agnostic to whatever platform you're running: Portainer can run on the cloud, on bare metal, on-prem, on a public or private cloud, hybrid cloud. We're a stage above whatever underlying technology you have running your cluster. So I really can't say they're comparable in that perspective, because I cannot deploy a Kubernetes cluster with Portainer — that's not what we do. We will manage the containers running on whatever underlying technology you have for a Kubernetes cluster. And Portainer runs on a Raspberry Pi, or on an OpenShift environment, or on Azure, Google — we've managed to have Portainer run pretty much everywhere you can imagine, even on Alibaba Cloud. So that's one of the questions that's hard to answer, because they're not necessarily the same thing, right? They're not comparable in that perspective. I hope that answers your question. Cool, great. Are there any other questions you'd want us to answer? I'd be happy to take them. Yes, exactly — the recording of the session will be posted shortly; I think Marisa will give us more details on that. Marisa, can I ask you? Yes, absolutely: the recording will be available on the Linux Foundation's YouTube page later today, so you can check back there in a couple of hours. Awesome. I think that's it from our side. Adolfo, do you have anything to add? Only that it was a great webinar — thank you very much. And I know we have clients of ours also watching the webinar; Rodrigo is already a client of ours.
Thank you for being here, Rodrigo, and for watching with your director — I hope you enjoyed the webinar. As you saw, from a dev perspective, this is the potential that we have. Also, contributing to the Linux Foundation is always a great experience and an honor from my perspective, so thank you for that. Perfect. A few other questions here that I'd love to answer. "Does Portainer support any webhooks?" In terms of the GitOps feature I showcased today, we also support webhooks, as an alternative to polling the repo. For monitoring, none that I'm aware of — monitoring is something that will be on our product roadmap, so Portainer will become a tool you could use on its own for doing everything in Kubernetes. Another one: how is Portainer different from GitLab Auto DevOps? I'm not completely into the context of what GitLab Auto DevOps is, so I'd have to look it up. Perhaps you've seen it, Adolfo? No, actually, I haven't — I'm doing a quick search here. Well, I can say that Portainer is not a Git environment. We have the capability to connect to any Git repository and automate the deployment of applications or environments — a stack, if you may — be that on Docker or on Kubernetes. So what we do is ensure there's automation in the deployment of environments, at any stage of your delivery, from development to production. That's what we have in terms of integration. We had the integration already before; what we now have with this new version is the automation of that integration. You can automate it by polling or even by webhooks. And the webhooks basically work like this: if you have a CI/CD process that at some point needs to ensure a deployment is done in your environment, it can call Portainer via the webhook and then carry on with your cycle of testing and deployment.
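The difference between the two triggers described above: with polling, the tool checks the repo every fetch interval; with a webhook, your CI calls an endpoint the moment a deploy should happen. A hypothetical handler sketch — the payload field names here are assumptions, not Portainer's actual webhook contract:

```python
def handle_webhook(payload: dict, redeploy_fn) -> bool:
    """Trigger a redeploy only for pushes to the tracked branch.

    `payload` mimics a generic Git push event; "ref" is an assumed field name.
    Returns True when a redeploy was triggered.
    """
    if payload.get("ref") == "refs/heads/main":
        redeploy_fn()  # in practice: re-fetch the repo and re-apply the stack
        return True
    return False


deploys = []
handle_webhook({"ref": "refs/heads/main"}, lambda: deploys.append("deploy"))
handle_webhook({"ref": "refs/heads/feature-x"}, lambda: deploys.append("deploy"))
# only the push to main triggered a deploy
```

The trade-off is the usual one: webhooks give near-instant deploys but require the CI side to call in; polling needs no inbound integration but waits up to one fetch interval.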
So that's what we do. I don't know if this answers the question, but that's what we do. We're not a Git repository; we're not going to do that for you. We're going to manage the lifecycle of your running containers — not, let's say, the storage side of your code's lifecycle. Good, perfect. So we won't detect your code language, and we're not going to do any code quality checks via our Git integration — we're just going to automate the deployment. So it's more the end of the cycle, if you may. Essentially, of the CI/CD pipeline we've seen — planning, coding, building, testing, releasing — we do the deployment and the operating of the environment. Perfect, exactly. Okay, I think that's it, Marisa — would you be so kind as to wrap up? Yes, okay. Thank you both so much, Dimko and Adolfo, for your time today, and thank you everybody for joining us. As I just said, a quick reminder that the recording will be up on the Linux Foundation's YouTube page later today, so you can watch it again or share it with a friend or whoever you'd like. We hope you'll join us for future webinars — have a wonderful day. Thank you. Thank you very much. Awesome. Happy to be here. Have a great day, folks. Bye-bye. Thank you.