Great, thank you Christina. Hello everyone, this is Steven, platform engineer from Portainer. First of all, I'd like to thank you very much for your time today. It's going to take about 30 minutes to walk you through our recent feature, GitOps with Portainer, and at the end I'll do a demo to show you how cool and easy it is. Before we start, I'd like to mention that this GitOps engine is a very early version, and what we really want to hear is your feedback. So if you have a chance to try it out, please let us know what you think — any suggestions, improvements, anything. It will be extremely helpful for us. One more thing I'd like to mention is that we're not trying to build a product to compete with the big GitOps players like Flux or Argo CD. Our intention, as always, is to allow users who are new to their container journey to utilize our GitOps feature with just the right amount of knowledge, and when they're ready to adopt the advanced features that other providers offer, they can make a smooth transition. That's our goal here. Okay, so let's get started. First things first: what is Portainer? Portainer is a centralized service delivery platform for containerized apps. What this means is that as long as your application is containerized in the right image format and running on a container platform, we will support you. Ultimately, our goal is to minimize the amount of learning required across the whole container lifecycle, including orchestrators like Swarm or Kubernetes, and simply let you use our service delivery platform to get started with your container journey. To expand on that a little further, we are a layer that sits between your underlying container infrastructure and your users, and we support pretty much any infrastructure.
So it could be physical servers, virtual machines in your data center or in the cloud, or even Kubernetes-as-a-service offerings like AKS or EKS. We also support Azure Container Instances, and with our recent release we now have a very first version of support for Nomad by HashiCorp. So yes, we are expanding. We don't only manage container workloads — we can also take care of identity management for you. You can enable LDAP, AD, Azure AD, or other standard OAuth providers you'd like to use, and based on that, we ensure that only authorized users can perform actions within their assigned roles. So we provide strict RBAC capability. In addition to RBAC, you may already be aware that we provide a great UI — a portal that is really, really easy and straightforward to use. But it's not just for people who want to consume our UI; we provide APIs as well, and we maintain a Swagger file for all the APIs we offer. We also act as a proxy to your orchestrators like Kubernetes or Docker, so should you have requirements like that for automation, you'll be able to achieve it. I hope this brief gives you an overview of what Portainer is. Before we dive into our GitOps engine, let's talk about CI/CD — the classic kind. As you see on the screen, CI normally takes care of code build and test. The developer follows the normal day-to-day operations: create a branch, commit code, open a PR, get it reviewed, get it merged, and get automated tests running. After that, CI normally triggers a continuous delivery pipeline, which could be based on a PR being opened or merged — it depends on how each organization does it. Once CD kicks in, it takes the image that has been built and deploys it to your cluster.
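The classic CI flow I just described can be sketched roughly like this — a hypothetical GitHub Actions workflow, purely for illustration (the registry, image name, and `make test` step are placeholders, not something from Portainer or the talk):

```yaml
# Hypothetical CI sketch: test on every PR, then build and push the
# container image when a merge to main happens. In the classic model,
# a push to main would then trigger the CD pipeline.
name: ci
on:
  push:
    branches: [main]
  pull_request:

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: make test
      - name: Build image
        run: docker build -t ghcr.io/example/my-app:${{ github.sha }} .
      - name: Push image (merges to main only)
        if: github.event_name == 'push'
        run: |
          echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
          docker push ghcr.io/example/my-app:${{ github.sha }}
```

The key point for what follows is the coupling: the CD step is *triggered* by CI and pushes into the cluster, rather than the cluster pulling its own desired state.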
And if it's production, there may be some sort of approval process in the middle, or it could all be automated — again, it depends. One thing to note is that the classic CI/CD pipeline is very coupled. What I mean by coupled is that CI normally triggers the CD pipeline. It's a pretty straightforward approach and there's nothing wrong with it — it serves us quite well today. One thing to mention about continuous delivery in the container lifecycle is that you'll have some sort of repo that contains Kubernetes manifests, Helm charts, or Docker stack/compose files, and CD uses those to deploy your artifact for you. So it's serving us well, but there are some problems with the classic CI/CD model. The first is that it's hard to identify which version of code is in production. Frankly speaking, you can go into your Kubernetes cluster and query which image tag my application is running, or simply go to the history in my CD tool. But it's not really easy — it's a very manual job, I'd say. Another challenging part is: what if the core maintainer leaves and a new person joins? Can you really trust that your repo contains the exact versions of the applications you are running? Can you really ensure that no manual edit has ever been made? To expand on that manual edit point: there are cases where you have to fix outages. It could be anything, but one example I can give is, let's say my Kubernetes application keeps bouncing because its memory limit is too low. And let's say that application was deployed with a Helm chart. Because it's an urgent situation, I might just go to my Kubernetes cluster and do a kubectl edit.
But then the problem is that your Helm chart becomes out of sync right away, meaning the repo containing your Helm chart no longer reflects what's in your production or test cluster. It's out of sync straight away. And that leads to the third problem. There will be cases where you have to recreate your Kubernetes clusters. It could be for any reason — in the cloud it could be some sort of cloud outage, and on-prem it could be something like a hardware outage that breaks my etcd cluster. There are cases where you have to bootstrap your Kubernetes cluster again. Today, bootstrapping a Kubernetes cluster is really straightforward: in the cloud you can simply use Terraform, Azure Bicep, or CloudFormation to deploy your cluster, or kubeadm or kops on-prem. But the problem is: what about the Kubernetes resources? If you're running Kubernetes in production, you'll be using other platform-related components like an ingress controller, cert-manager, ExternalDNS, and there might be 50 to 100 apps running on your cluster. If you want to recreate it, can you really trust that your repo contains all your Helm charts or Kubernetes manifests? And when you deploy them, can you really ensure the cluster is back to how it was 10 minutes ago, when it stopped working? Of course, there is the option of a backup and restore tool like Velero, which works like a charm. But what we want is something that holds the state all the time — because you're not going to take a backup every minute. And lastly, rollback is pretty painful because, as I mentioned, it's a very sequential process: you have to go through the whole lifecycle again. Frankly speaking, you can just go to the CD tool and deploy manually, pointing at a previous CI job, but then the history gets mixed up.
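The drift scenario I'm describing looks something like this in practice — a hypothetical session (the namespace and deployment name are made up, and the exact field you edit will vary):

```shell
# Hypothetical example of the out-of-sync scenario: the app was deployed
# from a Helm chart, but during an outage you patch the live object directly.
kubectl -n production edit deployment my-app   # e.g. bump resources.limits.memory

# The cluster now runs the fix, but the chart in Git still has the old
# limit -- the repo and the cluster are out of sync. Finding out what is
# actually running means querying the cluster by hand:
kubectl -n production get deployment my-app \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```

Nothing in the classic model forces that manual edit back into the repo, which is exactly the gap GitOps closes.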
In my opinion — and I'm pretty sure everyone thinks the same — rollback should be immediate. What I just talked about is all about state. You can get it right with the classic CI/CD approach: make sure that when a change happens your repo is updated, make sure all deployment is done through the CD pipeline, and so on. However, it requires a lot of effort. You have to implement a lot of strict rules and processes, train people, and so on. Ultimately, what we really want is something that does not require a lot of our energy to deliver our code. What we really want is something easy and, especially, developer-friendly. A CD tool is not really developer-friendly: you have to learn the tool and learn a new process for using it. This is where GitOps comes to the rescue. The idea of GitOps is that we keep all our desired state in a repo. If you make any changes on your local machine, your GitOps engine will bring it back to what's in your Git repo — this is what we call reconciliation. Just a brief overview of GitOps, then. I'm pretty sure many of you are already doing this: your Kubernetes, Docker stack, or other manifests that deploy your container workloads live in a Git repo. But that alone is not a single source of truth, because you can always make a manual edit on your Kubernetes or Docker Swarm cluster. With GitOps, that's prevented — even if you change it locally, it will be brought back. That means your Git repo is now a single source of truth, and it acts as a centralized management layer. It acts as a buffer for you: developers are no longer required to access the Kubernetes cluster to make changes. They can simply go to the Git repo and follow a developer-friendly process — open a PR, get it reviewed, get it merged, and your deployment is done.
And rollback will be easy, because Git history will tell you exactly what has happened. Again, it's very developer-friendly. Lastly, because your Git repo is the single source of truth, recoverability will be really rapid: all you need to do is deploy a GitOps engine pointed at the repo you were using, and boom — your cluster is back online in its previous state. And it can be trusted. To expand on how GitOps-based CI/CD looks: CI is pretty much the same — identical to what you're doing today. A developer opens a pull request, gets it reviewed and merged, and a container image is built and pushed to an image registry. But the main thing is that the GitOps engine runs inside your Kubernetes cluster, which means it is pull-based, not push-based anymore. With traditional CD, you have some sort of agent running in your data center or cloud, or you might even be opening up your Kubernetes API server with IP restrictions. Rather than opening up your API server, the GitOps engine reaches out to GitHub — it's outbound-based, so it's more secure. That's the main difference. The other difference is that developers don't necessarily have to learn a new CD tool anymore. They follow pretty much the same developer operations: open a PR, get it reviewed, get it merged, and your code is deployed to your Kubernetes cluster. As you see, GitOps is all about making it easy for developers to deliver their code. I've made it sound really easy, but not really. The reason is that GitOps was initially built for Kubernetes, and as soon as Kubernetes is involved, it gets complicated. Learning Kubernetes is not that easy. Once you're into it you'll be fine, but for someone who is about to start their container journey, it's not — and that includes GitOps.
So we believe GitOps should be easy. We think it has to be easy to use, and if it's easy to use, it's not going to provide 100% of the capabilities. What we think is that if the functionality covers 80% of use cases, it should just work for you. We also think developers should be able to self-configure their pipelines without the need for dedicated DevOps people. We also think GitOps shouldn't be just for Kubernetes — it should support Docker, Docker Swarm, and any other container orchestrators we can think of. And we think the GitOps engine should run inside Portainer itself — not in your cluster as a separate component — so that maintenance becomes easy. You don't have to worry about upgrading a separate GitOps tool; you simply update Portainer itself and that takes care of all the feature enhancements and bug fixes. And again, Flux and Argo CD are awesome tools. I have used Flux in production many, many times and it works like a charm, but you have to be ready for the depth of capability. Lastly, Portainer will get you started with near-zero operational overhead. So how does it look different with Portainer? The only difference is down here: rather than GitOps for Kubernetes only, we support Kubernetes, Docker Swarm, and Docker standalone — it will even run on your developer workstation, so you can take a look at how the GitOps engine works, get a feel for how to use it, and then move into production. Okay, so how does it look? It looks like this — really familiar, right, if you're a Portainer user. The only difference with the GitOps engine is the automatic updates section down here; you can enable the GitOps capability simply by clicking on that. Expanding a little further: for Kubernetes deployments, you select a namespace where the application will be deployed.
And of course the build method will be Git — that's the GitOps-based feature. Then the repository URL and reference, where the reference is either a branch name or a tag name. The manifest path points to a manifest: it could be a Kubernetes deployment YAML, or a YAML that contains multiple Kubernetes objects like a Deployment, Service, Secret, and so on. The additional paths option means that if you have more than one manifest, you can simply click on that and specify more, which I'll show you in the demo at the end of the slides. Then authentication: if your repo is private, you'll need to authenticate, and today we only support personal access tokens. Automatic updates — this is the GitOps feature. You can either set polling to fetch the repo every five minutes (five minutes is the default, but you can always customize it), or use a webhook, which is a manual remote trigger. You can call it with a simple curl -X POST, and your application will be updated — this is more of a traditional CD approach. Lastly, force redeployment. This is what we call continuous reconciliation: if anyone changes your cluster locally and manually, it will be brought back to what the repo says. One thing to mention about automatic updates is that, as I said, the interval is customizable, which means that if one application uses a three-minute interval and another uses five minutes, they each have their own poll. It's not one five-minute poll updating all the apps — each has its own poll, which is really great. So what makes our GitOps different? It runs in Portainer, not in each cluster, which means you're spared a lot of headaches and there is no cluster overhead. Another cool thing is the authentication — sorry, the GitOps pipeline runs in the user context.
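The webhook trigger I mentioned is just a single HTTP POST. The URL below is purely a placeholder — Portainer displays the real, unique webhook URL in the UI when you choose the webhook option instead of a polling interval:

```shell
# Hypothetical webhook URL -- copy the real one from the Portainer UI.
# A plain POST with no body tells the GitOps engine to re-pull the repo
# and redeploy the stack immediately, instead of waiting for the next poll.
curl -X POST "https://portainer.example.com/api/stacks/webhooks/3e7a1c2d"
```

This is handy for wiring into an existing CI job: the pipeline can fire the webhook right after pushing a new commit, giving you push-style latency on top of the pull-based model.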
So whoever creates the pipeline, it belongs to him or her — not to a GitOps service account. What that means is it minimizes security risk and allows easier auditing as well. And developers can create their own GitOps pipelines — they don't need extensive knowledge or special permissions to use it. Just going back here: it's really simple to use. All you need to know is the repo URL, the reference, the manifest path, whether to authenticate or not, and whether to enable automatic updates. That's it. You don't need extensive knowledge to enable your GitOps pipeline. Okay, so what works well today? Obviously, we can deploy an application from a single Git repo based on one or more specified manifests. There's a Git reconciliation loop based on a five-minute interval — which, again, can be customized — or a webhook trigger. We also enforce redeployment so that only Git holds the state, making it the single source of truth. We also support multi-service manifests, whether Kubernetes or Docker stack files. And lastly, Git authentication using personal access tokens. Okay, as I mentioned, this is a very early version of our GitOps engine, and this is how it will look in the future. We will bring in support for Helm charts and Kustomize — I'm pretty sure that's really, really required. We will also bring in SSH key-based authentication, because you will want to use SSH keys from time to time. Thirdly, deploying an app that comes from multiple Git repos. This is a requirement where you might have one repo for the front end and a separate repo for the back end, and they need to stay in sync — we will support that scenario as well. And the fourth one: today, if there is a YAML defect — the indentation is wrong, or something else is wrong with the YAML — it is silently ignored. We will fix that so that YAML defects alert the operator.
Lastly, we will allow you to specify a whole repo rather than selecting one or more files. So instead of typing file paths multiple times — which I'll show you in a minute — we will just let you specify a repo. Okay, so that's the end of the slides. Let me quickly open up the Q&A before I go into the demo. One question is: is Portainer the same as Kubernetes or OpenShift? It's similar, but not the same — we are a layer. I'd call it middleware: a service that runs on top of your container orchestrator. So it's not really an orchestrator itself, it's not really Kubernetes — we leverage Kubernetes and Swarm through their APIs, and what we're trying to do is hide the complexity for you. I hope that answers your question. The next one: can a GitOps pipeline be shared with a group of people? Yes, it can. If time permits, I will show you how RBAC works in Portainer. I'll show you exactly how you can assign a specific team to a specific namespace, and how an application deployed by one developer can be accessed by someone else. I'll show you in the demo. Okay, the next question: databases, whether relational or non-relational, are usually hosted externally from the web app front end and middleware applications — how does your GitOps approach resolve the issues of rollbacks, security, and so on? Okay — sorry, I'm just reading the question again. So what I'm seeing is that the databases are external to Kubernetes, is that what you're saying? If that's the case, then GitOps will be able to roll the application back to its previous state. Let's say your image tag was upgraded from version one to version two, or 1.1 — then when the commit or merge request happens, GitOps will roll back to the previous version.
But if the database itself was upgraded to a new version, GitOps won't be able to help you with that — the database schema, I would say, will have to be reverted to the previous version separately. The rollback I mean is for the application itself. I hope that answers your question. Okay, that's the end of the questions, so let me quickly go through what I will show you today. I will show you GitOps deploying to Docker Swarm using a simple compose file, and then doing the same with environment variables. And then GitOps deploying to Kubernetes using a single manifest and then multiple manifests. If time permits, I'll also show you how RBAC works in Portainer. Before that, I'll answer one more: how can you match the desired state to the actual state? How do you connect GitOps with monitoring? Okay, if I understood correctly, are you asking what happens if something breaks in the middle — say the GitOps pipeline is deleted by mistake? If so, I'll have to come back to you on that. But on matching the desired state to the actual state: the way our GitOps engine works is that it looks at the commit ID in your Git repo, and it specifically looks for the file changes — which I'll show you in a minute. That's how it matches against the actual state. It's all about the Git commit ID, which is unique. Okay, so let's jump into the demo. I now have two local environments running: one is Docker Swarm and one is Docker — sorry, Kubernetes. And I have a bunch of YAMLs here that I'll use today. I've got an nginx.yml here — a very simple Docker compose file. As you see, it's just using nginx:latest with one replica. So let's try that. What I'm going to do is go to Stacks, Add stack, select Repository, and put in the URL. I'm also showing you the authentication process here.
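The exact compose file from the demo isn't shown, but it's described as nginx:latest with a single replica, so it would look roughly like this (service name is a guess):

```yaml
# Approximate shape of the first demo's Docker compose / stack file:
# nginx:latest, one replica. Bumping "replicas" in Git is the change
# the reconciliation loop picks up in the demo.
version: "3.8"
services:
  web:
    image: nginx:latest
    deploy:
      replicas: 1
```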
I'll put in my path, enable automatic updates — this time I'll just make it one minute so we don't have to wait too long — and deploy. Okay, it's done. Now if we go to the services view, we'll see the container coming up — it must be pulling the image down. It shouldn't take too long... and as you see, we've now got one nginx container running. Now what I'll do is quickly update this file. You would normally not commit to the main branch directly, but this is just for demonstration purposes — commit changes. What we expect is that in a minute, when the reconciliation kicks off, it's going to increase the number of containers to two. While we wait for that, let's move on to the next one. What I'll do now is use a different manifest. The main difference in this compose file is that it's really similar to what I've just shown you, but it expects environment variables in the Docker stack itself: VERSION and PORT, where VERSION is the image tag and PORT is the port I want to expose this application on at the Docker host layer. Again I'll set the interval to one minute, and this time I want to inject two environment variables: VERSION, where I'll just use latest for now, and PORT, 8888 — nice and easy — and deploy. I just realized it wasn't in full screen, sorry about that. Okay, coming back to the services view: for the first demo we went through, you'll now see two containers running — the reconciliation has kicked in. And for our second demo, you'll see it's using the latest tag and port 8888. Now I want to update that environment variable, so let me quickly change it to 9999 and save settings. After a minute, it's going to republish my Docker compose service with port 9999.
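The second compose file is described as parameterized by two environment variables, VERSION and PORT, injected through Portainer's UI when the stack is created. A sketch of that shape (again, the service name is illustrative):

```yaml
# Approximate shape of the second demo's compose file: the image tag
# and the published port come from environment variables supplied by
# Portainer at deploy time (VERSION=latest, PORT=8888 in the demo,
# later changed to 9999).
version: "3.8"
services:
  web:
    image: "nginx:${VERSION}"
    ports:
      - "${PORT}:80"
    deploy:
      replicas: 1
```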
And you see the "pull and redeploy" button here — you can simply click this if you can't wait for a minute, and it will do the same thing for you. So after a minute, when the GitOps engine kicks in, it will change to 9999. Okay, it'll kick in at some point, so while we wait for that, I'll move on to Kubernetes. For Kubernetes, I obviously need to create a namespace first — I'll just call it demo. I won't assign a resource quota for now, but in real-life cases you should, so that you can protect against one pod using too much of your cluster's compute. Then, going down to Applications, Create from manifest. It's really the same thing — I'll go through the same process, but the difference is that now it's Kubernetes, and it's a very simple Kubernetes manifest with a Deployment and an associated Service. Name: gitops-demo-three, then the Git repo — pretty much the same process — automatic updates again at one minute, and this time I'll enable force redeployment. What we should expect now is a single nginx pod up and running. However, I can see one problem here: Published is set to No. What we mean by published is that this deployment is exposed at the Kubernetes host layer so that you can access this nginx service. If we quickly go back to our YAML — ah, the selector is set wrong. As you see, the labels in the deployment's matchLabels don't match the selector at the service level. That's why Portainer is telling you Published: No. So I'll quickly fix it, and at the same time let's make it two pods — commit. Okay, same as before, after a minute it's going to kick off. While we wait for that, this time I'm going to deploy multiple manifests.
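The single-manifest demo isn't reproduced verbatim, but it's described as a Deployment plus a Service in one file, and the fix is making the Service selector match the pod labels. A sketch of that, with placeholder names and both sides using the same `app: gitops-demo` label:

```yaml
# Sketch of the single-manifest demo. The mismatch between the pod
# labels and the Service selector is what made Portainer report
# Published: No before the fix.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitops-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gitops-demo
  template:
    metadata:
      labels:
        app: gitops-demo        # must match the Service selector below
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: gitops-demo
spec:
  type: NodePort
  selector:
    app: gitops-demo            # if this differs from the pod labels,
                                # no endpoints are selected
  ports:
    - port: 80
      targetPort: 80
```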
I've got a deployment and service, pretty much identical to that single YAML, and now I also have a ConfigMap, an Ingress, and a Secret. The deployment is very simple again, but the main difference is that now I want to map all the config and secret values as environment variables in my pod, which I'll show you. And of course the service is a NodePort — I'll explain why. Then a ConfigMap — config01, config02, hello world 01, hello world 02 — pretty straightforward, and a Secret. Now, you would normally not want to store secrets in your repo, even if it's private. The reason is that this is not encrypted — it's just base64-encoded, so I can simply decode it and see the value. So please don't do this; it's just for demonstration purposes. What you can do instead is use a third-party secret provider integration. I have used one that integrates with Azure Key Vault, and it works like a charm — the manifest doesn't contain the actual values, it just references your secret in your key vault. And you can obviously use HashiCorp Vault and other great tools out there. Lastly, I'll also show you the Ingress file, with ingressClassName nginx — I pre-deployed an ingress controller, so we're good to go. Just coming back to this: you now see the number of pods has increased from one to two, and Portainer is now telling you Published: Yes, along with the exact port mapping based on how our service describes it. Okay, now I'll go through pretty much the same process for the multi-manifest one. Same thing, but the difference is that now I'll specify multiple paths: the deployment first, and then the others I've shown you — the ConfigMap, Secret, and Ingress. Authentication details, automatic updates at one minute, and I'm going to enable force redeployment.
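The extra pieces of the multi-manifest demo would look roughly like this. Names, hosts, and values here are illustrative, not the exact ones from the demo, and the envFrom snippet in the comment shows one common way (an assumption on my part) to surface ConfigMap and Secret entries as pod environment variables:

```yaml
# In the Deployment's container spec, the values below can be mapped to
# environment variables with:
#   envFrom:
#     - configMapRef: { name: demo-config }
#     - secretRef:    { name: demo-secret }
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  CONFIG01: "hello world 01"
  CONFIG02: "hello world 02"
---
apiVersion: v1
kind: Secret
metadata:
  name: demo-secret
type: Opaque
stringData:
  API_KEY: "do-not-store-real-secrets-in-git"  # only base64-encoded, not encrypted
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
spec:
  ingressClassName: nginx        # requires a pre-deployed ingress controller
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gitops-demo
                port:
                  number: 80
```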
Okay, so what we should expect is — boom — one nginx pod running. And do you see the difference between demo three and demo four? I can expand this one, and the reason is that it picks up the Ingress rule for you. As you see, there's a URL here that I can simply go to, and I can reach my nginx container — sorry, nginx service. I'm not doing anything special — I'm just using the hosts file — but it shows you the Ingress capability. Next, what I'll quickly show you: in the demo namespace, let me delete these two services. Watch — you see the GitOps engine just kicked in. Let me quickly go back here. What I should see is Published: No for both, but as you see, as soon as I deleted my single-manifest demo service, the GitOps engine kicked in. That timing was just a coincidence, but the point is: as soon as you make any manual changes to your Kubernetes cluster locally, on every poll it is going to bring it back to the previous state that Git describes. And I'm pretty sure it's going to come back... yep, there you go. As you see, the demo service has come back, which means that as soon as I refresh the page — yes, now it has picked up the Ingress as well, not just the service. Okay, we've got three minutes left, so I'll just quickly show you how our RBAC works in two minutes. I have two users here, Dev and QA. I'm going to create two namespaces, dev and qa. Under dev, I'm going to grant the Dev user access to the dev namespace, and the QA user access to the qa namespace. Now I'm going to open up a private Firefox window. Okay, so as the Dev user, as soon as I log in and go to my endpoint here, you'll see that I only see the default and dev namespaces.
Now I'm going to quickly do the same thing I've just done, and you see I can only see these two namespaces — and this default namespace can be disabled as a setting, by the way. So, as my Dev user, let me do this quickly — I'll just use the same manifest and deploy. Oh — that name has already been used, so let me quickly clean this up. What I'll do is just delete this for now; it shouldn't take too long. You would normally never want to delete a namespace like this — I'm just doing it quickly to show you how it looks. Okay, it's gone, so now I can deploy. Okay, as you see, it's already running. Now I'm going to log out and log in as the QA user. Obviously the QA user only has access to the qa namespace, so they won't be able to see the application that was deployed by the Dev user. Hopefully this gives you an idea of how our RBAC works with our new GitOps feature. Okay, that's all I prepared for today — and yeah, I made it, with about 15 minutes to go. So Martin, I will get back to you on your question — I'll see if I can get your contact details via Christina. Any other questions? Anything else you'd like to ask? Oh, great, thank you — I just got a message saying it was a great presentation, thank you so much. Before we wrap up, just 30 seconds on a very cool feature we are working on. What I've just shown you is creating apps from a manifest, but you can also use our UI: fill in whatever you need — image, tag, and so on — and what we're bringing in is a little button here, Enable GitOps. What it will do is generate the Kubernetes YAML files for you, push them to your GitHub repo, and then enable the GitOps engine.
From that point on, you don't have to worry about writing your own Kubernetes manifests — we will do that for you and then enable GitOps. That's something very cool that I wanted to share. Okay, I believe this is the last question I should answer: are some of the features paid, or open source? I see it says Business Edition. Okay, so yes — GitOps itself is free and open source. But force redeployment — the feature that prevents you from changing manifests manually on your cluster, which you saw when I deleted our service and it was brought back to the previous state the Git repo describes, what we call reconciliation — is a Business Edition feature. Other than that, everything else is free and open source. Okay, I believe that's it. Feel free to reach out to me if you have any other questions — I'm more than happy to answer. Cool, I'll hand it back over to Christina. Great, thank you so much Steven for your time today, and thank you to all the participants who joined us. As a reminder, this recording will be on the Linux Foundation YouTube page later today. We hope you're able to join us for future webinars. Have a wonderful day. Thank you.