Hey everyone, thank you so much for joining the webinar today. Today we're going to cover the Kubea platform and what it can do to empower developers and operators. My name is Shakira Skyo, I'm the co-founder and CTO at Kubea.ai. Let's cover the agenda for today. First, we'll talk about how Kubea came to be, my personal journey. Then we'll do a quick demo of the Kubea platform: how it can empower developers with conversational AI and cover different use cases around that topic. Then we'll talk a little bit about the generative AI capabilities of the Kubea platform and how it can help there as well. Then we'll talk about workflow management, the Kubea no-code editor, local runners, and the Kubea software development kit. And last but not least, we'll discuss security, because it's something we should always discuss. So let's get right to it. I'll start off with my personal journey, and I bet it sounds familiar to you. I used to be a DevOps engineer for a few years, and at later stages I managed DevOps and SRE teams, so I got to see all of the problems and obstacles of managing DevOps pipelines in both small and large organizations. My team and I experienced a non-stop stream of requests coming from developers; instead of focusing on innovation, managing infrastructure, or making changes that would make our infrastructure more robust, we found ourselves struggling on Slack non-stop. Essentially, that's what led me to create Kubea. I wanted to solve this pain for a lot of organizations and find a way to truly self-serve the DevOps functions that exist in organizations, so the DevOps team could focus on what really matters. So let's jump right into the demo of the Kubea platform and how it can help with this. I'll start off by sharing the documentation that Kubea has and explain why I'm doing that.
As you can see, we have the documentation site with a lot of topics. Most organizations already have Confluence, Notion, and other types of knowledge bases, and it all starts from there. A lot of self-service procedures are documented in such knowledge bases, and it's very hard to reach them because of the wide range of information available on these topics. So that's the first use case you could use Kubea to solve for you. Let's start by asking a simple question from the Kubea documentation. I'll ask: how can I upload a workflow to Kubea using the CLI? As you can see, the Kubea virtual assistant lives in Slack. It could also be installed on other platforms such as Mattermost and Teams. And as you can see, it managed to give me an accurate answer to my question straight from the documentation. So here I can see that I could run the Kubea workflow upload command, which is documented on the site. The platform is also able to add reference links, so if I want to drill down into more detail and understand the source this knowledge came from, I can jump to the linked page right away. So it's very easy to use. Let's try another question, just to demonstrate. Let's go and ask: how can I create a new integration in Kubea? Here I can see how I can do that, straight from the documentation. The reason this is very powerful is that Kubea is able to integrate with the different knowledge bases an organization might have. This might include Confluence, Notion, even PDFs, Google Drive, anything that is a source of information. It basically empowers developers by giving them a platform where they can just ask, using the powerful language model behind the scenes to give accurate answers to the questions users might have. So everything is good with Q&A, but what if I really want the platform to do something for me? That's what we call a workflow.
So I'll pick an example of a very, very common use case, which is a rolling deployment in Kubernetes. Let's go ahead and ask for it: can you roll out a K8s deployment for me? So what happens when the knowledge source is not documentation but rather a workflow that the platform can execute? In such cases, besides the traditional answer, which could come from the knowledge base, the platform is also able to serve the relevant workflow. As you can see here, it found the Kubernetes deployment rollout workflow, which is capable of rolling out a deployment in Kubernetes. I'll explain workflows and how they work as we go. So let's go ahead and just click on Go, which will execute this workflow for me. Kubea workflows are really easy to use. They resemble traditional workflow automation tools, but the special thing about Kubea workflows is that they are conversational. They are very easy to consume, and they can involve different types of actions across the technology stack the organization might have. So in the case of rolling out a deployment in Kubernetes, I need to answer some questions. Even in the manual flow, when a developer asks a DevOps engineer, "hey, I want to roll out my deployment in Kubernetes", the DevOps engineer will usually say, "okay, what is the namespace, which environment", and so on. You can actually build these questions and embed them in a Kubea workflow, so you automate the process of context gathering in a human manner. So it's very easy to use. Let's go ahead and answer these questions. As you can see, I can see all of the namespaces that I have in my Kubernetes cluster. Let's go ahead and pick argocd. As soon as the platform has the namespace context, the second question is: okay, I got a namespace, what is the deployment that you want to roll out inside that namespace? So here I can see all of the deployments inside the argocd namespace. Let's go ahead and pick one of those.
By the way, you have search: if there are many results, the system is able to narrow the search results, so it's easier to look up the specific result I want. So let's go ahead and search for "server". Here are all of the deployments that match the "server" pattern. Let's go ahead and pick argocd-server. The system will confirm with me that that's really what I want to do: roll out argocd-server in the argocd namespace, if I want to continue. I can jump back to the previous step if I change my mind at any time. So it's really easy to use. Going back when you change your mind usually isn't possible in traditional workflow automation tools, because once the workflow is running, you cannot really get feedback from the user on the other side to roll back or decide something else. So let's go ahead and say yes. And that's it, I just rolled out argocd-server in the argocd namespace in less than 10 seconds. But what if I already have the context and I know the namespace and the deployment in advance? I wouldn't want to go over these questions all over again. So what I can do is say: roll out a Kubernetes deployment for me, the namespace is argocd and the deployment is argocd-server. Again, the system will suggest the relevant workflow, and I'll go ahead and click on it. And now what happens? The Kubea platform analyzes my query against the resources and the previous workflow runs related to rolling out a deployment in Kubernetes, and suggests the best parameters to fit into the workflow. So I don't need to go over the questions again. I could also correct the platform in real time, and as you can see, the options are ordered by priority, from the most likely match down to the least. It got it exactly right, so I'll go ahead and click on Go. As you can see, it skips the questions and goes straight to the validation; I type in yes.
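Under the hood, a rollout restart like the one this workflow performs is just a patch on the Deployment object: `kubectl rollout restart` bumps a `kubectl.kubernetes.io/restartedAt` annotation on the pod template, which changes the template and makes Kubernetes roll the pods. The webinar doesn't show how Kubea's action implements this, but a minimal sketch of building that patch body looks like:

```python
from datetime import datetime, timezone

def rollout_restart_patch() -> dict:
    """Build the strategic-merge patch that `kubectl rollout restart` applies.

    Bumping the `kubectl.kubernetes.io/restartedAt` annotation on the pod
    template changes the template hash, so Kubernetes rolls the pods.
    """
    now = datetime.now(timezone.utc).isoformat()
    return {
        "spec": {
            "template": {
                "metadata": {
                    "annotations": {"kubectl.kubernetes.io/restartedAt": now}
                }
            }
        }
    }
```

With the official `kubernetes` Python client, you would pass this body to `AppsV1Api.patch_namespaced_deployment(name="argocd-server", namespace="argocd", body=rollout_restart_patch())`.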
And amazing, I just rolled out the Kubernetes deployment once again, this time without answering the questions. So that's how Kubea workflows are embedded into Slack and how you can consume them really easily. But the thing I bet you want to understand is how those are created and managed. So here is the Kubea platform, the web interface, where you can see the different workflows that are available in the organization. Let's go ahead and find the workflow that just rolled out the Kubernetes deployment, which is this one. So here is the workflow. We'll go over the different tools available in the no-code interface in just a bit, but essentially you can see a graphical representation of the workflow that we just saw. Conversation is the trigger, which is the type of workflow you just saw. The first action is getting the namespaces from Kubernetes and presenting buttons for selecting the relevant namespace. As soon as we've got the relevant namespace as a result of the user's button click, we show the deployments in the namespace, select a deployment, and eventually restart the deployment. Before showing you the actual logic and how you can use the no-code interface to create all kinds of workflows for a wide range of use cases, I'll show you the generative AI capabilities, which is one of the ways you could use to create a workflow. Let's pick something that I don't have yet in my environment. As you can see, you can use plain English prompts to create workflows based on the actions available in the technologies already integrated into Kubea. In my case, I have integrated AWS. So I'll go ahead and say: create a workflow which triggers a Lambda function on AWS and returns its outputs. Let's see how the whole workflow gets created using generative AI. Let's try it out. I just click on generate, and it takes a few seconds.
Kubea will analyze the prompt that I just gave, match it against my technology stack to understand which actions it will really use, and generate a workflow. So as you can see, the workflow is generated. Let's go over it really quickly, and then I'll show you the different controls you could use. So here is a workflow that gets all of the Lambda functions available on AWS, extracts their names, and shows them as buttons. When the user clicks a button, which represents a Lambda function, it triggers it and shares the response with the user as a JSON object. Before showing the controls, I'll publish it. Let's give it a name, "Trigger a Lambda function on AWS", and publish it. The second you publish a workflow, it is already available to be consumed by your end users, even in plain English. Behind the scenes we have a model that trains on and understands the inner aspects of what the workflow does, and it gives users the ability to just ask for it. So let's go ahead and say: hey, can you trigger a Lambda function for me? So here are the associated workflows that can answer this query, including the workflow that we just created together. Let's go ahead and click on Go. As you can see, it shows me all of the Lambda functions that I have in my AWS accounts. I can go ahead and pick the test.js one. It invokes the Lambda and shows me the output. I didn't have to work hard on this workflow; the generative AI capabilities helped me create it really easily. So let's go ahead and understand how I can fine-tune the workflow if the generative AI didn't match my use case. In most cases, 70% to 80% of the actual intention of what you want the workflow to do is covered by the AI, but sometimes you want to cover more or adjust it to your use case. So as you can see, the workflow builder has actions and steps.
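To make the generated workflow concrete: its first two steps list the Lambda functions and extract their names to render as buttons. Assuming the listing step returns a response shaped like boto3's `list_functions()` output, that extraction step could be sketched as:

```python
def extract_function_names(list_functions_response: dict) -> list:
    """Pull the function names out of a `lambda:ListFunctions`-shaped
    response so they can be rendered as chat buttons."""
    functions = list_functions_response.get("Functions", [])
    return [fn["FunctionName"] for fn in functions]
```

With boto3, `boto3.client("lambda").list_functions()` produces this shape, and the name the user picks would then go to `client.invoke(FunctionName=name)` in the trigger step.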
Actions are integrated actions across your tech stack. In this environment, I have many actions relating to different kinds of technologies; you can see Jira, Kubernetes, GitHub. And you can use the Kubea SDK, which I'll elaborate on in a little bit, to create more and more actions and integrate more solutions across your tech stack. So you can create a really amazing pipeline integrating different tools to serve a specific use case. And in the workflow editor, you can choose the different triggers for the workflow. In the example today, I showed you a conversational workflow, but it doesn't have to be only conversational. You can use a schedule-based workflow to run workflows periodically. Let's say each day or each hour you want to go to some system, grab some information, manipulate it, and maybe show it to the engineering team in some Slack channel where the Kubea application is invited. And you can also use a webhook-based workflow, which is really powerful if you want to integrate Kubea with third-party systems. So you could trigger the workflow based on a webhook that, let's say, comes from a monitoring system based on an alert, and route it to your team via a dedicated Slack channel where the workflow could continue to show actionable buttons, insights, and so on. So that's what you can do in terms of types of workflows. And as you can see, the workflow is built from steps. Each step does something for the user on the other side. There are many types of steps. I won't be able to cover them all, but I'll focus on the ones relevant for today's presentation.
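To illustrate the webhook trigger just described, here is a minimal sketch of routing an incoming monitoring alert to a workflow. The payload field name (`alertname`) and the workflow names are assumptions for illustration, not Kubea's actual schema:

```python
import json

# Hypothetical mapping from alert name to workflow; Kubea's real
# routing configuration is not shown in the webinar.
ALERT_TO_WORKFLOW = {
    "HighMemoryUsage": "restart-deployment",
    "CertificateExpiring": "rotate-certificate",
}

def route_webhook(body):
    """Pick the workflow to trigger for an incoming monitoring webhook,
    or return None if no workflow is mapped to the alert."""
    payload = json.loads(body)
    return ALERT_TO_WORKFLOW.get(payload.get("alertname"))
```

A real setup would sit behind an HTTP endpoint; the point is simply that the webhook payload, rather than a chat message, supplies the workflow's trigger and context.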
So in order to get the Lambda functions from AWS, I'm using the core AWS integration, which is available as a public action, but I could also run it on my own infrastructure using a concept we call a local runner, which I'll present later today if we get a chance. The second you edit the action, you can see its properties. Each step has a name; this particular action step is called Get Lambda Functions, and it's responsible for getting the Lambda functions. And there are many types of parameters I could pass to it. I could decide whether to run it in an async way, which is useful for anything that takes more than a few seconds; the virtual assistant is able to run it in the background and get back to the user as soon as the action is done. I could even ask to run it later, and the assistant will ask the user: hey, when do you want to run it? Do you want to do it in a few hours? Do you want to do it tomorrow? So everything is pretty straightforward. And I can play the action here in the web UI as I'm building the workflow, to verify it really serves the use case I intended. So here is the response from AWS; I can see the full response here in the web UI, and I can save it. And I can use the steps panel that you're seeing here, which offers a wide range of integrated steps for defining a conversation. So I could do things like grabbing input from the user, with many types of inputs: displaying a question, asking a freestyle question and using the user's text to continue the workflow, a yes/no button for confirmations, an array of buttons based on the result of a previous step, modals, all kinds of use cases. And of course outputting things: sharing in a channel, notifying someone, sending a JSON object, and so on.
And of course utilities, which include conditional logic, jumping between steps, parsing JSON objects, and even entering sub-workflows, which are nested workflows meant for code reuse. As you edit the workflow, you can add another step by just typing it here, say "print lambda", maybe, and the system is able to autocomplete based on the context of the previously executed steps in the workflow. So you can verify, as you build the workflow, that you're creating what you intended, without publishing it and testing it in Slack or Teams over and over again. You also don't need to understand JSON or its inner object structure, since the platform is able to extract the relevant parameters from nested JSON objects and give you the ability to just select what you want. So it's very easy to use. I can also create a test run of the workflow here in the web UI, end to end, before doing the actual publish, to make sure that everything works as I expected. So let's click on test run and see the whole workflow we saw in Slack together in action. It's getting the Lambda functions from AWS, parsing them and extracting only the function names, and displaying buttons, which are essentially the Lambda functions I have in my AWS account, so I can pick the relevant Lambda function. It triggers it and confirms that everything works as I expected. So it's very easy to use. That's about that, and I'll move forward to another way of creating and defining such a workflow, which is the Kubea domain-specific language. Every workflow is based on a DSL, a domain-specific language, behind the scenes. I could actually download it to show you what it looks like. I'll get to the DSL in a moment; I'm having an issue with my screen, so I'll present it from GitHub before the end of the presentation today.
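The nested-JSON parameter extraction mentioned above, where the editor offers fields of a previous step's output so you can just select one, can be sketched as flattening the object into dotted paths. This is an illustration of the idea, not Kubea's implementation:

```python
def flatten_paths(obj, prefix=""):
    """Flatten a nested JSON object into dotted leaf paths, the way a
    workflow editor could offer fields of a previous step's output."""
    paths = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            paths.update(flatten_paths(value, f"{prefix}{key}."))
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            paths.update(flatten_paths(value, f"{prefix}{i}."))
    else:
        # Leaf value: strip the trailing dot from the accumulated prefix.
        paths[prefix[:-1]] = obj
    return paths
```

For the Lambda example, flattening the listing response would surface paths like `Functions.0.FunctionName` as selectable parameters for the next step.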
Moving forward to how you can extend the Kubea platform with more and more actions, and execute these actions on your own infrastructure. Let's go back to the main screen. As you can see, I have remote runners. Remote runners are an easy way for you to deploy orchestrators on your own infrastructure. We support Kubernetes and plain Docker images, so you could execute the different actions that you saw today even from your local computer, on EC2 instances, or via our Kubernetes operator. So you can spread the different actions across multiple accounts and multiple clusters, and each runner can use the action store capability to execute the actions on your own infrastructure. So you don't need to share network access or credentials of any kind. Creating a runner is super easy. If you choose the Kubernetes way, it will generate Kubernetes manifests for you to deploy the runner on your Kubernetes cluster. If you choose the Docker one, it will share the installation script with you, which is fairly simple. And as soon as you do that, you can deploy what we call an action store. Action stores hold the actions you saw in the actions panel, and each action store includes the relevant actions for one category of your technology stack. Today we executed some actions from the Kubernetes action store, which you can see here. These are all of the actions available in the Kubernetes action store, and you can extend it based on your use cases. You can even create your own action stores if you have your own internal developer platform or your own way of defining such actions. And behind the scenes, how it works is really easy. We have the Kubea SDK; right now it supports Python, and Golang is coming next.
Essentially, these action stores are tiny Python programs packaged as Docker containers, which you build with the Kubea SDK. So I'll show you one of the examples we did together today: the rollout restart deployment. As you can see, here is the function that was responsible for rolling out the deployment in Kubernetes. Each action store imports the Kubea SDK, and as soon as the SDK is imported, you can annotate a generic Python function as an action using a decorator, as you can see here. As soon as the function is annotated and you deploy it to a local runner, you're able to see it in the web UI and integrate it into your Kubea workflows. So it's super easy. You can even do that from the web UI: click here to create a new store and give it a name, let's call it AWS. The platform will generate boilerplate for you, so you don't need to figure out what the application structure should look like. You can see here that it generated a simple Python program and initialized it as an action store. It has some actions: a simple action, and a simple action with a model. Essentially, actions with models are rendered by the web UI, so the parameters are not built at runtime; the web UI is able to generate them on the fly. So it's very easy to use the platform this way. We can click on next. You need to provide a registry URL where you want to build and publish the action store. I'll go ahead with ttl.sh, which is a temporary Docker registry, for the demonstration. And now I want to deploy it to the demo runner. It takes a few seconds; behind the scenes, we package the action store as a Docker container and deploy it to the local runner. In my case, I have the demo local runner, which was deployed on Kubernetes. And as you can see, I can see the Docker build log.
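The decorator-based registration just described is a common Python pattern. Since the Kubea SDK's API isn't shown in detail here, this is a self-contained sketch of the mechanism such an SDK could use; the names `action` and `ACTION_REGISTRY` are illustrative, not the real SDK API:

```python
# Illustrative decorator-based action registration, mimicking the
# mechanism the Kubea SDK is described as using. These names are
# assumptions, not the real SDK API.
ACTION_REGISTRY = {}

def action(name=None):
    """Register a plain Python function as a named action."""
    def decorator(func):
        ACTION_REGISTRY[name or func.__name__] = func
        return func
    return decorator

@action(name="rollout_restart_deployment")
def rollout_restart_deployment(namespace, deployment):
    # A real action store would call the Kubernetes API here.
    return f"restarted {deployment} in {namespace}"
```

Deploying the packaged container would then expose every registered function as an action the web UI can wire into workflows.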
You don't need Docker installed locally, you don't need anything like that. And I can see a success message coming from the runner confirming that the action store was deployed. So if I click on done, I should see it right away. I can see both of the actions, the action with a model and the simple action, and I can use those inside my Kubea workflows right away. So that's very easy to do. I'll continue by showing an example of what the workflow DSL might look like. And here it is. So that's how the DSL looks. As you can see, it's even simpler than GitHub Actions. You can see the relevant steps in the workflow and, for each step, which runner executes it. And for a regular step, like building functionality, you have all of the tools you could use and the power to create the perfect workflow, including, of course, conditional logic, just as you saw in the workflow editor. So that was a really, really rapid demonstration of the platform. I'll go ahead and cover the last but not least part, which is security. Kubea is able to integrate with the organizational identity provider. So if you're using an identity provider such as Okta or Google Workspace, the platform is able to connect to it, and based on that, we grab and understand the mapping between users and roles. As soon as a user executes an action from Kubea, we try to determine, using policies that you can define, whether the user is allowed to do that. If that's not the case, the user isn't left stranded; they enter what we call an approval flow. The approval flow is able to find a person in the organization on the other side, it could even be a role, who is able to approve the request from the user, all within Slack. So it's super easy to understand and execute, and you can even do it from your mobile device. In this example, I'm connected to Slack, and since I didn't enable an identity provider such as Okta or Google Workspace, we're able to use Slack user groups as an IdP as well.
So here I can see the relevant policies that are available in my organization. I can see that I have some users that are able to execute different workflows, and I can even create a policy right here from Slack. So: an admin policy, and the workflow I want to allow, let's go with the Kubernetes rollout that we saw together. I could pick multiple if I wanted to, and the role I want to give permissions to. I simply click on Apply, and Kubea will create the policy for me. And now that role is able to execute the actions associated with the specific workflow that I chose. Besides everything that I showed you today, there is a CLI and an API. All of the actions that you saw in the web interface can be done using the Kubea command line interface, and of course using the API, so you can even integrate Kubea workflows into your existing pipelines without completely replacing them. So it's really easy to integrate Kubea all across the technology stack. Thank you so much. That was the presentation about the Kubea platform, and I'll leave some space for questions if you have any. Okay, so Balaji is asking if it is supported on Microsoft Teams. Yes, Kubea has initial support for Microsoft Teams. It's not as extensive as what you saw here on Slack, but we're actively working on extending it, and in upcoming releases we're going to make the integration more and more stable and powerful. Another question here: what happens if an end user doesn't have access and tries to trigger a workflow? It's a good question. If a user tries to execute a workflow, or even an action that's part of a workflow, without the proper permission, the platform tells them they don't have the appropriate permission and also points them to the resource owner in the organization who can approve the specific request.
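Putting the policy and approval-flow pieces together, the decision logic just described could be sketched like this. The schema here (a set of allowed roles per workflow, plus a single owner as approver) is an assumption for illustration, not Kubea's actual policy model:

```python
# Illustrative policy table: which roles may run each workflow, and who
# can approve requests from anyone else. Field names are assumptions.
POLICIES = {
    "kubernetes-rollout": {"allowed_roles": {"admin", "devops"}, "owner": "alice"},
}

def authorize(user_roles, workflow):
    """Return an allow/deny decision, routing unauthorized users to the
    resource owner for approval instead of a flat rejection."""
    policy = POLICIES.get(workflow)
    if policy is None:
        return {"decision": "deny", "reason": "no policy for workflow"}
    if set(user_roles) & policy["allowed_roles"]:
        return {"decision": "allow"}
    return {"decision": "needs_approval", "approver": policy["owner"]}
```

The `needs_approval` branch is where a bot like Kubea's would open the shared channel with the approver instead of simply rejecting the request.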
Based on the policies engine that the platform has, you can define who owns what, and create a kind of approval flow where the Kubea bot is able to approach the owner on the other side, show them the relevant context, and let them approve the request using a shared channel that Kubea creates on the fly, where you can discuss further why you want to execute such an action. So that's about that. Okay, another question here: what are some other clouds and tools that can integrate with Kubea? It's a good question. Essentially, almost anything can be integrated with Kubea, to be honest; anything a programming language supports can be run from Kubea. Some of the use cases that pop into my head are ArgoCD, Kubernetes, AWS, GCP, Azure, Jenkins, and many others. So the use cases are basically endless. Vishal is asking if it's possible to add the approval flow to email. Yes, that's a feature a lot of organizations have been asking for. It is possible to send the approval flow to email in addition to Slack or the chat platform, and based on email replies, the platform is able to continue the workflow right away and also reply to your email. So it's pretty powerful. Okay, another question I'm seeing here is whether Kubea is open source or paid. First of all, the Kubea SDK is open source, but it can be used only from within the Kubea platform. We're planning to release very soon a free forever plan where you could use the Kubea platform for many types of use cases. It's going to be limited for some of them, since there are also commercial use cases in the Kubea platform. A few more questions. Where can I get docs to start from scratch? We didn't open the free forever plan just yet; we're going to do that in the upcoming weeks, and after we do, you'll be able to find the documentation at docs.kubea.ai.
Another question I'm seeing: how can I control who does what in the platform? Basically, you can use the policies I discussed. You can manage such policies from the web interface and from the Kubea CLI, and we also plan on releasing a Terraform provider really soon, so you could manage these policies as code, as you do for the rest of your IaC stack. Same for configuration and workflows, basically. So it's going to be very powerful. Another question: can we set it up inside our Slack organization? Yes, it's pretty simple to embed Kubea in Slack. Right now the approach is to do that by contacting us, but really soon it's going to be open, so you'll be able to do it right away. Another question I'm seeing here: can we automatically trigger a Kubea workflow based on a Jira ticket created by our monitoring system, and make the required updates back to the ticket when it's resolved? Yes, you can do that using the webhooks feature. You can send a webhook from the monitoring system to Kubea in addition to the created ticket. You can also query Jira, for instance using the Jira action store, to get the ticket reference, and using that action store integration you should be able to comment on the ticket as part of the Kubea workflow. It's very possible. Okay, so I think we covered most of the questions. I'm seeing a question here about whether there are any demo sites available. Pretty soon we're going to release the Kubea playground, where you'll be able to test out the Kubea platform on a shared Slack workspace. So it's going to be pretty easy to test it yourself. Someone else is asking if it's possible to trigger Terraform scripts. Yes, you should be able to trigger Terraform scripts; we've had some use cases from customers around this particular scenario. You can even generate a dynamic modal based on inputs, let's say, from Terraform Cloud, let the user fill in all of the variables right in Slack, and click on submit.
And behind the scenes, the workflow will pick it up and run it asynchronously, since Terraform plan and apply take more time. Okay, since we're out of time, we'll be more than happy to answer any remaining questions, so feel free to send those by email and we'll get to all of them, we promise. Thank you so much for having me today, and thank you for attending the webinar. Hoping to see you soon on the Kubea platform. Thank you.