Three years from now, AI will be taking over the world. Basically, we are doomed. Fortunately, we still have a few years to have some fun with it.

My name is Eran Bibi. I'm the co-founder and chief product officer of Firefly. At Firefly we help teams get better control over the cloud using AI, and in this session I will share some use cases and an open source project that you can use, even today, to be more efficient using the power of this amazing technology of generative AI.

So when we think about what is happening recently in the AI ecosystem, what is the first thing that comes to mind? What can we do with AI? We can generate images out of text. If, for example, I would like to create a picture of a cherry blossom near a lake, I have at least three products that can deliver it for me: Stable Diffusion, DALL-E 2, and Midjourney, all of which do a very good job.

At Firefly we thought it would be very cool to create humans out of the tools from the DevOps ecosystem. Try to see if you can recognize the figures in the presentation: here we have Open Policy Agent, Crossplane, Kubernetes, and Argo CD rendered as humans. If you like it, you can scan this QR code and go to the Firefly blog, where you can see twenty additional tools that we turned into humans using AI.

So this is nice: we can generate images, and we can even generate videos now. But the real power is with language models like ChatGPT, and what we can do with ChatGPT today is just the tip of the iceberg.
We can create content; marketing is going crazy right now with all of the options this new technology offers. We can even plan a trip: if I would like to travel to Italy, ChatGPT will give me an itinerary for my trip, making everything very easy. I can even take a picture of some dish, and ChatGPT will give me the list of ingredients I need to buy in order to make it.

But if we focus on the DevOps ecosystem, there are at least two use cases where we can benefit from this technology. ChatGPT can generate code, and it also has analysis capabilities, so I can diagnose issues using GPT. This is the DevOps part, and this is where I'm going to focus in this presentation.

Let's start with the first use case: using ChatGPT to create code in the DevOps ecosystem. Luckily, if I'm following DevOps practice, everything in DevOps is declared as code. It can start with creating IaC, the stuff that describes my infrastructure, but it can also create pipelines and manifests for my deployments. If I'm using something like GitHub Actions or Jenkins, those pipelines are described as code, and ChatGPT and the GPT models can create that for me. If I'm also using a policy-as-code framework like OPA, the GPT model can create Rego configuration for me. So on and on: basically everything I do day-to-day can be automated using natural language.

Let's see some examples. I go to ChatGPT and ask for a Dockerfile. I just describe the Dockerfile I would like to create, in this case for a Node.js application, and I also ask for it to be secure. What I get is an output that can be very good for usage in this case.
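A Dockerfile along these lines might look like the following. This is a sketch of the kind of output such a prompt produces, not the literal response from the demo; the entrypoint file and port are assumptions:

```dockerfile
# Minimal Alpine-based Node.js image: fewer packages, smaller attack surface
FROM node:18-alpine

WORKDIR /app

# Install only production dependencies
COPY package*.json ./
RUN npm ci --omit=dev

COPY . .

# Drop root privileges; the official node images ship a "node" user
USER node

EXPOSE 3000
CMD ["node", "server.js"]
```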
It was also created from a base image based on Alpine, which is more secure and minimal, and better because it exposes fewer packages.

Another example: I would like to create a Kubernetes manifest, and this also works for Helm charts. Basically, everything that is declarative can be done using ChatGPT, and you can reprompt it and ask for additional parameters or modifications if the result you get is not sufficient. As for the policy example, if you are using a framework like OPA and you have Gatekeeper in your Kubernetes cluster, you can ask ChatGPT to create the policies for you; it supports the Rego syntax as well.

These are just a few examples of the things you can create using a GPT model, but this is not something you can really streamline. You cannot work with ChatGPT this way in your day-to-day, because you need to go to the browser, search, and copy-paste into your IDE, and that's not efficient. We would prefer a CLI that you can automate and embed in your software, and this is why we introduced an open source project called aiac.

aiac is a command-line tool, a CLI, that you can run locally on your computer or on a remote Linux machine. It is basically a client for OpenAI: you can ask aiac to generate manifests, code, shell scripts, whatever you want, directly from your terminal. aiac is written in Go, so you can also embed it into your project: if your application is written in Go, you basically have a library that works against the OpenAI API, and you can give the superpower of AI to any application you are building.

Let's see a quick demo of aiac. I have aiac installed on my computer, and I run `aiac get terraform for RDS`. In this example I would like to generate a manifest for a managed database instance in Amazon. It will take between five and twenty seconds, depending on the complexity of the response, and everything I need in order to get aiac ready is just an OpenAI API key, which you can get from the OpenAI website.

What you see here is a valid, working Terraform manifest for RDS. But in this example, the instance class is something I would like to turn into a variable; I don't want an explicit t2.micro. So with aiac I can reprompt and give it more context: I ask it to change the instance class into a variable. What happens is that aiac sends the manifest with the new instruction back to the OpenAI backend and gets a new response based on my modification, and this can go on and on until you get exactly the response you desire. It's not just about taking the first response; you can interact with it.

So what you see here is the new response. I can see that it took the instance class and made it into a variable, and it also created the variable block declaring, as the default, the value that was there before. It works very well. It doesn't give you the exact answer you asked for 100% of the time, but surprisingly, it works very, very well. I just save it as main.tf, and I have a file ready in my file system.

And of course, what you see here is the interactive mode; everything you see here you can also do through flags, so you can automate it, in your pipeline for example. Here I would like to create a GitHub Actions workflow, just to show you that it's not just IaC; you can basically create anything. I'm asking for a GitHub Action that can build a Mongo container. It takes a few seconds to respond, and I get a GitHub Actions manifest.
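A generated workflow for that prompt might look roughly like this. This is a hypothetical sketch, not the actual aiac output; the image tag, branch name, and test command are assumptions:

```yaml
name: Build Mongo Container

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Check out the repository code
      - uses: actions/checkout@v3

      # Build the container image from the local Dockerfile
      - name: Build image
        run: docker build -t my-mongo:latest .

      # Run a basic smoke test against the built image
      - name: Test image
        run: docker run --rm my-mongo:latest mongod --version
```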
This is a GitHub Actions YAML file, and I see that it's doing the basic stuff of any pipeline, like checking out the code and building the container; there is also a code block here for running tests. But I see that something is missing, and I would like to enrich the manifest. I ask it to add a security scanner, because in any pipeline where I'm building a container, I also want to have it scanned. So I ask it in simple text: "add security scanning step." In response, what I expect to happen is that I get another block, in addition to everything that is already there, with one of the security scanners. And here I can see it added a step that runs Trivy on the Mongo container I was building in the previous step. This is sufficient for me, so I hit "s", which is save, and put in the file name; I can give a full path, and I have a workflow file in my system, ready for usage.

This is just an example of what you can do with aiac. If you want to use it, it's a completely open source project; you can scan the QR code, or go to the aiac URL.

Now I'm moving to the next use case of GPT in the DevOps landscape. Here we are trying to leverage the diagnosing capability of GPT: you can basically give the model an issue, and it can help you identify the resolution. For this use case there is a very nice project called K8sGPT.
This is also an open source project, created by a guy named Alex who works at Canonical, and it gives Kubernetes the superpowers of AI. It's a client that works against your cluster: it can analyze the issues you have running on the cluster, send those issues to OpenAI, and get more context on how to resolve them. So instead of needing to understand everything yourself, at the level of a CKA-certified cluster administrator, it prompts you back with simple resolutions for the issues currently on your cluster.

Let me show you a quick example of two issues that I was able to resolve with this tool. I have kubectl installed on my local machine, pointed at my cluster. The first issue I see when I do a get pods is that one pod of my web server is in a Pending state. I use k8sgpt analyze to get the list of issues it finds in my cluster, and I see it was able to identify two issues; the second one is the problem with my pod. The --explain flag is what does the AI magic: it gives you more context, in simple English that anyone can understand, and it also provides the solution. In this case I have an affinity selector that is not in the right place, so it explains to me what to do. What I need to do here is just edit my deployment, look for the affinity block, and in this case simply remove it to see if that helps.

So here is what I do: I edit my deployment, find exactly what was referenced in the explanation, find that affinity block, remove it, and save the deployment. Now I will scale the deployment down and back up, to see if my pod is able to recover. So I run kubectl scale on my deployment, web.
This is the name of the deployment. I bring the replicas down to zero and then do the same to spin it up again. When I do kubectl get pods, I see that my web server pod is now in a Running state.

I run the K8sGPT tool again to see if the problem still exists. I run the analyze again, and I have only one issue left: I can see I have a service with no endpoints. So I do the --explain again to get more information about this issue, and it basically tells me that I have a label that is not aligned with any endpoint available. In this case, what I need to do is go to the service configuration. I edit the service and check the label. I don't have any deployment with the name "my-server", so it's a mistake: I go to the label and change it from "my-server" to "server". And of course I run the analysis again to check, and no problems are detected.

So it basically shortens the time from understanding that you have an issue until you have a resolution, because it gives human-friendly, human-readable context for any issue you have. Once you are using this tool, you get a very clear list of issues that you can resolve without having deep knowledge or being an expert at troubleshooting Kubernetes. And this is just one of the modes you can use: in another mode you can run it as a daemon on your cluster, and it will trigger alerts to a third-party system, wherever you normally get your alerts. This is something I found very useful, and we are using it in-house at Firefly to help the SRE team identify issues.

I'd like to sincerely thank each of you for attending my lecture today; your time and interest are greatly appreciated. I hope you found the content valuable, and please feel free to reach out if you have any questions or comments. Thank you once again.

Thank you, guys. If you
have any questions?

You are the first, yeah. I read a blog post on the Deepgram site, and they're talking about LLMs and how they're going to continue increasing in complexity and robustness, and how they should yield more second-order types of applications, meaning that instead of just outputting text, we're going to start seeing transformers go beyond the realm of core NLP tasks, though the author argues you have to take that with a little seasoning. So I'm curious what your thinking is around that for DevOps.

I think what we are seeing right now, even with the demonstration of the two tools I just showed you, is just the beginning. Look at the way this ecosystem has evolved: if we look back to December, when ChatGPT was released, and right now there are something like, and I'm not exaggerating, 500 AI tools being released each week. So I think this is just the beginning. Eventually, every tool we use will have AI capabilities, because it's inevitable. Again, this is a revolution, and I'm sharing my perspective and what I'm learning over time: this is a revolution, and we are witnessing something huge happening. We will see things related to administration of workloads and infrastructure in the cloud; everything you are doing right now, even CD flows, will have some sort of AI assistant embedded in it. All of the projects you work with now, whether it's Argo CD or other tools related to Kubernetes maintenance, will have AI embedded in them. This is something that will happen in the next few months.

So what are the next steps towards that second order? I don't know. He doesn't know.

So I've got a couple of quick ones. You were generating the manifests earlier; the first manifest came out, and it had "example blah blah, example blah, example blah."
I'm not going to generate that in a CI and then run it to deploy it; it didn't have the stuff I needed. That means I'm going to have to hit it with sed, right, because, you know, I'm old-school. Do you have more control over ChatGPT's prompts, to get it to fill out something other than "example"?

Yes. I showed that when I demonstrated how you can reprompt it and ask for something to be a variable. If you're using IaC, you want to have mutable infrastructure; we don't want explicit values, we want it to be a module. When you ask ChatGPT for something, you get an initial response. This is what I call the raw response, not a perfect fit. You need to feed it the additional things you would like to change, like "make every value a variable", or "please take", well, without "please", you don't need to say please to it, right? "Take the response and create a mutable module out of it", and it will respond with everything you would like. It's based on interacting with it, not one sentence and a perfect answer.

Okay, so in the second file you generated, there was a reference to docker-compose.yaml, and I'm like, I don't have a docker-compose.yaml. Where do I get that from? So I would nicely ask Mr. ChatGPT: please, can I have a docker-compose.yaml to go with this file? And I'd get it. I'm assuming that's what you're saying, right?

Yes, okay. The thing is, with ChatGPT and also with aiac, it remembers the context of the conversation. Every time you interact with it, whether it's ChatGPT or aiac, you are sending the entire transcript of the correspondence to the backend, so it understands the context. You can think of it as training it on your context: you can start and say, "this is my manifest, I would like you to create an equivalent one for XYZ", and it will do that. You are basically steering the model to give a response that fits your needs.
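The reprompt flow discussed in this exchange, turning an explicit value into a variable, might look like this in Terraform. This is a sketch, not the actual demo output; the resource name and attributes are assumptions, and a real aws_db_instance needs more arguments:

```hcl
# Before the reprompt, the instance class was hard-coded:
#   instance_class = "db.t2.micro"

# After asking for a variable, the old literal becomes the default
variable "instance_class" {
  description = "RDS instance class"
  type        = string
  default     = "db.t2.micro"
}

resource "aws_db_instance" "example" {
  allocated_storage = 20
  engine            = "mysql"
  instance_class    = var.instance_class
}
```

Callers can now override the value with `-var instance_class=db.t3.small` instead of editing the manifest.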
Okay, so this is the last piece, and I was leading up to it. How long before my developers leak my internal server names and internal IP addresses to ChatGPT, which I'm pretty sure gets to keep all the stuff you enter into it?

That is one of the biggest challenges, and I think a lot of awareness needs to be raised on that topic. If you are using ChatGPT, the non-paid version, the one you use for free, everything you put there is saved on OpenAI's servers and can be used in training, surfacing for other people. So it's not safe. But with a tool like aiac, or if you are using the paid version, ChatGPT Plus, you can choose not to have that data saved, so it's relatively safe. Of course, you are still sending the data to a third party, but it's not being saved. So this is just for awareness: if you're using aiac, because it's API-based, nothing is saved on the other side.

Security is important for that, yeah. So, a couple of weeks back I was at KubeCon, and there were a few startups who were actually using ChatGPT for DevOps. They were not only offering some kind of command line like aiac, but were able to actually create the workflows: connect to an IAM role within AWS, create manifest files, deploy them to a virtual service. So what's your take on this kind of role, where you're giving even IAM roles to some GPT-like API or product that, again, has potential security risks? What's your take on those kinds of workflows being built?

Even in Firefly, the commercial product, we have AI-based flows, and when customers ask us whether this data is being saved by a third party,
the answer is no. So I think it's the responsibility of customers, before they evaluate a product, to make sure their data is not being compromised, whether by that vendor or by other tools. But it's really important to understand: if you are a product working against the OpenAI API backend, the data is not being saved and is not part of the training model. They save data only for users on the free ChatGPT; they call it experimental, or something like that, this is how they declare it. But basically, everything you put there is saved and can potentially be presented to other people in responses. So my answer for you is: just make sure that nobody, no other third party, is saving the data you are sending.

All right, so this has a follow-up question. What would the training, or essentially the fine-tuning, of this model look like if you're using a paid version? For something like, let's say, being able to create automated workflows for your AWS, being able to create new instances: if you want to fine-tune it, what would that process look like? How do you train a model, what are you specifically fine-tuning it for, for, let's say, a given company that does DevOps? I mean, they'll have their own infrastructure. So something like K8sGPT or aiac, if you want to fine-tune that for a particular company that has, say, a number of different namespaces and resource groups?

I can give you an example of how we do it in Firefly. One of the flows in Firefly is creating policies using natural language.
People can describe the policy they would like to enforce using a sentence, and Firefly replies with the policy as code that was created. The way we train on the data is basically by sending a snapshot of your current configuration to the model, so that the answer will be accurate for your specific environment. I guess every vendor that is using AI is sampling data in this way, in order to get a response that works for your specific use case.

I don't have so much a question as more of a clarification on the privacy point you mentioned earlier. With the newer releases of ChatGPT, they're actually going to implement a way for enterprises to not have their data leaked. So that privacy concern, you know, the names of my servers and things like that, will eventually be something that the user, in any version, can delete and actually control. And I think that comes from a lot of this stuff happening in Italy, where Italy banned ChatGPT for a minute and then kind of gave it back, but that's a whole different story.

Yeah. That was nearly it. Thank you. First one: sorry, "three years" is until somebody acquires Firefly?

Yeah. Guys, this stuff is escalating very fast. I could not give this exact talk in six months, because so much is going to happen in six months, so many tools, so many implementations.
The stuff I just demonstrated will look old-school, because we will get used to our workplaces utilizing AI in every possible corner, starting with sales and marketing, but also in engineering. I am forcing, and I'm using the word forcing, my employees to embrace AI by running an academy, to make sure everybody takes advantage of what can be offered right now. And I'm not just talking about open source stuff; I'm talking about commercial tools, because it's proving to be efficient: you can do more with the same employees. Some will say you can do more with fewer employees. This is something we need to get used to; just make sure to stay on top of it. As I said, use AI for your benefit, because it's here to stay and it's evolving, and you know GPT-5 is around the corner. You never know; Skynet, Skynet is here.

Yeah, it's an exciting time. I think it's an exciting time; I'm not worried, I'm really excited to see what's next. Think about it: you can get so much benefit out of something that digests your data, on your cluster, on your cloud, in your code, and makes your life easier. And of course, people will need to adjust their work because of that.

Thank you very much.