Welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm your host today. My name is Whitney Lee, and I'm a CNCF Ambassador and a Developer Advocate at VMware Tanzu. Every week on Cloud Native Live, we bring a new set of presenters to showcase how to work with cloud native technologies. We will break things and we will answer your questions. This week, we have Safi Janis here with us to deliver a presentation called IaC Migration Using AI: from CloudFormation to Terraform in Seconds. Now, this is an official live stream of the CNCF, and as such, it is subject to the CNCF code of conduct. What does that mean? It basically means: just be the nice, kind humans that you are. Please don't add anything to chat that would be in violation of the code of conduct, anything that would be disrespectful at all to your fellow participants or to the presenters. Be nice to me too, please, and to Safi. Anyway, friends who are joining us live, if you have questions, please, please, please post them to chat. After the presentation is over, that's when we're going to answer all of your questions. We're going to have a discussion time at the end. With that, I'll hand it over to Safi Janis to kick off today's presentation. Safi, tell us about yourself. Yes, so first, thank you very much for having me. I'm Safi Janis, and I'm the CTO and co-founder of Firefly. Today, I'm going to talk about leveraging AI for migrating your infrastructure as code from one IaC framework to another. Before that, I'd like to tell you about my company, Firefly. It's a cloud asset management solution that scans your entire cloud footprint, meaning all providers, Kubernetes clusters, SaaS applications, et cetera. In Firefly, we also analyze the infrastructure as code, from Terraform to Helm charts, and we detect discrepancies between the cloud and the IaC.
We built a technology that can generate Terraform code, with modules, for things that were created manually in the cloud. We detect drifts and unexpected changes continuously, we also added layers of policies using Open Policy Agent (OPA), which I will talk about in this session as well, and we can govern the cloud for FinOps, SecOps, and reliability. Again, thank you very much for having me. This is all very exciting. I'm looking forward to digging into the code. So I'm going to go ahead and let you share your screen now so you can get started. Okay. So in this talk, I will discuss AIaC. AIaC is an open source tool that we recently launched that helps you migrate infrastructure as code from one IaC framework to another, or helps you generate infrastructure as code using AI. We adopted OpenAI, and we utilize OpenAI in order to generate Terraform code, CloudFormation, et cetera. And I will show you what you can do with it. Here, under the documentation, and by the way, we have 2.7K stars, and you're more than welcome to star this repository, we have some examples for utilizing this great tool. The first one is to generate new infrastructure as code. If you want to create a new Lambda function, with modules and variables, under a VPC, et cetera, you can use this open source tool, and it will generate the Terraform code for you, or CloudFormation. You can also generate a Dockerfile or a Kubernetes manifest. In this session, I will also show you a CI/CD pipeline we generated using AIaC, and this is really powerful, because you know how difficult it is to write YAML files. This tool can also help generate policy code, OPA, and even act as a query builder: for instance, if you have MongoDB or Elasticsearch and you need to build a sophisticated query, you can do that using this tool. So I've shown you the tool and explained it very briefly, but let's dive into the code. In this specific session, I will show you how I can migrate one IaC framework to another.
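As a rough illustration of the generation workflow described above, an invocation might look like the following. This is a sketch: the exact subcommands and flags should be checked against the aiac README, and the API key is a placeholder.

```shell
# Hypothetical key; aiac reads it from the environment.
export OPENAI_API_KEY="sk-placeholder"

# Ask aiac to generate Terraform for a resource described in plain English.
aiac get terraform for "a lambda function under a vpc, with modules and variables"
```

The generated HCL is printed to the terminal, where it can be reviewed, refined through further prompts, and saved to a file.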
For example, I built a CloudFormation stack. I will show you the CloudFormation stack, which builds an application in a dedicated VPC with a certain configuration, and I'll elaborate on that. I will launch this CloudFormation stack, and then I will show you how we can utilize this tool to migrate the CloudFormation to Terraform. But that's just an example. Using this great tool, you can migrate any infrastructure as code to whatever you want. You can migrate Terraform to Crossplane, or CloudFormation to Pulumi. And I'm going to the first stack here; let's dive into the code. Cool, so the first thing that I'm going to do right now is launch a CloudFormation stack. Hold on, I will connect to my AWS account. Super, okay. So I'm launching right now a CloudFormation stack that will create 25 resources and build a new application. It takes around two or three minutes, but I'd like to dive into the CloudFormation template just to explain what it is. And of course, that's just an example. You can use any microservice that you have, with a backend and frontend, or other infrastructure in the cloud. As you can see here, in this specific CloudFormation stack I deployed a dedicated VPC, because I built my application isolated inside a virtual private cloud. I built four subnets: two public and two private subnets. Okay, I also created an internet gateway, because somehow we need to route the traffic out to the rest of the world. Will you make it a little bigger, please? Yes. Thank you. Okay. Is it good enough right now? Yes, thank you. Okay. So as I mentioned, we built here a VPC, four subnets, two public and two private, and an internet gateway, because we need to route the traffic out of our VPC. We also configured the routing: route tables, public and private; the public one routes the traffic through the internet gateway. You can see that we built an EC2 machine, and the user data will launch the application. Also a load balancer, because we want to build a highly available solution.
Security groups for the load balancer and the EC2 instance, a network ACL, and I think you've got the idea here. Let's see if the resources in the CloudFormation stack were created. Maybe it will take one more moment, but that's okay. So while it takes one more moment, I will show you an example of AIaC in the meantime. Using AIaC, we can build, for example, Terraform code for a Lambda function under a VPC. I just closed the YAML file; I don't know why my computer got stuck. For a Lambda function under a VPC. Now, wait around 20 seconds and you get the Terraform code. But AIaC is a great tool because it's like a chat. You can continuously talk with AIaC, and if I now wanted to add security layers on this codified resource, we could do that. We can also generate the code and save it to a file. And that's super powerful. You can see here, I'm going to show you the result. I hope that it's not too small. You can see that it creates the VPC, the security group, and the Lambda function here. Hold on. Okay, the Lambda function with the VPC configuration, exactly as we wanted. And I'll show you more examples of AIaC, but I'm going back to the CloudFormation stack. I want to see that the resources were deployed, and yes, right now it looks good. That's a web application that we launched with 25 resources, and that's CloudFormation. And we want to migrate this to Terraform, so I'm going to step through my talk. What I have done is I went to the CloudFormation template and I copied the VPC and the subnets, just the VPC and the subnets, for explanation and demonstration, and I created an input file called networking.txt, because to utilize AIaC I need it to read the data. If I'm migrating CloudFormation to Terraform, I need to read the CloudFormation and run a prompt. And I'm going to show you right now that I will run the following commands. And I will show you this here. Can you make that bigger, your terminal bigger?
Yes, that's exactly what I'm trying to do. Oh, okay. Let's try here. Excellent. I will write here and I'll copy it. It can't, it can't be, ah, okay, resize. Okay, so what I'm going to run right now, and I'll try to do my best, is to read the networking.txt file that I showed you. That's part of the CloudFormation stack, and it passes the data here into a variable called input. And then I'm running the command: aiac get terraform code, with the variable, for the following CloudFormation. That's the CloudFormation stack, and I'm using the output flag to save the Terraform code into a directory called output, and I also added the quiet flag. So here I will run this. Okay, let's run the aiac get terraform command. It takes 15, 20 seconds, and you will see that it generates the Terraform code for the VPC and the subnets that I picked. And of course I can do it for everything; I just wanted to show you how it looks. And let's save it. Okay, now it's saved here in the folder called outputs. You can see that we have a VPC, exactly as it was migrated from CloudFormation, subnets, et cetera, right? And again, the open source tool is called AIaC; just go to aiac.dev, that's the open source tool, and you can install and run it using Go or using Docker. Cool, so we have Terraform code, and I also added a provider.tf to be able to run this code, so let's try to run terraform init. In Terraform, we need to run terraform init, plan, and then apply in order to deploy the resources. But my problem here is that I'm going to apply something that is already applied, because it exists in the cloud. I've already created the resources using the CloudFormation. Now I've declared the Terraform code, but I will not be able to apply it, because the resources already exist. And if I show you the Terraform plan, we will see five resources are going to be added.
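The read-the-file-then-prompt pattern from the demo can be sketched roughly as follows. The CloudFormation fragment is a stand-in, and the aiac invocation (shown as a comment) is illustrative; its flags should be checked against the aiac README.

```shell
# Stand-in for the CloudFormation fragment copied out of the template.
cat > networking.txt <<'EOF'
Resources:
  MyVPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
EOF

# Read the fragment into a variable so it can be embedded in the prompt.
input=$(cat networking.txt)

# Illustrative aiac call (requires OPENAI_API_KEY; flag names approximate):
#   aiac --output-file=outputs/networking.tf --quiet \
#        get terraform for "the following CloudFormation: $input"

# Sanity check that the fragment was captured.
printf '%s\n' "$input" | grep -c 'AWS::EC2::VPC'
```

The same pattern works for any source format: put the snippet in a file, capture it into the prompt, and ask for the target IaC.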
Okay, you can see five to add, zero to change, zero to destroy, but the resources already exist. So we need to import them into our Terraform state; otherwise it will try to recreate them, and this is a bit more sophisticated. I'd just like to mention that in Terraform 1.5, they added a new capability: you can use an import block, which looks like this, and you just need to reference the specific resource from the cloud account. But I will go to the Terraform documentation. I'm going to the AWS VPC documentation. And here you will see the terraform import command. It looks like this: terraform import, the resource type, the name in the code, and the ID. So I'm going back to the code and running terraform import aws_vpc. Then, let's see, the name is my_vpc. We want to import this one. Okay, import successful. And if I now run the Terraform plan, we'll see only four resources are going to be created. But what happens if right now I have 100 or 1,000 resources? It's tough, and that's why in Firefly, we automated this solution. As I mentioned before, Firefly is a cloud asset management solution that can scan the cloud and analyze the IaC, and we can generate the Terraform code and import the resources for you. So you can sign up for free; we have a free tier. And I'm going to show you right now how you can use our platform in order to import resources and fully automate this using our solution. We also adopted this technology as part of our product. So here you can see an EC2 instance, exactly like the one that was created before, about 48 hours back. So I will read it again. Okay, we have here an IAM policy; I can codify it, and you can see that Firefly generates the Terraform code for you. We also detect dependencies. Sometimes you have resources with dependencies, like a Lambda function under a VPC and subnet, et cetera. We detect all of them. We can create modules for everything that you want. And you see that everything is fully modular.
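The Terraform 1.5 import block mentioned above looks roughly like this; the resource address and VPC ID are hypothetical. With the block in place, a plain `terraform plan` and `terraform apply` perform the import instead of trying to create the resource.

```hcl
# Terraform 1.5+ import block (hypothetical names and ID).
import {
  to = aws_vpc.my_vpc            # address of the resource in the code
  id = "vpc-0123456789abcdef0"   # ID of the existing VPC in the cloud
}

resource "aws_vpc" "my_vpc" {
  cidr_block = "10.0.0.0/16"
}
```

The older CLI equivalent for the same step would be `terraform import aws_vpc.my_vpc vpc-0123456789abcdef0`, one command per resource, which is what makes the 100-resource case painful.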
Instead of putting a hard-coded value here, we can deploy it in staging, prod, et cetera. But what I wanted to show you is that you can click on the import command button. Firefly will generate the import commands, and using that, we automate this process and we can create a pull request back to your Git repository. But I would like to continue with this session. Before continuing, does anyone have questions so far? We do have a question right now from Chris. I have CloudFormation templates I'm trying to move to Terraform. I see two options: one, import into Terraform and then do a show plan to see the plan and get the Terraform code; or two, use AIaC. Which is the better option? I would say AIaC, because that's fully AI and can help you leverage it. But the truth is that there is no one answer for an architecture question like that. You should do whatever works for you. But I can tell you that I tried to automate this, and it took me like five minutes and I succeeded in migrating my CloudFormation to Terraform. So in the end, it's the same result regardless of which way you go. It's just a matter of AIaC being more simple, or, actually, maybe Chris is asking a similar question to mine: which will generate higher-quality results, if you use the Terraform way? Okay. It depends, it really depends. I can tell you that AIaC utilizes a non-deterministic approach, so you can generate everything that you want. If right now you would like to migrate a Lambda function, but you want to add more configuration, policies, and so on, then without AIaC I would need to write those resources manually. Or what happens if you want to codify modules? So I suggest using non-deterministic solutions, or, as I mentioned before, in Firefly we do a great job of helping you codify resources with existing modules. Thanks for that. And thanks for asking questions, Chris, we appreciate you. Okay. Super, let's continue, and then we'll stop for more questions, thank you.
So I just showed you an example of migrating some of my resources into Terraform. But I have done this before and built the whole solution using the same technique. And you can see that right now I have Terraform code with all the resources, the provider, outputs, et cetera. So let's run the terraform apply command. One second. I took the Terraform code and I decided to deploy it in an isolated account. So I deployed the CloudFormation in account A, then I migrated it to Terraform and deployed the Terraform in account B. And you can see that it totally works. I have here a DNS name; let's open it, and it's the same architecture, the same thing as you saw in account A, but right now this one is deployed using Terraform. And why am I telling you this? Because it's really useful for migrating your IaC, and not just that, but also for generating Terraform code, et cetera. And one more thing. Good. I would like to show you one more thing that you can do using AIaC. And I'm going back to the Git repository. You remember that before, I told you about generating CI/CD pipelines. You understand that running the Terraform code from your local machine is not a good practice. The best practice is to use a CI/CD pipeline, to store the state files in a remote backend, et cetera. And that's exactly what I'm going to demonstrate right now. I built here a README using AIaC, and you can see the command. I will use AIaC to generate a GitHub Action for deploying Terraform, and then I'd like to add a security check to the existing CI/CD pipeline. So I'm running here, basically: aiac get github action for deploying terraform. While we're waiting, we have a quick question, and that is: will AIaC work for Azure? What do you mean? I think we're showing an AWS example here. Does it work for all the cloud providers? Yes, yes. It's multi-provider, because it's not deterministic. You can also generate Terraform code for GitHub or Datadog or other providers.
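The remote-backend best practice mentioned above can be sketched like this; the bucket, key, region, and lock-table names are hypothetical.

```hcl
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"    # hypothetical state bucket
    key            = "app/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"       # hypothetical lock table
  }
}
```

With the state in a shared backend, the CI/CD pipeline and every teammate operate on the same state file instead of each machine keeping its own copy.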
And for Kubernetes as well. Yeah. Good. So you can see here that AIaC generated the pipeline. Right now, I know that it's a bit small, but you can see the terraform init, terraform plan, terraform apply, but we don't have any security check. And that's why I will ask AIaC to continue the chat, and the new message is: add a security scanning step. And then I will save it into a dedicated YAML file. I will use this file, and I will show you the result in a real Git repository. Okay. So you see that we now have one more step with tfsec. tfsec is a great open source tool that was created by Aqua Security, an additional layer of security. And let's save it, one second. Let's save it into the folder. Let's call it temp2.yml, for example. Cool. So right now you can see that AIaC, using generative AI, generated the CI/CD pipeline for the Terraform code, terraform init and so on, and also the security scanning. I did the same before, and I already configured the Git repository, and I want to show you the results. Okay. I'm going to the GitHub Actions, and you can see right now that this one, that's the CI/CD. I ran the terraform init successfully. The terraform plan. Make it bigger, please. Thank you. Yes, sure. So you can see that we set up the CI/CD pipeline. Everything took, you saw, five seconds using AI. I got the YAML file, I put it in my Git repository, and from now on I have a full CI/CD pipeline running. The terraform init works. The terraform plan also works. And that's the security checker, so let's walk through the results. It failed. And sure, VPC, okay. That's a nice control. It is talking about assigning public IPs in a VPC, something that must be configured, load balancers that cross availability zones, and many other security controls, like users with open access, publicly accessible assets, et cetera. This is focused on security.
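A workflow of the shape described above might look roughly like this. It is a sketch, not the exact file aiac generated in the demo, and the action names and versions are illustrative.

```yaml
name: terraform
on: [push]
jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: hashicorp/setup-terraform@v2
      - name: Terraform init
        run: terraform init
      - name: Terraform plan
        run: terraform plan
      - name: tfsec security scan          # the step added via the chat
        uses: aquasecurity/tfsec-action@v1.0.0
      - name: Terraform apply
        if: github.ref == 'refs/heads/main'
        run: terraform apply -auto-approve
```

Gating the apply on the main branch is one common choice; the scan step fails the run when tfsec finds issues, which is exactly the behavior shown in the demo.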
And now I'm going to introduce you to a different open source tool that we launched in Firefly a year ago, called Validiac. You saw that you can utilize security checkers to leverage your CI/CD pipeline. Validiac is an open source tool that gives you one place to run all of the analyzers: tfsec, Infracost, InfraMap, TFLint, et cetera. And using that, it works not just with security, but also with cost, with lint, or with a map. Here we have an example with some results, a VPC with some subnets, et cetera. We'll open the security tab, where we ran tfsec. tfsec, by the way, is part of Trivy today; Aqua, they do a great job there. And you will see that we have two potential problems and one resource that passed. But if I go to the cost tab, you can see that we have $505 total, that's the total price of those resources, probably because of this instance, okay, an m6 instance. We can also generate the map or the lint. So that's Validiac; you can use it as well, and you can put Validiac as part of your CI/CD pipeline. And I'm open to any questions from your side. We have a question about enterprise code scanners. How would you change, oops. How would you change to other enterprise code scanners rather than Aqua? I know some code scanners: tfsec, or Trivy, which was created by Aqua, is a good one, but Checkov, which was created by Bridgecrew, is also a great solution; Palo Alto Networks acquired them specifically. There are more of the same: KICS, which was built by Checkmarx, is also a great open source tool. Each of them has pros and cons. I'm not an advocate of any one of them, but all of them are great. For example, KICS is based on OPA, and using that you can leverage your own policies, or you can build policies using AIaC, put your own policies into KICS, and govern your infrastructure as code.
tfsec or Trivy are great; they have controls to help you with infrastructure, and all of them are great. I have a question. Are there best practices around using AI as part of IaC? Yes, yes, sure, thank you for asking that. It's a really important question. The first thing is regarding secrets, tokens, API keys, et cetera. You should never pass your sensitive information to ChatGPT or to other AI systems; it's dangerous. And one of the ways to avoid this is to use Gitleaks. Gitleaks is a great open source tool that can detect secrets, and you should understand what you pass into AI systems. Cool. That's very helpful. Are there any others, or is it mainly about secret management, you think? Any other best practices or good practices? I think that secrets and API keys are definitely the top priority, but also customer data, because you should never pass customer data or PII; even if it's not a password, you should be aware of that. And GPT and other solutions are not deterministic, so the result is not always correct. GPT has something called temperature that you can configure; it's a measure from zero to one, and it controls the creativity of the AI. So if you want the code to be correct, without any creativity, you can set the temperature to zero. Excellent. I have other questions. As you've been going, I've been typing out my questions. If you see me typing, that's why. Another question. So what I saw in your excellent demo was you had templates already in CloudFormation, and then you migrated those. What if you have resources running but you don't have templates for them? Can you generate templates? Oh, that's a hard one. That's something called ClickOps: what happens if I created something manually in the cloud and I don't have any IaC, what can I do? That's more difficult, and Firefly, by the way, solves this specifically, because we scan the entire cloud footprint.
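The Gitleaks check mentioned above can be run locally before any code or history is handed to an AI tool. A typical invocation looks like the following; the flags match recent Gitleaks releases but are worth double-checking against its documentation.

```shell
# Scan the current repository for committed secrets and write a JSON report.
gitleaks detect --source . --report-format json --report-path gitleaks-report.json
```

A nonzero exit code means leaks were found, so the same command drops straight into a CI step or a pre-commit hook.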
So if you have an issue like that, you can sign up for free to Firefly. If I may, I will share for one moment how it looks. Oh, yeah. You don't need to apologize. I'd never heard the term ClickOps before. I like that. That's fun. Sure. ClickOps is... Yeah. Yeah, sure. Okay, so before that, I think that I need to explain what ClickOps is, because that's a really important distinction. In real life, I hope most of you use infrastructure as code, but some resources might still be created manually in the cloud, for various reasons. That's ClickOps, or unmanaged resources, and migrating unmanaged resources into Terraform demands understanding the cloud footprint. And by the way, today we are the only solution that scans the entire cloud footprint and can generate Terraform code for that, and you can sign up for free and use it. So for instance, this one is an S3 bucket ACL. You can see that this resource was created manually. I can codify it. I can also detect the dependencies, and now I have Terraform code. Cool. We have a question from chat. This is from Kenneth. Can I create Kubernetes objects? If so, could I also do the inverse and generate YAML manifests from already-created Kubernetes cluster objects? So a similar question, but with a different technology behind it. Yeah, so I'll answer this from two different perspectives. If you have some Kubernetes resource, the answer is yes, you can generate it. So if you have a YAML file or YAML manifest and you want to migrate it to a Helm chart, with AIaC, as I showed you, it's the same. And if you have a resource that doesn't exist yet, and you want AIaC to generate the ingress rule for you, or the Kubernetes deployment that will deploy a CRD, for example, you can leverage AIaC for that too. Now, I can actually share it. Let's do it.
So I think you answered Kenneth's second question as part of the first question. Does it support the creation of Helm charts? It sounds like it does, yes. Yes. Let's run the following: aiac get helm chart deployment for ingress rule. Let's start there. Easy. Okay. Okay, I will use the second one. I already passed the configuration. So now I'm utilizing AIaC, and I will generate the Helm chart deployment for an ingress rule. It takes a few seconds, and then we'll see it. We can utilize AIaC to migrate from one IaC framework to another, but in the real world, if the resource exists in the cloud or in Kubernetes, you can use Firefly to generate it. I will save it into a dedicated file. Is AIaC a CLI utility only? No, you can use AIaC with Docker or with the Go package. With Docker or the Go package? Here. So regarding the Kubernetes question, right now you see it generated the Kubernetes and Helm chart with the ingress rule. Cool. So we've been talking a lot about Terraform. There are other open source projects for cloud provisioning. Offhand, I can think of Crossplane, which you did briefly mention, and Cluster API. Does this work with those too? Are there other cloud provisioning open source projects? Yeah, definitely. And I'd like to elaborate on Crossplane. That's a really unique and great open source tool. It's still maturing, but I see that the community is adopting it and building great things with it. Crossplane is a Kubernetes-based infrastructure as code tool that helps you generate one unified resource, a cloud-agnostic IaC. For instance, if I have an S3 bucket, Azure Blob Storage, and GCS, those are three different resources. In Terraform, we would need to define three different resources. Using Crossplane, you can build one unified storage resource and decide in which cloud provider you want to deploy it. And this is a really interesting solution. It is really cool.
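The unified-storage idea can be sketched as a hypothetical Crossplane claim. The kind, API group, and labels below are made up for illustration; in practice the actual cloud is chosen by a matching Composition defined by the platform team.

```yaml
apiVersion: example.org/v1alpha1
kind: ObjectStorage            # hypothetical composite resource claim
metadata:
  name: app-bucket
spec:
  compositionSelector:
    matchLabels:
      provider: aws            # swap to azure or gcp to target another cloud
```

The developer only asks for "object storage"; the operator-authored Composition maps that claim onto an S3 bucket, an Azure Blob container, or a GCS bucket.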
And you can offer your developers a way to self-service provision stuff in a much simpler interface, where you have a lot of the defaults set to whatever you want as an operator, and then you expose only the knobs they need. I'm a Crossplane fan. Nice. That's super cool, yeah. So I think that kind of lends itself to maybe talking about, in general, just listing off a lot of the different technologies that this is compatible with. Yeah, so it's sort of an open question. Will you please list some? Please, Safi, list some technologies that this is compatible with, especially open source ones. Yeah, sorry, can you repeat that? Can you say it again? I didn't get your question. Oh, so we've been talking about this peripherally. We're mentioning a lot of different technologies, like all the different cloud providers, or Terraform, Crossplane, policy agents. What other technologies can you use AIaC with, or would you want to use this with? I understand. Okay, thank you for that. So it's not just for infrastructure as code. As I mentioned, we discussed Terraform and all of those solutions, but you can also generate a query for a database; it doesn't matter if it's Elasticsearch or MongoDB. I basically don't write a regex anymore. Using AIaC, I just use it to generate them for me. That's a big sell, yeah, that's great. Yeah, no more regex, awesome. So you showed the CloudFormation resources, and then you showed AIaC generating Terraform code for just a subset of those resources. And you said that would take 20 minutes or something to generate. Is that correct? Did I follow that correctly? To generate the Terraform code? No, it's around two minutes. Around two minutes, okay, excellent. So then that makes maybe the rest of my question moot. So if you took all of those CloudFormation resources, and it did them all at once and made them into Terraform code, how long would that take?
Yeah, so it's not just about how long it takes. In OpenAI, they have some limitations on the payload, and that's why they have a couple of models. I guess most of you have heard about GPT-3.5 or GPT-4, but they recently launched a new model that you can leverage for bigger context. And if you use this, it will be much easier for you to migrate the entire CloudFormation stack. It also takes around two or three minutes. Cool. So we talked about migration. We talked about some generation of stuff. Is there anything else you want to touch on? We talked about policy a little bit. Is there anything else you want to touch on in terms of capabilities? Policy, that's a great discussion. OPA, Open Policy Agent, is also a CNCF project, a great open source tool for policy enforcement. I know many products in the world that adopted it, some of them for policy enforcement, for authentication, authorization, et cetera. By the way, in Firefly, we adopted OPA for governing the cloud. Using our solution, you can run your own rules for detecting EBS gp2 instead of gp3, or publicly accessible assets, or Kubernetes deployments without CPU or memory limits, for all of those purposes. And then, going back to AIaC, you can generate new policies using AIaC. Oh, nice, cool. Awesome. Well, that's all of the questions we have from chat, and all of the questions from me personally. Is there anything else you'd like to go over, Safi? No, I think that we covered many things, and my takeaway from this discussion is, A, that you should leverage AI today. I think that everyone should use GitHub Copilot and use AI-driven solutions like that to streamline your day-to-day work. Never share secrets, neither with ChatGPT nor in your code. And you can take a look at Firefly. I believe that we can solve a lot of challenges regarding these pains. And don't forget aiac.dev, that's our open source tool. Oh, let me highlight it.
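A rule of the kind described above, flagging gp2 EBS volumes in favor of gp3, might be sketched in Rego roughly as follows. The package name and the input shape (the JSON output of `terraform show -json` on a plan) are illustrative assumptions, not Firefly's actual policy.

```rego
package terraform.ebs

# Deny gp2 EBS volumes; prefer gp3 (input assumed to be Terraform plan JSON).
deny[msg] {
  rc := input.resource_changes[_]
  rc.type == "aws_ebs_volume"
  rc.change.after.type == "gp2"
  msg := sprintf("%s uses gp2; prefer gp3", [rc.address])
}
```

A rule like this could be evaluated with `opa eval` in a pipeline step, failing the build whenever the `deny` set is non-empty.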
Here's the open source tool. Check it out: aiac.dev. All right, I'm gonna say goodbye. Thanks, everyone, for joining today's episode of Cloud Native Live. It was great to have Safi Janis. Thank you, Safi, for sharing your time and your expertise with us today. Safi talked about using AI to migrate infrastructure as code with the open source project AIaC. I loved also the interaction and questions from chat. Thank you all for participating. And here at Cloud Native Live, we bring you the latest cloud native code every Wednesday at noon US Eastern. So thanks for joining us today. Thanks to those who watch the recording, and we'll see you again soon. Thank you so much, y'all. Have a great day.