Let's talk about Ansible Lightspeed with IBM watsonx Code Assistant. My name is Ganesh Nalawade, and I work as a Senior Principal Software Engineer in the Ansible engineering organization. Hey, and I am Himanshu. I work as a Principal Product Manager in the Application Development BU. Now, before we start with all of this and bring AI into the discussion: before this session, Ramki asked the audience what you feel is most likely to replace our jobs, and surprisingly, the answer was AI. AI is here to help us become more productive; it's not here to replace us. And that's what we're going to showcase in this presentation with the new product that Ansible has launched, Ansible Lightspeed with watsonx Code Assistant. It is a friend, not a competitor.

But before we dive into this, how many of you are working with automation in your day-to-day job? A raise of hands: who is playing with automation on a day-to-day basis? And how many of you are using Ansible to perform those automation jobs? Wow, that's a good audience with Ansible experience. So before we dive into Ansible Lightspeed and how AI is going to play a big part in Ansible automation going forward, let us first quickly cover what Ansible is and why we use it, so all of the audience is at least at a foundational level.

So look at this guy. If you have seen this movie before, you know we call him The Dude. He's a DevOps guy in one organization. He's maintaining a couple of database servers, a couple of backend servers, maybe one proxy server and one web app server. So in all, he's managing around six servers. He's not using automation at all. His app gets a deployment, let's say, once in three months. He's able to do whatever he wants to do and chill out for the rest of the time. But then the app takes off, and with high traffic comes high business value, so you cannot afford any downtime anymore. So the Dude is not the Dude anymore. He starts feeling like this guy.
And if he continues doing everything manually like he was doing earlier, what do you think the likely outcome would be? The outcome would be something like this. That's his infrastructure right there. One simple error in setting the permissions of a file, and the outcome will be like this. We don't want that in our life at all.

So that's where Ansible comes to our rescue. With Ansible, you can automate all your servers, all your infra. You can also automate the configuration of your applications, and you can do the deployment of your applications for a variety of use cases. And let's be honest: you can have the best app in the world, like we discovered in the first half of the event, but if it's not deployed to the live environment, if it's not available to the end users, it's not adding any value at all to your business. So to have a successful app, it is very important to do successful deployments, right the first time.

From ITSM all the way to Edge, this is the complete portfolio that you can automate with Ansible. You don't need any other tool to manage your automation tasks. One single tool, Ansible, will help you integrate with your ITSM systems and automate your infrastructure, cloud, security, networking, and edge. Whatever you can do manually, Ansible will help you automate.

So let's quickly see what Ansible is about, for those of us who haven't used it in the past. It's an agentless, very straightforward system. You have a local machine on which you deploy Ansible; it connects to your remote servers via SSH, performs the tasks, and then closes the connections. There are two key things in Ansible: one is the Playbook and the other is the Inventory. The Inventory is a file where you define your servers along with their IP addresses and create groupings of those servers, so that Ansible knows on which servers it needs to perform the tasks or actions that you want to run.
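As a minimal sketch of the inventory idea just described (the host names and IP addresses here are illustrative, not from the demo), a static inventory in INI format might look like:

```ini
# inventory.ini — groups of managed nodes with their addresses
[databases]
db1.example.com ansible_host=192.168.1.10
db2.example.com ansible_host=192.168.1.11

[webservers]
web1.example.com ansible_host=192.168.1.20
```

Ansible targets the group names (`databases`, `webservers`) from a play, so each play runs only on the machines that belong to that group.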
And the Playbook is where the heart of Ansible lies. It contains the set of instructions that Ansible needs to execute on those remote machines. Those of you who are already working with Ansible know that we create a lot of Ansible content via Playbooks in YAML, and we execute it on our remote machines. And that's where Ansible Lightspeed will play a major role going forward. So now let's look at the Playbook in some more detail. Ganesh, can you help us out with how these are structured? Then we have a demo, actually a couple of demos, on how AI is going to do all of this for us, so that we can focus on high-value items and get the right recommendation each time we create this content.

A playbook is a bunch of plays. If you see a hyphen here, that means in the YAML world it is a list. So the first element here is a play, and you can give a description of what you want to do; in this case we want to configure a database server. Then comes hosts: you need to provide information about which managed nodes you want to perform those operations on, and these hosts come from the inventory file that Himanshu just showed on the earlier slide. After that, the connection: since I am connecting to a RHEL server I am using an SSH-based connection, but Ansible supports multiple other connection methods and endpoints to talk to. And I will need privilege escalation here, because I want to install certain things that require sudo privileges, so I will mention become: true. Then comes the tasks keyword. Tasks are again a list of things that you want to do on your managed node. If you give a description of what each task does, it makes your playbook readable, and it is a good practice while writing your automation content. The first thing here is that I want to initialize a Postgres config, and the module I am using here is the command module.
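For reference, here is a minimal sketch of the kind of play being walked through; the exact demo playbook isn't reproduced in the transcript, so the command path and package name below are illustrative assumptions:

```yaml
# Illustrative play matching the structure described above
- name: Configure database server
  hosts: databases            # group defined in the inventory file
  connection: ssh             # SSH-based connection to the RHEL hosts
  become: true                # privilege escalation (sudo) for installs
  tasks:
    - name: Initialize postgres config
      ansible.builtin.command: /usr/bin/postgresql-setup --initdb

    - name: Start and enable the postgres service
      ansible.builtin.service:
        name: postgresql
        state: started
        enabled: true
```

The leading hyphen makes the play a list item, and each entry under `tasks:` is again a list item with its own descriptive `name`.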
The command module is again a built-in module that ships with the Ansible binary. It basically runs a command on the managed node; here I am doing the initial setup of the Postgres server, and it will create a Postgres configuration file. Then, after this task is executed, the Ansible controller will move on to the next task and start executing it. The second task is basically starting the service, and it uses the service module. It also describes what it is doing, and the arguments for the service module are basically name, which is Postgres, and state, which is started; and after it is started, I also want to enable it. Similarly, there are other tasks in this playbook, but I think you get an idea of what a playbook looks like.

I also want to cover some other aspects of Ansible, like the configuration file. In my configuration file I have defined the inventory from which I want to fetch the information about the hosts that I want to manage. Right now I am using a static inventory, wherein all the information about the hosts is statically defined in a file, but you can also use a dynamic inventory if you want to integrate with your custom CMDB, or, if the host information is already available somewhere, you can pull it from there. There are a bunch of other configuration options as well. And this is what my inventory file looks like: it is basically just the name of a group, and within that group I can have multiple hosts, and along with each host I have mentioned the IP address. Then you can use host vars and group vars, so anything that you want to define at the group level you can mention in the group_vars folder, using the name of the group as the file name.
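The group-level variables just mentioned might look like this; the user name and values here are illustrative assumptions, not the demo's exact file:

```yaml
# group_vars/databases.yml — applied to every host in the "databases" group
ansible_connection: ssh
ansible_user: admin
# For production, reference a vaulted secret instead of a plain-text password:
# ansible_password: "{{ vault_db_password }}"
```

Because the file is named after the group, Ansible loads these variables automatically for every host in that group, with host_vars files able to override them per host.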
So in this case, since I am connecting to RHEL machines, the connection is the same across all the hosts; that's why I am defining this variable here. And then within the host you can define your login credentials. The password here is hard-coded just for the sake of the demo, but when working in production it is generally recommended to use a vaulted password, and so on. So this was a brief overview of what an Ansible playbook looks like.

Coming back to the Dude reference that Himanshu gave: when he decided to do automation, he still had some ground to cover. He needs to understand what an Ansible playbook is and which packages he needs to use for the endpoints that he wants to manage. (Switching to slideshow mode.) So when you start writing your automation, at Red Hat we believe that to get the most out of automation we need to drive it at the enterprise level, and that should involve multiple teams working together to create automation code. The code should be such that it scales across multiple domains, is able to talk to the multiple endpoints that are part of your IT infrastructure, meets the needs of the various teams that use it, and is trusted and well maintained.
So, IT automation is a key driver of operational efficiency, and it has the potential to free up your teams' time to innovate. But to do that, the people who write automation need to build skill sets that are spread across multiple domains; often the workflows that are built are time-consuming and complex to maintain, and it takes a lot of time and effort to build that kind of expertise within an organization. To solve these problems, the obvious solution we looked at was AI. So back last year we partnered with IBM to provide us with AI capabilities. The goal was simple: to infuse the power of AI into Ansible to help address the growing IT automation skills gap by making Ansible more accessible to a wider swath of IT professionals, and to help the people who are already writing code to write it more efficiently and faster.

These are the three main components of Ansible Lightspeed. It requires a really simple setup, nothing complex. The interface is the VS Code extension: around two years back, Red Hat published a supported offering of the Ansible VS Code extension, and it provides a lot of language-specific features like auto-completion, go-to-definition, and diagnostics; as you write code it also does some static analysis in the background and shows you the problems. So when we thought of AI, this was the right user interface, because users were already using it, and that helped us with seamless adoption. When a user writing code within the VS Code extension is connected to the Lightspeed service, the Lightspeed service is the backend service that provides the AI capability. As the developer writes code, the prompt reaches the Lightspeed service, which provides authentication- and authorization-related features, and then
it also does a bunch of pre-processing and post-processing. When a prompt is received by the service, it first anonymizes the prompt so that there is no PII in it, and then sends it to the model-hosting service. That's where the IBM part, IBM watsonx Code Assistant, comes into the picture, and that is hosted on Red Hat OpenShift; we have seen many talks around that this morning. On the model side we are using the Granite 3B model that IBM has trained as a foundational model; it is a code-generation model, and on top of that we have a lot of content from open communities, along with the in-house expertise of the Ansible and IBM teams. Once the prompt reaches the model server, it does the inferencing: it tries to figure out what the developer wants to do and generates a suggestion. That suggestion is received back by the Lightspeed service, and we again do some post-processing on it. We again try to filter out PII, because we want to be doubly sure that we are not leaking any of the PII that went into model training. After that, we check that the suggestion adheres to the good practices defined within the Ansible community, and that the syntax and semantics used are up to the latest standards, with no legacy code. After the post-processing is done, the suggestion is displayed back in the editor.

Moving on: while you are writing automation code, there are different life-cycle stages that this code goes through. The first is create. Create is basically generating multiple tasks for your task files and playbooks based on a natural-language prompt, and since the model is specifically trained for Ansible rather than for general purposes, the AI provides suggestions that are more accurate compared to general-purpose AI. Let's quickly see the
demo of how we can set up Lightspeed and get started with the VS Code extension. I showed this playbook initially, and it took me a couple of hours to develop it; now let's see how much time it takes to arrive at the same result. The first thing you need to do, once you have VS Code installed, is go into the Extensions tab, type "ansible", and install the Ansible extension. I have already done that to save time; I am using the latest Ansible extension, and if I check the runtime status of the extension, it is already activated. After that is done, go to the settings: go to File, then Preferences and Settings, type "lightspeed", and you will get all the configuration options that are available for the Lightspeed service. Enable the Lightspeed service, and the inline suggestions should also be enabled. This is the endpoint for the Lightspeed service; it is set by default, so you don't have to change anything for now. After that is done, go into the Explorer view, click on Ansible, and just hit Connect. When I click Allow, it takes me to the Lightspeed service login page. I have already logged in, so it will just ask me to authorize; it also shows you certain terms and conditions, and if you agree, you are redirected and logged in. So I am logged in as a licensed user, and I am also the administrator of my organization, so you can see that information here. The bottom of the screen is not visible; that is where the tab for Lightspeed is, and it showed the prompt that it is connected. But I wanted to show some more things, so I will just pull this up. With this setup, you can see in that bottom tab that Lightspeed has come up, the document type is identified as Ansible, and I am a licensed user here; if I hover on top of this, it shows me the logged-in user details. Now let's try to generate the code. Here I have the initial playbook
related boilerplate already in place, and then I provide the natural-language prompt for the task that I want to do. The first is a task named "Install postgres server", similar to what we saw earlier. To reach out to the Lightspeed service, just hit Enter; it triggers the Lightspeed suggestion, and hopefully a suggestion comes back. As you can see, it showed me the right package that I want to use, with the name postgres; based on your natural-language prompt it picks up the name of the package, and the state is present. If I hover on top of this, it also shows me the model ID that was used to serve this particular request; when you are logged in as a commercial user, you will get your own model ID that is the default for your organization.

What I generated right now was a single-task suggestion, but you can also do multiple tasks. Basically, I have combined multiple prompts here and I am asking for suggestions: install the postgres config, start the service, and allow the traffic through the firewall. All three things I want to do from a single prompt; I mention it in a comment and just hit Enter. Let's see if it gives me something. As the suggestion is being requested, you can also see ansible-lint processing files there, so it is running some validations on your machine as well. It provided some content here, but it is not what I actually wanted; AI is not perfect as such, and I need to go and make some modifications. The first task it suggested basically runs a bunch of commands, and some changes are required. When I tried this earlier, it provided me with what I wanted, so this just shows that AI still needs improvement and still needs human intervention. The next task is starting the service, and it picked up the name correctly; then it used the firewalld module that I wanted.
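A sketch of what such a multi-task prompt and an accepted suggestion can look like; the module arguments below are illustrative assumptions, not the exact demo output:

```yaml
# Prompt written as a comment; Lightspeed generates the tasks below it.
# Install postgresql-server & start postgresql service & permit postgresql traffic through the firewall
- name: Install postgresql-server
  ansible.builtin.package:
    name: postgresql-server
    state: present

- name: Start postgresql service
  ansible.builtin.service:
    name: postgresql
    state: started

- name: Permit postgresql traffic through the firewall
  ansible.posix.firewalld:
    service: postgresql
    permanent: true
    state: enabled
```

Pressing Tab accepts the inline suggestion, and Escape rejects it; either way, you review each task before running the playbook.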
In the third task, I basically want to create a podman container using a pgcontainer variable. This variable is already defined in my context, and the Lightspeed service is intelligent enough to understand the context, take it, and provide me with a suggestion that is relevant to what I want to do. So let me ask for a suggestion here and see what it provides; I need to uncomment this. Yes, it used the podman container module. If someone is new to Ansible, they wouldn't have known that they need to use this particular module, and the variables that are defined are then referenced in the Jinja template, just as I wanted. I had tried this prompt with a general-purpose AI assistant as well, and it doesn't provide code that follows the best practices. In this case, Lightspeed is using the fully qualified collection name, that is, the name of the plugin along with the collection it is part of, which is the newer syntax that came after Ansible 2.9. A lot of the code that is available in public repositories, the code used to train general-purpose models, would provide basically single tasks, and sometimes the tasks that are suggested might not even exist. That's not the case when you are using an AI model that is built for a specific purpose.

So Ganesh, in your last slide you showed us that there is something called content source matching. Because we are all human, we want to be recognized if we are publishing something, and that's what Lightspeed shows you. Can you just show us that bit as well? Coming back to your point, we wanted to do this because we wanted to build trust with developers: based on the suggestion that is provided, whether they should take that suggestion,
accept that suggestion; and also we wanted to give credit to the developers who have already written this code and decided to open source it, the code we used to train the model. We wanted to give them some attribution. To view these content source matches, all you have to do is go into the view panel, open the view, type "ansible", and it should take you there; I think it is already open on my system. Here you can see that for each task we have provided the attribution, or content matches: the top three closest source matches are shown, along with their information. You can browse through each of them and see where it is coming from; you can click on the Galaxy link and see the open-source code that was used, what license it uses, and whether that license matches your enterprise needs, and so on. That's one unique feature that we have built into the extension itself.

Right, and that's very helpful for actually seeing where the recommendation is coming from; then you can take action based on the confidence that you have in the sources. Earlier we saw almost 50% of the audience already using Ansible. So while Lightspeed helps you create playbooks going forward, Ganesh, what do we have for the existing content that creators have already added to their repos? Is Lightspeed going to help us with that content as well? Ansible is a decade-old project, and the DSL for Ansible has changed over time. The best practices used in the community have evolved as people have learned, and newer syntax has been introduced. When you have such legacy code, working legacy code, upgrading it to newer best practices often takes time, or there is no motivation to do it, or there is no easy path. That's why we came up with the Ansible Code Bot. I
will just quickly walk you through how to get started and how to enable that bot. Right now it is a GitHub app that we have hosted. All you have to do is go to the Ansible Code Bot link and click Install. After that, it asks you on which repositories you want to install the Code Bot. I will pick one organization, and within that organization there can be multiple repositories; you can enable the Code Bot on all the repositories, or you can just select one of them. For the demo I will pick only one, the network bgp1 repository. It also shows which permissions the bot requires from this repository: it needs access to the metadata, and it needs read and write permissions for code and pull requests. After that is done, I install and authorize the Code Bot. With my subscription it is already active, and I am all set to run the Code Bot on my repository.

There are two ways you can trigger the Code Bot: one is the manual way, and the other is on a schedule. I will show the manual way, because scheduling might not be possible within the demo itself. Let me set the topic: in the About section, all you have to do is add a topic; you can see the code bot scan topic, select it, and save. When I save it, this triggers the bot in the backend, and it will take your repository, look for better practices, and give you recommendations in the form of a pull request. While that is running, I will show you how you can do it in a scheduled manner. For that, you need to go into the .github folder; within that you need to have this file, and then the schedule. Right now we support monthly, weekly, and daily schedules, but over time we plan to add more configuration options to make it more customizable. And let's see if the pull request is raised. This pull request was raised just now; you can review the pull request changes that were
recommended by the Code Bot. This allows your team to review the pull request, test whether things are right, and then go ahead and merge it. Yeah, thanks for this. I think this is another key feature, because it allows you to do a scan of your legacy code, and if you do it on a monthly basis with your team, you know that all the best practices are being followed. Now, Ansible Lightspeed launched only five months back; it's a relatively new offering, so as with any AI tool there is a lot more to be done. So Ganesh, can you quickly walk us through the plan for Ansible Lightspeed and what we can expect? We GA'd just in October, and we are just getting started with AI; there are many more things planned for next year. The first thing we have right now is the multi-task generation that I just showed you, and then the content-modernizing Code Bot. In the near term we want full playbook generation: at the top you write a comment, or give a description of the playbook, and it should generate the entire playbook for you. Then we will be supporting REST APIs, which will allow you to integrate Lightspeed into your CI/CD pipeline; the Code Bot will eventually also be able to reach out to the Lightspeed service and recommend newer modules that are available, that follow better practices, and that provide more features. Using the REST APIs, you can also add more clients. The next thing we are planning is model fine-tuning. Many big organizations already have Ansible content that they have developed; they follow their own best practices and their own coding standards, and they would want the suggestions they see from the AI to adhere to those practices. That's why we will also be supporting model fine-tuning: basically, your Ansible content is taken and provided to the foundational model, and then the foundational model we
would be fine-tuning and retraining, providing a custom model for that particular organization. In the long term, we want to do content discovery and optimization: when you are writing a playbook, it will search whether a similar playbook already exists, and it will also provide features like debug capabilities, or point out scope to improve that playbook. Then content description: you select a playbook and ask the AI what the playbook is doing, and it provides you with a description. Then content controls: within an enterprise environment, the org admin can see how the Lightspeed service is being used; we need to send some data to make that possible, so that would be coming under content controls. And then a newer user interface, based on feedback; if there are more ideas we should support, we will take those up as well. Then custom post-processing and rulebook recommendations: we will be covering EDA in the next section, but we will also be able to provide rulebook recommendations with Lightspeed.

How do you get access to Lightspeed? You have been hearing about developers.redhat.com since this morning: go to the same website, find Ansible under Products, and you can get your d4i subscription started; you will get a step-by-step guide on how to start using it as of today. It does not require any enterprise license; as of now it is available for d4i users, but if your organization is already using Ansible Automation Platform, you can reach out to your admins; they can enable it via the paid versions as well. Alright, so that's what we wanted to cover, so we can move to Q&A. You had a query earlier?

My question is about the demo. First of all, I had seen that 1.YML was created manually, and after that 2.YML was created automatically through automation. I had seen that the number of
lines of code was less in 1.YML compared to 2.YML. I am a bit confused about which code we are supposed to use, because from a code-optimization standpoint, 2.YML has more lines of code, so in that case the architect won't approve it, right? Yes, so as I mentioned, the AI is not perfect. When you give an English prompt, it tries to generate a suggestion that is close enough to that particular prompt; it is trying to get the intent of the user, and sometimes the suggestions provided might not do exactly what you want. In that case you have to go back and change the English-language prompt; it is basically prompt engineering, and the more details you provide within the prompt, the better the suggestion you receive. So in the 2.YML you mentioned, it generated more commands, and you would have to go and review them. When you want to run those commands, you obviously know which commands you want to run; only then can you go and automate them. So when you review those suggestions, you take a call on whether to accept or reject them: by pressing Tab you accept, and by hitting Escape you reject. Based on that, we get the feedback that after accepting, the user made some modifications, and on the backend we eventually go and try to improve the model. So as you keep using it, the model becomes more mature, and the suggestions get closer to what you want to do.

My second question: Ansible Lightspeed is using IBM Watson, and you just said that Ansible Lightspeed was launched in October. Watson is very old; as far as I know, Watson was one of the first breakthroughs in artificial intelligence. So since it is using Watson, why wasn't the accuracy already there? I think you are confusing it with the earlier Watson. Watson has many capabilities, and
one of the capabilities that we have added is code generation. Code generation didn't exist before; we started work on it last year. Last year in October we announced the tech preview, and in May this year, during Red Hat Summit, we announced the GA that we delivered in October. So all this work is very new, very latest, and cutting edge. The foundational model that is hosted by watsonx is trained on a lot of sources and a lot of programming languages so that it works as a code-generation model, and when we used the Ansible YAML syntax to do the fine-tuning, the model started understanding that this is the YAML syntax it needs to generate. So what we are using is very cutting edge; Watson was the earlier AI offering, and it provided multiple other things in the past, but what we have right now is the latest one. So can we say that Ansible Lightspeed is a kind of artificial, not generative, AI? It is a generative AI. Thank you.

Sorry, about Copilot: Copilot is again a general-purpose tool; it works for multiple languages, and even if you try some of these prompts, the suggestions it provides would be a single task name; they wouldn't follow the FQCN, among other things, so the YAML syntax that it generates is not correct. If you try to use Lightspeed and compare it with Copilot, you will eventually figure out that Lightspeed provides more accurate Ansible content, because it is specifically trained for that particular purpose. So does it support Q&A as well? Right now, no; we have that planned under content explanation. And if you see, we have analyzed usage patterns over the last couple of months, and we have seen that the user acceptance rate of the suggestions is on the higher side, which is very high compared to general-purpose AI. And it supports the other modules as well, right?
I mean, not just the built-in modules. Yeah, it works for any module that is out there on Ansible Galaxy. The more specific you are in your prompt, the more likely it is to provide you with the module or task that you actually want to use. And once a suggestion is provided and you accept it, you can again use the language-specific features to try to understand what this particular module is doing: you can go to its documentation, see the examples, and figure out whether this is the exact module that you want to use; if not, improve your prompt and try again.

Hey, one question. The main reason why such an AI would be there is that I can assume that whatever Lightspeed gave out would be the most optimized. Do we have a confidence factor there? Say Lightspeed suggests something, and I still have to go google whether this is actually the best one; that sort of defeats the purpose of using it. So you don't have to google it at first: when you give the English-language prompt, it provides the task as the response. Yeah, that's what I am coming to. So when you get the response, you wouldn't accept it and deploy it directly into production; you would want to first test it on your local machine, run the playbook, and see whether the changes you want are actually made. If that is not possible with the recommendation provided, then you might want to go to Google and get help from there. I am pretty sure it will be functional; obviously, a dedicated AI for this will be functional, but is it optimized? Because that's the goal here, right: that what is getting generated follows all the best practices possible. So that's the post-processing step I told you about in the Lightspeed service. The data used to train the model contains all the legacy code as well as the newer code, and the amount of new code is quite a bit less; and the more data,
the higher the probability of giving suggestions in that format. So a general-purpose tool provides you syntax that was used by older Ansible versions, but in the post-processing we do exactly what you are saying, the optimization part: we check whether the suggestion follows best practices; if there is legacy syntax, we convert it to the newer standard, and if there is deprecated syntax, we ensure that the latest syntax is used. Got it, got it. And you think over time it will get better? Yes; as with anything, as you use it, it will keep getting more mature. Got it. One final question: is this open source right now, the full version? Because you said it's licensed. So there is an open-source tech preview version: you can log in with your GitHub ID and start using it, but that will soon be sunsetting, and eventually you will have to access it via developers.redhat.com to get the trial period; and if you see value in that, then you can move to the enterprise version.
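To close with an illustration of the legacy-to-modern conversion discussed above, here is a sketch of the kind of rewrite the post-processing step and the Code Bot aim for; the specific module and package name are illustrative assumptions:

```yaml
# Legacy style: short module name with key=value arguments
- name: Install postgresql
  yum: name=postgresql-server state=present

# Modern style after conversion: fully qualified collection name (FQCN)
# and native YAML arguments
- name: Install postgresql
  ansible.builtin.dnf:
    name: postgresql-server
    state: present
```

Both forms do the same thing; the second follows current community best practices (FQCN, block-style arguments), which is what the post-processing normalizes suggestions toward and what the Code Bot proposes in its pull requests.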