So welcome to this session, dedicated to a hands-on Ansible 101 workshop on demand. I'm working with my friend Frédéric. Frédéric, where are you? Are you with us? Can you hear us? Yes. Say hello to everybody. Can you hear Frédéric in the room? Sort of, yes. Maybe better with the mic. Say hello again. Hi everyone, and welcome to this session. Great. So Frédéric is working from Grenoble in France, where I'm also living, but I'm the one who has been elected, between the two of us, to be here to make this presentation. So I'm now sharing my screen with you, okay?

So we want to give you a bit of information about the Ansible tool through a Workshop on Demand. This is a platform we have developed with my friend Frédéric, which gives you access to a certain number of workshops, available 24/7, all the time. You can register, run a workshop, and learn some new stuff around a certain number of tools; we will detail that in just a couple of slides.

My name is Bruno Cornec. Like Frédéric, I'm working for Hewlett Packard Enterprise, based in Grenoble in France. I've been working on Linux since '93, involved in a certain number of open source projects, upstream and downstream, because I'm also a packager for the Mageia distribution. I've done stuff around governance as well, I do some music when I have time to do something else, and some coding. Frédéric, do you want to introduce yourself?

Yeah. So my name is Frédéric Passeron. I've been at HPE for the last 20 years or so, and unlike Bruno, I don't have that much of a background in Linux and open source, but I'm working on it, as you can see. I have a strong focus on solutions: I've worked on many of the infrastructure solutions that HPE has put in place over time, and I'm currently part of the HPE Developer Community team within HPE. I'm actually responsible for the Workshops-on-Demand platform that we'll be using. I do a bit of coding along with Bruno.
I'm also building some kind of hi-fi streamers based on Volumio; that's what I do when I'm not working at HPE and not playing volleyball.

So the Hack Shack is a virtual place where all the developers can gather and actually figure out new things, learn new technologies. It was introduced at KubeCon in Austin in 2017, and the idea is really to share among the developer community as much as we can, and to tell the world that HPE can provide you with some good things: apart from very good servers, we also have very good solutions and very good, I would say, open source projects that we are dealing with. So give it a try, and if you don't know the website, it's hpedev.io, and that's all you need to know from me for now.

Okay, thank you. So in order to introduce these Ansible concepts, we will use a certain number of technologies behind the scenes. One is the Jupyter Notebook technology, which is hosted, as Frédéric said, on his infrastructure. Is everybody familiar with Jupyter notebooks? Who is not familiar with Jupyter notebooks? Okay, maybe fifteen people here in the room. So the goal is to give you documentation as well as code cells in the same document, but the code cells are active. We write what you would have to type as a command, but you can run the command and get the result, and all of that is in a single document where you have both documentation (text and images explaining the architecture and how things work) and the live command lines you would use to perform the operations. You can run them: they will execute the code and give you the results.
So it's a nice way to create teaching documents, I would say, for people. If I can add some comments on this: originally, the notebook technology was actually used by many AI and ML engineers working on data sets that they wanted to crunch somehow. Within the same frame, the same web page I would say, they can document the different calculations they are doing on their data sets, along with the Python code, or code in other languages, that they're using to actually crunch the data. We found out about this a few years back, and we found the approach was actually great, because the possibility to embed, at the same time, some documentation and some living code cells is really convenient. I mean, we used to have a document on one side, in a PDF, and a PuTTY session on the other side, and we would copy and paste between the two, ending up in endless issues of pasting PDF stuff in there.
So this is just absolutely great; it's fairly easy to implement, and we encourage you to look at it, I would say. And maybe it's worth saying that we are currently open sourcing the Workshops-on-Demand program too, so all the automation that we built around this will be made available for people to leverage over time.

Yeah. So the link here is a live one: you can go to the site, book a workshop as you want, and run it for a certain amount of time. You have a window of two up to four hours to be able to perform the workshop. The workshops are available all the time, and the resources are allocated on demand: once you have registered, we generate the right back-end environment for you, so that you can run your workshop. That's the mechanism we are working to open source and split from the content: we will open source both some content and all the mechanism behind the scenes, so that you can reproduce, in your own environment, in your own company or structure, the same type of learning experience for your own users.

So we will focus on Ansible today. This is an automation tool. It is said to be an agentless tool; in fact, that's not completely true. There is a notion of an agent behind Ansible, but the agent is SSH. The communication between the master, the place where you manage all the information, and all the clients, which are generally servers, is done through SSH. The master point of control connects through SSH to the clients and launches operations, using the Python language behind the scenes: generating Python scripts and executing those scripts on the remote target, to put the target system in conformity with the rules that you have described on your central point of control.
So that's really the goal: everything you do manually as a sysadmin, you can automate through YAML configuration files. You describe how you want the system to be configured, you give that to Ansible, and Ansible will connect to the remote system, pass all the rules that you have described, and put the system in conformity with everything you want it to set up.

So, if you're a developer, why should you care? Because this is the notion of infrastructure as code. What you really want to do is to behave as a sysadmin in a similar way as you behave as a developer: you want to be able to code your infrastructure in YAML files, to describe your infrastructure, and to have a tool that deals with the complexity of deploying, testing, and managing the results of all the rules that you have put in place. Frédéric, any other comment?

Well, a single one. I mean, you said it's agentless in the sense that the agent is SSH, but the way it works is fairly simple: you make the SSH connection, you copy over the Python scripts, you execute them, you get the result and send it back to the main server, I would say. So the agent is really a temporary agent: you copy over the scripts, you execute them, you send the result back, and then the scripts are actually deleted from the target machine. So even though, as you said, it's called agentless, it's not that agentless really. And the beauty of it is the possibility of managing really large systems in a programmatic fashion, I would say. So let's have at it.
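To make the "describe the state you want in YAML" idea concrete, here is a minimal sketch of a playbook (this is not from the workshop itself; the package and service names are just illustrative). Ansible connects over SSH and changes the target only where it does not already match the description:

```yaml
# Hypothetical sketch of infrastructure as code with Ansible:
# describe the desired state; Ansible enforces it over SSH.
---
- hosts: all
  become: true            # escalate privileges on the target
  tasks:
    - name: Ensure the time-sync package is installed
      package:
        name: chrony
        state: present
    - name: Ensure the time-sync service is running and enabled
      service:
        name: chronyd
        state: started
        enabled: true
```

Running the same playbook twice changes nothing the second time; that idempotency is what "putting the system in conformity" means in practice.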
So maybe, yeah, you have a pointer now that you can use to reach the workshop registration page. All you need to do is copy and paste, or type into your browser, this URL, and you will end up on this page saying "Ansible 101: Introduction to Ansible concepts". The video has not been recorded yet; all our workshops are usually backed with a video, and we will be recording this one by the end of the month, if I remember well; we set up a call for this.

So you click on the register button and you fill out the different fields: the company name, the company email, your email, your name. For the company name, just put "OSS 2022", because we'll be looking at how many of you actually register live, and that will be an interesting figure for us. Then register. You will see a Grommet, which is the small funny guy on the left, the Grommet mascot, I would say, and this registration window pops up. You need to acknowledge the terms and conditions, and as stated in these T&Cs, you will have a given four-hour or three-hour window, depending on the workshop. But keep in mind that whenever you click the register button, the automation kicks in and the clock starts ticking at that moment. Don't register and leave it going, saying "oh, I'm going to do the workshop later in the day". Whenever you hit the register button, you need to have some time, right then, to perform the workshop, because the automation starts just as you hit the button.

Okay, so not everybody in the room has a laptop with them, so you will be able to do this later on. Keep the QR code or keep the URL available, and you will be able to replay what we will do now, live, at your own pace, later on, without any problem, in the hotel, wherever you want. As Bruno mentioned, it's available 24/7, all year long, and it's completely free. I mean, we don't ask for any credit card number. Yep, and we never will, don't worry.

And so maybe you can move to the next slide. So whenever you hit the register button, the automation kicks in, and within a minute or two you should receive an email. The first one is a welcome email, telling you "welcome, and thanks for participating in the Workshops on Demand", and so on. It provides you with the necessary information, because for all the workshops, as I said, we normally have a replay as a backup, and you have a dedicated Slack channel on which you can go and ask questions if you have some issues. The second email, which should come in very shortly after, provides you with the credentials that are necessary to connect to the Jupyter Notebook environment. You will have a username and a password, and a "Start workshop" button; whenever you click on it, it opens up a browser to our JupyterHub environment, and on the login page you enter the username and password that were provided to you in the email.

Okay, once you log in, you will land directly on the README file, which is the README of the workshop. Every single workshop that we create has this README, and every single workshop starts with it. It is really there to set the stage of the workshop, and it also provides you with information about how to handle the Jupyter notebook. If you're not familiar with the Jupyter Notebook interface, as we were explaining earlier on: in the main frame, on the left-hand side, you have the browsing panel that shows you the different notebook files you
have, the pictures that may be embedded in the markdown cells, and so on; on the right-hand side is the rendering of the notebook. Depending on the type of cell, it will be markdown or code: the markdown cells are really, as we said, instructions and explanations, whereas a code cell is a small cell where code is actually executed.

Whenever you run a code cell, you will see a star appear on its left-hand side; this means the code is executing. When you get a number instead, it means the code has been executed and you can move on to the next cell. To run a cell you can use the play button at the top, or you can use the Ctrl+Enter or Shift+Enter keys: Ctrl+Enter runs the current cell, and Shift+Enter runs the current cell and moves to the next one.

For the code cells, what you can run depends on the different kernels that are installed in the environment. Depending on the workshop, you may use a simple Bash kernel, which provides you with a Bash environment when running your code cells, but there are many, many different kernels available in a Jupyter Notebook environment. In our workshops, we're leveraging Python.
We're leveraging Rust, we have Go and others; there is a very large list of kernels available, depending on what you want to achieve. In our environment we mainly stick to Bash and a few language kernels, and that's enough for what we want to achieve, which is simple examples of automation around either open source tools or APIs that are embedded in some of our solutions.

Okay, so let me refresh and see whether some people have managed to register for some workshops. Just as a reminder: if you look at the main portal, you can get here and find a certain amount of information about what we do in that community. This is the Hack Shack portal, where you have access to the workshops themselves and the replays. You have the main registration page with all the Workshops on Demand available; the one of interest for us is the Ansible 101 one. You click on the register button, and then you have access to this environment once logged in on the system. So once you have registered, you get the login and the password, and you are able to access that environment. Nine people registered. Okay, great, thank you.

So now let's move on to the interesting part, which is the Ansible 101 workshop itself. You have some instructions at the beginning, which also give you the different parts of each workshop on demand; this one has four different parts. Maybe I need to refresh my environment a bit to get the images. No? Yeah, it's just on your side. Okay, so I think we will go on with showing you the workshop. For the people who want to do it at the same time as us, please go ahead and log on to the system. Are you able to register? You can't hear me? Can you hear me?
Yes, yes, we can hear you. Yeah, okay, go on.

As we saw, the README is just telling you about the different concepts around the notebook itself. Now, we've already talked about the architecture of Ansible 101. This is a very small diagram providing some details about how the whole thing works: we have an Ansible management engine, we have some targets on the right-hand side, and we communicate through SSH. What we'll be executing is, obviously, playbooks. The inventory file is very important; we'll see that later, because this is the place where you list all the different targets, and you can group them depending on your logic and setup. So we'll see about this.

Just to state the purpose of Ansible: when I started in IT, probably 20 years ago, I started with tools like Intel LANDesk; I went with Rembo in the very early days of HP NetServers; then we moved on with the Altiris eXpress Rapid Deployment Pack, and so on. These were all the different tools to manage the deployment and configuration of many different servers. I would say I was very pleased to discover Ansible a few years back, when I started working with cloud solutions like OpenStack, because it would save so many hours of work. Having to manage a complete OpenStack environment, with an HA configuration, multiple controllers, and different clusters for database, monitoring, and so on; I mean, I could set up an OpenStack environment made of about 25 servers in less than two hours, leveraging Ansible playbooks, and that was just wonderful. This is why this engine is so cool, I would say.

So, as we were saying, Ansible comes with different sets of, I would say, tools or concepts. The modules are the very first one we need to talk about. They're fairly simple.
I mean, these are the heart of Ansible, in the sense that they provide you with a set of tooling that you can actually use against your target. Maybe it's worth clicking on the Ansible modules link to give people a chance to see the list of all the different modules that are available. They are very, very numerous, and you will see there are tons of them, depending on what you want to achieve: from a single server type of thing on a POSIX system, to AWS or Azure, to, I would say, industry players like HPE or Dell, there are plenty of them. You can achieve so many things with those modules; that's one of the great things about Ansible, because you don't have to start from scratch. Keep in mind that many people have worked on Ansible before you, and depending on the type of configuration you may want to achieve, take a look at the list: you'll probably find what you need, rather than rewriting something that already exists, I would say. As you can see, from Docker to EC2 on AWS and so on, there are just tons of them.

Okay. If you go back to the workshop, I provided some simple examples of what HPE is providing when it comes to infrastructure-related modules, I would say. There is obviously predefined stuff for Alletra, Primera, 3PAR, and SimpliVity, and there are other things you can use for other HPE-related solutions.

Collections are a different thing.
So a module is just a script that you can execute on the target. A collection is a distribution format: it allows you to gather different Ansible-related things (playbooks, roles, modules, plugins, everything related to one single subject) and package them in a single format, I would say. As an example, in there you have a OneView collection: it provides you with all the necessary tooling that HPE can provide for our infrastructure management software, which is OneView. There you can see that we have all the different sets of plugins, and it keeps on growing: when we last ran the workshop, back in March, we were on version 6.6, I think, and it's 7.2 now. They're going way too fast for me, I think. The collection is fairly simple to install: as you can see, there is a command line, ansible-galaxy collection install followed by the name of the collection, and it actually sets up the collection in your environment.

The module utilities: well, there is some code sharing in Ansible, in a certain sense, meaning that multiple modules may use the very same code. Ansible stores those shared functions as module utilities, to minimize duplication: if they are used by multiple types of modules, they are shared. You can write your own; they are only available in Python and PowerShell. We won't really make use of them today.
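As a sketch, installing the OneView collection mentioned above can be done directly from the command line, or declared in a requirements file so a whole set of collections installs in one go (the file path is the conventional one, not something from the workshop):

```yaml
# collections/requirements.yml -- declare the collections a project needs
collections:
  - name: hpe.oneview          # HPE OneView modules and plugins
# Install everything declared here with:
#   ansible-galaxy collection install -r collections/requirements.yml
# or install a single collection directly:
#   ansible-galaxy collection install hpe.oneview
```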
Plugins are here to augment the Ansible core functionality. The modules execute on the target, in many separate processes, whereas the plugins execute locally, I would say, within the Ansible process. They offer more advanced functionality and features for Ansible.

The inventory, as we said earlier on, is one of the keystones of Ansible, because Ansible obviously needs to know about the different targets that it will reach out to and work with. The default format for the inventory is an INI file, in which you can group the different machines that you want to use. In this example, we can see a simple web server group and a database server group, but keep in mind that there could be more: if you were to talk about OpenStack, for instance, there would be a monitoring server group, a control cluster, an ELK cluster. You can also group them using other types of approaches: it could be geographical, for example, based on the location, or based on the type of machine. It's really up to you to decide what kind of logic you want to use in this inventory file.

The roles, actually, are, I would say, the ultimate goal you should be looking for when dealing with Ansible. In the workshop, you'll see that we start at the very lowest level, which is running commands from the Ansible CLI; then using a module; then creating a task that wraps the module we've been using; and this will be integrated in a playbook.
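The INI inventory and the grouping logic just described might look like this sketch (the host names are hypothetical, and the geographic group shows the "group of groups" idea):

```ini
; inventory -- group machines by role, then regroup by any other logic
[webservers]
web1.example.com
web2.example.com

[dbservers]
db1.example.com

; a geographic grouping built from the role-based groups
[grenoble:children]
webservers
dbservers
```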
Okay, and this playbook can have multiple tasks. On top of the playbook, you can actually create a role, which will be a set of playbooks targeted at different types of machines. And the roles, as you can see, can be divided into many, many different categories. For instance, it will start with a "common" category: this is the base of every single configuration that you want to achieve on every single machine you are dealing with in your Ansible environment. Common is just operating-system-level configuration: it could be the type of security you want to implement, the latest updates and packages that you want in common on every single node that you deploy. Okay, and on top of this you add layers, and they can cross over. I mean, everybody gets "common"; then you will have a "webserver" role with dedicated tasks to install Apache and configure Apache, and there might be some dedicated tasks to implement HAProxy and this type of thing; and then you move on to maybe another group, "dbserver", but the dbserver might have some tasks in common with another group of servers, because they are both HA-related, for instance, and so on.

Within each role, you can see that there are tasks, defaults, files, templates, variables; for each of them, you can actually leverage some other tools or functions that are available in Ansible.
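The role layout just described follows Ansible's conventional directory structure; a sketch of the tree, with the hypothetical role names from the example, looks like:

```text
roles/
├── common/              # applied to every node: OS updates, security baseline
│   ├── tasks/main.yml
│   ├── handlers/main.yml
│   ├── defaults/main.yml
│   ├── templates/
│   ├── files/
│   └── vars/main.yml
├── webserver/           # Apache install/configuration, HAProxy, ...
│   └── tasks/main.yml
└── dbserver/            # database-specific tasks, shared HA bits, ...
    └── tasks/main.yml
```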
So a task is just what you will be executing, in a series. The handlers are kinds of reactions: something you want to run in response to a given task, if something changed or went wrong; that's something you can use. There are the libraries that you want to use; the defaults are the default variables you will be using; there are dedicated variables for each group, and so on. You can also use templates, and we'll see that later, maybe in a moment.

The playbook, as I said, is a way to orchestrate a series of tasks within a single file, okay, while providing a way for you to build up the layers that you want to implement on a given system, or on multiple systems. Okay, so you can use simple operating system commands in a playbook, and you can use the modules as well. And you will obviously have to define on which targets you want to run the playbook. In the example that we have, you can see that the target is the webservers group. There is a "serial" keyword, in the sense of: how many servers do I need to work on at one point in time? Then there are the roles I want to apply: obviously the common one, which is the operating system layer as I said, plus the web application layer, and so on. So the playbook provides you with this structure; it's a simple script, but it's defined in a YAML format, I would say.

The variables: all systems are not equal, and you need to define and differentiate them somehow, if you want to run the very same command on multiple servers. Take a simple example, like an IP address, a name, a user name, this kind of thing. These are variables that you can define and leverage, to be substituted at some point in your YAML file. In the example here, you can see that the application server host has different variables for the application path. The base path is between curly brackets, meaning that it comes from a Jinja2 template, and at deployment time the variable will be substituted by Ansible with the value that has been defined as the base path. I will leave it to you, Bruno, to explain the templates and Jinja2, if you don't mind. Yes, thank you. You don't hear me? Okay.

So let's go back to the templates. The notion of variables here is interesting when you want to do overlays, for example: you may have different types of environments, a production environment, a development environment, a test environment. They may all be addressed the same way, but have a different URL, a different path, a different whatever. Variables are one way to generalize a notion and instantiate a differentiation between the different types of systems that you have. And as Frédéric was mentioning, this uses the Jinja2 templating system, which comes from the Python world; it is a very powerful templating system which allows you to do this type of variable substitution, but also to add some control structures on top of what you do in your normal YAML files. For example, you can write loops over a certain number of systems, and you can write tests: you can say, if this system is a test system, or if it's a prod system, I do this type of code generation or
that type of code generation. So you have a lot of possibilities, through Jinja2, to create dynamicity in your environment and in the description of the infrastructure that you want to create. You have the curly bracket and percent pair, which marks the beginning and end of control statements, for loops and if/else statements; you have the double curly brackets, for variables typically; and you have the curly bracket and hash pair, which is used for commenting your templates. We will see, during the execution of the Ansible playbook, how we can use and leverage this templating system to address the two target systems in the environment differently. I will not detail the search path too much; you just need to know that there is a search path in Ansible that you can modify and adapt the way you want.

Okay, let's go to lab 2 and start to run some stuff. As we said, before running anything, we need to know which systems we want to target, which systems we want to configure and manage. Here we will have two systems: a local system, the machine which runs my Jupyter environment, and another target system, a different VM, on which I want to push and control some configuration items. So typically, to start with, I won't begin with anything, no inventory, nothing; I will just start with some command-line examples, to give you an idea of how it's working, and also of how the Jupyter interface works, for those of you who don't know it. Let me put it here. Okay, so typically, if I press Shift+Enter on that first line, I execute what is documented here: I ask Ansible to look at the localhost machine. Sorry, no, I can't, maybe. Is it big enough now?
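Before following the demo, here is a small sketch of the three Jinja2 constructs just listed: control statements, variable substitution, and comments. The template, variable, and group names are hypothetical, not from the workshop:

```jinja
{# templates/app.conf.j2 -- a comment: this never appears in the output #}
base_path = {{ base_path }}   {# double braces: variable substitution #}
{% if env == "prod" %}        {# percent braces: control statements #}
log_level = warn
{% else %}
log_level = debug
{% endif %}
{% for host in groups['webservers'] %}
backend {{ host }}
{% endfor %}
```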
Okay, should be better like that. So what this command line asks Ansible to do: ansible is the main Ansible command, not the one we will use later on, but the main command. We want it to act on the localhost machine, which is the local system. Everybody's comfortable with the notion of localhost? Okay. And we call one specific module of Ansible, which is the ping module. It is like the ping command on the command-line interface, except that no ICMP is involved here: you send a request to the machine, and you get a reply from the machine if everything works fine. And here we have the trace that Ansible generally gives to you: this is a SUCCESS operation, nothing has changed since the last time, and to the ping we launched, we got a "pong" response from the localhost system. So this is just a simple one-liner to show the basic mechanism of how Ansible works.

We can of course change modules. We have other types of modules, and here we run the uptime command. I type Shift+Enter, so this is again executed live, and I see the answer that Ansible provides to me, which gives me the uptime of that system, which has been up for a certain number of hours and days now. Okay, so that gives you an idea of what can be done, and we use those tools as a debug mechanism when we are setting up, especially, the communication with the remote system: to be sure that we can communicate correctly with the target and don't have any problem at that level. That's one way to help with the setup.

Now let's try to do something more interesting: writing our first playbook in YAML. What the block here does: this is shell, asking the cat command to create that test.yaml file with a here-document, up to the EOF marker, and we print the test.yaml file at the end. So if I press Shift+Enter, it tells me that the file has been created, and it contains that content. So this is YAML.
You have different types of information, and a specific format to respect, a number of spaces to respect; it is pretty strict with regard to the format that you need to use, but it's pretty easy to read and to understand, also over time. It's not like a programming language where you need to understand the logic behind it; it's just a set of instructions, giving you a declarative way of performing operations on the system. So here I say: okay, I want to work on the host localhost; I still have just one for the moment. And I don't want to gather the facts on that system. Ansible gives you the possibility to query the system before launching anything, and to put into variables a large set of information that it gets from the system: IP addresses, MAC addresses, distribution name and version, set of packages, etc. It gathers a lot of information and creates variables, so that you can use those variables yourself in your playbook, to perform different types of actions depending on their values. Here we don't want to use that yet; we will later on. And we have just one task. So this is a playbook which is completely equivalent to the previous ansible command line: we just use the ping module, which is named here, and there is no parameter, because the ping module does not take any parameter. And we need to give a list of tasks.
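Based on this description, the generated file is presumably equivalent to the sketch below (a reconstruction, not a verbatim copy of the workshop's file):

```yaml
# test.yaml -- first playbook: one play, one task, no fact gathering
---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: ping
      ping:
# Run it with:   ansible-playbook test.yaml
# Debug it with: ansible-playbook -vvv test.yaml
```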
So we just have one task here; the task has a name, which is "ping", and it calls the module to perform the ping. So, exactly the same as the previous one, but this time I use the ansible-playbook command, and I pass to that command the name of the playbook I have written, in the YAML format, just a couple of seconds before.

Yes? Okay, I just wanted to mention that it would be relevant to explain to people the -vvv option, just to show the verbosity of Ansible, and how the overall process of copying the modules over and executing them works, through SSH, or locally when you run them on the localhost. Okay.

Okay, so if you want to debug that ansible-playbook operation, you can, and it is also interesting to see that the Jupyter Notebook environment you have here can be modified on the fly. Up to now, I have just run existing commands, but I can change a command: I can say, okay, I want more debug on that command; if I know the command and how to pass options to it, I can change it dynamically. And now I have a bit more information on what ansible-playbook does: the context of execution at the start, then the name of the playbook, and the ping module itself becomes more verbose and gives me a bit more information. I can increase the verbosity of my command by adding more v's to it. Be aware that if you have passwords, for example, that are masked in normal operation, they may then, at that level of verbosity, appear in clear text in the trace that you are generating. So here you see all the gory details of what happens behind the scenes: Ansible is creating a certain number of elements that you can see here. It creates a temporary directory.
it creates a set of Python scripts, includes the Python modules it needs, generates all that, pushes it to the machine, and executes it. Here, as we are on localhost, there is no SSH connection yet — we will see the SSH connection later on when we deal with a remote system; for the local system it's just executed locally using a shell command, as we see here. So we can get all the level of detail we want to help debug an issue we are encountering. Okay, so that was to give you an idea of how we started with it. Now we want to address more than one system, so we want another target. We create an inventory file which describes a group called target; that group has just one IP address, which is the target system we have in our environment. We create a new playbook, and that new playbook uses that target group. Here we have just one machine behind it, but there could be ten systems behind it in the same way, and Ansible would iterate over the ten systems to perform the same operations on each of them. We still don't gather facts, and again we just do a ping on that system. So we create the playbook and we execute the playbook... No hosts matched. Okay, did I miss one? I missed the inventory, of course. So I need to first create the inventory; now it is created, and the playbook will be able to contact my remote system and say: okay, I can ping the remote system — which is nice, because I want to do more stuff with that remote system. Okay. So we will see indeed that there is no shell executed here, but that we have an SSH communication: reading the OpenSSH configuration, then the execution of the ssh command to run the ping operation on the remote host — a ping which is run as a Python script — and getting the results.
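The inventory and the group-targeting playbook described here might look roughly like this (the workshop's inventory is INI-style; a YAML-format equivalent is shown for consistency, and the IP address is a placeholder):

```yaml
# inventory.yml -- one group called "target" with a single host
target:
  hosts:
    192.168.1.10:
```

```yaml
# ping_target.yml -- the same ping, now addressed to the whole group
- name: Ping every host in the target group
  hosts: target
  gather_facts: false
  tasks:
    - name: ping
      ping:
```

It is run with `ansible-playbook -i inventory.yml ping_target.yml` — forgetting the `-i` flag (or not creating the inventory first) gives the "no hosts matched" message seen above.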
So you see all the verbosity you can get in the dialogue, and you see that the dialogue between the control system and the remote machine is pretty intense. Okay, I would like to skip this one, Frédéric, because I don't think it's so useful here. So this is the configuration file: you can modify the way Ansible behaves by changing some parameters in the configuration file, but I'm not sure it's that useful here. So let's now create a new play. The play again targets the same target system, doesn't gather facts, and again pings the remote system. Now we will modify the inventory — everything works exactly as previously; we just modify the inventory and add a second system to it. So now we have two targets in our target group, and when we execute the playbook again — and that's where Ansible really makes sense, when you have multiple systems you want to play with — the same playbook, the same ping operation, is run on the first system, which is the local system, and on the remote system as well. For each execution a status is returned, and you have a summary at the end which tells you what has changed: whether the system is reachable or not, whether there was a failure, whether the system was skipped because of a condition, and so on. Here we have two systems; you could have 20 systems or 100 systems, and it works exactly the same way. So now that we have seen the basic commands and how they work,
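Extending the same YAML-style inventory sketch to two targets — the control node itself plus a remote machine — might look like this (the remote address is a placeholder):

```yaml
# inventory.yml -- two hosts in the same "target" group
target:
  hosts:
    localhost:
      ansible_connection: local   # run directly, no SSH needed
    192.168.1.10:                 # placeholder remote address
```

The playbook itself is unchanged; Ansible simply iterates over every host in the group and reports a per-host status plus the final summary.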
let's go into more features of how Ansible works. First we will use the notion of variables that we saw earlier during the presentation, and we will define some variables. We create a vars directory, which will appear here on the left-hand side in my environment, under my subdirectory for the Ansible workshop, and I will say: okay, my student name — so I'm Bruno, I'm student ID 75, and I am in Dublin. So these are the variables that will be put into a specific file: variables which are local to my system, and variables which are dedicated to my target remote system, and I will be able to use different values. The variables have the same names, of course, between the two files, but they point to different values depending on whether I'm using the local system or the remote system. Okay. So the variables files are now created, and now we create a playbook which will take advantage of those variables. The playbook first targets the host localhost; we include the local variables, and we create a text file. So this is a block we want to create: we will put it at that location, and the content will be... — so the file won't appear on the left-hand side? — Yeah, exactly, because we are one directory down. So in that block, what I do is create a text file in which I use the Ansible variables I described just before: the student name, the student ID, the location — whoops, excuse me — and the node on which this is run. So this file is created here; I can execute my cell, so I have created my playbook. Now that my playbook is created, I can ask Ansible: please execute it with the inventory. And so now I have run my playbook — this is just on the localhost — and it has made the adaptation for that file.
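The variables file and the playbook built in this step might look roughly like this (variable names, paths, and the choice of the copy module with inline content are illustrative guesses, not the exact workshop files):

```yaml
# vars/local.yml -- values used when targeting the local system
student_name: Bruno
student_id: 75
location: Dublin
```

```yaml
# lab3-1.yml -- write those values into a text file on localhost
- hosts: localhost           # facts stay enabled: ansible_fqdn is used below
  vars_files:
    - vars/local.yml
  tasks:
    - name: Create a text file from the variables
      copy:
        dest: lab3.1.txt     # illustrative destination
        content: |
          Generated for {{ student_name }}, id {{ student_id }},
          running on {{ ansible_fqdn }}, placed in {{ location }}
```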
It has changed: there is one file which has changed. If I go up here, I see that my file is there, and if I look inside, I see that the file was generated for Bruno — my first name — ID 75, running on... and this is the name that Ansible gets from gathering facts on my system: the machine on which I am running, the localhost, in fact has a full name, which is this one. So I have replaced my Ansible variables with values: some values that I pass in variables files (and I can have multiple variables files, as we will see), and some values which are queried by Ansible on the local system, gathered as facts, and reused by Ansible itself. Okay, and the same thing here — you can see it directly in the notebook. Okay. So now that we have seen how we can use variables, there is something we can do with Jinja2, which is the possibility to make a test and use one or the other of two variables depending on the value of some environment items. So here I will again use the variables which were declared previously, and I will create a new file in lab 3.2. The source is in the templates directory, so first I will generate the playbook, and then I will have a look at that template file, which looks similar to the previous one. Okay, so I have again my variables here and the fact which has been gathered, at that level. If I execute it, I won't see much difference from the previous one, except that now my inventory is taken into account: I run the playbook on two systems, not just one, and I can see that, as I have touched the two systems, I have generated two files — one local, the other remote — and I can print the value of the two files. And of course I see that the first name is the same, because it was the static variables file that was used in the playbook.
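The Jinja2 template and the playbook rendering it on every inventory host might be sketched like this (file names are illustrative):

```jinja
{# templates/lab3.2.txt.j2 -- same content, now as a Jinja2 template #}
Generated for {{ student_name }}, id {{ student_id }},
running on {{ ansible_fqdn }}, placed in {{ location }}
```

```yaml
# lab3-2.yml -- render the template on every host in the inventory
- hosts: all
  vars_files:
    - vars/local.yml         # the same static variables file for all hosts
  tasks:
    - name: Render the template on each target
      template:
        src: templates/lab3.2.txt.j2
        dest: lab3.2.txt
```

`{{ ansible_fqdn }}` comes from the gathered facts, which is why it differs per host while the static variables stay the same.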
Going back to the playbook: I just included the local variables, which contain the Bruno first name and the Dublin location; but as the third variable used is something gathered as a fact, it was created by Ansible on the system directly. So on the first system it points to my JupyterHub system, but on the second system, the remote one, I have another value here, which corresponds to the real name of the remote machine. So now let's go a bit further. Instead of looking at the results separately each time, having to launch a shell to check the results, I can integrate into my playbook the monitoring of the files which are generated. So here is a new playbook targeting the same systems — I still gather the facts this time — and on each system I want to cat the resulting file created by my Ansible playbook and look at it. And I have a new concept here: I register the result of the command. So I want to execute the command, get the result, and then print the result. I use stdout_lines here, which is an attribute of my registered result, and with the debug module I print the content of those files. So I create my new playbook and I execute it again on my inventory, on my two systems, and I can see in the debug part that I can print, with the message variable, the content of the two files which have been generated on the systems. So now what I would like is a bit more dynamicity in my generation, in my execution of Ansible: I would like to be able to say, if I'm working locally I do something, and if I'm working remotely I do something else. So in this playbook I have new tasks which use the when keyword of Ansible. That when keyword allows me, using a variable gathered by Ansible — the host name of the system — to decide, depending on whether the address is the loopback or not, localhost or not, to include different variables.
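The register-and-print pattern just described might look roughly like this (the path is illustrative):

```yaml
# check.yml -- read back the generated file on every host and print it
- hosts: all
  tasks:
    - name: Read the generated file
      command: cat lab3.2.txt      # illustrative path
      register: result             # capture rc, stdout, stderr, ...
    - name: Print its content
      debug:
        msg: "{{ result.stdout_lines }}"
```

`stdout_lines` is the command output already split into a list of lines, which is what makes the debug output readable.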
So I don't include the same variables on each system. And I continue to generate the same files, and those files use variables which are declared in those variables files but are not the same depending on whether I am working locally or remotely. And I think there is something interesting to look at here: let's look at the template — we also changed the template here. The template file for 3.3 is a bit different at that level, because we use some Jinja2 features. We say: okay, I'm printing the variables as before — same variables, nothing changed; I just added the location to be a bit more exhaustive — and now I'm using a condition in Jinja2. I can say: if, in my inventory, I am working on localhost, then I will add this sentence to my text file: "I'm running locally"; if not, I will write "I'm running remotely". So now if I run my ansible-playbook command as always, with this new playbook, this new template, and the same inventory, let's look at what is done here. Again, this is run on the systems, and it is run twice here. We see that for each system in our playbook we make a decision and we skip one of the include_vars tasks: we skip the remote one when we want to include the local variables, and we skip the local one when we want to include the remote variables. So that's a possibility here with the when keyword: when the condition is satisfied, I do this type of inclusion or that type of inclusion. That way I have two sets of variables which are different depending on whether the system is local or remote, and then after this inclusion I template my file — and I template it on the two systems at the same time, but using different sets of variables. Now I'm generating the new text files, and I want to look at their content.
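The two conditional pieces in this step might be sketched like this — the workshop tests whether the address is the loopback; `inventory_hostname` is used here for simplicity:

```yaml
# Inside the play: pick the variables file according to the target
- name: Include the local variables on the control host
  include_vars: vars/local.yml
  when: inventory_hostname == 'localhost'
- name: Include the remote variables everywhere else
  include_vars: vars/remote.yml
  when: inventory_hostname != 'localhost'
```

```jinja
{# In the template itself, a Jinja2 test decides the closing sentence #}
{% if inventory_hostname == 'localhost' %}
I am running locally
{% else %}
I am running remotely
{% endif %}
```

On each host exactly one of the two include_vars tasks runs and the other is reported as skipped, which is what the play output shows.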
So I'm using the same Ansible playbook to look at the results. Okay, let's create it, let's run it. Okay, so the first one, which is on the remote system, says that it has been generated for apollo, ID 11, running on the other system — a remote one, placed on the moon. And the second one, which is local, is for Bruno, ID 75, with the right name of the JupyterHub, and placed in Dublin. So with a single template, by injecting different sets of variables through the when keyword in the Ansible playbook, I am able to generate completely different files. And the "I am running remotely" / "I am running locally" part is this time generated through a test in the template itself. So you have a lot of ways to drive actions based on variables and content, and to perform different types of setup the way you want — you're just limited by your creativity here. And you have possibilities, through templating and through instructions inside the playbooks, to modify completely what you are generating. A typical, concrete example I have of that: when you are, for example, managing SMTP servers in an infrastructure, you have internal relays and external relays, and you are using the same tools — say Postfix, for example, or Sendmail. The sendmail.cf file, or the main.cf file for Postfix, can be generated from the same overall template but with different values, because inside your environment the relay is not the same as the relays you use externally, for example. — Repeat that, please? — I said there are 19 people registered. — Okay, hopefully they're all having a good session and don't face any issue with the registration or anything. — According to the database they all received their credentials, so it should be fine.
Okay. So, one typical use case, and something we were faced with during the setup of this environment: this environment you are seeing here, the deployment of all the Workshops-on-Demand, relies on a lot of Ansible playbooks behind the scenes, and one use case we have is that we want to set up our different appliances using a single playbook. But we don't run the same software in the same places: sometimes we need an Ubuntu distribution, sometimes we need a CentOS distribution, and we want to be able to perform the right action on the right system based on the distribution. The distribution is something Ansible gathers as a fact during the gather-facts operation. So here, typically, you have a new play which gathers facts, and the first task says: get the curl package version. Typically, we do rpm -q curl when we are on a CentOS distribution and register the result, or we do dpkg -s curl piped into grep on the version when we are on an Ubuntu system, to get the same information. After that we print the version, for CentOS or for Ubuntu, and as we registered the results in two different variables, we print them with two different tasks. So again, we can create that new playbook — you see on the left-hand side that all the playbooks we are running are created here, and you have them at the end of the execution if you want to copy them, replace them, or modify them as you want. So if we execute that playbook on the two systems — and here we have two different systems — we see that the remote system is a CentOS system, which has curl 7.29, and the localhost system is an Ubuntu machine running curl 7.68. So based on the gathering of facts we can get this information and take actions based on it. Typically you can say: that curl version is not recent enough, I miss an option in that version, so I need to upgrade it, etc.
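A hedged sketch of that per-distribution playbook (the exact task names and grep pattern are assumptions):

```yaml
# curl_version.yml -- take the right action per distribution (facts needed)
- hosts: all
  gather_facts: true
  tasks:
    - name: Get curl package version on CentOS
      command: rpm -q curl
      register: rpm_result
      when: ansible_distribution == 'CentOS'
    - name: Get curl package version on Ubuntu
      shell: dpkg -s curl | grep -i '^version'
      register: dpkg_result
      when: ansible_distribution == 'Ubuntu'
    - name: Print the CentOS result
      debug:
        msg: "{{ rpm_result.stdout }}"
      when: ansible_distribution == 'CentOS'
    - name: Print the Ubuntu result
      debug:
        msg: "{{ dpkg_result.stdout }}"
      when: ansible_distribution == 'Ubuntu'
```

Two registered variables are needed because each one is only populated on the hosts where its task actually ran.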
There is a lot you can do later on based on that. Let's see another concept in the playbooks, which is the notion of a loop. For that we will use an external API provided by the Gutenberg project. We will create a new playbook, we will query books on that site, and we want to fetch multiple books: we make a loop to go from number 20 to number 25. If you're running the workshop at the same time as me, you can modify those variables — make it 15 and 20, for example; don't go too high, so as not to put too much pressure on that site — but you can do that very easily. For what we want to get, there is a bit of jq magic behind the curl command to parse the JSON we receive from that website and to extract all the entries and all the cover images of the books we want to fetch. Then we download those cover images and store them in our environment; we have the pictures directory ready for that. So we use a new concept in the Ansible playbook: we do a loop, and the loop is defined by a range — in our case from the book minimum value, which is 20, up to the book maximum value, which is 25 — but you can use whatever values you want in your loop here. You can also loop over items; you can loop over other types of elements and Ansible variables the way you want. Okay, let's create that playbook and now run it. This is run just locally, and now we see that the loop has been done for all our items from 20 to 24 — like in Python, you give the maximum value and the last value used is the maximum minus one — and we see that we make some queries on the Gutenberg site to download the cover images of all those books.
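A sketch of such a loop — the cover-image URL pattern is an illustrative assumption, not necessarily the exact endpoint the workshop queries through curl and jq:

```yaml
# covers.yml -- download a range of book covers from the Gutenberg project
- hosts: localhost
  vars:
    book_min: 20
    book_max: 25                  # upper bound excluded, as in Python's range()
  tasks:
    - name: Download the cover image of each book
      get_url:
        url: "https://www.gutenberg.org/cache/epub/{{ item }}/pg{{ item }}.cover.medium.jpg"
        dest: "pictures/{{ item }}.jpg"
      loop: "{{ range(book_min, book_max) | list }}"
```

Each iteration exposes the current number as `{{ item }}`; swapping the range expression for a literal list would loop over arbitrary items instead.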
And indeed, a few seconds ago we downloaded some images of some books which are hosted by the Gutenberg project. So, I said that when you use a verbose mode, for example, you may see some passwords appearing, and sometimes you have operations where you need to work with secrets without printing them and without putting them in configuration files where they could be seen easily in your environment. So Ansible provides the notion of a vault, which allows you to store encrypted values of secrets that nobody can read, and those values are then used on the fly during ansible-playbook execution. So you can typically encrypt a certain number of values, or use some passwords to get access to some services, without providing the password value to other people. We have on the system a website which is available here. The website runs on a specific port, and when we go to that port we see an Apache server running, waiting for us to do something clever with it. So what do we want to do? We want to use a private zone on the web server. We can see that we have a private page here — and, Frédéric, I wonder whether we are not missing the support of the... So, we have a private page hosted on the web server for each student. Each time we create a student, we create a private zone, and that private zone gives the user access to private information, which is only available if we use the password associated with that user. So here, as an example, we show with curl how to use the private zone, giving the username and the password of that user on the command line — which is just an illustration, not something you would want to do in a real environment, of course. Let's execute that. So if we look at the student 75 zone, by default it's not found, because we don't provide any authentication.
So the first command... Yeah, I think I missed the port number, which should be generated automatically — which is here, 051; each user has a port. If I do... I think it should be... because the port is mandatory if we want to use it... That was not a good idea; sorry for that. We'll have to look at it, because I don't think it's correct here. So the idea, if you have a password that you want to use, is to create a vault to store the password. There is an ansible-vault command which can encrypt a string such as the password you have. You pass that on the command line, you keep a vault secret associated with it, and it creates a variable. So if I do that here, you should see it in the environment: you create one variable which is associated with the notion of vault and which is completely encrypted using the secret passed on the command line. That variable — the web_login information, which contains the password — you can keep on your file system.
You can use it through Ansible commands, but nobody else but you, knowing the secret passphrase that encrypts the vault, will be able to access that information. You can then use that information, through the Ansible variables mechanism, to connect remotely to a system or to drive a mechanism which needs the password. And the way you do it — even if I'm not able to show it to you here — is to include the variables file you generated, and then, for example, use the uri module of Ansible to say: I want to connect to that URI with this user and this password. The password here is web_login, but nobody sees the value of web_login: even when you run the command with verbosity, you will see a set of bytes which are passed by Ansible but decrypted on the fly through the secret stored in the vault. So the vault is a YAML file itself, and each user can have their own vault environment. If, for example, everybody names the file the same way, var.yaml, you can have a single playbook file which includes that var file, as here: this playbook can be completely common to all the users, but the value of what is inside the var.yaml file can be specific to each user. So that's something you can share, with an implementation that provides different values on the fly based on what each user has encrypted as content in the vault. So, sorry, I'm not able to show it here.
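Although the live demo did not work, the vault workflow described might be sketched as follows — the variable name web_login comes from the session; the ciphertext is truncated and purely illustrative, and the URL, port, and user are placeholders:

```yaml
# var.yaml -- produced by something like:
#   ansible-vault encrypt_string 'S3cret' --name web_login >> var.yaml
web_login: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  62313365396662343061393464336163383764373764613633653634306231386433626436623361
```

```yaml
# Use the vaulted value without ever exposing it in clear text
- hosts: localhost
  vars_files:
    - var.yaml
  tasks:
    - name: Query the private zone with basic authentication
      uri:
        url: "http://localhost:8051/student75/"   # port and path are placeholders
        user: student75
        password: "{{ web_login }}"
        force_basic_auth: true
```

Running it with `ansible-playbook --ask-vault-pass ...` prompts for the vault passphrase and decrypts the value on the fly.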
There is something which is not working completely correctly here, but I can move on to the last lab, which covers the notion of a role. That's really the ultimate goal of what you try to achieve with Ansible: you want to say, I want to create a web server role, a mail server role, a security server role, whatever, by accumulating a certain number of tasks — some of them common between all the roles, some of them specific. And the beauty of it is the reproducibility of the execution: each time you relaunch the command, Ansible replays what needs to be replayed; it does not regenerate something when what was generated is already correct. So you converge to a final situation — the one you describe in your YAML file — and you converge by applying a certain number of tasks grouped into playbooks, themselves grouped into a role, to produce the kind of server you want to configure and manage at the end. So here — in fact I don't want to go back over everything that was done in the introduction, but here we have created small playbooks, one at a time, and we can now gather them to create a role for our lab. I will first create a structure on disk to store that information. The role will take advantage of everything we have done before, so we need the templates, and I will copy the existing templates we generated under the role directory — so here I have my templates. I will also create the tasks that I want, and I will copy the last playbook as the main task for this role. So what is possible with that is: you create elementary playbooks, elementary configuration items, and at the end you group them to build your role. So now, I said, edit the main.yml file to remove the first three lines. Why? Because in a role, the notion of target system is part of the inventory; it's not part of the content of the
role itself. So if I look at my file here, I don't need this anymore, because what I have as the YAML content below is the set of tasks I want to perform on my system. So I can remove this, and I need to adapt my YAML content so it's correct YAML. That's the part which is not the most fun: each time you have an indentation, you have the dash at the start to introduce the keyword which opens a block, and then the following keywords need to be aligned with the first keyword of the block. So here, for example, name and template: name is in fact just a comment, and the template module takes three parameters; those three parameters are each indented by two spaces, and each has its own value. Okay, so I can save this one. I can do the same with my handlers. The test operation I performed in a playbook — I say, okay, this is interesting: each time the playbook is run, I also want to run the test operation at the same time, so that it's performed as part of my role. Same story: I also need to edit the handler. I didn't do any specific cleanup, but I didn't run the role previously, so it should be clean, normally. Okay, so I have my handlers. A typical use case of handlers: you configure your HTTP server, and when you change the configuration and deliver a new configuration file, at the end you need to tell the HTTP server to reload the configuration file, to pick up the new configuration and put it into operation. So here the handler will do that: the handler is the operation you run to tell the web server to reload its configuration file and apply it — typically a kill -1 of a daemon. That's generally what you have as handlers: telling the daemon to reload its configuration file and reapply it. Same thing with my variables, which are now here as well.
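The lab's handler actually re-runs the verification play, but the classic web-server case just mentioned might be sketched like this (daemon name and paths are assumptions):

```yaml
# roles/lab/handlers/main.yml -- illustrative web-server reload handler
- name: reload web server
  service:
    name: httpd        # daemon name is an assumption
    state: reloaded
```

```yaml
# A task notifies the handler only when it actually changes something
- name: Deploy the web server configuration
  template:
    src: httpd.conf.j2
    dest: /etc/httpd/conf/httpd.conf
  notify: reload web server
```

Because the handler fires only on change, re-running the role against an already-converged system reloads nothing.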
So here we see the content of the lab role: we have some directories, under which we have some YAML files. And this role is something you can build once on the system and propagate to other people — if you have colleagues elsewhere running Ansible and wanting to apply the same role as you, it's a separate unit you can pass to them. Of course you put that under Git, you manage it in a central place, and everybody uses the same content. It's something completely isolated that you can give to other people to run, and they will get the same result as you — using the same inventory file, of course. And now I need to create the last YAML file. I have my handlers, my tasks, my templates; I now also need a header for all those files, which is the main role YAML file: it tells on which target I want to apply all that, and it has a new keyword, roles, which points to the directory in which Ansible will find all the operations the role needs to perform. Now, at that point, I can run ansible-playbook on the role... and I made a mistake in my modifications in the handlers — oh, there is something missing here, yes. So YAML can be a bit tricky to deal with when you start. And so here, with my single role — if I go up again — the role executes all the tasks in order, including the variables as we have seen before, creating the template file, and launching the verification at the end of the templating when there is something to change. Here there has been no change in my content, no files have changed, so Ansible says the job was already done correctly: I don't need to run the handler, I don't need to make any modification, I don't need to check the modifications. If I do make a modification somewhere — typically in the templates — which template is used here? Is it...
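Assembled, the role built in this lab might be laid out and invoked roughly like this (the role name "lab" and the layout are illustrative):

```yaml
# Typical layout of the role:
#   roles/lab/tasks/main.yml
#   roles/lab/handlers/main.yml
#   roles/lab/templates/...
#   roles/lab/vars/main.yml
#
# site.yml -- the top-level playbook applying the role; the targets now
# come from the inventory, not from the role itself
- hosts: all
  roles:
    - lab              # resolved under the roles/ directory
```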
Oh, let me check in the tasks — it's 3.3 for the template. So I modify the template now — I change "locally", say; we don't really care about the nature of the modification — and we go back here and rerun that playbook again. This time Ansible detects that something has changed: I've changed one of the two parts of the template, one generated file has changed, and in that case we see that it's the localhost which has been changed, and the check has been performed on top of this modification — it is not displayed, but it is run by Ansible as a check. Yes. So let me go back to... whoops. Do you want to take over, Frédéric? — Please mention how important the survey is. — Okay. So we have performed here a certain number of basic operations with Ansible. What is interesting as well, as you have seen, is that there are a lot of modules. — Just tell me when you're done and I'll say just a few final words. — Okay. So, at this point you have seen a certain number of basic operations that you can do with Ansible. There are a lot of existing modules.
We have just shown how to use the four or five modules that are, I would say, mandatory to make a tour of what Ansible provides. You can interact with a lot of hardware and software, pre-packaged and provided by the upstream project; you just need to download, sometimes, the right collection in addition to the roles you already have as part of the Ansible distribution. So you have a certain number of roles as part of the distribution, and a certain number of collections which add interaction with other types of hardware and software. Now, what is very interesting with this tool is that you can leverage existing playbooks and write your own, but you don't need to write everything at the start. You start with the small tasks that you want to perform and re-perform each time, and you let the tool do that for you. One day you realize there is an additional task you want to add, so you add two lines to your playbook, then two other lines, etc. The playbook is really something you build as time passes, growing to take in, typically, the special cases you missed initially. For example, when you configure a system, sometimes you start from a clean state and sometimes from an unclean state. And the thing about idempotency — the feature Ansible provides — is that you need to think about what state your system is in and what state you want as the final state; depending on the original state, you want to analyze all the cases and converge the situation to the final state, the one you want. That may lead you to add new lines to your playbook to correct other states you find on the system — because someone has made manual modifications, or because some configuration files have been modified by another tool not under the control of Ansible.
All those typical cases — you want to be able to address them in your playbook. So as time passes, your playbooks will grow and become more and more precise in the way you configure your target systems. You can use the many modules existing inside the project and inside the various collections, and develop your own very specific way of managing systems with this type of tool. Frédéric, final words for you? — Thanks, Bruno. So, first remark: fortunately for me, I'm not epileptic, otherwise I'd have been dead a hundred times during this session because of the flickering of the screen — I can't tell you how it just kept on flickering; it was quite a pain. Anyhow, I just wanted to mention that the survey is very important — it's in the conclusion notebook — and that obviously we are working on adding some new content. So this is a 101 session on Ansible; hopefully it'll be followed by another session. You probably know about HPE GreenLake and all the services it provides to customers. We're currently working quite closely with the different development teams, and one of them is actually building up some Ansible playbooks to be used in the different services — VM-as-a-service, containers-as-a-service, or bare-metal-as-a-service types of services — and I'll be working closely with them to try to build up an advanced session on Ansible as well as GreenLake: all together, an advanced session. And really, that's what I wanted to say. I'm pretty sure Bruno already mentioned — and I'll repeat it — that we are open-sourcing this, and that we'll be very happy to share this work we've been building for nearly two years with Bruno: the infrastructure allowing these workshops to exist and to be delivered. Hopefully the session was interesting for you, and hopefully the 19 people who went through the registration process did
not encounter any issue in running the workshop itself. I hope to see many of you in the workshop database later on, in the coming days or months. Take a look at what we provide — I mean, some of the workshops are really open-source related, and they might be relevant to you. If you have no knowledge of a given programming language, like Rust for instance, this might be of interest. We managed to gather the different workshops into categories; we're dealing with many different subjects, ranging from infrastructure-as-code to AI — you know, Spark, machine learning — GreenLake, a simple API 101... there are plenty of 101s, plenty of things to be looked at, I would say. I know that once we're done with the open-sourcing of the stuff, Bruno will go back to trying to produce a packaging-on-Linux workshop; there is a Concourse 101 that is also on the list, and many, many more to come. So I think that's really what I wanted to share with you: the fact that it's a living program, and — sorry — we are also welcoming contributions from the external world. Obviously all this content is mainly produced by HPE people, but nothing prevents you from giving us a call or reaching us by email, and if you are willing to create some content that is open-source related, we'll be very happy to welcome you and work with you on building a new workshop that will be available for the community.
I mean, that's the purpose of the HPE Dev community: to innovate and share all together within the community. So we'll try our best to make more and more open-source content available in the workshops as well as on the portal itself. Also, if you're willing to write a blog post, we welcome that type of initiative as well: we have a CMS that lets you go through the process of editing a blog post very quickly. So if you're looking for a place to shout out, if I can say so — well, you can also contact us. With that being said, I think I will leave it to you, and I think we're nearly done with the time, so I'll thank you: merci beaucoup from France. I wish I could have been with you in the room; unfortunately the budget was not high enough to allow me to fly over, but next time I'll do my best to come and meet you all. Thank you very much, merci beaucoup, and have a good day. — Thank you very much, Frédéric. Is there any additional question in the room? We still have two minutes left. If not, I give you two minutes of your time back and wish you a happy event. Thank you.