Ok, so, good morning, thanks for coming. In this talk I'm going to talk, and hopefully we all talk at the end with questions, about SAST: doing effective SAST, secure code analysis in CI/CD. This is what we are going to see; I'm not going into a lot of detail here because we will see it during the presentation, just a high-level view. My name is Florencio, this is my e-mail and Twitter account. I work at Red Hat as a secure architecture team lead; I work with the product security architects who work with all products and services in Red Hat to try to help with security, and I'm a principal product security architect. I've been in information security since 1999. I saw a funny Twitter thread, I think yesterday or the other day, saying "tell me your age with a command", so this is my age, adding ++ to this file; the people who know will know. Some background: since 1999 I have done a lot of different things in security, always in cybersecurity. Pentesting; I had my own startup for some years doing pentesting and forensics; I was an expert witness and attended something like 10 trials defending my forensic analysis. I was also a third-party auditor for ISO 27001 and GDPR, and CISO of a big company in Spain, a supermarket company. Now I'm a product security architect at Red Hat. Just a bit of context: what is SAST? It's an acronym for static application security testing. When we say static we mean analyzing an application without running it, so usually, very frequently, we talk about analyzing the source code. I'm going to focus on analyzing the source code, but we can also talk about static analysis when we analyze a binary without running the application, right? In fact there are mechanisms and strategies to analyze binary code statically, for sure, but here we are going to talk about testing the source code, and testing it to find security vulnerabilities or potential security vulnerabilities, right?
We want to analyze the source code of an application to find security vulnerabilities. This is one of the core practices in the secure development framework Red Hat is working on; you have more information there and there, and I'll share the slides after the talk. Benefits of doing SAST: why would communities, companies, orgs want to do SAST? Because in the end any security activity we do needs time, right? It needs time, effort, resources. So why would a community want to do SAST? It allows us to identify vulnerabilities early in the development process. It points to the specific lines of code where the vulnerability is, versus other kinds of security analysis, dynamic application security testing for example, or a pentest. There, maybe we know there is a vulnerability because we see the result of sending some payload and observe the error or the behavior of the application; with SAST we know the specific line where the vulnerability is present. It is developer friendly if it is implemented correctly, and this is a lot of the reason for this talk, right? SAST can be implemented in many different ways, and some ways are very, let's say, developer friendly, and some ways of implementing SAST are not. It has a low entry barrier: it's not expensive, and there are free and open source tools we can use to at least start doing SAST. It has high coverage. SAST is usually done with a lot of automation; there is manual work, for sure, to analyze the results and act upon them, but the scan itself can be done automatically against thousands of lines of code, covering 100% of the code, right? No other kind of testing gives that coverage with that level of effort. Also, it is easy to extend, and not only to extend but to adapt to the company: when doing SAST we have the possibility, as we will see, of customizing the rules to the way the company develops, and that way we get better results.
Also, in general, and this is true for any activity that improves the source code of an application, we are not talking only about confidentiality, right? So, why does my front end need SAST, why does this application need better security? It's not going to be used by the military or NASA or whatever. But more security is also more quality, more resiliency, right? We cannot have resiliency, we cannot have good quality software, if we don't have security. When we improve the security of a code base, we improve the reliability of the source code. So, a lot of benefits if implemented correctly. And now the other side. If this is so good, why isn't everyone doing SAST, right? You see: 100% coverage, we find vulnerabilities, so we have solved the security problem with SAST. No. Why? Because we hit many problems when we implement SAST. In general, too many false positives. We run a SAST tool and we have 100 findings that have to be reviewed, and many, many are false positives. "Ok, this is not a security issue, the tool does not understand what we are doing." We have methods, functions that are cleaning the data, so the SAST tool thinks there is a SQL injection there, but there is not, because earlier there was a function we wrote that cleans the data. Too many out-of-context detections. This is a bit different, in the sense that in this case it is true that the pattern can be dangerous. We are using MD5, right? 50 findings of MD5 being used in different places in the application. Is that a security issue? Well, it depends. If we are hashing a password and storing it, that does not sound like a good security practice, right? But if we are using it maybe because we are implementing a hash table ourselves and we just need a unique identifier, it's ok, right? We can assume we don't need a better hash algorithm for that.
So we may have many findings that are not security issues, even though the SAST tool says there is a vulnerability there. Also, many times we implement SAST without integrating the tool into the developers' workflow. If the developer needs to go to another application, to another dashboard, to see the results, or needs to make a great effort to analyze the results, then, and maybe it depends on the company, right, they will probably work on that the first weeks, but slowly that friction in the process will make the developers stop using the tool. "Ok, yes, we are doing SAST because we have the results there," but who is reviewing the results and acting upon them? That happens a lot. Or maybe we are using the wrong SAST tools: we cannot customize rules, they are a black box for us, and that's a problem for optimizing the process. Rules are difficult to understand; they are there but we cannot understand them. It is difficult to create, modify or remove the rules of the SAST tool, or it requires compiling the source code before we can start scanning. That last one is not really bad; there are great, great SAST tools that require that the code compiles. Usually they require it because their way of working reduces the number of false positives, which is really fantastic. So it's a trade-off: starting SAST from the first line of code, or when the code is incomplete, doesn't work with them, but using that other kind of tool is a legitimate approach. It can be a problem for the company, though, if the workflow does not account for the fact that the code needs to be compiled. I'm going to talk about a specific tool, just so I can explain the characteristics, but any tool with these characteristics would work. It's not the tool; the important thing here, what I want to transmit, is that the process is what matters. But why, for example, this kind of tool? It's open source, and we have the opportunities that gives us.
You know the benefits of that. It's fast when scanning, and it can be run from the first line of code. It's easy to write and modify rules; we can even have our own rule set from scratch, but they provide many open source rules already with which we can start. It can be integrated very easily in, I guess, any kind of CI/CD pipeline, and it's trivial to install and to execute. This is just an example of an execution. The rule set we are using is the OWASP Top Ten rule set, just for the example. It is provided by the community, and there is also a company behind this tool. I say "about security" because what this tool does is analyze code, and it also has other kinds of rules, like linting rules and quality best practices; I'm going to focus on the security rules, and that's why I'm using this rule set. We are indicating that we want SARIF results. SARIF is a kind of JSON format, right, but with some specific attributes focused on security findings, and it's becoming a standard. If we have our results in SARIF format, it's quite probable we'll be able to ingest the file directly into a different tool. For example DefectDojo, to manage findings, to manage defects: SARIF can be ingested by DefectDojo and all the data is parsed and put in the correct places. So it's as easy as cloning a repo, installing, and executing this command against the repo. This is the first thing we get in the output. Here it is indicating the kinds of files; I'm using for this example PyGoat, an open source vulnerable application written in Python. It is vulnerable on purpose, so I knew I would have findings, and I use it as the example. Here Semgrep is saying: ok, I have 140 rules for Python, 65 rules for JavaScript, and so on for the rest, run against this number of files: 100 files in HTML, 53 files in Python. In the end, 597 tasks. I think this ran in one or two minutes; not a very big project, for sure. And it found 23 findings.
So this is a section of the SARIF file, the output. This is the kind of result we get with a SAST tool. We get a hash; this hash identifies the finding and can be used for managing false positives. If we have this hash, for example, we put it in a file and use it in a post-processing step: just remove those findings from the SARIF file the next time, because we already know they are false positives. That's one way of managing false positives, and we have that information at number one. At number two we have the place, in which folder and file this vulnerability is. This is the section for one finding, one potential vulnerability, in this case in pygoat, introduction, apis. Then we have the exact place in the code: starting line, end line, from 59 to 65. This is the code; it is not well parsed, let's say, but we can see the code directly in the source, and we know the exact place. Then we have the text of the vulnerability: found user-controlled request data, that is, request data that comes from a user and is directly used in a write function. Sounds like a security issue: something that comes from the user is dirty and it is used without cleaning. That's the kind of vulnerability this finding is about. We have an ID of the rule, python django security injection, that identifies the specific kind of vulnerability that has been found. This is one of the 23 findings in this execution. As it is JSON, we can process the file quite easily and work with it. And something you may have missed is the severity the tool assigns to the finding; in this case it's "error". Many rule sets in Semgrep have severities of error, warning and informational. Not that good, in my personal opinion; I would prefer critical, important, something like that. But that's the rule set, and if we don't like it, we can write a different one. So, we have executed a SAST tool. Are we doing SAST? No, we are not. Just running a tool against a codebase is not doing SAST.
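As a minimal sketch of working with that SARIF output, assuming the standard SARIF 2.1.0 layout (runs, results, ruleId, level, and physical locations; the sample data below is invented to mirror the finding described above, not copied from the real report):

```python
def summarize_sarif(sarif: dict) -> list[dict]:
    """Flatten SARIF results into (rule, file, lines, severity) records."""
    findings = []
    for run in sarif.get("runs", []):
        for result in run.get("results", []):
            loc = result["locations"][0]["physicalLocation"]
            region = loc.get("region", {})
            findings.append({
                "rule": result.get("ruleId"),
                "file": loc["artifactLocation"]["uri"],
                "start": region.get("startLine"),
                "end": region.get("endLine"),
                "severity": result.get("level", "warning"),
            })
    return findings

# Tiny inline example mirroring the structure described in the talk
sample = {
    "runs": [{
        "results": [{
            "ruleId": "python.django.security.injection",
            "level": "error",
            "locations": [{"physicalLocation": {
                "artifactLocation": {"uri": "introduction/apis.py"},
                "region": {"startLine": 59, "endLine": 65},
            }}],
        }],
    }],
}
print(summarize_sarif(sample))
```

Because SARIF is plain JSON, this kind of flattening is all you need to feed findings into an issue tracker or a dashboard.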
Just by running a tool we are not improving the security posture of the application; we need to do something else. SAST is a process; we have to implement a process for it to work. It sounds like more effort, but if we don't establish the process, the workflow, we won't get the results, and if we don't get results, we will stop doing this very fast. So, what would an ideal SAST process look like? We need to run the scans automatically. If we want to put the results as close to the developers as possible, and that is what we want, we would run SAST for each merge request. Maybe for a merge request we don't have a lot of context about how the application works, but the findings are certainly close to the developers, and it is more likely people will act upon them. If not, we can also do, for example, full codebase scans at scheduled intervals; that's another approach that may work as well as putting the scans in merge requests. Then we need to analyze the results. There is no way to avoid this step: we need to understand each finding and classify the findings by severity, because having critical issues is not the same as having low-impact results. We need to track the results, and this is also key if we want a successful SAST program rather than a failing one: we have to put the results as close to the developers as possible. This is where we need to adapt to the company, the org, the community where we want to use SAST. Is the community using GitHub or GitLab issues to manage findings and improve the code? Then that is where we should put the results of SAST. Are they using Jira? Are they using Bugzilla? Whatever the system, we need to put the results where the developers work; that's essential if we want this to work. And we need a final step, which is improving the process: we need to remove the false positives and modify the rules.
These two parts are essential if we want a successful SAST program. We need an easy and agile workflow to identify false positives. Maybe it's okay to have 100 findings the first time we run SAST, with 90 of them being false positives; maybe that's fine for the first scan, but not for the second. For the second scan we need to have classified those 90 findings as false positives so developers don't get the same findings again; they should need to review those 100 findings only once. If we don't do that, the process will die over time. We also need to adapt the rules. Sometimes we cannot just say "this is a false positive". Identifying false positives in the code is sometimes complex, because how do you identify a finding? We had a hash, it's true, but that hash is probably related to the place, the lines in the code, the structure, and maybe we add a line and the hash changes. So it's difficult to always mark "this is a false positive". The other option we have is to remove or change the rule. For example, maybe in our project it does not make sense to have the rule that flags MD5 as a vulnerability, or maybe we want to reduce its severity to low because of our context. We need to be able to modify and work on the rules; that is also very important. Now I'm going to show briefly some examples of what GitHub and GitLab are already providing: they give communities the possibility of doing SAST quite easily. It's just a matter of enabling these options, knowing how they work, and starting to work with them. So try them on your own projects; that is my suggestion, let's say. In GitHub there is an option, "set up code scanning", at number one. It is a workflow, which in GitHub means a GitHub Action, and they even have an existing GitHub Action that uses Semgrep.
This GitHub Action that we are enabling uses Semgrep behind the scenes, and we can also customize it if we need more functionality, because in the end GitHub has different licenses and offers more things in one license and fewer in others. But we can use our own GitHub Action, with our own worker that uses Semgrep or another static analysis tool, and adapt it however we need to the process of the community; that is what is really important. In GitLab it's very similar: they already include static application security testing when you create a new project, and you have the results automatically for each merge request. You get a SARIF file, and GitLab by default again uses Semgrep; well, in GitHub it was not by default, it was if you chose it, but GitLab uses Semgrep in this case. And we can also write our own integrations. For example, if we have a community that works in Jira, or an org that works in Jira, it's trivial to parse the SARIF results and, using the Jira API, just create an epic with all the findings extracted from the SARIF file: in which file, a summary of the vulnerability, which line, the content of each one, more information. The important thing here is to adapt to the way of working and put the results as close to the developers as possible. Managing false positives depends a lot on the tool you finally use. Semgrep has some options for ignoring files and folders. For example, if there are files and folders we don't want to scan, because they are code that comes from a dependency, or examples, or testing code, we can ignore those folders, or we can do something custom to remove the false positives from the SARIF files. This is just syntax from Semgrep. Keeping a list of false positives identified by hash is something the Semgrep open source option does not include, but we can do something custom with code.
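That custom post-processing can be sketched in a few lines. This assumes the finding hash lives under a `fingerprints` key in each SARIF result (Semgrep puts a match-based ID there, but treat the exact key name as an assumption); we simply drop results whose hash is on a known-false-positive list:

```python
def drop_false_positives(sarif: dict, known_fp: set[str]) -> dict:
    """Remove SARIF results whose fingerprint hash is a known false positive."""
    for run in sarif.get("runs", []):
        run["results"] = [
            r for r in run.get("results", [])
            # keep the result only if none of its fingerprint values are known FPs
            if not (set(r.get("fingerprints", {}).values()) & known_fp)
        ]
    return sarif

# Invented report with two findings; "abc123" was triaged as a false positive
report = {"runs": [{"results": [
    {"ruleId": "rule-a", "fingerprints": {"matchBasedId/v1": "abc123"}},
    {"ruleId": "rule-b", "fingerprints": {"matchBasedId/v1": "def456"}},
]}]}

cleaned = drop_false_positives(report, known_fp={"abc123"})
print([r["ruleId"] for r in cleaned["runs"][0]["results"]])  # ['rule-b']
```

Run as a step after the scan and before publishing results, this is exactly the "review each false positive only once" workflow described above.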
About custom rules: that is very important and a strong point of this kind of tool. In this case they have some commercial rules, but they also have open source rules under LGPL in a repo. We can fork or mirror that and work on it, remove, add, whatever we need to adapt it to our case. This is the syntax of Semgrep rules, as you see: really easy. They are YAML files with an ID; the languages the rule applies to, which helps the scan be faster because Semgrep only applies the rules to that language (we can also have generic rules, but in general it's better to have rules adapted to a language); the message; and the pattern. The pattern is the most important part of the rule. This one is a trivial rule, and it shows Semgrep is not just a grep, although it has "grep" in the name: it has parsing functionality. Here, this is a placeholder for any variable, then the constant token "is", and then this means any valid statement in Python. So if we have "a variable is any valid statement", this rule triggers a finding. This is a more advanced rule, and it is the kind of rule that shows how we can adapt the tool to our custom code. Here we identify patterns that are sources. A source is where we get information from the user, so its result is dangerous. Then we have the sink: if we get some data at a source and use it directly in one of these sink patterns, we have a vulnerability. But if the data passes through this function, it's okay, that's not a vulnerability: we have sanitizing functions, cleaning functions. This functionality is called taint analysis, and it means that if we use our own framework, our own cleaning functions, we can customize our rules to reduce the number of false positives. Let's see it with an example; Semgrep has a playground.
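As a hedged illustration of that taint-mode syntax (the `mode: taint` / `pattern-sources` / `pattern-sinks` / `pattern-sanitizers` keys are Semgrep's documented taint-analysis syntax, but the rule id, message, and the source/sink/sanitizer function names here are invented for the example, not taken from the talk's slide):

```yaml
rules:
  - id: user-data-reaches-db-write   # hypothetical rule id
    languages: [python]
    severity: ERROR
    message: User-controlled data reaches a database write without sanitization.
    mode: taint
    pattern-sources:
      - pattern: request.GET.get(...)   # where dirty user data enters
    pattern-sinks:
      - pattern: db.execute(...)        # hypothetical dangerous sink
    pattern-sanitizers:
      - pattern: sanitize(...)          # our own cleaning function
```

With the sanitizer declared, any data that flows through `sanitize()` before reaching the sink no longer triggers a finding, which is exactly the false-positive reduction described above.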
This is the rule, and these are two example code snippets. In the first case we get some data; the data is dangerous, but it is cleaned here, so when it is used here there is no finding. In the second case this is highlighted because the data is obtained here and used here without sanitizing: we have a highlight, this is a finding. This is the way we can create our own rules and reduce the number of false positives. So this is the last slide, and then we can do some questions. In the end we have bad SAST and good SAST. Bad SAST is when the results are not processed; good is when they are processed and we act upon them, so we really improve the security posture of our application. Bad is when the results are far from the developers, stored in a different application; good is when they are close to the developers. Bad is when there is no clear way of managing false positives; good is when we give that possibility to programmers. Rules hard-coded or difficult to change, that's bad; good if we can customize them. Bad is when it is slow: two days, or 12 hours, to block a merge request? We cannot do that, right? We cannot hold a merge request for several hours; we are talking about minutes if we want to do this on merge requests. Good SAST is fast, a few minutes. And in general, bad SAST is when we think SAST is deploying a tool and having results somewhere; good SAST is when we have results that we use to improve the security of our application. It is much, much better to have fewer results that we use to improve the application than many results that we don't use for anything, right? And thank you, that's it. Yes. So, let me repeat the question, also to see if I understood it. The question is if we can use this for other languages or other use cases. Yes, totally. For infrastructure as code, such as Terraform, there are already existing rule sets, and for sure we can use our own custom rules too. Yes.
I would say it's Semgrep, but that's not an official answer, right? Yes, we can establish our own severity for the rules, and yes, that is not only about the tool; it's part of the process, something that GitLab should allow us to manage, and in fact they do, again depending on the licensing, but it's exactly how I think it should be done: if it is critical, or maybe important or critical, block the merge request, and if not, let it go. There is another approach that is very interesting, and some orgs do this: having two sets of rules, the mature and the immature ones. The mature ones are used directly by the developers, and the immature ones are used only by a few security people, so the immature rules, which generate many false positives, don't reach developers. Then there is a process to promote rules from the immature set to the mature set: when they stop generating many false positives, because they are tuned, they are moved to the mature set of rules. That way we reduce the noise for the developer teams, if there are separate security and development teams. If everyone is a developer, as in many communities, maybe a different approach is needed, but this is also done sometimes. As I understand it, the question is what happens if we see a false positive in one area of the code but have the same issue in another area, and also what happens if the code changes: can we analyze only the difference? Ok, we'll finish with this one, we are out of time. If it is a false positive, it can mean it is a false positive only in this specific section, and we do exactly that: we stop flagging it as a finding here, but in another place we still flag it, and that's ok. If the rule itself is not working, if it says something is a vulnerability but for us it is not, the option we have is to modify the rule or remove it, so we don't get the result anywhere. And yes, we can do partial scans of the things that have changed.
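The gating idea just described, blocking the merge request only on high-severity findings, can be sketched as a small policy function. The severity names follow the SARIF levels mentioned earlier in the talk, and the default threshold is an assumption you would tune per project:

```python
# Rank the SARIF-style severity levels mentioned earlier in the talk
SEVERITY_RANK = {"informational": 0, "note": 0, "warning": 1, "error": 2}

def should_block_merge(findings: list[dict], threshold: str = "error") -> bool:
    """Return True if any finding is at or above the blocking threshold."""
    limit = SEVERITY_RANK[threshold]
    return any(
        SEVERITY_RANK.get(f.get("severity", "warning"), 1) >= limit
        for f in findings
    )

print(should_block_merge([{"severity": "warning"}]))        # False
print(should_block_merge([{"severity": "warning"},
                          {"severity": "error"}]))          # True
```

In a CI job, the boolean would simply become the job's exit status, so warnings are reported but only errors stop the pipeline.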
That is, for example, if we do the scans in merge requests: the approach would be to scan only what changed. Please send me any questions by e-mail or catch me in the hallway. That's fine, thank you. Well, alright. Thank you very much everybody for joining me in this talk. It's called Containerized Ansible. Let me introduce myself first so you know me. My name is Guillermo Gomez; it's kind of difficult for Czech people to pronounce, so I decided to go for the nick Gizmo. So if you go to the technology park and ask for Gizmo, that's going to be me. I'm from Venezuela. I have a master's degree in electronic engineering; that meant six years in the university back then in Caracas. I started my career working with the telco company Alcatel, you may know them, and I worked for Alcatel six years. Then I worked for another German company that was based in Berlin, Potsdam and London, three sites, and they produced internet appliances. That was my first touch with Linux, in 1999, ok? If you want to know, I got my degree in 1994, so stop doing the math, yes? After that work on internet appliances I really got interested in the Linux world, so I started embracing the free and open source movement. I tested all the distros, as you probably did in your early stages in the open source movement; we fought the wars, and we won. I spent a lot of time with Fedora; I was even part of the Fedora board during the 2011-2012 period. I went through translator, package writer, ambassador, speaker, blogger; I did it all with Fedora except developing software, ok? Most of my efforts in Fedora were based in Latin America, so I pushed the communities in Venezuela, Argentina, Peru, Colombia, even Brazil, ok?
So that's me. I started using Ansible, switching from Ruby-driven technologies after Python kind of started killing Ruby. I'm a Rubyist, but I decided to go with Ansible as a strategic decision in my career, and it was a good decision. I decided to go with Ansible and OpenShift, and those are good recommendations for anyone's career. I've been working with this, automating anything I can. My previous role was as a sysadmin at Red Hat for developers; I was in charge of maintaining an internal platform for developers. In short, we were in charge of about 35-40 OpenShift clusters: different architectures, different sizes, different platforms, bare metal, virtual, OpenStack. It was a very complex environment, so for me, my success in PSI, ok, dealing with OpenShift, was based on Ansible; I barely used oc commands with OpenShift during my time in that team, which was more than two years. And there is my colleague Adam, a new engineer; he started as an intern with us and we developed a lot of Ansible code for testing those 35 clusters, to know their status. Probably many of you are going to ask: what about Prometheus, Grafana, Splunk and all those specific platforms? Well, those platforms were not consolidated in an easy way, they were not easy to consume, so for us they were not really useful. What was really useful was just to test: can I create a project on this cluster right now? Can I create a deployment on this cluster right now? If it fails, I know something is wrong. That was my job. Then I decided on another role; now I'm a technical account manager, and now the challenge is different. I'm not dealing directly with the clusters, because I have no voice, no keyboard, to get into those clusters that are owned by my customers. But my customers have problems with them. Being a technical account manager in the end means supporting them: they have an issue, how can I support them? My automation now is not dealing with the OpenShift that I control, but the OpenShift that I can simulate
in a laboratory, where I test the configuration of my customer and try to reproduce the problems they have, as quickly as possible. So this is my background; you can ask me about this. This talk in particular is about how Ansible relates now to the containers world. I'm going to try to make the talk engaging and ask you: what is your understanding of what a container is, and what problem is the container trying to solve? Maybe you have an answer to this. Someone? What is a container? Nobody knows? Please, come on. Is a container a new concept for you? Ok, what problem does it solve? Deployability, what else? What does that exactly mean? Alright, I will add to that, that's great. For example, I just switched my laptop: my previous laptop's warranty expired, and Red Hat offers me a laptop, you know, at a very small cost, so I decided to go for it. But in the process my operating system got wiped out, so I got my new Fedora installed. What you're seeing is the new Fedora 38; it's not the CSB, it's the standard Fedora 38. The problem I'm facing now is: ok, I'm going to do my presentation tomorrow and I need to install Ansible and all these things, and you know what, between my Fedora and my, well, I started to have problems with the dependencies. So the container basically solves a very old problem that is, in the end, very simple: you need to package all your dependencies in one little piece, so you can be sure that you have what you need to run the piece of software you want, without colliding with a neighbor. Many people come from the world of virtual machines. Virtual machines did the same thing, let's say, but in a very, very heavy way, ok? Very heavy. You had to emulate the hardware, you had to emulate everything, and on top you had to put the operating system, so you had a whole machine. Of course it had some benefits, ok, but that's what the container is. So how does Ansible relate to this? That's the
introduction to the tools I'm going to talk about, because in the end I'm going to fly through the slides, ok? What I'm telling you right now is more important than what is on the slides. In the previous iteration of Ansible through the web from Red Hat, it was, what was the name? Tower. How did Tower solve the problem of actually running the playbooks, running the workflow job templates, running the job templates? It was, what was the name? Virtual environments, from Python. You needed those kinds of vessels with the dependencies you need to run your playbook, your workflow, your job template in a consistent manner. Tower evolved, but in the end it got very complex to keep up with this, and in parallel the container world started to rise, so it was obvious that we had to ditch the Python virtual environments and start using containers for running our playbooks. These are the very basics of the presentation I'm going to make, so you can see where I'm talking about the dependency problem, the virtual environments, the problems when you need to patch the OS, ok? So there are two new tools, based on containers, that I'm going to talk about. Those two new tools are Ansible Navigator, who knows Ansible Navigator? Do you know a bit about Ansible Navigator? Ok, thank you. The second tool is Ansible Builder. Even though I'm going to go through all the slides very fast, the concepts are pretty easy: Ansible Builder is going to let you create an image for you to run your playbooks, and Ansible Navigator is basically a text user interface for you to run your playbooks, or any playbooks that you had developed before. So I'm going to switch right now. Has anybody used Builder before? Alright, who hasn't? Why?
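The input Ansible Builder consumes is an `execution-environment.yml` definition file, which typically looks like the sketch below. The `version` / `images` / `dependencies` structure follows Builder's documented schema, but the base image tag and the listed collections and packages are invented for illustration:

```yaml
---
version: 3
images:
  base_image:
    name: quay.io/ansible/creator-ee:latest   # hypothetical base image
dependencies:
  galaxy:
    collections:
      - kubernetes.core      # example Ansible collection
  python:
    - kubernetes             # example Python dependency
  system:
    - git [platform:rpm]     # example system package
```

Running something like `ansible-builder build --tag my-ee:latest` against this file would produce the execution-environment image that Navigator (or Automation Platform) then pulls to run playbooks.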
I mean, it looks like everybody already tested it, so I don't know if I need to switch to the terminal right now. But the idea of Builder is that it's the tool to create your image, whatever image you need, basically based on the OCI format, even though it does support the Dockerfile format also for the containers. The good thing is that it supports a script format; I mean, you create a simple bash script and you end up with an image. Did you realize that? Or are you always using a Dockerfile? Who has used a script for creating an image? What is the virtue of using the script? Repeatability, what else? You can do whatever you want; it's not just a Dockerfile, so you can do whatever you do right now with your knowledge of bash, and on top of this you create your image. That's what Builder is. Ok, I'm not going to go into the terminal, we don't have the time for this. I'm also going to review, a little flash, the commands that we have in Ansible. We have a lot of them, ok? Probably some of them you never tried before; I'm not going to cover all of them. And then we have, finally, Ansible Navigator. It's a text UI. The analogy I came up with is nmcli. In the old times we were all used to ifconfig, interface config, for networking, and then came ip something and all the, you know, parameters. It's very complex, it's very powerful, but at the same time it's very hard. So the text UI kind of fits in the middle, to solve day-by-day problems. Nowadays, for example, and this was a surprise, I showed an engineer at Red Hat nmtui and he said, hey, I never saw that before. Really? Alright, cool. For me it was obvious: you don't have a graphical user interface, you don't have a GNOME Shell, you just SSH into your machine, and then this kind of tool simplifies things; you don't need to know all the details of the ip commands. Ansible Navigator is like this, and on top of this it adds some functionality; that's where the
analogy ends so this is the command and then where the problem starts actually this is where the problem starts so I got into the dependency problem trying to solve the dependency problem you see how painful this is yes so I tried this in the Fedora I got an Ansible call version blah blah blah I tried this on the real aid ended up with another version it was a mess still a mess and then I'm trying to figure out but in the end the goal is to get an image that is going to work so when you get it in store this is what you're going to get you can run all those commands like collections config dog help image there is a lot of functionality here so the main point of Ansible navigator is to consolidate to make it easier to consume all these things that actually Ansible provide you it's not a replacement again it's like the network manager CLI it adds some new features and make it easier for you to run some stuff or to debug some stuff one of the for example interesting feature is the line number for 13 which means replay Ansible navigator give you this option that when you run your playbook and something goes wrong then you can actually replay the run without actually running against the target but just in your machine through the locks it's like a dump so it goes through the dump I show you all the steps and you can let's say trace every step on your playbook without actually touching the target system this is one of the very nice features that you can use on Ansible navigator this is the help this is basic information regarding the releases I'm going to make available the presentation for you so you're going to have the links I guess you can pretty much get that information easily so for example just to run like a regular playbook you can just call Ansible navigator run my playbook and you can run it in a non-interactive way actually exactly adding minus MSTD out so it kind of behaves this is the regular Ansible playbook output you know this and this is the Ansible 
navigator run output it looks pretty the same with the same playbook nice but if you run it without the non-interactive you're going to get this and when you get this you can press zero which is the play number zero in your playbook you're going to get the tasks at least one by one zero and one in this case you just have a playbook with two tasks so task number one got in facts second task print something about my laptop but then you can press again zero or one on the left and this is what you get you're going to get all the details of your task run your step here so you don't have to go and re-run the program include the verbosity or whatsoever it's everything already there and when you run for the first time Ansible navigator against your playbook this is what happens it's going to try to pull an image so this is the key part for you to understand your playbook is not running based on your operating system installation of Ansible it's going to run inside that container you're going to spawn an image you're going to pull the image you're going to run the container it's going to run and then you're going to pull out all the details out of it in this case this screen shot shows that by default at that time that I create this it was pulling Ansible automation platform 2.3 EE means execution environment supporter well blah blah blah so now you can look forward that the new Ansible automation platform is based on this so the new Ansible automation platform is based on the execution environment which is a base image so this whole talk is about containerize Ansible so the idea is you're going to create images or you're going to consume images to run your playbooks so here's a quick mapping the Ansible CLI very important new functionalities one of the cutest thing is about inspecting collections the replay already mentioned now about the configuration you can actually run Ansible navigator config so it's going to start gathering information about your actual configuration 
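The run, replay and config subcommands just described, as a quick sketch (the playbook and artifact file names here are hypothetical, and this assumes ansible-navigator is installed):

```shell
# interactive TUI run (press the play/task numbers to drill down)
ansible-navigator run site.yml
# non-interactive, classic ansible-playbook-style output
ansible-navigator run site.yml -m stdout
# step through a recorded run offline, without touching the targets
ansible-navigator replay site-artifact.json
# show the effective configuration and where each value comes from
ansible-navigator config
```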
and this is a nice thing, because again, when you try to run your playbook you may wonder how it is configured, all those things you may not be aware of at the moment you run it. When you run ansible-navigator config, you get all that information on your screen: what values you are using, and from what source each configuration parameter is being pulled. In this case, all the green lines are just telling you that the defaults are in use, and the two lines that are not green are where something was changed somewhere. Then, of course, there are configurations you can tweak. You can have an environment variable telling it which config file to use, an ansible-navigator.yml in your project, or the hidden .ansible-navigator.yml in your home folder. It's a YAML file, it's complex, and there are a lot of parameters you can tweak. One of them is the execution environment you are going to use: you get support on those, you can use other images, or you can create your own. In this case I took a screenshot of what is being used now in my Fedora installation: the image is called creator-ee, and that's the version. And that's an example of using a different image from the CLI, --eei with my image. I'm just showing very simple examples, but imagine your image includes all the modules you need to use in your playbook.

What else? Settings, my image, collections: again, when you run the command you get all the information in your path. The recommendation is, when you create your new Ansible automation, you create a path and put everything in there: your configuration files, your ansible.cfg, your ansible-navigator.yml and so on. Then, when you run ansible-navigator collections in that path, you get all the information that your environment is seeing, so you don't get confused about what is available in your environment at run time. You explore them, you see the index on the left: I'm going for ansible.posix, ansible.posix is number 5, and I get all the modules that ansible.posix includes, and if I press one of the indexes on the left again, I actually get a lot of information about that module, even the examples. So you don't need to go and google "blah blah ansible module"; you browse your collections, you look at the modules installed in your environment, and you can even see who to email and blame about something. OK, that's cool. That's it about ansible-navigator; of course there is a lot more there, this is just kind of an eye-opening presentation.

I'm going to fly through ansible-builder, because I have just ten minutes left. I already told you this is about creating new images for ansible-navigator or Ansible Automation Platform. As I told you, the current ansible-builder version is 3.0.0 in Fedora 38, and it doesn't match my RHEL 8; that's where all the mess started. This is the basic command-line help. It works based on a YAML definition, and the key point right now is the version: it currently supports three schema versions, one, two and three, and you have to go through the documentation to see exactly what every parameter means and what is available: dependencies, Ansible Galaxy collections, the base image, Python things that need to be included in your image. Here are some examples of the files involved: the requirements.yml lists the collections that need to be installed in the image; the bindep file is about the system packages, so if you need the git binary in your image, you specify it like this; and there you specify the Python libraries needed in your environment.
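Putting the files just described together, a version-3 execution-environment.yml might look roughly like this (a sketch; the collection, package and base image names are illustrative assumptions, not from the talk):

```yaml
# execution-environment.yml (ansible-builder schema version 3, sketch)
version: 3
images:
  base_image:
    name: quay.io/centos/centos:stream9   # illustrative base image
dependencies:
  galaxy:
    collections:
      - ansible.posix       # collections, as in requirements.yml
  system:
    - git                   # system packages, as in the bindep file
  python:
    - requests              # python libraries, as in requirements.txt
```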
Well, this is one of the extra slides I needed to add yesterday. I was trying to make this presentation shorter, but I figured this is important, about the versions: schema version 1 is supported by all ansible-builder versions, version 2 from ansible-builder 1.2, and version 3 from ansible-builder 3.0. This is a typical run; in this case I'm just adding -v 3 to log everything on my screen and see how it's working, and you can see that it's actually using the podman build command to create your image. And this is the result after all my playing with this: the ansible-execution-environment one is the custom image created with ansible-builder, the ubi-minimal-based one was created with the Buildah script, and ubi-minimal is the base image from Red Hat, from UBI. So that's it, I made the ten minutes so you can start shooting questions at me and so on. Thank you very much, thank you for the patience, and now it's on you.

No, no, that's packaging... ah, I need to repeat the question. My good friend Andrea is asking about software distribution packaging for ansible-navigator and ansible-builder, and no, they are only available through pip, and I think they distribute some zip files from GitHub, so it's an opportunity for packagers who are willing to do that. OK, other questions? It was so clear, so it's a success. Thank you very much again.

Can I get started? I can start. Good morning, and I'm extremely happy to see so many people in the room; normally the room is full when the talk is either about AI or it's by Dan Walsh. I'm none of that, but I'm here to give a brief talk about what secure development is, and what secure development is from an open source point of view. My name is Huzefa, I am a lead security architect, I work for secure development in product security. I have been a Fedora contributor for a very long time, probably for the last 10 or 12 years. I have been speaking at DevConf, I have spoken at almost all DevConfs except the last one, I think, and I normally speak about security. I speak about
security practices; I have spoken about Heartbleed, Shellshock, those kinds of topics. So why are we here? We are here to talk about secure development. When people talk about secure development, they normally talk about how to write code in a way that is secure, but that's not really the whole picture. Secure development is not only secure code; it is the process of making sure that your system, your project, is secure from the time you design it, to when you actually write the code, when you build the project, when you run it, and when you deliver your code and the binary to the customers.

The question most open source projects will usually ask us comes down to this: people think that secure development is for large companies and enterprises, like Red Hat or IBM, because their developers and customers are the ones asking about security. But most projects start small. If you take the example of OpenSSL, or Mozilla, or Bash, most projects really start small: it's just a few people who get together, there's an idea, they work on it, and in some years it's common to see these projects being used everywhere. Take Bash: it started quite small, but if you look at where Bash is used these days, it's in a lot of places you can't even imagine. When we were working on some Bash security issues, we figured out that Bash is used in network-attached storage devices and in your television set-top boxes, so the cable operator can send instructions to the box and you can watch those channels. The moral of the story is that a lot of projects start really small. So when you start working on a project, it seems like "my project is not big enough to have a secure development process," but that's not really the case. When the project is small, when you are actually designing it, that is the right time to work on secure development. By the time your project is very, very large, and take OpenSSL again as the example, it becomes really difficult to inject security into it: the design is set in stone, you have developers, you have customers. So the right time to do secure development, from an open source point of view, is when you are starting the project. And remember one thing: security can be expensive when you try to add it later. When you start the project, security is cheap; as the project progresses, as you get customers and users, it is really difficult to add security after the fact.

Small projects don't have the inclination, or don't have the money, to do security, because the general thought is that security tools are very expensive. If you look at off-the-shelf security tools like Coverity and those kinds of scanners, they are extremely expensive, so if you are a small project it's very difficult to have that kind of money, that kind of manpower, to have a security process. What happens in the end is that you reach a stage where your project is mature, you have people using it, but there is no security at all, which is a big problem. So we are trying to look at those problems: what can I do to fix this? There are three things, and when I say "I can do," I basically mean: I have an open source project, a few developers, a few users, maybe not a lot of money and not a lot of manpower to invest in security. The first thing is learn: learn what secure development is, learn how it can be done, and learn what I can do as the leader of the project, as a developer, or as the QE person on the project. The second thing is change the mindset: security is a lot more about changing the mindset than about doing the actual work. Like I mentioned, it's a mindset, not a process, and there needs to be security in every stage of your software development life cycle. And, very importantly, observe what other people are doing. There are a lot of people doing good open source security: try to understand what they are doing, what resources they are using, and how you can adapt that to your project and your software. Last but not least, there is an immense amount of free tooling available. You don't need to stay in the mindset of "I need to buy this or buy that"; there is an immense amount of open source and free tools available, and you can use them to do a lot of things. In the rest of the talk we will look at some of those tools and see how they can be useful to us. There are a lot of other open source projects available as well; for example, we will take a brief look at Google's OSS-Fuzz and how you can use it to improve the security of your project. So there is a huge amount of learning and application resources available to work on the security of your project.

To reiterate what secure development is: it is not just code audit, patching and stuff like that. It is security across the entire life cycle of the project, right from the design phase through development, and thinking about security from design to the delivery phase, and to when your project goes end of life. What happens after that? Do you tell your customers that on such-and-such a date the project will be end of life, so if you are still trying to
use it, use it at your own risk? All of those things are related to secure development. Now, there are eight things I am going to talk about, and I know we don't have a lot of time in this talk, but there are eight things a normal open source project can do.

When you design your project: get as much security knowledge as you can, and a lot of this security knowledge is free; there are other resources as well, which I will probably talk about later in the presentation. Know what a threat model is, and we will very briefly talk about threat models later on; know how a threat model can be done, because there are various free tools available to do one, so know that as well.

When you are storing your code: know where you want to store it. If you want to do it on GitHub or on GitLab, what are the risks associated with that? If you want to create your own git repository on the internet, know the pros and cons of doing that. So know where the code is stored, and the security aspects of storing it in that particular place.

When you write the code, and this is the third part: learn what secure code development is, learn how code can be written in a secure way. If you are using memcpy, is it safe, or should you be using something else? If you are using strcpy, is that safe, or should you be using something else? If there are multiple people working on the project with you, see if you can get a peer review before you commit the code. There are a lot of people out there on the internet willing to help you and to criticize you, both of which are probably good for your code. So if you write code, see if somebody can help you review your code and your patches and spot any issues.

When you build your code, SAST is very, very important. My colleague Florencio gave a very good talk on SAST and Semgrep, and there are other tools available as well; we will talk about some of them. There is a lot of integration with GitHub and GitLab that will let you automatically scan your code before, during, or after it is built, so again there are a lot of free tools that can do static analysis. Also understand what secure compilation is and which secure compiler defaults you want to use: if you are delivering your code as a binary to your customer or to the user, know what secure compilation is and the different defaults you can use.

When you deliver your project, whether as source code or a binary, figure out if there is a way to sign the source code or the binary, so that your customers or users will know if there is a compromise. Around 15 years back, if you remember, vsftpd was compromised: somebody inserted a backdoor into the source code, so if you sent a smiley to vsftpd you got full root access. The person who did it got onto the vsftpd servers, put in the backdoor, re-created the tarball and put it back on the server, and the problem was that at that time the vsftpd tarball was not signed, so anybody who used it probably got compromised as well. The good thing was that the author realized this within a couple of hours and could remove the backdoor, and from that point onwards he made sure the tarball was signed. So all of those things are very important when you deliver your code to your customers or users, in the form of either source code or a binary.

Supporting your code: make sure you clearly advertise on your website where security issues need to be reported. If somebody needs to report a security issue, which email address should they use? Do you prefer the emails to be encrypted, or plain text? What is the timeline customers and users are looking at? All of those things are very important, so that people know where a security issue needs to be reported and how fast or slow you are when those issues are reported to you.

When your project is end of life, and this is what I discussed earlier, make sure you clearly mention it on your website. Log4j is a typical example: the earlier version was end of life, it was mentioned on the website in very small words, nobody cared, they still used it, and then you know what happened after that. So mention clearly on the website, in clear words: my project is going end of life in December this year; if you continue to use it, you use it at your own risk, and no security patches will be applied after that.

Last but not least, as I mentioned earlier, there are a lot of free options available, so research them. There is OSS-Fuzz: if your project is applicable for OSS-Fuzz, it's a very good tool that does fuzzing for you free of cost, and if you know what fuzzing is, fuzzing is very expensive, so if somebody is able to do it free of cost, that's an added benefit for you.

We don't have a lot of time, so I'm just going to talk about the things which are the most important. The threat model is number one. A threat model basically means you try to decompose your application and put it on paper: figure out where data flows from one end of a component to the other, and what the threats to the design of your application are, threats from inside and threats from outside. There are many ways of doing it: OWASP has a lot of exhaustive information available on threat modelling, and OWASP has a lot of automated tools as well, so you can use those. There is also a curated list on a GitHub site you can look at; it contains books, resources and free tools you can use.
Threat modelling is not impossible, and you don't need a lot of security knowledge to do a threat model, so that is something which really can be done.

Again, when you write secure code: learn, audit and repeat. The trick is to be paranoid with all the input. Any input which goes into the application, please be very, very paranoid with it; you don't know what will happen, especially if that input is not processed, not sanitized the way it should be. Understand that not everybody on your project is at the same technical level as you are, and probably you are not at the same technical level as everybody else. So if somebody writes code and you feel that code is not written well, make sure you tell that person, so that any future code is written the right way. Learn from each other, do code reviews, do code audits as much as you can.

Use SAST; this is very, very important. Static analysis can be used to find flaws and even plain bugs in your code, and there is a lot of free stuff available that can do SAST. GitHub has free SAST integrated; there are something like 40 or 50 different scanners available, and GitHub has its own query language called CodeQL, which is there by default, but there are a lot of other scanners you can also enable. I think it provides you a thousand or ten thousand minutes per month or per week, something like that, which you can use in GitHub Actions, so you can run these tools with that. It provides very easy integration with CI/CD pipelines as well, so if you are using CI/CD inside GitHub, SAST gives you very good integration.

For fuzzing, like I mentioned earlier, all the cool kids are doing it, but fuzzing is computationally very expensive: you need a lot of computational power, and the signal-to-noise ratio is very low, which basically means that if you fuzz for a couple of hours, days or weeks, you probably find one or two flaws, because that's how fuzzing works. But again, there are a lot of free tools and free resources available, and if your project is eligible for OSS-Fuzz, there's nothing like it. OSS-Fuzz is a Google project in which you give your project to Google and they fuzz it at their end: they have a high-end fuzzing cluster, 20,000 or 40,000 nodes, and whenever they hit anything they automatically file a bug in the bug system and you can look at it. There are other free fuzzers too, like honggfuzz; the only catch is that those need to run on your machine, whereas OSS-Fuzz runs on Google infrastructure, so it's a win-win for everybody.

Like I mentioned earlier, make your security stance known: clearly notify on your website what the security contact address is, who is responsible, whether this is only a part-time project for you and you will get to it when you have free time. Whatever your security stance is, make sure it is available. Secure code is money in the long term: if you write secure code, more people will use it, and you can probably productize and monetize it as well, but if you write crappy code which is insecure, then probably nobody wants to use it. That's my talk.

Yes? [Question from the audience about startups prioritizing features over security, and whether that has changed.] I think the question is about what I mentioned earlier, that secure code is money, especially for startups, where customers are more interested in features and probably less interested in security, and whether the landscape is changing. I think it is changing a lot nowadays. The thing is, if you have a startup and a product that customers are buying, there is probably going to be a time in the near future when your customers realize that your product is not as secure as they would like, that there are threats in your product, and by then it's too late for you to go back to the drawing board and change the design and the code. So I think the customer mentality is changing as well. Plus, what you need to understand is that secure development basically means security baked into your life cycle, which means you don't really need to spend additional resources or additional cycles doing security: you don't need to hire security engineers, you don't need to get your code audited by a third-party auditor who is probably very expensive and is going to charge you a lot. There are a lot of free resources and free workflows available. Like I mentioned, GitHub has integrated SAST, integrated malware scanning, integrated DAST; all of those things GitHub basically has, and you just need to enable them and make them part of the workflow while you write your code. Wouldn't it be great if your IDE, your integrated development environment, told you that the function you just wrote on line 50 is not safe, would you like to revisit it? Then it's part of your development process, and no additional cycles are required later to look at it. And from a startup point of view, if you feel there's no advantage in doing security right now, because the customers are not asking and your priority is more features than security, it may hurt you in the long term. We have observed that with a lot of startups: initially customers buy your product because it's new in the market, but later on there is some other startup doing the same thing, and they basically say, you
know, "we are doing the same thing, but we are more secure." So it hurts you in the long term, I think.

Yes? [Question about bug bounty programs.] The question was: do you feel that a bug bounty program is useful from a security point of view? There are a lot of conflicting views on that. My personal opinion is, like I mentioned earlier, that you need money for a bug bounty program to run. There are a lot of companies who give you free Amazon vouchers or something like that if you find a bug, and there are not a lot of hackers who would go for that kind of thing; people normally need the money. You can be associated with bug bounty platforms like HackerOne and so on. They are useful for a particular kind of thing: when you are a consumer project, a web app or a mobile application, it may be useful, or when you figure out that there are a lot of security flaws in your project but your team is not able to find what those issues are. Also remember that bug bounty hunters are after money; they are not after improving the security stance of your project. So if you have money and you feel it's going to be useful, fine, but I know a lot of companies who ran a bug bounty for 6 or 10 months and realized it was a waste of effort, because the researchers write 10-page reports with screenshots and videos and it was not very useful for them. So if you have the resources, maybe you can give it a go and see if you get anything valid, but it really depends on how many resources you have; with that money I could probably hire a good security engineer and get more output out of him than out of a bug bounty program.

Yes? [Question about which open source projects to look at for inspiration.] The question was that I mentioned you can look at other open source projects for inspiration, so which projects are doing this? I have been working with a couple of projects over the last 10 years, and one I would like to mention is glibc. glibc is currently doing very good work: some time back I saw a white paper, which I think is public on the internet, that talks about all the security features glibc currently has, what their roadmap is, what they are trying to implement and by when, and what resources are required. This is a good thing: you are openly talking about what security features you have, what you want to do, what your roadmap is, so that your users clearly understand what your security stance is. glibc also has a page which says: if you find this kind of report, it is not a security issue, please don't bug us with it. That is very, very important, because people will tell you everything is security: "I found a DoS, this is security; I found this and that, it's security." It improves your signal-to-noise ratio, so fewer people report stuff which may not be security. I think that's one very good example. OpenSSL is doing a lot of good security work now: they are fuzzing with OSS-Fuzz, they are doing auditing, a lot of useful things. Mozilla is another very good example; Mozilla has a bug bounty program as well. But those are large projects; there are very small projects doing good work too, I think.

Yes? Yesterday I spoke with a person who is writing a lot of code in Rust, and he told me he is creating a collection of examples of how you can write insecure code in Rust. The general understanding is: because I am using Rust, I am not affected by all the classical issues that C and C++, those kinds of languages, normally have. So even if you use a safer language like Rust, you should still understand that you can write insecure code with Rust, and you should be familiar with all the different compile-time and run-time hardening options that can be enabled or disabled, irrespective of what kind of language you run.

Come on, there have to be questions. My teacher used to tell me: if you don't have questions, it either means you have understood everything or you have understood nothing. Yes? Thank you, thank you. I would like my government to spend more money on these kinds of things as well, which would be great, actually. The thing is, government agencies are normally not really aware, number one, that there is open source, and that there is open source security as well, even though a lot of open source is being used everywhere, including government installations and their servers. So this kind of awareness is really, really important, and if there is a government doing it, it's a really great initiative. If you are running an open source project and you are eligible for this kind of incentive, you should definitely use it. I would love for other governments and government agencies to be aware of these kinds of things as well, and to be able to spend this kind of money to support these people; it's a really great initiative, I think.

Yes? So your question is how safe it is for us to use code written by AI, and how easy or difficult it is for people to understand and review what the AI is doing. Wouldn't it be great to say, "ChatGPT, please fix all my security issues"? I think the thing about AI is that it really depends on the data which was used to teach coding to that particular model, and if you look at the code on the internet, and that kind of code is teaching the AI how to write code, then I don't think the output will be very good either. That being said, I think there are a lot of security projects for which AI
can be put to very good use. For example, when you do SAST scanning, a lot of false positives come out of SAST. It would be great to feed those false positives into an AI model and have the model figure out whether future findings coming from your scanner are also false positives. It's a feedback cycle: you feed the false positives into the model, the model learns that when this kind of finding comes from the SAST scanner in the future it is a false positive as well, and that feeds back into the SAST database. Those are some of the useful applications. But in the state most AI models are in right now, if there is code generated by AI and you want to use that code, I would be very careful. My personal opinion is that I would have at least one human look at the code to make sure it has been done correctly. Yes? What I feel about SAST tools is that they are like antivirus: if you scan your system with one antivirus, it may not detect something that another antivirus can. So I feel, and this is my personal opinion, that a combination of SAST tools may be more important for your project, depending on its complexity and on what your code base looks like. My second observation is that there is no SAST tool that can scan all languages; it's very difficult to find a tool that scans everything equally well. A tool that can scan C/C++ probably won't be able to scan Ruby, Python, Go, Java, and so on. So in the end, if your code base is very complex and consists of multiple different languages, you may end up using different tools, because one tool is more efficient for this language and a second tool is more efficient for that language. Right now I think the right recipe is a combination of different tools, if you have the resources for it.
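The false-positive feedback cycle described above could be sketched as a tiny classifier trained on past triage decisions. This is a minimal, stdlib-only Naive Bayes sketch under stated assumptions: the finding texts and the history format are hypothetical, not taken from any particular SAST tool, and a real pipeline would use rule IDs, code context, and far more data.

```python
from collections import Counter
import math

# Hypothetical triage history: finding text -> whether a human marked it a false positive.
HISTORY = [
    ("strcpy call with fixed-size source buffer", False),   # confirmed real issue
    ("hardcoded password in test fixture file", True),      # false positive
    ("sql string concatenation in query builder", False),
    ("md5 used for cache key not for security", True),
    ("command built from user input passed to shell", False),
    ("random used for jitter in retry loop", True),
]

def tokenize(text):
    return text.lower().split()

def train(history):
    """Count token frequencies per class (false positive vs. real issue)."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, is_fp in history:
        counts[is_fp].update(tokenize(text))
        totals[is_fp] += 1
    return counts, totals

def p_false_positive(model, text):
    """Naive Bayes estimate that a new finding is a false positive."""
    counts, totals = model
    vocab = len(set(counts[True]) | set(counts[False]))
    scores = {}
    for cls in (True, False):
        n = sum(counts[cls].values())
        # log prior + log likelihoods with add-one smoothing
        score = math.log(totals[cls] / sum(totals.values()))
        for tok in tokenize(text):
            score += math.log((counts[cls][tok] + 1) / (n + vocab))
        scores[cls] = score
    # convert log scores to a probability for the "false positive" class
    m = max(scores.values())
    exp = {c: math.exp(s - m) for c, s in scores.items()}
    return exp[True] / (exp[True] + exp[False])

model = train(HISTORY)
print(p_false_positive(model, "md5 used for etag cache key"))
print(p_false_positive(model, "user input concatenated into sql query"))
```

Each new triage decision can simply be appended to the history and the model retrained, which is the feedback loop: human verdicts flow back in, and findings resembling past false positives get down-ranked in future scans.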
That would be much better than using a single dedicated tool. One problem with using multiple tools is that you get duplicate issues reported by each tool, and for the person looking at the issues it becomes very difficult to figure out what the actual issue is. There are a lot of tools available, free of cost and paid, that can do something called deduplication. Deduplication means the tool ingests the output of the scanners and figures out that the same issue has been reported by scanner one, scanner two, and scanner three, so instead of showing three issues it shows you one. Those kinds of tools are really important, because the output of SAST tools is very chatty: there are pages and pages of logs, and for a developer, or anyone looking at those logs, it may be very difficult to understand what is actually going on. That is the first thing. The second thing is that if the code base is very large, say you are scanning the kernel, LibreOffice, Mozilla, or something like that, you get 10,000 or 20,000 flaws, and it's very difficult for one person, or even a team of three or four people, to actually look at all of it and figure out what is wrong. I'm out of time. Thank you very much for coming. I'm around at the conference, so if you have any questions we can meet in the hallway and chat. Thank you.
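The deduplication step mentioned above, merging the same issue reported by scanner one, two, and three into a single entry, could be sketched as fingerprint-based merging. This is a minimal sketch under stated assumptions: the finding layout (scanner, CWE, file, line) is a simplified hypothetical format, while real aggregators normalize much messier per-tool output.

```python
# Hypothetical findings from three scanners; the dict layout is illustrative only.
findings = [
    {"scanner": "tool-a", "cwe": "CWE-89", "file": "db/query.py", "line": 42},
    {"scanner": "tool-b", "cwe": "CWE-89", "file": "db/query.py", "line": 42},
    {"scanner": "tool-c", "cwe": "CWE-89", "file": "db/query.py", "line": 42},
    {"scanner": "tool-a", "cwe": "CWE-798", "file": "conf/settings.py", "line": 7},
]

def deduplicate(findings):
    """Merge findings that share a fingerprint (CWE, file, line),
    recording every scanner that reported the issue."""
    merged = {}
    for f in findings:
        key = (f["cwe"], f["file"], f["line"])
        entry = merged.setdefault(key, {"cwe": f["cwe"], "file": f["file"],
                                        "line": f["line"], "scanners": []})
        entry["scanners"].append(f["scanner"])
    return list(merged.values())

for issue in deduplicate(findings):
    print(issue["cwe"], issue["file"], issue["line"], issue["scanners"])
```

A fingerprint built from exact line numbers is fragile, since any edit shifts the lines; real deduplication tools often hash a normalized snippet of the surrounding code instead, so the same flaw still matches after unrelated changes to the file.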