Let me introduce you to Lionel, who is going to tell us more about the coala framework. Thank you. Hello everyone. My name is Lionel. I work as a DevOps engineer at e.Voyageurs SNCF, the digital factory that addresses the SNCF group's digital challenges. We are almost 1,500 employees working across three sites, including Lille and Nantes; I work in Lille. We address two main challenges: digital distribution, through Voyages-sncf and, internationally, Rail Europe; and travel information, via the SNCF mobile app. We deliver IT services to them.

Today we are going to talk about static code analysis. We will start with a quick overview and definition of static code analysis. We will follow up with the coala framework and coala bears, and how they are used to do static code analysis in Python. We will see what is coming next in the framework to make static code analysis in Python easier. And we will finish with a Q&A, if you have questions.

Static code analysis: we can define it as a method to extract facts from source code, and to detect and even fix defects in it, without executing it. It is mainly used for code quality, code reviews, and compliance: when you have to run compliance checks for security, or when you just want to ensure that your team's code style is respected. And of course to detect flaws and try to fix them before the code ever runs.

The coala framework. We have a lot of tools today for static code analysis; as you can see, there is a whole zoo of them. And the more tools you have, the more ways there are to configure those tools, and that is pretty hard. So let's categorize those tools into analyzers, as you see up there. You also have the ways we use those tools, via editors, tools, and services, and the ways we consume the results they produce, like exporting the results to JSON or an HTML report.
As you can see, it's pretty complicated to deal with all those tools and the ways we use them. So what if we could have one single tool to manage all this mess? That's why coala was built. coala is essentially an API, and it is language-agnostic: you use coala in Python, but you can analyze code written in any language. It supports more than 60 programming languages for now.

Let's have a closer look. That is the typical constitution of a static analysis tool. We start by using the code as data, with some model extraction to get data out of the code. You can then produce an intermediate representation: it could be an AST, other data structures, call graphs, or control-flow graphs.

If we zoom out to the main goal of coala: as I said, you have data structures. For the analysis, we have rules, also called routines; in coala they are called bears. A bear is how you implement your algorithm for static code analysis. After that, the produced results can come in two forms: outputs and actions. An output is how you report detected flaws and errors; it can also be a fix recommendation for how the code can be repaired, or a compliance report for flaws and code style, as I said before. You also have actions. There are a lot of them; an action could be applying a patch, that is, actually fixing the flaws or defects.

Let's see how to set it up. You can use pip to install coala-bears, which installs everything you need to start working with coala. Or you can use Docker, which I think is the recommended way because you don't have to deal with pip packages; you just use a container that packages up everything you need. There is another way, which is online: there is a beta web page where you can just point it at your code repository and start working on it. Let's see how to use coala.
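To make the intermediate-representation step concrete, here is a tiny standalone sketch using Python's standard library `ast` module (not coala itself) to extract a fact from source code without ever executing it:

```python
# Parse source code into an AST and extract a fact from it,
# without executing the code -- the essence of static analysis.
import ast

source = """
def greet(name):
    return "hello " + name

def farewell(name):
    return "bye " + name
"""

tree = ast.parse(source)

# Fact extraction: collect the names of all defined functions.
functions = [node.name for node in ast.walk(tree)
             if isinstance(node, ast.FunctionDef)]
print(functions)  # -> ['greet', 'farewell']
```

The same tree could feed fancier representations such as call graphs or control-flow graphs, which is exactly the pipeline described above.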
Let's see some code. First, you clone the project. As you see, you have a bears directory and a src directory, and this project mainly contains two languages: C and Python. The way you approach coala is by saying: okay, I have a project with two languages, so let's list the available bears, the available rules that I could use to analyze my code. You run this, and you see a bunch of available bears. For the next slides I will just alias this to coala, to keep the slides short: when you see coala, it means running coala through Docker.

So I want to analyze my code written in Python. I list the bears available for Python, and I choose the PEP8Bear, for example. I don't know much about it, so I get some documentation on how to use it. I also see the optional settings, how to configure the bear, and what the bear can do: this bear can detect formatting issues and can fix them. So you run coala, specifying the bear and your Python files. First you get a diff output: it shows how your code is not compliant with the PEP 8 rules. After that you have actions, as I said before: you can do nothing, open the file, apply the patch (which means making the code compliant), or add an ignore comment. You can also pass an option on the command line to apply patches directly, and another option to specify which action to take.

As you may have noticed, the tool also suggested using --save to save your configuration. That brings us to the configuration file. As I said, with many tools you end up with many configuration files to tell each tool how to analyze your code. In coala you have one file to deal with all bears. When you run with --save, it produces a file called a .coafile. It looks like this: you have sections; it's an INI file.
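A minimal .coafile of that shape might look like this (section names and glob patterns are illustrative; PEP8Bear and SpaceConsistencyBear are real bears from the coala-bears package):

```ini
[base]
files = src/**/*.py
bears = PEP8Bear

[base.c]
# The "base." prefix inherits every setting from [base];
# "+=" appends, so this section sees the Python files AND the C files.
files += src/**/*.c
bears = SpaceConsistencyBear
use_spaces = True
```

By default coala looks for a .coafile in the current directory, so after saving one you can run plain `coala` with no further arguments.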
You have sections, and at least two mandatory settings: the bears, specifying the rules you want to apply to the code, and the files they should be applied to. You can also pass settings on the command line. In the configuration file you have a way to organize your bears, to avoid repeating yourself in the configuration: you have inheritance. You just prefix a section name with the base section's name; as you see, sections one and two extend the configuration set in the base section. You also have the append operator, with which you can append files from section to section: at the base I'm only analyzing Python files, and in section one I want to analyze not only the Python files but also the C files. That's an example: I have a section for all the C files, and in the example section I also want to check that the spacing in my code is consistent.

Okay, let's see how bears work, what bears really are, and how to create your own. As I said, bears are just rules, but a bear is the base construct when you need to write a rule. You have to implement the run function; that is the function executed to run your algorithm. There are two bear classes, local bears and global bears, and your bear should extend one of the two. You can also take user input via the arguments, which are provided by the framework. Local bears run on every file of your project: the run function provides the file name and the file content for you to run your algorithm on, along with the user input and settings. Global bears run the analysis on the whole project: as you see, the run function does not receive a file, but internally you can do whatever you want with your files. Let's see an example: I have YellowBear, and it just prints some logging output.
It extends the local bear, which means I want to run it on every file, and it just outputs a user input provided by the user. Note that at the end, the rule is that you have to yield the result. Why? Because when you combine bears, any bear can consume the results of other bears: if you have dependent bears, you can retrieve the results produced by the inner bears you use in your own bear. I run it, and that's the output: it prompts me to enter my user input, I type my name, and it suggests, as you saw before, actions to take on the result.

To write bears, you have three main categories: native bears, linter bears for linting, and external bears. What are native bears? Native bears are the bears that extend LocalBear or GlobalBear, simply like that; the YellowBear you just saw was a native bear. You just implement the run function and yield some results. A global native bear likewise runs its analysis on the whole project.

Linter bears wrap your own tools. Imagine you have a linter, like JSLint, and you want coala to wrap it: that's when you use a linter bear. You specify the executable, and you have to say whether the bear is global or not. If it's global, your tool analyzes the whole project; if not, the tool analyzes each file individually. The particularity is that a linter bear has to implement create_arguments: that's how you pass arguments to your executable. You also have a way to provide a configuration: if your tool is driven by a configuration file, you can do everything in generate_config, produce the configuration your tool needs, and it will be injected into create_arguments afterwards. Let's see an example of how it works: I have Pylint that I need to wrap, so I create the create_arguments function.
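Since real bears rely on coala's base classes, here is a standalone plain-Python sketch of the two patterns just described: a native-style bear whose run() yields results, and a linter-style bear that builds its command line in create_arguments() and wraps an external executable. The class names are illustrative, not coala's real API, and `python -m json.tool` stands in for a real linter such as Pylint:

```python
# Plain-Python sketch of two bear flavors (illustrative names; these are
# NOT coala's real base classes, just the shape of the API).
import os
import subprocess
import sys
import tempfile


class ToyLocalBear:
    """Native-bear style: run() receives one file at a time, yields results."""

    def run(self, filename, file, user_name="world"):
        # 'file' is the list of lines, as with coala's LocalBear.run().
        yield f"Hello {user_name}, I analyzed {filename} ({len(file)} lines)"


class ToyLinterBear:
    """Linter-bear style: wrap an external executable.

    create_arguments() builds the command line; run() executes the tool
    and turns its complaints into yielded results.
    """

    def create_arguments(self, filename):
        # `python -m json.tool` plays the role of the wrapped linter.
        return (sys.executable, "-m", "json.tool", filename)

    def run(self, filename):
        proc = subprocess.run(self.create_arguments(filename),
                              capture_output=True, text=True)
        if proc.returncode != 0:  # the wrapped tool found a problem
            yield f"{filename}: {proc.stderr.strip()}"


# Native-style bear on an in-memory file.
results = list(ToyLocalBear().run("demo.py", ["print('hi')\n"],
                                  user_name="Lionel"))
print(results[0])

# Linter-style bear on a deliberately broken JSON file.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
    tmp.write("{not valid json")
    bad_path = tmp.name
lint_findings = list(ToyLinterBear().run(bad_path))
os.unlink(bad_path)
print(lint_findings[0])
```

The yield-based run() is what lets coala chain and combine bears: each result is just another item in a stream that other bears, or the framework's actions, can consume.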
It returns a tuple, and the tuple contains every option Pylint needs to analyze my code. That's simply how it works. And through the output settings you specify how the tool's result should be parsed and interpreted.

External bears also wrap your tool, but the tool itself can be written in any language. coala will provide you some data as JSON, and as a rule you have to produce a JSON result, which will then be used by coala. Here is an example: I create a bear that wraps my tool written in Node. I create my script in Node.js, and with my results I produce the JSON expected for my tool to be considered a bear, like this; I just use console.log to output it as JSON.

Going further, the new things coming up: they are creating a way to provide us the AST of any language; a new API using aspect-oriented programming, with aspects and tastes; and a new package manager, so you can just declare the requirements of your bears and it will fetch the npm package or pip package your bear needs, making sure that when your bear runs it has everything it needs to analyze your code. Thank you.

We still have time for a few questions. This gentleman here.

With coala, are you able to do some kind of semantic analysis, to prove some properties of your program, and so on?

The question was: can I do some semantic analysis? I will answer yes: you just have a run function, so you can do whatever you want.

So the intermediate representation in that case is just a Python data structure?

For the local bears, coala gives you the file content and the file name, and you can work with that: with the file content you can do whatever you want, including using tools that do semantic analysis on it. That's how it works. And they saw that this was a little limited, that you cannot do much with such a simple construct.
That's why, as I said, they are working on bringing in some real tooling to give you more abilities. Yes?

Any additional questions? You mentioned it's language agnostic; can you use it for Python 2.7 as well?

Python 2: I think I said it runs on Python 3. And it supports 60 languages for analysis.

It supports 60 languages, so we could analyze a C program from Python, with the complete AST of the C code? Because that depends on the compiler options.

The question was: coala supports 60 languages, and you can use Python to analyze C code, so how does that work, since it depends on the compiler options. You can use a linter bear and provide your compiler options via the bear. You have to see it as a wrapper. That's why I think it's very powerful: you have just one tool, and you can do whatever you want with whatever is out there to do analysis.

Any other question? Okay, thank you. Thank you very much.