Okay, so it's me again. This is a brief experience report about a study I made for one of the builders of the TGV; what I'm talking about is actual industrial software used in the TGV. You may not be familiar with railway regulations: this domain is governed by a standard called EN 50128, which defines various safety integrity levels, from SIL 0, software without safety implications, up to SIL 4, the highest criticality, where a failure can result in loss of life and things like that. As you can guess, the constraints on the software, and of course the cost of development, increase greatly between SIL 0 and SIL 4. What is a mixed criticality system? A mixed criticality system is when you have the same computer running various applications with different criticality levels, or sometimes one application where some parts are, for example, SIL 0 or SIL 2 and other parts are SIL 4. And of course the big issue is how to convince the safety people that a low-integrity component, which did not get the same scrutiny as the SIL 4 components, cannot hurt the high-integrity components. How can you prove that? Speaking of the safety people: these are terrible people. They are in charge of all the safety, and just to give you an insight into what safety is about, I was once discussing something with one of the safety people, and I told the guy, "Well, that can't happen, nobody will ever write that." And the guy looked me straight in the eyes and said, "Prove it to me. If you can't prove it, then go away." So the simplest way to solve the mixed criticality problem is to validate all components at the highest safety level, but of course that means applying the heavy procedures to the low-criticality components, which is very expensive.
Another solution is hardware protection, using MMUs or things like that: make sure you have separate address spaces and so on, so the hardware guarantees that the low-safety software does not harm the high-safety software. Or you can prove your software sufficiently to convince the safety people that this will not happen. In the case at hand, the client had the question of whether to use software or hardware protection. So I was asked to make a study of the software solution, and someone else in the parent company made the hardware study; what I'm presenting here is the result of the software study. The requirements were as follows. We had various components; to simplify, they were either safety or non-safety, SIL 0 and SIL 4, nothing in between. Data could be passed from a SIL 0 to a SIL 4 component, so from low level to high level, but of course those data were deemed unreliable because they were produced by low-safety components. So this could happen only through special gateways that were in charge of checking the data to make sure they were correct. SIL 0 components should not access SIL 4 components, because the SIL 4 components are doing dangerous actions, like controlling the speed of the train, moving a switch, things like that. An unsafe component is not allowed to give an order to something that is in charge of doing dangerous things. There were reusable components that were used both by SIL 0 and SIL 4 components; since they were used by SIL 4 components, they were classified as SIL 4. And SIL 0 components were not allowed to call any other SIL 4 components. In some cases SIL 4 had to call SIL 0, but in that case, same thing, they had to go through special gateways to ensure the safety of the boundary. The solution was based on the notion of child units, and since you are not supposed to know Ada here, I'll give a short slide about what child units are.
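To make the idea of a checking gateway concrete, here is a minimal Ada sketch. The names, the speed bound, and the plausibility check are my own invention for illustration, not the actual project code:

```ada
--  Hypothetical gateway: the only path by which SIL 0 data reaches
--  SIL 4 code.  Data from the unsafe side is validated before use.
package Speed_Gateway is
   Invalid_Data : exception;
   --  Returns the reported speed only if it passes plausibility checks.
   function Checked_Speed (Raw : Integer) return Natural;
end Speed_Gateway;

package body Speed_Gateway is
   Max_Plausible_Speed : constant := 600;  --  km/h, assumed bound

   function Checked_Speed (Raw : Integer) return Natural is
   begin
      if Raw < 0 or Raw > Max_Plausible_Speed then
         raise Invalid_Data;  --  SIL 0 input is deemed unreliable
      end if;
      return Natural (Raw);
   end Checked_Speed;
end Speed_Gateway;
```

The point is that SIL 4 code never reads the raw value directly; it can only obtain a value that has already been checked, or an exception.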
So I talked this morning about packages. A package can be a child of another package, which means it behaves logically as if it were included in the parent package, while still being physically separated in another file if you want; the visibility rules are as if it were nested in the parent. By the way, I didn't mention this before, but in Ada anything can be declared inside anything else. In C, for example, you have no nesting: you cannot declare a C function inside another C function, and so on. Ada allows arbitrary nesting. You have two kinds of child units. A public child is declared simply as a package whose name has the form Parent.Child, and if you add the keyword private, that makes it a private child. A public child is public, so it can be used by outer components, but it cannot access the private part of its parent. On the other hand, a private child can be used only by its parent and its siblings, so it's a closed set if you want: only the same family can use it. The outside world, outside of the parent, which forms a kind of subsystem, cannot use a private child. But in exchange, the private child has visibility on the whole parent, including the private part. So this is how it was used. We defined two empty packages called Safe_Components and Unsafe_Components. These were just root packages, as you can guess, for the SIL 4 and SIL 0 components. On the slide, this is the notation for a public child unit, and in grey, a private child unit. The data on each side and the components were private children of either Safe_Components or Unsafe_Components, which means that the name of a safe component is Safe_Components.Something. So by reading the name you know whether it's a safe or an unsafe component. All the safe components have access to each other, and so have all the unsafe components. But since each component is a private child, the unsafe side cannot access the data or the other components on the safe side, and symmetrically the safe side cannot directly access the unsafe side.
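The two kinds of child units can be shown in a few lines of Ada; the package names here are mine, just for illustration (specs only, bodies omitted):

```ada
package Parent is
   procedure Operate;
private
   Secret : Integer := 0;  --  visible only within the family
end Parent;

--  A public child: any unit can "with" it, but its visible part
--  cannot see Parent's private part.
package Parent.Pub is
   procedure Service;
end Parent.Pub;

--  A private child: only Parent and its other children can "with" it;
--  in exchange it sees all of Parent, including Secret.
private package Parent.Hidden is
   procedure Touch_Secret;
end Parent.Hidden;
```

Trying to `with Parent.Hidden;` from an unrelated unit is rejected by the compiler, which is exactly the enforcement mechanism the design relies on.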
Now, on top of that, we define a public child of Safe_Components called Shared_Services. These are the SIL 4 services that are shared between SIL 0 and SIL 4. On the other side, Unsafe_Components has a public child, an exchange memory component, that provides access to the unsafe data. The body of a public child has access to the hidden private children. So those shared services can access the other safe components, but since they are SIL 4 components, that's okay. And similarly, the safe components can access the unsafe data, but only through the exchange memory provided here, because the body of the exchange memory can access those data. So you see that all the rules about who has access to what are covered by this structure. Of course, that's not completely sufficient, because, as I mentioned, you have low-level accesses. But in Ada, everything has to be visible. For example, there is a function that allows you to violate the type system, but you have to use a function specially intended for that purpose, because sometimes you need that, and you have to declare that you depend on that function, so it's always visible. And because the safety people need to be convinced, we also had to prove that it was not possible to cheat with the rules that I presented. Audience: What exactly are you trying to prove? What's the property you're trying to convince your safety people of? That nobody cheated. Audience: That's the global property? Yes. Because, for example, you have ways to cheat with the private parts, with unchecked conversion and so on. So this was complemented with the use of a tool. It's a tool developed by Adalog, my company, called AdaControl, but it's a free tool, so it's appropriate to talk about it here; it's under the GMGPL license. It's a static analysis tool to check programming rules and various things in Ada.
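Putting the pieces together, the package hierarchy described above might look like this in Ada. This is a sketch under my own naming assumptions (the real project's unit names and contents are not shown in the talk):

```ada
package Safe_Components is end Safe_Components;      --  SIL 4 root
package Unsafe_Components is end Unsafe_Components;  --  SIL 0 root

--  SIL 4 data: a private child, invisible outside the safe family.
private package Safe_Components.Data is
   Target_Speed : Natural := 0;
end Safe_Components.Data;

--  Public child: the only SIL 4 services visible to the SIL 0 side.
--  Its spec cannot "with" the private children, but its body can.
package Safe_Components.Shared_Services is
   procedure Report (Value : Integer);
end Safe_Components.Shared_Services;

--  Public child on the unsafe side: the gateway through which SIL 4
--  code reads SIL 0 data, with the checks living in the body.
package Unsafe_Components.Exchange_Memory is
   function Checked_Value return Integer;
end Unsafe_Components.Exchange_Memory;
```

With this layout, a unit named outside the two families simply cannot name the private children, so the compiler itself enforces the access rules.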
And so it was sufficient to ensure that there was no unchecked programming in the user's code, that no language checks were removed through compiler options, and one more thing that was not prevented by that structure: if people declared variables in the specification of public packages, other units could access them without going through the protection. So that was forbidden by programming rules, and that was checked by AdaControl. In the end, what was achieved? First, it's easy for the reader to identify whether a component is critical or not. That's very important: when you read code, you have to know which rules apply to the component, and simply by reading the name, you know it. Moreover, by the visibility rules, if you name a component Safe_Components.Something, the visibility rules of Safe_Components apply automatically: you are allowed to access only the safe components, and that is enforced. Which means that if, just by mistake, you don't name something appropriately, or if you try to do something that's not allowed by the safety rules, it will not compile. And that's the best thing you can have, because you don't have to check it after the fact, and you don't have to debug it; if it doesn't compile, that's the easiest way to check, of course. And, just to be sure and to please the safety people, in that case a quite simple analysis is sufficient to prove that nobody cheated. So, well, we're here to advertise for the language. What's interesting is this idea that in Ada you have sufficient tools to translate your requirements into the language, so that if your requirements are not obeyed, it will not compile. Which is the best you can have. Audience: Do I understand correctly that even your SIL 0 component has to be fully type safe? Excuse me? Audience: Do I understand correctly that even your SIL 0 component has to be fully type safe? Oh, yes. That's normal Ada. Sure.
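The loophole that the coding rule closes is easy to show. A sketch, with names of my own invention: a variable declared in the visible part of a public package can be assigned by any client, which would bypass the checking gateway entirely.

```ada
--  Forbidden by the project's programming rule (checked by AdaControl):
--  a variable in the visible part of a public package specification.
package Unsafe_Components.Sensors is
   Raw_Speed : Integer := 0;               --  loophole: any client may
                                           --  read or write it directly
   function Checked_Speed return Integer;  --  the intended access path
end Unsafe_Components.Sensors;
```

The child-unit structure cannot prevent this by itself, because the variable is in a public spec; hence the separate rule, enforced by static analysis rather than by the compiler.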
Audience: But, you know, overflows, underflows, there are sometimes attacks that can circumvent that. No. First, you have to trust the component. Second, we are talking about safety here, not security; that's a completely different issue, okay? Yes, but you have to understand: some people think that trains are easier than planes, because you can stop a train and you cannot stop a plane. But that's not true, because on a plane you have a pilot; if you have a problem, the pilot has the possibility to take over the control of the plane and save the day, okay? A TGV running at full speed, if it applies what they call the "square wheels", an emergency braking, it takes three minutes and a half and three kilometers to stop the train, okay? So safety depends one hundred percent on the software. If there is any problem, the driver can't do anything to avoid it. So it's a different constraint, but it's as hard as airplanes. Audience: At which level does your static analyzer work? Does it have to be integrated within the compiler? No, it's an external tool. Audience: Okay, does it parse the language itself? Yes and no. It uses a special library that has been presented at previous FOSDEMs, called ASIS, which allows you to leave the parsing and the difficulty of the compilation to the compiler; then it allows an external tool to walk the decorated tree of the program. In a sense, the semantic tree is built by the compiler and offered to external tools, and ASIS is a library that allows you to go through that tree. That's the basis of this tool and of many other Ada tools: they use ASIS, so you don't have to worry about the parsing yourself. Audience: But you would have to trust the compiler for your safety, right? Yes, but that's a different issue, of course. Well, the compiler has to be certified, and you have different techniques for that. There is something called certification by usage.
Well, that's the best you can do for a compiler, because you cannot certify a compiler at SIL 4 level. It's just too complex; you just can't. So this is complemented by various techniques. According to DO-178, the airplane standard, the code generated by the compiler, the assembly, has to be read by humans and checked against the source code. There are people who are paid to read the Ada source on one side and the assembly on the other side, and to check that they match. Well, it doesn't mean that humans don't make more errors than compilers, but it is assumed that humans will not make the same kind of errors as compilers, so the double checking might be useful. In trains, this is not required; we just have to obtain from the compiler vendor a list of known bugs and to prove that those bugs are not exercised, for example. So, I think that's it now.