So, thank you all for coming. Thank you. Thanks to the host for organizing this event. So, I'm Fabien Chouteau, Embedded Software Engineer at AdaCore. Here, you can find some of the silly stuff I do. And today, I want to talk about alternative programming languages for safe and secure RISC-V. So, I will first start with a bit of context. Then, I will try to explain the philosophy of the languages of my choice. And then, I will try to open up the perspective with some ideas on how the RISC-V community can keep the door open for alternative programming languages. So, the first question is, what do I mean by alternative? Well, basically, it's everything that is not C or C++. Okay? So, I know that this is probably the best audience for this kind of talk. But probably, most of you are thinking, you know, why? What is wrong with C and C++? I'm perfectly fine. I like it. It's great. So, we could talk about Stockholm syndrome, but I think I will focus on something more positive today. So, there are a lot of different programming languages, with a lot of different principles, and of course, they all try to help you make the best software, faster, by reducing the debugging time, reducing the maintenance cost, increasing the portability, et cetera, et cetera. So, I'm sorry if your favorite language is not in the list here. First, let me say that there is no silver bullet. If someone tells you, okay, use this language, it's going to be great, all your problems are solved, just run away. There's no such thing. But there is a lot of progress and improvement to be made on top of what you usually get. And why am I calling these languages alternative? Because usually when you get a piece of hardware, you always have a C or C++ compiler with it, which is not necessarily true for all the others. So, I have a very clear bias towards two of them, and those are the languages that I will talk about today.
As I said, at the end of the presentation I will try to open up the subject a little bit more: why do I think these two languages are relevant to the RISC-V community? So, these are Ada and Spark, what some call system-level programming languages, or embedded, or, you can say, bare-metal programming languages. They compile to machine code like C, C++, Rust and others. And what I really like about these languages is that they really bridge the gap between, on one hand, being very high level, and I will show you some examples, and on the other hand, being really close to the hardware, giving a lot of control to the developers. So, some keywords about Ada and Spark. These languages are really designed from the ground up for safety and security. What's probably most important are these two points here: powerful means of specification, I will show you that, and strong typing. What's important to note here as well is that Ada and Spark are not only strongly typed, they also give you a lot of ways to define your own types, which is really important. And then you have all the good stuff: object-oriented programming, concurrent programming, generics, et cetera, et cetera. So, this is my very personal way of explaining the philosophy of Ada and Spark. Programming is all about communication. You have something in your head, you have an idea, and you want to express it to different people, and to different tools and machines as well. Of course, you talk with the compiler, because that's what is going to turn your code into something that runs on the CPU. You're talking with other tools: static analyzers, provers, I will talk about that again. Users of your API, of course, they have to understand what you have in mind when using the API. Your team, colleagues working on maintaining the software that you wrote. And, of course, yourself, because we all know very well that in two weeks we will never remember what this piece of code is doing.
So, let's start with a simple example. I'm writing a driver for this servo motor. You know, this is a piece of hardware that you can control by setting the position, the angle. So, this is how I would write the API in Ada. In other languages, you might just use a floating point as the argument of your subprogram to set the angle. But a floating point is really not enough information for the API, because it can be radians, it can be degrees, it can be a percentage from 0 to 1, or minus 1 to 1. There's not enough information. With Ada, you have the means to specify what you really mean. So, you declare your own type with some restrictions. You say, okay, it's a float, but I only allow values between minus 90 and plus 90. Just by doing this, you're giving a lot of information: to the compiler, which will decide the right hardware representation for this type; to the checkers, which will check that the values are within the range; to the users of the API, et cetera, et cetera. Another means of specification in Ada is what is called programming by contract. So, you have a precondition, something that must be true when you call the subprogram, and a post-condition, which is guaranteed to be true when the subprogram returns. A very classic, basic example: you write a stack. Of course, it doesn't make sense to push something on the stack if the stack is full. So, you express it in the API. And same thing, once you have pushed something on the stack, well, it's not empty anymore. This is called programming by contract because the contract is that if you give the right parameters, you will get the right outcome from the subprogram. So, now let's take an example that is probably more relevant to this room. Something that I don't have time to talk about: there is some kind of real-time operating system within the Ada programming language itself. There's actually a blog post about this; if you go to blog.adacore.com, I have a blog post about it.
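The servo API described above could be sketched like this in Ada; the type and subprogram names here are illustrative, not from the talk's actual slides:

```ada
--  Illustrative sketch: a constrained floating-point type makes the
--  unit and the valid range part of the API itself.
package Servo is

   type Angle_Degrees is new Float range -90.0 .. 90.0;
   --  Values outside -90.0 .. 90.0 are rejected, either at compile
   --  time or by a runtime range check.

   procedure Set_Angle (Position : Angle_Degrees);

end Servo;
```

A caller writing `Servo.Set_Angle (120.0)` with a static value would be flagged by the compiler directly.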
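The stack example of programming by contract might look like the following sketch; the names and the fixed capacity of 16 are assumptions for illustration:

```ada
--  Illustrative sketch: Pre/Post contracts express when Push may be
--  called and what it guarantees on return.
package Stacks is

   type Stack is private;

   function Is_Full  (S : Stack) return Boolean;
   function Is_Empty (S : Stack) return Boolean;

   procedure Push (S : in out Stack; Value : Integer)
     with Pre  => not Is_Full (S),   --  no pushing onto a full stack
          Post => not Is_Empty (S);  --  afterwards it is not empty

private
   type Int_Array is array (1 .. 16) of Integer;
   type Stack is record
      Data : Int_Array;
      Top  : Natural := 0;
   end record;
end Stacks;
```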
So, right now, I'm working on porting this real-time operating system to RISC-V. And so, I have to work with the platform-level interrupt controller, the PLIC, to handle interrupts. So, this is from the specification. Very quickly, the idea is that there is an interrupt notification. Some piece of software here will claim the interrupt, saying: I want to service this interrupt. The PLIC will answer with an interrupt ID. The software handles the interrupt, and then we signal the completion of the interrupt to the PLIC. So, how do we specify this in our code? First, we define our types. This value here will depend on the implementation, but let's say the last interrupt ID is 15. This is a common design pattern in Ada. So, the range should be 0 to 15, but actually I add one more value, which gives me the opportunity to define something that is not an interrupt. An invalid value, if you want. And usually, we use the prefix Any in that case. So, I define my range, and I define that the last value of the range means no interrupt. Then I define a subtype, because I also want to express when the interrupt ID is valid. And so, this is a subtype of the full range. Okay, so let's write the API, write our specification. We need to extend, let's say, the definition that comes from the hardware. This is what I'm doing here. This function here, Claimed, which returns the last claimed interrupt, is not really implemented by the hardware. It's not mapping a hardware feature. But I need this function to write my specifications later. This is why I flag it as Ghost. This means that the compiler knows it's only used for verification, and the function will not be in the final executable. So, this function tells me what the last claimed interrupt is, and potentially, as we can see, it's of the Any type, so it can be no interrupt. So, now what is the contract for claiming an interrupt? We can claim an interrupt, and it can be that there's no interrupt to handle.
So, potentially, we have no interrupt here. What we want as a precondition is that there is no interrupt claimed when we start doing this. So, we cannot claim an interrupt when there's already one claimed. And once we return from this function, the interrupt claimed is the result of this function. Okay. And the last point is to complete the interrupt. So, again, I have a contract here. I can only complete a valid interrupt, so it's not of the Any type. And my precondition is that there is an interrupt claimed, and that the interrupt here that I want to complete is the one that was claimed. And the post-condition is that there's no more interrupt claimed. So, I'm asking the experts here: I'm not sure this is really a valid representation of the hardware specification. But what we can say is that at least we can talk about it. It's expressed, and everybody is able to reason about it and to discuss it. Now, what happens? How are these constraints and contracts actually used by the tools? There are different ways. The first one is runtime checks. Maybe you noticed, but the contracts are actually written just the same as the code that you write in Ada. And so, the compiler can produce code for them, which means every time you call the Complete subprogram, there will be some piece of code that checks the precondition is true, and, when you return, that the post-condition is true. Obviously, this has a performance penalty. But this is still something we want to do, for instance, for debugging or testing, where performance is maybe not the most important thing. And what this means is that when I debug, I have right away all the information about what's going wrong in my application. And same thing for testing: you save a lot of time writing your tests, because you already specified the boundaries of your inputs and outputs, and everything is checked. Now, we can do a little bit better than that.
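Putting the pieces of this PLIC specification together, a hedged sketch could read as follows; the names and the limit of 15 interrupt IDs are approximations of what the talk describes, not the actual code:

```ada
--  Sketch of the PLIC driver specification described in the talk.
package PLIC is

   --  0 .. 15 are real interrupt IDs; one extra value means
   --  "no interrupt", hence the Any_ prefix convention.
   type Any_Interrupt_ID is range 0 .. 16;
   No_Interrupt : constant Any_Interrupt_ID := Any_Interrupt_ID'Last;

   --  Subtype covering only the valid interrupt IDs.
   subtype Interrupt_ID is Any_Interrupt_ID
     range Any_Interrupt_ID'First .. No_Interrupt - 1;

   --  Ghost function: used only in contracts, compiled out of the
   --  final executable. Not a hardware feature.
   function Claimed return Any_Interrupt_ID with Ghost;

   --  Claiming requires that nothing is already claimed; afterwards,
   --  the claimed interrupt is the result (possibly No_Interrupt).
   function Claim return Any_Interrupt_ID
     with Pre  => Claimed = No_Interrupt,
          Post => Claimed = Claim'Result;

   --  Completing requires a valid ID, and that it is the one that
   --  was claimed; afterwards, nothing is claimed anymore.
   procedure Complete (ID : Interrupt_ID)
     with Pre  => Claimed /= No_Interrupt and then ID = Claimed,
          Post => Claimed = No_Interrupt;

end PLIC;
```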
Of course, at some point, when we release the software, we want to remove the runtime checks. So, there are multiple solutions for that. First, the compiler: because we express a lot more with the contracts and with the strong typing, the compiler can do some basic verification. So, if you do something really obvious, like statically setting the servo angle to an invalid value, the compiler will find it very easily. Then, the static analyzer. If you don't know, a static analyzer is a tool that will do its best to find bugs. Sometimes we compare it to peer review: the tool will analyze the code and try to find bugs. Sometimes it will find things that are actually not bugs, and some bugs the static analyzer will not find. The advantage of Ada and Spark in this situation is that, because we give a lot more information, the tool is much more capable of giving a good result. And the last step, which is somehow the ultimate goal for software verification, is formal verification. Formal verification is actually doing a mathematical proof that there is no bug in your application. This is what we do with Spark. So, Spark is a subset of the Ada language, in a way like MISRA C is a subset of C, but as I said, Spark is for formal verification. Spark transforms your software into a mathematical proof, and then the tools will be able to tell you if there is no bug at all. This is extremely powerful, because you can say: I have a mathematical proof that there is no buffer overflow in my application, there is no division by zero, there is no integer overflow, and, for instance, that I follow the API contracts listed above. So, of course, these are very, very strong guarantees that you get from this tool. On the other hand, as you can probably guess, it's more difficult to achieve this level of safety, because you have to say really precisely how your application is supposed to work. So, all of this is what we can call functional safety. That's the ultimate goal of Ada and Spark.
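As a tiny illustration of the kind of property the Spark tools can prove, here is a sketch (not from the talk): a package marked with SPARK_Mode where the type constraints make absence of division by zero provable statically.

```ada
--  Sketch: with SPARK_Mode, the GNATprove tool can show statically
--  that the division below can never divide by zero.
package Safe_Math with SPARK_Mode is

   --  The subtype excludes zero, so B = 0 is impossible by typing.
   subtype Nonzero is Integer range 1 .. Integer'Last;

   function Ratio (A : Natural; B : Nonzero) return Natural is
     (A / B)
   with Post => Ratio'Result <= A;
   --  With A >= 0 and B >= 1, the result is always in 0 .. A,
   --  so the post-condition is provable, not just tested.

end Safe_Math;
```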
To be able to say that your program does what it's supposed to do, and only what it's supposed to do. So, this was the really high-level part: how do I specify my application, and how do I check that it works? As I said at the beginning, Ada and Spark are also great languages for hardware access, for manipulating the hardware. For every type that you define in Ada, you have the high-level view and you have the low-level hardware representation of the type. So, here it's not necessarily very interesting, but I can specify the size and the alignment. For enumerations, same thing: I can define the size, and define the hardware values that are used for each enumeration literal. For records, so a record is more or less the equivalent of a struct in C, but much more advanced, but let's say it's equivalent here, again we have the high-level definition of my type and the low-level representation specification. So, here we can say really which bits in the byte will be used by each field of the record. And the ultimate goal, and the ultimate benefit that you get from this, is that you don't have to do this kind of thing anymore. This is really error prone, there is no checking whatsoever, and, I would say unfortunately, this is more or less the industry standard for drivers, but it is really, really, really easy to mess up. So, in Ada, with all the hardware representation that we defined, this is what we would do. Here I'm not using any pointers at all. I declare a variable, and I tell the compiler: do not allocate this variable on the stack, do not allocate it on the heap or anywhere else, I'm giving you the address where this variable is allocated. Again, very important, this is not a pointer. And then I can just assign the value to the field I want to modify. So, as you can guess, this is really, really powerful.
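The hardware-mapping pattern described here could be sketched like this; the register layout, field names, and address are made up for illustration, and `System'To_Address` is a GNAT-specific attribute:

```ada
--  Sketch of Ada hardware mapping: high-level record type plus a
--  low-level representation clause, and a variable placed at a
--  fixed address without any pointer.
with System;

package Device_Regs is

   type Baud_Rate is range 0 .. 2 ** 12 - 1;

   --  High-level view of a (hypothetical) control register.
   type Control_Register is record
      Enable : Boolean;
      Baud   : Baud_Rate;
   end record;

   --  Low-level view: exactly which bits each field occupies.
   for Control_Register use record
      Enable at 0 range 0 .. 0;
      Baud   at 0 range 1 .. 12;
   end record;
   for Control_Register'Size use 32;

   --  Not a pointer: the compiler places this variable at the given
   --  (made-up) memory-mapped address.
   Control : Control_Register
     with Volatile, Address => System'To_Address (16#4000_0000#);

end Device_Regs;
```

Modifying one field is then just an assignment, for instance `Device_Regs.Control.Enable := True;`, with no manual shifting or masking.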
One slight problem that we have with this is that it's a bit cumbersome to write all these things, especially with modern microcontrollers, for instance, that have thousands of registers. So, something nice that happened in the world of ARM microcontrollers is the definition of the SVD format. SVD is a hardware description, more or less, of the memory-mapped registers, and with the SVD2Ada tool we can generate all the hardware mapping in Ada. And I will come back to this, because I think this is something really, really important. Another point which might be of interest: Ada is really easy to interface with C applications. So, for instance here, I have a C function that I want to use in Ada. I can just import it. I say it's a C function and I give the symbol here. What's really interesting is that you still benefit from the specification features. So, you can have a C function and still put preconditions and post-conditions on it. And to export an Ada function for use in C, it's more or less the same. Okay, so that's it for the, let's say, very quick introduction to Ada and Spark. Maybe you're wondering where this is actually used, and the answer is here: avionics, defense, rail and space. These are the really core domains of Ada and Spark. Of course, what is common to these domains is that failure is not an option. And what's really important to see as well is that in most of them, not necessarily defense, but the other three, you not only don't have any right to fail, but you also have to prove that your software is correct. So, before you put an aircraft in the air, somebody will sign and say, okay, this aircraft is safe, and you have to prove it to this body, this authority. But we also have new, emerging domains starting to use Ada. The automotive industry: there have been some really bad stories these years about software in automotive.
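The C interfacing described above might be sketched as follows; the C function and symbol name are hypothetical, used only to show the shape of an import:

```ada
--  Sketch of Ada/C interfacing: importing a C function into Ada
--  while still putting a contract on it.
with Interfaces.C;

package C_Binding is

   use type Interfaces.C.int;

   --  Binds to a hypothetical C function: int c_sum(int a, int b);
   --  The precondition is checked on the Ada side, even though the
   --  body is implemented in C.
   function C_Sum (A, B : Interfaces.C.int) return Interfaces.C.int
     with Import,
          Convention    => C,
          External_Name => "c_sum",
          Pre           => A >= 0 and B >= 0;

end C_Binding;
```

Exporting an Ada subprogram to C is symmetric, using the `Export` aspect in place of `Import`.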
So, we have some companies from Japan coming to us, asking us how they can use Ada and Spark to improve their software. Security as well. As I said, mostly with Spark, you have some very strong proofs, like the fact that you know there's no buffer overflow in your application. This is of course really, really interesting for security, and so we have some nice projects going on, like Muen, which is a separation kernel written in Spark. We have the French National Security Agency that also wrote a microkernel in Spark, these kinds of things. And just a quick word about the company I'm working with. There are some, I think, interesting points here. We are developing a software development ecosystem around Ada and Spark, so compilers. What's important, I think, and what is nice here, is that the Ada compiler, called GNAT, is part of the GCC tool suite. And everything we do at AdaCore is open source. So, we have IDEs, code coverage, et cetera, et cetera. And we do... okay, I won't say it too loud, but we do support C and C++ as well. We support dozens of platforms, including RISC-V since last year. We do frontline support. That's one of the values when you are doing open-source software: you have to be really helpful to your customers. And if you don't know what this is, well, lucky you. And so, big announcement for this week. I'm really happy about this. AdaCore joined the RISC-V Foundation just this week. So I'm really happy. And I was supposed to be able to do another announcement this week, but it's going to be next week, okay? I'm sorry. So follow us on Twitter and elsewhere; we have a big announcement around RISC-V for next week. Where am I? Okay. So, now I just want to give you a quick getting-started overview. I won't do any live demo, because that's a recipe for failure. These are the two solutions that you can use right now, two hardware solutions that you can use with Ada very easily. So, as I said, the Ada compiler is part of GCC.
So if you know how to compile GCC, you can also quite easily compile GNAT and use it on any hardware you want. I'm mentioning these two because we have out-of-the-box support for them, so it's going to be easier. This one is the HiFive1 from SiFive. You probably know it. This is the TinyFPGA BX. What that means is, basically, any FPGA; I have a blog post using this one in particular, and I used the PicoSoC soft CPU, I think, on it. So, quick instructions. You go to our download page on the community site, and you can download the community edition of our tools. You have the cross-compiler here, and I also recommend taking the native one, because there are the IDE and the Spark provers as well, if you want to have a look at that. Then you go to GitHub. The objective of this project is clear: to develop drivers in Ada to use on microcontrollers, and in particular, we have support for the HiFive1. And after that, you should be able to do something like this. Yeah, videos in PDFs, that's not a good solution, really. Well, I think you get the idea. Okay. So for the last part of my talk, I want, as I said, to open up the topic a little bit. And I want to give, not necessarily advice, because I'm not really in a place to do that, but maybe some ideas, from my point of view, on how the RISC-V community can keep the door open for alternative languages. So first, I want to say that the RISC-V community is already doing very well at supporting alternative languages, mainly by contributing to open-source compilers. And so, I watched the talk this morning about LLVM, which was really interesting. As soon as you have support for RISC-V in GCC and LLVM, you're almost already there. That's really important. And because of the very early support in GCC, we were able to start programming in Ada and Spark very quickly. Debuggers, of course: GDB, OpenOCD, and QEMU; I think I'm particularly interested in this last one.
It's very important to have some simulation tools, to be able to check your implementation quickly. So, some of the challenges for alternative language maintainers, let's say. The first one, I think, is the complexity of the instruction set extensions. And actually, from this morning's presentation, I think I should also mention the ABI. So if you are a hardware provider, you know you have this kind of CPU, you use this kind of instruction extension, so you can make compilers for your customers, and you can check them. For us, potentially, we can have any customer using any kind of hardware, and we have to be able to support them, so we have to be able to produce a quality compiler and toolchain. So the more combinations of extensions there are, the more difficult it will get. Something else that I want to mention is that knowing what is really implemented by the hardware is really important. To take the comparison with the PowerPC families, if some of you are familiar with them: these days you get the name of a microcontroller, and it's impossible to know what's going on inside. So being very clear about which extensions are used is really important. These two are probably going to give us trouble: deviations from the standard, and custom or proprietary extensions. As soon as you go outside the standard, maybe you will say, okay, I have this very nice feature I want to add, it's going to be great, and I will do a special patch of GCC for it. Well, you run the risk of putting every alternative programming language out of the game, because we maybe won't be able to use your patch if it is not contributed upstream, and we probably won't be able to really test those features. So every time we see something like this from our customers, it's going to be more difficult for us. Then, the reference implementations in C. I guess we cannot expect hardware vendors to provide the drivers and libraries in every other language.
What I would like to mention is that it would be nice to keep the alternative languages in mind when doing a reference implementation, maybe getting in touch with the different communities to ask if they are willing to participate in writing an implementation in Ada, in Spark, in Rust, in whatever. Also, what I want to mention is that C is actually... It's an okay language for a reference implementation, let's say, because more or less it's the common basis for everybody. All programming languages have some way or other to interface with C. Going beyond that, going into C++ territory, will, on the other hand, make things quite complicated. And so, the last point for me today, and as I said, I'm going back to this: the SVD format was really a game changer for us in terms of support for ARM microcontrollers. As I explained, it can be difficult to write the hardware mapping. So having a format such that you can take basically any microcontroller and generate the low-level representation makes it really easy to start programming that microcontroller. And so I think that the RISC-V community should take inspiration from this. And actually, I don't know if there are already projects going on, but I would say that we are willing to participate in this kind of format definition. At the least, I think SVD is really the minimum, and probably we can go beyond it. For instance, what I have in mind is specifying the CPU characteristics inside this format. I was talking about, for instance, the instruction set extensions. Specifying the memory banks, that would be very useful. Also, one of the big problems with SVD is that it's very monolithic. Let's say I have two microcontrollers, and they use the same I2C controller inside. Unfortunately, there will be two separate SVD files and no real way to know that it's actually the same I2C controller. So potentially, I should be able to write only one driver, but it's going to be very, very difficult.
I know that in the RISC-V community, some people started to do some analysis of the SVD files, trying to find similar patterns across SVD files to identify common controllers, but I really think that having a modular representation would be a great advantage here. And since the trend, and what we are maybe going to see with RISC-V, is the ability to generate and create custom chips, custom microcontrollers, if the tool that generates this custom microcontroller could also generate the custom SVD, that would be really, really, really helpful. So that's it for my presentation. I hope that you got an idea of what it can mean to do something different from C and C++. If you want to have a look at Ada and Spark, I really recommend going to this website. It's a new interactive website, so you don't have to install anything. You just click, and in the browser you will be able to compile and run examples. And you can follow us over here on Twitter, and join the Ada subreddit, where you will see news about the technology and the community. That's it. Thank you. Okay. Do we have any questions? Maybe? Yes? We kind of... Oh, yes, sorry. So the question is that the next Microsemi FPGAs will have RISC-V cores, and do we have any plans to support them? The answer is yes, but I don't know really when, so far. Okay. Yes? Sorry, runtime. So the question is, how good is the runtime performance of, I guess, Ada and Spark compared to C? The answer is that it depends on the features. Ada has more features than you would find in C, if I think about exception propagation, these kinds of things. If you use similar features, we have similar performance in terms of code size and runtime performance. Some of that is thanks to the fact that we use the same backend. It's GCC, so performance is really similar.
Of course, when you start using more advanced features, there's a runtime penalty, but that's a balance between the features and the performance that you want. Yes? From your examples, I got the impression that your language could actually use hardware extensions, for example, for boundary checks, because you limit the value range of data paths. Is there anything planned to work on a proposal in that direction, together with the Rust folks, while we have the opportunity to define hardware extensions? Okay, so the question is: it looks like the specification features of the Ada language could use, or could lead to, some hardware-specific implementation to help enforce those restrictions. I would say the answer is definitely yes, but we are not really in this kind of ecosystem yet, unfortunately. We joined the RISC-V Foundation, and this is something, so we are definitely willing to participate in this kind of effort, but that's not really something we do right now. So yes, there are a lot of things. In Ada, you have native fixed-point support, you have modular types, et cetera, et cetera. So there are a lot of things that could use some specific hardware, but yeah, we don't really do these kinds of things right now. We use what's available as best we can. Yes? Is there any possibility to define, like, clear-on-read or set-to-clear registers? I mean, you showed how you model the address into the variable. I mean, especially when it comes to microcontrollers, as you mentioned a few times, you sometimes have these exotic... Okay, so I will try to rephrase the question. The question is, is there any possibility to specify, what did you say? Set-to-clear registers, et cetera, et cetera. So no, there's not really that kind of precision.
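The fixed-point and modular types mentioned in this answer can be sketched in a couple of lines; the names and ranges here are illustrative:

```ada
--  Sketch of two Ada features that could map to specific hardware:
--  native fixed-point types and modular (wrap-around) types.
package Type_Examples is

   --  Fixed-point: values represented as scaled integers, with a
   --  declared precision (delta) and range.
   type Temperature is delta 0.5 range -40.0 .. 125.0;

   --  Modular type: arithmetic wraps around modulo 256, with no
   --  overflow check, like unsigned arithmetic in C.
   type Byte is mod 2 ** 8;

end Type_Examples;
```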
What we do have is the possibility to specify, for instance, for a 32-bit register... The compiler will try to optimize the access and maybe sometimes do an 8-bit access, which is not always allowed by the hardware. So we have ways to specify that the entire register must be used, these kinds of things, but nothing like what you said. Okay? Let's see then. Thank you.
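The "use the entire register" specification mentioned in this last answer can be sketched with the GNAT-specific `Volatile_Full_Access` aspect (the register type and address below are made up for illustration):

```ada
--  Sketch: Volatile_Full_Access forces every read or write of this
--  object to touch the whole 32-bit register, never a partial
--  8- or 16-bit access. GNAT-specific aspect.
with System;

package Full_Access_Reg is

   type Status_Register is mod 2 ** 32;

   --  The address is hypothetical, for illustration only.
   Status : Status_Register
     with Volatile_Full_Access,
          Address => System'To_Address (16#4000_1000#);

end Full_Access_Reg;
```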