It is almost time. Yes, we can start. This is the last presentation in the Ada devroom for today. We are happy to welcome Yannick Moy, is that correct? Yes. From AdaCore, who has been working on a new evolution: pointers in SPARK. Pointers were not part of SPARK, but a lot of progress has been made.

Thank you. I hope you will agree with me that there is no better way to end such a long day than with pointers and analysis. At least, I am very happy with this project. When I started working on SPARK at AdaCore ten years ago, I had previously worked on analysing C and C++, and I had also worked on analysing Ada, and in all of these languages it ended up as a nightmare. Finally, with SPARK, we had a language that could be analysed. But there was always this little stone in your shoe: you cannot use pointers in SPARK. So we were always on the lookout for a solution, and for the last two years we have been working on bringing some form of pointers into SPARK. That is what I am going to present.

So first, what is SPARK? It is a subset of Ada, but not just any subset: it is a subset that provides you with guarantees when you use the associated tools, the formal analysis tools that come with SPARK. The guarantees range from proving the full functional correctness of a small part of the code down to something much simpler, the stone level, which already gives you good guarantees about the code, such as no side effects in functions and, precisely, no use of pointers. Between these two, you have what we call the bronze level, which gives you guarantees about the initialization of the data you read, correct data flow, and no aliasing. Then the silver level is about absence of runtime errors, which is a big deal: that is where you catch buffer overflows, integer overflows, all these kinds of overflows, and the other runtime errors that matter for safety and security.
And the gold level is where you start proving properties: you use your contracts not only for dynamic verification, but you prove your type invariants and program contracts. The further you go, the harder it is for the tool and for the user, but hopefully the tool helps you climb this ramp.

So why didn't we support pointers until now? Because of the view SPARK has of pointers; the view, in fact, that any analysis has of pointers. If you want to do sound analysis, where you do not miss any errors, that is the complexity you face with pointers: you can end up with a mesh of aliases, and with all the traps that pointers lead to. In Ada, which has full pointer support, you can end up with double frees, with memory leaks, with dangling pointers that point to deallocated memory. There are a number of concepts and features in Ada that help you avoid these problems, but still, with unchecked deallocation (the Ada name for free) you have all of this. And there are uses for pointers in SPARK, in particular when you have data structures that need pointers because they need to grow. That is typical for containers that do not have a fixed size, because their elements do not have a size known at compile time: for example strings, or any indefinite type in Ada, which you must point to. The same goes for class-wide types, the Ada objects that were mentioned before. Or you may have a recursive data type: if you want your own list, tree or similar recursive data structure, you need pointers. So what changes with ownership, this concept that has been used in other languages, the most famous being Rust? Ownership brings a great solution, concurrent reads with exclusive writes, which is exactly what you need to analyse code. That is what we already use in SPARK for analysing code with references.
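As a minimal illustration of the last use case, here is a hedged sketch in Rust, the language the speaker compares against (the type and names are mine, not from the talk): a recursive data type must go through a pointer, here `Box`, so that the type has a finite, compile-time-known size.

```rust
// A recursive list needs indirection: without Box, the compiler
// could not compute a finite size for the type.
enum List {
    Cons(i32, Box<List>),
    Nil,
}

// Walk the structure through the pointers to count the elements.
fn length(l: &List) -> u32 {
    match l {
        List::Cons(_, rest) => 1 + length(rest),
        List::Nil => 0,
    }
}

fn main() {
    let l = List::Cons(1, Box::new(List::Cons(2, Box::new(List::Nil))));
    println!("{}", length(&l)); // prints 2
}
```

The same shape (a type containing a pointer to itself) is what the speaker means by a recursive data type requiring access types in Ada.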
References are special pointers, if you like: pointers at the executable level but not at the source level, which come into play when you pass arguments to a subprogram as in, out or in out parameters. In that case SPARK already analyses the calls to make sure there is no conflicting aliasing between paths through which you will write: there will be only one writer inside the subprogram, and if there is a writer, there will not be any reader through another path. We already do that in SPARK for references, and of course for pointers we want to do the same kind of anti-aliasing check, with some additions, because with pointers you can assign pointers all over the place. You can create your own local aliases, and this is the kind of thing we want to prevent; that is where ownership kicks in. When you assign a pointer to another object, you move the ownership. If you stop there, you can do useful things, but you are still stuck: for example, you cannot traverse a recursive data structure without destroying it, because each time you move your current pointer you take the ownership of the part you traversed before. What you need for that are local handles that restore the ownership when their scope ends: that is when we borrow or observe the data, which maps to the Rust concepts of mutable borrows and shared borrows. So what is provable with this pointer ownership? We have a prototype right now, and you can write this kind of code with a pointer type declared "is access T", which is the way to denote a pointer to T in Ada, and you can implement Swap_Contents and Swap_Pointers. I will not show the bodies, but you can imagine them very easily: Swap_Contents swaps what is underneath the two pointers, by dereferencing them and using a temporary variable, and Swap_Pointers simply swaps the actual values of X and Y.
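Since the SPARK move rules mirror Rust's, the Swap_Pointers body the speaker walks through can be sketched as a hedged Rust analogue (not the actual SPARK code; the in out parameters are modelled here as a consumed pair of values that is returned, since Rust expresses in out differently):

```rust
// Each assignment of a Box moves ownership, exactly like a SPARK
// pointer assignment under the proposed rules.
fn swap_pointers(x: Box<i32>, y: Box<i32>) -> (Box<i32>, Box<i32>) {
    let tmp = x; // ownership of x moves to tmp; x is now unusable
    let x = y;   // ownership of y moves to x; y is now unusable
    // let y = y;   // the speaker's mistake: error[E0382], use of moved value `y`
    let y = tmp; // ownership moves back from tmp into y
    (x, y)
}

fn main() {
    let (x, y) = swap_pointers(Box::new(1), Box::new(2));
    println!("{} {}", x, y); // prints "2 1"
}
```

Uncommenting the marked line reproduces the "object was already moved" rejection that the talk demonstrates with SPARK's borrow checker.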
That is why Swap_Contents takes just in parameters X and Y, while Swap_Pointers takes in out, because it changes the values of these objects. And if you put the right precondition and postcondition here, then with the SPARK tool you can prove, first, that all the dereferences (".all" in Ada) are safe, that is, the pointer is not null; second, what you usually prove with SPARK, absence of runtime errors in general; and third, the contracts, here that the postcondition holds. The two contracts are essentially the same, except that here you repeat the non-nullity of the arguments. All of this works because, thanks to the ownership system put in place, we can assume that when Swap_Contents and Swap_Pointers are called, their respective arguments are not aliased. That is very important here; otherwise you could not verify these implementations.

Now let us look at every one of the operations I mentioned: the move, borrow and observe operations. A move happens when you assign a pointer, so either in an assignment statement or when passing an out or in out parameter to a procedure. The thing you assign from loses the ownership of the data and becomes unreadable, and the ownership goes to the thing you assign to. For example, the implementation of Swap_Pointers can be this one: I declare a local variable Tmp that takes the ownership of the value pointed to by X; then I can write into X (I cannot read X, but I can write into it) the value of Y; and then I can write into Y the value of Tmp. That is correct. Now, if I make a mistake, for example if instead of writing Tmp into Y I write Y itself: well, Y was already moved here, into X, so I cannot move it again, and our borrow checker, our implementation of these rules in the compiler, says that there is insufficient permission for doing that, because the object was already moved. If we make another mistake, and instead of moving Tmp we move X:
Here Y was moved into X and X was moved into Y. That does not seem very useful, but at least these two lines respect the ownership principles. However, when you return from Swap_Pointers, the checker realizes (that is why it points here to the spec file) that there is not enough permission for X: you are supposed to return this parameter to the caller, there is an implicit assignment there, but there is no ownership of the underlying memory, because it was moved away.

Now let us look at the borrow. A borrow occurs when you pass an in parameter of an access (pointer) type. Temporarily, the actual parameter in the call loses the ownership of the thing it points to, and it regains it automatically after the call; since it is an in parameter, the call does not change the actual value of the pointer. And there are checks, as for references before, that there is no possible conflicting aliasing between the arguments. For example, if I call Swap_Contents, which takes in parameters, with X and Y, that is fine, they are two different pointers; with X and X, it is not, and that is caught by the anti-aliasing check.

Now let us look at the more complex borrow, the local borrow. We can declare a local variable of an anonymous access type; that is how we distinguish this from a move. Because we write it that way, the ownership system understands that we are borrowing X into this local variable, which means that for the scope of the local variable, X becomes unwritable. Here I can still write to the memory underneath X, but through the borrower, since X no longer owns it during that scope. So I can implement Swap_Contents like that. Let us look at what happens if I make a mistake: if here, instead of the local borrower, I mention X itself, I get an error saying that the object was already borrowed. And that is all for borrows. Finally, the last operation, the observe. The borrow we have just seen corresponds to the mutable borrow of Rust.
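The local borrow, and the observe that follows, map directly to Rust's mutable (`&mut`) and shared (`&`) borrows; a hedged sketch under that analogy (my names, not the talk's code):

```rust
// Mutable borrow: while bx and by live, the borrowed Boxes themselves
// are inaccessible, like a SPARK local borrow making X unwritable.
fn swap_contents(x: &mut Box<i32>, y: &mut Box<i32>) {
    let bx: &mut i32 = x; // borrow the memory underneath x
    let by: &mut i32 = y; // borrow the memory underneath y
    std::mem::swap(bx, by); // swap what is underneath the two pointers
}

// Shared borrow, the analogue of an observe: read-only access to the
// whole tree of pointed-to data, here a slice of Boxes.
fn sum(v: &[Box<i32>]) -> i32 {
    v.iter().map(|b| **b).sum()
}

fn main() {
    let mut a = Box::new(1);
    let mut b = Box::new(2);
    swap_contents(&mut a, &mut b);
    println!("{} {}", a, b); // prints "2 1"
    // Calling swap_contents(&mut a, &mut a) would be rejected by the
    // borrow checker, like SPARK's anti-aliasing check on Swap_Contents(X, X).
    println!("{}", sum(&[a, b])); // prints 3
}
```

Writing through the result of `sum`'s parameter, or through an observed object in SPARK, is rejected in both systems: a shared view is read-only for its whole scope.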
The observe is the regular, shared borrow. It applies when passing parameters of a composite type (an array, a record, or a combination thereof) that has pointers in it. Here we consider that the parameter only provides a read-only view of all the memory underneath, the whole tree of data. The same holds when defining a constant of such a type: in Ada, only the top-level object is immutable, but in SPARK, for this ownership system, we consider that the whole tree of pointed-to data is immutable as well, which allows for more analysis. After the scope of the call, or the scope of the constant, ends, the original object recovers its ownership. During this scope, both objects, the observer and the observed, have read-only permission. Let us look at an example. What we saw was for constants and parameters; the more complex observer is, as before, the local observer, and we recognize it (that is our design choice) by the fact that it is a local variable of an anonymous access-to-constant type, not a named type like before. Here, instead of a move, we again have an observe: the observer gets read-only access to the memory underneath Y, and during its scope Y itself also keeps read-only access. So I can read the local observer and the memory it points to, and I could replace the local observer by Y here; that works. Now if I make a mistake, say I move the final assignment up inside the scope where Y is observed: it is an error to try to write through Y there, since there is only read-only access to the underlying memory, and that is what the borrow checker says.

There are some limitations. We are focusing right now only on what Ada calls pool-specific access types, not the general access types that you define with "access all" or "access constant", because those are much more liberal in Ada, and supporting them properly would make the SPARK rules much more complex. So it will either be limited, or there will not be any possibility to take the address of a variable on the stack. That is the
kind of thing we are discussing right now: defining the exact rules of what we want to allow and what we can implement in the checker. In any case it will be less powerful than the system of Rust. We do not plan to have any annotations; in fact, we want something that integrates very well with existing code, or with code that is regular Ada even if it is new. So we do not plan to have lifetime annotations, and we want the borrowing and observing relationships to be statically known. Some of these constraints come from the wish to have no additional annotations, and some others come from the fact that our goal is different from Rust's: we want to be able to do formal verification, and this statically known relationship is very important for that; otherwise you end up with something as complex as what researchers exploring formal verification of Rust end up with. And there are things that are specific to Ada. When we bring these ideas to Ada, we have different types to deal with, for example array types. When you take an element of an array and move it, you conceptually move the whole array, because you do not know statically which cell it is. So if you want, for example, to swap elements in an array, you will have to do it through a call, so that the call hides the fact that two different elements of the array are moved; you will not be able to do it by just taking index I into a variable and then doing the swap locally. So there are some limitations that come from the limitations of the static analysis.

In terms of roadmap, here is what we expect to do by the end of this spring, around May or June, when we issue the next community release, which now includes SPARK and GPS and everything. We hope to stabilize the reference manual rules; we are still working on these, so if you are curious, have a look, they are online, and they are still evolving a bit, to make sure we have something sound that maps to the implementation. We also want to complete the
implementation of the ownership checking: what I showed is what is working right now, but there is still a lot of work to make it work in all conditions and to implement all the rules that are already defined. And we want to adapt the flow analysis engine: SPARK analyses in two different stages, first flow analysis, a simple local static analysis of all the flows that reaches essentially the bronze level, and then proof underneath, to reach the higher levels of guarantees, silver and above. That adaptation of flow analysis we have not started yet. For the next years, we want to support local borrowers and observers in proof, which we do not do right now; to support proof over recursive data structures, which is more complex, because there what you want is to quantify over the contents of these structures, "for all elements of my list, I have this property" (we have prototypes in the underlying technology, Why3, and we have to lift that to SPARK); and to check absence of memory leaks: the rules are set up so that we can check that by proof, but we have not done it yet. And I think that is all. Thank you.

[Question] I think it is cool too. My question is about the use cases you mentioned at the beginning of the talk. One use case was, for example, a container where you do not know how much content it will hold. Can you go back to your use cases and explain whether, with this new technique, it is for example possible to create an AVL tree or something like that?

[Answer] It is planned, but the proof support does not handle recursive types yet; it will. Of the three use cases that I mentioned, the first two are the easy ones that are already supported, well, partially, because flow analysis is not there yet; but just having a tree which is not recursive, yes, that is already supported.

[Question, inaudible]

[Answer] You are welcome. Yes, SPARK works modularly: in order to be able to do this powerful analysis using SMT solvers
you have to be modular; otherwise you do something else, like symbolic execution, which traverses the whole program. We are in the realm of deductive verification, where we really look at a piece of code on its own, although we do things like inlining, so there are ways to work around these limitations.

[Question, inaudible]

[Answer] Yes, it is the same: I just used examples with plain variables, but the same applies to fields, in the sense that fields are tracked individually. What we have in the implementation is that we unfold on demand the structure of the tree of the type, to go deep inside the fields of a given variable. So you can borrow, for example, if you have something that points to something that points to