I'm going to talk about the SPARK language. Usually I give technical presentations: I present scientific results that we achieve, or tool demos, or tutorials. But not today, because all of that you can find on the SPARK website. Today I'm going to use this opportunity to explain something that we don't show otherwise: a historical perspective of why we got where we are. You can see it on this timeline, which goes back to 1997, going from book to book. We end up with a small library of books on SPARK, maybe a tenth of a shelf, and it's not finished, so maybe one day we'll have a full shelf. Something else I want to touch on is something more recent: the interesting projects that use SPARK. I'll talk about those that are particularly noticeable.

So, going back to the history of SPARK: it started a long time ago, more than 30 years now, in 1987. The company had a great name, Program Validation Limited, and indeed their goal was to give guarantees on the programs that you run. This company went through various renamings and acquisitions: it was Praxis for a long time, and now it's Altran. Over this long time they had great projects. One of the first really big ones was the C-130J, a big military plane, whose mission computer application was done in SPARK. That's more of the safety realm; they also had a security project, Tokeneer, which took about a year and was done by Altran (then Praxis) for the NSA in 2005. We'll talk about it a bit later, because in 2008 this project was open-sourced. Around these events, Altran partnered with AdaCore, and we convinced them that there was some value in pushing the technology forward as open source, so it has been released under the GPL. This past period I won't cover much today; if you are interested, there's an article by Rod Chapman and Florian Schanda that really covers it.

So a number of things evolved, and then we started something new: the Hi-Lite project, whose aim was to simplify the use of all of this. What we ended up doing in this project was rebuilding the world: we took the previous technology, kept the good ideas, and rebuilt the rest. The result was the SPARK 2014 language. In between, while we were rebuilding the technology, projects were still being done. Two noticeable ones: first, iFACTS, which is part of air traffic control over the UK. When you fly to the UK, your flight most likely relies on this software, which helps air traffic controllers foresee possible collisions and drives the decision-making, asking planes to take actions to avoid collisions. The other is the Muen separation kernel, whose first version was issued in 2013; we'll talk a bit more about this one because it is open source. An interesting fact is that these two were initially coded in the previous version of SPARK.
And in the past years, they have both been migrated to SPARK 2014. You may say: well, "migration", isn't that a big word for going from one subset of Ada to another subset of Ada? The thing is, the previous version was precisely not quite a subset of Ada.

Let's see an example. This was the previous version of SPARK. It's not simply a subset of Ada; in those days it was presented as really a different language, and it was heretical to say that it was a subset of Ada. It used a subset of Ada as the operational code, on which it grafted logical annotations, a logical language: there were two languages in one. So for example, here you can see an API about computations on prices, with something that looks like the postconditions the previous speaker just mentioned: it says that when Add returns, it must return a sum which has some properties; same for Mult; and this one returns this exact value, et cetera. You can see that already there were some rich expressions, like these quantified expressions that you see in mathematics, to quantify over a collection. But it was a purely logical language with no relationship to what is executed; it was only for consumption by the analysis.

So we transformed it into this. At first look it may not seem so different from what you had before, maybe a bit lighter in syntax. In fact, all these things are now part of the language: they are executable, they are really code, code that expresses properties. So this really ties into what was just presented before: these are the postconditions that were mentioned in the previous talk, using richer properties like these quantified expressions. And there are additions that we felt were needed in SPARK, that are not yet in Ada, and that we are pushing to be included in the next version of it. Here for example, contract cases. Why do we need richer contracts? Because it's a lot of work to specify pre- and postconditions by cases, and it's very common to have services, functionalities, that behave differently depending on the inputs, or sets of inputs, that are given. Here we have Add; it's a saturating addition: if the expected result is less than a threshold, the result really is the addition; otherwise, if the expected result would be greater than the threshold, it saturates. Contract cases have the semantics that they partition the input space, and this can be checked either dynamically or statically (a sketch follows below). I'm showing it here with the nicer display of symbols that you get with ligatures in a monospace font. Some of my colleagues hate it, but I'll take this occasion to advocate for it: I find it cool, and it's really nice to display properties in a way that makes them easier to understand.
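In plain ASCII, here is a minimal sketch of what such a saturating addition could look like with contract cases. The names (Value, Threshold, Add) and the bound are made up for illustration, not taken from the actual slide:

   Threshold : constant := 1000;
   subtype Value is Integer range 0 .. Threshold;

   --  The two cases partition the input space: the tool checks,
   --  statically or dynamically, that exactly one guard holds on
   --  entry and that its consequence holds on exit.
   function Add (X, Y : Value) return Value with
     Contract_Cases =>
       (X + Y < Threshold  => Add'Result = X + Y,
        X + Y >= Threshold => Add'Result = Threshold);

   function Add (X, Y : Value) return Value is
     (if X + Y < Threshold then X + Y else Threshold);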
So what we built during the Hi-Lite project is an ecosystem of open source tools, based on existing building blocks: Alt-Ergo, CVC4, Z3, Coq. The first three are automatic provers: they take formulas and answer true, false or unknown. Coq is an interactive prover: it asks you for input and helps you work out the proof yourself. All of these have communities; Coq, for example, has a huge one, dozens and dozens of research institutes. The ones most interesting to us are the automatic provers. And Why3 is the orchestrator of this proving technology. What we've done in this project is translate Ada programs to intermediate Why3 code. Why3 is our intermediate language, what is called an intermediate verification language: the assembly language for this kind of deep analysis of code, deductive verification. From Why3, something called a VC generator, a verification condition generator (I'll use the simpler term "formulas"), takes some code and generates mathematical formulas for the provers. We have a shared lab between us and Inria, called ProofInUse, which develops this part. We co-develop SPARK and Why3 so that they have really, really great integration, to go further than what we could do otherwise.

All this, in fact, is for the proof analysis. There are two analyses that we do: there's proof and there's flow analysis. Flow analysis is something simpler that checks all the flows in the program, and it is done at the level of the GNATprove tool itself. So this picture depicts the harder, more complex task, which is proof. In the next two slides I'll present what we do with flow analysis and proof, not at a technical level, just so that you have an idea of what they do.

Flow analysis does this. Here is the signature of a service in Ada. With the Ada 2012 syntax you can add aspects to it, and this aspect, Global, is an aspect introduced for SPARK; the tools that work on SPARK understand it. Here we are completing the signature of Stabilize with the global variables that are accessed by this service: it has additional inputs that are not passed as parameters, and it has an output, the variable Rotors. It might be more than one variable; it might be a name that stands for several variables, an abstract state that is partially or fully hidden. Once you've specified the effects that you want the subprogram to respect, the tool can run flow analysis and check that the program implements this specification. That's the explicit specification. There are also implicit specifications in terms of flows: everything that you compute should serve a purpose, otherwise you get a warning, because writing something that is never read is a typical mistake; and everything that you read should have been initialized. The tool checks both explicit and implicit specifications.

The second analysis is proof. Using the same signature for Stabilize, we can now specify functional behavior: what it expects on entry, here a precondition saying that the mode on entry shouldn't be Off; and specified postconditions, here with an if expression, another addition of Ada 2012: in case of success, there should be a relationship between the old value of Rotors and the new one. Of course the previous contract would be added too; I'm just focusing on one at a time. Once these specifications are added to the code, the proof tool can check that the program implements them. That's the explicit functional specification, but there are implicit ones, namely that there are no runtime errors in the code: all accesses are within bounds, all the arithmetic is also within bounds; think of the range checks of Ada.
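Putting the two slides together, here is a sketch of what the Stabilize example might look like; the exact names and types (Mode_Type, Rotors, Success) are reconstructed for illustration, not taken from the slide:

   type Mode_Type is (Off, Manual, Auto);

   Mode   : Mode_Type := Manual;  --  extra input, not a parameter
   Rotors : Integer   := 0;       --  global output updated here

   procedure Stabilize (Thrust : Integer; Success : out Boolean) with
     --  Flow analysis checks these effects against the body:
     Global => (Input => Mode, In_Out => Rotors),
     --  Proof checks the functional contract, plus absence of
     --  runtime errors, against the body:
     Pre  => Mode /= Off,
     Post => (if Success then Rotors /= Rotors'Old);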
To talk a bit about how we got to the current technical solutions, I chose to present them in terms of the objectives we had for this revamp of the SPARK technology. We had really high-level goals, not technical ones like "let's use this technology, it's better than that one"; we had some of those too, but the high-level goals were these.

First, functional contracts can be executed, tested, debugged, because we want them to be considered like code. We don't want to have to wait for formal proof for them to be useful. We want them to replace less precise comments and specifications in natural language. We want them to be able to replace some of the test oracles that you would otherwise have in artifacts separate from the code. That was one of the main things, and it was completely related to the changes going on in Ada at the time, namely the addition of contracts and other features to the Ada language.

Second, the Ada subset supported in SPARK should be as large as possible. Indeed, the main reason not to use the technology is that it doesn't apply to your code base. We'd like people who use Ada to be able to use this technology as much as possible. That was a big change.

Third, the user needs no annotation to start proving code. There are various levels of what "proving code" can mean; I'm using it here in a broad sense: giving guarantees on the code, whether by analysis or by proof. In the booklet that we wrote with Thales, the one on the far right of my first slide, we have defined five levels of analysis and proof achievements that you can reach. At the lowest level you shouldn't have to work much: you should be able to just take a program that fits into the subset and start analyzing it.

Fourth, you might want to go further than that at some point, but you should need few annotations to fully prove the code; we'll see what "few" means. What we really want to avoid is that it's super hard to get beyond the initial steps.

The last one needs a bit of background: manual proof is something really hard that people usually don't want to do unless the stakes are big. So we want manual proof to be not required.

Looking at each of these in a bit more detail. We want contracts to be executed, tested and debugged. Of course that uses the Ada 2012 preconditions and postconditions; if you were here for the previous talk, you know all about them. That's the aspects Pre and Post. Really, our view now is that contracts are code: we can do with contracts everything we do with code in terms of usability, integration and debugging. SPARK helps you here in that respect; it's more restrictive about what contracts can do. But that needed some additions to Ada, so we were very pushy with the ARG so that they would include in Ada some of the features which we really needed for proof. Quantified expressions: the ability to state properties over a collection. Expression functions: the ability to abstract properties, so that you don't have to inline everything each time, or go to the body for something that you need in a spec. That would be crazy; it would be completely contrary to the spirit of separating the spec and the body, in particular when you want to specify rich properties, not just bounds or things that you would inline. For these two in particular, we pushed within the ARG, including through our colleagues who were members of it, so that these features became part of Ada 2012.
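For instance, here is a sketch of how an expression function lets a spec abstract a rich property stated with a quantified expression; the names (Table, Sorted, Sort) are illustrative, not from the talk:

   type Index is range 1 .. 100;
   type Table is array (Index) of Integer;

   --  An expression function: the property is visible in the spec,
   --  executable, and usable by the prover, without inlining it in
   --  every contract that needs it.
   function Sorted (T : Table) return Boolean is
     (for all I in Index range Index'First .. Index'Last - 1 =>
        T (I) <= T (I + 1));

   procedure Sort (T : in out Table) with
     Post => Sorted (T);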
Making the Ada subset as large as possible was a departure from the previous technology. The previous technology had this correct-by-construction motto: if you followed certain steps and restrictions, it was far easier to verify your program formally. This is true. Unfortunately, it's very rare that you can follow all these constraints from the start. You have to be in a nice greenfield situation where you start over and do everything as you were told from the beginning, and you may need to completely change your ways of doing things, so that's really a big constraint. So we opted for something different. We said: all these constraints can be a kind of coding standard; there are other tools for checking those, and we develop some of them. We're going to accept every feature that doesn't make formal verification impossible. So we included almost everything. What we still exclude to this day are pointers and exceptions; all the rest is in. In particular, all types: discriminated types, types with dynamic bounds, anything goes. No restriction on control flow: you can return from within a loop, anywhere. You can have a complex hierarchy of packages; that's not our business, we know how to make sense of the code, it's a separate issue. Recursion: it's banned in many embedded situations, but in non-embedded ones you might use it, and that's okay, because we can make sense of it. Generics: there was an attempt at supporting generics in the previous version of SPARK, which was quite limited because the attempt was to check generics once and for all. We wanted to allow any generics that the Ada language allows, so we opted for something much simpler, which is to analyze each instance. So we support generics in a way that may be less principled than before, but much more permissive, allowing more code in. (A small sketch below shows the flavor of what is now accepted.)

For all these features, all types, no restriction on control flow, recursion, there were good reasons why they were not supported before. They can be quite difficult to treat in proof, and when we implemented this we ended up, along the years, with subtle bugs: in the treatment of types with dynamic bounds in particular, in complex control flow, in recursion; really subtle bugs in how we generate the formulas, which could end up as unsoundness in the tool. We believe today that we have good fixes for these bugs, but that's one of the reasons why, previously, people chose to restrict the language. We made the choice to allow as much as we could, and of course each of the few bugs that I mentioned was treated very carefully, to make sure that today we don't have them anymore.

Still, the initial version of SPARK 2014 did not support a number of things. We felt we had made a big jump, but some people were saying: well, you don't support OO, you don't support tasking, you don't support the type invariants that were just added to Ada. And that was true; in some respects it was even a step back, because the previous version of SPARK supported a little OO: no dispatching calls, but tagged types, so not really OO, but a bit of it. It also supported some concurrency guarantees, in what was called RavenSPARK, with the restrictions of the Ravenscar profile. But we opted for: let's get the toolset out, and we'll have a roadmap for these as time goes by.
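As an illustration of the kind of Ada that is now inside the subset, here is a hypothetical function mixing features the previous technology excluded: an unconstrained array (so dynamic bounds), slices, and recursion. Proving absence of overflow here would of course still need contracts; the point is only that the subset no longer forbids such code:

   type Int_Array is array (Positive range <>) of Integer;

   --  Dynamic bounds, slices and recursion are all in the subset:
   --  the tool makes sense of them instead of rejecting them.
   function Sum (A : Int_Array) return Integer is
     (if A'Length = 0 then 0
      else A (A'First) + Sum (A (A'First + 1 .. A'Last)));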
Something else: after a few experiments looking at solutions to mix Ada and SPARK code, we opted from the start for something quite fine-grained, and at the same time prescribed by the user. The user says exactly where in their code the SPARK code lives, and the tool analyzes exactly that, so it's quite easy to mix parts that are supported with parts that are not, under the user's control.

The Ada subset is still expanding: over the years we have been adding features, including dispatching calls. For this we use the best that academia has shown possible, which is to check that derived classes provide a subset of the behaviors of their parent classes; the long name is the Liskov substitution principle. That is one of the accepted means to comply with OO in an avionics context now: DO-178, the avionics standard, has a special supplement for OO which says that the Liskov substitution principle should be verified, by testing, but essentially the same applies to proof. On the proof side it means that when you override a function, your precondition may be weaker, so you accept to be called in more contexts than the thing you derive from, but your postcondition must be stronger, so you deliver more, possibly, to the caller, because you might be called through dispatching calls where the static type is one of your ancestors. That is what makes the kind of function-by-function analysis that we do possible here.

We added support for concurrency one year later, again reusing what had been done in the previous technology. We support Ravenscar, really almost all of Ravenscar, even the extended Ravenscar profile that was defined about two years ago. Contrary to the previous technology, we insisted on not having a host of annotations that you need to put in the code: it's mostly generated, all the information about which data is accessed by whom, by which tasks, in order to detect data races. Much less annotation, much less user work; but it also means it's less modular: the tool really works on the whole program to detect all possible data races.

Then, support for type predicates and type invariants. I won't go into much detail, but there are two kinds of these data invariants in Ada, and we now support both. We had to be careful, because Ada defines what it means to be a type predicate or a type invariant dynamically, by checking at some points in the execution that some properties hold, and this is not enough for proof. Type predicates are things that should always hold, what Wikipedia calls strong invariants. Type invariants are things that should hold outside of the defining package, what Wikipedia calls weak invariants; they are weaker in the sense that you are sometimes allowed to break them temporarily while you define operations over your objects. To be able to prove them, you have to be much stricter: you have to disallow them from mentioning global variables that anyone can modify; you have to be able to assume them, for example, when you enter the package that is responsible for the object; and you have to check that these guarantees really always hold when they should hold. So we had to define stronger rules in the SPARK reference manual, rules that we could implement in flow analysis and proof.
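A small sketch of the difference between the two kinds, with illustrative names (the predicate is the "strong" kind, the invariant the "weak" kind):

   package Counters with SPARK_Mode is

      --  Type predicate: a "strong" invariant that must hold at
      --  all times, everywhere the subtype is used.
      subtype Even is Natural with Dynamic_Predicate => Even mod 2 = 0;

      type Counter is private;

      procedure Bump (C : in out Counter);

   private

      --  Type invariant: a "weak" invariant that must hold outside
      --  this package; Bump may break it temporarily inside, for
      --  example between two increments of the Value field.
      type Counter is record
         Value : Natural := 0;
      end record with
        Type_Invariant => Counter.Value mod 2 = 0;

   end Counters;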
An exciting one, on which we are working now, is support for ownership on access types in Ada and SPARK. Ada already has really rich access types, that is, pointers, which prevent most of the usual problems. One which is not prevented, and it's explicit when you do it, is unchecked deallocation. What remains worrying is the safe, automatic management of memory. It's something that other languages, like Rust, have solved for a large class of programs by using this notion of ownership, where a pointer really owns the whole tree of memory underneath it, with mechanisms to borrow this ownership when you do a temporary modification. We have experimented with that over the last year; we even have a prototype. Last week we sent submissions to the CAV scientific conference, for the underlying scientific mechanisms, and to Ada-Europe, for the proposal for the future Ada standard; you can also read the corresponding Ada Issue if you want to be a good language lawyer. So that's the thing that's cooking for, let's say, the next two years, because let's not be too ambitious: the rules will take time to really get defined for Ada. It's a great example where different parties are moving in the same direction, although we will certainly have a slightly different answer for SPARK, which will probably be more restricted.

Okay, so users are lazy. We are users; we want to have results without doing any work, in particular when we start out and want to try a tool: we'd rather just see what good it is for us before we invest any more. That's why it was important that you can start doing flow analysis and proof without further work. Every subprogram signature, in fact, carries an implicit precondition: that all inputs of the subprogram are in their types, and that when it returns, all outputs are in their types. There are some consequences in terms of what you initialize, of course: SPARK is a bit stricter than Ada in that respect, in that when you pass things around they have to be fully initialized, in their types. But that gives a really reasonable default functional contract (a small sketch of this comes below). Then the tool needs to know the inputs and outputs. What are they? Parameters, that's easy, they are in the signature; and global variables, which might be really global, with the lifetime of the program, or simply declared at an outer scope. These are the global variables that are read and written by any given subprogram, directly or indirectly through calls. Here again, this analysis is not modular: it goes through the call graph and collects all the effects, so that you don't have to do this work.

Now, when you want to go beyond that, because you're satisfied with the tool and want to achieve more, you have to invest some work. What we claim is that you now need few, or at least fewer, annotations. "Few" is quite relative to where you start from; few, for many, would mean none, and no annotation is not possible. Again, it depends where you put the goal: if you aim for something easy, you will need less work; for something harder, more quality, you will need to work more. Proof, as I said, is mostly modular: we analyze sub-program by sub-program, using very rich SMT solvers, which means that to analyze calls we need whatever information you give us: preconditions and postconditions. In some cases we can do without, because we have inlining mechanisms: local subprograms that have no contracts can be inlined by the tool. Same for some loops: what we call simple for loops, we just unroll, so that you don't have to add the usual annotations in simple cases.
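To illustrate this default contract, here is a hypothetical function with no annotations at all; the ranges of the subtypes alone are enough for the tool to prove it free of runtime errors:

   subtype Percent is Integer range 0 .. 100;

   --  No Pre, no Post: the implicit contract "inputs and outputs
   --  are in their types" already lets the tool check that A + B
   --  cannot overflow and that the result fits back in Percent.
   function Average (A, B : Percent) return Percent is
     ((A + B) / 2);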
Then there are the things that do not participate in the API. The API is really the specification; for things internal we would like to require no annotations at all, if possible, and we try to go in this direction. You can also factorize annotations: for example the data invariants I just discussed, type predicates and type invariants. Their support really matters here, because not having them in the language means that you would have to specify them in every pre- and postcondition, everywhere you manipulate this data; so that's a really important factorization. A lot of the benefit also has to do with how we now generate formulas, the technology behind it: we completely replaced it with a state-of-the-art generation of formulas. You need much fewer loop invariants, the special kind of annotation you need for loops (a small sketch of one appears below); sometimes you don't need cut points; there are various ways in which the proof work got simpler.

Just to show it in practice, an example where you really need fewer annotations: SPARKSkein, an implementation of the Skein hash algorithm, a candidate for SHA-3. There was a reference implementation of that algorithm, and Rod Chapman, who was at the time the leader of the SPARK team at Altran, wrote the SPARK implementation of it. There's a scientific article about it, and this SPARK implementation required a lot of work. There were many annotations for effects and dependencies, the Global and Depends annotations. There were many preconditions; and I don't count just each precondition here, because a precondition can be tens of lines: I count the conditions, the things that you "and" together, because those are the separate conditions that you want to check. There were a lot of conditions in loop invariants as well, and there were a number of annotations there just to deal with limitations of the tool in terms of scalability. Even with all that, a number of proofs had to be completed manually. So it was a huge effort: someone who is really a worldwide expert in the technology had to invest serious effort just to prove absence of runtime errors on this code. The situation today, with the current technology, is that it's really achievable by anyone; and that's related to what I mentioned before.

Which brings me to the last goal: manual proof. You don't want to read this; that's why it's so small. On the left you have a formula that was generated by the previous tool, one that it could not prove. It's an encoding of a trace in your program, a static path in your code, and these h1, h2 correspond to the assignments and calls that occur on this path, with an encoding of all the data types; and you want to prove this conclusion, which might correspond to a postcondition or to the absence of an error. When the automatic provers of the time didn't succeed, you possibly had to go through manual proof. On the right is a manual script with a number of instructions: do this, prove that by this rule, unwrap this, instantiate that, et cetera, manipulating the formula. This we won't do, at least not in this shape, because these formulas are really far from the code, and this script is yet another language; that's a lot of languages, and quite too much effort for typical projects, even those with really critical software. So what we do now is target automatic provers: we have support for state-of-the-art SMT solvers, Alt-Ergo, CVC4, Z3, and the Why3 platform really targets all of these, with adaptations of the formulas. These solvers handle really well the arithmetic and quantified properties that arise from our encoding. Apart from this specialization of the encoding, we also tune these SMT solvers. So we really focus on automatic proof, and not so much on manual proof like before. The user keeps some control over the proof strategy and the timeouts; I won't go into that, it's more tutorial-level material about using the tools.
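Before moving on to projects, here is a hedged sketch of the one loop annotation that usually remains, a loop invariant, on a hypothetical array-zeroing procedure:

   type Nat_Array is array (Positive range <>) of Natural;

   procedure Reset (A : in out Nat_Array) with
     Post => (for all I in A'Range => A (I) = 0)
   is
   begin
      for I in A'Range loop
         A (I) := 0;
         --  The one annotation the prover still needs for general
         --  loops: a summary of the effect of the iterations so far.
         pragma Loop_Invariant
           (for all J in A'First .. I => A (J) = 0);
      end loop;
   end Reset;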
Okay, I have roughly 10 minutes to go over a few projects; the good thing is that there aren't that many. First, Joakim, who was here before (so if you have questions at the end, they're for him), has this quite rich library of things that are useful in practical applications, which is mostly in SPARK: strings, containers, UTF-8 parsers that were officially added just yesterday, and a few wrappers over Ada code, wrappers so that it can be called from SPARK. That's a typical thing we also do: implement something in Ada, but make it available from SPARK.

Next, the firmware for the Crazyflie, this small crazy flying drone that you can buy, an open source platform; the firmware runs on top of it to do the stabilization. My colleague Anthony replaced FreeRTOS on it with the Ravenscar profile of Ada, rewrote the stabilization and communication code in SPARK, and did a proof of absence of runtime errors. There is a cool demo where you can let it fall and it catches itself and lands slowly; at least we can do it in small rooms like this one. It's used in prototyping, teaching and research: the company Sogilis, who often have representatives here, use the Crazyflie for prototyping, and Jérôme from ISAE uses it in teaching and research.

Talking about Jérôme: he is developing PolyORB-HI, a middleware for AADL. AADL is an architecture description language, and there is a tool called Ocarina to generate Ada code from it; of course, all these generated things need to talk through this middleware. The middleware was written in Ada, and now it has been moved to SPARK; he tells a bit of this story in the paper that you will find here, which he presented last year at the SPARK and Frama-C day. He did proof of absence of runtime errors and proof of some contracts over this middleware. And the coolest thing is that he had a colleague, Christophe, and they went through this process in parallel, adapting this middleware from Ada: on one side to SPARK, and on the other side from C with Frama-C, the equivalent of SPARK for C, to prove absence of runtime errors and possibly more. Jérôme finished much faster, and his colleague at some point had to stop because it was too hard. That's not to say it's impossible, but Ada and SPARK made it much, much easier to achieve.

Pulsar, still from Sogilis, still the same people: it's a drone autopilot. There's no public code repository yet, but there will be by the end of the year; it's still ongoing. What they have achieved in the autopilot so far is manual and stabilized flights; soon they will have automated flights, and then really full autopilot. It's part of a project to achieve a really high level of safety for the drone industry: the highest level, level A, of the avionics standard DO-178. SPARK is one part of a process, put together around agile methods and formal methods, to achieve that at reasonable cost. SPARK is used for proving some of the functionalities, and also absence of runtime errors.

StratoX is a glider by Martin Becker, a researcher at the Technical University of Munich. It's the firmware to control this small fixed-wing glider model that collects weather data.
It's launched from a balloon and then glides back down; but how do you get it where you want? You need a kind of autopilot. They did that over a very short period of time, a few months, and they achieved the target: proof of absence of runtime errors and of functional contracts. You can also see the results in the slides at this location. What I show here are all the units of code in the software for the glider: the green ones have been proved for absence of runtime errors and functional properties, the gray ones are the parts in SPARK at spec level only, and the black ones are the parts that stayed out. It's an ongoing project, but they already achieved a lot in a very short time.

Tokeneer, which I mentioned at the beginning, was led by Janet Barnes at Altran: a project of a few months, maybe one year, for the NSA, to demonstrate that it was possible to develop very high security software, at the EAL5 level, with formal methods, at reasonable cost. The cool thing is that all the project artifacts and statistics on the original code are available from this webpage; and we completed a translation to SPARK 2014 not long ago, which is part of SPARK Pro and SPARK Discovery, one of the examples that we distribute with the release.

Now, Muen. Muen, the largest SPARK open source project, is a separation kernel by Adrian-Ken Rueegsegger and Reto Buerki. These two are researchers at the University of Applied Sciences in Rapperswil, and they work with secunet, a security company in Germany. Look at the website if you want to know all the details of the many features that have been added since 2013. It runs on x86-64, and the goal was to have a very, very small code base: 3,000 lines of code in SPARK, 300 lines outside of it; it has grown a bit since then, but it's very, very secure software on which they prove absence of runtime errors. One thing their website shows is that the latest version can run MirageOS on top of it, and they use Muen on their own laptops, so it's really functional. What they say, which is true, is that Muen is really a milestone: the first open source separation kernel that is proved for absence of runtime errors, and a small one, one that can be modified by others because of its small size and its simplicity. So if you're interested in security, look at this one. As I said before, the code was originally in the previous SPARK, and they managed to migrate it to SPARK 2014, so it's now moving forward with the newer technology.

Something interesting: security is a never-ending battle; you have to defend against the attacks of tomorrow. And how better to illustrate the strength of their approach than to see how well they did against Meltdown and Spectre, which were completely unexpected by the whole community? If you look at the mails that were sent on the Muen mailing list, you learn that Meltdown doesn't apply, not necessarily thanks to SPARK, but because they selected a very, very small set of virtualization options; and that Spectre is largely mitigated: although it could be vulnerable, it is much less so than if they had had many possible indirect jumps to defend, which they did not have to begin with. So security is not just a software thing; security is a system thing, and in software there is this issue of the choices you make. But certainly the choice of simplicity, and of aiming for strong guarantees like they did, accounts for a lot of the results they got here.
Very quickly, the SPARK community resources. There's a community release every year; next year it will be bundled with the GNAT community release, so that there is only one download. If you want to get the most from SPARK: there's a small difference between SPARK Pro and SPARK Discovery, namely the number of provers that are shipped. Only one prover is shipped right now with what we call SPARK Discovery, the community version for everyone; SPARK Pro is the one for people who pay us. But on the community version you can in fact install CVC4 and Z3 easily; it's documented, so please look at this to get the most out of proof. In particular, some of the examples will only be fully provable then.

If you want to learn SPARK now, there's plenty of online resources: AdaCore University; another class which is not yet linked on the website but which you can access from GitHub; and all of these will move to a new Ada and SPARK learning website from AdaCore. In addition, there's a blog that will move to the AdaCore blog this year. Things move around, but you will find them all from the SPARK website. The great thing is that some people in the community are also producing learning materials: for example, people are putting together "SPARK by Example" for teaching, and nothing is better than that; someone else produced a really nice tutorial introduction to SPARK. If you produce anything like that, let us know; we're very happy to have it listed.

There are community events; we try to gather both the open source and the professional communities around SPARK, along with researchers. The SPARK and Frama-C Days, which occurred for the first time in 2017, will occur again this year at the National Institute of Standards and Technology in the US, on June 27-28: great program, great speakers, so if by any chance you happen to be there, drop by. We have upcoming presentations at conferences: Alexander Senier, who happened to present his work in the Embedded, Mobile and Automotive devroom just before me, so if you want to have a look at what he's doing with SPARK for security, you can watch the video later; we'll be at the BOB conference in Berlin; and there will be presentations at Ada-Europe, obviously. Finally, if you want to play with SPARK and you're too lazy to download the tools, you can play with it at this address right now; it will also be part of the learning website. For example, click "Prove": proof by clicking.

So, what's your first project with SPARK? That's my question to you. Maybe we have time for one or two questions.

[Audience question:] Is there an impact at runtime between Ada and SPARK?