 Coming up next, we've got a talk on Ada: exploitation in the era of formal verification, a peek at a new frontier with AdaCore SPARK. So please help me welcome to the stage Alex and Adam. Hello everyone, and welcome to our presentation on exploitation in the era of formal verification, a peek at the new frontier with AdaCore SPARK. Some folks already approached us and asked what this picture here is. The answer is pretty dumb, because there is no logic behind it. It's just a kind of visualization that in any software you might have a wrong path, a correct path, a failed path; some paths can be correctly verified, and some paths are just wrong or undefined behavior. The picture looks cool, that's why we decided to use it, but don't look for any meaning in it, because there is none. If you find one, let me know, I'll be happy to listen. I don't want to spend too much time on this slide. My name is Adam, and together with Alex we have been doing security research for some time already, for a few decades to be honest. There is some private contact info and a short bio for us on the slide. This research was done during our work at NVIDIA; both of us currently work at NVIDIA. So the research we are going to present is not only the result of our own work, but of the entire offensive security research team, which I'm currently leading. Big kudos also to Jared, Max, and Nikola, who were also involved in this research. Before we move to the problem of formal verification in software, we need to speak a bit about software vulnerabilities, because that is what formally verified methods are supposed to address. And you cannot speak about software vulnerabilities without touching the problem of memory safety. So, a quick intro about memory safety. What is memory safety? 
It's a term used by security engineers to describe a state of being protected from various bugs related to memory access. A whole bunch of bugs falls into this category: every type of buffer overflow, whether the memory is on the stack, on the heap, or in the globals — every piece of memory which can be overflowed; any kind of out-of-bounds read or write; invalid page access, including null pointer dereference; any kind of use-after-free, use-after-return, use-after-scope, and so on — there are many types of use-after bugs; also any use of uninitialized memory, including wild pointers, memory leaks, and invalid frees. All of that is a memory safety issue. But not all security vulnerabilities are memory safety issues. Some of them, like integer overflows and other arithmetic overflows, are not memory safety issues. Logical issues are not memory safety issues either. Error handling, race conditions — there's a star here, because a race condition on data access is a type of memory safety issue, but in general it's not. And there are a few more. What is worth mentioning is that even though these are not memory safety issues, they often result in memory safety issues. If you have an integer overflow on the size of how much data you want to copy, that is not a memory safety issue, but when the code starts to copy the data, then you will have a memory safety issue. So why do we even speak about memory safety? Because memory safety errors are today the biggest attack surface. And I really mean it: it's the biggest class of software bugs. There are a few reasons behind that. Some of them are easy to spot, like this one: a memory-unsafe language was chosen to develop the core of the execution environment. Every operating system — Windows, Linux, macOS — is written in a memory-unsafe language like C or C++. And the question is: why? 
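To make that last point concrete, here is a small C sketch (a hypothetical parser, not code from any project discussed in this talk) of how an integer overflow that is not itself a memory safety bug turns into one the moment the copy happens:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical record parser: the attacker controls `count`.
 * The 32-bit multiply can wrap around, so the length check is an
 * arithmetic bug -- and the copy loop below then turns it into a
 * buffer overflow, i.e. a memory safety bug. */
static int size_check_passes(uint32_t count, size_t dst_size)
{
    uint32_t total = count * 8u;   /* wraps for count >= 0x20000000 */
    return total <= dst_size;      /* the check passes on the wrapped value */
}

static int copy_records(uint8_t *dst, size_t dst_size,
                        const uint8_t *src, uint32_t count)
{
    if (!size_check_passes(count, dst_size))
        return -1;
    for (uint32_t i = 0; i < count; i++)      /* the loop trusts `count`, */
        memcpy(dst + (size_t)i * 8u,          /* not the wrapped `total`: */
               src + (size_t)i * 8u, 8u);     /* out-of-bounds write      */
    return 0;
}
```

With `count = 0x20000000`, the 32-bit product `count * 8` wraps to 0, the size check passes, and the loop then writes far past `dst`.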
Because at the time these ecosystems were developed, we already had memory-safe languages which could have given us the opportunity to not have these bugs. But we still developed them in memory-unsafe languages. There are a few reasons behind that. One is that at that time the hardware did not perform as well as today — it was much slower — and code in a memory-unsafe language was faster. It also gives developers fine-grained control over memory: over which addresses their code and data occupy, and with what attributes. That is one of the reasons why memory-unsafe languages were chosen to develop these ecosystems. Also, memory safety bugs are very well researched, as everybody knows. Some professional exploit developers have even built exploitation frameworks targeting a specific software stack. I have seen, and reverse engineered myself, one of the frameworks developed to exploit Windows kernel vulnerabilities. You have a full framework: you just find the bug, expose a primitive like read/write, plug it into the framework, and the entire exploitation process magically runs. It bypasses ASLR, bypasses the other mitigations. You just find the bug, plug it into the framework, run it, and it's done. So it's a very well researched area. Additionally, security researchers automate the detection of memory safety bugs. They don't even need to understand what the software does. You just take an open source project, recompile it with various code-coverage instrumentation, then spin up coverage-guided fuzzers, run them, and you have bugs. And it's not rare that researchers just pick the most popular libraries without going into detail about the architecture: let's just fuzz it, let's find the bugs, and we have results. 
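The coverage-guided fuzzing workflow described above typically boils down to a tiny harness. Here is a minimal libFuzzer-style sketch — `parse_record` is a hypothetical stub standing in for a real library entry point, and in practice you would build with `clang -fsanitize=fuzzer,address`:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical target: any function that consumes untrusted bytes.
 * Here it's a stub; in a real harness you call a library entry point. */
static void parse_record(const uint8_t *data, size_t size)
{
    uint8_t header[4];
    if (size >= sizeof(header))
        memcpy(header, data, sizeof(header));  /* toy parsing step */
    (void)header;
}

/* libFuzzer calls this once per generated input; crashes and sanitizer
 * reports become the findings.  The fuzzer mutates inputs to maximize
 * code coverage, which is the "coverage-driven" part. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    parse_record(data, size);
    return 0;   /* libFuzzer ignores the return value */
}
```

The point of the talk stands: writing this harness requires almost no understanding of what the target software actually does.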
And it's not just theory, because big corporations are struggling with this problem more and more and cannot ignore it anymore, so let's study a few cases. The biggest corporation producing software is arguably Microsoft, and they have faced this problem for ages and have given a few presentations about it which I want to recap here. In 2002, Microsoft created something called Trustworthy Computing, TwC. They could no longer ignore the security issues in their ecosystem, because large customers like government agencies and financial companies were being impacted by security problems in Windows and other Microsoft products, and they needed to do something about it. So they created TwC and focused on a few areas, one of which was security, as a pillar of the initiative. As a direct implication of that they developed a few interesting things: the SDL, a Security Development Lifecycle process, to raise the quality of the code from the beginning; they also created the MSRC, the Microsoft Security Response Center, which treats security bugs differently from any other bugs. They also started to fuzz some of their own software, they started to do bug hunting, they started to do exploit research — a lot of interesting stuff. And finally, in 2019, they asked: okay, we have spent so much engineering effort, so much money on this — do we see improvements? Do we see any benefits from all of that? In 2019 they analyzed the last 12 years of all the security cases reported to them, and this is what they saw. This slide is from Microsoft itself. As you can see, the types of security bugs reported to them stay more or less the same over time. 
In 2006, around 70% of all security issues were memory safety issues, and in 2018 we still see 70% memory safety issues. Does that mean nothing changed over all those years of investment? Not exactly, because this next slide shows what did change. The types of memory safety issues being reported shifted. At the beginning, in 2006, a quarter of the bugs were stack overflow vulnerabilities, while in 2014 there are almost zero of them. The reason is not that this bug class disappeared; it just no longer benefits the attacker, because modern mitigations make exploitation of these bugs sometimes impossible or very hard. At the same time, they saw the rise of other types of issues, like use-after-free — this gray box here. You can see a huge rise, because there were no mitigations for it, so it was much more trivial to exploit; then they introduced mitigations, and you can see the number of those bugs shrinking. And now, at the edge of this chart, you can see the rise of the yellow, which is type confusion, because there is no mitigation for it, so people move to that type of attack. So something changes, but overall 70% of the bugs are memory safety issues. Now the Google case — let's talk about that, because it's slightly different. They wanted to avoid the problem Microsoft had of patching bad software by adding new security layers; they wanted to do something differently. They designed Google Chrome with security in mind. They have high code quality. They have fuzzed it constantly since 2015, because they knew the bugs would be there, and they wanted to catch them in-house as soon as possible. And you know Google has power: they fuzz on various platforms, they use the OSS-Fuzz platform, they use the Google Cloud platform, which essentially means they have essentially unlimited compute power. 
They also have a dedicated team improving various mitigations. And in 2020 they did exactly the same exercise: they analyzed all the security issues with high and critical severity reported to them since 2015, when they started fuzzing, and guess what happened. They ended up in exactly the same place, even though they took a different approach: roughly 70% of the bugs are memory safety issues, and half of those are use-after-free bugs. What's more, they knew these bugs would be there, so they designed Google Chrome around that assumption: they heavily sandboxed components and tried to guarantee that one bug is not enough to take over the host machine. But — and this is a quote from their blog — "we are reaching the limits of sandboxing and site isolation." And yeah, that's pretty concerning. And what about the open source world? The open source world is different from the corporate world, and there is not much research in this area, although we found one interesting study, done by the Technical University of Darmstadt in Germany, Continental AG, and Intel Labs, which summarized it as: we find no clear evidence that the vulnerability rate of widely used software decreases over time, even in popular stable releases; the fixing of bugs does not seem to reduce the rate of newly identified vulnerabilities. So it's not much better there, I guess. So, are we doomed? Are memory safety issues something that will be with us forever, something we cannot fix? Apparently not. People are trying to approach this problem, because we understand it much better now, and we have various ways to try to address it. We have new mitigations pushed into the hardware, like memory tagging, which physically, in the hardware, tries to stop memory safety issues from being exploited, or even prevent them from happening in the first place. That is the memory tagging idea. We also have ideas for a new memory architecture, like the CHERI architecture. 
The CHERI architecture treats memory differently, so that also means redesigning the hardware. And there is a different approach, a different path: let's rewrite all of our software in formally verified languages, or use languages that have static methods of proving that memory safety issues do not exist. People are starting to rewrite programs in Rust. But we also have formal methods, which are even stronger: they can formally prove the absence of memory safety issues and the absence of undefined behavior. If your software passes formal verification, it must be safe. And one of the most advanced languages for giving your software formally verified attributes is AdaCore SPARK, which Alex will now speak more about. Yeah, let's recap a few key points from our presentation last year. SPARK is a subset of the Ada language: basically a formally defined programming language together with a set of analysis tools. And the real strength comes from these analysis tools — GNATprove, GNATstack, and a few more. One of the key features is that the code is statically provable, and the proof is produced by the GNATprove tool, which you have to run in addition to compiling and building your code. So what can you achieve with this formal proof? Using the formal verifier you can prove that certain dynamic checks cannot fail, so the compiler may simply omit them from the build, which can give you some performance benefit. It can also prove the absence of runtime errors, so everything runs in a well-defined state: you will not have any errors raised at runtime, and you have hard proofs from the formal verifier. Also, AdaCore SPARK is a memory-safe language, like Rust, but it has a very strong typing system, much stronger than Rust's. So for example, you will not have implicit integer overflow or anything like that. 
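For contrast, here is the C behavior that SPARK's constrained types rule out — a hedged sketch with a hypothetical function, where a SPARK integer declared with `range 0 .. 100` would turn the same addition into a proof obligation GNATprove must discharge before the code is accepted:

```c
#include <stdint.h>

/* In C, unsigned arithmetic silently wraps (and signed overflow is
 * undefined behavior).  This hypothetical percentage accumulator
 * demonstrates the "implicit integer overflow" that SPARK's strong
 * typing system rejects statically instead of at runtime. */
static uint8_t add_percent(uint8_t current, uint8_t delta)
{
    /* C happily computes 250 + 10 and truncates it to 4 -- no error,
     * no warning, no runtime check. */
    return (uint8_t)(current + delta);
}
```

In SPARK, proving this subprogram would fail unless the caller's values were provably in range, so the bug could never reach the binary.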
Because of this, it is traditionally used in industries such as automotive and IoT. SPARK is safety certifiable, and NVIDIA routinely uses it to build safety-certified code. The key point is that you can still write buggy code, but the problems are detected by the tools — you really have to run them to actually verify that your code doesn't have any problems. And the tools are orthogonal to each other: you have to run all of them, because they detect different kinds of problems. So we approached a project written in SPARK and thought about the attack surface we could use. Obviously, no memory corruption bugs are there anymore, because it is a memory-safe language. There are also no raw pointers, so pointer problems do not apply to SPARK — although since 2019 SPARK has a feature that introduces a concept of pointers with ownership rules, pretty much like Rust's borrow checking. What about arithmetic security? SPARK has a very strong typing system, so arithmetic issues also don't apply. Our project was single-threaded, so we didn't care about parallel execution, but if you do care about it, there is an extension to SPARK called Ravenscar which you can use. So what is left as attack surface? Logic bugs and bad design. Obviously, if you prove the wrong things — if you have correct proofs about wrong assumptions — you will still have bugs. Yes, so everything Alex said sounds compelling. In SPARK you can have buggy code; you just must run all of the tools. If you don't run them, you still have bugs. And the tools are orthogonal to each other, so you must run all of them; if you skip some of them, you will not get all of these benefits. And most of the potential security issues will be design issues or logical errors. Bugs can also be introduced by the compiler itself, which we will see later in the slides. So everything sounds very compelling and pretty interesting — but what about the benefits? 
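The "correct proofs about wrong assumptions" attack surface is worth illustrating. Here is a hedged C sketch of a hypothetical access check — the code is type-correct, well-defined, and provably free of runtime errors, so no memory-safety prover flags it; it just enforces the wrong policy:

```c
/* A logic bug that survives formal verification of memory safety:
 * nothing here overflows, dereferences bad memory, or raises a
 * runtime error -- the *policy* is simply inverted.  Hypothetical
 * roles and function name, for illustration only. */
enum role { ROLE_USER = 0, ROLE_ADMIN = 1 };

static int can_write_config(enum role r)
{
    /* Intended: only ROLE_ADMIN may write.  The comparison is
     * flipped, so exactly the opposite set of callers gets access. */
    return r != ROLE_ADMIN;   /* bug: should be  r == ROLE_ADMIN  */
}
```

A prover can only catch this if the developer writes a contract stating the intended policy — which is itself just another place to encode the wrong assumption.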
Do we really see benefits in software written in SPARK, or any formally verified language, compared to non-SPARK code? Before we go there, I want to quickly say what our team does, since we did this evaluation — just to give you an overview of why we think we are the right people to do this comparison. We essentially work as a third-party company inside the company: we take a product after it has passed all the checkboxes and everyone says, okay, ship it, it's secure, and then we say, no, don't ship it yet, let us review the most security-critical components. That's what we do. Since 2020 we have analyzed 17 very high-impact products — these are not small features, they are huge products. Ten of them involved software written in a memory-unsafe language and four of them involved software written in SPARK. What is important is that two of those four projects were fully written in SPARK — there was not even a single line of code in a non-formally-verified language — and two of them used SPARK as the enforcer of the security guarantees of the software: inside, there could still be software written in a memory-unsafe language, but it could not change the security guarantees of the entire ecosystem, so it's fine to have it there. We took a closer look at them and wanted to compare: do we really see the benefits? So, first question: how do we compare them — apples to apples, not apples to anything else, without giving a false impression about quality? It's difficult. What we did was put together the raw data of all the projects we reviewed, choose the ones which could be compared, and sort them by the time frame of the review — how long we spent analyzing the software. We also recorded the total bug counts, the percentage of memory safety issues, and the type of software reviewed. And by sorting by type we were somehow able 
to compare, because if the type of software is the same, we can more or less compare how the projects behave with a formally verified language. The first SPARK project does not apply to this comparison, because it is focused on hardware modeling and we have nothing in a memory-unsafe language to compare it with, so it was out of scope. That left us with three SPARK projects we can compare. Let's look at the first one. This is operating-system-like software which we fully developed in SPARK, and we also had, in a memory-unsafe language, another internally developed piece of software of the same type, written in C and C++. As you can see, we spent three weeks reviewing the memory-unsafe software and six weeks — twice as long — on the SPARK one, but we found only around 5-10 bugs in SPARK, while in the memory-unsafe software we found 45 bugs in total. It's a huge difference. And about 53% of the bugs in the memory-unsafe software were memory safety issues, while in SPARK it's zero. Another piece of software which is good for comparison is project 4, the last SPARK project, which is boot software, and we spent almost the same amount of review time on it as on boot software written in a memory-unsafe language. That memory-unsafe boot software was not written by us, not by NVIDIA — it was written by an external company, although it used NVIDIA products, which is why we reviewed it. As you can see, we found around 5 bugs in the SPARK project and 17 bugs in the memory-unsafe one — again a huge difference. If you look at the percentage of memory safety issues, it matches the industry standard: 65% memory safety issues in the memory-unsafe language and 0% in SPARK. The last project is pretty difficult, let's say, because it's kind of a hybrid project combining two different pieces of software: it's a boot component, it is a 
root-of-trust software, and additionally it has the functionality of resource management software. In the memory-unsafe world we had two separate projects providing exactly the same functionality, so those two memory-unsafe projects combined give more or less the same functionality and the same code size as the software written in SPARK. We spent more or less the same amount of time: four weeks plus a bit less than two weeks on the memory-unsafe projects, and five weeks on SPARK. We found around 40 bugs in the memory-unsafe projects and 28 bugs in SPARK — again, a clear benefit. So to recap, this is our conclusion based on this data: formally verified software can be free from memory safety problems — with a star, because as you will see later there is still some room for abuse, but in general, yes, it can be memory safe. Formally verified software has much higher quality, because SPARK enforces a lot of attributes: secure and strict coding practices, a strong typing system; you need to correctly initialize any data before you use it, otherwise you will be, like, killed by the compiler — you simply cannot use it. Also worth mentioning: SPARK is not a silver bullet that you just apply and all your problems disappear. It's a pretty heavyweight programming language, so you can't just sit down and write code like in C — write one function, write another function. No: you must really sit down, think about the software, design it properly, model every object, and only when you have that can you start to put things together and write the software. You need to keep that in mind. Additionally, SPARK can prove that dynamic checks cannot fail — there is something called absence of runtime errors — and depending on your level of assurance, you may have a guarantee that they will never trigger, so those checks may not even be in the binary, because you don't need them anymore. It also enables much more efficient 
security efforts, because first of all you will not see dumb bugs there, like we sometimes see in memory-unsafe software, and anything which is not verified or not provable is very clearly marked with attributes, so we can just focus on the functions that are unproved or unverified and dig in there. There are also things called preconditions, postconditions, and ghost code — I don't want to go into detail on them, but essentially they clearly define the state of the software at a specific point in time, so you know what to expect from the software and how it should behave; you know the purpose of each specific state in the software's state machine. Also worth mentioning: most of the bugs we found in SPARK required very deep knowledge and understanding of the software. At the end of the day, the bugs we found are very deep bugs — architecture issues, design issues. And if you look from the statistical perspective: when we review a project in a memory-unsafe language, on average in four weeks we find around 40-50 bugs, while in SPARK we spend six weeks, which is longer, and on average find around 5-10 bugs. So there is a benefit which we can see internally in our company. So what are the real bugs? What are examples of the bugs we are speaking about here? Formally verified software can give you software which is very strong, but there is always a "but", so let's look at this "but" in SPARK. The first problem is a signature verification problem which we found in one of the projects. There is a function which calls, let's say, Verify_Public_Key, and this Verify_Public_Key has a specific check: if the software is configured to verify the signature, then it verifies the signature. So what is the root of trust for this signature verification configuration? Let's take a closer look, and we can see that the function Get_Authentication apparently has three states: Authentication None, Authentication 
RSA. There is also a state Authentication Unknown, but the function Verify_Public_Key never takes this third state into account. So is this state even possible? Because if you have Authentication RSA, everything is fine; with Authentication None we don't need to verify, so that's okay; but what about Authentication Unknown? Apparently, this authentication state comes from a register in the hardware, and the register field occupies three bits. Three bits give you eight states, not two. Eight states means that Authentication Unknown will be assigned for six of the hardware states. And these six states are never taken into account in Verify_Public_Key, because it only considers Authentication None and Authentication RSA. So what happens if you have Authentication Unknown? The software treats it the same way as Authentication None. So out of eight states, only one enforces signature verification, and seven states are handled as if there were no verification at all. And again, the prover will not be able to catch that, because it's a logical error — it is how the developer defined it. This type of bug you can still have here. Next is a problem with the compiler, which Alex will speak more about. One of the projects we reviewed had fault injection protection in scope, so the code had to incorporate some kind of mitigation against fault attacks. Here you see the disassembly of the binary we analyzed: we see a memory-compare function implemented in a constant-time way, so that part is protected — constant-time execution — but at the same time we see a single point of failure, which is definitely a bad thing if you care about fault injection. So what happened here? We analyzed the source code and saw that everything was implemented correctly, but for some reason the compiler optimized out certain functions, which weakened the protection. So this was essentially a compiler 
problem, and you can only see it in the binary. That's why you not only have to analyze your source code, but also the binaries. Yes, and here is another interesting example of an issue. It's not a single issue: we found several very small issues here and there, and then we realized that when we combined them together, we had a very serious security issue. It's a problem with initialization, a problem with the absence-of-runtime-errors contracts, a problem with design and performance — and in the end, when we mixed them together, we found something interesting. First of all, SPARK of course cannot prove the correctness of metadata coming from untrusted sources, like external media or ROM — how could it? You don't know what's there. But if you want to prove such software, you can provide this knowledge to the prover, so the prover knows the limits, the bounds checks, of the specific untrusted data. And this is what the developers did here: they essentially provided manual verification in specific contracts to help the prover prove what they wanted proved. So what was verified there? Essentially, they verified the maximum size of the external media, so you never read out of bounds of the memory; they also verified what the local buffer is and what the minimum size is; there are some other checks which are boring, so I didn't include them; and there is also the fact that you don't start reading from the beginning of the media but from an offset into the media, so you also don't want to go past the media size. All of that is there. In short, all of this sanitization for the prover allows you to read data of a size between X and Y: then the checks pass and the prover is happy. And X was some small number, let's say 8 bytes, while Y guarantees that you never go outside the media size. So everything is fine so far — we just have some bounds checks, and the prover can prove they hold. And then, when we 
started executing further into this function, we saw it read some image from the external media, and this image had another header inside with a different size than the minimum verified by the prover: the minimum size of this header was 512 bytes, not 8 bytes. That essentially means some portion of the data may simply never be read, so the internal representation of the buffer, and the internal state based on this header, won't be initialized — you just didn't read it, so you cannot parse it, and so on. But in SPARK, before you reference any memory or use any typed object, you must initialize it, otherwise the software will not even compile — or rather, the prover will be unhappy if you do that. So the developers forced initialization: they zeroed this memory. It's fine, right? We just zero the memory which we don't use, so it's fine. So let's go forward. Somewhere in the middle, later in the boot stage execution, we see a function which also tries to verify some pieces of memory from this second image that was read. This is a pretty interesting function, because, first of all, they initialize the return status to always be success — it should default to failure, but they made it success. Okay. Second, there is an "already verified" flag: the idea is, if you have already verified some part of the image, why re-verify it again? If it's already in the local cache of verified parts, don't verify it anymore — a performance improvement — just go to the exit and return success. But, as you can see, there is also a check for ID zero. This ID is the internal representation of the state based on the header from the second image, and because you never fully read that image, this ID was initialized to zero. Code probably written by a different team assumes that when this ID in the internal memory representation is zero, it means — as some performance improvement of internal state — that it can skip the check and never 
verify anything. So in the end, because of this initialization, the zero was there, you never verify the image, and the RSA verification never happens — it is skipped. Let's see another example of a bug. This one is memory initialization code, and you see that the developer for some reason decided to make a certain piece of memory readable from both supervisor mode and user mode — so basically there is a lack of memory isolation between these two modes. The problem was that SPARK didn't know the context of this code: it was fine code, it compiled correctly, but since there was no context, the prover didn't catch any issue with it. It's a logical error: we didn't want this memory to be readable by both user and supervisor mode, but since there was no good model of the hardware, the prover could not find this problem. It is possible to build such knowledge via ghost code, but it's very difficult to do. Another example of an issue is in this code that performs initialization of tasks during a task switch. You see that certain features of cache management have been made accessible from the less privileged mode, and also the cache is not invalidated on task switch, which gives the less privileged mode the ability to perform side-channel attacks. Again, without a good hardware model, the prover could not catch that. So essentially, if you write software in SPARK, you should also take care of the hardware model and everything around it; otherwise the prover may be blind. Building this knowledge is possible, but it's difficult. And here is another problem which we found, a pretty interesting one. As I mentioned, we developed a specific operating system in SPARK, and it runs on top of a custom RISC-V microprocessor which we also developed at NVIDIA. This is a problem we found in the shared memory handling. I don't want to go into the details of how this entire operating system works — we could have another whole talk about that — but it's already public; it's nothing new that 
we disclose here. There is a talk by Marko Mythic which goes over all the details of how this operating system written in SPARK works. In short, just to recap the architecture: you have hardware, the hardware is sliced into specific sub-portions, and each sub-portion of the hardware is seen as exclusive, independent hardware by each software partition. The SPARK code is part of the separation kernel, and the separation kernel essentially enforces the limits of what each partition can and cannot access. Because it is formally proven and formally verified, we have guarantees that we didn't miss anything: whatever we divided and sliced is exclusively accessible only to that specific software partition. So we can have multiple partitions, each with an exclusive view of some specific portion of the hardware, and we prove there is no way to reach other parts of the hardware than that. Essentially, when you verify it, everything looks fine. But then we realized that for performance reasons — which of course make sense — if you have a shared library used by two separate partitions, you do not want to waste memory and map the same shared library twice into different physical pages. You just share the same page, so you have a page shared between the various partitions if they use exactly the same shared library. And shared libraries not only include the text section; they can also include the data section, and the data section can be read-write. If it's read-write, then, depending on the shared library, we might have interesting information there: the global state of the specific shared library, which can feed into the logic of execution of the software in the partition; initialization data; some control-flow data; and safe-language pointers. And this is getting pretty interesting. So 
Apparently, SPARK and the prover cannot know what you physically do with the content of this shared library inside the partition, because how could they? So essentially this opens room for abuse, depending of course on what is in this shared memory, what is in this shared library, and what is in the shared global data. We found a few interesting bugs in our use cases, and we decided to develop a proof-of-concept exploit against some of these shared libraries in this operating system written in SPARK, and we want to show you how it works.

I'm not sure what the quality of the image will be here, but let's try. Can I ask one of the goons for help? When I switch, they should change the video mode somehow... now you have it. OK, apparently it worked. So what we see here, and I'm not sure how much you can see, is the partition one I was explaining before. This is partition one, written in C. It's not very complicated, as you can see: there is the partition main, and there is also a trap handler, because if something gets screwed up we want to handle the exceptions. What you can see in this partition one: first we define a pointer to a specific memory address, and you can see the address is the shared buffer. This is a shared buffer which is visible in partition one and partition two, and this is the address of the shared buffer, a virtual address of course, 0x181000. So we define the pointer to point somewhere in the middle of the shared library, inside the page where this shared library lives; we point somewhere inside the shared buffer which is visible in both partitions. Then we will write some value inside the shared buffer. In partition two there is a definition of where the stack page is, and the stack page is mapped at the virtual address 0x185000. So we write into the shared buffer the
address of the stack. That's all partition one does against partition two; that's what the entire software does. This is a magic tool which we use internally at NVIDIA to be able to debug and verify specific hardware and software. So we run this magic software, let's say, to connect to the RISC-V engine. We see the RISC-V engine, to be clear. We check the dmesg log and you see there is nothing there; it's just a clean state of the RISC-V. And you can see it's a clean state, but it's still running; it's not halted, the RISC-V is still running. So what we do now: we reflash the firmware of the RISC-V engine in the GPU so that exactly these two partitions run together under the separation kernel software, which is formally verified. You can see, after reflashing (there is some censorship which we needed to apply, but the logic, I guess, is visible), that something happened: the core has started, everything is fine, and the RISC-V status is not halted after reflashing this new firmware; it's running. But when you look at the dmesg, you see that partition 2 crashed and we have a stack overflow. It's formally verified software, in a formally verified language, and you can see we were able to do a stack overflow. That's interesting. So let's check the specific page where we write. You can see, from when we built the software, that the stack page is a physical page which is only visible in partition 2; it's not visible in partition 1. From partition 1 we merely planted the pointer to the stack, and we overflowed stack memory which we don't even see, after the partition switch happened. And you can see it's exactly the address that we overflowed. So let's check again: this is the address of the physical page, and if you reset the engine and check the dmesg again, you see there is nothing there, so we are no longer running the firmware which we designed to run. And then again we verify the address to be sure.
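The demo just shown can be summarized in a small C sketch. This is a simplified simulation under stated assumptions, not the real PoC: shared_page stands in for the shared buffer mapped into both partitions (quoted in the talk at virtual address 0x181000), fake_stack_page stands in for partition 2's private stack page (quoted at 0x185000), and the slot offset and translate() helper are purely illustrative.

```c
#include <stdint.h>
#include <string.h>

static uint8_t shared_page[0x1000];     /* stands in for the shared buffer at VA 0x181000 */
static uint8_t fake_stack_page[0x1000]; /* stands in for partition 2's stack page at VA 0x185000 */

#define PART2_STACK_VA 0x185000u        /* stack page VA as quoted in the talk */
#define SLOT_OFFSET    0x400u           /* illustrative offset into the shared buffer */

/* Attacker side (partition 1): plant the VA of partition 2's stack page
 * into a pointer-sized slot in the shared read-write buffer. Partition 1
 * cannot map the stack page itself -- the separation kernel forbids it --
 * but it can freely write the shared page. */
void partition1_main(void) {
    uint32_t *slot = (uint32_t *)(shared_page + SLOT_OFFSET);
    *slot = PART2_STACK_VA;
}

/* Toy address translation, for this simulation only. */
static uint8_t *translate(uint32_t va) {
    return (va == PART2_STACK_VA) ? fake_stack_page : 0;
}

/* Victim side (partition 2): after the task switch, the shared-library
 * code trusts the pointer value found in the shared data and writes
 * through it, clobbering partition 2's own stack page. The prover never
 * sees this flow, because the corruption crosses the partition boundary
 * through memory it has no model of. */
int partition2_consume(void) {
    uint32_t planted = *(uint32_t *)(shared_page + SLOT_OFFSET);
    uint8_t *dst = translate(planted);
    if (dst == 0)
        return -1;                      /* trap-handler path in the real firmware */
    memset(dst, 0x41, sizeof fake_stack_page); /* the "stack overflow" from the demo */
    return 0;
}
```

Calling partition1_main() and then partition2_consume() reproduces in miniature why the log showed partition 2 crashing with a stack overflow even though the separation kernel is formally verified: the corruption travels through shared memory the prover has no model of.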
This is what we did: you see that in partition 1 we only write the address of the physical page where the stack will be mapped in partition 2, a page which we don't see in partition 1, and the address of the page physically matches. That's all we did here, and exactly this page is mapped as the stack, so that's the proof that this is the stack page. So yeah, let's fast-forward.

Just to summarize everything: first we would like to thank NVIDIA for developing such an amazing execution environment that we could analyze. We want to thank the Offensive Security Research team, the GPU Software Security team, the Product Security team, and AdaCore for creating such a brilliant language, and everyone who was involved in this research.

To summarize what we did: the use of a type-safe language with formal verification can minimize the attack surface, not only for memory safety issues but for other issues as well. But it's not a silver bullet; as you saw here, there are also other issues, and we need to keep this in mind. Also, formally verified software has much higher quality thanks to the enforcement, and you can prove that dynamic checks cannot fail if you have Silver-level assurance or higher. It also enables much more efficient security efforts. In short, most of the bugs which we saw require deep knowledge and deep understanding of the software, and also of the hardware; you need to analyze the entire execution environment in detail. As a result, the bugs which we found are more architectural bugs, more design issues, deeper problems essentially. And again, just to repeat the summary of the reviews: in a memory-unsafe language we internally found around 40-50 bugs on average during a review in a time frame of 4 weeks; in a memory-safe language we found only 5-10, even when spending 50% more time on average, but the bugs we found were of better quality. So that's all,
thanks, and we can take some questions; I think we still have 4 minutes. So the question is how many bugs we found in SPARK at NVIDIA when we analyzed the software written in SPARK. In general, we haven't seen any software where we did not find bugs; that's what we can say. Essentially, every piece of software that gets written has some bugs; we haven't seen one without bugs. Although it's worth mentioning that the software we write is usually huge in scale, it's complicated, and we also have custom hardware, so it is very difficult to model; most of the bugs we found are in this area. How many of them, we cannot give the numbers, but certainly it's significantly lower than in a memory-unsafe language. Thanks.