All right, the floor is all yours. Okay, cool. I will quickly add the slides to the doc so everyone can open them and follow at their own pace. It looks like I can't edit the question document, so we'll just post the slides in the chat here in Zoom. I just updated the permissions so people can edit it now. Okay, great. So, yeah, let's get started. Today we are here to talk about DAST, and I'll give you a quick walkthrough. Before we start, let me introduce myself: who I am and what I'm doing at GitLab. I'm a security researcher. That means my job is to keep GitLab secure by finding vulnerabilities and getting them patched. What I also did is re-implement GitLab's DAST tool. When I started at GitLab about a year ago, we already had a DAST tool, but it was a fork of another project, and that was not ideal because, being a fork, we couldn't easily upgrade to new ZAP versions and so on. So the re-implementation that I did uses a different mechanism to extend ZAP with the features we want to add for GitLab, and that re-implementation is what we are going to look at today. My background is that I did a PhD in automated security testing, so I spent a lot of time looking at how to devise clever testing techniques and solve academic problems, like detecting whether a test case worked, something that's called the oracle problem, and I designed a lot of empirical studies that compare different dynamic testing tools. I'm not going too deep into this, but I'm linking the publications here on Google Scholar. Feel free to check them out, and if you find anything interesting there, we can chat about it. So that's me. What would be interesting to me is to hear a little bit about your background, especially what your experience with DAST is. Maybe we do a quick round where everyone says one or two sentences. Yes, I'll start. So I'm fairly new to DAST and a lot of the automated security tools from a development perspective, but I have used a lot of them at some of my previous companies, just to run scans on some of the applications. We can just keep doing the round. Thanks, Seth. I'll go next. I'm totally new to DAST and other automated security tools, and I'm really excited to learn about them. Awesome, thanks. Hi, I'm Lucas. I've used tools, but nothing DAST-related in the past. My primary exposure has been through receiving the results of reports and having to figure out exactly what's going on with the application, so switching over to this side at GitLab has been very educational. Hey, I joined GitLab last week, so it's my first time using a DAST tool. I've used some static analysis tools before, but this is the first time I'm using DAST. I ran DAST locally on my system and I've also used the proxy, so I'm just learning. Hey, I'm Adam. I'm a backend engineer on the secure team, and I have recently had experience with SAST, but not with DAST, so this is all new to me. Hi, I'm Julian. I have some limited DAST experience from tools I've used in the past, but never at an industrial scale, just for fun, so this will also be very interesting for me. Hi, I'm Paula. Similar to everyone, I'm an engineer, not much experience with DAST other than setting it up and running what we have at GitLab. Hi, I'm Ethan. Okay, go ahead. Okay. Hi, I'm Ethan. I'm an application security engineer, so I have used a couple of different DAST tools, but I'm interested in learning more about how GitLab's works. Hi, I'm Victor.
I'm from GitLab, from the static analysis group, but previously I've been doing some work on DAST as a programmer. So I'm curious about the information security background behind dynamic scanning in general, and about the direction for DAST: what new features are about to come. Awesome. Thanks, everyone. Did we miss someone? Well, I didn't chime in, but I'll go ahead and do it now. I've used DAST tools at other companies, but I've never been responsible for a DAST tool, so that's why I'm joining in. Okay, cool. Welcome, everyone. It's really cool that we have so many people now working on DAST, and I think with that much engineering power we can really do something cool. So, yeah, I guess I'll start with a little bit of the basics of DAST, since many people said they didn't have too much exposure to DAST before. I will go through the slides, and if you have questions, just interrupt me. All right. "GitLab 12.2, now with 100% more DAST." It's just a funny slide, nothing real to announce with GitLab 12.2, but I like it. What is DAST? DAST stands for Dynamic Application Security Testing, and the difference to the other testing tools we have in GitLab is that it actually executes the code of the application we are going to test. Compare that with, for example, static analysis with our SAST tools: SAST looks at the source code, so it analyzes the source code and builds an abstract syntax tree and this kind of stuff, but it does not necessarily execute the code. What DAST does is execute the code: by sending test cases to the running application and observing the behavior in response to those test cases, the tool infers whether there is some vulnerability in the application under test. To give you a basic example: if we send an HTTP request that has some SQL injection payload and the server-side application replies with a 500-something error that says "you have an error in your SQL syntax", then you have a very good indication that there is some SQL injection problem happening. To get to this knowledge, we don't need to look at the source code of the application; we can infer it from the response we got after sending a SQL injection payload. The advantage of doing it that way is that DAST is language agnostic: it can be applied to any web application, no matter which framework it has been written in. The way we propose it to be used is in combination with SAST, dependency scanning and so on, because of course DAST will not find all vulnerabilities. That's why it's complementary to other approaches like SAST that together give you good coverage of the vulnerabilities that might be in the source code. Now, a quick overview of how you can actually use DAST at GitLab. Since GitLab 11.9 we have something called a job template, and that makes it really easy to use DAST. You just need to include this vendored template and the pipeline will already run the DAST tests. I linked the documentation where you can read more about it. It also supports additional variables that you can specify, which can then be used to start an authenticated scan; what that exactly means, I'll go into in detail later on. If you set up DAST on your project, your pipeline might look something like this: first you build your app, then you do your normal unit tests, SAST tests, dependency scanning and so on. After that, you would spin up the review app, and once that is up, you can run DAST against that review app.
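To make the SQL injection example above a little more concrete, here is a minimal, purely illustrative sketch of the idea in Python. This is not how ZAP implements its checks; the target URL, parameter name, and error strings are made-up examples.

```python
# Toy illustration of response-based SQL injection detection: send a payload
# in a parameter and look for database error messages in the response.
import requests

SQLI_PAYLOAD = "' OR '1'='1"
ERROR_SIGNATURES = [
    "you have an error in your sql syntax",   # MySQL-style message
    "unclosed quotation mark",                # SQL Server-style message
    "pg::syntaxerror",                        # PostgreSQL-style message
]

def looks_like_sqli(url, param):
    """Send a single SQLi payload and check the response for DB error text."""
    response = requests.get(url, params={param: SQLI_PAYLOAD}, timeout=10)
    body = response.text.lower()
    return response.status_code >= 500 or any(sig in body for sig in ERROR_SIGNATURES)

if looks_like_sqli("http://goat:8080/search", "q"):   # hypothetical target and parameter
    print("Possible SQL injection, worth a manual look")
```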
Here you see it again: DAST is running against a live application, and based on the behavior of that application, it can tell you if there are vulnerabilities in it. If it found something, you will see it in the merge request, like in the screenshot here. It complains about certain headers that are missing and some other vulnerabilities. You can also click on each of these findings and it will open a modal dialog that tells you the exact page where it found the issue. So this is how it looks from the user perspective, but most of you are engineers and we want to dive a little bit deeper into what DAST actually is under the hood. DAST is built on OWASP ZAP, so let me say a couple of words about ZAP. ZAP is a proxy that is used a lot for semi-automated security testing. Basically, you proxy the requests that you make from a browser through ZAP, and ZAP shows you exactly what is going over the wire at the HTTP level. That is very useful for playing around with different parameters and seeing how the application reacts when you send certain parameters that you might not be able to set via the UI. Other than that, ZAP also has an automated mode where you can just point ZAP at a certain URL, and ZAP will start to test the application found at that URL, and exactly this behavior is what we leverage in GitLab. ZAP is applicable to web applications, meaning HTML and JavaScript and all that stuff. The repository is linked there; I guess most of you are already aware of it. The part that we re-implemented is mostly written in Python, plus some shell script, and our DAST is basically shipped as a Docker container. ZAP itself is mostly written in Java and also has some Python scripts, especially for the automated scanning I mentioned, where you can just point it at a URL and it will do some automated tests; those are Python scripts. That's also why our code is written in Python. Maybe I'll make a quick pause to see if there are any questions already. Are there many functional differences between the OWASP GUI tool and what is possible through the CLI? I'm not aware of too many differences, at least not for our use case, because OWASP ZAP has an API that we can use; this is also how our testing tools interact with ZAP. Most of the things you can do via the UI, you can also call directly via the API. It might be that there are some things that are not possible via the API, but I haven't run into those yet. Thank you. All right, let's move on. The most significant features we have added to ZAP so far are support for authenticated scans, and we also now report a little bit more detail on what the crawler found while it was running. Let me explain a little bit more about these two features. Why do we need authenticated scans? Most of an application is typically only accessible if you're signed in, right? Think about GitLab: if you're not authenticated, you only get to see about.gitlab.com; once you sign in, you have access to much more. So, in order to test everything that is behind authentication, ZAP needs to be able to authenticate, and the basic ZAP scripts didn't have any support for authentication. This is what we added as a feature. The way that works is you specify a command line parameter or environment variables that tell ZAP which username and password to use.
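Since the API came up in that question: as a rough idea of what driving ZAP through its API looks like, here is a minimal sketch using the python-owasp-zap-v2.4 client. It assumes a ZAP daemon is already listening locally on 127.0.0.1:8080 with the given API key, and the target URL is a made-up example.

```python
import time
from zapv2 import ZAPv2

# Connect to a locally running ZAP daemon (assumed address and API key).
zap = ZAPv2(apikey="changeme",
            proxies={"http": "http://127.0.0.1:8080", "https": "http://127.0.0.1:8080"})
target = "http://goat:8080"   # hypothetical review app URL

# Discovery phase: run the spider and wait for it to finish.
spider_id = zap.spider.scan(target)
while int(zap.spider.status(spider_id)) < 100:
    time.sleep(2)

# Testing phase: run the active scan against what the spider found.
ascan_id = zap.ascan.scan(target)
while int(zap.ascan.status(ascan_id)) < 100:
    time.sleep(5)

# Print the alerts ZAP raised for the target.
for alert in zap.core.alerts(baseurl=target):
    print(alert["risk"], alert["alert"], alert["url"])
```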
Back to authentication: our Python scripts then use Selenium and the Firefox web driver to actually request the sign-in page, fill in the username and password, and submit the form. They capture the session cookie that is returned from the application, and every following call will include that session cookie. That means the whole DAST scan is basically running in an authenticated context from then on. And then the second part is about the crawler. I haven't yet explained what crawling actually is, but I'll come to that later. The basic OWASP ZAP scan does not say very much about which URLs it actually tested; it just reports the URLs where it found something. The problem is that you don't know what the coverage of your scan was: for several reasons it might not have scanned everything, and as a user you want to know about that. That's why we added this feature where it tells you which URLs it scanned. For both features I linked some relevant code and issues where you can read up more. This is an example of how it looks when ZAP reports the URLs it actually scanned. The crawler here is called the spider. It tells you things like "progress: 100". That is important because the crawler might not actually be able to finish in time if it runs into a timeout, and if that's the case, the coverage of your application won't be 100%, and we want users to notice that. If it had run into a timeout, it would also not say "finished" here, but something else. Then under results, you actually see which URLs it found. The ones that were out of scope were not tested: while the crawler is searching for links, it will find links that point outside of the application we want to test, and those links are not going to be tested. Only the links that are in scope are saved for the actual tests. The links that are in scope here, for example, are listed under "URLs in scope", and everything under goat:8080 will be saved for later testing. Dennis, real quick: is the scope defined by the review app URL that you put in the pipeline? Not directly. There is a command line parameter that you should set to the review app, but there's still some configuration that you do. So in a way, yes, the in-scope set will be defined by the review app URL, but you need to tell DAST what the review app is. Okay. Dennis, can you quickly tell us a bit more about scopes? What are they for, and how can we increase the benefits of DAST scanning by leveraging scopes? Yeah, in its basic form, scope keeps the test focused on the target. To really understand what that means, maybe I'll first explain how crawling works, and then we come back to the question of why scopes are important. Okay. That's actually the next slide, so let me quickly say what crawling is and then we come back to how scopes work with that. Right now DAST essentially works in two phases: the discovery phase, which is the crawling or spidering, and then the second phase, the actual testing phase. The crawler works by being given a URL from which it starts crawling. Starting from that URL, it will follow all the links it can find. It will request that initial URL, parse out all the links it can find, references to JavaScript sources, CSS sources, everything that somehow looks like a URL, and then it will follow these links. If you're wondering what this starting URL is, this is exactly the DAST_WEBSITE parameter that we can specify for the DAST run.
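Here is a minimal sketch of that authenticated-scan flow: logging in with Selenium and the Firefox web driver and capturing the session cookie. The sign-in URL, form field IDs, credentials, and cookie name are hypothetical placeholders, not what our scripts actually use.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

options = webdriver.FirefoxOptions()
options.add_argument("-headless")          # run Firefox without a visible window
driver = webdriver.Firefox(options=options)

session_cookie = None
try:
    # Request the sign-in page and submit the credentials (hypothetical selectors).
    driver.get("http://goat:8080/users/sign_in")
    driver.find_element(By.ID, "username").send_keys("dast-user")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.CSS_SELECTOR, "input[type=submit]").click()

    # Capture the session cookie the application returned after login.
    cookies = {c["name"]: c["value"] for c in driver.get_cookies()}
    session_cookie = cookies.get("_session_id")   # hypothetical cookie name
finally:
    driver.quit()

# Every following request would carry this cookie, so the whole scan
# runs in an authenticated context.
print("Session cookie to attach to all further requests:", session_cookie)
```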
That starting URL is also what Ethan was just mentioning before: the URL that we point DAST at. Yeah. So it basically starts crawling from the start page, and recursively it will go into all the URLs it finds. For all the pages it finds, it stores them, and it also looks for potential input parameters. Think of it like this: if there is a form that can be submitted, it will remember which parameters are in that form, or if it requested a page via GET, it will remember what the parameters on the GET request were, and so on. This is going to be important because these are the parameters that are later being tested. And then finally, the DAST run will continue either until it hits a timeout or until it's finished. This ties back to what I told you earlier about the progress and the "finished" state: if it runs into the timeout, it won't be at 100% here. By default, our baseline scan only crawls for one minute; anything more than that would be a timeout, and the active scan doesn't have any timeout set. That also means that for very large applications the crawler might run for a very long time. Now that we've talked a little bit about crawling, we can go back to Victor's question about scopes. How are scopes important? Basically, we set the scope of the crawling to that of our DAST target application. What that means is: if there are links on our page that point outside of our target application, like what we see here in the list of out-of-scope URLs (here we are testing goat:8080, and on that page it found, for example, a link to getbootstrap.com), it will not actually follow that link. It's not requesting getbootstrap.com, or even sending malicious test cases to anything out of scope. So with scopes we can essentially focus testing on our own application and not on the whole internet. So scopes are sets of URLs to scan; maybe there are some patterns, like wildcards or something like that? How can we define scopes when we are running DAST? Yeah, you're right. Scopes can have wildcards, and if I remember correctly, you can say which scopes to include and also exclude certain subparts of your application, right? You might only want to scan a certain path but not other paths within your application. This should also be possible to set via the API; I haven't done it yet, but I think that API call exists. Does this somewhat answer your question? Yes, thanks. Yeah, I think, Victor, I don't know if this is one of the reasons you're asking, but it's particularly relevant: one of the things that we want to do in a future release is to have multiple URLs, so you could have url1.com and url2.com as a single scan. So I think scope might be the area where we pass in those URLs, as JSON or whatever format it takes, to kick off the scan. Yeah, I guess we have to do some exploration here. I wonder if it would be easier to kick off different jobs here or if we want to do it in one job. I don't know, maybe it's something to investigate. Thanks. All right, so we have been talking about crawling. This is the first phase, the discovery phase, where our tool finds out what all the pages are and what the potential parameters are via which we can pass malicious inputs. The actual testing phase, where ZAP discovers vulnerabilities, is the second phase. How this works exactly depends on the scan mode. If we use the passive scan mode, it will just look at the pages that have been stored during the crawling.
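On the question of setting scopes programmatically: as a rough sketch, the ZAP API exposes contexts with include and exclude patterns, which the python-owasp-zap-v2.4 client wraps roughly like this. The context name and regexes here are made-up examples, and the exact calls would need to be verified against the API documentation.

```python
from zapv2 import ZAPv2

zap = ZAPv2(apikey="changeme", proxies={"http": "http://127.0.0.1:8080"})

context_name = "review-app"
zap.context.new_context(context_name)

# Everything under the review app host is in scope...
zap.context.include_in_context(context_name, r"http://goat:8080.*")
# ...but exclude parts we never want the scanner to hit, e.g. the logout URL.
zap.context.exclude_from_context(context_name, r"http://goat:8080/users/sign_out.*")
```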
So, on the passive scan: all the HTTP communication during the crawling phase is recorded, and in a passive scan, ZAP just looks over these HTTP messages to see if there are vulnerabilities. Things you can identify by doing this are, for example, whether form submissions, so POST requests, have a CSRF token. If they don't have a CSRF token, that typically means there's a vulnerability. Or you can also highlight other things like missing CSP headers and similar things that you can infer just by observing what normal requests look like. And then the other test mode we have is the active scan, also referred to as a full scan in the documentation. A full scan does the same thing the passive scanner does, looking through the recorded legitimate requests, and in addition it takes all the pages and all the parameters that have been identified and submits some malicious test cases. You can think of this as sending a SQL injection payload, a DROP TABLE, whatever, right? Based on how the application responds, it can infer whether there is a vulnerability in the application. Dennis, those malicious submissions, are those based on the type of application? So it sees a PHP app and knows PHP is vulnerable to a particular type of payload, versus a Ruby or some other application, or are they generic across the board? There are many generic ones in there, but some have definitely been inspired by common problems that are around in PHP or other applications. But yeah, it's not too tied to specific frameworks. And then are those rules or tests built into ZAP, or is that a separate database that ZAP downloads? These are actually implemented as a kind of add-on system, so they are not tied directly into ZAP, and it would probably even be possible to develop our own test heuristics and add them via add-ons. That's actually something that is pretty interesting to me, because I'm looking for vulnerabilities, and based on the knowledge that we gain in AppSec, it would be cool to add some additional tests to ZAP that can look for the vulnerabilities that we found. Great, thanks. That sounds like a cool way to add value to this particular product. Yeah, absolutely. One thing that we have seen a lot and ZAP doesn't have tests for is, for example, server-side request forgeries. It might be cool if, based on all the experience you and I have gathered on this topic, we put together an add-on. Yeah. All right, next slide. Let me mention a couple of challenges, off the top of my head, that I see with these two testing phases. Regarding the first phase, the discovery phase, one problem we have been running into is insufficient support for different web technologies, for example JavaScript. I mentioned that the crawler fetches pages and parses the content, but the default crawler only parses HTML content. What happens in JavaScript-heavy applications is that a lot of the links are only loaded when JavaScript is executed, so our crawler will only find them if it's executing JavaScript. And for DAST this is only the case if you pass in a certain parameter; it's not set by default. And then I also found an issue, which is somewhat annoying, where it starts crawling not at the URL that you specified as the entry URL, but always at the root URL.
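Going back for a moment to the passive checks mentioned above (CSRF tokens, CSP headers), here is a toy version of that kind of check, not ZAP's implementation. It assumes the recorded traffic is available as a list of (url, headers, body) tuples, which is a made-up representation.

```python
import re

def passive_findings(recorded_responses):
    """Look over already-recorded responses without sending any new traffic."""
    findings = []
    for url, headers, body in recorded_responses:
        if "Content-Security-Policy" not in headers:
            findings.append((url, "Missing Content-Security-Policy header"))
        # Very naive check: a POST form without anything resembling a CSRF token field.
        for form in re.findall(r"<form[^>]*method=[\"']post[\"'].*?</form>", body, re.I | re.S):
            if not re.search(r"csrf|authenticity_token", form, re.I):
                findings.append((url, "POST form without an apparent CSRF token"))
    return findings
```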
Coming back to that root-URL issue: this sometimes leads to problems, because if your application does not serve content on the root URL, the JavaScript crawler won't find anything. I've linked the related issue here, and we should look into that. Another thing that is giving me a real headache is that the crawler seems to lose the authenticated context. I mentioned that one of the big features we added to ZAP is that it actually logs into the application and discovers the application under a certain user context. What seems to be happening is that at some point it might hit the logout URL, at which point the session it's using is invalidated, and from that point onward all the crawling will just hit the parts of the application that you can see from an unauthenticated view. So this probably leads to a lot less coverage than we could have. I also linked an issue on this point. For me personally, this is the top-priority issue that we need to look into. The last point I mention here is that crawling can take a really long time, since it follows all the links it can find. If you run the crawler in an exhaustive fashion where you don't set a timeout, it can run for a long time. We run it this way for the full scan, and the full scan can take up to a couple of hours. The crawling adds to that, but so does the fact that we are sending malicious test cases. I'll come back in a couple of slides to why the runtime is so long, but I see a question, yeah? Does the crawler utilize the sitemap? It utilizes what? The sitemap.xml file. Ah, that's a good question. I've seen that it actually requests that file; it looks for a couple of standard files like robots.txt and sitemap.xml, and I think it would extract the URLs it finds there, but I haven't tried it specifically. I would be surprised if it didn't. Could there be a feature inside the sitemap.xml file where we can specify which URLs are authenticated and which are not? And if the crawler reaches, let's say, a URL which logs it out, can the crawler be modified to re-login and continue? Yes, that's a very good point. I think ultimately how this authentication logic should work is: first we create an authenticated session, and then the crawler should be somewhat aware of whether the current page it requested was served in an authenticated context. The way you could do that, for example: typically there is some HTML that gives away the fact that the page was served in an authenticated context. For example, on gitlab.com, if I'm logged in, in the top right corner there's a little picture of me and it says signed in with my username. So we could give this information to DAST and tell it: if this pattern is not in the response, you have lost your authenticated context, please log in again. I think that should be the ultimate solution. The easiest solution right now would be to just enforce the URLs that we already tell it not to crawl, so it doesn't hit them, and then as a follow-up, we should teach it how to re-authenticate in case it loses authentication. So is the crawler code base part of the proxy, or is it a different third-party tool which is a dependency of ZAP? The basic crawler, the one that just does HTML, is part of ZAP, if I remember correctly. But the one that does JavaScript is called Crawljax, and that's another open-source dependency.
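As a tiny sketch of that "notice we were logged out and re-authenticate" idea: the marker string and the login helper below are made up; in practice the marker would be whatever HTML snippet only appears for signed-in users.

```python
# Hypothetical marker that only appears in responses served to a signed-in user.
LOGGED_IN_MARKER = 'data-qa="signed-in-as"'

def ensure_authenticated(session, response, login):
    """Re-login if the response no longer looks like an authenticated page.

    `login` is a placeholder callable, e.g. the Selenium flow sketched earlier,
    that refreshes the session cookie on `session`.
    """
    if LOGGED_IN_MARKER not in response.text:
        login(session)
    return session
```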
So yeah, to come back to your question about the sitemap: I think we should not require modifying sitemap.xml, but rather give DAST some other way of knowing whether it's logged in. But your idea goes in the right direction. Okay, let me give you a brief overview of the two complementary scan modes that we have; I already explained a little bit along the way. We have baseline scans and we have active scans. The baseline scan typically takes around five minutes, because crawling is limited to one minute. You can increase this, it's just a default value, but if you don't override the default, it's limited to one minute. Then it does the passive tests just based on the traffic the crawler saw. Because the baseline scan doesn't run that long, it's suitable for time-sensitive CI pipelines. You can imagine that if you run a long-running DAST test in a pipeline, your developers are going to complain that they want to merge their code and why does DAST take two hours again just for one little commit. So we need to be aware that time is very critical when we run in the context of CI pipelines. A lot of the other points I list on the baseline scan I already mentioned. In comparison, the active scan, as I already said, takes longer, because there's no limit for the crawler, and for all the parameters it identified, it actually sends HTTP requests with malicious payloads. So where I see active scans applicable is more in a scheduled pipeline that is not run for every commit you push, but that you run, for example, nightly or at 12-hour intervals or something like that. So we've recently been talking about strategies for running incremental scans for static analysis, and I'm curious if you see any strategy for moving DAST closer to incremental scans. Oh yeah, absolutely, I think that's a great point. I talk about this a little bit on one of the last slides where I present a future vision. If we could add incremental scans for DAST, that would be awesome, and I think it would also help a lot with cutting down the runtime and with focusing the time we spend with DAST on really testing the changes introduced by a recent commit. So that would be an awesome feature. And as far as I'm aware, other DAST tools out there don't really do that very well, if at all. Other DAST tools might be better at testing for various vulnerabilities and so on, but in the context of CI/CD, I think we have a chance here to do what you were saying and actually make DAST useful in the CI/CD context by doing incremental scans and focusing on introduced functionality. All right, if there are no follow-up questions, I'll move on to actually explaining why DAST scans take so long. And just a quick time check, we've got about 20 minutes left. Okay, cool. Let me see, I have three more slides coming up, so I'll hurry up and then we can open up for questions. Basically, why does it take so long? Essentially, you can approximate the time an active scan takes by the number of pages it has found, times the average number of parameters per page, times the number of test cases it executes per parameter. That gives you the total number of requests. For each of these it has to do an HTTP request, and an HTTP request needs to open a TCP connection and so on, so each one takes roughly 10 milliseconds.
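As a back-of-the-envelope version of that estimate, with completely made-up numbers:

```python
# Rough scan-duration estimate; all the inputs here are invented for illustration.
pages = 100                  # pages the crawler found
params_per_page = 5          # average number of parameters per page
test_cases_per_param = 100   # payloads the active scanner sends per parameter

total_requests = pages * params_per_page * test_cases_per_param   # 50,000 requests
print(total_requests * 0.010 / 60)   # ~8 minutes even at only 10 ms per request
# In practice each request costs far more than the bare connection setup
# (server processing time, retries, response parsing), so this easily grows to hours.
```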
So the total number of test cases it executes is easily in the thousands or tens of thousands, and that's why it quickly adds up to a couple of hours. Right now we only test web applications, but we also have an issue where we talk about adding support for testing APIs, REST APIs; you can have a look at the linked issue. And the last slide here is my personal future vision, where I could see us adding great value to DAST. The first item is related to what Lucas was mentioning, which is incremental scans. Right now, on every commit we crawl the entire site over and over again: we start again from the entry URL and try to discover the entire site structure. I think we could save a lot of time if we were a little bit smarter about what to scan and did some kind of incremental scan: only scan the part of the application that has been affected by the change in the commit. For that, we would need to infer what has been affected by a commit, and we would also need to persist state between DAST runs and pass it from one run to the next. Another area where I see we could make really big improvements for DAST is the user experience once we actually present the list of findings. Right now it's just a list of findings pointing you to the URL of the review app with whatever it found. That's not a very nice experience, because as a security analyst, what I would want to do is confirm whether that finding is a true positive, and for that I need to reproduce it. There would be much nicer ways of enabling the security analyst to reproduce the finding. I will write this point up in more detail in an issue and then it might become clearer. I won't spend more time on this now because I want to open it up to your questions. So, who wants to go first? I just wanted to ask about the speed of scanning. You said it takes quite a few hours; is it possible to parallelize the process, to increase the speed? I think it is already parallelized. The crawler for sure is parallelized, in the sense that it's using several threads. Maybe that could be optimized, maybe there's a bottleneck somewhere, I don't know, I haven't looked into that yet. Possibly the speed can be increased that way. But it's running from a single machine? It's running from a single machine, yeah, from a single Docker image. Okay. So it might be possible to scale it out to multiple machines and coordinate them, or maybe the coordination would take too long. I think that should be possible. Of course that comes with quite some engineering effort, and it's a question whether we want to do that, or whether that engineering effort is better spent looking into incremental scans. That's something that is up to the team to decide. Thanks. Are there any other questions? Yeah, it looks like... Sorry, go ahead. Go ahead. Okay, thank you. I'm curious about the crawler. I just wanted to ask: how does the crawler avoid cycles? Is it just that whenever it comes back to the same URL, it won't crawl the same page again? And related to that, if you have second-order vulnerabilities in your application, this basically implies that a crawler that doesn't revisit the same page cannot detect these kinds of vulnerabilities, I suppose. Yeah, these are all very good questions. The first question, cycle detection: it has some logic to detect loops based on whether it has seen a URL before, but then, of course, the same content can be served under dynamic site paths, right?
So it would keep requesting new URLs but keep getting the same content. In theory, I think it's possible that it runs into loops, but I haven't seen that much yet. We should keep it in mind, though. And then the other question you brought up was about second-order vulnerabilities. Just a little background on second-order vulnerabilities: what it means is basically that first you need to submit the payload at one part of the application, but it's not directly executed; you then need to request a different part of the application to actually get the payload you placed with the first request executed. That's a second-order vulnerability. And I don't think ZAP has any test heuristics to detect second-order vulnerabilities. So yeah, that's I guess the simple answer. Dennis, one of the questions in the DAST architecture migration issue was related to ZAProxy being called directly, which I guess was the case in earlier versions of GitLab. Can you give a quick overview of why it was called directly and the benefit of moving it over into a DAST wrapper? At first it was called directly because we were forking the project and editing the source files, so we were directly calling our modified source file. When I re-implemented DAST, my goal was to do it in a non-breaking fashion. That means I kept calling the same source file, but this time we weren't modifying it. If you look at the Dockerfile in the DAST project, you will see that it moves some files around, and this is because we want to keep the file it's calling under the same name to avoid a breaking change. But this is all for historical reasons. So the implementation today uses the full upstream ZAP project and then we supplement it with our own files? It's using the full upstream project, yeah, and we use two ways to extend it. One way is that we have a wrapper script around the upstream project. For example, the different scan modes that we provide have different entry scripts in ZAP, so based on whatever scan mode we want to run, we call a different entry script. We also do some things like checking that the application has already started, the curl check that the team has been working on over the last couple of days. And then the second way of extending ZAP's functionality is so-called hook scripts. ZAP allows you to define custom logic at certain points during the execution of the scan modes: it will call out to functions that can be defined by us. We use this, for example, to generate the test report containing the URLs, and we use it to create an authenticated scan context. This is all done in the hook scripts. I think that's a very nice solution, because it allows us to not modify the upstream project. I have a question about mitigating the issues related to JavaScript. Have you considered moving away from ZAP, or maybe building some kind of scaffolding on our side, on the GitLab side, to be able to swap the scanners, the scanning tools, under this scaffolding, under a common interface for the DAST scanning tools? To try leveraging some scanning tools that are more JS-friendly than ZAP? Absolutely, yeah. Philippe has looked into a couple of alternative solutions, and I think ultimately we want to support different tools. I don't know if I can mention names here, if this is public or not, so I guess we all know the issues that I'm talking about.
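For a feel of what such a hook script looks like: ZAP's packaged scan scripts call functions from a user-supplied hooks file at defined points of a scan. A minimal sketch is below; treat the hook name zap_started as an assumed example and check the actual hook names against ZAP's scan scripts.

```python
# Sketch of a ZAP hook script. ZAP's packaged scan scripts (zap-baseline.py /
# zap-full-scan.py) call functions like this from a hooks file if they are defined.
# The hook name below is an assumption for illustration.

def zap_started(zap, target):
    # 'zap' is the ZAP API client, 'target' is the scan target URL.
    # This is the kind of place where an authenticated context could be set up,
    # e.g. by injecting a session cookie obtained via a Selenium login flow.
    print(f"ZAP started, scanning {target}")
```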
So, does ZAP just not have good support for JavaScript, and is that something that would be unlikely to come to the ZAP tool? It does have support for JavaScript. It uses the Crawljax project to do the JavaScript crawling, and this basically works, but this is the one problem I identified, where the crawling doesn't start at the correct entry URL. Once we fix that, we might want to re-evaluate whether the crawler then finds more URLs. I think it should, because I was using the desktop version, where I could use the JavaScript crawler and make it crawl from a different entry URL, and it was then finding a lot of URLs that it wasn't finding with our DAST image. So, are there any other questions in the doc that I haven't addressed yet? There's one remark here in the document that says the variables in the DAST GitLab CI YAML file refer to DAST_USERNAME and DAST_PASSWORD. Where are those consumed in the project? Do they work, and is there a project that tests them? That's actually a good point. That might actually be wrong in the documentation, and the environment variables that we expect in the image may be different from what is specified in the docs. Yeah, what I found in the source code is that the timeout, the full scan, and the website environment variables are used, I believe in the analyze script, but DAST_USERNAME and DAST_PASSWORD, I couldn't find any reference to those in the analyze script or in the Python scripts. You can pass them in as command-line parameters, but it won't pull the environment variables, at least from what I saw in the source code. Yeah, that's a very good point. We should probably fix that. I was using this mostly locally with CLI parameters, so that's probably why I missed that one. And of course, we should also add tests, as you mention there. Yeah. Okay, I think all of the questions in the doc are addressed. Are there any other questions that you have in mind? I just had one follow-up, and it's a big one, so I can keep it short, but could you speak briefly on the limitations of DAST, like what sort of things it can't find, and of those, are there ways that we can help improve the product, things we can contribute so that it differentiates our product from other DAST scanners? Yeah, good point. Julian already mentioned stored vulnerabilities; I don't think ZAP has support for finding stored vulnerabilities. We were also talking earlier about server-side request forgery vulnerabilities. It would be really nice if we could add some testing capabilities there. Alternative tools like Burp Suite, I think, have some add-ons where you can test for that. So if we built a plug-in for that, at least we would have feature parity with Burp Suite here, especially since SSRF now gets a lot of attention and they might even include it in the OWASP Top 10. So yeah, absolutely, I think the AppSec team can give good input here, because they have a good understanding of what the very common or trendy vulnerabilities are right now, and based on that, we could develop add-ons for ZAP to test for these kinds of vulnerabilities. One interesting one I saw that Jeremy is using is a tool that will scan with two different accounts and then compare the pages each can access, which gives you some idea about permission models and stuff like that. That's awesome. Was this something for ZAP or Burp Suite? No, it was a separate, third tool specifically built for this. So it would be something we would have to build out.
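Purely as an illustration of that two-account idea, nothing that exists today: the URLs, cookie names, and the assumption that a 200 response means the page is accessible are all made up for the sketch.

```python
import requests

def reachable(urls, session_cookie):
    """Return the subset of URLs that respond with 200 for the given session."""
    ok = set()
    for url in urls:
        r = requests.get(url, cookies=session_cookie, timeout=10, allow_redirects=False)
        if r.status_code == 200:
            ok.add(url)
    return ok

urls = ["http://goat:8080/admin", "http://goat:8080/profile"]   # hypothetical pages
admin_only = (reachable(urls, {"_session_id": "admin-cookie"})
              - reachable(urls, {"_session_id": "regular-user-cookie"}))
print("Pages only the admin account can reach:", admin_only)
```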
That would be a cool idea if we were going to work on extending functionality. Absolutely. Testing permission models is really hard, and also, based on the reports that we are getting at GitLab, you can see it's difficult to get right. I think if we do something like this, maybe we do it outside of the current DAST tool. Okay. Okay. If there are no other questions, I would say we are at time. Yeah. I'm really looking forward to working with you and to focusing a bit more on adding advanced tests, testing heuristics like Ethan was just mentioning. But if you have any questions regarding what I've been doing in the past, just let me know. Okay. With that, I guess we can stop sharing and stop recording. Well, thank you, Dennis. Thank you. Yeah, thanks, really appreciate it. Bye. Thank you. Bye.