This is the 2019 Gartner report. Axway is the largest independent vendor in the API gateway space, competing with Google Apigee, MuleSoft (which is now Salesforce), IBM, and some other names I'm sure you know. So we'll just quickly jump to how we started with GitLab and a little bit about our transformation story. Axway, historically, was mostly licensed on-premises software, the kind you download and install in your own data center or your own cloud. In about 2014, we launched our first managed service. In 2016, we acquired a SaaS business called Syncplicity. And then in 2018, we launched platform.axway.com. So what we've really been doing is making a journey from a legacy licensed software distribution model to a subscription-based cloud model.

We started with GitLab in 2016. We had been using SVN for source code management, and we moved all of our code bases and source code into GitLab. At that time we also moved from VersionOne for defect tracking to JIRA, and we adopted JIRA and Confluence for project management, agile workflow, et cetera. One of the interesting things about making this transition is that we now have cloud services as well as on-premises products, and now the ability, and the need, to hook all of that together while maintaining security and velocity.

So what happened in 2018 is that the business came to me, as the security director, and said: okay, this is our mission for 2019. We want to be in the high-performing category of software delivery. This comes from the DORA project, and these are some of the metrics they use to measure success and high performance in software delivery. You can see some of the metrics here. Deployment frequency: delivering licensed products, traditionally you're looking at a 12-month, sometimes 18-month release cycle, which would put you in the low end; but as you move to cloud, high-performing businesses are now pushing new code out once per hour or once per day. Lead time for changes: in a legacy on-premises model you might have a quarterly or semi-annual release, so there's a lot of pressure to get things done, into that release, and shipped. High-performing organizations ship every day, so new features can go out every day; if you miss a release, that's okay, you can just ship tomorrow. Time to restore: we're looking for less than one day, and again, this is for the cloud services. And change failure rate: this is a measure of, after you push a release into production, what's the rate of success versus the rate at which you have to roll back. (A rough illustration of computing these metrics appears below.)

So the business, again, this is their goal for 2019. I'm not going to go into all of the details of how we achieve that on the development side, because this talk is really focused on security. But I am going to talk about the enablers required to achieve that from a security standpoint: how do you meet those metrics and maintain your security while releasing several times per day? So we'll talk about some of the enablers. Really, this is culture first. You can't accomplish anything without passionate people who are trained, enabled, and empowered to do their job. And so again, some metrics here. A lot of this model comes from BSIMM; if you're not familiar with it, BSIMM is the Building Security In Maturity Model, and we implement a program very similar to what a lot of the other member companies are doing.
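Before going further into how the program is structured, here is the promised rough illustration of the four DORA metrics. This is a minimal sketch only; the record shapes, field names, and sample numbers are assumptions for illustration, not Axway's actual measurements or tooling.

```python
# Minimal sketch: computing the four DORA metrics from hypothetical records.
# Data shapes and values are illustrative assumptions only.
from datetime import datetime
from statistics import median

# (deployed_at, change_created_at, caused_rollback_or_incident)
deployments = [
    (datetime(2019, 6, 3, 10), datetime(2019, 6, 2, 16), False),
    (datetime(2019, 6, 4, 11), datetime(2019, 6, 3, 9),  True),
    (datetime(2019, 6, 5, 9),  datetime(2019, 6, 4, 14), False),
]
# (detected_at, restored_at) for production incidents
incidents = [(datetime(2019, 6, 4, 12), datetime(2019, 6, 4, 18))]

window_days = 30
deploy_frequency = len(deployments) / window_days            # deployments per day
lead_time_h = median(d - c for d, c, _ in deployments).total_seconds() / 3600
restore_h = median(r - d for d, r in incidents).total_seconds() / 3600
change_failure_rate = sum(f for *_, f in deployments) / len(deployments)

print(f"deployment frequency : {deploy_frequency:.2f} per day")
print(f"lead time for changes: {lead_time_h:.1f} hours (median)")
print(f"time to restore      : {restore_h:.1f} hours (median)")
print(f"change failure rate  : {change_failure_rate:.0%}")
```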
Basically, you have a core secure product group; that's the product and cloud security group, and this is my team. And then within the development projects, we have a security champion. This is also known as a SPOC, a security point of contact. And so what's interesting here is how you scale this. A lot of folks here are from different organizations: we've got Disney, we've got Charter, T-Mobile, so very large organizations. So what's the gauge, how many of these people do you need? Best practices say around 2% to 5% as a benchmark metric. At Axway, we have about 3% in the product security group. And then in the security champion program, you have at least one person per engineering project designated as the security champion. Some of you who raised your hands, this might be you at your company. This is the person responsible for working with the product security group and making sure that security is implemented in the project. The product security group is responsible for defining security best practices, developing and deploying tools, centralizing the infrastructure, and enabling the security champions and the development teams to meet their security goals.

On top of all that, you want to have a training and awareness program so that not only your SPOCs and your security team, but everyone, is trained on security basics. So we have the white belt program; you shoot for 100% attainment here across your R&D organization. Then 30% for the blue belt program; this is more advanced security training where you're actually doing coding challenges, fixing known vulnerabilities, and being tested on that. And then you have the black belt program, and this is for your elites: folks who have gone out and gotten certifications, who are doing presentations, writing blogs, and who are really the leaders in the security program. Any questions or comments so far? Oh, thank you.

So, training and awareness events. Capture the Flag is one of the cultural things we do: we'll go to each R&D site and host a one-day Capture the Flag event. We'll stand up a vulnerable server with known vulnerabilities, and then there are levels of challenges you work through to capture the flag. So there's a SQL injection on a web page; you have to find it, and if you get it, you get the flag or the cookie, you submit it, and you get that award. We do these as cultural events to get everyone thinking about security, and also to see, in their own projects and their own products, how things can be broken, abused, and used in ways the developers never imagined. It's something we do annually, along with smaller events, just to get people thinking about it. And it's open for everyone to participate; you don't have to be on the security team, it's for all of R&D, and we sometimes have support and other departments come in and compete as well. Yes, we do have a red team, and I'll talk about that in a second. Yes, we do. It's called Security Shepherd, and if you go to the OWASP homepage and look for Security Shepherd, it's a flagship project within OWASP, and several engineers from our team at Axway are the leads on that project.
But it's open source, so you can download it, deploy it in your own environment, and run your own capture the flag, and if you want to contribute a new challenge or a new level, you can commit right back to the project. Thank you.

So next I'll talk a little bit about process and framework, which is another key factor that the product security group provides. This is just an example of our SDLC. We call it the application security bar; it's not as fun as that bar right there, but within the application security bar we define the different levels, or security gates, that each application and each product has to pass through. Whether or not you need to pass a given gate depends on the type of system you're delivering. For example, if you are not shipping container images, then you don't have to do container analysis. So it's a flexible model that adapts to the type of project you're working on. These are just the bullets, the outline; I won't read them all off to you. But as an example, take third-party component vulnerabilities, the first bullet there. What are the criteria to pass this gate? You need to run the automated security tooling, it should be integrated into your pipeline, the results are then audited, and anything with a medium or higher severity must be mitigated before you commit it or release it into production. The product security group is responsible for this side, and the SPOC and the development teams are responsible for this side. And I put the logo there to indicate this is one of the areas where we're leveraging the pipeline and the source code control within GitLab to instrument this.

This is a little bit more about process. This is our DevSecOps chain, if you will. There's a lot going on in it, but you're probably familiar with the DevOps loop by now; this just illustrates where our security program and our security practices layer on top of the traditional DevOps pipeline, which is what we call DevSecOps. On the left side you have the traditional application security; on the right, more the cloud security operations side of things. For the gentleman who asked about red teaming, this is the slide that illustrates where that comes into the process. We do have an in-house red team that does manual penetration testing as part of the delivery, and then we also use third-party pen testers to test the products once they're either at release or in production, because most of the time, for the third party, it needs to be released to production. Are there any other questions about the red team? Okay.

And then there's just another alphabet soup of vendors here; I'm not endorsing any one of these, but it gives anyone who's interested an idea of what our ecosystem looks like. You can see some of the tools we use and some of the systems in play throughout the life cycle. What I'm really going to focus on in the next section is what we're doing in the coding and testing stages and in the release stage. In the traditional model, early on in the development of a new system, in your planning phase, you sit down with the security team and do a security threat model.
This can be a manual whiteboard exercise, or we have tooling that we use to facilitate it: with a globally distributed development organization, we can't always fly to each development site to do a sit-down threat model, so we use some tools for that. And then you do your testing. Then, when you get to your release, you have to do a final security review, which is called an FSR. This is another meeting with the security team where we sit down and say: okay, what were the results of the scans? Did you meet all of the requirements that we defined? Did you mitigate all of the risks identified in the threat model, and so forth? And if they pass all of the gates, then they can release into production.

So back to the first slide and the challenges we were presented with: all of that works really, really well when you have the time to do a threat model, to do a manual sit-down with the security team, and to do a final security review. But when you're doing continuous delivery and you want to ship every day, the time from the inception of an idea, creating an issue or a ticket, to releasing that ticket doesn't give you a whole lot of time to do all of those manual steps. So we needed a way to automate the steps that can be automated, in the pipelines and in the branches, while making sure we were still covering the bases: doing the threat modeling and the other steps that might take a little more time.

And so here's a look at what that testing pipeline looks like. On the continuous integration side, this is where we're doing the static analysis. We use Fortify for static code scanning, and Dependency-Check and Retire.js to look at the third-party components and make sure there are no known CVEs or other known vulnerabilities in the third-party components the code is using. If there are containers, we're scanning them with Twistlock. And then once it's running, on the dynamic analysis side, we use tools like InsightVM and AppSpider to do dynamic application scanning and find the vulnerabilities you would detect in your REST APIs or in your application, whether it's a web app or just the API layer, from a dynamic standpoint. We use a correlation system to take all of that data and all of the findings, remove duplicates, correlate things, and then throw that up in a dashboard for the teams.

One of the things I'm going to talk about is the continuous security review, the CSR report. This is something the team is doing right in the pipeline itself, so that they can take all of this data and not have to go to another dashboard to look at something; they'll see it right inside their build and right inside their pipeline, so that as long as I've done all of this and everything's green, I can release to production. Yeah, some of these are commercial products, commercial solutions that we're using; Dependency-Check is open source, and there are some other open-source tools that we use. Some run at the branch level, but most of them are post-merge, and that's the problem the CSR, the continuous security review, was invented to solve. Another aspect of this is giving the data to the developers and driving continuous improvement with the data, and we've come up with a few mechanisms to do that.
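To make the correlate-and-gate idea concrete, here is a minimal sketch. The normalized JSON layout, file names, and severity labels are assumptions for illustration; this is not the actual correlation system's or ThreadFix's format. The principle is simply to merge findings from the different scanners, drop duplicates and suppressed items, and fail the job if anything of medium or higher severity remains.

```python
# Sketch of a CI job step: merge findings from several scanners, de-duplicate,
# ignore suppressed items, and fail the build on unmitigated medium+ findings.
# The normalized JSON layout assumed here is illustrative, not a real tool's format.
import json
import sys

SEVERITY = {"low": 1, "medium": 2, "high": 3, "critical": 4}
GATE = SEVERITY["medium"]   # security bar: medium or higher must be mitigated

def merge(report_paths):
    """De-duplicate findings by (rule or CVE id, location), keeping the worst severity."""
    merged = {}
    for path in report_paths:
        with open(path) as fh:
            for f in json.load(fh):   # [{"id", "location", "severity", "suppressed"}, ...]
                key = (f["id"], f["location"])
                if key not in merged or SEVERITY[f["severity"]] > SEVERITY[merged[key]["severity"]]:
                    merged[key] = f
    return merged.values()

def blocking(findings):
    return [f for f in findings
            if not f.get("suppressed") and SEVERITY[f["severity"]] >= GATE]

if __name__ == "__main__":
    issues = blocking(merge(sys.argv[1:]))
    for f in issues:
        print(f"BLOCKING {f['severity'].upper():8} {f['id']} at {f['location']}")
    sys.exit(1 if issues else 0)   # non-zero exit fails the pipeline and blocks the merge
```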
Where we started is here: this is where we do our initial security review and final security review and surface that up in a dashboard, so the teams, the project owners, the product managers, and the product owners can go in and look and see, okay, how is my product doing? They can also see how they compare to other products; everyone in the company has access to the dashboard. And when we do have a meeting with the team to do an ISR or an FSR, we track what the status was at that point, and then we set an agreement: okay, by next release, here's where we expect you to be. And then we work with them to get them to that level. We built this, yeah, that's us. And this is Grafana, which I believe is open source.

I'll touch on this graph again, but to the question of whether this is running in the branch or in the merged build: what the CSR report does is run in your branch and go pull the latest results from your threat model, to say, okay, did the threat model pass, and then this shows green. Even though I didn't do a threat model for this branch, I'm at least checking what the latest status was, and if it's older than, say, two weeks, then it would fail, that would be a red, and I wouldn't be able to ship. So whether a check runs in the branch or not, the CSR will go out, fetch the latest result, and surface it up, so the team knows that when I want to go hit deploy, if I don't have all greens there, I'm not going to be allowed to ship.

Okay, so the next couple of slides go a little bit to the question... yeah, it's a great question, and it depends on the maturity of the team. For the audio, the question was: who does the triage, do the teams themselves triage and review the false positives? I mentioned the SPOC role; a lot of times it's the SPOC's responsibility to review the scans, review the results, and triage the findings. My team will go and audit them occasionally, but most of the time, once they've reached a certain level of maturity, there's no need to do that audit. And then of course, if there is a question, hey, this tool is showing us this vulnerability, we think it's a false positive, we think the tool is broken, then it's our job to go fix it. Great question.

I think I'm still good on time, so I'm going to go through these next couple of slides. This is just a little bit more detail on that CSR process I talked about. Again, I'm a senior director, so I don't get into the pipeline every single day; I'll do my best with these slides. Here's a new feature that we're launching. Within GitLab, we have a CSR profile, which is part of the repo and part of the branch, and it just says, for this product, what security gates are required. The runner will then download a Docker image with the test instrumentation, the test tools, run the scans it needs to in the branch, and then pull the results, like I said, from the other tools that are running on the main branch. So they can get the latest static analysis scan. With static analysis, if you've worked with these types of tools, some of them, if you have a big code base and you're working on a monolithic application, can take you a day, sometimes longer, to run that scan.
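Before getting to how those slow scans are handled, here is a hypothetical sketch of the CSR profile idea just described. The file name, field names, and gate names are invented for illustration and are not the real profile format; the point is only that each product declares which security gates apply to it, and the branch job reads that to decide what to run locally versus what to pull from the latest main-branch run.

```python
# Hypothetical per-repo CSR profile: names and fields are illustrative only.
# Each product declares the gates that apply to it; gates that are too slow or
# heavy to run on every branch are pulled from the latest main-branch run instead.
import json

EXAMPLE_PROFILE = {
    "product": "example-service",
    "gates": {
        "dependency_check": {"run_in_branch": True},
        "container_scan":   {"run_in_branch": True},    # omitted if no container images ship
        "static_analysis":  {"run_in_branch": False, "max_age_days": 7},
        "threat_model":     {"run_in_branch": False, "max_age_days": 14},
    },
}

def load_profile(path="csr-profile.json"):
    """Read the profile committed alongside the code in the branch."""
    with open(path) as fh:
        return json.load(fh)

def plan(profile):
    run_now     = [g for g, cfg in profile["gates"].items() if cfg.get("run_in_branch")]
    pull_latest = [g for g, cfg in profile["gates"].items() if not cfg.get("run_in_branch")]
    return run_now, pull_latest

if __name__ == "__main__":
    run_now, pull_latest = plan(EXAMPLE_PROFILE)
    print("run in this branch     :", run_now)
    print("pull latest main result:", pull_latest)
```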
So that's why we don't run the full static analysis in the branch, but pull the latest from the last run. But if that last run is older than, say, one week, then it would fail. In this example, there's a known false positive from Twistlock, and it's already been suppressed by the SPOC, so the rest of the gates pass, the pipeline succeeds, and you can go and merge that. In the next example, I have the suppression and I rerun it: the false positive is still there and still suppressed, but now we've found a new issue. Since we found a new issue, we block the pipeline and we block the merge, and they have to go fix that new issue.

And then, what about the version that's running in production? A lot of the time we're scanning the branch, we're scanning the merge, we're scanning the dev branch, but what about the release branch that's already out there in production? So what we also do is scan the release branch every day, just to make sure there are no new zero-days that we might not detect in dev. We run that daily scan, and if we catch a zero-day, that fails the CSR; then when your branch runs, it picks up that result and also fails your merge, so somebody knows to go look at that new finding. And again, if everything passes successfully, there's no requirement to meet with security and no manual approval from security: you've got your go from security and you can deploy into production.

So that's just a quick overview of some of the processes we implemented to help our development teams achieve those metrics and their goals. Like I said, we launched platform.axway.com earlier this year, and that was a large part of the SaaS delivery and continuous delivery initiatives that got us there. Looking at how we stacked up, working with the team, these are their metrics on where they are today: in the high category for deployment frequency, lead time for changes, time to restore, and change failure rate. And they said, well, don't get complacent, don't get lazy, because here's where we want to be for 2020: they want to go to elite. So now we're working on how we increase the velocity and do this at a greater scale with more of the teams, because there are new products also now shipping onto platform.axway.com, and they'll be trying to release with the same velocity. So that was exactly 29 minutes and 44 seconds. Any questions? Yeah, sure, I'd be happy to take any questions, sir.

Yeah, so as far as giving the feedback to the devs: we have ThreadFix and the PSG dashboard, which I showed. During their initial security review or final security review we'll meet with them to review those results, and they can go right to ThreadFix or right to the dashboard and see them. If they want to deep-dive into the findings, they have those in JIRA, so they can go into JIRA, see their findings, put them into their release, and mark them for the target release. Today, for my team, ThreadFix is our source of truth.

Yeah, so for reference architecture, we do provide some templates. Our dashboard, for example, is a Java project, and we have some other projects that we support. We'll provide an example pipeline for both GitLab and Jenkins, because not everyone is on GitLab yet.
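Circling back to the daily release-branch scan described a moment ago, here is a rough sketch of that flow. The fetch function is a stand-in for querying wherever scan results are stored, and the names and thresholds are assumptions, not anything shown in the talk.

```python
# Rough sketch: the daily job scans the release branch and records the outcome;
# each branch CSR run then consults that record and blocks the merge if the
# result is stale or a new unsuppressed finding (e.g. a fresh zero-day) appeared.
# fetch_latest() is a stand-in for querying the scan-result store.
from datetime import datetime, timedelta

MAX_AGE = timedelta(days=1)   # release branch is rescanned daily

def fetch_latest(branch="release"):
    # Stand-in data; a real implementation would query the scan-result store.
    return {"completed_at": datetime.utcnow() - timedelta(hours=6),
            "new_unsuppressed_findings": 0}

def release_branch_gate(now=None):
    now = now or datetime.utcnow()
    result = fetch_latest()
    if now - result["completed_at"] > MAX_AGE:
        return False, "daily release-branch scan is stale"
    if result["new_unsuppressed_findings"]:
        return False, f"{result['new_unsuppressed_findings']} new finding(s) on the release branch"
    return True, "release branch is green"

if __name__ == "__main__":
    ok, reason = release_branch_gate()
    print(("PASS: " if ok else "BLOCK MERGE: ") + reason)
    raise SystemExit(0 if ok else 1)
```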
Coming back to those reference pipelines: we'll also work with the teams, so if something isn't running or isn't working correctly, we help them set it up, but we provide those reference examples. They can clone them into their own project and use them as a starting point for building their pipeline. We also provide, as a reference or framework, some secure code bases. We have an Axway Defense framework, which is a set of libraries they can use in their code for whitelisting, encryption, and some of the other security requirements, so that they don't have to go write their own. Okay, thank you, everyone. I appreciate your time and attention. Thank you very much.