Thank you all for your time, and we'll jump right into the content. So, fuzz testing is the process of testing your APIs with generated data. It's different from conventional unit testing, because you are not trying to verify specific functions or parts of your code. Instead, you're looking for crash-level bugs, such as memory safety violations, unbounded resource usage, and plain old crashes. Fuzz testing has not been widely adopted in industry, but we've had a lot of success with it in Envoy. In the past three years, our fuzzers have caught many bugs, including security issues, and we've fixed them. I'm going to give you a quick walkthrough of writing a fuzzer in Envoy to demonstrate the impact.

First, before writing the fuzzer, we need a library to test. Here is a brief implementation of a C++ function. This function unescapes newline characters in the input string. It works well for the majority of inputs, but there are a few problems with it. Let's focus on the indexing. Notice that the index i is incremented in two places. For certain inputs, we can end up incrementing past the bounds of the string and reading a character that's not part of the string. That is undefined behavior in C++. Let's write a fuzzer that can catch edge cases like this.

To write a fuzzer in Envoy, we first create a fuzz input schema. The schema tells the fuzzing engine what types of data to generate. The library above takes a single input parameter, a string, so we create a protobuf message that has one string field. Now the fuzzing engine will generate random strings. Next, we write the fuzzer itself. For this library, it's very simple, three lines of code. We use the libprotobuf-mutator macro to define a callback function. This callback ingests a generated input and passes it down to the library under test, the unescape function.
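The library and fuzz callback just described can be sketched roughly as follows. This is an illustrative reconstruction, not Envoy's actual code: `unescapeNewlines` and `fuzzOneInput` are hypothetical names, and in Envoy the callback would be declared with libprotobuf-mutator's `DEFINE_PROTO_FUZZER` macro over the generated protobuf message rather than written as a plain function.

```cpp
#include <string>
#include <string_view>

// Illustrative reconstruction of the library under test (name hypothetical):
// turns the two-character escape sequence "\n" into a real newline.
std::string unescapeNewlines(std::string_view in) {
  std::string out;
  for (size_t i = 0; i < in.size(); ++i) {   // first place i is incremented
    // BUG: if the input ends with a lone backslash, i + 1 == in.size(), and
    // this indexes one past the end of the string_view, which is undefined
    // behavior in C++.
    if (in[i] == '\\' && in[i + 1] == 'n') {
      out.push_back('\n');
      ++i;                                   // second place i is incremented
    } else {
      out.push_back(in[i]);
    }
  }
  return out;
}

// The fuzz callback: take one generated input and pass it to the library
// under test. In Envoy this body would sit inside libprotobuf-mutator's
// DEFINE_PROTO_FUZZER macro; it is a plain function here so the sketch
// stays self-contained.
void fuzzOneInput(std::string_view input) { unescapeNewlines(input); }
```

Well-formed inputs such as `"a\\nb"` unescape normally; an input ending in a lone backslash triggers the out-of-bounds read, which is exactly the kind of crash a sanitizer-instrumented fuzzer reports.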
Now, when the fuzzer runs, it generates random strings, one string per iteration. Here are some possible strings it might generate. Our hope is that the fuzzer will generate that last string, the one with a trailing backslash. This input will trigger the undefined behavior in the library under test. Our fuzzer, running with C++ sanitizers, will catch that and report the error to the developer.

So now, the key question is, how does the fuzzing engine generate strings like this? If the strings were completely random, it might take a lot of trial and error to produce a backslash at the end. The answer is that generation is not completely random. This is where continuous fuzzing with coverage-guided fuzzers comes in. Continuous fuzzing is essentially running our fuzzers 24/7 in the background. We combine this with coverage-guided fuzzers, which employ a feedback loop to generate inputs, using code coverage as the signal in that loop. Every time these fuzzers run, they generate a random input and then score that input based on the code coverage it achieves. The fuzzers can then explore the state space around inputs that score higher, that is, inputs with higher code coverage. It boils down to an optimization problem: the fuzzers are optimizing over the input space, and the loss function is inversely proportional to code coverage.

This whole concept of continuous fuzzing with coverage-guided fuzzers is a solved problem at Google. We have two open-source frameworks that any developer can use. If you're a high-impact open-source project that is used widely across the web, you can integrate with OSS-Fuzz. It's managed by Google and completely free for you to use. If your project doesn't meet those requirements, you can run ClusterFuzz on your own infrastructure. That is all I have. Thank you.
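The feedback loop just described can be illustrated with a toy sketch. Everything here is hypothetical and heavily simplified: a real engine like libFuzzer instruments the binary to observe actual edge coverage and keeps a whole corpus of interesting inputs, whereas this stand-in scores inputs against four hand-written branches of a toy target and keeps only the single best input.

```cpp
#include <cstddef>
#include <random>
#include <set>
#include <string>

// Stand-in for instrumented coverage: report which branches of a toy target
// an input reaches. Branch 3 is the interesting edge case, a trailing
// backslash.
std::set<int> branchesCovered(const std::string& s) {
  std::set<int> covered{0};
  if (!s.empty()) covered.insert(1);
  if (s.find('\\') != std::string::npos) covered.insert(2);
  if (!s.empty() && s.back() == '\\') covered.insert(3);
  return covered;
}

// Random mutation: either append a byte or overwrite an existing one,
// drawing from a tiny alphabet to keep the search space small.
std::string mutate(std::string s, std::mt19937& rng) {
  const std::string alphabet = "ab\\n";
  char c = alphabet[rng() % alphabet.size()];
  if (s.empty() || rng() % 2 == 0) {
    s.push_back(c);                // grow the input
  } else {
    s[rng() % s.size()] = c;       // flip an existing byte
  }
  return s;
}

// The coverage-guided loop: mutate the best input seen so far and accept a
// mutant only when it increases coverage.
size_t fuzzLoop(int iterations) {
  std::mt19937 rng(0);   // fixed seed so the sketch is deterministic
  std::string best;      // a one-entry "corpus", for simplicity
  size_t bestScore = branchesCovered(best).size();
  for (int i = 0; i < iterations; ++i) {
    std::string candidate = mutate(best, rng);
    size_t score = branchesCovered(candidate).size();
    if (score > bestScore) {  // higher coverage: keep this input
      best = candidate;
      bestScore = score;
    }
  }
  return bestScore;  // 4 means the trailing-backslash branch was found
}
```

The shape is the optimization the talk describes: generate an input, score it by the coverage it achieves, and bias future generations toward higher-scoring inputs, so the rare trailing-backslash case is found far faster than blind random search would find it.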