Okay, good morning, good afternoon, good evening, everyone. Welcome to the LFN webinar series. Today we're going to be speaking with a couple of members of our FD.io community, and today's topic is how to build secure terabit network services with FD.io technologies. Before I do a quick intro and kick it off to our speakers, just a couple of housekeeping items. All panelists will be muted for the duration of the presentation. If you have questions, please type them in the Q&A window at the bottom right of your screen. We will dedicate some time at the end of the presentation for live Q&A, so please feel free to type your questions at any time in that window. A recording of this webinar will be available within the next couple of days, and that link will be emailed to all of our registrants. We'll also post the recording on the LFN and FD.io websites so folks can watch it on demand at any time. All right, without further ado, our presenters today are Ray and Maciek from the Technical Steering Committee of FD.io. I will go ahead and kick it off to Ray, who's going to give a more thorough introduction of himself and Maciek, and we will go from there. So thanks for joining, everyone. Thank you, Jill, and welcome everyone to the FD.io webinar on how to build secure terabit network services with FD.io technologies. I've been involved with the FD.io community for quite a few years now, and we continue to push innovation and performance within the community. A lot of what we're here to talk about today is the new stuff available from FD.io to help build fast and secure network services. I should tell you a bit about myself: I'm Ray Kinsella, a product architect at Intel and a member of the FD.io Technical Steering Committee. Do you want to say hi, Maciek? Thank you, Ray. Hi everyone.
I'm Maciek Konstantynowicz, Project Technical Lead for the FD.io CSIT project and a founding member of the FD.io project at its beginning in 2016. I'm a Distinguished Engineer at Cisco, and my passion is making networking and computing work together. And a few words of thank you about what we're going to talk about. There's quite a large community behind the FD.io projects — the core projects, VPP and CSIT, and a number of others. This content wouldn't be possible without contributions from that community, and special thanks to the folks listed on the slide from Cisco and Intel; not all of them are always visible in the community, but they do very hard work at the back end. So thanks very much. Ray, do you want to say anything about this iconic puppy on the right? Oh yeah, the puppy, thanks for the prompt. So yeah, we're looking to give away some puppies today. So please chime in with your questions, and the people asking the best couple of questions will each get a puppy. We love puppies and we love questions, so please chime in — there's a chance to win a puppy or two. Okay, so let's get going. Ray, this is your favorite slide. Yeah, the Intel legal notices need to be at the front of every presentation, just protecting corporate brands and the like. So I think we can go on. Excellent. So, a few words of introduction about what we're doing in FD.io. The attitude to benchmarking is pivotal in the project, and to a great degree it distinguishes FD.io from other similar efforts in the industry, open source and closed source. So why do networking throughput and latency matter? The developers, testers, and designers who work in the software networking world must, by definition, care about performance and resource usage efficiency, paying specific attention to an optimized software-hardware interface.
So, in this world performance should never be a secondary consideration, and that's why a benchmarking culture permeates everything that is done in FD.io. Two examples of that. In terms of the community: developers use FD.io CI benchmarking to catch performance regressions across multiple compute platforms — x86, Arm, and such; we'll talk about that a bit more — and also to verify performance optimizations. That's the tooling and the platform we run and operate in the LFN FD.io labs, everything in open source. And then for consumers: to level-set expectations, specifically around performance and scale, and to provide an industry reference. That's what we're aiming for in terms of repeatable, deterministic performance. And quoting David Patterson, whom some of you may know — he's quite a famous chap, a RISC processor pioneer and co-author, together with John Hennessy, of books on computer architecture: "For better or worse, benchmarks shape a field." The way we go about it is that we do our best to push progress with good, relevant benchmarking, going after real problems. I might say something about that, Maciek, if that's okay. Just go ahead. My own personal experience with FD.io is both as a developer and as a consumer. As I contribute code to FD.io, very extensive checks get run on my code in CI to make sure that I haven't caused performance regressions — to make sure we keep FD.io's performance as patches get admitted and features get added. Developers have a benchmarking-driven development experience. And as a consumer, when I'm out talking about FD.io in forums like this, being able to reach for the numbers — and it's very easy in FD.io to reach for the numbers — to pull them up and show IPv4 performance, IPv6 performance, and IPsec performance, which we're going to talk about a lot today, is a very, very valuable tool.
And they're right there, really easy to get, in FD.io. Thanks much. Always welcome, Ray. So, a few words about the principles we follow — the traits that are ingrained in our performance culture at FD.io. We value quality over quantity, and when I say quality, we really mean measurement-verified, data-proven quality. We spend lots of time on tooling and methodology improvements and development — you'll see a bit about that later in the talk — and, also very important, on the cross-industry collaboration that open source enables. Thanks to that, we are able to do things that a few years ago seemed impossible. And of course we embrace change; that's the nature of technology, the internet, and a fast-moving world. And finally, we pay a lot of attention to openly and widely sharing all benchmarking data. Everything we do is in the open; everything we do is shared. With that, we've had great experience attracting a lot of good talent and feedback, helping us push the envelope. What does this mean in practice? As code is developed, merged, tested, and benchmarked: optimizations do not get merged without a demonstrated performance advantage, and we take care of that; features don't get merged without seeing their impact on performance — both the specific feature's performance and its collateral performance impact; and releases don't happen with regressions, and if they do, we root-cause them. And above all — quoting Mr. Tufte, whom you may know from the world of visual information — above all else, show the data. Ray, any comments here, or should we move on? I think that just reinforces what I said on the last slide: a benchmarking culture permeates everything we do here, and it's that culture.
As much as the technology innovations and the collaboration we're going to talk about later, it's that benchmarking culture permeating everything we do in FD.io that really allows us to hit terabit speeds. We'll get deeper into that in a few slides, I think. And actually, we're getting to it now. So, go ahead. Okay, so this is my favorite slide; this is the hook slide. We're kind of showing you where we got to at the start of the presentation — we're showing you the cake that's been made, and then we're going to show you how to make the cake, right? This is a screenshot from a recent demo we did, where we had a dual-socket third-generation Xeon Scalable platform from Intel, also known as the Ice Lake platform. We were running IPsec on it across both sockets, and we were able to take advantage of all the optimizations that platform gave us: things like PCI Express Gen 4, additional sockets, additional memory controllers, and improvements in instruction sets — we're going to talk a little bit later about vector AES and the help vector AES was to us. We were able to leverage all of those things, bring them together through FD.io, and actually benchmark an IPsec throughput of one terabit — one terabit — through a single dual-socket Intel platform. This is where we got to, and it's really impressive data. We actually include a link at the end to the video where you can see this demo in action — it's on YouTube, and I encourage you to go watch it. But this is the cake that we made. This is a stack you can download from FD.io VPP today; you can get yourself a dual-socket Intel server and repeat this, and that's very much what FD.io is all about.
This is not just a demo for YouTube; it's very repeatable. The software is available, the platforms are available — you can make this happen in your lab. Next slide, please. Okay, so on to a little bit about what we're going to talk about today: how the cake is made. We're going to talk upfront about the trends driving and influencing FD.io's work — why IPsec is such a hot topic at the moment and why we did this work in the first place. We're then also going to talk about VPP and the optimizations we made to it. I'm very enthusiastic about VPP, so we're going to spend a little time framing VPP — what it is and how it works — and how we optimized VPP for the Ice Lake platform. We'll also spend a little time on a few additional features recently added to VPP that you might be interested in. And then we're going to take a deeper dive into CSIT. CSIT is a project in FD.io which I am really excited about, and I don't think we talk about it nearly enough. When I talk about the benchmarking culture that permeates FD.io, CSIT — forgive me, Maciek — is the project in FD.io that drives that culture. CSIT drives the innovation around benchmarking and the other aspects of benchmarking that really allow FD.io VPP and the other FD.io assets to enable people to build these network services. So CSIT has a central role in that, and we're going to deep-dive into CSIT's central role in helping us build terabit network services. And then we'll round off with a bit of a summary. All right, so let's move on.
We're going to look at trends that impact the work we do within FD.io, specifically across three dimensions: compute — the evolution of Moore's law; cloud — developments in distributed computing; and internet security. So let's look quickly at those three aspects. I know this is a bit of a busy visual, but that's because it's trying to capture quite a few trends happening in silicon. Moore's law, as I think most of us know, has been running out of steam for some time, and it has been replaced by two dimensions of silicon evolution: "More Moore" and "More than Moore" — terminology coined by the ITRS and IRDS, which are part of the IEEE Rebooting Computing initiative; links are at the bottom of the slide. More Moore is basically scaling silicon performance — speed and power — while improving density and lowering cost by shrinking the physical features. This graphic is actually from the original 2010 ITRS white paper, but it's still in use. We are now getting to five nanometers, IBM is proving that three nanometers is possible, and then there will be stuff beyond silicon. The other dimension, the horizontal one — the blue arrow — focuses on functional diversification and specialization. And the net result is the purple arrow, the combination of those two dimensions. This is where software like FD.io very much plays a role, concentrating on packet-I/O-intensive workloads.
And it means using the latest optimized software-to-hardware interface for hardware that is itself being optimized for I/O. We've seen that happening within FD.io since 2016, very much on all platforms: today we touch Intel Xeon and Atom processors; we also test Arm — and we're waiting for the latest N1 SKUs — and we also test AMD. In all of these, a lot of attention is paid to I/O. So it is the combination of those two dimensions, More Moore and More than Moore, that results in the terabit speeds Ray talked about, together with the way the VPP software adopts the latest hardware optimizations — Ray will talk about that later in the context of IPsec and crypto. The second dimension is distributed computing: moving away from silicon and processors and single machines to distributed computing — now everybody calls it the cloud, or connected computing. I like the way the evolution of cloud technologies was captured in one of the SIGCOMM 2020 keynotes; if you haven't watched it, it's quite an enlightening talk. It starts with the first epoch, when the internet started — FTP, email, telnet — and gets us to where we are today, with ML dominating applications (and, to a degree, customers' lives) and highly distributed applications. That world is characterized by high speeds — multi-terabit — and, the other aspect, much lower latencies, aiming, in the context of disaggregated compute, at 10-microsecond RPCs. So why is that important? Because it has repercussions across the whole stack: physical infrastructure — network and compute; virtual infrastructure — virtual networking and virtual computing, virtualization, VMs, containers and such; and distributed apps. So we need to play — we are playing — in that field. And finally, security.
Specifically, this graphic is taken from a recent Gartner report, and that's where Gartner defined the marketing term SASE — Secure Access Service Edge — which encapsulates some really major industry trends: zero trust applied to networks, apps, and resources, quite often referred to as ZTA or ZTNA; and secure, encrypted communications anywhere and everywhere. So everything on the wire either already is or will be encrypted — no clear text on the wire. That means the importance of encryption skyrockets, and that's where terabit-level software networking helps. So, go ahead. We're going to talk a little bit about FD.io VPP here. I'm just going to provide enough context to support the rest of the presentation. There are deeper dives on FD.io VPP available on the internet; if you're looking for one, the FD.io website is a great place to start. But I'm going to take you through the broad strokes of FD.io VPP and how we've recently optimized IPsec. Next slide, please. So one of the key aspects of FD.io VPP — and really, to classify what it is — FD.io VPP is a high-performance network stack in user space that scales very, very well with the resources you give it: the more cores you give it, the faster it goes. It's very feature-complete; it supports a range of networking protocols at layer 2, layer 3, and layer 4 and above. I'll talk a little more about that in a moment. So that's really what FD.io VPP is: a networking stack in user space that's typically used to build high-performance software-defined networking applications. And when I talk about the performance culture, the benchmarking culture, permeating everything we do in FD.io, you can see examples of it here. We test out to many, many cores — we test per-core scaling. So if you look at the first row here, you'll see single-core performance.
If you look at the second row, you'll see dual-core performance, and in the third row, quad-core performance, across a range of IPv4 and IPv6 test cases. We also test scale-out because, typically, the kinds of products built from FD.io VPP are things like high-performance routing software, NAT software, and IPsec software, and those typically handle many tens of thousands to hundreds of thousands, perhaps even millions, of flows concurrently. And you can see here, for the IPv4 and IPv6 test cases, we do just that: we test at scale. Two things I'd point out here. The first is the great per-core scaling we have: with a single core you're getting 17.6 million packets per second of IPv4 forwarding on an Intel Xeon Gold 6252N; with two cores you're getting 32 million packets a second, and with four cores, 70 million packets a second. That's fantastic scaling. But also, as we scale up the number of routes, performance doesn't drop significantly: it goes from 17.6 million packets per second with a single route down to 15.7 million packets per second — on a single core, by the way — with two million routes. So we test at scale: many, many cores, and many, many routes. And on the left-hand side you can see we also test with many nodes — things like multiple CNFs and multiple VNFs and those kinds of topologies. So maybe we can move to the next slide, then. Before I get into this slide, I should preface it by saying that Damjan Marion, who is one of the core contributors to VPP, deserves a lot of credit; he's been a key innovator in the community, and much of this wouldn't have been possible without him. There are a number of technologies built into FD.io VPP that really optimize IPsec performance on third-generation Intel Xeon Scalable processors.
The first thing I'd point out is the technology in FD.io VPP called multi-arch variants. FD.io VPP is designed around a hierarchy of graph nodes, and you can see on the screen an example of a typical IPsec pipeline, with multiple graph nodes, each responsible for a different part of the IPsec processing pipeline. What multi-arch variants allow us to do is add platform-specific optimizations — optimizations using, say, the Intel AVX-512 instruction set, or Intel vector AES specifically for accelerating AES cryptography — in such a way that they are used on platforms that support them, with a less-optimized software fallback on platforms that don't. And it's FD.io VPP's multi-arch variants that make that possible, all within a single binary: when the binary starts up, it detects the platform on which it's executing and selects the most optimized version of itself to run. Another technology we added recently is the Flow API. This allows you to configure network cards that support it to direct one IPsec flow to a given core and another IPsec flow to another core — to spread your IPsec load over multiple cores and balance the IPsec load across the system. That's available through the FD.io VPP Flow API, and it's supported on a good number of Intel NICs. The final technology I want to talk about today is Intel's vector AES, something added specifically on the Ice Lake platform to accelerate AES cryptography. FD.io VPP has added its own highly optimized native implementation of AES encryption and decryption, and that gets automatically selected, automatically pulled in, and executed through the FD.io VPP multi-arch variants.
So you don't have to think about it; you don't have to do very much. When you run FD.io VPP on an Ice Lake platform and you're executing an IPsec pipeline, it's already executing with the accelerated vector AES instruction set so that you get the best possible performance. It happens out of the box. Okay — so what else are we adding to FD.io VPP? Next slide, Maciek. Actually, before we move on, Ray, there are questions coming in about multi-platform support and AMD. So just to re-emphasize: VPP and CSIT treat platforms equally. We run on and test Intel Xeon and Atom, Arm, and also AMD, and if you click on some of the report links published later, you will see the full suite of platforms we test. And just to emphasize what Ray said about this slide, in case it didn't sink in: when you run the VPP code on any compute platform, it will detect the platform it's running on and use the most optimal code, with the most optimal instruction set, for that platform. And that's quite cool. The operator and the user have a choice of which versions they want to use, or they can just use the latest — and it happens automatically. This is, I think, an excellent representation of what Patterson and Hennessy talk about in their computer science books regarding the optimized software-hardware interface. It's an excellent demonstration, and it all happens automatically, out of the box. Yeah. But wait, there's more — we've been very busy. FD.io VPP is very rich in terms of protocol support, and we support other secure protocols. WireGuard support was recently added in VPP 21.01, and we're looking at adding acceleration for WireGuard at the moment; that's in the works. QUIC support was added in FD.io VPP 21.05. We have a host stack in FD.io VPP to which applications can link — TCP, TLS, and QUIC applications — and they can use FD.io VPP as infrastructure, as an alternative to, say, the Linux kernel.
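The runtime selection just described — one binary shipping several implementations of a hot function and, at startup, picking the best one the CPU supports — can be sketched in miniature. This is an illustrative toy in Python, not VPP's actual C code; the function names and feature strings are invented for the example.

```python
# Toy model of multi-arch variant dispatch: several implementations of
# one hot function ship together, and the best supported one is chosen
# once at startup based on detected CPU features.

def aes_gcm_generic(data):
    """Portable fallback implementation (always available)."""
    return f"generic({data})"

def aes_gcm_avx512(data):
    """Hypothetical AVX-512 variant."""
    return f"avx512({data})"

def aes_gcm_vaes(data):
    """Hypothetical vector-AES (VAES) variant."""
    return f"vaes({data})"

# Variants ordered from most to least preferred, each paired with the
# CPU feature it requires; the fallback requires nothing.
VARIANTS = [
    ("vaes",   aes_gcm_vaes),
    ("avx512", aes_gcm_avx512),
    (None,     aes_gcm_generic),
]

def select_variant(cpu_features):
    """Return the first (most preferred) variant the CPU supports."""
    for needed, fn in VARIANTS:
        if needed is None or needed in cpu_features:
            return fn

# On an Ice Lake-class CPU the VAES variant wins; on an older CPU the
# very same "binary" silently falls back.
new_cpu = select_variant({"avx512", "vaes"})
old_cpu = select_variant({"sse4"})
print(new_cpu("pkt"))   # vaes(pkt)
print(old_cpu("pkt"))   # generic(pkt)
```

The real mechanism works the same way in spirit: the selection happens once, at process start, so the per-packet fast path pays no dispatch cost.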
We also added support for multiple different kinds of crypto engines, and that's been there for some time. We support asynchronous cryptography through accelerators such as Intel QuickAssist, and we also support multiple implementations of software cryptography. You heard earlier about FD.io VPP's native implementation, which uses the vector AES instruction set on Intel; we also link against and use the Intel IPsec multi-buffer library to support other algorithms such as ChaCha20-Poly1305, and we support OpenSSL for an even wider set of algorithms. So FD.io VPP supports a wide set of cryptographic algorithms through its own software cryptography and by pulling in various libraries, and it also supports asynchronous cryptography through accelerators such as Intel QuickAssist technology. We also have, as you heard me mention earlier, the TLS stack, and we support multiple forms of TLS cryptography as well. Okay. All right, so let's move on and dive into some of the detail. We're going to talk now about the CSIT project itself. This single visual provides a quick overview of what the CSIT project is. The goals very much align with the guiding principles I talked about: fostering good engineering discipline by focusing on functional and performance benchmarking; providing developers and testers with tools — including, as you will see later, automated anomaly detection and such; defining the metrics; and doing our best to guard code quality from the networking perspective. We execute the whole CSIT framework against VPP, and we also test DPDK applications as a reference, in the FD.io labs hosted by Linux Foundation Networking.
Most of the gear actually lives in Canada, and all data is available, including the low-level Jenkins logs, console logs, robot logs, and generally pretty much everything. The other important thing is that the FD.io CSIT benchmarking environment and tooling we develop is portable and available for cloning, and you will see later what this enables when Ray talks about some of the terabit testing and the IPsec testing done in a vendor's lab, using the latest hardware, all in a clone of the CSIT environment. Not to spend too much time — we keep hovering at the high and mid level — but we do have a few challenges as we evolve and develop the VPP software, and that is maintaining quality across the code base and across multiple platforms. That's where More than Moore is flooding us with varieties of — in our lab's case — Intel, Arm, and AMD platforms; we are doing more and more test use cases; and clearly the speeds are going up — we are currently at 100 GbE NICs and going higher. So again, "for better or worse, benchmarks shape a field" — I think we've seen this quote before. So let's go into the innovation, communication, and collaboration aspects of the project, starting with innovation. Another busy slide from my side, talking about the importance of benchmarking. We run all of our tests automatically, but we need to execute quite a volume of tests and we need to run them continuously. What we found out the hard way, of course, is that if we stick to traditional tools and traditional methodologies, we won't be able to deliver on what we're asked to do. The combinations of hardware platforms, test use cases, configs, traffic profiles, core and queue combinations, and so on — if you live in or interface with this world, you know what that is — plus executing different acceptance criteria for performance limits, are basically taking too long.
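To get a feel for why the traditional approach stops scaling, here's a back-of-envelope calculation. Every count below is invented for illustration — the point is the multiplicative effect of the test matrix.

```python
# Toy test-matrix calculation (all counts are made up for illustration):
# the dimensions multiply, and a classic full-precision search per cell
# quickly adds up to an unworkable amount of testbed time.
platforms       = 3    # e.g. Intel Xeon, Arm, AMD
nics            = 4    # NIC models under test
test_cases      = 50   # ip4, ip6, l2, ipsec, nat, ... across configs
frame_sizes     = 3    # e.g. 64B, 1518B, IMIX
core_counts     = 3    # 1c / 2c / 4c
minutes_per_run = 10   # one traditional binary-search throughput test

runs  = platforms * nics * test_cases * frame_sizes * core_counts
hours = runs * minutes_per_run / 60
print(runs, "runs ->", round(hours), "hours of testbed time per pass")
# 5400 runs -> 900 hours of testbed time per pass
```

Even with these modest made-up counts, one full pass is weeks of machine time — hence the faster MRR and MLRsearch methodologies described next.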
So to that end, we have come up with a set of methodologies, some of which we are standardizing in the IETF. I'll quickly walk through them on the graph you see on the left. The first one is our maximum receive rate (MRR) benchmarking, which is basically a fast, rapid test — test time is on the order of seconds — where we measure the performance, whether that's packets per second, bits per second, connections per second, or transactions per second, depending on what we test. We're able to deploy it in our daily trending tests, for anomaly detection and such, and we're able to execute thousands of those tests a day. Then comes the multiple loss ratio search (MLRsearch), where we measure with RFC 2544 accuracy: multiple rates, each with a different loss-ratio acceptance criterion, including zero frame loss and partial frame loss. Those tests take on the order of minutes, and we execute them on a weekly basis and also for the report, to verify repeatability. And the final one is PLRsearch, the probabilistic loss ratio search, which we use for soak testing. We use these for stateless tests and for stateful tests. Now, you may ask: why do we need all three, and why those multiple rates? Because we want to verify that the system under test, in the configuration we test, is actually behaving well and deterministically — and that's when all of those measured rates stay close to each other. That's what we call a deterministic system. MLRsearch is currently being standardized in the IETF Benchmarking Methodology Working Group. And here is a quick example of what we gain from applying MLRsearch versus a standard binary search for rate discovery: the data is from 2018, so a bit old, but we've seen a three-to-five-times (or more) reduction in test execution time, which basically allows us to execute three to five times more tests in a unit of time. So, we've talked a lot about what we're doing and how we're collecting the data.
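As a rough illustration of why a single full-precision rate discovery takes minutes, here is a toy model of the classic RFC 2544-style binary search that MLRsearch improves on. The device model and all numbers are made up; in a real test, each trial additionally runs traffic for tens of seconds, so trial count translates directly into wall-clock time.

```python
# Toy RFC 2544-style binary search for the zero-loss rate. The "device"
# is an idealized stand-in: zero loss up to a hidden capacity, total
# loss above it. Real searches pay ~30-60 s of traffic per trial.
CAPACITY = 14.7e6  # hidden "true" zero-loss rate in pps (invented)

def trial(offered_rate):
    """Measured loss ratio for one trial at offered_rate."""
    return 0.0 if offered_rate <= CAPACITY else 1.0

def binary_search(lo, hi, precision):
    """Narrow [lo, hi] around the zero-loss rate; count the trials."""
    trials = 0
    while hi - lo > precision:
        mid = (lo + hi) / 2
        trials += 1
        if trial(mid) == 0.0:
            lo = mid        # no loss: sustainable, try higher
        else:
            hi = mid        # loss: back off
    return lo, trials

rate, n = binary_search(0, 20e6, precision=10e3)
print(f"found ~{rate / 1e6:.2f} Mpps in {n} trials")
```

Eleven trials at, say, 30 seconds each is already over five minutes for one cell of the test matrix — which is what MLRsearch's 3-5x reduction attacks by reusing trial results across multiple loss-ratio goals.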
Now let's talk about what we're doing with the data and how we're using it to achieve our stated goals. As I said earlier, communicating openly and widely and showing the data is one of our guiding principles. We have two main channels for propagating and distributing the data and sharing it with the community and the industry at large. The first one is trending. We perform the trending tests and produce trending reports daily and weekly. They are fully automated — there's no human interaction involved anywhere from test execution to the data being published. What you can see in the chart on the left is a user-browsable menu, and in the center, an example of the MRR daily charts — in this case for the L2 switching test cases — with machine-based anomaly detection highlighting the progressions (green circles) and the regressions (red circles). In addition, we have a system sending notification emails listing the tests, the builds, and so on, summarizing the progressions and regressions. The charts are fully browsable: you can zoom in and actually see which build is involved in a specific regression or progression. As we have automated anomaly detection, this also gives us an opportunity to automate the bisecting of regressions and progressions — and that's exactly what we do, and that's what we use for root-cause analysis and such. The second channel is the reports. We issue a report every release; releases are always synchronized with VPP, so at this moment we have three releases a year. We show box-and-whisker graphs and run tests multiple times — at least ten times — to verify repeatability. And if any regressions do sneak through, whether due to a hardware change or an environment change (a Linux change — we mainly test on Ubuntu — or a kernel change), we capture those there.
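A drastically simplified stand-in for that kind of trend-based anomaly detection is sketched below. The real system uses more robust statistics over the full history; the window, threshold, and samples here are invented purely to show the classification idea behind the green and red circles.

```python
# Simplified anomaly classification for a daily MRR trend: compare each
# new sample against the median of the recent history and flag moves
# beyond a relative threshold. (Invented threshold and data.)
from statistics import median

def classify(history, sample, threshold=0.05):
    """Label a new sample vs the recent trend baseline."""
    base = median(history)
    change = (sample - base) / base
    if change <= -threshold:
        return "regression"    # red circle on the trend chart
    if change >= threshold:
        return "progression"   # green circle
    return "normal"

mrr = [17.5, 17.6, 17.4, 17.6, 17.5]   # made-up Mpps history
print(classify(mrr, 16.2))   # regression
print(classify(mrr, 18.6))   # progression
print(classify(mrr, 17.55))  # normal
```

Once a sample is flagged, the offending build range is known, which is what makes the automated bisection mentioned above possible.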
So before you move on, Maciek, can I just chip in a bit on this last slide? I can't overstate the value of these reports. I reach for them all the time, sometimes multiple times a day. There's a huge amount of data in here that's going to be really valuable to you, and I'd encourage you to take a hard look at these. At the top level, they give you the expected performance of a whole raft of test cases across a whole bunch of different Intel, AMD, and Arm platforms. So if you've just installed FD.io and you're running an IP forwarding scenario — IPv4, IPv6, IPsec — and you want to checkpoint your performance, you can reach for the FD.io CSIT report and see: okay, this is the performance they are getting, this is the performance I'm getting on a roughly equivalent system, and you can use that to sanity-check your installation. But you can also drill down. If you're looking to understand how the test cases themselves are configured, the configuration data is there. If you're looking to drill down even further and understand, at the graph-node level, how the system was performing — how many clock cycles each graph node was taking — that information is there as well. So there's a wealth of information here: at the high level, what your performance expectations should be in millions of packets a second and what your latency expectations should be; but also the configuration of a raft of different test cases, and a deeper drill-down into the performance of each test case. If you haven't seen these reports before, you'll be blown away by the depth and breadth of information that's there. Yeah, that's a good point, Ray. And in fact, you reminded me that folks may not be familiar with what we measure in the reports.
So we measure packet throughput using all of those different methodologies I described earlier. If you go down the menu, you can see that we also graph the multi-core speedup to verify its linearity. We measure packet latency in each direction. And we have a number of different subgroups of tests, including NFV service density, where we measure a box full of container-based CNFs or VM-based VNFs, with either a DPDK application or VPP as the workload in the container or VM. We run a number of host-stack tests at the TCP/IP level, and we recently integrated NGINX testing. We also do comparisons between the different hardware the tests run on, and between releases. Going forward, we will be providing a query interface to the database backend that will contain all of the FD.io data ever measured, from the inception of the project in 2016. We expect to go live with that towards the end of the year, and then hopefully we can do much more with it than graphs. And there's one more point, right, that I don't think we captured earlier, which is that one of the tools embedded in VPP is the perfmon plugin. Perfmon allows you to verify resource usage on a per-VPP-node basis with CPU-cycle accuracy. We have now integrated VPP perfmon with CSIT; this is its first release. In the report going out in two weeks' time, 2106, we will not be graphically presenting any of that data, but we will provide an interface for viewing it: cycles per packet per specific node, IPC and such. All of that data is now available for all CSIT tests, so we're expecting great benefits to the community. Now, one of the questions in the chat window was: is FD.io VPP being used in production environments?
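The per-node metrics mentioned here, cycles per packet and IPC, are derived from raw counters. A small sketch of the arithmetic, using hypothetical counter values rather than the real perfmon output format:

```python
def per_node_stats(clocks, instructions, packets):
    """Derive cycles-per-packet and instructions-per-cycle (IPC) for one
    graph node from raw counters sampled over the same interval."""
    if packets == 0:
        return None  # node did no work in this interval
    return {
        "cycles_per_packet": clocks / packets,
        "instructions_per_cycle": instructions / clocks if clocks else 0.0,
    }
```

For example, a node that spent 2,000,000 cycles retiring 5,000,000 instructions while handling 10,000 packets works out to 200 cycles/packet at an IPC of 2.5, which is the kind of per-node figure the 2106 report exposes.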
And the breadth and depth of the testing that you see here, that we do on FD.io VPP, is what allows us to have confidence in the production grade of this software, and it's also what allows us to hit that terabit. You can't lose performance patch by patch, release by release, and still get to terabit-grade performance. This CSIT infrastructure is driving that performance culture and allowing us to build production-grade software. And yes, FD.io is used in production, and this is the testing, this is the benchmarking, that's backing that up. So finally, we're going to talk about collaboration, right? Yep. So we showed the baked cake earlier, and we've walked you through quite a lot of the detail of how we bake the cake. Really, we were reacting to a strong industry pull for high-performance, secure connectivity; we hope FD.io responds to the trends that are out there in the industry and the community at the moment. And we were building on top of a rich new set of accelerations available in the third-generation Intel Xeon Scalable processor. But we were also starting from a great base, and I'd encourage people to look at the previous webinar from our colleagues at Netgate, who also build technology with FD.io, and from Cisco and Intel, where they dig deeper into the guts of how the FD.io VPP IPsec implementation actually works at a more granular level. The message there is that we were starting from a great base: the FD.io VPP IPsec stack has been years in development and is used in a whole bunch of different software today. So we were starting from a great base, we added a great platform, and we were responding to a strong industry pull. And then there's the video.
We include the link to the video; we don't have time to play it now, I apologize, but it basically shows us walking through the steps and the setup of the terabit IPsec platform, and shows the platform sending and receiving a terabit of traffic, with that traffic being encrypted along the way. All of this data is going to be included in the next CSIT report. We spent a lot of time talking about the reports on the previous slide, and the 2106 report is going to include a lot of that deep-dive information about executing test cases on the third-generation Intel Xeon Scalable platform. So if you're interested in how FD.io executes on Skylake, sorry, on Ice Lake, forgive me, that's all going to be contained in the FD.io 2106 report. If you're interested in reading even further into the technical detail of how the IPsec pipeline is put together, there's also a white paper from Intel, the third link there, where we dig into how the vector AES implementation was optimized with AVX-512. And again, a lot of the credit goes to Damjan Marion's input there. So there are a lot of links here that we think you should follow up on. We were responding to a pull, starting with a great platform and a great foundation in FD.io. So here are some of the results. First I'm showing you the basic uplift that you get from the Ice Lake platform, before we dig specifically into IPsec. Just with the existing test cases, IPv4 routing and IPv6 routing, you can see, generation to generation, and this is comparing Cascade Lake to Ice Lake, that is, the second-generation Xeon Scalable processor to the third-generation, you're getting maybe about a 1.2x uplift, gen on gen.
So just to repeat: without the specific optimizations we did for IPsec, standard test cases like IPv4 routing and IPv6 routing are getting a 1.2x performance uplift gen on gen from Cascade Lake to Ice Lake. But when we look at IPsec, which is the next slide, we can see a really dramatic uplift. And yep, there's the next slide. You can see the effect of the optimizations we discussed earlier, all those optimizations that we added to FD.io VPP: things like the multi-arch variants that add platform-specific optimizations, things like the native vector AES implementation that exists in FD.io, and then a whole slew of other optimizations done with AVX-512, also available in FD.io VPP today. All of that is pulled together in one package and automatically executes out of the box on that Ice Lake platform. And this is the result: for a four-tunnel scenario, you're getting a 3.5x uplift between Skylake, the first-generation Xeon Scalable processor, and the third-generation Xeon Scalable processor. So you're getting a very dramatic uplift because of these additional optimizations that are being automatically pulled in. And it holds up reasonably well as we scale out the number of IPsec tunnels: it's 3x with 1,000 IPsec tunnels and something like 2 to 2.5x with 10,000 IPsec tunnels. So this is the kind of performance uplift that these platform-level optimizations, automatically enabled, are giving you. But most of this, sorry, all of this, was really made possible by the very strong collaboration that happens in the FD.io community. Intel has that collaboration with the FD.io community, and other vendors have it too.
Arm is there too, AMD is there too, contributing platforms and contributing optimizations. So this isn't only an Intel story; there are other platform vendors looking to optimize FD.io for their platforms and adding optimizations to FD.io for their platforms as well. We were able to tell a really good story today around IPsec, and we're delighted to be able to do that. So, to emphasize what Ray said: in many cases in the past, what we found with these compute platforms was that VPP performance was actually I/O-bound. Now that the I/O has been addressed over the last few years, especially for crypto, whether it's IPsec or TLS, we are going back to being core-bound. And we are awaiting high-core-count parts from pretty much all the vendors we have in the FD.io CSIT labs, and that's AMD, Arm and Intel. We expect to be operating very much in the terabit space on a single machine, encryption and decryption, for box-full demos running in our environment. Now, of course, one may say it's not realistic to run at terabit speeds in production, but we're really using this to demonstrate that the capabilities are there. How people package those virtual network functions and specific network operations is up to them. We do provide the per-core numbers and the multi-core speedup, and so far we've seen linear or close-to-linear speedup. We are also looking to do some box-full-like tests in CSIT once we have enough NICs and hardware. But if you do the maths based on these charts, and if you watch the YouTube videos, you will see that the terabit is very much here. Great, I think we need to jump in: the scale here is millions of packets per second, right, and the packet size is 1518 bytes, as noted. So, Ray, we have eight minutes left. I think it's time for a summary and then Q&A. I think we have a few questions that we'd like to answer live. Super, and thanks, Maciek.
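The "close to linear speedup" claim mentioned here is easy to check from published per-core numbers. A small illustrative sketch (the throughput figures in the test are made up, not CSIT results):

```python
def speedup_efficiency(throughput_by_cores):
    """Given {core_count: Mpps}, compute speedup and parallel efficiency
    relative to the single-core result. Efficiency near 1.0 across core
    counts indicates the near-linear scaling described in the talk."""
    base = throughput_by_cores[1]
    return {
        n: {"speedup": mpps / base, "efficiency": mpps / base / n}
        for n, mpps in sorted(throughput_by_cores.items())
    }
```

Multiplying a near-linear per-core rate by the core count of the upcoming high-core-count parts is exactly the back-of-the-envelope maths that puts a single machine in the terabit range.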
So, yeah, just to round out: FD.io is all about collaboration. We'd love for you to show up and engage with us, either through the mailing lists or through the monthly calls. Please reach out, please download the software, try it out, give us feedback, tell us what you think, chime in with issues. We'd love to hear from you. The two key assets we talked about today were FD.io VPP, as a toolkit to build best-in-class network functions, and FD.io CSIT, as an end-to-end open-source toolkit for testing terabit-level network functions. Those are the two key messages for you today. Yeah. So, just to add a few words from my side, a bit of a call to arms, or an appeal: if you really want to push the envelope with software networking and make the seemingly impossible happen for real, we believe that projects like FD.io are the place to be. So, we've got a couple of questions, some of which I think we can answer live. One of them is: what do you think is the FD.io advantage over XDP/eBPF, which intercepts packets earlier, in the NIC driver? And if there is a performance comparison report available, could you please share it? Thanks. So maybe I can go ahead and answer this one. Yeah, go ahead. eBPF and XDP are great projects; they're doing really interesting work in the Linux kernel. One of the advantages I'd say we have versus those projects, and it's not that one is better than the other, I very much think of them as different tools for different problems, is that FD.io VPP runs entirely in user space. So you can attach a debugger like GDB to FD.io VPP and really dig into the source code very easily and find out what's going on. And for things like failover scenarios, if FD.io VPP crashes, it doesn't crash your entire system.
So you can build more resilient, more debuggable, more traceable systems with FD.io. For me, that's very much the advantage, in addition to things like better platform optimization and better scaling. I don't like to think of it in terms of competing tools; the traceability that eBPF and XDP add to the kernel is really fantastic. I really think of them as different tools for different problems. So that's one. Maciek, maybe you'd like to answer this one? Yes, and I would actually like to take all three of the remaining questions, because they are very much in the same space. The first question is: which products in the market offer FD.io capability? And also: are you aware of any FD.io deployments in IaaS or PaaS, infrastructure-as-a-service or platform-as-a-service, products of any major public cloud providers? We are aware of VPP technology used in many deployments, including IaaS and PaaS. But your best friend here is really a search engine and the mailing lists. We encourage vendors and cloud service providers to publish their use cases for VPP. Not all of them do, though quite a few do, and you can glean who is using it by looking at the mailing lists, vpp-dev@lists.fd.io and csit-dev@lists.fd.io. So hopefully that answers the question. And the other question is: are there performance numbers available for public clouds? We did onboard the CSIT environment onto AWS, on the c5n instances specifically. And we will be publishing numbers from running a subset of the tests on AWS c5n instances in the context of the CSIT 2106 release, probably not in two weeks' time, but in one of the maintenance releases. So hopefully the community and the people here will find that useful. Okay, I think that's more or less us, unless there are any other questions. I suppose we should round it up with a thank you. Thank you for attending. We're available through the contact means below.
You can find our website at fd.io, you can get the source code from git.fd.io, and we have a Twitter channel you can follow. I'm an old-school guy, I prefer more old-school methods, so we also have public mailing lists you can reach us on; you can join them via fd.io. I wonder, I think perhaps another question has come in. No, I think we're done. Well, let's give it a moment in case any more questions come up, or in case there are questions that someone wants to ask live. I don't know, Jill, whether we have that capability here. Yeah, if somebody's got a question you want to ask, just hit the raise-hand feature and we'll unmute you. And as a reminder, the recording will be available in the next few days, and we will also be determining the winners of the FD.io puppies. So stay tuned for that as well. It's very exciting, I know. So this is for the best, or the most difficult, or the most unusual question, I guess, whatever the criteria turn out to be. Yes, all of the above. Exactly. Okay. All right, folks, if there are no more questions, then thanks very much again. Thank you very much. Thank you for engaging with the community and with us. Thanks very much. Thank you very much. Thanks everyone, bye-bye.