So next we have Emma Foley talking about testing infrastructure and services in OPNFV.

Thanks, Ray. I'm not talking just about testing your infrastructure and services in OPNFV. The point of this talk is actually to show you how you can use the OPNFV test tools to test your own infrastructure, regardless of whether you're already a user, whether you deploy in really large environments, or whether you're just interested in benchmarking small components. There are typically two goals to this talk. First, to create awareness of the OPNFV test tools. We want to target users who are already looking at NFV deployments, but also users who are interested in other kinds of use cases outside the NFV domain, because we think these tools are actually quite useful even outside NFV. Second, to try to trigger discussion about the evolution of these test tools: to find out how they can evolve and how they can be modified to meet existing use cases, and also to address some emerging ones, for example cloud-native edge computing, perhaps things like enterprise or IoT, and definitely a lot in the SDN domain.

For anyone who's not familiar with OPNFV, it's the Open Platform for Network Functions Virtualization. It's a Linux Foundation Networking project, and it provides system-level integration, deployment and testing. Basically, what we do is produce a reference platform for NFV, and also produce requirements for other upstream projects, extending those with features that make them more NFV-friendly. So it works something like this: we provide requirements and features into other upstream projects like OpenStack or OVS, then we implement those features and update the installation tools to actually be able to configure and deploy them, and once we have that deployment created and deployed, we take our tools and run various functional and non-functional tests against it. Typically the tests will cover all kinds of resources in the stack.
So we're talking about VNFs, the MANO layer, the VIM layer, and also the data plane. And as part of the OPNFV community labs, there's a lot of donated hardware in the different Pharos labs around the world, which allows you to reserve some time and actually use community hardware to test these features, or to help develop your features in upstream projects.

In terms of test tools, our test ecosystem looks a little bit like this. We have functional testing in the form of Functest, and then a bunch of performance and benchmarking testing split up across these other projects. One use case for these is actually the OPNFV compliance verification program, known as Dovetail, which consumes tests from all of these components and lets you run a test suite against your deployment to know whether or not you actually meet the requirements to run certain NFV workloads.

So I'm going to start off with Functest. This is our functional testing tool, and what Functest does is let you run functional tests against your VIM, which may be Kubernetes or it may be OpenStack. Functest doesn't actually provide a lot that's new; what it does is give you one tool that runs tests across the multiple standard OpenStack or Kubernetes test tools that are in common use. So for example, for OpenStack the Functest suite consists of a lot of Tempest, Rally and Shaker tests, and a bunch of others as well. It just gives you one source, one interface, to actually test these systems. There is also some VNF testing in there, and in terms of Kubernetes it at the moment supports, I think, one or two standard sets of tests. Upstream in the OPNFV CI this is run against our daily builds, and it can be run in your deployment anywhere from patch verification all the way up to nightly builds and release gating.
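To make the "one interface" point above concrete, a typical Functest run is just a container invocation. This is a minimal sketch on my part; the image name, the environment variables and the test-case name are assumptions, not something stated in the talk, so check the Functest documentation for your release before copying it.

```shell
# Hypothetical Functest smoke run; image tag, env vars and test-case
# name are assumptions -- consult the Functest docs for exact values.
cat > env <<'EOF'
EXTERNAL_NETWORK=public
DEPLOY_SCENARIO=os-nosdn-nofeature-noha
EOF

# Mount your OpenStack credentials and run a Tempest smoke case
# through Functest's single entry point.
docker run --rm --env-file env \
  -v "$HOME/openrc:/home/opnfv/functest/conf/env_file" \
  opnfv/functest-smoke run_tests -t tempest_smoke
```

The value here is that the same entry point drives Tempest, Rally, Shaker and the rest, so your CI only needs to learn one invocation.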
In terms of how it's packaged, if you want to use Functest you can pull down a Docker container and get started immediately, and there is some very, very comprehensive documentation available on the wiki. Let's see if I've covered everything. Does anybody have any questions at this stage about Functest? Comments? No? Okay.

Next up is Yardstick. Yardstick is the infrastructure verification project within OPNFV, and it consists of two parts: there is a set of test cases that it actually runs, and there is the Yardstick framework, which is the part that you can extend to meet your own use cases. In terms of the framework, Yardstick provides a bunch of test functions, such as scenarios that let you customize your test steps, and if you want to integrate some additional test tools, this is the point of extension. It supports running tests in multiple different contexts, so you can run it against a Kubernetes or an OpenStack deployment and have those provision your environment for you, or you can say, "hey Yardstick, I have some hardware over here, this is where you run the tests." There's also a number of test runners that let you define how a test is actually run and when to collect the metrics during the test. Yardstick is a benchmarking tool, but it can also be used to do some functional testing as well. It provides the ability to define service level agreements, or SLAs, to determine whether or not your system passes or fails, and it supports reporting of results through InfluxDB, as JSON written to a file, or over HTTP if you want to integrate it into your own systems. The second part of Yardstick is the test cases. These are existing test cases defined in YAML, and we make use of Jinja templating so that you can take existing test cases and modify them, editing the parameters to meet your testing requirements.
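To make the "YAML test cases plus Jinja templating" point concrete, here is a sketch of what a Yardstick task file looks like. The field names approximate the real schema from memory and the parameter values are illustrative, so treat this as a shape rather than a verbatim sample; the SLA block is what gives you the pass/fail decision mentioned above.

```yaml
# Sketch of a Yardstick task file; field names approximate the schema.
schema: "yardstick:task:0.1"
scenarios:
- type: Ping
  options:
    packetsize: {{ packetsize | default(100) }}   # Jinja-templated parameter
  host: athena.demo
  target: ares.demo
  runner:
    type: Duration        # one of the runners: repeat for a fixed time
    duration: 60
    interval: 1
  sla:
    max_rtt: 10           # service level agreement: flag if RTT exceeds 10 ms
    action: monitor
context:                  # Yardstick provisions this environment for you
  name: demo
  image: cirros-0.4.0
  flavor: yardstick-flavor
  servers:
    athena: {}
    ares: {}
```

Because the file is templated, the same test case can be re-run with different packet sizes or durations just by passing different task arguments.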
The same as Functest, it's packaged in a Docker container, so you can pull that down and use it, and we collect performance metrics using Yardstick. There's also the ability to use collectd via Barometer, which is OPNFV's set of extensions for collecting NFVI metrics, while you're actually doing testing on different parts of your system. And there are some already existing use cases and scenarios that we already test, so if you want to pull down Yardstick now and use it out of the box, you can do things like high availability test cases, which basically say, "hey Yardstick, start taking down services, I want to see how long it takes to recover." It also integrates a number of the other test tools in OPNFV, so StorPerf and VSPERF have some integration there, and you can use Yardstick to run those tools as well. And it also supports NFVI and VNF characterization, which provides automation on top of a number of traffic generators, and also supports a bunch of reference VNFs, and commercial VNFs as well if you want to do characterization of those. That's Yardstick. Does anybody have any questions at this stage? Okay.

So next up is the Bottlenecks project, which does system limit testing. Bottlenecks aims to identify the bottlenecks in your performance. It does this by consuming test cases from some of the other test tools, so it'll consume some Functest test cases, Yardstick test cases, and also some NFVbench test cases, and it will adjust the parameters of those to design some very stressful tests. You can then think of Bottlenecks as a scheduler, because it'll run those tests multiple times, either overlapping or over long periods of time.
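The "consume a test case and stress it" workflow above also boils down to a container run. The image name, the in-container path and the POSCA test-case name below are my assumptions from memory of the Bottlenecks docs, not from the talk, so verify them on the Bottlenecks wiki first.

```shell
# Hypothetical Bottlenecks run; image, path and test-case names are
# assumptions -- check the Bottlenecks wiki for the real ones.
docker pull opnfv/bottlenecks:latest
docker run -d --name bottlenecks \
  -v /var/run/docker.sock:/var/run/docker.sock \
  opnfv/bottlenecks:latest

# Launch a POSCA stress case that re-runs a ping scenario while
# ramping its parameters to find the system limit.
docker exec bottlenecks bash -c \
  "cd /home/opnfv/bottlenecks && python testsuites/run_testsuite.py testcase posca_factor_ping"
```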
There's a lot of flexibility there, and it will collect metrics while it's doing this from Barometer and Prometheus. So if you're using, for example, OpenStack, you can also pull a bunch of metrics about how the OpenStack services are doing, as well as seeing what's happening with your infrastructure and getting metrics from things like DPDK or OVS, or memory, CPU and other system resources. Bottlenecks integrates this into a Grafana-based dashboard, which you can actually use to quickly pinpoint the pinch points in your system. It collects metrics and also produces a pass/fail output. You can use this from a Docker container as well; that's the way most of the test tools are packaged at the moment. In terms of extensibility, you can add more test cases, or you can modify the test cases that are there to meet your own requirements, and if you have other test tools it's a fairly straightforward process to integrate those into Bottlenecks and use it for system limit testing. Does anyone have any questions about Bottlenecks? Moving on. Oh, Chris has a question. Can we go and have a look at how certain things run? There is a dashboard.
We cannot go look at it now, but if you go to the Bottlenecks wiki, which is part of the OPNFV wiki, they provide a list of instructions, including how to use the dashboard, and I believe there are a few demo videos as well.

So next up is VSPERF. This is one of the two tools we have for NFVI data plane performance, and VSPERF is used for optimizing the performance of your vSwitch. It lets you test out different configurations in your environment to determine the best configuration for your vSwitch. This is done before you start deploying any kind of VIM on top of it; this is base-level performance tuning for vSwitches. There's a number of vSwitches supported, and you can use VSPERF with a number of different traffic generators to actually do the load testing on the vSwitch. It's generally run in a pre-production environment, and it has actually been responsible for the development of two new standards that came from its implementation. So there is a vSwitch performance benchmarking specification, which is an informational RFC, there's also a new one that's under review at the moment, and there is ETSI TST009, which covers benchmarking your vSwitch while also taking into account intermittent failures that may happen in your system but are not constantly there. Excuse me. In terms of traffic generator support, you can run your test cases with Ixia, Spirent, TRex, MoonGen, Xena and, I think, two more, but it has very comprehensive support, so you can use whatever traffic generator you want with it. In terms of usage, VSPERF lets you choose your vSwitch, your traffic generator and the VNF that you want to use in your test case, and your test case can be simple phy-to-phy benchmarking, physical-to-virtual, or physical-virtual-physical, what some people call the bunny ears test, or routing through two VMs. But if you want to extend it, you can support multiple different deployment options in VSPERF, so you're not limited to the set that's there; you can define other
traffic routes or service function chaining, and use different VNFs as well. Does anybody have any questions at this point? No? Okay.

Next up is NFVbench, and this is slightly different from the other tools in that it's designed to be used closer to production. It's used with OpenStack systems and does full-stack testing of your network performance, all the way through your system. It's designed to be used at a very late pre-production stage, so day-zero capacity testing, but NFVbench is also used when you're considering extending your deployment, for capacity planning. It's distributed in a container which contains NFVbench itself and also the TRex traffic generator, so you pop this onto your system and you can do day-zero baselining, or data plane performance monitoring for each deployment, like when you're verifying a new compute node, or you can use it, as I said, for capacity planning as well. For extensibility, you can modify the parameters of your test run, and I don't know if I have anything else on this, because it's designed to be used closer to production. NFVbench runs something similar to RFC 2544 but collects a few additional metrics, including your partial drop rate, and that's something you can configure during a test run as well: what kind of loss you're actually willing to tolerate. So, almost there. Does anyone have any questions on NFVbench? The question is whether everything is installed in the Docker image. Yes, and you can do additional configuration, but as with most of the tools, it's good to go when you pull down the Docker container. You may have to provide credentials and locations for your OpenStack deployment, but if you know nothing about the project you can just drop it in and start quickly.

Okay, next up is StorPerf, which measures the performance of block storage and ephemeral storage at a VM level, and this is a
pre-production tool. The purpose of StorPerf is to report on storage performance after all the virtualization layers are taken into account. So a typical test run with StorPerf involves spinning up a VM and running some very stressful tests to test the limits of your storage. It takes continuous readings of your storage performance over time, and it reports once a steady state has been reached. In terms of extensibility, you can modify the parameters, such as what you define as steady state, or how long it's supposed to run for, and if you do end up running it and you don't reach a steady state, it lets you know that as well. StorPerf is distributed in a container, and it's currently following a continuous release model, which is distinct from the rest of OPNFV, so whatever hits master is usually good to go. We talked about extensibility. Does anyone have any questions on StorPerf? Fantastic, okay.

So the last thing I'll talk about is Dovetail, which is the OPNFV compliance verification program. It allows you to verify that your commercial platform is actually OPNFV Verified, that it meets the requirements they've defined for OPNFV readiness. It is usually released mid-cycle; there's a three-month delay between the OPNFV release and the Dovetail release, and the latest one was September, so we're due an update. Dovetail takes a bunch of test cases that have been defined by Functest and Yardstick, and perhaps a few others, runs them, and reports on whether or not your platform has passed. For the moment they're focusing just on functional tests, because when you're talking about commercial platforms, performance tests are a little bit controversial when you're trying to define a standard platform. There is also work on addressing additional use cases, but that's not something we're going to cover here. So, does anybody have any questions about all of this? The question is: what are the platforms that are
already verified. I don't have the answer to that, but there is an OVP website you can go to, the OPNFV Verified Program website, where you can see the results, because people will take Dovetail, run it themselves and upload the results, and they'll be reviewed by community members before being published. The next question is whether we already work together with the other testing projects. Well, we're all part of LFN, and as part of that ongoing initiative we are looking at areas where we can work together, combine and consolidate, but that's an ongoing process. Are we using TRex as a tool in some of the testing? Yes, Yardstick supports TRex, I believe, and so does NFVbench, and so does VSPERF, so it's integrated at least there. How much overlap is there in the tests? I don't know exactly; the CSIT guys and the NFVbench guys do work closely together, but I think outside of that, not very much. Thank you, everyone.