Let's see if we can start. So hello, everyone, and welcome. We're going to give you a quick tour of OPNFV's Brahmaputra release, starting with a very quick introduction to what OPNFV is, because it's still a little unclear to some. Then a bit of a walkthrough of what it is that we do, where we're going, and where we're headed, with a focus on how we've been working with OpenStack, how we continue to work with OpenStack moving forward, and the roles that I think each community can play for the other. My name's Chris Price. I work at Ericsson, and I'm involved in OPNFV in various projects and on the technical steering committee. So I'm Frank, and I'm on the TSC as well. And I think the first thing that we want to address is really what is OPNFV, because there's a load of confusion still around what OPNFV does and what OPNFV doesn't really do. So first things first, we are trying to do NFV for real. And that means we're going to try to build the entire ETSI NFV stack that got laid out by ETSI, but somebody's got to pull these things together. Somebody's got to integrate that. And once you integrate it, well, you've got to do that on an ongoing basis, because all these individual projects move. So there's no point in time where you say, well, we're done. Exactly. And it's not just about putting it together and hoping that that's where you need to be. You need to iterate. We need to come back. We need to look at what OpenStack is doing and bring that in and provide the capabilities for the platform. So for us, it's very much a process: with each iteration we come back with new features and new capabilities, we come back with new components in the platform, and we try to establish this NFV cloud that we're trying to produce. Yeah, so if you look at the cartoon version of what OPNFV does, on the left-hand side you have the very abstract picture of what NFV is, right? So it's a little bit of compute virtualization and control, storage virtualization and control, network virtualization and control on top of a physical infrastructure. That's, by the way, one delta between what a typical OpenStack deployment looks like and what NFV is about, because we are about performance. We are about getting packets shifted. So it's not the usual situation where you don't really care what you're running on; you do care what you're running on. And building the entire picture means a couple of things you need to do. You need to integrate a bunch of components. But you also have to test them in an ongoing way. And when you test, you find out that there are certain things missing. Even at the requirements level, you're finding out that there are certain things missing. So we ended up having three pillars, right? Yeah. The integration pillar, of course. That's the one where we compose, where we bring things together, where we plug things together. The testing pillar, where we make sure it works. And we pull from upstream. So when we do tests, we're pulling tests from OpenStack, of course, a lot of tests; pulling tests from OpenDaylight; pulling tests from the ONOS project. And we're also building our own tests. We build our own end-to-end types of tests. We have tests for VNF onboarding, and tests for platform failure scenarios.
We have tests which will then give you latency figures on how quickly you can bring features up on the platform, how quickly you can upgrade those features or take those features away, and measurements for how quickly traffic is running through. In addition, new features are very important for us. There are things that we want in NFV that we don't traditionally get from a cloud-centric view. We want to see peering using BGP from one data center to the other. And we want to see, you know, SLA-managed connectivity services to an edge device where we're going to want to put some sort of a workload at some point in time. We want to see these types of features coming through, and we spend a lot of effort on bringing that forward. Let's understand, what is OPNFV and why is it a little different? So why did we start OPNFV as a separate project, even? Why didn't we put it under OpenStack? Or why didn't we put it under OpenDaylight? Or why didn't we put it under, ah, where do you put it, right? It has multiple homes, it has multiple upstream sources. So if you do systems integration as a community effort, well, you've got to find a home, and there is no natural home, so we created a home for it. And there is another thing that people typically moan about. Where is the end consumer? Where is the end user? How does the end user really get influence over what happens? Usually you form a project and then you create a user group. So you have the inner circle and then you have the user group. You build something, you wait for someone to try and sell it, and then you have users. I mean, that's the traditional open source way: build it and they will come and use it. But in OPNFV right now, we do have, quote unquote, end users. There are the AT&Ts, there are the NTT DoCoMos, there's the... Orange. Yes, the Oranges, yeah. They're part of the project. They're helping stand the whole thing up, they help test it, they help code the stuff. So a different level of participation certainly happened by making them part of the party as opposed to a user group. And we're not only consuming, by the way, right? No, we participate. We're also trying to fix these things, and you identified some of those things already. So yeah, certain projects, like OpenDaylight with service function chaining, have a problem: do they have a deployment environment where they can go and test the thing and find out whether it really works at system level? They don't. So, well, we have a system project in OPNFV now that deploys SFC for a living and creates a load of test cases at system level so that we can go and test it. So overall, I think this creates, or slowly starts to create, hopefully, an ecosystem where we bring all the individual components together. So this is the fireside where people gather from the various areas. Exactly. I think one of the things to remember about OPNFV is that we're not our own community, we're your community. That's something that we've tried to be from the outset. You won't go to OPNFV and find 10,000 lines of code in OPNFV, because that's not where we're gonna be writing code. If we wanted to build OpenStack, we would come to OpenStack to do that, right? If we wanna work in OpenDaylight, we go to OpenDaylight to do that. We don't try and keep things to ourselves.
So it's kind of fun to try and articulate the value of what OPNFV is to people who say, well, where's the code? And it's like, well, it's in OpenStack. It's in OpenDaylight. It's in OVS. And that's a nice transition into the next thing. So in many cases, OPNFV is upstream. We drive change and we work actively in upstream projects, like OpenStack. And OpenStack is one upstream. But there are other upstreams, like the recently launched FD.io, building another fast forwarder, or the fast forwarder, maybe. We need components there, and changes there, in order to build a full stack of OpenStack, OpenDaylight, FD.io VPP. These guys need to change. OpenDaylight needs to change and do the integration more properly. So we've driven changes across the board, but we drive them upstream. So the principle is always upstream first, right? Yep, but we're also downstream. We're also downstream, and a load of the things that we do is downstream. Exactly, we pull. So once we've gone upstream, once we've built the capabilities we want in FD.io and in OpenDaylight and in OpenStack, and we have this new forwarder that's enabling us to do VPN-level service chaining solutions or whatever it's gonna be, we bring it back, we compose it, we deploy it, we test it at scale, we test it with resiliency, with redundancy. We will run a service function chaining solution and then we'll just start to pull things, killing processes, making sure that the thing still works and that we can still handle our application. And we compose, deploy, test; compose, deploy, test, for a number of different scenarios. If I'm gonna use ONOS, I'm not gonna be using OpenDaylight at the same time, so I have to be able to test and verify both of them, and I have to hold them to the same rigorous set of expectations, so that I know I can use this component or that component and I can trust that they're gonna fulfill the use cases that we have. And we do the compose, deploy, test iteratively, over and over again. Yeah, and given that we're both, some people say, you're not upstream, you're not downstream, so where are you? Maybe we're midstream. So there's the create portion; that is a significant portion of the work, and maybe 50% of the people in OPNFV are focused on creating and getting things done upstream. And in many cases, getting things done upstream is a teamwork function, because your voice is louder and more easily heard if you team up. I typically compare that to crying babies: ignoring one crying baby is easy, ignoring a room full of crying babies is really, really hard. So the create portion is one portion, the compose portion is another one, and given that we wanna build a system, we brought that together in OPNFV. So hopefully that clarifies a little what OPNFV does, and if we wanna sum it up in one sentence, that's probably it, right? Systems integration as an open community effort. Very much. And we're not just an integrator; we didn't come here just to say we're going to plug stuff together. We came here to actually articulate needs, look at NFV use cases, and work upstream. But we don't feel that we own things: we don't own the code in OpenStack, we don't own the code in the Linux kernel, we don't own the code in those places. What we do own, I guess, is the integration of them. So if we need to find ourselves an identity, something that explains to people what OPNFV is: systems integration as a community effort.
So I had a lunchtime conversation with somebody and they said, well, so what are you guys really doing, and how would I get started? Because I have something in mind, a virtual EPC; how does that apply to you guys? And we keep coming back to the very same question: what do you run on? And then, yeah, well, maybe you run on an assembly of OpenStack, OpenDaylight, KVM, OVS, but somebody else says, no, no, I wanna use FD.io as a forwarder, or I have my private version of OVS that is DPDK-enhanced. Well, somebody needs to pull that together, and that leads us to: is there one OPNFV? Are there two OPNFVs? How do we deal with the diversity, that somebody wants something different, that I might wanna do something different than you, right? So how do we deal with that? And not only that, you also have additions and enhancements. When you enhance a component, all of a sudden its behavior changes. So what you had is no longer what you have, and you may have wanted that enhancement or may not have wanted it. So it's not just I'm using ONOS or I'm using OpenDaylight, it's also I'm using OpenDaylight and I'm loading in these features that I need to be able to use in the platform. And to try and articulate how that comes through, Brahmaputra was a breakthrough release for us. In the Arno release, we built our CI/CD pipeline and put our first platform together, and in Brahmaputra we basically said, right, everyone, let's all jump on board and let's do this. And we ended up with more than 20 different platforms that we needed to be able to deploy over more than 10 different physical infrastructures across the globe. And that put us in a bit of a twist. We didn't really know how to do this, we didn't know how to plan this; we basically set off on the path of trying to prove that we can run our platform, and not just one platform, any flavor of it, on any hardware. And we didn't have a process. So it led us to defining a new terminology, a new way of expressing things: the scenario. Exactly. So we call this assembly, this mixed bag of things, your choice of a mixed bag of things that you pull together, a scenario, which is a deployment of a set of components and their configuration. So it's like Lego blocks. OpenStack is a Lego block, or consists of a bunch of Lego blocks. There is a bunch of Lego blocks for SDN controllers, there is a bunch of Lego blocks for virtual forwarders, and so forth. So you can piece it together, and from the Lego blocks you can build a house, but you can also build a Millennium Falcon, right? It's up to you how you assemble the blocks. So what OPNFV does is assemble things that are of interest. Some people are interested in houses, other people are interested in Millennium Falcons. So if there is a community of interest, we'll assemble it for you. Exactly.
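To make the Lego-block idea concrete, here is a minimal sketch in Python of what a scenario boils down to: a named composition of components plus their configuration. The field values and the naming helper are illustrative assumptions modeled on the Brahmaputra-era scenario names such as os-odl_l2-sfc-ha, not code taken from the OPNFV repositories.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Scenario:
    """A scenario: a deployable composition of components and their configuration.

    Values below are illustrative; Brahmaputra scenario names follow the
    rough pattern os-<controller>-<feature>-<ha|noha>.
    """
    vim: str                 # e.g. "openstack-liberty"
    sdn_controller: str      # e.g. "odl_l2", "onos", "nosdn"
    forwarder: str           # e.g. "ovs", "ovs-dpdk", "fdio-vpp"
    features: List[str] = field(default_factory=list)  # e.g. ["sfc", "bgpvpn"]
    ha: bool = True

    @property
    def name(self) -> str:
        feat = "-".join(self.features) if self.features else "nofeature"
        mode = "ha" if self.ha else "noha"
        return f"os-{self.sdn_controller}-{feat}-{mode}"


# The "house" and the "Millennium Falcon": same blocks, different assemblies.
plain_cloud = Scenario(vim="openstack-liberty", sdn_controller="nosdn", forwarder="ovs")
sfc_cloud = Scenario(vim="openstack-liberty", sdn_controller="odl_l2",
                     forwarder="ovs", features=["sfc"])

print(plain_cloud.name)  # os-nosdn-nofeature-ha
print(sfc_cloud.name)    # os-odl_l2-sfc-ha
```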
And it comes back to the question that the virtual EPC guy asked: how do I get started? Well, just get started by articulating what you need in a platform. What do you want in your platform in order to be able to run your EPC? And then we can come in and have a look: do we have scenarios which provide those capabilities? Do we have integration points for you to start to work towards the platform from your virtual EPC solution, or not? If not, then where are the gaps? What is it that we need to try and solve here? And then we go upstream, we solve those problems, we bring it back, and then we have the scenario that's going to support the virtual EPC solution. That's more or less the process that we will try to continue to work with, looking at new use cases, looking at new network deployment solutions, and we'll have a bunch of scenarios. And the challenge we have is that the scenarios grow. So we went from one simple scenario in Arno to 24 scenarios in Brahmaputra. What we want to do now that we have 24 scenarios is converge those back. So the 24 scenarios we have, which all provide different features or capabilities, we actually want to start to normalize and bring down to a smaller subset. So maybe those 24 scenarios end up being six, something like that, that provide the full capability in a more controlled and normalized way. But at the same time that we've converged down to six, people are coming in and adding more on the end. So we're always going to end up with quite a large number of scenarios. And you can sort of see it as a compressing process whereby the base scenarios become more and more feature-rich, and we keep adding these features on the back end and figuring out how to normalize them, how to provide APIs the industry can start to work with, how to provide solutions where I can deploy a scenario and either run my IMS system, my EPC system, or do remote workload management, whatever it's going to be, with these scenarios as we mature as a project. Yeah, exactly. So I do believe that right now we have these 24, and maybe we can trim them down to six. Maybe not. But there will be new ones, right? So there will always be these things that are more for the masses, catering to more people, but then somebody will stand up something that is maybe just for the EPC guy, right? And as he learns how the components work together, he'll probably also learn how to be part of the bigger picture. So hopefully there is this kind of gravity that the larger scenarios create, so that we have the diversity but at the same time we can deliver something that is more of a system, a tighter integrated system, with more diversity and more capabilities. And that leads us to another thing, right? So a scenario is a system, right? But does it really work? You've got to deploy it and you've got to test it. You've got to deploy it on a number of different platform types. You've got to test it a lot of times. Brahmaputra was a breakthrough for us in another way. We basically stated that, okay, if you're going to have a scenario you need to be able to run it, deploy it four times in a row, run an IMS system on it, run, what would it be, approximately 12,000 tests against that thing, and make sure that it stands up each time. So a scenario, to make it through Brahmaputra, had to be deployable from scratch on bare metal at least four times in a row, with a little over 2,000 tests each time pummeling it to make sure that it met our standard, if you like. And it's deploy and test, and recycle, and rinse and repeat. In Brahmaputra we didn't quite get to the level where we could hit any given lab with any given scenario. There are configuration and software dependencies that cause complications. So we don't have the ability at this point in time to say, okay, I'm going to hit this scenario in China on this Dell stuff, and then I'm going to go down to California and hit the Cisco.
And then I'm going to Montreal to hit the HP stuff, and I'm just going to see that it's running smoothly in all of these places. There are some issues with switching and things that we work through, iteration by iteration, to normalize how it is that we work with the hardware and the configurations, in order to get the platform deploying consistently across the board. And I think one great example of this kind of system-level testing is the Yardstick project, which we want to briefly highlight here. And I think it's a very, very good citizen because it interacts with upstream, and there is another upstream community here. It's not even open source. It's standards. It's ETSI. I think that's one big upstream community that we care about quite a bit. And they laid out... And the IETF. And the IETF, yes, it should have been listed up there as well. And they came up with a methodology to do system-level testing, so that if you are a test project, and the SFC guys are a very good example of that, they said, well, we need to do system-level testing of our service function chaining. Well, do you really wanna stand up your own framework for that, or build something and hack something up in Python? No, you don't. You wanna have a system set up where you can just insert the things that you wanna run in addition to the basic tests that are already running. And what does Yardstick do, in a nutshell? Very simple things. You're setting up a couple of VMs and then they're pinging each other, or they're running iperf between each other, in order to get a standard understanding of performance and the like. So they're starting off doing distributed things that you typically wouldn't do if you're just doing component or unit testing. And so they're giving you system-level feedback. And if we're doing this continuously, you get system-level testing feedback every 24 hours. And the dream that we have is that you push a feature into OpenStack and 24 hours later you know whether it breaks something somewhere else at system level or not. That's the dream, right? I think Yardstick is a great example of being able to compose a system and test it as well. The Yardstick test cases for SFC: if you wanna test SFC in OPNFV, you will deploy your OpenStack solution, you'll have a specific OVS which supports the SFC capabilities, you'll have your OpenDaylight with the SFC features loaded, and then you'll have Tacker on top. And then Yardstick is gonna call Tacker. It's gonna say, hey, Tacker, set this up. Here's a composition that I want you to build into the system. And it's gonna deploy that from Tacker through OpenStack into the controllers, down to the network, bring up the VMs, and then just make sure that everything's still running. And to be able to do that, I mean, that's not something that's very easy to do. By using this common framework that is able to compose all of these things very easily, writing that test case just becomes a question of: can I describe what I want to test, and then can I push some traffic into it? And if I can, then I can run an end-to-end test case which is actually using a complex composed system, and that is, I would say, one of the more complex network use cases that we have working at the moment.
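To give a feel for what one of these system-level checks looks like, here is a rough sketch of a Yardstick-style task: two VMs in a context, a ping measurement between them, and a simple SLA. It is written as a Python dict dumped to YAML; the key names approximate Yardstick's task format but are illustrative rather than copied from the project, so treat it as a shape, not the authoritative schema.

```python
import yaml  # PyYAML

# Rough approximation of a Yardstick task: boot two VMs in a context and
# measure round-trip time between them against an SLA. Key names are
# illustrative; consult the Yardstick sample tasks for the real schema.
task = {
    "schema": "yardstick:task:0.1",
    "scenarios": [
        {
            "type": "Ping",
            "options": {"packetsize": 100},
            "host": "client.demo",
            "target": "server.demo",
            "runner": {"type": "Duration", "duration": 60, "interval": 1},
            "sla": {"max_rtt": 10, "action": "monitor"},  # milliseconds
        }
    ],
    "context": {
        "name": "demo",
        "image": "cirros-0.3.3",
        "flavor": "yardstick-flavor",
        "servers": {"client": {}, "server": {}},
        "networks": {"test": {"cidr": "10.0.1.0/24"}},
    },
}

print(yaml.safe_dump(task, default_flow_style=False))
```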
And if you look at the overall thing, maybe you just go to the website, or you go to test-results.opnfv.org. All the results that they have, they put into an InfluxDB database, and you get nice Grafana visualizations on top of that. So you have a history of what worked, on which lab it worked, and how it worked out. And if you run your own stuff privately, because you said, well, I'm doing almost this, but I have something else that I want to test out, and I might even want to keep it proprietary, but I still want to compare it to what happened in the open: yeah, you can absolutely do that now. And I think it's brilliant that we're building an inventory of things that worked, how they worked, and how well they performed, so you can start to understand where you are in the bigger scheme of things. But let's shift focus a little bit, not to the composition and integration and testing piece, but to when you're missing certain things. That leads to? Trying to build new features, trying to create, trying to implement. In OPNFV we don't do it internally, right? So it's a bit of a challenge and it's a bit interesting. We have developers coming to us saying, I want to build this. And it's like, cool, OpenStack's the place for you. What are you guys doing? I don't understand why I would go to OpenStack. Well, you go to OpenStack because... and we have these conversations around where we need to build things and how we need to build things. And we have some good examples of projects that have been able to come to OPNFV, describe from an NFV perspective the types of capabilities and behaviors that they want in order to support interactions with management and orchestration suites or with different networking components, and then go upstream and articulate what they need in the different components in order to achieve those use cases. Yeah, and I think one great example is what the Doctor people have done. Doctor is about fault management and maintenance. So understanding that, well, a certain VM failed. And that's kind of useful, because if you're running, say, a set of firewalls, then somebody needs to tell you, well, that firewall went down, so I can flip over to another instance; or, if I flipped over, I wanna make sure that I can bring up yet another instance from an orchestration perspective and maybe even fail back to the main instance again. So for that, the guys in OPNFV said, well, this is not really fully there in OpenStack. We need this alert to be sent up. We need a proper API. We also need a proper API to bring certain instances down on demand. And what they could have done is say, well, let's fork, do it in OPNFV, I have my own repo. And no, they went upstream. They went upstream, right. A number of blueprints; you can sort of see them listed, the different things. And when you come with a blueprint to OpenStack, you guys are like, what does that do? What is the purpose of that? So there was this concept of creating the use cases, then linking that into the blueprints you needed, and then composing those blueprints into a workable solution. And it starts by fixing things in the components that we have. And you can see here, the Ceilometer component got updated; there was actually some re-architecture that went on there in order to make this work effectively. And the Nova component got updated.
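As a rough illustration of the kind of capability the Doctor work drives upstream, the sketch below registers an event alarm with Aodh so that a fault-management consumer gets called back when an instance reports an error state. The endpoint, token handling, trait names and consumer URL are simplified assumptions for illustration; the real Doctor flow also involves an inspector component and goes through Keystone for authentication.

```python
import json
import urllib.request

# Illustrative only: register an Aodh "event" alarm so that a fault-management
# consumer is notified when an instance transitions to an error state.
# URL, token and trait names are placeholder assumptions.
AODH_URL = "http://controller:8042/v2/alarms"
TOKEN = "<keystone-token>"

alarm = {
    "name": "instance-error-notification",
    "type": "event",
    "event_rule": {
        "event_type": "compute.instance.update",
        "query": [
            {"field": "traits.state", "op": "eq", "value": "error"},
        ],
    },
    # Where Aodh should POST when the alarm fires, e.g. a Doctor-style consumer.
    "alarm_actions": ["http://consumer.example.com:12346/failure"],
    "severity": "critical",
}

req = urllib.request.Request(
    AODH_URL,
    data=json.dumps(alarm).encode("utf-8"),
    headers={"Content-Type": "application/json", "X-Auth-Token": TOKEN},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```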
And at the end of the day, what we wanna do is make these changes and then bring them back into the platform and redeploy them, so that we now have a platform with new pieces. Yeah, and once they came back, they said, well, we have these fixes in Aodh that we needed. Now we have them in, and now, guys, in OPNFV we still don't have the feature as such, right? I need to have it as part of the platform. So they came back with requirements to the various installers that we have: go and install it for me so that we can go and test it. So they really took the feature full circle, driving it upstream, getting it done, but not calling it done just because it was in OpenStack, because it wasn't really done when it was only in OpenStack. It needed to be deployed and tested, and they went full cycle. And I think that takes us to the next step. Before you do, go back quickly, because there was a proof point to what you just stated. I mean, their perception of done was when the virtualized infrastructure could help the application solve the problem it needed solved. And they couldn't prove that by testing it in OpenStack. They could prove that by testing it on a physical infrastructure with an automated suite that was going to validate, over and over again, that the application could be helped by these features, and helped in a timely manner that was good for the platform as a whole. So I think, yeah. Yeah, and then they've done it once, and I think the key thing is we don't do anything once. There is no such thing as done once in OPNFV. That would be useless, because the minute you're done, you're almost irrelevant, because the world moved on. So let's move with the pace of the world. That means we've got to constantly iterate. And you mentioned that already: we don't run a scenario once, test it, and call it done. We pretty much run it all the time. We hammer it all the time. I was at the ONS a few weeks back and I asked my release engineering guy, how many OpenStack clouds have we stood up this year? And he turned around and said 1,377. It's a little bit more by now, I think; we're way beyond that, we're closing in on 2,000 OpenStack clouds, though I don't have the exact count. How many people in this room have stood up 2,000 OpenStack instances so far? Anyone? Good, there had to be someone from the infra team. Of course. Yeah. Thank you. So we do this obviously in an automated manner. So who's your friend? Mr. Jenkins is your friend. And I think the key thing is this overall cycle that was already very well articulated by Doctor. So they deploy, they learn, and they're pushing things into Mitaka, they're pushing things into Newton. And that's ultimately what we want, right? Ultimately we want to deploy the latest from upstream so that upstream learns, at system level, whether the whole thing really works: whether this alerting mechanism works, whether you can fail something over here and bring up another instance there, whether this really works at system level. And I think we're getting very, very close to being able to prove that dream. I think that's one of the focuses we will have over the next six months. So today, when we're doing this, we take the OpenStack release and we deploy it with the various components. And generally what we're talking about is taking OpenStack and then iterating around OpenStack with the various components and variations of those components.
The work that we do in OpenStack generally doesn't come back to us until the next release comes out. It's very hard, because OpenStack has so much composition around it that needs to be done. One of the focuses we have, and will continue to have over the next six to 12 months, I guess, is to make sure that we are able to support third-party CI; in other words, we should be able to deploy the OpenStack mainline on our labs. This is something that we would like to achieve, because then we can come to OpenStack and say, hey, we've got these things we want to get done, and we can actually see that they solve our problems immediately, and we can then work with activities that are ongoing in the OpenStack community. A lot of what happens in OpenStack is very important to OPNFV. It doesn't start with us and it doesn't come from us, but it's important to us. There is 98% overlap between the communities at the end of the day; the differences are just that, differences. And one source of inspiration that maybe we want to share here is another interaction that we have with another community, which is OpenDaylight, where we asked them, can you do pre-builds of your release? Even though the thing might not work, just give it to us so that we can start to integrate. And we did that prior to the Beryllium release, which came out late February, and they gave us code drops in late December knowing that it wasn't working. Guess what? We started to integrate against this thing, and nowadays they're all trying to do that as early as possible, because they really saw the value. Multiple projects really saw the value, because we suddenly were able to test-drive things that they weren't able to test-drive at all, because unit tests didn't really reach far enough. And so that's an interlock that we got to work with one community, and now, for Colorado, for the next release of OPNFV, we're trying to repeat that, hopefully with a large set of people in OpenStack, with a large set of people in OVS, FD.io, what have you. I think there's another point on the slide that we want to highlight, which is that there are multiple bubbles there, right? So what are these bubbles? The multiple labs that we have, the reference systems that we have. OpenStack is putting together a reference lab now, which is great, which means that you're gonna be testing against physical infrastructure and you're gonna be able to do the things that we've been working with. I think that maybe the big difference for us is that we intentionally set out to have different labs. We intentionally set out to make life hard for ourselves when it comes to how we're gonna integrate with physical infrastructure. Can I ask a question? So I think you've stood up more than 2,000 instances of OpenStack so far. Have you stood them up on a variety of infrastructure as wide as Ericsson, Dell, Intel, Huawei, Cisco? Do we do that? Now even that lady shakes her head. We started with... she's smart, we're not, that's why. We have an edge. So I think, yeah. Well, these community labs, remember we just flipped the slide, right? We're deploying worldwide on a variety of hardware systems. And don't forget the ARM labs that we're bringing up now. Oh yeah. And I think it's part of the exercise, even of the Plugfest, right? Where we are trying to get even beyond just x86. Anything else that we wanna say there? I think, I mean, performance is really important for us. And there are a lot of reasons why we have this.
Performance is really important. We wanna be able to get the best out of the platform. We wanna be able to feed the things that make the platform perform best back upstream so that everyone has them. We also wanna be able to do that across a number of different hardware types. It's not good enough that we can run this on a Cisco system or an Ericsson blade system. It needs to run on all the systems, and it needs to run in ways where you can get the best out of each system, because we also recognize that certain hardware is best for certain use cases. It's not just the software that is good in certain conditions. So we want to be able to support and create a solution whereby I have a use case here and I need to solve it, and maybe I need ARM here because I have certain environmental conditions that require me to have an ARM system. I wanna be able to use the same platform there, because I wanna be able to deploy the same applications and I want them to feel as though it's the same environment. And there is another aspect, right? You can become a blob on this map. You don't necessarily need to go all in and say, well, I wanna be a full community lab and be fully hooked up and get jobs scheduled from OPNFV; you can just hook up to OPNFV's Jenkins system. So you create a Jenkins slave, you hook it up to the master, and we even have a recipe for that, a guide for that. And then you can get certain jobs pushed onto your system on an ongoing basis. So if you're interested in a particular scenario, and in running that scenario on an ongoing basis, even with the changes that OPNFV is applying all the time, you can absolutely do that. And I think that's another thing: you don't necessarily have to stand up all the infrastructure to participate in the infrastructure. You can become a sort of semi-official lab relatively easily. And as I said, we have a guide for that.
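If you do hook a lab up as a Jenkins slave following that guide, a small script along these lines lets you check that your node is connected and see which jobs the master exposes. It uses the python-jenkins library; the server URL, credentials and node name are placeholders, so treat this as a sketch to adapt rather than a documented OPNFV procedure.

```python
import jenkins  # pip install python-jenkins

# Placeholders: point these at the community Jenkins master and your own slave.
server = jenkins.Jenkins("https://build.opnfv.org/ci",
                         username="my-user", password="my-api-token")

node = "my-lab-slave"
info = server.get_node_info(node)
print(f"{node} offline: {info['offline']}")

# List a few of the jobs the master exposes; scenario deploy and test jobs
# for the various installers and labs show up in this list.
for job in server.get_jobs()[:10]:
    print(job["name"])
```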
Now, talking a little bit about Brahmaputra: what do we have in Brahmaputra that is exciting looking forward, not only the pipeline? I think we've chatted a lot about the pipeline. A lot of the features we have in Brahmaputra, I think, are foundational features. We spent some effort on IPv6 in order to get IPv6 into Brahmaputra, and we have IPv6 support, SFC, L3 VPN services, resource reservation type use cases, and fault management. Now, these in their own right are useful, but they don't solve the NFV problems. What we'll see in Colorado, for instance, is use cases where we start to do multi-data-center reference implementations over an IPv6 network. This is something that the guys are setting out to do now. So a couple of those little blobs you saw on the lab slide, sorry, a couple of those labs, will be connected together to run a sequence of multi-site data center use case tests with IPv6 connectivity between them. These are some of the targets we have in Colorado. Service function chaining was put in in a rudimentary fashion. It doesn't necessarily support a lot of features that people are very interested in, like multiple encapsulations through chains and things like that. These are the things being looked at in Colorado, and that then starts to really address some of the broader NFV use cases: I need to come in on a Metro Ethernet and then I want to encapsulate in VXLAN with NSH headers in the data center, and I don't want to have to set up different networks to do that. I would like to be able to articulate that through a single chain. And some of these things are going to be coming through in future releases. I think a lot of the features that we have in Brahmaputra have established scenarios, so a platform somewhere that I can iterate on; test cases, so that I can prove I haven't broken anything when I start to add new capability; and of course the ability for anyone to join. You can come in, as Frank said, come in today with a server, hook it up to our Jenkins, and you can deploy there. You can take any one of our scenarios and press a button and you'll have it installed, and you can play with it, you can break it, you can even help us fix it. Yeah, and I think some of the things that we also enabled, and you see that pretty nicely from an integration and testing perspective, is that we added a ton of diversity to what we'd been doing. Initially the Arno release was just, if we're being honest, two scenarios based on the very same set of components but with two different installers. By now you have a choice of install tool, because there isn't industry convergence on one or the other install tool. So the state of the industry is reflected here. Absolutely. Well, we're reflecting the world as opposed to trying to pick a winner. We're unable to pick a winner. We're open source. We're a pure meritocracy. So we wanna create a little bit of competition, even, so that people have choice and can pick and choose. That's what we created, and this is also what scenarios are about. So you can choose from a variety of compositions that are maybe doing a little bit of the same thing, but one is stronger at this and one is stronger at that, and to the earlier point, maybe for your EPC you're standing up a different composition, you're using a different underlying stack than the other thing. And I think we're trying to mature that in the Colorado release, even in how we're dealing with scenarios, how we're composing them, how we're releasing them. And that maybe leads us to the what's next, right? The what's next is for sure gonna be called Colorado. That's an easy one. We're spreading out over the continents now. We seem to be touring the globe, yeah. Yeah, we're traveling the globe with the rivers. So what's next in Colorado? We sort of alluded to it: more features, completion of some of the foundation work that we've done, more stability. I think a lot of the focus is gonna be on that Pharos infrastructure, making sure that when I'm going from one lab to another, whether I'm trying to run on UCS or on a Dell solution, I'm not getting caught up on issues with the switch anymore. I don't wanna be caught up trying to reconfigure the switch just because I wanna run on one lab or another. We're gonna look a lot more at alignment and normalization of those interfaces, just to make sure that we can get things running smoothly and we can start to test more freely across different infrastructures. ARM, ARM, ARM, there is a lot of focus on ARM. We have three labs coming in already, as far as I'm aware. We already have some scenarios which are running. They didn't quite make the Brahmaputra release, but they'll be out in Colorado. So get your little Raspberry Pis out and build yourself a data center. Yeah, and we're building new stacks. So we're bringing in the recent enhancements that were done at the data plane, so FD.io becomes part of the overall picture.
With OpenStack, OpenDaylight and FD.io, or maybe OpenStack and FD.io directly integrated. And we are hopefully not only normalizing some of the delivery mechanisms, we're also normalizing how we configure things. So many people said, well, can't we describe scenarios and network setups in a more uniform way, so that we can harmonize things and more easily deploy to certain hardware environments? So that you're just articulating your network needs and your component needs in a uniform way, and then an installer is gonna pick up that configuration file, some YAML, and do the setup accordingly, so that in Colorado, from a scenario composition perspective, it becomes less hard than it is today. So hopefully we're getting there, because it would help the overall CI/CD pipeline quite a bit. Right now, in many cases, from a testing perspective, the testing guys look at how the thing is configured and, based on that, they run certain test suites. They can't go to a uniform description of what the deployment should look like. So you're looking into this and saying, well, this particular SDN controller doesn't support that feature, so it doesn't make sense to test this, this is gonna fail for sure. And that's all a bit ad hoc, right? We're trying to get rid of that, so that the testing guys are providing test infrastructure: here's a test description of what you wanna run for a particular scenario, go run it, and then it's very easy to qualify the outcome. And then you can really say, well, I want 100% of this, whereas right now we pick and choose and say, well, we want 90% of this to be compliant, and we're only doing that picking and choosing because there is no normalized way to describe what should really happen.
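A minimal sketch of what that uniform description would buy us: if a scenario declared its components and features in one YAML descriptor, the test framework could derive which suites apply instead of inferring it from how the thing happened to be installed. The descriptor keys, feature names and suite names below are invented for illustration and do not come from an OPNFV specification.

```python
import yaml

# Hypothetical uniform scenario descriptor; keys are illustrative.
descriptor_yaml = """
name: os-odl_l2-sfc-ha
components:
  vim: openstack
  sdn_controller: opendaylight
  forwarder: ovs
features: [sfc, ipv6]
"""

# Which test suites make sense for which declared features (illustrative mapping).
SUITES_BY_FEATURE = {
    "sfc": ["functest.sfc", "yardstick.sfc-latency"],
    "ipv6": ["functest.ipv6"],
    "bgpvpn": ["functest.bgpvpn"],
}
BASE_SUITES = ["functest.healthcheck", "functest.vping", "yardstick.ping"]

scenario = yaml.safe_load(descriptor_yaml)
suites = list(BASE_SUITES)
for feature in scenario.get("features", []):
    suites += SUITES_BY_FEATURE.get(feature, [])

print(f"Suites to run for {scenario['name']}:")
for suite in suites:
    print(" -", suite)
```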
This all takes time. I mean, you have to hit the brick wall. You have to try, you have to fail, and then you have to iterate, you have to come back, and you have to rinse and repeat, rinse and repeat. All the issues that we found in Brahmaputra when we started to do these complex integrations, we have to start to automate our way around. So for us, the automation of the solutions is critical. If we wanna continue to grow, continue to bring in new features, and continue to be able to verify that they work release over release, year over year: iteration and automation. These are the things. And I think the other thing is to be closer to upstream. Right now, if we're just composing released, quote unquote, versions, you're almost six months late, whatever you do. Six months is a long time for a developer. For somebody that wants something stable and deployable, hey, absolutely, we're right at that. But we also have to cater far more for the developer. So, fast feedback: you don't care about releases if you want fast feedback, but you do care about the results. So let's find a vehicle for us to publish these results, integrate latest as opposed to stable, and then publish those results really quickly, so that upstream developers in OpenStack, OpenDaylight, OVS, what have you, get that feedback really quickly. Because I think more people will care about what we do. We want our releases to be boring. We would like all the excitement to be when we're designing stuff, not when we're trying to plug it all in at the end. We need to get things more real-time, more attached to main. We need to have integration be part of writing a line of software, that's all. By the way, I think we're obviously over time, but we started late, nobody's worrying, nobody's shouting, and I think we do have time for at least one question, if we have a question. Of course there are questions. That was a request for a plug; let me repeat it. The question was, will we get an update on the progress of the OPNFV Summit in Berlin in June of this year? Go to opnfv.org to sign up, and so on, okay. Yeah, yeah, so marketing is here, yes, absolutely. There is a summit in Berlin. Berlin is really nice in summer, so come and join us. By the way, when is Colorado? When do the waters of the Colorado start to flow? After summer. After summer. We're aiming for six months. So it actually is a really good point. We want to hit a six-month cadence. We want to be coming out every six months, on time, every time, and we're approaching that. I think at this point in time we haven't really figured out where in the calendar year those cadences will sit. Colorado, when we set out to do it, we set out to do it in August. It may be that it makes more sense to do it in September. Can you tell that I'm European? I can embark on a nice vacation in summer and then, after that... I'm not going to criticize six weeks of vacation. No, I don't get six. Probably late September. Probably late September. Okay, good. So do we have any more questions here? A quick one. Do you anticipate some benchmarks being submitted upstream, like for firewall or load balancer type of workloads, so they can be part of your SFC chains? So, I don't know about the term benchmark. What I anticipate is repetitive verification of performance, latency, system state and system capability, repeated over and over again on different labs with different infrastructure solutions. As far as a benchmark is concerned, well, how does it run now compared to what ran before? I think that's perfectly okay as a benchmark, but I don't think we're going to come out and say, this is okay and that's not okay. Because different deployments have different contexts and require different characteristics. It's very hard to say, okay, we need, you know... Well, okay or not okay is a case-by-case, customer-by-customer decision. So what we're going to do is enable you to do that. You can do that today. You bring your stuff and we run your firewall as part of Yardstick test cases, why not? Okay, but the reason I ask that question is you said you'll boil down your 24 scenarios to maybe about six, and you mentioned the EPC case and all. So to say, hey, this will work for EPC with this sort of reference configuration, we need some way to quantify what's an acceptable latency or performance. Exactly, and at this point in time we have absolutely no idea what that looks like. But what we will be able to do is publish: this works around about here, with so much compute and so much networking and so on and so forth. We get these characteristics and behaviors out of our platform. That's what we'll be able to say. And then, as we do work upstream and as we isolate issues and improve things, we'll be able to demonstrate that it improves over time, I would hope. And you get the documented behavior today. So one of the test cases that we run is Clearwater's virtual IMS case.
That's something that Orange stood up, because they said, well, our management really likes that test case, because we're standing up 10 VMs and we're running all the test suites that are part of the IMS. And we're documenting the results, right? They're published. They're all public, because the Jenkins results are all public and we publish all these things, so you can go and compare. Yeah, and you can do the comparison against us. I think the point that we're trying to make is we don't want to state what the benchmark should be. That's really not something that we as an open source community want to be doing. What we want to be able to state is: this is what you can get, this is how well it's gonna perform, and it supports these use cases in this way. If it's not good enough, come and help us fix it. Okay, that's fine. Well, I think we need to call it a day. I think we should get off. I mean, it's over. Someone's probably already waiting to come up to the stage. Thank you so much. Thank you, everyone.