Hi, welcome everyone on this gorgeous Monday morning here in Santa Clara at the Open Networking Summit. We are having a tutorial session today on interoperability, which is a really hot topic that affects open source, and we feel it's really important to talk about it. We will start with a really short introduction of who we are, and for technical reasons we will all do this sitting down, to avoid having the slides in our faces.

My name is Ildikó Váncsa. I work for the OpenStack Foundation; I'm the ecosystem technical lead there. I work together with our member companies and community members to help them be successful with OpenStack as a software package and also with OpenStack as an open source community. Beyond this, I'm also involved in OPNFV (I'm an OPNFV ambassador as well), and I'm really passionate about helping cross-community interactions and collaboration.

Okay, I'm Carsten Rossenhövel, managing director and co-founder of EANTC, the European Advanced Networking Test Center. We are an independent commercial test lab; my role is basically CTO at my company, and I'm also a standards rapporteur at the ETSI NFV Industry Specification Group.

Thanks. My name is Christopher Price. I'm an elected board member of OpenStack and OPNFV, and I work for Ericsson. Today I'm bringing a vendor perspective to interoperability, to try and put some balance between what comes for free and what costs us a lot, I guess.

All right, so we thought to bring you the big picture before we go into the details: how open source, the vendors, and the users (operators and service providers) are placed on this big picture. What's the relationship between them? What are the responsibilities of each group? I would like to ask Carsten to introduce the diagram that you can see behind us a little bit before we go into discussion.

Sure. So, the big picture, right?
So what's the big picture? Probably all of you are aware of one, or more, or many testing initiatives going on in the industry. Everybody's testing these days, which is wonderful. For a long time testing was an afterthought: as soon as somebody said "oh, we need to do some testing," the quality assurance people would say "yeah, I'll take care of this in the last few weeks before we publish something; it'll all be good." Now, in this industry, testing is really at the forefront and gets much more attention, which is wonderful, and now we need to create this big picture of how everything fits together. We need to make sure that testing is not like fireworks: boom, a test event here; boom, a PoC there; boom, some plugfest over there. We need to make sure that all of the initiatives work together and integrate in a way that the industry actually benefits. And that's the reason for this diagram: basically, a testing pipeline that we think is needed, or that is actually forming as we speak. It's similar to the pipeline we have for code. OPNFV wouldn't say "let's reinvent code that OpenStack has written before"; there's this whole notion of taking upstream projects' code into account and feeding work and experience back. We think the same needs to get in place with testing: you need to take into account testing that has been done by other entities before, and you need to provide feedback to upstream testing initiatives to make them better and more automated. That's basically what this diagram tries to explain. It all starts with testing in the open source initiatives, and then commercial implementations are derived from open source, typically, most often, these days.
So the commercial implementations need to be tested. Once that is done, industry-wide interoperability needs to be tested across all of these different initiatives and implementations, and in the end the operators, the service providers, do their own testing. Usually they do a lot of testing at this point; you are probably aware of the big North American operators' programs for integrating NFV solutions, and that's a massive testing effort. The problem is that the further we get towards the right side of this diagram, the more expensive the testing becomes, because everybody is doing it on their own premises and everybody is recreating the same experiences. So the idea is that test plans need to be upstreamed over time (the purple arrow pointing to the left) to basically transport that testing experience to a place where testing can be better integrated and where it's cheaper, because it's only done once instead of a gazillion times. (That arrow here is green; well, yellowish, anyway.)

So that's the reason for upstreaming tests: to reduce the cost, the effort, and also the time it takes. I heard from some unnamed sources that integrating a virtualized network function into production typically takes a year or more, which is surprisingly long, and in order to reduce this experience that service providers are seeing, we need to implement this testing pipeline. The last part, of course, is the blue arrow pointing to the right: the integration level increases. Not all testing can be done on the far left side of this diagram, because naturally the closer you get to the final service provider implementation, the more individual, the more integrated, the more end-to-end the solution becomes.

Do you want to add something? I was going to add a little. I think one of the things this diagram presents, which we're going to get into a little bit more today, is that each of these blocks has a different owner, a different responsible entity: open source testing.
It's very much a community-driven activity. Commercial integration: that's a vendor responsibility. Then industry-wide interoperability is cross-vendor, with other neutral parties, and then you get to the operator-led integration at the end. What we see, or I think what we would hope to see, as we move into this environment, is that there's less of a responsibility split between these blocks; the arrows indicate that things need to start to flow one way or the other, and NFV provides an opportunity for us to do that. Historically, operator-led individual testing was installing boxes, plugging in cables, putting in power, and so on and so forth. In a virtual environment it can be running a script, and that's when you can start to do this sort of thing. But in order to be able to run a script, there is an ocean of work that has to go into creating an environment where that script will actually work, and I think that's one of the things we're going to dig into a little bit, and talk about how we're getting there.

Yep, thanks. I think just one more thing to mention here: no matter where you're sitting, whichever part of the pipeline you and/or your company are representing, we all have our responsibilities in this pipeline. It's not one box that will carry all the burden while the others just use and benefit from it; we all need to do our own work in order to make interoperability happen. All right, so we will go through this pipeline a little bit, and we will start with the open-source interoperability part: what we can do there and why it is important. We can switch one more slide, thank you.

So, we have this tutorial today because we said that interoperability is important. But why do we need a tutorial about it? Why is it not simple and easy? You would think that, okay, we have a few tasks, we plug things together, it seems to work, and it will all be fine. It could be like this, but it's way more complex than that. Whenever we get to an open-source component, and these are the base of our systems more and more nowadays, we would like it to be modular and flexible; we would like to configure it in various ways. And when we get what we want, all this flexibility, a box of Legos we can just use in our environment, we have to realize that validating and verifying that it really works, that it really does what we want, that the pieces really fit together, becomes a really, really difficult game to play. Whenever you change a configuration option, that changes behaviors, that changes how things integrate together, and there are just so many things that can, let's say, go wrong, or make the process that Carsten just mentioned maybe even longer than that year or two. Therefore we need to look into what we would like to achieve, what problems that creates for us in the field of interoperability, what challenges we have, and how we can solve them.
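To make that combinatorial problem concrete, here is a toy calculation; the option categories and counts below are invented for illustration and not taken from any real deployment. Even a handful of independent configuration choices multiplies into a large number of distinct stacks, each of which would need validation.

```python
# Toy illustration of why configurability makes validation hard: independent
# options multiply. All numbers below are invented for the example.
from math import prod

options = {
    "hypervisor": 3,        # e.g. three supported hypervisors
    "network_backend": 4,   # four pluggable networking drivers
    "storage_backend": 5,   # five storage choices
    "topology": 3,          # three deployment topologies
}

distinct_configurations = prod(options.values())
print(distinct_configurations)  # 3 * 4 * 5 * 3 = 180
```

Four small knobs already yield 180 distinct stacks; real platforms have far more knobs, which is why exhaustive validation by each consumer separately is so expensive.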
You can see a few examples mentioned here: how we figure out policies, how we make them discoverable, how we can deal with, for example, release cadence, which is always a topic in the discussions. Obviously we would like to have new features and all the bug fixes out as soon and as quickly as possible. So when, for example, we are talking about a six-month release cadence, sometimes even that sounds like a long period. But when you look at the open-source part as the base component that someone has to productify and work on, it turns out that the release cadence, even a half-year one, can be really, really fast. That can result in many release versions in production which have to be able to talk to each other and interoperate with each other, which gets to be a really challenging problem when you face it and try to do it yourself.

So the release cadence is important because it dictates how the industry has to react, and it's a good point. One of the things which helps with a fast release cadence is stability. We talked about OpenStack as not being usable and so on and so forth for many years; it's seven years old now, it's reaching a certain level of maturity, and thus you can start to adapt to the changes that come through that community and are exposed through the community. I think one of the challenges we face across the industry is that not everyone is as mature as OpenStack at this point in time when it comes to the cloud environment. So the release cadence is really important: a shorter release cadence means less change in terms of "how do I adopt and start to work with this?" But then it also means that I have very short cycles in which I have to get things out into the network, and as we talked about, we still don't have all the pieces in place where we can go to an operator network and just run a script. And if I'm going to do something every three months, I'm going to need to have everything automated anyway.

Thank you; that was a good extension to what I said. And as you mentioned OpenStack, we can switch to the next slide. I brought OpenStack here as an example for you because, as Chris mentioned, we started to build the community and the code base seven years ago. I think our first big event was 75 people, and six years later we had 7,500. We have millions of lines of code and thousands of developers and engineers working together from all around the globe. As the years passed by and the code base started to grow, and not just the code base but also the functionality that the software package provides, we had to realize that we need to think about interoperability, and we need to take on the responsibility of the open-source side. We need to help our ecosystem and the ecosystem companies who are working with the software to ensure that what we are producing, and what they are putting into production, meets certain levels of quality, and that they will be able to build it into bigger systems as easily as possible. Therefore the community and the foundation created a working group, which today is called the Interoperability Working Group. The mission of this group is to ensure interoperability and to work on the related testing initiatives. In this working group we obviously have the whole community involved.
It's a public activity, fully open, but beyond this we have the OpenStack board and the OpenStack Technical Committee actively working in this group. So it's really something that we think we are all responsible for, and we take it as a high-priority activity. It was founded in late 2013, and by 2015 we had our first guidelines produced, because this group works on creating guidelines rather than implementing the test cases themselves in the first place. The guidelines consist of a list of capabilities that the software needs to provide; they identify code parts and components that have to run, and they identify tests that need to be fulfilled. Whenever a software component fulfills the guidelines that this group provides, it means, for example, that the distribution a vendor provides can use the OpenStack logo, because we say that they fulfilled what we identified as requirements. It guarantees that you will get what you expect from that platform when it gets deployed, and that you will be able to access the APIs. This group works with a toolset.
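The guideline idea described above can be sketched in a few lines. This is an illustrative model only, not the real Interoperability Working Group tooling: a guideline maps required capabilities to the tests that demonstrate them, and a product qualifies only if every required test passed. All capability and test names here are hypothetical.

```python
# Illustrative sketch of a guideline: required capability -> tests proving it.
# NOT the real Interop WG schema; names are hypothetical.
guideline = {
    "compute-servers-list": ["compute.test_list_servers"],
    "identity-token-create": ["identity.test_create_token"],
    "objectstore-object-get": ["objectstore.test_get_object"],
}

def failed_capabilities(guideline, passed):
    """Return the capabilities whose required tests did not all pass."""
    return [cap for cap, tests in guideline.items()
            if not all(t in passed for t in tests)]

# A hypothetical vendor result set missing one required test:
passed_tests = {"compute.test_list_servers", "identity.test_create_token"}
missing = failed_capabilities(guideline, passed_tests)
print(missing)  # ['objectstore-object-get'] -> not compliant until this passes
```

The real process is of course richer (capability lists are versioned, voted on by the board and TC, and cover multiple releases), but the core check is exactly this kind of coverage test.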
So if we switch one more slide: this is called RefStack. It is integrated with our Tempest framework, which we use as an integration test framework within OpenStack, so you can run the interoperability tests as part of that framework. The tests are usually run locally, on premise, and the results are uploaded to our website, stored in a database, and evaluated after that. The Interoperability Working Group produces a new guideline every half year; it covers three OpenStack releases, and the board and the TC vote on what goes into the guideline and which tests go into the test suite. These are the tests that you need to run against a certain version of OpenStack, and that gets the APIs validated. I think later on we will go into the aspects of testing and what you want to get tested. For OpenStack, we are testing the APIs, and specifically the user-facing APIs, which is important to mention. So, for example, you will not find anything within the test suite that would be, let's say, an admin-only API, or something that's intentionally pluggable, like middleware or drivers. You will only find what's user-facing, what everyone who has access to the product is able to access.

Just to chime in on RefStack, or the Interop Working Group outcome: as a vendor, it provides you with a guideline, and it provides you with a dialogue with other vendors where you're able to sit down and agree: this is what we will all support; we will all make sure that these capabilities are present when we have an OpenStack-logo product. Thus, as a consumer, as an operator, or as someone building on top of OpenStack, it doesn't matter whose cloud I'm running against, it doesn't matter who is vending it: I can trust that I will have these user-facing interfaces. This is the first point in the entire value chain where we start to establish some sort of trust between the consumer and what's being produced, so it's really important.

Thank you. But OpenStack is just one piece; it provides an open-source cloud platform. You can use it for private or public clouds, or as part of a multi-cloud environment, but you can also look at it as part of a bigger system. So if we switch slides, I can hand the mic back to Chris, and we can look into how OpenStack fits into the bigger ecosystem. We will touch on OPNFV and see how this all works out.

Thanks. So I'm going to talk a little bit about the OPNFV CVP, the Compliance and Verification Program. We kicked this off last year from OPNFV in order to try and achieve what OpenStack is achieving: a level of trust and certainty about what it is that we're producing. So the CVP is set up against a set of fundamental principles and objectives. The objectives, I guess, are the most important thing: we want to essentially help build a market; we want to try and reduce risk for operators and end users.
We want to make it easy for people to consume what we're producing in OPNFV and, as with OpenStack and the RefStack toolchain, to produce an automated way of building trust with a consumer. OPNFV is slightly different from OpenStack: in OPNFV we have opinions about who our customers are and what we're trying to serve, which are a little bit more constrained than OpenStack in general. We are trying to address the telecoms market; we are trying to address NFV. Thus we have certain expectations around what the control plane should do, we have requirements around how it should be deployed, we have requirements around how an application should behave when it's on the platform, and we have expectations around what the platform it's running on should actually look like: what the hardware is, how it is configured, and so on and so forth. So our CVP is maybe a little different in that OpenStack provides you with a way of producing trust for a consumer; OPNFV then takes that to the next level, which is building a way of starting to establish characteristics of a system which can then be trusted. We're getting to that a little bit as we move forward.

I wanted to go back to where the CVP came from, to the foundation for the CVP. "It's a great idea, let's just do some sort of standard way of doing stuff": it's hard to do that just at the outset. In OPNFV we have been building a lot with our upstream communities. We work a lot with the OpenStack testing; we work with OpenDaylight, with ONOS, with OVS, with FD.io. We take those tests and we integrate them, so when we spin up an OPNFV-based data center we then run many thousands of tests against it, which primarily come from upstream communities. We don't carry our own code, or we try not to carry our own code; all of our code is based on upstream, and we contribute back upstream. So if we're working with OpenStack, we're writing code in OpenStack and we're consuming that, and our test cases are very much aligned with the OpenStack testing: we pull Tempest, we pull Rally, we pull Robot, we pull a number of different test frameworks, and then execute them in an NFV context. So what OPNFV does, essentially, is take a lot of open-source software, provide an NFV context for it, and then allow us to automate validating that software in that context.

This is an overview of the test framework that we have in OPNFV. We have the compliance activity, which we're incubating today; Dovetail is the project we use for that. Then we have a number of functional testing activities based around a common framework, which we call Functest. And furthermore we do a number of performance testing activities: data-plane testing, resilience testing, failover testing, which fit into another bracket, if you like. And then we have tiers through which these tests can start to build. At this point in time we're still relatively young as a community; we don't have the seven years behind us that OpenStack has, so a lot of these are, I guess, not as well governed as they potentially could be, which is why we're still incubating our CVP activity. But it gives you a general idea of what we have as far as a testing foundation is concerned.

What we're trying to do sits between that framework and the little map on the right, which actually represents a project we call Pharos. That's our hardware labs project. We have a number of labs, which you can see spread over the globe, and what a Pharos lab provides is a fixed infrastructure that complies with OPNFV's view of an NFV cloud, that we can run any particular composition of a software stack against. So what we do there is we deploy, we test, and we iterate; and we deploy, and we test, and we iterate. We build data centers on a daily basis; well, we build hundreds of data centers on a daily basis. I remember I was on the keynote here a couple of years ago and I was able to claim that we'd spun up 1,700 clouds by the time that conference was on; I can't even imagine what that number is like today, as we've grown considerably since then. What OPNFV does is build a cloud up, test it, validate it, make sure that the features we want to see in that cloud are there, and then tear it down and start again with a slightly different flavor: with OpenDaylight, or with ONOS, or with OVS, or with FD.io. We are able to basically integrate all of these into a cloud environment.
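The deploy-test-iterate loop just described can be sketched very compactly. This is a stand-in model, not real OPNFV installer or Functest code; the scenario names follow OPNFV-style naming but the `deploy` and `run_suites` functions are hypothetical placeholders.

```python
# Minimal sketch of an OPNFV-style "deploy, test, iterate" CI loop.
# deploy() and run_suites() are stand-ins, not real tooling.

SCENARIOS = ["os-nosdn-nofeature", "os-odl-nofeature", "os-onos-nofeature"]

def deploy(scenario):
    # Stand-in for an installer building a fresh stack for this flavor.
    return {"scenario": scenario, "up": True}

def run_suites(stack):
    # Stand-in for the functional/performance suites run against the stack.
    return stack["up"]

def ci_cycle(scenarios):
    results = {}
    for scenario in scenarios:  # build up, validate, tear down, repeat
        stack = deploy(scenario)
        results[scenario] = run_suites(stack)
    return results

print(ci_cycle(SCENARIOS))
```

The real loop runs this hundreds of times a day across the Pharos labs, with a different stack composition each time; the point of the sketch is simply that each flavor gets a fresh deployment and a fresh validation run.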
So we deploy, we test, we iterate, constantly. We use ETSI as a very good reference for how we should be testing and what we should be testing, and we integrate that into our test activities. We also take IETF test standards and use those in the same context. So what we're trying to do is take what's happening in the standards world and apply it to what's happening in the open-source world, in a telecoms context, so that we start to do this real-time integration and we start to prove things very quickly. There's a cycle: because we do this over and over again, we're able to evaluate, then we're able to improve, and then we're able to iterate again. Every time we find a fault, or every time we find something new, we push it back upstream. If we're going to write a test case for an OpenStack feature, we will generally, potentially, do it in our own repository to start with, but eventually we'll push that upstream to OpenStack as quickly as we can. So we have this cycle going round and round, where we've been able to improve the quality of our testing and, essentially, the quality of what it is that we're testing for telecommunications environments. What we get out of that is a very good knowledge of how to use OpenStack, how to work with cloud native, how to use OpenDaylight and KVM, how to work with Linux in an NFV environment; what it means when I have different Linux distros with KVM and another guest operating system on top, and how this behaves in different scenarios when I compose different components together into a platform and run it in a telecommunications context. What it further provides us is a way of upstreaming that, because if we go back to OpenStack, for instance, and say "hey, we've been running these tests, here is how it looks, here is what we're trying to achieve, and here are the faults we found," then it's very clear for them to understand why we're trying to make a change or why we think this is a problem.
And then we can work very quickly with them. So, for the CVP as we move forward: we plan to release this year and have our first CVP. When we release that, we hope to be releasing it with a number of upstream components, and RefStack of course. The toolchain that OpenStack provides gives us a really neat foundation for the cloud testing, and we're working with the Interop Working Group to identify whether we need an NFV-based variant of it that we can then adopt and use; that's an ongoing discussion, I guess. So hopefully what we see happening then is that, through the OPNFV CVP, we can have an NFV reference in RefStack for OpenStack that people can use in their own contexts. Because at the end of the day, it's not all about one project or the other; it's about all of these projects figuring out how to work together. So that's, I guess, the CVP in short form.

The other thing, guys: if you've got any questions, just raise your hand and we'll be happy to take them, as we were trying to be conversational. I speak fast and I get through my slides fast, so you'll be leaving early if you don't stop us and ask some questions. Not that that's a good thing, mind you.

[Audience member] ...and then we're going to start a testing process and start looking to possibly deploy it. Are you also looking to try to pull these same operators in immediately and have them be collaborators up front, such that you don't deliver a car and they go, "well, yeah, but I really didn't want a manual transmission"?
"You know, I'm worried that there's still this concept of the long pipeline." Well, the operators have absolutely found ways of getting into the pipeline, and everyone's very happy about that, I think. I'm going to flick back a bunch of slides, right to here, because what we have is a traditional flow. You've got these lines, and we sort of talked about trying to blur those lines a little bit, in that what we might previously have been doing in the operator-led individual testing, we want to start to push back up the track. We do have operators: we have AT&T, for instance, heavily invested in OpenDaylight, using it in their network and contributing test cases; and we have AT&T in OPNFV contributing test cases and actually helping drive a number of the feature projects we have there. Orange is a really good example of a group that has invested a lot in testing in OPNFV; the structure of that test architecture has a lot to do with a guy called Morgan Richomme, from Orange, in France. I think what we've actually started to do already is break those walls down. We still have, from a commercial perspective, gates we have to pass through, but what we're managing to do is shift the dial a little bit as to who's involved, where, and when. And it's a really good question, because if that wasn't clear, I think it's something that we're trying to project as well as we move forward.

I just wanted to add to this point: you mentioned, for example, AT&T.
We have them in OpenStack as well. So when we are talking about open source: yes, these are communities consisting of developers, but the developers are in most cases employed by companies like AT&T, or Orange, or Ericsson, or all those ecosystem companies. So this pipeline is not really like: there is open source, which is some kind of cloudy, blurry thing we don't know what it is, and then the vendors come, and then the operators come. This is a whole mixture. Open source also provides a place for the vendors and the operators and service providers to collaborate in one place, so they can share their feedback way earlier in the process, and participate in the process. Open source does not only mean an open code base; it also means open testing initiatives. So we would also like to encourage all of you to come and participate: find these groups, find where the activities are ongoing, and help us bring it to the next level. Also, many of these companies are running these open-source releases, just downloaded from GitHub, as proof-of-concept installations in labs.
I think in most cases it's not true that the first time they see the whole thing is in the form of a distribution; they have already tested it in labs, they have already tried to figure out what it's capable of and how they can use it. They just will not put the pure open-source version into production; operators will work with vendors to get the distribution, get it deployed, managed, supported, and all those sorts of things that make the business run.

I don't know what your view is, but actually I think the operators are also getting way more involved in the whole process now, recently, with virtualization, because a couple of years back... you know, we've been around for about 25 years and we've done a lot of transport network testing, and in the MPLS transport times the operators were very, how do you say in English, laid back. Basically it was like: okay, the whole industry will do their guessing game, and in the end we'll just buy what's available. That doesn't work anymore, and I think it's very reassuring to see that the operators try to get involved in this whole pipeline as early as possible. You talked about open source; I can talk about ETSI standardization, where operators are very active and trying to steer things. Of course, everything has a horizon: at day one an operator probably doesn't know exactly how to deploy a 5G mobile core two years from now, so there will always need to be some testing on the right-hand side.

By the way, there are still a few more seats here if you want to get seated: the first row, obviously, but also here on the second and third rows.

So, just to sum up where I was coming to and going from: there are a couple of initiatives that I wanted to call out. There is an initiative called XCI which is ongoing today, and that goes from OpenStack to OpenDaylight to OPNFV to FD.io; so it's across almost, well, not almost: the intention is to be across all of this. The intention of that initiative is that we establish continuous delivery from open-source projects, true continuous delivery from open-source projects, and then continuous integration in the consuming entities. What that means is that on any given day, at any given time, I can spin up the latest software from trunk and run it. And that's critical, because it enables the next thing that we're working on, which is not as mature; you won't see any stickers or anything around it yet, but there is a cross-project test strategy initiative spinning up. The plan there is that if I happen to be writing some code in FD.io, for instance, on the forwarding plane, and I'm targeting, say, an operator's central-office deployment, then I should be able to push a patch, and my patch will go into Gerrit, and that will run a few jobs, and it will start to build, and the continuous deployment will kick in, and that patch will eventually, not in seconds but in minutes to hours, make its way into a data-center solution that is actually deploying and validating that feature end to end. Thus I have the ability to have a test chain for any patch that I push that validates that patch, no matter what component it happens to be in, end to end in the network. These are kind of huge activities; not huge in the sense that there are millions of people working on them (there are a few small groups really starting to incubate this), but the change that they can bring to how we approach these types of problems is enormous.

So, I'm playing audience for a moment. Is that going to be a "bag of rice" problem? I don't know if you say this in English as well.
You know the bag of rice trips in China and then Nobody cares, but in fact, you know if you would Include a small change in some some open source code And it basically starts rebuilding the whole test for all opnfv flavors and open daylight and onus and open Contrail and then it comes gets into open stack and basically the world's servers will be busy for a week because of one line of code Change, how do you manage the whole flavors of configurations? through through intelligent planning so It's part of the it's so I talked about continuous deployment before I moved into continuous integration because Understanding how the continuous deployment feeds into the continuous integration and how that stepwise Chain approaches to the point where you get that end-to-end solution is really important It's not it's as I said, it's not seconds It could be minutes to hours because you don't necessarily want every patch To propagate through the network. You want to be able to do that in a managed and maintainable way So essentially what you do is you do set up a cadence across all the components that enable them to have a trustable cadence of deployment and testing That then everything sort of feeds into in a manageable way It's not done yet. It's a work in progress, and I don't know how it looks. I can't explain it today So we did before before I move to the next slide. I wanted to ask some questions just briefly so It's good to know who you're talking to and we haven't actually asked yet So if I break the if I break the audience up into I am an operator I am an NFVI vendor. 
"I'm an application vendor", or "I'm none of the above": could you just raise your hand so that we know who we're talking to? So, if you work for an operator, someone operating telecoms networks? Cool, okay. For an NFVI vendor, someone selling a cloud platform? Just keep raising your hands. And then application developers, do we have any application developers? A number of none-of-the-aboves, feeling very lonely, I guess. Okay, great. Thanks, guys. So that's kind of the open source world. It's moving fast, it's extremely exciting, there's a lot going on. But the challenge comes when you move into commercial interoperability. It's fine to solve everything in the open source world, but then we need to move to the point where we have maybe more accountability, and for this I'm going to hand over to Carsten. Okay, thanks. So just a quick slide in between, since I'm not sure how many of you know EANTC. We're an independent test lab. We mostly test commercial implementations, both together with vendors and with service providers, and also with enterprises and governments, but that doesn't play a role here. So in NFV we're typically seeing commercialized implementations of open source projects: basically a vendor X takes an OPNFV implementation, or takes an OpenStack implementation, modifies it, expands it, amends it with proprietary aspects, and then releases it, and then we get to test it. So we can move to the next slide; I didn't want to put too much propaganda at you.
So basically I tried to summarize the state of the NFV industry, and of course this is non-authoritative, and I also don't have a huge amount of public data that I can throw at you; we do have some experiences from NIA that I'm going to talk about later. So if you look at the left-hand side, that's the probably familiar ETSI reference diagram. It consists of the NFV infrastructure at the bottom, paired with the virtualized infrastructure management, and these two typically come together. Then there are these blue VNFs, the virtualized network functions, together with their element manager, which manages the virtualized network function at the application level. And on the right-hand side there are two green functional blocks, which are called the VNF manager (VNFM) and the NFV orchestrator; these two together are often also called the management and orchestration, MANO. And on top of this sits the big white elephant, the next-gen OSS, which nobody really knows much about yet. So anyway, what are we looking for? That's on the right-hand side. We're looking for interoperability, of course. But then the other big question here is performance. We didn't talk about the types of testing, we can do this later on, but interoperability is wonderful and it's the bare-bones thing you definitely need as a mandatory precondition to get anything running. Question number two, basically the next morning, will be: how efficiently does the solution work? Because if you have to substitute a two-rack-unit traditional firewall with two full racks of x86 servers,
that's not going to fly well. So efficiency and scalability is the next question, and the one after that, especially for telecom operators, is high availability. And I think that's where, as far as I can see, there is quite a substantial difference between how enterprises use OpenStack and how service providers use it, because service providers look at high availability in a different way. They don't just say, okay, any service can die at any moment; they have to understand exactly which service dies, exactly when, and how their customers can continue doing their stuff even while it's dying. And then of course the big new thing is manageability. I think that's basically what the business case lives and dies with. In a multi-vendor world, the whole manageability of getting services instantiated, torn down, migrated, moved around data centers, getting service chains of, you know, virtual firewall plus virtual router spun up, all of this is governed by management solutions, and so manageability is a topic that we see much more as a testing focus than in the past. In the past transport network, people just used command line interfaces or their homegrown management implementations, and that doesn't work anymore. Okay, we can go to the next slide. Basically this builds up the different flavors of how to deploy this. The first and most simple flavor is to just buy everything from one vendor, and actually I was surprised, but there are some service providers out there who follow this model. I'm not sure if this is public, so I'll be careful here on the microphone.
So there is one pretty well known service provider in Western Europe, in a mid-sized Western European country, and they just buy everything from one vendor. And they said, actually, it works fine; we're happy people, you know, for now. We buy the infrastructure from the vendor, we buy the VNFs, mobile core and everything, from the same vendor, we buy the management from the same vendor, and we also buy the billing systems, the management systems, the OSS from the same single vendor. And of course you have a very clear responsibility in this case: there is no finger-pointing, there is no questioning of why it doesn't work, it's very clear. I tried to represent this with these little smileys on the right-hand side, basically to say that on the performance side, on the high availability side, and on the advanced functionality side, it's pretty much well understood by some solution vendors out there. Not, you know, huge amounts, but there are quite a few that understand the game and are able to implement it.
The problem is, this is not what most service providers started with as a thought. Why would we even want to virtualize? One of the reasons was to buy best-of-breed solutions, be more agile, be more flexible, and also not be required to issue RFPs for the whole network, everything in one piece. So that's why I think the majority today try to go at least for this model, which says: we still buy a lot from the same vendor, like the infrastructure and the MANO, and maybe also the next-generation OSS, but we reserve the right to buy VNFs from different vendors. That's the first level of integration that almost everybody in the industry is seeing right now. You have an infrastructure, you have a management layer, and then you need to integrate VNFs, plug them into this environment. These VNFs can be virtual routers, virtual firewalls, virtual DPI, virtual mobile core, policy management, you name it. Typically the RFPs go out, and then a couple of implementations are tested and plugged in, and that's where I would expect the solution to still be quite well interoperable; it's a straightforward game, I think. But in fact that often takes service providers a year or more, and we had an experience in our lab... you know, we don't normally cancel tests, right? We get into testing projects, we run tests, we write a report, and then sometimes there is a second round if something fails, and in the end mostly it works. But we've actually had one or two tests which were formally cancelled because of the finger-pointing between the VNF and the NFVI vendor. Like, oh, this is your fault; no, it's yours; no, it's yours. And that's what can happen in the industry.
We'll talk about NFV-ITI, which is one way to prevent this from happening. But I'm still pretty confident about some aspects of light multi-vendor: the service agility and functionality is good, because you can choose from different VNFs, but manageability might be more difficult, because the element managers from different vendors have to plug in with the orchestration, and performance usually is a big question mark, because that depends on exactly this multi-platform, multi-flavor testing really working, and even a small change of a BIOS parameter can throw people back a couple of days in testing. Okay, so the end goal, as far as I see it, would be a total of four-plus vendors in this environment: typically you are served by infrastructure from one vendor or group of vendors, MANO from a second vendor, and the VNFs from other vendors again. In this case there are a couple of interfaces that need to be multi-vendor ready, and especially the ones on the right-hand side, between the MANO and the infrastructure, are the more difficult ones; the industry is starting to test that right now. As you can see, there are a couple of red smileys, but there are also a lot of question marks, and the question marks mean I don't even have enough data: the industry hasn't tested these commercial implementations broadly enough yet for me to even form a mature idea of how this is going to work. I think a lot of work is ahead of us there. So, we talked about open source testing initiatives; another testing initiative, one that verifies commercial implementations, is the New IP Agency.
We are the test lab that currently works for the New IP Agency. I don't plan for us to be the only test lab working for the New IP Agency, but for now we're piloting the program. It's a not-for-profit industry association, like many industry forums out there, and its primary focus areas and goals are to educate the industry and to do industry-wide testing. So we actually accept anybody, from whichever camp or open source group they're coming, and we test the commercialized implementations. So far we've done two major programs of VNF-to-NFVI interoperability, which is the light multi-vendor model as I explained it. In this program we validated different commercial virtualized network functions for interoperability against commercial NFV infrastructures; we started that in December 2015 and ran quite a few combinations, and I'll explain which. Last year in the summer we did a first showcase of the whole heavy multi-vendor interoperability. It was really just fireworks, you know, the first showcase in this area, with some of our members, and we're going to substantiate this with a more structured MANO interoperability testing program starting right now. So basically we're testing the orchestrators, NFVOs and VNFMs from one vendor group, together with the infrastructure from a second vendor group, together with VNFs from a third vendor group. That's going to be a lot of testing and requires automation as well. And, in the blue circle, we (that is, EANTC, my company, not the New IP Agency) also participated in the first ETSI Plugtest in the same area of management and orchestration. So, to give you an impression of what this looks like: we create a database as well.
And by the way, I'd love to interface with the databases that open source projects maintain; I think that's part of the testing pipeline paradigm, that we need to get the results stored in a kind of interoperable way. So you see on the left-hand side the VNFs, and at the top the NFVIs; these are the vendors that provided their solutions for our tests so far, for the VNF-to-NFVI testing. You see a number of passes, and you see a number of N/As, because we refuse to test same vendor against same vendor. We said this test combination is not available; they can do this internally. And there are still a lot of open, empty slots, and these empty slots should not exist in the end, right? In a fully automated world we should have all of this matrix filled. Currently most of the testing time is spent configuring, adapting and doing pre-staging, so it's not easy to automate all of it, because of the many different ways vendors configure their solutions, and we have to work on filling the empty slots. The results were pretty good, I would say, in general: out of the vendor-to-vendor combinations, 36 passed and 17 failed or were not completed due to time constraints, which means the vendors didn't completely give up, but they couldn't get things done within the time we had available for them. That is a total success rate of about two-thirds.
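To make the shape of such a matrix concrete, here is a toy sketch in Python. The vendor names and outcomes below are invented for illustration; they are not actual NIA test data, and the real results are published pass-only:

```python
from itertools import product

# Hypothetical vendor lists: each product maps to its owning vendor.
vnfs  = {"VNF-Alpha": "Alpha", "VNF-Beta": "Beta", "VNF-Gamma": "Gamma"}
nfvis = {"NFVI-Alpha": "Alpha", "NFVI-Delta": "Delta"}

matrix = {}
for (vnf, v_owner), (nfvi, n_owner) in product(vnfs.items(), nfvis.items()):
    if v_owner == n_owner:
        # Same-vendor combinations are refused: marked N/A up front.
        matrix[(vnf, nfvi)] = "N/A"
    else:
        matrix[(vnf, nfvi)] = "empty"  # open slot, not yet tested

# Record some invented outcomes.
matrix[("VNF-Alpha", "NFVI-Delta")] = "pass"
matrix[("VNF-Beta", "NFVI-Alpha")]  = "pass"
matrix[("VNF-Gamma", "NFVI-Delta")] = "fail"

# Success rate counts only tested cells, not N/A or empty slots.
tested = [r for r in matrix.values() if r in ("pass", "fail")]
rate = sum(r == "pass" for r in tested) / len(tested)
print(f"success rate: {rate:.0%}")  # success rate: 67%
```

Counting only the tested cells, not the N/A diagonal or the empty slots, mirrors how the roughly two-thirds figure above is computed.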
I think that's representative, as far as I'm concerned. Of course, the success rate of tests always depends on how advanced the tests are; if we made the tests too simple, of course everybody could pass. Most people test lifecycle management these days, like onboarding VNFs, instantiating them, tearing them down, manipulating some operational parameters, and that's becoming better. But I plan to keep the success level at around two-thirds; I think that means our tests are challenging enough and actually provide added value. Right, so... yeah, good question, thanks. I'll repeat it for the microphone: we're reporting only passed combinations and not failed combinations. The reason for that is that in the commercial world you always work under NDAs, non-disclosure agreements. The NDAs are sometimes a nuisance, but sometimes they're also great, because under an NDA a vendor is going to be much, much more open. If you want to test something that's really innovative and really new, you have to sign an NDA, and then the vendor will disclose the latest and greatest. If we were to report failed combinations, we could not have these NDAs, which means the vendors would only bring their most proven, most well-established implementations, which are probably a year old, or they would not come at all. That's why we report only the passed combinations. The other reason is that, in my experience in the industry, anybody who participates in a test and fails something really wants to learn from it; I mean, that's the whole point, right? Participating in a test campaign only to show passed results is nice for marketing, but after all it's a waste of money.
So you really have to learn from the experience of failed tests. The vendors that participate run back to their labs, fix things, and hopefully come back a couple of months later with the next campaign and pass more tests. The only ones we should be worried about are the ones that are not mentioned in the table at all, because they didn't invest in the testing. Now, at this interoperability showcase that we ran at the Big Communications Event in Austin last year, we saw that there is no two-dimensional matrix anymore, and that's one of the big problems with more advanced multi-vendor testing. Each of these little topology diagrams on the left-hand side (it's barely readable, but there are a total of six) is a multi-vendor combination of the NFVI, the VNFs and a MANO. So you see it's a three-dimensional matrix now, and testing a three-dimensional matrix is of course a hugely larger amount of work compared to a two-dimensional matrix. That underlines again that we need to automate the whole thing. The results were nice: there were a couple of implementations whose interoperability worked, and especially we tested service chains of multiple VNFs, chained, you know, connected to each other in a forwarding graph and providing advanced services. We tested them in some cases even with a MANO coming from a different vendor than the NFV infrastructure, and we got a number of combinations going, and that's what we have to work on further, to broaden this whole experience. Testing on an open source NFVI? Yes, of course, and typically our challenge is to get this supported, because if we knock on the door of, say, OPNFV and ask, are you supporting your implementation, in terms of sending engineers to a test and then fixing bugs together with the vendors?
This is not how open source projects normally work. Maybe we can change this, but in general we typically require a vendor who supports an open source implementation, and this vendor will then provide the support. We had thought about just taking a few servers and installing OPNFV ourselves, the Colorado release for example, but then the big question is: who is going to provide support for it? It's a challenge. We managed to get OPNFV into the ETSI Plugtest recently, but the only way to do that was for vendors to volunteer resources. Intel and Ericsson, for instance, volunteered a bunch of resources for a few weeks, just to make sure that we could run the open source stuff in that context. And of course it was me that arranged the Ericsson side, and I got beaten up over it, because, you know, we spent money, and what did we get out of it? Well, we actually got some good stuff out of it, but it's difficult for a vendor to be able to say, okay, we need to spend thousands of dollars on running this open source software here. So it's a non-trivial challenge, I guess. Yeah, so Ericsson joined the New IP Agency, and I hope we'll see some OPNFV implementation there soon as well. Okay, so upcoming are the MANO tests, and I'm just highlighting this because of the underlying test plans. Whenever you run a test, of course, there should be a test plan. That's the big difference between a test and a plugfest: a plugfest, or even further back in the pipeline a hackfest, is a lightly planned activity from my point of view. You're basically relying on the experience of the participants, relying on their understanding of the relevance of what they do, on a hackfest even more so than on a plugfest. You basically lock people into a room and hope that something worthwhile will come out if you throw a couple of topics at them. For our tests we provide
much more detailed test plans, and I think these test plans should be available for the whole industry, and not only for a single test; again, speaking in favor of the pipeline paradigm. So in the New IP Agency we're not creating proprietary test plans. We're only creating open test plans, and we're doing this through the ETSI NFV industry specification group. NIA, the New IP Agency, doesn't even have a technical committee, by intention: we say any resources that are knowledgeable and can contribute to a test plan should go to ETSI. So I'm the rapporteur for the ETSI TST007 document, which has nothing to do with James Bond; it's actually an interoperability test plan for the management and orchestration. It's currently still a so-called early draft, but we have something like the tenth version of it, and we plan to get to a stable draft by the next ETSI meeting in May and get it ratified before September or so. So it's in pretty good shape. Now, what are the challenges of commercial implementations? We published detailed test reports; you can read them on newipagency.com or on Light Reading, or just Google them. Interestingly, we saw a fairly large number of issues. We documented them in detail, and then we had a call with the OpenStack team, I think it was two months ago, and I basically listed all my challenges, and the response from your colleagues was: okay, this is an education challenge, this is a commercial adaptation challenge, this is just a pure commercial issue. So we saw that this pipeline is really necessary, because not all problems can be finger-pointed back to open source. What vendors do is take the code and implement it, sometimes on their own; a couple of vendors, of course, are looking more towards Red Hat or Canonical now for centralized maintenance of at least the NFVI. But typically vendors take code from open source and then they modify it.
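A frequent, concrete symptom of these modified downstream branches is plain release skew: a VNF built against a newer platform lands on an older one. As a toy illustration (the release requirements here are invented, not any vendor's actual support matrix), such a compatibility check might look like:

```python
# OpenStack release names in chronological order (abbreviated list).
RELEASES = ["kilo", "liberty", "mitaka", "newton", "ocata"]

def is_compatible(platform_release: str, vnf_min_release: str) -> bool:
    """True if the deployed platform is at least as new as the VNF assumes."""
    return RELEASES.index(platform_release) >= RELEASES.index(vnf_min_release)

# A VNF assuming a Mitaka-era environment on a vendor's Kilo-based platform:
print(is_compatible("kilo", "mitaka"))    # False: the VNF expects a newer environment
print(is_compatible("newton", "mitaka"))  # True
```

In practice the check is rarely this clean, because a vendor's branch can carry backported features, which is exactly why the interoperability has to be tested rather than inferred from release names.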
In the best case they just provide the modifications back; in the worst case they branch off, and that typically means they are quite a few releases behind. So we tested a lot of Kilo and Liberty release stuff last year, which I'm not sure is still supported at all; it's old stuff from the OpenStack view, but from the telecom vendor view that's simply how fast they can work. And of course you get into version interoperability issues: one version doesn't work with the other, and the VNF assumes a more advanced environment. That's one thing. Another point is the whole policy area. Security aspects are always important for telecoms, and they are implemented in OpenStack, but the more restrictive you make the isolation of tenants or services, the more you get into problems. That was often an issue that we saw, and I think it was mostly due to configuration or lack of understanding, but that's the reality today. And the last part is licensing. ETSI has now started some work on licensing alignment. Naturally, commercial licensing is something that open source projects would not deal with traditionally, right? I mean, there are no commercial licenses in open source.
So why even start such a project? But in reality, of course, in the commercial world licenses are a big point, and everybody does licensing in their own different way. So we had a lot of issues with VNFs not working because they lost contact to their license server, or, you know, weird things happening. We hope to provide as much detail as needed to reproduce and fix these issues back in open source where feasible, and for all the other parts we just need to educate the vendor world and tell the new joiners that they should look at these aspects and try to avoid them. There's another aspect to the implementation deviations, I guess. If you turn your mind back to when Juno and Kilo were being developed, that was a period where a lot of telecommunications vendors were showing up at OpenStack and waving their hands in the air and saying, guys, this is terrible, we can't get anything done. And that's indicative that there were gaps between what the telecommunications industry required and what the OpenStack community had as a foundation. Since then I think there's been a lot of good work done, by the OpenStack community and by the telecommunications companies, by stopping the hand-waving and actually getting down and contributing. Those gaps between the need and what's available are closing, and I think this speaks very strongly to open source in general, in that we found a way to move forwards, we found a common foundation. I would put money on us not seeing the same interoperability issues that we have had traditionally, because we've learned, and we iterate, and we fix this moving forwards. And I would suggest that from Ocata onwards it will be smoother sailing for sure.
Yeah, absolutely. So, where we're progressing with the topics: I think MANO interoperability is currently the issue. Towards the end of this year, in Q4, we want to look at the integration of SDN and NFV in the telecom industry. Everybody always says "SDN and NFV" as if it were a six-letter acronym, but of course nobody really understands, or few people understand, what it actually means to create interoperability and to integrate an SDN orchestration and an NFV orchestration into a common end-to-end service. What happens if data center X fails and all of the associated paths, the SDN flows towards customers, need to be migrated to another data center together with the services, for example? That's my hope for 2018, to get something working at a multi-vendor level there. And then one big area where I think a lot of interoperability challenges exist right now is resource management. Of course service providers hope to be able to use their data centers in a flexible way, to share workloads, to scale things up and down as the customer services require at the application level, and that resource management doesn't really exist, or doesn't work well, today. I think we're probably due a short plug on the Open Network Automation Platform project here, which, for those that aren't familiar with it, is being launched this week,
I believe formally: ONAP. Essentially, the network operators have come together and said whoops. They said whoops for very good reasons: they weren't able to operate the networks. But they've actually come together and are collaborating on building a next-gen OSS/BSS solution, down into the MANO layer as well, and they're inviting the vendors to come along. It's really an interesting dialogue to watch, because this is where AT&T, China Mobile, Orange, Bell Canada, these guys, sort of get together, and then they call up the Ericssons, Ciscos, Nokias and Huaweis and say: we want to solve these problems, and can we do it together, and can we do it in such a way that it helps to facilitate this integration that we have? So that you can actually start to talk about things like: if that data center fails because of an earthquake, how are these other data centers going to pick it up? And then you have a software base to actually start to articulate those conversations, and this is kind of exciting. It's going to be slow going, is my prediction, because you're actually dealing with some really complex topics, but it should help with this. Right, and I mean, number one is always requirements analysis, and I'm not sure the service providers all agree. A lot of organizations understand their own business and technical requirements for operating their services, but how much the industry can align on this is an interesting challenge. Okay, just two more things, on the ETSI Plugtest that we participated in. It's not an EANTC or New IP Agency initiative, it's an ETSI initiative; ETSI runs some tests themselves, and this Plugtest allowed both commercial and open source implementations to participate. So here we have an example of a case where vendors delegated staff to officially support open source initiatives to participate.
It was a fairly large crowd there, and it was actually a face-to-face, in-person test event. The tests that we conduct with the New IP Agency are mostly remote and distributed; we counted a total of 85 vendor engineers participating in one of the campaigns, but we only ever saw 13 of them on site. This, though, was a face-to-face test event, and it tested MANO as well, focusing mostly on VNF lifecycle management and network service lifecycle management. Next slide. So the Plugtest participants are quite a few, grouped into the VNFs on the left-hand side, the MANOs in the center, and the infrastructure on the right-hand side, and I tried to color in yellow those that are open source. Apologies if the yellow doesn't come out the same; I failed fighting with Google Docs on the colors here. But basically you see everything that's open: Open Baton, OPEN-O, Open Source MANO on the left-hand side, and OPNFV, OpenVIM and the OpenStack platform. So these are open source based, or direct open source, implementations that participated, and from the results they had a very positive outcome: around 97 or 98 percent of all combinations were okay. And I think, again, this is due to the fact that when vendors started testing and figured something would fail, they just didn't report it. So that's what you have to deal with; the ETSI Plugtests have a different focus than the New IP Agency. The New IP Agency says we want to report everything, and we will actually log everything; my team logs all the results and there is no way to escape this. The ETSI Plugtest is an engineering-focused event; they don't publish detailed results, and these are the only results that are available at the actual implementation level. They say, if the vendors don't want to report it, then it's their own deal. I think on the open source side, I know for OPNFV the approach was that we run everything, and if we find a fault, we raise a ticket.
Right, yeah, sure, I agree with that. And again, you know, the main benefit is learning; it's not about finger-pointing or blaming. And on the commercial marketing side that's always a conflict of goals: if you want to learn, then you want to be failing as much as possible, to learn from it; if you want to be great at marketing, you have to be able to publish a press release like "we completed thousands of combinations successfully", and these are different goals. Good, so we have 20 minutes left, 19 minutes left. I'm going to quickly talk a little bit about the vendor challenges that we have, and then we can more or less stop and take questions as well; we don't have a lot of time remaining, but we've done pretty well thus far. I did want to cover two main points. One is what it takes to build an interoperable application. Back when I was building applications, before this NFV time, it was great: you bought a box, you put an operating system on it, you built your software, you put that on top of it, you then tested it, and you knew it worked, and you could go out to a customer, install it, plug it in, switch it on, and it would work just the way it did when you built it. Life was relatively okay. Now we're in a virtual environment. Now I don't even know what the box I'm running on looks like anymore, let alone have control of which operating system I'm running on. So the world changes a little bit. But even though I'm in a world where I don't necessarily have the same foundation, I still have certain expectations when it comes to selling my software: I need to be able to provide predictable performance, predictable scalability and predictable operability, which we've talked a little bit about thus far. An operator buying my software needs to be able to use it. And that's fine if the software is relatively simple, but the more complex the software gets, the more it has
dependencies on the traffic coming in or going out of it, and the more it is government regulated, the more challenging that becomes. It also needs to be lifecycle managed. I don't necessarily know what operating system I'm running on, or what is lifecycle managing my software; however, it still needs to work effectively as a lifecycle-managed object. It needs to be able to be upgraded, it needs to operate in a resilient manner, it needs to have methods of reporting and methods of getting input that are well understood, regardless of where it's running. And this is a huge challenge. We talked about how it was with Juno: we were running on VMware, or Red Hat this, or Canonical that, and they were all very different a few years ago, and you literally had to do everything over and over and over again depending on what your target was. That's what we need to move away from, because if I write a thousand lines of code, I want to be able to sell a thousand lines of code. I don't want to have to sell a thousand lines of code plus three million lines of tests because I have 20 platforms I have to hit; then it's not worth making the software. The other point is interoperability, of course: standard interfaces. It doesn't matter what I'm running on, it doesn't matter where I'm running it, I still need to provide the same standard interfaces; otherwise the phone can't connect to the system. There are some basic things that we still have to maintain and bring forwards, and predictable characteristics: if there's too much lag, it's not going to work; if there's too much delay, it's not going to work; if there isn't enough throughput, it's not going to work for the services that consumers want to buy. So from a vendor perspective, "life sucks right now" is the best way I can describe it. But it's getting better, and it's getting better because of a lot of the things we've talked about today. Then, if I'm an operator, I think about what we're buying.
I want to buy. What do I expect when I'm buying? Well, I expect to be able to click to buy. I would love this to be an app-store type of thing: I press a button, it's up and running, and I have a new horizontal service deployed. That's cool — in fact, that's an awesome target, and it's the target we have in mind when we build things these days; that's where we're trying to go. But it's kind of challenging. We need application portability: the application needs to be able to run on this cloud or that cloud, and that cloud can be a dense data-center cloud or a central-office cloud, and it can be running different versions of an operating system. We still need to be able to manage this: the management and orchestration layer needs to be able to tell my application where to run, and my application should just follow the marching orders and run where it's meant to run. Functions, of course, have to be interoperable, and the tricky part is that it has to be multi-vendor. In other words, everyone else has to be able to do the same thing we're doing, and that's where it gets really tricky, because unless it's available in an open and common way — which is what open source is all about — it's very hard for us all to approach a problem with the same mindset. And end-to-end automation comes into it, of course. So we have this little arrow on the side, and it's really about the foundation pieces: doing the common things in open source, using the Interop Working Group in OpenStack, using the CVP in OPNFV, and pushing things there. As a vendor, I want to push as much there as I can, because if I can push stuff there, then I can start to answer some of the questions: where is my application going to land? What is it going to be running on? What are the interfaces it has to use? The more certain I can be about that, the faster I can write software, and the more certain I can be about deploying that software.
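As a loose illustration of that last idea — knowing up front what the platform provides so an application can check its own assumptions before deployment — here is a minimal sketch. Every field name below is invented for illustration; this is not a real OpenStack or OPNFV interface.

```python
# Hypothetical sketch: validate an application's platform assumptions
# against a capability document the target cloud might publish.
# All field names here are invented for illustration only.

REQUIRED = {
    "compute_api": "2.1",           # minimum API version the app was written against
    "network_features": {"sriov"},  # features the app depends on
}

def platform_satisfies(capabilities: dict, required: dict) -> list:
    """Return a list of human-readable gaps; an empty list means 'deployable'."""
    gaps = []
    # Version check: compare dotted versions numerically, not as strings.
    have = tuple(int(x) for x in capabilities.get("compute_api", "0").split("."))
    want = tuple(int(x) for x in required["compute_api"].split("."))
    if have < want:
        gaps.append(f"compute API {capabilities.get('compute_api')} < {required['compute_api']}")
    # Feature check: every required network feature must be advertised.
    missing = required["network_features"] - set(capabilities.get("network_features", []))
    if missing:
        gaps.append(f"missing network features: {sorted(missing)}")
    return gaps

# Example: a cloud that advertises what it supports.
cloud = {"compute_api": "2.60", "network_features": ["sriov", "vlan"]}
print(platform_satisfies(cloud, REQUIRED))  # -> [] (this cloud qualifies)
```

The point of the sketch is only that the more of this "what will I land on" information is agreed and published in common, open ways, the less per-platform guesswork a vendor has to do.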
So, really — we talked earlier about pushing a lot of the testing back down the track. That's really important. As an operator, if you're thinking about buying a particular system, and you know how that system should look, find the earliest possible point in that chain to get that test running, so that the vendors trying to sell to you can hit that context as early as possible. Then, of course, there are things like ETSI NFV TST — a standard for testing: this is how we're going to test, this is how we're going to be measured, and we can all align and agree on that. It's relatively straightforward, and it provides us with a really nice foundation for going out to a customer and saying "we followed this", and the customer says "oh, good — now I know what you've done, and I'm relatively comfortable with that". Then some of the vendor interoperability testing (IOT) activities are up there, and those have been around for a long time now. That's where all the vendors got together and said: I've got an EPG and I've got an MME and all these sorts of things; we plug them together, and as vendors we make sure they work — and that helps us solve that interoperability challenge. What it hasn't traditionally done is make them all virtual activities running in different data centers and so on. So there is a new initiative called NFV-ITI, which is looking at addressing how we can get those applications running on each other's software and hardware — which is in a similar area to the certification or compliance work that the New IP Agency does. A lot of this feeds up to the point where you get to that commercial-readiness piece: you do need to do the interoperability testing before you go to someone's lab to certify that it works. You need to know that it works — based on what you've pushed into open source — so that you know that the applications and interfaces
you're going to use are actually going to work and feed into that. So it's a chain that we see, which brings us to the point of where and how I achieve click-to-buy. How do I make it easy for operators to iterate? Because if operators can iterate quickly — if they can click a button, have a virtual network spun up, and test it safely in a secure, isolated environment — then I, as a software vendor, can start to have a conversation with them about how much lifecycle management I have to invest in, how much backward compatibility there has to be through every component in the stack, and where I can start to do things quicker so that they can do things quicker. It's a cycle we have to get into, and it's not the vendors alone or the operators alone that have to do this: we have to get together, we have to collaborate, and we have to find ways to make this faster and better — and all of these interoperability challenges we're working on feed into that.

And by the way, the application layer is an interesting point that shouldn't be underestimated. In the beginning, everybody thought the application layer doesn't change: if a vendor had a virtual EPC implementation before, they just take it and port it, and then it's cloud-something. But it's actually only cloud-ready — maybe it's not cloud-native — and as I understand it, the difference can be pretty major. A cloud-ready implementation can be pretty monolithic; it doesn't really take any of the major benefits of the virtualization, it just says "okay, now we can run it somehow in the cloud". On the way to becoming cloud-native, the application needs to change quite dramatically in some cases. So the whole application layer needs to be retested.

Yeah — anyone, raise your hands if you've tried to run a Diameter stack across a thousand containers. Yeah, you'd be insane to do so.
No, you wouldn't — but these are the types of challenges we have to address. When we were standardizing around Diameter, we weren't thinking of spinning up 500 containers here and 5,000 containers over there and having them all interoperate; that wasn't part of how we envisioned the network working. So this changes across the stack, and as you say, the application level adjusts. But from a vendor perspective, I come back to the ONS theme: harmonize, harness, consume. That really is the message — it's spot-on for where we are as an industry at this point in time. We need to find ways of harmonizing around the interfaces, the environments, the processes, the workflows; we need to figure out how to make the most of the software that's available to us; and then we need to get that into the network as quickly as possible. This cycle of harmonize, harness, consume is what we need to speed up, and that's really what all of this is about. That's why we want CD, why we want CI, why we want interop, why I want automation — that's what's going to help. So that's the vendor's view, I guess, on where we want to go. The vendor's view on how much effort it's going to cost to get there is maybe not as shiny, but we're still okay. So that's really all I wanted to talk about from my perspective. We have ten minutes for questions, and we have a concluding slide. So — any questions? Otherwise I'll ask my fellow panelists questions, which you don't want to see happen.

So the question was: where do we go to work on that universal template that helps us onboard applications, wrapping in processes and policies, and allowing us to click to buy? There is some work being done on TOSCA by ETSI; there is work being done in the newly forming ONAP project; and then there is a lot of work being done elsewhere.
I mean, products like Juju and others have done a lot of work around this. But that universal, agreed-upon "this is how we're going to do it" template is a work in progress at this point; we don't have something we can point at and say "there we go". And the reason is that it's a tough question to answer. Even if Ericsson sat down to try to do it, and we talked to Nokia and agreed on how we do it, and we brought Huawei in, and Cisco, and we all agreed on how to do it, we'd still only be addressing 40% of what it needs to do. It's a big industry question. But I hope to see ETSI, the ONAP project, and the work being done on TOSCA really start to answer some of these questions over the next 12 months or so, and give us at least 80% of what we need to get there — that would be ideal. And the TM Forum, of course. Yes — all right, quickly.

Definitely, and I think we're seeing something similar maybe a little further down the stream, on the SDN transport side of things. SDN implementations have adopted NETCONF and YANG models to describe their services, and the challenge there is always to find a YANG model which is multi-vendor ready. Everybody has their own YANG model, like everybody had their own SNMP MIB in the old days, and that is of course not going to help much — because even if you agree on the protocol, like NETCONF, to exchange it, that's only half of the game, right?
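A toy sketch of that "same protocol, different models" problem: two vendors expose the same VLAN setting under differently shaped model paths, so a controller still needs per-vendor translation before it can treat them uniformly. The module and leaf names below are invented for illustration; they are not real vendor YANG models.

```python
# Toy illustration: NETCONF gets the data across, but each vendor's
# (made-up) YANG-style model shapes it differently, so a controller
# still needs per-vendor translation into one common model.

vendor_a = {"acme-if:interface": {"name": "ge-0/0/1", "vlan-tagging": {"vlan-id": 100}}}
vendor_b = {"bolt-port:port": {"id": "ge-0/0/1", "switching": {"access-vlan": 100}}}

def normalize(payload: dict) -> dict:
    """Map either vendor's payload onto one common internal model."""
    if "acme-if:interface" in payload:
        node = payload["acme-if:interface"]
        return {"interface": node["name"], "vlan": node["vlan-tagging"]["vlan-id"]}
    if "bolt-port:port" in payload:
        node = payload["bolt-port:port"]
        return {"interface": node["id"], "vlan": node["switching"]["access-vlan"]}
    raise ValueError("unknown vendor model")

# Both normalize to the same thing -- which is exactly the work a
# harmonized, multi-vendor model would make unnecessary.
assert normalize(vendor_a) == normalize(vendor_b) == {"interface": "ge-0/0/1", "vlan": 100}
```

Agreeing on the transport protocol removes one translation layer; harmonizing the models would remove this one too.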
So hats off to any initiative that's trying to harmonize the models, because they have to find a way for vendors to still differentiate from each other — when vendors can't differentiate anymore, they can't sell their solutions — and, at the same time, standardize the models sufficiently to make life easier for service providers. That's the game for me.

So I was curious: you had that sort of up-and-to-the-right graph, and is there a place OPNFV fits in there somewhere? It seems like you went right from open source into it.

Yeah, that's why I've got the logo down here next to OpenStack. It's actually interesting, because OPNFV is a little further down the line than OpenStack, but when it comes to how I as a vendor approach doing this, OPNFV still comes before I try to productify something. So for me it's still part of that pre-design cycle — it's open source integration and open source consolidation — so I do have it down here in the open source interoperability piece. It can be debated; I put the slide together quickly.

We weren't sure about the sequence, so it's not a strict order. If you take a screenshot of this, don't say one comes before the other. The NFV-ITI people were somewhere here — yeah, they're here. As far as I understood the initiative, it's often about working with service providers and avoiding the finger-pointing. It's an MOU that says: if we, the vendors, find ourselves at the same service provider and our versions of software don't work with each other, then we do not finger-point.
We actually help each other. That sounds trivial, but it's a major achievement. So the pipeline might not be in the right order, but either way, I think it illustrates the whole paradigm, and we have to sort out how this will work. Other questions?

Do you see containers creating even more problems as they start to get more popular — Docker, Kubernetes?

Well, from an OpenStack perspective, we are trying to support containers as much as possible, and we are seeing containers be a hot topic from multiple perspectives — not just that one. It's an initiative that easily gets fingers pointed at it, in the sense of "we just put our things into containers and then it will solve our problems, because that will most probably make it cloud-ready/cloud-native-whatever". That's not actually true. Containers do not, out of the box, solve the problems we haven't spent time, energy, and effort on solving ourselves. From this perspective, I think containers are an even bigger challenge — it's not about how the technologies fit together but about how the mindset will be ready to deal with containers. On the other hand, we also see containers in, for example, HPC and scientific workloads: I don't want my researcher to have to be an engineer and do all the infrastructure-layer coding, because that's not their main purpose. So that's another area where containers can come into the picture, but there we are dealing with the same or similar issues as with VMs and how we onboard them.

Exactly — and from a vendor's perspective, again, containers can help and they can hinder, depending on which challenge you're facing. They help in that I just have to target a kernel version I want to run on, and I don't have to worry about host-and-guest interoperability and KVM versions and things like that anymore. They can hinder, not so much in the
container itself, but in the container control plane and how that interacts with, for instance, my OpenStack control plane, and how I couple these things together. On networking — well, we have some work to do on networking when it comes to how we're adopting and working with containers. There is some great work on the K8s side about how to bring some of the Kubernetes pieces into OpenStack, and I see a lot of really good progress; if we continue to make that progress, I don't think containers are going to be a huge problem for us. Something like OCI, the Open Container Initiative, gives us a standard packaging format. When containers first came out, it was like "yay, we have a new version of a tarball", because that's really what they provided: a way to package up stuff, and you didn't have to untar it to run it — you just downloaded it and ran it, and it was perfect. It has moved on from there: when we talk about containers now, we talk a little more about control planes and networking, and how they interact and how they scale, and it's that control-plane part where I think we need to continue to invest. But in general I'm actually very supportive of containers and how they can help us with interoperability. If all I have to worry about is my OCI format and having a compatible kernel, and as long as I can trust that the control plane is relatively close to what I expect, then from a deployment perspective containers can help simplify things. And we may be out of time, so I'll let these guys wrap up. That's me done — thanks, guys.

Thank you. So, thank you for coming.
This is what we had on interoperability for today, but as you could hear, we don't have the conclusion yet — we are not done with everything. So I would like to encourage all of you to look up the related open source activities, join them, provide feedback, participate, write test cases or code, and just share your experience with us, because this is what will move us and the industry forward.

Yeah, closing words: thanks very much for attending. I hope it was useful, and you can find us around the hall.