I'm Mark, I'm the OpenStack architect at VMware, and I'm also the co-chair of the DefCore committee. And I'm Chris Hoge, the interoperability engineer for the OpenStack Foundation. Okay, so what we want to talk a little bit about today is some of the interoperability problems that we're seeing in OpenStack today, and what the OpenStack community is doing to address them. If you noticed, up on the right-hand side of the stage during the keynotes this morning there was a sign about OpenStack being the integration engine, being able to incorporate a whole lot of different technologies into a single stack that people can use: different hypervisors, containers, VMs, even bare metal. One of the interesting things about being an integration engine is that it means you have to make a lot of stuff look, at some level, the same. Especially when we look across different OpenStack clouds: maybe they come from different vendors, maybe some are public and some are private, some are private hosted, some are appliances, some are distributions. There are a lot of different ways to consume OpenStack, and it's increasingly important that we figure out ways to make things more interoperable across all those methods of consumption, so we have real workload portability and keep application developers sane. So Chris, you want to take it away?
Yeah, sure. Okay, so I'm going to start with a very brief introduction to DefCore. This idea of interoperability within OpenStack is actually kind of an old concept, and it was baked into our founding documents in September of 2012, when the Foundation was created. The bylaws required a faithful implementation test suite to ensure compatibility and interoperability for products. So right from the beginning, when OpenStack was created, there was this idea that OpenStack installations should look similar enough to one another that when you call something OpenStack, it has a meaning: it has a minimal standard for how it runs and how installations work with one another. But although this was part of the guidelines, it was something that really didn't take off until the DefCore working group was founded in the fall of 2013. This was a board-driven initiative to fulfill that bylaws mandate. About a year later, the first guideline was approved, in the winter of 2014, and that's about the time I started working with the OpenStack Foundation. Then, after a lot of effort, the first guidelines were placed into effect in the spring of 2015. If you remember back in Vancouver during the keynotes, we got up on stage and announced that, I think it was, 19 products had passed the DefCore test suite and were carrying the OpenStack Powered logo with testing behind it. Since then there have been five guidelines; the two latest are 2015-07 and 2016-01. And why do I bring up the latest?
Well, because if you have an OpenStack product right now that you want to sell, and you want to get the OpenStack logo for it, these are the two guidelines you have to meet. And we've been incrementally changing and improving them. Sometimes we remove capabilities because we find they aren't suitable; sometimes we remove tests because the tests don't necessarily test what we're looking for. And we've been adding capabilities, trying to expand beyond just the Nova APIs into the Glance, Keystone, Neutron, and Cinder APIs. So there's this kind of tension of pushing and pulling, where we're trying to find the sweet spot of what defines interoperability, and how we grow that. So what is a guideline? A guideline consists of a few things. It contains components, which are essentially products. The current DefCore guidelines have two different types of components. There's a compute component, which lists all of the things you would need if you want to run an OpenStack Powered compute product, and a storage component; when we talk about storage in this context, it's object storage, so essentially running a Swift cluster. And then you can combine these two components to get what you would call an OpenStack Powered product. These are the components we have right now, but there's nothing that says there won't be more components in the future. It's just that right now we're focusing on what we consider, true to the name DefCore, the core functionality: what makes a core OpenStack installation. So these components have within them two different categories.
There's a capability, which is essentially a statement that some API exists. An example would be creating a server; that's a capability. Or getting a list of images, or attaching a volume. All of these are different types of capabilities that you, as an end user, want to be able to perform on an OpenStack cloud, and to do so in a way that's predictable across clouds. The way we measure that a capability exists is by running a test against it, and the tests are chosen by a set of twelve criteria. And then there are designated sections, which are a definition of what OpenStack code has to live inside of your cloud for it to be considered OpenStack. Now, not all of the OpenStack code has to exist, and this is in part to allow for vendor plugins: for hypervisors, for storage, for different authentication methods. Those aren't prescribed. But by and large, it's the APIs, and the code that drives the APIs, that must be running inside of an OpenStack product. So, the good news about OpenStack is that it's incredibly flexible. Like we were saying about these drivers, there are any number of ways you can configure your OpenStack cloud: you have your choice of hypervisors, storage drivers, network drivers. It's a really powerful platform, and you've seen that in the marketplace, where all these products have sprung up offering OpenStack in different flavors tailored for different needs. The bad news is that OpenStack is extremely rich and flexible, right?
It becomes possible to stand up different OpenStack clouds that may not necessarily work with one another. An example would be if you're running OpenStack backed by KVM versus OpenStack backed by Xen: they behave differently, they run different types of images, and there are some things you get with one and some things you get with the other. So you see this multitude of ways of configuring things, and even more than one way to do the same operation. Image upload is a perfect example of this. You actually have four different ways to upload an image right now: the version 1 API, the version 1 API behind the Nova proxy, the version 2 API, and the version 2 tasks API. This is just an example of things that are all OpenStack, but the choices can make it difficult to decide which is the best way to do it. And on top of this, policy isn't discoverable. Right now, if you have Glance implemented in your cloud in some way, you actually don't know how it's implemented, and it's difficult to discover. Plus there's a rapid release cadence, so there are products built on many different versions in production, and you want to be able to say that one version of OpenStack can talk to another, like a Kilo cloud can talk to a Liberty cloud. Upstream development has actually done a pretty good job here, providing guarantees about how long APIs will live and what the deprecation policies are. That's one of the nice things about OpenStack: they've decided to take a longer view and try to make sure that when an API is there, it's there for you. But still, there are many tools out there, and sometimes it's hard to know exactly what clouds those tools support. So you have a favorite SDK or a favorite tool or a favorite application: how do you know it's actually going to run on top of an OpenStack cloud? All of this has manifested in, well, how many people here are familiar with the shade library? A few hands have gone up. Shade is an interoperability library that was developed by the infrastructure team to pave over the differences between all of the donated clouds they're using in QA right now. But since it's been used as a tool by them, it's also become a popular client for accessing different OpenStack clouds, and a very powerful tool. So on one hand, it's wonderful that we have an open-source community that allows a tool like shade to exist. But then you have to wonder: if we have an interoperable standard, why does something like shade have to exist? So here I'll turn it over to Mark. So now that you've got the context for why we care about interoperability between clouds, and maybe a little flavor for some of the variables that are in play (different versions of OpenStack, different policy configurations, those kinds of things), we thought we'd talk a little bit about some of the challenges that are out there today, things we're hearing about from developers, from operators, from end users, you name it. These are a few to get the wheels going, and they should give you a down-to-earth feel for what's out there today. So one example that Chris mentioned earlier was image operations. Today, if I want to get an image into a cloud, there are several ways I can do that.
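In practice, client tooling copes with this by probing the cloud rather than assuming. As a simplified, hypothetical sketch (the helper name and document shape here are illustrative, not any real SDK's internals), a client can parse the version-discovery document that an OpenStack-style service returns from its root URL and pick the newest mutually supported API version:

```python
# Hypothetical sketch of API version negotiation against an OpenStack-style
# version discovery document. The document shape mirrors what a service like
# Glance returns from its root URL; the helper is illustrative only.

def pick_api_version(discovery_doc, client_supports):
    """Return the newest API version both the cloud and the client support."""
    usable = [
        v["id"]
        for v in discovery_doc.get("versions", [])
        # Deprecated endpoints may still be running; prefer current ones.
        if v.get("status") in ("CURRENT", "SUPPORTED") and v["id"] in client_supports
    ]
    if not usable:
        raise RuntimeError("no mutually supported API version")
    # Version ids look like "v2.3": sort numerically, newest wins.
    return max(usable, key=lambda vid: tuple(int(p) for p in vid[1:].split(".")))

# Example: a cloud that still runs v1 but marks it deprecated.
doc = {
    "versions": [
        {"id": "v2.3", "status": "CURRENT"},
        {"id": "v2.0", "status": "SUPPORTED"},
        {"id": "v1.1", "status": "DEPRECATED"},
    ]
}
print(pick_api_version(doc, client_supports={"v2.3", "v2.0", "v1.1"}))  # v2.3
```

Even then, as the speakers note, the deployment's policy can still reject calls that the version document advertises, which is exactly the discoverability gap DefCore keeps running into.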
There are also several different ways I can do other operations on images, things like listing images. Where that manifests in a problematic way is that different SDKs and different toolkits have sometimes chosen to implement just one of those ways. So maybe if I'm using jclouds, I get the Glance v1 API, whereas if I use fog, I wind up using Glance v2. That makes it a little difficult to write a bunch of different apps and make sure they actually run on the same cloud, because all your apps might not be using the same SDK or the same tooling. So image operations are just one example of that, and we'll talk a little later about some of the work going on in the community around it. Networking is kind of an interesting space as well. When people think about networking in OpenStack, they generally think of the two different ways to do it: Nova networking or Neutron. It turns out that even if you look at just Neutron, which is what the vast majority of OpenStack clouds run nowadays, there's a lot of nuance in how you can set up your Neutron networks as well. You can use provider networks. You can use floating IPs.
You can do tenant routers. There are lots of different ways to do networking, and external connectivity in particular has come up as a pain point. There are certain clouds where, when you boot a VM and attach it to a default network, you've automatically got an externally routable IP address; with others, you need to go attach a floating IP to it. So depending on what product or what cloud you're using, you may wind up with lots of different ways to do external connectivity. That's one that's come up a lot as well. We talked a little bit about policy and configuration discovery. It turns out that in lots of different clouds, especially in the public cloud space, people are pretty opinionated about the policy settings they pick. Just for background, for those of you who might be new to OpenStack: almost every API in OpenStack winds up being controlled in some way by a policy.json file, which says this is an action that's available to regular users, or maybe only to admins, or maybe to some other role. There are default settings that ship with the upstream projects, and often those are tweaked for various reasons. So maybe I don't want to expose the Glance v1 API to the general public in my cloud because I have performance issues or security concerns. It turns out there are quite a number of providers that actually prevent you from using Glance v1 as an end tenant. The same thing goes for some of the other APIs; those are just examples. But there's not a good way today to do discovery of policy settings. So if I'm looking at a couple of different clouds, a couple of different OpenStack products, maybe a private cloud and a public cloud, I basically have to try and catch to figure out how I'm going to do some of those operations and what ways of doing them are available to me, which is not a great way to go. Any time you're injecting a whole bunch of if branches into your code when you're writing an application, you've probably got an area that could be simplified. API iteration is also kind of a concern. I think they showed some of this in the keynotes this morning, and if not, the user survey certainly has it: if you look at who's adopting what versions of OpenStack and what's in production today, there's actually a bit of a lag. There are not many production Mitaka clouds today; there are quite a few Liberty clouds, and I think even more Kilo clouds still. So there's a lag in adoption, and that manifests itself in terms of API deprecations. When you look at, say, the top three or four versions of OpenStack out there, it's possible that you may see APIs deprecated and/or removed over the course of three or four releases. So if I have, say, an Icehouse private cloud, and I also want to run some of those applications on, say, a public cloud that's running something much newer, the same APIs may not be available to me in both places. That's an interesting nuance people run into when I think about the full lifespan of what's being adopted in OpenStack today and how durable some of those clouds are. Provability is an interesting point for us on the DefCore committee when we receive testing results. We mentioned earlier that in order to get an OpenStack Powered logo, you have to submit test results that show that your product actually does all the things the DefCore guidelines say it should do. At the end of the day, that's really a text file you're submitting to us, so it is possible that you could fake those results. We would hope that you wouldn't, and there are actually legal consequences for doing that built into the logo contract, but it is possible for us to receive falsified data. And implicit test requirements are kind of an issue for us as well. When we look at what's in the Tempest tree today, the vast majority of the tests we have for our suite are Tempest tests.
In a lot of cases, the Tempest tests make opinionated choices about how they set up the thing they're actually going to test. For example, if I have a test that says "start a VM," well, to start a VM I actually need an image first. And in some cases the Tempest test may pick, say, the Glance v2 API to accomplish that image upload in order to set up for the real thing it wants to test. Well, maybe it turns out that Glance v2, for some reason, doesn't meet the rest of our criteria, and it's not a thing we want to require everybody to offer. How do we ask people to run that test if we don't require the thing that's needed to set up the test? So that's a bit of an issue that we're now working with QA to rectify. And then there's finding good data on what's actually used. One of our criteria is that capabilities we require in DefCore should be widely deployed. We're essentially a trailing indicator of market acceptance in a lot of ways, and a lot of the other criteria actually center around that theme as well, things like: is it widely supported by SDKs? Is it widely supported by external clients?
If you have something that's not very widely deployed, the chances of, say, jclouds or fog picking it up and supporting it are probably pretty small. So at the end of the day, we have to make a judgment call on what we think is widely deployed out in the industry: among private clouds, among public clouds, among appliances. There's a lot of data to pore through, and sometimes it's hard to find good data that's readily available and doesn't require weeks and weeks of research. Project documentation as well: this is an interesting thing that's come up a couple of times over the past year. Projects often offer different ways to do things, and sometimes there's tribal knowledge about which ones you really should be using. For example, if you look at the Nova community, there's a lot of tribal knowledge that says: yeah, with Glance, maybe you don't want to expose the v1 API externally. That was really a thing built for Nova, so Nova should be the only thing talking to it, and maybe you shouldn't expose it to the outside world. Well, if you look at the documentation, that wasn't always written down. And so there are a lot of products that do expose Glance v1 to the outside world, and in some cases it's perfectly fine to do so; that's a choice they've made. Other things we ran into: we talked a little bit about Keystone v2. It has fairly recently, I think, finally been deprecated, or is on the road to it, but it had been listed as supported for quite some time. And when we went and talked to the developers from Keystone, they actually said, you know, nobody's really maintaining that anymore; it's just sort of sitting there. So as an end user, should I really consider that a supported thing, or should I be looking at something else?
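Choices like hiding Glance v1 from end tenants are typically wired up through each service's policy.json. The evaluator below is a deliberately simplified sketch of the role-based idea (oslo.policy, the real engine, supports a much richer rule language than this):

```python
import json

# Simplified sketch of policy.json-style enforcement. Real OpenStack uses
# oslo.policy; this toy only models "role:xyz" rules and the empty rule.

POLICY_JSON = """
{
    "image:get_v1": "role:admin",
    "image:get_v2": "",
    "volume:attach": "role:member"
}
"""

def is_allowed(policy, action, roles):
    rule = policy.get(action)
    if rule is None:
        return False   # undefined action: deny by default in this sketch
    if rule == "":
        return True    # empty rule: any authenticated user may call it
    kind, _, role = rule.partition(":")
    return kind == "role" and role in roles

policy = json.loads(POLICY_JSON)
print(is_allowed(policy, "image:get_v1", roles={"member"}))  # False: v1 is admin-only here
print(is_allowed(policy, "image:get_v2", roles={"member"}))  # True
```

The point stands regardless of the engine: nothing in the API tells a tenant which of these rules a given cloud has chosen, so clients typically find out by getting a 403.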
So again, it's a little bit of tribal knowledge sometimes that we have to work our way through. Other challenges: we talked about discoverability, mostly in the context of policy. There's also versioning to worry about, both of the APIs and of the underlying cloud in some cases, because there may be a marked difference, not necessarily functionally, but maybe in terms of performance or security, in the way clouds do things between different versions of OpenStack. So that's important as well. Image formats: there's not a good API today that says this cloud supports VMDK uploads, and this one supports only raw uploads, and this one only supports qcow2. In fact, in a lot of cases Glance will just let you upload whatever image format you want, and then it turns out that when you try to boot from that thing, it doesn't always work. That's a pain point for a lot of folks. So again, what the cloud provides, and how it actually does things, is an important thing for people to be able to discover. And if you're interested in discoverability, I don't have the time slot off the top of my head, but there is a whole session on that later this summit, so look for it on your schedules; there'll be some interesting talks. Lack of awareness about DefCore: DefCore is a relatively new thing. Like Chris was saying, we only really started having guidelines enforced in the past year, so for a lot of folks this is still new material they're figuring out. Among the developer core within OpenStack, there's still some confusion about what kinds of things products or projects should be taking into account when they're making technical choices. Things like: do we need to keep this API on life support, or do we deprecate it? What should our policy settings be? How should we write our tests?
In our case, it turns out it's kind of important that you don't use admin credentials when you write your tests, if you don't need them, because that way we can have end users actually run these tests against, say, public clouds where they don't have admin credentials, and prove to themselves that those clouds really do the things they say they're going to do. And among consumers as well, it's still kind of interesting that, although we've been up in the keynotes a couple of times now, people still don't necessarily have a good feel for what having an OpenStack Powered logo means for them and why they should care. Some of that's because it is, very simply, a badge. It's a logo that you can put on a product, and the right to call yourself OpenStack. Under the hood of that logo there's a whole list of things your cloud is attested to do, but those things aren't visible in the logo itself. So while it's very easy to look at a logo and say, okay, that's probably a cloud I should gravitate towards, it doesn't actually give you the full nuance of what's under the hood. Getting to that next level of understanding can be a little challenging for folks. And then finally, mapping capabilities to APIs. When we define a capability in DefCore, we generally do it in a loosely plain-English sort of way, like Chris was saying earlier.
It's things like "create VM," "upload image," "list images." In some cases that doesn't have a readily apparent mapping to a particular API, because, like I say, maybe there are multiple versions of an API, or maybe there are multiple ways to do a certain thing in OpenStack. So it's helpful for us to be able to map the two together, and that's a project we're brainstorming about right now. Moreover, when we talk about the actual tests we use to test those capabilities, there again they may map to several APIs, in some cases as part of a test fixture and in some cases as the thing we're actually trying to test. So there's a lot of work for us to do in sorting those things out. So, lots of interesting challenges. Like we say, it's fantastic that OpenStack is so incredibly rich and flexible and lets us do all these things, but it obviously makes a little work for us when we talk about interoperability. So we want to talk a little bit about some of the things we're doing about it, so we don't leave you all with the impression that these are just problems nobody cares about. Clearly a lot of people do care about them; we've had a whole lot of discussions over the past year and a half, and some pretty tangible actions have come out of that. That's also why you'll see DefCore and the OpenStack Powered logos show up in things like keynotes and on the OpenStack Marketplace. So first of all: we exist. Plain and simple, the OpenStack Foundation and the board of directors cared enough about the interoperability topic to set up a group to work on it, to invest quite a lot of marketing push, and to put some real wood behind that arrow. So this was a concern all the way up to the board level, and it's pretty interesting to see OpenStack marking it as a thing they really care about. And we actually do use a measurable standard: we have a set of tests that you have to pass. So there's a standard being set, and improved all the time. We roll out a new guideline about every six months, so that cadence matches pretty well with the OpenStack software releases, although we're offset by a couple of months. So it's a continuously improving standard. It's very difficult to get a lot of the research done on these decisions, so it can be a little slow at times, but the fact that we've got a standard that folks can see, can get their heads around, and have tools to test makes a big difference. Working with vendors to understand the challenges of downstream deployments: in some cases, we get it wrong. We may require a capability, or put something into advisory status, and all of a sudden we get a bunch of public clouds or private cloud distributions raising their hands and saying: we don't support that, we don't have any intention to, and here's why. We do have pressure-release valves built into the system for that, so we can flag capabilities and say, okay, we got this one wrong, it doesn't actually meet the criteria for these reasons, and we can make it not required. If I can add something to that: with these safety-release valves, what I'm realizing is that there have been vendors that have approached us and said, we're interested in passing these tests, and then when they see they're not passing some of them, they go away and try to understand why, and maybe we don't hear back from them. What I, as a Foundation staff member but also as a DefCore working group member, would like to see is more vendors coming in and saying: these are the problems we're having, so that we can try to understand and work with them on solving them together. So really, if you're a vendor, don't be shy about expressing your problems, because we want the standard to be the best it can be. That means making sure that downstream vendors are producing products that comply with a minimum standard, but also that we're setting that standard fairly, and if it's not fair, figuring out what we can do to make it better, either by amending the capabilities or by fixing the problems in upstream testing and upstream development that would take those problems away. So on this point, I'd just strongly encourage vendors to approach us and talk, because that's part of the process that should be exercised. And on a similar note, we've also spent a lot of time lately talking with developers upstream, looking at what APIs are out there, what problems people are having, and what we can do to solve those things.
So, for example, we've had a lot of talks with the Keystone folks about whether v2 is actually going to be maintained or not, and those conversations were maybe at least one contributing factor leading to the deprecation of those APIs, since they weren't really being maintained very well. On a similar scope, we've had some discussions with some of the Neutron folks around "get me a network," one of the new APIs for glossing over some of the implementation details of how to get a VM onto the network. And various other examples as well. Working with QA to improve testing: we talked about the implicit-requirements problem we have with some of the tests, where a fixture for a test may require things that DefCore doesn't. There are cases where we've worked with QA to solve those, and we've also run into cases where Tempest tests are buggy, or maybe the capabilities they're testing are buggy. So there's a feedback loop being formed there to help us both improve the tests and improve the products. Collaborating with technical teams to identify key issues with real clouds, public, private, et cetera: there's a two-pronged piece here. We talk to end users of clouds when we can, and we also talk to vendors who make OpenStack products, because it turns out that in a lot of cases vendors can aggregate a lot of the feedback they're seeing from all their customers back to us. So being in contact with the people producing those things is important to us. As an example of something we've started doing: we ask that when vendors submit their test results to us, they don't just run the tests we require them to pass, but actually run the full battery of Tempest tests. That gives us an idea: okay, yeah, you passed, great, here's your logo agreement. But it also allows us to see: oh, you know what? That thing we were considering adding in the next guideline? Five of these new results we just got in don't actually support it. So let's drill down and figure out what's going on there. So it's good feedback for us, and it reflects back up to the technical community in some cases as well. We've had feedback about why vendors have chosen not to expose Glance v1 to the outside world; we've had feedback that there are too many ways to do "list images." Those are bits of feedback that we can feed back upstream too. Yeah, and for anybody who runs these: there's a project called RefStack, which is used as a front end to Tempest and can be used to submit these test results. Really, go and run all of the API tests against your cloud if you can, and submit those results to us, because it not only helps us now, it helps us in the future too. As we refine the guidelines, we're able to go back and compare them against previous test results, and as Mark was saying, it's a way to see what's out there. So even though we may not be looking at particular capabilities your cloud has right now, in the future it can become very important for us to evaluate them. And the last bullet is providing some meaning for the OpenStack logo. Basically, at the end of the day, what the Foundation would really like is that when people see a product that calls itself OpenStack and has that stamp, that badge, on it, they actually know a little bit about what they're getting, and that it's a meaningful thing people seek out in the marketplace. So providing this list of capabilities that you know you're getting when you see that logo on a product is really good for the OpenStack ecosystem, and for users as well. Conversations that we're having: so, awareness is half the battle. Like I said, in a lot of cases DefCore is just so new that people don't really understand either what I, as a developer on one of the OpenStack projects, need to care about or think about, or what I, as an end user of OpenStack clouds, am really getting from that logo. So we have a lot of discussions, both within the project and with folks outside of it. Just to give you examples of some of the things we've discussed with technical teams: I won't bother to read them all through here, but there are a lot of things on this slide that are either conversations we've already had or conversations still in flight with a lot of technical communities, looking at what we can require in future standards and what we should probably cut out of our standards in the future as well. Okay. So, DefCore is now a year old, and you might be wondering what's new and how it has changed. Well, one of the major additions happening this year is that networking capabilities are advisory. One of the reasons networking wasn't a direct requirement in the initial versions of DefCore was that we had two different networking models: we had nova-network and we had Neutron. But now it's very clear that the community has gotten behind Neutron as the networking model; it's become much more mature and much more stable. And so we're going to be moving to requiring networking capabilities in the next set of guidelines, which will be approved in August of this year. Keystone v2 has been dropped.
Previously we required both Keystone v2 and Keystone v3, largely because when you installed an OpenStack cloud, both endpoints were running by default and both were supported APIs. But the discussion around DefCore and what different deployments were doing fed into the Keystone team's decision to deprecate v2. Since that's a forward-looking decision by the community, it gave us the chance to say that the v3 capabilities are really the interoperable standard going forward.

RefStack, at refstack.openstack.org, went live at the previous summit and started accepting results. It's now the place where, when you approach the Foundation for the interoperability logo, we require you to upload your results. That does a few things: it allows the working group to actually look at the results and make decisions based on them, and it provides a public place to link to, so customers can see that yes, you passed these tests.

We've also been looking at how to expand what DefCore covers and which projects we cover. As we've moved into the Big Tent world, more and more projects have started running their own functional and API testing inside their own trees, so we needed a way to start bringing other projects in. Tempest has now expanded its ability to run tests via plugins, which keeps the interface for running the tests the same (you always run it as Tempest, using plugins) while allowing the capabilities to expand to Swift, Heat, and other projects. That's been pretty exciting, and we worked with the QA team to move it forward.

Finally, there's been a really good
discussion lately, if you've been following the mailing lists, about what to do with the Nova proxies. The proxies exist because in the early days of OpenStack there was just Nova and Swift. As projects broke out, and identity, storage, and networking became their own services, these API calls were proxied through Nova to maintain backwards compatibility. But now there's an active discussion about whether the Nova proxies are really serving us anymore, and whether it's time to start moving away from them. That's being reflected in the capabilities as we begin to add direct API calls for images, storage, and networking. It's a pretty exciting development, and I think it's going to add a lot more power and weight to what you can do with an OpenStack cloud.

Coming soon: the DefCore working group is putting together a report on the top interoperability issues. What's the deadline for that report? It will probably come out around the time of the next summit, so look for it in Barcelona. It's going to be periodically updated so we can measure progress on the big barriers. Really, that's what this last year has been about: getting the standard out there so we can truly understand the barriers that exist in the real world. In that way, I think it's been a success. What this does is drive conversation.
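To sketch what moving off the proxies looks like in practice, here is an illustrative mapping from a few Nova-proxied calls to the native service APIs the new direct capabilities target. This is not an official DefCore artifact; placeholders like `<glance>` stand in for the real endpoint URLs a client would get from the service catalog.

```python
# Illustrative mapping (not an official DefCore artifact): legacy "list"
# calls that Nova proxies to other services, and the native APIs that the
# new direct capabilities point at. Service placeholders like <glance>
# stand in for real endpoint URLs discovered via the service catalog.

NOVA_PROXY_TO_DIRECT = {
    "GET /images":      "GET <glance>/v2/images",             # image service (Glance)
    "GET /os-volumes":  "GET <cinder>/v3/{project}/volumes",  # block storage (Cinder)
    "GET /os-networks": "GET <neutron>/v2.0/networks",        # networking (Neutron)
}

def direct_equivalent(proxied_call):
    """Return the native-API call for a Nova-proxied one, or None if unmapped."""
    return NOVA_PROXY_TO_DIRECT.get(proxied_call)

print(direct_equivalent("GET /images"))  # → GET <glance>/v2/images
```

The point of the direct calls is exactly this kind of clarity: a portable application talks to each service's own API rather than relying on a compatibility shim inside Nova.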
We're trying to create accountability. Over the next cycle, one of the biggest things facing us is working on the tests. There are ways we can improve them. For example, there are a number of Neutron capabilities whose tests require administrative credentials, even though you don't actually need administrative credentials to use those capabilities. That's somewhere the DefCore committee can become very active: looking at the tests and modifying them so the capabilities can actually be admitted to the guidelines.

It also means looking at the tests and asking whether some of them have unintended side effects that pull in new capabilities. Here's an example that was brought up to me this morning: a test that lists images actually exercises the image-snapshot capability. But image snapshot isn't a capability we require, so that particular test demands an additional capability almost by accident. One of our goals is to look at these tests and figure out how to refactor them so the external dependencies can be moved out, and you're truly testing what you say you're testing.

This involves working with the QA community on unnecessary admin-credential use and issues like it. What's coming out of this, too, is a more formal discussion of what it means to be an interoperability test. There are going to be some great discussions later this week: there's a DefCore working session where this will be one of the big issues, which will feed into a
joint DefCore/QA working session, where these issues will continue to inform the ways forward in how we can collaborate.

We're also talking about use cases like NFV. Is there an argument for looking at special applications like NFV and saying that maybe we should have different guidelines, or a different standard, for what it means to be an NFV-ready cloud? That's something we'll be considering going forward in these discussions, so we can have a little more differentiation for clouds targeted at particular uses.

And then there's other stuff. We are an open working group, and everybody is welcome to come participate, contribute, talk, and ask questions, especially if you're facing these issues yourself. We have weekly meetings on Wednesday mornings.

Looks like we lost our text there. Okay, well, that was the last slide anyway. So, are there any questions? If you have questions, just come up to these mics.

Nobody? It was all perfectly clear, then. All right, well, thanks for coming.