Hi, Dan Kohn here. I know Zoom is having trouble today. I was able to get in fine, as were, by definition, all of you, but hopefully we'll get enough people here for the meeting to still be worthwhile. As usual, we'll wait two more minutes and start at five after the hour.

Okay, well, let me go ahead and kick things off. This is the first meeting of the year of the Reference Architecture group. We largely chat about the landscape, but other related topics as well. Unfortunately, it looks like Ken has not been able to join yet. For the agenda, Mehmet from Verizon is going to tell us a little bit about some of their architectural work. But just before that, I wanted to give an overview of where the cloud native landscape stands, and I want to introduce Andre to the group (I know it looked like he dropped off; oh no, there he is). Andre is our developer contractor in Spain, until a month or two ago in Russia, who has done the vast majority of the coding for the interactive landscape. So if you find the tool useful, or have ideas on how to make it better, he's the one to thank and the one to make suggestions to.

He and I have both been pretty busy over the holidays and the last month refactoring the application to be an upstream npm module. I included links to this in the email, but we're pretty pleased that the app itself now lives upstream; the CNCF landscape is one downstream of it, and the Linux Foundation Deep Learning landscape is another. Our hope is that over the year a bunch of other Linux Foundation projects, and potentially other projects, will make use of the code and wind up doing their own downstream versions. As with everything from CNCF, it's all licensed Apache 2.0, so any of you are also welcome to make use of the code and create your own versions or your own ideas for landscapes.

I don't know that there are a lot of other updates. You're very welcome to subscribe to the feed of commit changes, and you can see every single change that's made. We are making several changes per week as projects come in, or companies go under, or projects disappear or stop getting committed to, so there's been a pretty nice pace of changes and updates.

The one other comment I would bring up to this group: someone, I forget who, opened an issue mentioning that the Application Definition and Image Build section has a lot of very different stuff in it. I largely agree with that, and it reminds me a little of what was previously the service management section, which we were then able to successfully pry apart into remote procedure call, service proxy, API gateway, and service mesh. But when I look at Application Definition and Image Build, it's not instantly clear to me how we would split it, or what two subcategories (and it doesn't have to be exactly two) we would split it into. So let me call on Lee for a second, or Randy; I'm curious if either of you has a view on that section in particular and an idea on how we might clean it up a little or segment it out.

As I look at the landscape, I filtered on App Definition and Development. Okay, now I see; I had it filtered down a bit, and the three projects that came back were Vitess, NATS, and Helm.
And yeah, it's maybe Vitess, maybe the database portion of this, that feels a bit forced, which I think is what you're already identifying. And then, is there a pre-existing bucket that it might fit into more appropriately? Right now Platform is for the most part relegated to Kubernetes, is that right? The very final section is the non-Kubernetes one, the PaaS and container services section. So unless I'm mischaracterizing Vitess... well, yeah, there are some stateful concerns there with Vitess. Rook is certainly...

Lee, I'm still having trouble hearing you, and I'm not quite sure what you're looking at. Are you looking at the same cloud native landscape? Because Vitess is at the very top left under Database, and Platform is over on the middle right. Hopefully maybe it's the microphone. Oh, yes. So wait, let me stop you for a second. Can you click the landscape button? Great. Now, right in the top middle, Application Definition and Image Build is the area I wanted to ask you about. You might go to the top right and increase the size a little; a little lower; no, yeah, there. So see, we've now made the formerly static landscape totally interactive here. If you want to dive in, you can click the header that says Application Definition and Image Build, and that'll show you that there are 21 projects in this space. And there's a lot of variety between Helm and Packer and Telepresence and the Open Service Broker API and the OpenAPI Initiative. So my concern here is that these are kind of a lot of different things, but are there two subcategories we could easily split them into? That hasn't been clear to me yet.

Yeah. Yes, between Habitat, KubeVirt, Minikube... yeah, I see what you're saying. Because some of these are just definitions, and then some of these, Habitat in particular, have their own, I don't know if you'd say runtime, you wouldn't necessarily say runtime, but it's a day-two, ongoing thing, and KubeVirt is really an API. So, Dan, let me digest that. Yeah, I didn't mean to put you on the spot too much.

And Randy, is there anything you might add? Yeah, I'm in a similar position, just digesting this a little and looking through it. But for me, when I look at some of these things, some of this stuff has a little bit of crossover with configuration management. You've got Habitat in there, and there's kind of a BOSH-y flavor to some of those things. At first blush I wouldn't have any obvious reaction of, oh, that just shouldn't be there, or this is a bad structure, so I don't think there's any critical flaw. Is there a better way to organize stuff? Maybe, but I think incremental refinement is where we're at at this point, because that category is pretty broad, right? Application definition. And image build. Yeah, which makes it even broader. Yeah.

Okay. So I would say the mailing list is open. Would anyone else like to comment or make a suggestion here? Well, as the person who got... this is Ducey.
As the person who got Habitat onto this, back when I worked at Chef, the challenge we had was, yes, it does more than just application definition. There are a lot of day-two components, but we ran into the blocker of needing to be in one box. So I think that's just something we're going to continue to suffer with on the landscape.

Every project is a beautiful snowflake, Michael, that deserves its very own category to fully appreciate its unique aspects. I do think it's helpful to think of the categories, or the subcategories, as being the least bad place to describe what a project does. My perspective is just that when you look within a category and ask how far apart the different things in a subcategory are, this subcategory maybe has the biggest range of stuff in it.

Yeah. But to your point, having this npm module gives us the capability to go back, and there's no reason we couldn't blow up Application Definition and Image Build into subcategories and then talk about where each tool sits within that broader scope. So if you could drill down into Application Definition and Image Build, and inside of that you have subcategories, right? That could be something that's useful. I'm open to that.

It looks like Ken just joined. Ken, just before we go over to Mehmet, we were talking about Application Definition and Image Build maybe being the widest, most expansive category we have on here right now. And Lee, since you're looking around, would you mind popping up the LFDL link that I shared a minute ago in the chat window, landscape.lfdl.io? You bet, it's in the chat window if you want it. Or, I don't know, is that... yeah, there you go. This is obviously a totally different space, and we don't need to spend any time on it, and I'm not going to try to defend the exact category choices. But I do think it's pretty cool that all the underlying code is able to be reused between them; literally the only difference on this downstream project (you can click on any of the boxes there) is that it has a different YAML file for the landscape and different images loaded in. Otherwise, everything works exactly the same way. And they're also very open to feedback, if folks have ideas they'd like to see or suggestions on how it should work.

Okay, go ahead. This is Radia. I did have one area in the landscape that has been in the back of my mind that I thought might be interesting to discuss, but I don't want to mess with your agenda. Why don't you go ahead; we'll see if we can rewrite the whole thing in the next three or four minutes, and then we can get to Mehmet. So I've navigated there, but if you want to just go back to the overall landscape; it's the other tab, yeah. There we go.

So one of the things you see here in the orchestration and management layer is that you've got the RPC stuff down there. And to me, that is really an application development component. If I'm designing a distributed application, I might decide to use gRPC or Apache Thrift, and to me those sorts of architectural decisions are the same ones I'm going to make about whether I use NATS, or RabbitMQ, or Kafka. These are communication schemes for your microservices.
And while I like the RPC bucket, and I think it's a really important bucket that people should recognize, to me it's very parallel, in terms of use case, decision process, and kind of componentry, to what you'd have for messaging. The two ways your microservices are going to interact are request-response style, like REST and RPC, and then more async messaging types of solutions. Those are different in kind, because most of the messaging systems don't involve the client talking directly to the server; there are platform components, your Kafka brokers or your NATS brokers or what have you. In an RPC scenario they're talking directly, and it's more the technology you're selecting for that interoperability, the library and IDL generation stuff. So that's one thought.

And then, inside the RPC bucket: I don't know if there's anybody promoting Avro as an RPC solution, but if there is, I would love to chat more with them about that. Being a bit of an RPC buff, every time I've looked at Avro as a realistic way to do RPC, I have found that it doesn't really hold up. Avro is great for serializing data to disk where you want the schema embedded with the data, so that you can retrieve it many years later without having to remember what the old schema was. It's really awesome for that. But that approach hasn't worked out well for RPC, and a lot of the RPC stuff they show on their website doesn't actually even work; it's not maintained, from what I can tell. So I would say that pointing someone at Avro for RPC is almost doing them a disservice. Maybe I'm wrong, but that was my read the last time I looked at it with some care, about a year ago.

And the other thing that I think is a really useful tool that's not mentioned in that bucket, though it doesn't fit in there exactly, is Protocol Buffers. I'm not sure where it is on the landscape, because I assumed it was on there somewhere, but it is a serialization scheme. No, it's not. It's not? No, we don't have protobufs on here. Obviously they're a dependency of gRPC, but the argument has been that they operate at a different layer, and not necessarily a layer that we're covering. And that's true, but Avro itself really is a serialization scheme too, so it's a lot more akin to Protocol Buffers. And Protocol Buffers was used by a lot of people before gRPC ever existed; people built RPC schemes on Protocol Buffers without gRPC. So while... Oh yeah, and you still can. My issue is just that, as I pasted in, Avro does claim to also offer RPC. I totally believe you that they don't do a good job at it, or that it's not well maintained, but it's right there: they're not just rich data structures in a compact, fast binary format, they also offer remote procedure call. In that sense they claim to be one, while protobufs has never claimed to be an RPC mechanism. Although, if you looked at the protobuf website before gRPC, there were all sorts of pointers to how you could quickly do RPC with it. And I guess it would be interesting to get some feedback from people on the Avro project about whether they really feel it's a good idea to offer Avro as an RPC solution. Because I really, really... Yeah.
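A minimal sketch of that "Avro as serialization" point, using the third-party goavro library for Go; the library choice, schema, and field names are illustrative assumptions rather than anything named on the call. The schema alone is enough to round-trip data, with no generated classes and no RPC machinery involved.

```go
package main

import (
	"fmt"
	"log"

	goavro "github.com/linkedin/goavro/v2"
)

func main() {
	// A tiny record schema. With Avro, the schema (not generated code) is what
	// you need to read the data back, which is why it shines for data at rest.
	codec, err := goavro.NewCodec(`{
		"type": "record",
		"name": "Greeting",
		"fields": [{"name": "msg", "type": "string"}]
	}`)
	if err != nil {
		log.Fatal(err)
	}

	// Encode a native Go value into Avro binary.
	bin, err := codec.BinaryFromNative(nil, map[string]interface{}{"msg": "hello"})
	if err != nil {
		log.Fatal(err)
	}

	// Decode it back using only the schema.
	native, _, err := codec.NativeFromBinary(bin)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(native) // map[msg:hello]
}
```

Any reader that has the schema can decode the same bytes later, which is the at-rest use case being praised here; none of this says anything about Avro's RPC support.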
And I don't think anybody's actually... If you wanted to do an email exchange with folks there, I'm very happy to remove them. I feel like if we had a way to make it okay to have them in there, that would address the fact that they are great for serialization, and if you wanted to build your own RPC scheme, or try to make the one they show work, you could. And then protobuf would be much more of a related kind of project. But if it feels like we should have protobufs in there somewhere, I don't know. We could rename this category to be serialization and remote procedure call. That might be great, and then put protobufs in. My slight concern is, do we then need to put in another 25 things as well? True. I mean, we're not going to put JSON in. Yeah; Thrift has integrated serialization, Avro has integrated serialization, and gRPC almost explicitly has protobuf, so in that sense gRPC has explicit serialization. Right, though they do suggest you don't have to use protobuf if you don't want to.

I'm happy to come back to it. The bigger thing to me would be moving it up a layer, right? I feel like it really strongly belongs next to streaming and messaging and that sort of stuff, rather than in orchestration. Yeah. I think the reason it's here is more the management than the orchestration, and then just the belief that RPC is often a building block for streaming and messaging, even though it can be a direct competitor to it. But I'm curious if anybody else wants to voice an opinion on the topic. I don't disagree with you, though, that RPC can be, and is, also used for app definition and development; that's principally what it's for.

I do think, with respect to both the RPC discussion we're having and, going back, the Application Definition and Image Build discussion, that when I was trying to provide thoughtful feedback earlier I wasn't taking into account that this particular collection is also the highest layer in a series of layers. So it's a good reminder to reflect on that. Part of what Dan just said that resonates with me is the notion of these layerings, and that some of these things potentially get layered on top of others. So that's good to take into account.

Yeah. And I think about it like this. If I build an application, and I have an architecture where there are five microservices, and two of them are going to talk to each other through async message streaming over NATS, and the other ones are going to talk to each other, and to the head end of that system, through RPC, then I will have to pick an RPC solution that I will actually use in my application. But building that into my Go or JavaScript or whatever application has no dependency on the underlying orchestration and deployment platform. I could run that same app on Docker Swarm with Compose, or on Mesos, or on Kubernetes, or on bare metal, and yet it's using RPC. It's an application development tool. If the application I'm developing is Kubernetes itself and I happen to choose gRPC, that's fine, but now you're shifting to an orthogonal vector, right? You're saying my app is the orchestration platform.
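As a concrete illustration of those two interaction styles, here is a minimal Go sketch using the nats.go client; the subject names and payloads are made up for the example, and running it assumes a NATS server at the default local URL. The first exchange is a fire-and-forget event; the second is request/reply, which behaves like a simple RPC even though it rides on the same broker.

```go
package main

import (
	"fmt"
	"log"
	"time"

	nats "github.com/nats-io/nats.go"
)

func main() {
	// Assumes a NATS server at the default URL (nats://127.0.0.1:4222).
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Async messaging: one service publishes an event, another consumes it
	// whenever it arrives; the publisher never waits for an answer.
	nc.Subscribe("orders.created", func(m *nats.Msg) {
		log.Printf("billing service saw event: %s", m.Data)
	})
	nc.Publish("orders.created", []byte(`{"order":42}`))
	nc.Flush()

	// Request/reply over the same broker: effectively RPC built on messaging.
	nc.Subscribe("price.lookup", func(m *nats.Msg) {
		nc.Publish(m.Reply, []byte("9.99"))
	})
	reply, err := nc.Request("price.lookup", []byte("sku-123"), time.Second)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("price for sku-123: %s\n", reply.Data)
}
```

Nothing in this sketch knows or cares whether the services end up on Kubernetes, Mesos, or bare metal, which is the platform-independence point being made.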
So I get it: gRPC is important to people who build distributed application platforms, because those are essentially microservice applications too, but it's an app development tool. It's not a platform orchestration thing. And yes, you should be able to instrument all calls between applications. I should be able to see messages flowing from app A to app B over NATS, and I should just as easily be able to see messages flowing over gRPC or Thrift between app A and app B. That observability is a cross-cutting concern, and that's why it's in the tower on the right.

So that's how this stuff comes to me, right? If I'm an app developer and I don't care about the platform, I'm going to consider my RPC solution. It's going to be an important part of my decision-making process, and one that I make very carefully, because unlike the innards of my microservice, which I can turn over 50 times without affecting anybody else, we're talking about the contract between systems, or between microservices, in my application. That's the same kind of contract you get when you commit to Kafka or NATS or Vitess or something like that. Mm-hmm.

Yeah, it's interesting to reflect also on the constituency of the other groups within the orchestration and management layer, and how infrastructure-oriented those feel, versus what you're highlighting, Randy, which is a bit more of a developer-centric concern. Maybe another way of characterizing things, looking at it through a different lens, is: who is the core persona that has the higher degree of concern for that layer? That may help to reflect on. So, Randy, you're saying this is pretty developer-centric? Absolutely developer-centric. Yeah. And then potentially not necessarily the same for the majority of the rest of the constituency of these projects.

Yeah, and one of the great things about service mesh projects like Envoy and what have you is that they let you monitor all of the traffic between all of these systems. That's their job, and the observability projects like OpenTracing are the plugins. It really speaks to the point that operators don't want to have to care about what the developers have chosen to use, right? You could be using the MySQL protocol, but on the back end it could be Vitess, it could actually be MySQL, it could be MariaDB or Aurora or any number of others. The hooks that let you view this stuff need to know about those things, but clearly MySQL is an application-developer-level construct. It's not that those things are independent and should be completely firewalled off, though the more they are, the nicer it is for the operators, and that's what tools like Envoy bring to the table. Envoy, yes, does let you get some extra juice from a gRPC interaction compared to something else, but that just goes back to the protocol: HTTP being mined for data. You can do the same thing with other schemes. Of course, if you have the protobuf specs for your exchanges that are protobuf-based, even if they're messaging-oriented, your instrumentation can decompose those and grab stuff out of them. So there's never going to be complete independence.
But when it comes to the development process and the kinds of things you need: you need to pick an RPC solution, because you don't want to build one from scratch; you need to pick a messaging solution; you need to pick a storage solution. All of those things are up in that higher level. And if I'm building a platform, I don't pick gRPC; I use gRPC because Kubernetes is using it, and Docker and containerd are using it. Not because I picked it, but because some developer picked it at that higher level.

And is Ken available to come off mute? It's star six, normally, to get off mute. Yeah. And guys, I also want to throw in there that this landscape is so awesome; this is really nitpicking. Well, it's only good because people are working to improve it. The fundamental thing I hear from you is that RPC is a different category than streaming and messaging. I don't think that's always true. I do think gRPC is very much used as a streaming solution at times, and sometimes NATS and gRPC are directly competitive with each other. So I do see the argument for it being up in the application layer. I also see it as being a core piece of infrastructure that interacts with Kubernetes and Envoy and Prometheus and a lot of other projects, so I can see the argument both ways here. But I would just draw a parallel there: in the OpenStack world, RabbitMQ, at least for a while, was the way that all of the services talked to each other, and by that logic you'd have to put streaming and messaging down in the platform layer too.

I guess the main thing to me is that there are some fundamental things you do when you're building a microservice solution. One of those fundamental things is picking an async messaging solution, and the other is picking a request-response scheme, which might be REST or gRPC or a combination or something like that. Those are so parallel. And to your point, you can implement messaging on RPC, and you can implement RPC on messaging; you can easily make RPC requests over NATS if you want to. So they're similar, but if you're going to do messaging, you make certain fundamental decisions and go that route, and then you can still do the RPC stuff, but it's a little less efficient, or a little more work for the developers, and it flip-flops for the other side. They're fundamentally different technologies and different schemes for interaction, but they operate, really clearly to me, at the same level.

Okay, could we put a hold on this for now? I'm happy to revisit it by email or in a call next month, and go ahead and hand it over to Mehmet, because I think he had some architecture work he was going to present. Sure. And another perfectly valid argument for leaving it the way it is, is that there are a lot of icons in the messaging box, and the top layer already has boxes with lots of icons in it. We are a little bit running out of space. But one of the advantages of it being open source is that you could go ahead and lay out your own version and show that it actually fits better that way.

Then should I start? Yeah, could you please? Sure. First of all, my name is Mehmet Toy. I'm with Verizon, and I'm really new to the group.
And the first thing, before I start: is there a document or a link you can send me about the architecture you are working with and the status of the development? That would help me quite a bit; so far I haven't gotten much information about the group. What I'm going to present is certainly very high level, but you're going to see the connection to containers, and what I, as a service provider, am really looking for at this point. For that, I need to give you the cloud service architecture concept, describe it, and then tie it to containers.

The cloud service architecture work has been going on in the MEF (Metro Ethernet Forum). The first draft was out a couple of months ago, and I'm hoping this will be standardized in the first half of this year. For cloud service, as you can see, we came up with a description, but it's actually very difficult to describe. Nevertheless, a cloud service really contains the applications and the connectivity to those applications. For example, if you look at the NIST definitions, say software as a service and the other cloud service definitions, you're going to see really only the applications included; they do not include the connectivity to the applications. There are definitely good reasons for that, because initially cloud started with the public clouds, and people access public clouds via the internet, so people didn't really want to talk about the connectivity or the network used to access the applications. However, now organizations like AT&T and Verizon do offer cloud applications, so things have changed: we have the connectivity and the applications, but also everything else around them, such as the management and so on. All of them together really constitute a cloud service.

A cloud service will mostly have non-virtualized components as well as virtualized components. If I look at the applications, most of them are now virtualized, or written purely in software, but in the networking you will still have non-virtualized components as well. So I basically put all of them in one chart to show how they can relate to each other; of course, this is not the only way to do it. Mostly, at the bottom you will have network as a service, and then on top of that you build infrastructure, platform, software, communications, security, and maybe others. As you can see, you can either build these on top of each other, or you can skip layers; for example, you can have a platform as a service directly on top of the NaaS. That doesn't mean you're not going to have infrastructure components; you will have them, but they may not be offered as a separate service.

If you look at the characteristics, the characteristics are those virtualized and non-virtualized components, as I mentioned, and they could be networks, applications, and also the connections. The connections, as you will see, are between a service subscriber and the application, or between subscribers. And the components, like VNFs and PNFs, can be provided by one operator or by multiple operators, again as you will see later in the diagrams. And another key characteristic is elasticity.
Elasticity in terms of on-demand service configuration, even self-configuration by the subscribers, and also collaboration. And scalability, scaling in and out, is one of the common features that is expected. There are also service level specifications: there is some quality of service associated with the end-to-end service, and maybe other parameters, so there are quality-of-service parameters for the service. And also usage-based billing, which really depends on what the service provider is capable of supporting; in other words, it could be hourly, maybe by the minute, maybe even finer than that. Depending on the capability, they will have usage-based billing.

Here I give you a couple of examples. This is one of the common examples: you have a customer or subscriber using the public internet and accessing public cloud providers such as Amazon, Google, and so on. There are also private networks; customers or subscribers use private networks, like Verizon's, to get to public cloud providers such as Amazon. And there are combinations: they may use the private networks, but as a backup they may use the public internet. There is another variation here as well: the subscriber may go over the private network and access the private cloud provider's applications, such as Verizon's, and that private cloud provider may actually have connectivity to a public cloud provider such as Amazon. The end user or subscriber thinks they are using applications on Verizon, but they may actually be using applications on Amazon. So here the communication between the private cloud provider and the public cloud provider is important. And this is another case we are trying to address; by the way, this work has been presented to Amazon, and a couple of years back to Microsoft and others, by myself.

There is also the subscriber who, whether they use the public internet or private networks, wants to have access to multiple public cloud providers from the same network. And one more is the cloud exchange gateway: a gateway through which cloud carriers (Verizon could be counted as a cloud carrier, AT&T is a cloud carrier) can talk to each other, or cloud providers such as Amazon and Microsoft can talk to each other. This is another much-desired capability that I have heard about from Amazon and others: to have a cloud exchange gateway and use it properly.

Two more examples. One of them is the cloud in a box. That is really customer premises equipment which provides the virtualization infrastructure, and on top of that it may or may not provide applications. It's connected to the cloud service provider, as you can see, which has the cloud carrier and also the cloud provider. In this case, for example, the subscriber at the customer premises uses the infrastructure provided by the cloud provider in addition to what they have on the customer premises. Universal CPE (uCPE) is the best example of this.
In another version, even though the subscriber or customer has an application in the uCPE, they may actually establish a service chain between the application they have and the applications offered by the cloud provider. And another simple example is, of course, the subscriber with a very simple device, basically connected to the cloud carrier, and from there maybe simple browser access to the various cloud applications.

With that, let me describe this one, and then I'll stop to see if there are any questions so far before I continue with the rest. This diagram describes the cloud service actors. On the left side you see a cloud service subscriber, and on the right side is the cloud service provider. The cloud service provider is the one responsible for really everything for the service, from provisioning to maintenance, plus the billing, and is the single point of contact for the cloud service subscriber. That cloud service provider may or may not own the cloud carrier facilities and the cloud provider facilities, or it may own just the cloud carrier portion, and so on. But one way or another, you need a cloud carrier for the connectivity, and you need the cloud provider to support the applications. Now, since about ten minutes ago we were talking about applications, I'll try to give you a little insight into what we mean by applications and how that maps to what was discussed a few minutes ago.

Now I'm going to go into the cloud service architecture in more detail; if you have any questions, please stop me. First of all, the intent, the reason we came up with this, was really to simplify cloud services in such a way that we, as a service provider, can manage them, and also so we can hide the complexity from the subscriber. That was the main intention. On top of that: can we use the tools that we already have? That doesn't mean we're not going to change them; we will change them, but at least we may end up with minimal changes if we have an architecture that at least resembles what we have been using so far.

So the key objectives are, as I said, hiding the implementation, allowing subscribers to do self-configuration, and using the LSO architecture. What is that? LSO means Lifecycle Service Orchestration. That architecture came out of the MEF, and it mainly tries to define the management interfaces internally as well as the management interfaces between operators. For example, two interfaces are defined between operators: one of them is called Sonata, for service ordering, and the other is called Interlude, for service provisioning, or whatever configuration needs to happen between the operators; that interface between the orchestrators of two operators is called Interlude. And on top of that is Service OAM. Service OAM is really health checks, periodic self-checks, and maybe loopback and those kinds of things. Can we also use those, if we have an architecture that is again somewhat similar to what we have been using for other services?

With that, if you look at the... by the way, I'm hoping that my slides are visible? Oh, we can see them; we're looking at them. I'll just note that you have ten minutes left. Oh, okay.
If I have ten minutes, then I need to watch the time so that I get to the containers. Okay, that'd be great.

So this is the user interface to the cloud service provider, and there is a standard interface between them, which we define. The interfaces, as you can see, are broken into two levels: one is called the application interface and the other is called the connectivity interface. In this case, the subscriber is only using the connectivity, but in this other case, subscribers have the application interface as well as the connectivity. And this is the protocol stack. We call these the cloud connectivity UNI and the cloud application UNI: the cloud connectivity UNI runs from layer 1 up to layer 3, and the cloud application UNI from layer 2 up to layer 7. The cloud application interface could be a VM interface, as you can see on this diagram, or it could be an interface to a virtual NIC. Going even further, it could be an interface to a virtual network function, whether it's backed by a container or by a virtual machine.

And I want to go one step further: we are thinking of using that cloud application interface to represent the container interface. Of course there will be some differences in the attributes, but generically we would like that cloud application interface to cover the container interface as well. So for us the question is: how do we interface to a container? How does this container talk to a VM? How does this container talk to another container? It's very important, and again, this is what we're trying to standardize; not just for containers, but for the other virtual components, VNFs and things like that. Nevertheless, we need to really identify what the interface for the container is. Now, there is one more interface we are interested in for the container, which these slides don't show: the interface between the container and the kernel. That interface is also important. We need to standardize it so that containerized VNFs from various vendors are able to run on top of the kernel.

Then this is basically the operator interfaces, and so on; it's really the same approach. Even for the operator interfaces, we have defined the application interface and the connectivity interface, again hoping that if we use the container interface, it will comply with the application interface that we define for these services. And this is the protocol stack, which is pretty much the same thing.

Then come the connections. If I have an end user here, and let's say I have a container over here, or an application on top of the container, then as a subscriber I establish a connection; it's called a cloud virtual connection. This connection has endpoints, called a cloud virtual connection endpoint on one side and a cloud virtual connection endpoint on the other side. So if this is a container, the container is supposed to support that too. The connection can be provided by multiple operators, as you can see, and if that's the case, then you have segments in each operator that are treated and maintained separately, and so forth. And finally, maybe, this is the picture.
If I am the subscriber over here, receiving the service from the cloud service provider, then as you can see I have the segments for each cloud operator, and they are terminated by endpoints, the same on each operator, and together they form the end-to-end cloud virtual connection that the services are basically riding on. That's basically it. I didn't want to go into further detail, so I'll stop there, and I want to hear your questions. Thank you.

Mehmet, this is Dan Kohn from CNCF. My initial feedback is that this is a higher level of abstraction, in terms of block diagrams and such, than we're used to operating at. CNCF definitely tends to look at Kubernetes-related solutions, as opposed to solutions that abstract away to work with any possible technology. Now, that's not a hard and fast rule, and I guess there are plenty of counterexamples to it. But I'm curious whether you've run into the network service mesh work that's being led by Ed Warnicke of Cisco, and the Ligato project. I don't think I have. There is another project called Legato, but that is in the MEF; maybe they are using a similar name. I am not familiar with what's going on at Cisco under the Ligato name. Okay. So I will make an introduction on the mailing list to Ed and encourage you to get involved with that group. The Ligato I'm speaking about (I'm pasting this into the Zoom chat window) is a cloud-native platform for developing plug-in service agents. It's essentially a format for out-of-band signaling in Kubernetes to allow a different kind of pod networking, more of a layer 2 pod networking, for particularly high-performance kinds of interconnects. And it's essentially a way of doing carrier-grade networking that is implemented in Kubernetes as a custom resource definition, a CRD, and so doesn't interfere with all the ways that networking works today. So I would definitely encourage you to look at their work; I believe they're doing weekly calls and are quite engaged, and I think that might be a good group to try to engage with. But maybe you could talk a little about one of the outcomes you're looking for here. This is a sort of high-level architecture diagram, but is there code that's been written implementing any of this?

So first of all, I appreciate all of this, and I will certainly get connected with the Cisco team as well. My intent here was to see whether you have been able to define the interface to the container: is there a way for us to standardize that? That's one. Second is the interface between containers, and the interface from the container to the kernel, as I mentioned. And from the management perspective, we are not tied to Kubernetes; it could be Kubernetes, it could be something else. So I was trying to see whether this team has looked at those issues, whether we can utilize that work, and where we go from there. That was really my intention in coming to this team and presenting this architecture.

Okay. And just to be clear, network service mesh actually has folks involved from Red Hat and Ericsson and a number of other companies as well; it's just a Cisco person who is taking the lead on it. But definitely within CNCF to date, I think CNI, the Container Network Interface, is the dominant way of networking containers.
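To make the CRD point above a little more concrete, here is a minimal Go sketch of what defining a new, purpose-specific networking resource type could look like; the group, kind, and field names are entirely hypothetical, and this is only an illustration of the CRD extension mechanism, not of how network service mesh itself is implemented.

```go
package main

import (
	"encoding/json"
	"fmt"

	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical CRD describing an extra, L2-style network attachment.
	// Registering a CRD adds a brand-new resource type to the Kubernetes API
	// without modifying the existing pod networking (CNI) path at all.
	crd := &apiextv1beta1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "networkservices.example.io"},
		Spec: apiextv1beta1.CustomResourceDefinitionSpec{
			Group:   "example.io",
			Version: "v1alpha1",
			Scope:   apiextv1beta1.NamespaceScoped,
			Names: apiextv1beta1.CustomResourceDefinitionNames{
				Plural:   "networkservices",
				Singular: "networkservice",
				Kind:     "NetworkService",
			},
		},
	}

	// In a real cluster this object would be created through the apiextensions
	// client; here we just print it to show the shape of the definition.
	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}
```

Controllers can then watch for instances of the new kind and do the out-of-band work, which is why this style of extension can coexist with whatever CNI plugin the cluster already uses.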
And network service mesh is designed as an alternative that doesn't need to work through CNI. But I'm not sure about the approach you're describing here; I guess I don't really understand the details of how it would compare to CNI or network service mesh. So I would really encourage you to go work with Ed and that activity over the next few weeks, and then, as you get a better lay of the land of the container world, please feel free to circle back. And maybe we could also do an offline call to talk about some of these issues. I think that would be great; I really appreciate that. I'm sure you're going to post this in the chat window, and I'll take it from there. I appreciate it. Well, I'll keep it on the mailing list, just so that everyone can see it. Oh, okay, cool. Thank you. Thank you very much. Oh, Ed? I know Ed very well; I can be on that call with you. Thank you.

Ken, is there anything you want to add on that front? No, no; I don't want to be overly discouraging here. No, no, I think you were right on, Dan; I was thinking the same thing. Okay. Why don't we stop there? The mailing list is available, and I do think there's a ton of interest and activity, mainly in the network service mesh work, so I would definitely encourage you to get involved there and see if it fits what you're looking for. Cool, I'll definitely do that. Thank you. Thank you, Dan.

Okay. Well, thanks for the call, and let's circle back next month and talk more about whether RPC should move up a layer. Feel free to suggest anything else for that call. Thanks, everyone. Thanks, all. Thank you. Okay, bye now. Thanks, everyone.