Hello, hello. We'll get started here in a minute or two. Okay, sounds good. All right, I'm not sure who's going to join, but let's get started.

Do we have anything new in events? This is the week we're on Telecom TV; we're having an online summit. Open Source Summit is next week, with the ONE Summit regional day. Maybe some stuff there coming out related to the working group and other things that we're doing over here. I think, Oliver, you said Matrix is going to be going to the TWA. That's right, yes. Is there any type of online virtual option for that? No, not that I'm aware of. I'll take a peek, but I don't think so. All right, if there is, maybe drop the link in there for that. Yeah. Hey, Nicola. Anything else interesting happening soon, in September or early October?

All right, let's see. So the Telco Day schedule, I think it's coming out today. It's a half-day event, in the afternoon, a day-zero event, and we should see the schedule announcement sometime today. Oliver, is there anything we need to talk about or think about for the LFN developer testing forum? Very good question; I'll have to come back on that one as well. I don't know what the latest is. Maybe we can follow up about that. It seems like something that might be relevant for the Cloud Native Telco Day and other stuff we're doing, maybe to extend or continue with what we're doing there at Telco Day. Yeah, make sure it's aligned.

I guess let's jump into the PR review. Let's see if there are any others. You know, there's at least one. Okay, this is it. Has anyone had a chance to review it? Looks like we have something from Tom. I'm going to tag you — oh, you're already tagged. Let's see what Tom had to say. Well, actually, I guess we can just do a quick overview.

So we've been working towards getting some more best practices in. This is the first one in a while that's been fully proposed — we've had a bunch of ideas, but this one officially puts a pull request down.
Hoping to keep the momentum and get some more in. So everybody be thinking about it — if you're motivated to contribute, at least write up a summary or anything, and please help put it forward. This is the first in a while.

So: single concern per container, or a single process type. A CNF's containers — or I'll say a CNF itself — may have multiple internal services providing the functionality, and those services may be broken into different processes that are not the same process type. For example, an Apache web server and a database would be two different process types, and they have different concerns: one is about data storage, the other is about serving each HTTP request. The recommendation in this best practice is to split those into their own containers instead of putting them into a single container. That's what this one's about.

I'm going to call on Ildiko and Nikolai in case y'all haven't seen this. Actually, I'll just ask: have y'all looked at these? Nikolai? I haven't had the chance. All right, I'll just go through it now. Nick, have you looked at it? No. All right.

So the summary is pretty much what I just said: we're saying that you should split it out, and we've given some references. There's a little bit that's related to microservice practices — that could be things like size — and there are a lot of other things that tie in on the motivation and the goals we're trying to achieve, the benefits. Quick overview, though: scalability, trying to leverage the platform's orchestration system rather than internal container orchestration. Upgrades are going to be based on the individual containers, so you're building a better upgrade process, handling dependencies and other things — versus having, say, an HTTP server that's upgraded and there's some type of problem, and the dependency between it and the database is tightly coupled because they're in the same container,
rather than loosely coupled in different containers with strong APIs to talk between them. That ties in with managing the service concerns as individual units when you're splitting them into containers. This is just the high level.

And there's this little blurb here — it's a best practice that Docker talks about: splitting them. This is the single-concern-principle-related thing, applied to containers, with Docker. They don't specifically talk about microservices, but that'd be another area.

All right, motivation. This is the section where you look at the different areas that would be important to both the end users — so CSPs and whoever else is running these applications — and the developers, integrators, that sort of thing. So, lifecycle management. You could look at motivation as the problems and challenges we're trying to solve. The first thing is that if you have multiple process types in a single container, then you need to somehow manage how those are working — the orchestration of those processes, keeping them alive, that sort of thing. How are you going to scale each of them up? If you split them into containers, then you're more likely to leverage the orchestration engine that you're using. For most people that's going to be the default orchestration, Kubernetes. You'd be tweaking it there and using all the capabilities that people are building for lots of applications, versus your own individual solution.

If you're looking at more efficient use of all your resources, then you can think about allocation per service or concern. This also ties into response time — say you want a faster response time: you want to be very responsive, but you also want to be resource efficient.
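That trade-off — staying responsive while staying resource efficient — can be made concrete with a small sketch. This is not from the proposal itself; the service names and capacity numbers are illustrative assumptions. The point is that once each concern runs in its own container (e.g. its own Deployment), replicas can be computed per service instead of for the whole bundle:

```python
import math

def desired_replicas(load_per_service, capacity_per_replica):
    """Compute replicas per service from that service's own load.

    Independent scaling like this is only possible when each concern
    runs in its own container; a multi-concern container would have
    to scale everything by its hottest service.
    """
    return {
        svc: max(1, math.ceil(load / capacity_per_replica[svc]))
        for svc, load in load_per_service.items()
    }

# Illustrative numbers: the web tier is hot, the database is idle.
replicas = desired_replicas(
    {"web": 900, "db": 50},    # requests/sec hitting each service
    {"web": 100, "db": 100},   # requests/sec one replica can absorb
)
# Only the web tier scales out; the database stays at one replica.
```

With both process types in one container, the same load would force extra full-size replicas that each carry an idle database along with the web server.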
If you have them all together, then you're going to be scaling based on one large container, versus scaling individual sets of containers. So if you only have one service that's getting hit hard — maybe the web server — but your database is fine, then you only need to scale up your web server and not your database.

Upgrades: it's talking about problems with the dependencies between the services. Security is semi-related, but you could run into it even without upgrades. Say you have vulnerabilities: if you separate those services into containers, at least you're taking advantage of the container namespace. There could be some other things too, like limiting the attack surface area.

Observability: whether that's debugging or other work, developers would like visibility. Maybe you have two different teams of developers working on the different services — say some people own a custom storage engine, and others own some type of API engine for handling requests. That could be two different teams. If you have the services together, then when you're having problems and trying to figure out where things are and what's going on in the services, you're potentially going to have more trouble than if they're split. Some of this digs a little bit more into the development cycle on the tightly coupled side. And then test coverage: if they're tightly coupled, how are you doing test coverage, versus when they're self-contained in their own containers?

Then we have the goals, which relate back to those motivations. So: the orchestration, using the Kubernetes orchestration engine. Talking about microservice architectural practices, if you're already following these — say you're an end user
looking at the ops team or doing integration, you're already trained, and maybe you're trying to move to microservice patterns — if you want to take advantage of those, then this practice aligns with that.

The scaling that I was talking about — just reasoning about it. The people creating these applications could give feedback, or the definitions that are provided could describe the best ways to scale the different pieces. You can reason about those as a developer: okay, if we get this type of load on this service, we need to scale like this; this other service goes like that. Similarly for the end users — the operators, whoever's helping to run and watch the services — they can reason about it because it's split into different services; that makes it easier.

Resource utilization: if you're only scaling the HTTP server and not the database, you can scale just that up rather than both. Maybe the database or storage is allocated special types of nodes; the HTTP server, or whatever service is split out, maybe doesn't need those, so it can scale much faster. But the main thing is that both the developers making recommendations and the operators can decide on that and get more efficient utilization.

Upgrades: ideally, with any type of upgrade, if it's a single container you can reduce the risk, because you know it's only going to upgrade the one part that's been well tested. I won't go through every one of these. Security — I think I mostly covered it. I don't think we had this specifically up there, but: finer control of permissions, because you can set them per service. Maybe the database needs some type of storage permissions, but the web server wouldn't. So you'd have finer control over the web server service, and maybe you tighten the database in some other area.
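That per-service permissions point might be sketched like this — a hedged illustration, with per-container settings expressed as Python dicts in the shape of Kubernetes `securityContext` fields. The specific settings are illustrative assumptions, not from the proposal:

```python
# Each container gets only the permissions its one concern needs.
# Field names follow the Kubernetes securityContext schema; the
# values chosen here are hypothetical examples.
security_contexts = {
    "web": {
        "readOnlyRootFilesystem": True,   # serves requests only
        "allowPrivilegeEscalation": False,
        "capabilities": {"drop": ["ALL"]},
    },
    "db": {
        "readOnlyRootFilesystem": False,  # needs a writable data dir
        "allowPrivilegeEscalation": False,
        "capabilities": {"drop": ["ALL"]},
    },
}

# In a single multi-concern container, the union of both permission
# sets would apply to every process inside it.
union_needs_writable_root = any(
    not ctx["readOnlyRootFilesystem"] for ctx in security_contexts.values()
)
```

The design point: with the concerns combined, the web server inherits the database's writable filesystem; split out, it can be locked down independently.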
And on observability: think about log messages and other output, whether you're debugging something that's already in production and trying to pin down some area, or working development-wise. If you're looking at output from a single container as a single service, it's going to be easier than when it's combined — split out is easier. Or say you're monitoring activity, maybe to optimize scaling or the between-service communication: maybe you're having an unexpected spike of requests to your storage backend, so you're thinking, okay, we need to scale that. That ties into debugging too — maybe it's not in the log output, but something is going on. If that inter-process communication stays within the container, it's harder to view; if it goes between well-documented, defined APIs between the services, it's going to be easier to monitor. And we have a lot of similar material here in the software development cycle section, talking about the improvements.

And some non-goals — these are out of scope. We're not saying these are part of it; they may be good practices, but they're not part of this suggested best practice. How best to communicate on the APIs: we're not saying how you should do that. It's probably a good idea to have some practice around that, but we're not doing it here. Any type of supervision and management of the processes within a container — which may or may not be needed if you're using external orchestration — is out of scope. Any issues with homegrown management systems: out of scope. Implementation details, specifically saying how to split: we're not trying to say you should definitely split your storage and your web server or anything else. We're not trying to define what should be split; we're at a higher level, saying that they should be.
And if there's anything in this best practice that we want to expand on for specific cases, then we could talk about an individual microservice best practice, but that would be a different proposal. Any questions before going a little more into where we're thinking this applies? Questions or comments? No questions. All right.

So this is the shorter version of the proposal — the summary has a little bit of all of it. A CNF with multiple concerns should be split into services, or process types, for each of those concerns, into separate containers. Service dependencies should be handled between containers through well-defined interfaces. It's high level; we're not saying how that should be done, but that's related. Pod specs for the CNF should provide scaling and monitoring information for each of the services running in different containers. These last two give the high-level direction of where we want to go once you split. They're going to help with the deficiencies: resource utilization, monitoring, and moving from tightly coupled to loosely coupled — being able to chain the different services and reuse them as different parts, whether that's internally or connecting to other applications.

This should probably apply to all pod types — whether they're core to Kubernetes, whether they have escalated privileges or not. We think this is a good practice in general, for any type.

Some user stories. This one is based on an Intel document — there's a link at the bottom — about 5G applications that are configurable: you can mix and match them, with a high degree of programmability. Separating concerns is going to help support that flexibility and programmability, including hardware requirements.
If a specific service within the application has hardware requirements, this helps because you'll know that that single container needs some type of hardware, and you get efficient scaling and other benefits. That's what that ties into. It also supports automation goals, because all of the pieces should have well-defined interfaces and containers with coarse-grained dependencies — meaning the dependencies are contained within the container, and external to the container you limit the dependencies, so it'll be easier to put the pieces together. And then the testing as well. So all of those are there. That's one of the use cases.

Here's another use case that we're putting forward as a diagram, looking at the service-based architecture: the SMF, that's this right here. It has a lot of different interfaces. So for your SMF — this only applies if it's implemented with multiple processes servicing these different communications: UPF, AMF, PCF, UDM, all these different things. If you have it split into different processes for those types of communication, the recommendation is to split that up into containers. Because, whether it's servicing upgrades — maybe your PCF is getting upgraded, but the container running these other services wouldn't be interrupted by any upgrade on that side, since they run separately, or the dependencies are limited to the interfaces of the service that communicates with the PCF. Or another case: maybe your communication from the UPF and AMF to the SMF is much more variable — it scales up and down based on peak end-user usage during a period. So maybe those need to be scaled up, but the communication to the UDM and PCF doesn't. That would be another one. So this is just to give context on how these could look and why we're recommending this. There could be a lot of other use cases.
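As a concrete picture of the kind of split being discussed, here is a pod spec sketched as a Python dict. The names, images, and ports are hypothetical, and in practice the two services would more likely live in separate pods or Deployments so they can scale independently; this is just the minimal illustration of one concern per container:

```python
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "example-cnf"},  # hypothetical name
    "spec": {
        "containers": [
            # One concern per container: serving HTTP requests...
            {"name": "web", "image": "httpd:2.4",
             "ports": [{"containerPort": 80}]},
            # ...and data storage, each with its own image,
            # lifecycle, and upgrade path.
            {"name": "db", "image": "postgres:16",
             "ports": [{"containerPort": 5432}]},
        ]
    },
}

# The anti-pattern this proposal argues against would be a single
# container whose image starts both httpd and the database under a
# homegrown supervisor.
```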
The simplest one, up there at the top, would be the web server and database.

Some notes. We're not saying that a container can't have multiple processes. Apache can start multiple worker processes — it's a web server; it forks off worker processes to handle requests. It can also have multiple threads. Those are both fine. Java runs a Java application, and the service might have many, many threads. That's fine too.

Definitions. I think we referred to monolithic applications — an application that doesn't separate its concerns into microservices is what we're considering monolithic. And a monolithic CNF would be a monolithic application that's focused specifically on network-type concerns. That's what we mean when we say a monolithic CNF. And multi-concern containers: a container having more than a single process type providing services for different concerns.

All right. Then we have a bunch of references to many different places, including some vendor material like this Ericsson information, and the Intel paper I was referring to a minute ago. And testing: we want to validate whether there is more than one process type, and we actually already have a test over in the CNF Test Suite. There we go.

Are there any comments or questions before we look at any reviews? I think it's very comprehensive; I like it. All right.

I wanted to add something and ask a question, if my microphone is working. Yeah. Oh, sorry. Okay. So one of the things that I might have missed here, in the benefits: typically in such documents, polyglot development is also mentioned — the ability to have, for example, the front end written in Node.js or whatever is suitable for a front-end application. I don't know if it's crucial here, but it's a benefit.
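The validation mentioned in this walkthrough — checking whether a container runs more than one process type — could be sketched roughly like this. This is an illustrative approximation, not the CNF Test Suite's actual implementation: it inspects a `/proc` filesystem and collects distinct executable names, so forked Apache workers or Java threads still count as a single process type, matching the note above:

```python
from pathlib import Path

def process_types(proc_root="/proc"):
    """Return the distinct executable names visible in a proc filesystem.

    Multiple workers or threads of the same binary share one name in
    /proc/<pid>/comm, so they count as a single process type.
    """
    names = set()
    for entry in Path(proc_root).iterdir():
        if entry.name.isdigit():               # one directory per PID
            try:
                names.add((entry / "comm").read_text().strip())
            except OSError:
                continue                       # process exited mid-scan
    return names

def is_multi_concern(proc_root="/proc"):
    """True when more than one process type runs in this namespace."""
    return len(process_types(proc_root)) > 1
```

A container running only `httpd` and its forked workers passes; one running `httpd` plus `mysqld` would be flagged as multi-concern.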
And the other thing that I wanted to ask — I'm not sure if this is the place to recommend it — do we have any feeling or internal understanding within the group, when we recommend this, of any specific recommendations about placement? Say I split my functionality into two separate layers: can I put a restriction to run them on the same worker node? Or do we recommend that these functions should be completely distributable across multiple worker nodes, multiple data centers even — clusters these days can span geographies. I guess it's a very polite question.

Yeah, in this particular principle we didn't talk about that. Basically we were just concentrating on discussing how to split things into containers. To achieve what you're saying, you can use the pod definition to make sure that all the containers inside that pod end up in the same location, the same worker node or whatever; or if for some reason you need to keep them more separated, you can use scheduler policies to distribute them at a higher level. But yes, for this particular aspect, we just concentrated on how to make the decision to separate the process types inside the container.

Yeah, I think it would be good to add that straight in as a comment, so we have it in the history. I'd like it there first. The place I'm thinking immediately where we should at least say something about it — and you're welcome to add something — would be the notes section. Okay. And I don't know that we want to say it's out of scope, because I actually think we should consider that in scope.
One thing to think about — and this could be for you as well, Nikolai — is: do you think we should stop the pull request? Is it something you feel strongly about, that we should address directly as part of the proposal? Or is it something where we could put comments into the pull request and then maybe do an update later?

I think notes. I could see it added in right now, and probably how I'd do it would be a suggested edit, where you go down and — where is it? Past this, I think. Oops. Yep. Sorry, I don't know what just happened. But if you do a suggested edit, where you go in here and review — if you think you have an idea for that, then at least in the notes section it would be good. Yep, I will. Okay. I would hesitate to put it in the main proposal unless we're going to talk about it more. Yeah, but it's relevant.

Oh, no, no — the only thing I was going to say is, if for some reason the document is not reflecting those things, then it's valid to just make it more explicit. Maybe we're just assuming things, and the response or the explanation should be in the document. Sorry, Taylor. No, that's good.

Let's look at Tom's comments, unless someone has anything else. Ildiko, Lucina, Oliver — if y'all have anything, speak up. Otherwise I'll jump into Tom's comments. I'm good. All right, let's jump in then.

Great. So he's made a suggestion — I think I'll go over here, it might be easier. Okay. So he's talking about whether the motivation is challenges, versus what we're claiming
are going to be the benefits. And really it's not claiming the benefits — it could be a goal, benefits we hope to achieve, not that you're guaranteed to achieve them; we're talking motivation here. So right now we have: "Resource utilization is less efficient in multi-concern containers, which require allocation for all components" — so all services, all processes, rather than individual microservices. And he's saying maybe soften it: it's "likely to be less efficient." Yeah. But I think the part about allocation being for all components is okay to say as-is. I'm okay with that; I'm going to give a thumbs up for this one. What do y'all think? Go ahead. Oh, no, I wasn't saying anything, sorry. We don't have any metrics to back a hard statement in this case, so I guess it's valid to soften it. So I'm okay — can you give a thumbs up on it? Yep. I'll add the suggestion to the batch.

Let's see, next one. This is still in motivation. Should we add a "may" here? It's not a given. "CNFs with multi-concern containers have a large surface area" versus "CNFs with multi-concern containers may have a large surface area." I'm fine with this; this one is probably not as big a deal as the last one. Though I could argue that just because they have multiple processes, they actually do. In fact, thinking about it: if you have a web server and a database server, they may not have any security problems, but their attack surface area is larger, because people can try to find vulnerabilities in two different process types. I think they do have a larger surface area for attacks and bugs. I disagree with changing this one. What do y'all think? Well, with the example you gave, that's right. We're talking about multi-concern here, so if you have one process type, it's only going to have one surface area. But as soon as you have two process types — so, one process providing a service in a container,
now you have two processes providing two different services within a container — you're at least doubling it at that point. I don't see any way around this. Yeah, the example you gave makes it clearer: you have, by definition, two different types, so the libraries and the security issues you're addressing in each process are going to be completely different. So just by definition, yeah, in that sense you're increasing the attack surface. Okay, I'm just going to add a comment; I'm not going to add that one to the batch. Any objections to not accepting this? No objections. Okay.

"Security vulnerabilities in one process type affect all of the process types in the same container." Okay, so this one — I agree it may have no effect. You may have another process where, even though there's a vulnerability elsewhere, that process sees no effect. You could say that. It gets very nitty-gritty, because I could say: maybe your web server has a bug and someone gained access to it, but your database is so secure that they can't access the data — yet if you have access to the web server, you could stop all traffic to the database, and that's affecting storage requests. I don't know; this one's getting nitty-gritty. I'm good either way, but I might word it to use "may": security vulnerabilities in one process type may affect other processes — even likely will. So just make it lighter: may affect, may not affect. What do y'all think? The thing here is, what we're saying is that it's not necessarily the case that a vulnerability in one process will affect the other processes. Yeah — I'm going to put my own suggestion here, but I think it should be "may."
That's all. I'm not going to accept it; I'll let someone else look at both of these and decide what they think.

All right: "Observability: reduced visibility into communication activity of services in a multi-concern container" — so, just adding a "because" statement. He's trying to make the statement more understandable: the container runtime will only be monitoring the init process, rather than the internal container signals. This is true. He's not thinking about the actual communication between the services, but that's fine — he's not expanding on that; he's just saying you're only going to see the init process signals, which is true. I'm okay with this; it comes to the same thing anyway. We're saying that if you split it off, you're going to see more, and he's giving a reason why. Are you all good with adding this? The only thing is: do we have a definition for "supervisor"? I mean, for providing context? No — there's the container runtime, but that's known to people on Kubernetes. And I think the end users building these systems already understand the idea of the runtime — some are actually using different runtimes instead of the default. So I think it's okay. Do you want to say — no, no, just the last word, in parentheses, "supervisor" — I don't know if we have to add context. The rest is fine. I agree with you; I'll just put it like this, Victor. All right, I'm going to put that in and not accept it for now.

All right, what's next? All the way down to — oh, user stories. What does he say? "This is a long sentence; I'm not sure it reads right for me. Maybe the following." Okay, that's great. Here's what he wants: an SMF with different services providing communication to an AMF, UPF, PCF, and other services, in an environment where many sessions are initiated. And then, during peak times —
"This may require..." — what did he do? Let's see how he did it. "May require... during peak times." So he broke it into two sentences. I like the two sentences; I'm good with it like that. He split it up. Yeah, I like it. Any objections to accepting? No, I'm great. All right.

Notes and constraints. This is just saying: if you're going to run multiple processes — and this is in the notes, a sub-recommendation, not the main proposal — we recommend, instead of writing your own supervisor, using one that's already out there and is well developed, tested, et cetera. Okay, so what did he say? "Perhaps add some additional context that was provided from the discussion" — this is in the working-group discussion section: the container runtime monitors PID 1 and uses its signals to report events, such as knowing when a container has stopped. So this describes what actually happens — how do you monitor, what are we trying to do with the application — and it gives an idea of what a supervisor is, if someone doesn't know, without having to go elsewhere. That's fine. He had a little bit more, but we're not trying to add the entire discussion into this best practice — we give a link; if you want to understand more, go read it. If someone complains that we say you can use a supervisor but they don't know what a supervisor is — I'm not worried about that person; they should go read it, and this provides context on why it's important. I'm good with this one. Any objections? No objections. I'm adding it.

All right, does he have more? Nope, that's it. I'm going to sign off and commit three suggestions. So that brings us back to the conversation. I'm going to re-request review from Tom. Oh — while we were going, there are more suggestions. But we only have three minutes. I'm going to accept that. Lucina, thanks — oh, that's awesome. Did you do a suggested edit for that?
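The PID 1 point that came up here can be sketched with a toy example — a simplified illustration, not a production supervisor. The runtime watches only the init process, so with a single process type per container, PID 1 just needs to forward signals to, and report the exit of, the one service it runs:

```python
import signal
import subprocess

def run_as_pid1(cmd):
    """Toy init behavior for a single-concern container.

    Run one service, forward SIGTERM to it, and return its exit
    status, so the container runtime learns the service's fate
    through PID 1's signals alone.
    """
    proc = subprocess.Popen(cmd)
    # `docker stop` / pod deletion sends SIGTERM to PID 1 only;
    # without forwarding, the service would never see it.
    signal.signal(signal.SIGTERM, lambda signum, frame: proc.terminate())
    return proc.wait()
```

With one process type per container, this is essentially all the supervision needed; packing a second, different service into the same container is what forces a homegrown supervisor with its own restart and health logic.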
Nikolai, I don't see — I guess — okay, okay, good. Do a suggested edit and we can accept it, probably later this week; we'll do it async. That's a good one. So this would go in the software development section. Yeah, add that in; that's great. For those that didn't read it: you can have multiple languages, libraries, dependencies, all of that, per service, which can be nice — especially with large services or multiple teams working with maybe totally different stacks. Yeah, right.

I think that's it — we got to the end; we've added comments for everything. Lucina, I'm going to re-request, and Nikolai, re-request, since I added yours. So we'll try to go through those this week and then maybe accept it next week. If you have ideas for another practice — especially if you're having questions with work that you're doing, or you think something's needed and you'd be motivated to help write it up — please add it to the Slack working-group ideas, or drop an issue, or take a look at the issues; we have a bunch of best-practice ideas there, and you can thumbs-up or comment on them. We'd like to get started on the next one. Next week I'd like to be able to get started and have some sessions like we did before. We do have some drafts that are issues, but you're welcome to add any others.

One area that we've been thinking about, especially Victor and I, would be looking at Nephio and the best practices in there. It leverages kpt, and Nephio is also doing a lot of work with GitOps patterns, so that would tie into other projects like Flux and Argo CD — deployments and the automation side of things could be some best-practice areas. So just be thinking about it, and hopefully we'll get going on that one. We'll have a best practice — maybe the top three picked next week — and get this pull request merged. Thanks, everybody.
Have a great day and a great week. Please review, and I look forward to next time. Cheers. Have a good week. Thank you. Bye.