Hi Tom. Good afternoon. Oh yeah, how are you doing? You well? Yeah. Good. Hello, good day. How are you? Everyone have a good weekend? Yeah, eventful, but yeah. Eventful, okay. Spent an hour and a half on the side of the motorway. My tire blew out. Oh yeah, that's not great. Well, I share in your experience somewhat. My son, my senior high school student, went to a friend's graduation in another state, drove there on Friday and had to drive back on Friday night. He woke me up at 12:30 at night to tell me he was on the side of the highway with a flat tire and he needed help. So yeah, that was an interesting experience doing that over FaceTime. Yeah, not the nicest thing in the world. No, a little stressful when you see the cars zipping by at 55, 65 miles an hour. I had my daughter and my dog with me as well. It's not nice at all. Anyway, we're all safe. Let's move on. Right, meeting notes in the chat. Right, I've got a hard stop at half four today, but obviously carry on after I've gone. Just so you know, I'll need to drop off. So I think last week was the Big 5G event. I don't know if anyone on the call went. I think I saw some posts from you about it, Taylor. Yeah, it was an interesting event. I've never been to that one. There was a good mix of hardware-focused exhibits, some talks, government programs, and rural and private 5G discussions, which ended up bringing in a lot of service providers that are more focused on that, including smaller metro as well as large metro types of communication solutions. And some of that ended up going into Wi-Fi 6 overlapping into 5G and how those can be hybrid-type setups. And that ran everything from people talking about deployments that are either active or going in right now, to folks that are doing next-generation-type research or building out hardware demos and products. IEEE folks were there. 
So it was a pretty mixed event, with R&D all the way through to whatever you would think of on the business side. And we ended up with pretty good interest. I mean, we were definitely a small presence for the whole thing, but the entire event was smaller than, say, KubeCon, so it's easier to get around and speak with people. We tried to get more of the service providers aware, the smaller ones specifically. And on the larger ones, we spoke with several folks from Verizon and T-Mobile. Normally we've had the most engagement for the cloud-native telecom work that we've all been doing from Deutsche Telekom and, you know, Svetofontam — I'd say a lot more of the European than the North American side among the larger telecoms. Now, we've had interest from service providers like Charter, and Cox has been interested. They have a telecom cloud that's now a lot further along than the last time they were talking with us. In any case, there's interest in the Cloud Native Telco Day, interest in the working group as well as the certification and test suite, including from some vendors and a lot of system integrators — there were a lot of system integrators there. I did post — I think the comment there covers it — I tried to put as many summaries as I could on LinkedIn for some of the panels and talks I was at, and Lucina and Watson have shared and put some stuff up as well. We'll be gathering up more of the content that we've had and then trying to get more folks to join. Okay, cool. And so there was interest in joining the working group as well as using the test suite. Yeah, some of the questions were actually about where the resources should be allocated, and then it came down to what type of folks would be engaged. 
And so those discussions seemed to come down to deciding: do they want to be hands-on with something like implementing tests in the test suite, or are they interested in writing up challenges — and the working group would be a good place for that. Okay, that's good. There was a lot of talk about automation and programmable networks; there was a whole set of talks in one track related to that. And I saw it in other discussions, and it came up when folks were just talking about problems and challenges outside of that. So FIO would be an example, and GitOps in general, as far as using those patterns you see under what's called GitOps. All of that would be very applicable for a lot of the automation. And then I think there's a blend between the private 5G stuff that you're seeing over in O-RAN and the end-to-end automation that a lot of them talked about. But I mentioned multi-network — some of y'all have been seeing the pull request from multi-network, adding a new network object that expands on the capabilities. I think that, from the network automation point of view, is going to be a key change in automating for Kubernetes in a non-vendor-specific way, and it probably has a pretty big impact on innovating how you do that. A lot of projects are probably going to really push forward, and we may have new projects pop up as a result too. I expect stuff like DANM to add support. I'm interested to see what happens with projects like Multus and everything. The new network object from multi-network is supposed to be a superset, and it'll support CNIs. So you could just continue acting like a normal CNI, but then you won't be taking advantage of that stuff. 
So anyway, I noted any of those types of things that seemed related, so that we'd have entry points for discussion with folks and follow-ups, and we can try to get more engagement over the next few weeks as well as at MWC Las Vegas in September. There have been requests — some of you want to meet. If there are enough folks going from the CNF working group, then maybe we can put something together; we're seeing what else can happen for that. Do you guys have any questions? Otherwise, that'll be it as far as my summary. No, I don't have any questions. Sounds like there were a lot of good conversations there. Certainly sharing the challenges, that's always useful. Good stuff to work from. So Taylor, do you know if they are going to make the recordings available? Or do you know if anyone can get access to the videos? I don't. I was wondering the same. Thank you. Yeah. It's on my list of things to look into more this week. I didn't see anything mentioned. Watson, Lucina, Drew — did you all see anything about recordings for all of the talks? I haven't seen anything myself either. Yeah, we'll share them for sure if we find them. Cool. Thank you. So the next event on the list is the LFN DTF, which is virtual only. The CFP is actually extended to the end of today because there were so many bank holidays at the end of last week — across Europe, certainly. So if anyone has any ideas for that, in terms of the work between the test suite and the Anuket projects, for example, the CFP is still open today. Other than that, I've added another one here, which I know is at the same time as the Open Source Summit: the TM Forum DTW event, which is perhaps more telco IT focused, but it could be interesting in terms of the use cases for telco. And it's not long now — four weeks until the CFP closes for KubeCon North America, which has a new telco and edge track. Anything else on events anyone wants to talk about? 
Please, everyone, keep trying to help get sponsors for the Cloud Native Telco Day — to make sure it happens, and if we get enough sponsors it could expand to either a full day or a remote virtual option, which has been asked for every time. We need to get sponsorship to support that. Yeah, sure. Okay, so I'm going to skip an item. So next week is a public holiday in the U.S. and also the UK, and possibly elsewhere — just double check. Yep, and Germany possibly. So do we want to keep or cancel the working group? Given the three co-chairs are all either U.S. or UK based, I'm going to suggest we cancel it unless anyone objects. Then — so Victor, you wanted to talk through this proposal and some of the comments that have been made, in particular. Yeah, so I guess it's a good time to start focusing on some proposals, and I guess this is a good candidate. I mean, Paulie and I were having an interesting chat the other day, and I guess it's better to try to focus on certain proposals. Probably this could be a good candidate. Okay. Try to address the comments and move forward. Yeah, I know that this could be a very controversial topic, but I guess we have a lot of information to do something with. If this is something that we can make a best practice, let's move forward and complete the proposal. If this is not a best practice, we probably have to document why it's not, and document all the reasons why we didn't consider it one. But yeah, reading the comments — for example, Gergely was saying his main point is performance. So, yeah, definitely there are pros and cons. I just want to bring this up now and try to find something like a sweet spot where everyone can weigh in and agree or disagree. Did you ever get a response to your suggestion? Well, that's what I found in most of the relevant topics about this. 
What I have heard is that the best practice is to use an additional or external service to manage the different processes. I found some tools like supervisord or tini and all these things. Last week, Ian put in the latest comments. So yeah, it's probably the same question: where is this going? I guess that's the best way to summarize it. Now that we have more of a forum here, I would like to hear your ideas and know what you think — whether this could be a good potential best practice, or if not, what do you think? So, some of the comments in there make this a little bit confusing; it overlaps a couple of things. The issue was put forward as a best practice to discuss, which could then be documented and put forward. There are comments in here that are related to the CNF certification, not saying whether it's a best practice — just saying remove it from... well, there are three categories in the CNF certification: essential, normal, and bonus. And one of the comments from Gergely is just referring to the certification and whether it should be essential or not. So I want to make sure of how we're talking about this, because we could respond to a piece and then it's not relevant to the whole context: what it means in the certification versus whether this idea in general is a best practice. Okay, we can talk about it that way. Should it be part of a set of tests in a certification? That's a different thing. Is it going to be in the test suite at all? There's a related pull request over in the test suite. I just want to make sure, when we're discussing this, what we're talking about. And I will bring up the certification just momentarily, and then I want to bring it back to what we're going to discuss. On the certification side, there's a whole set of tests, and it's not pass/fail and done — it's how are you doing in an area? 
If you are passing a certain number of tests, you can still pass the certification. And when we're looking at the certification potentially having different levels, then it would be a question of how many tests. So this is unlike some certifications where you must pass everything. I think that's important. And then, related specifically to this idea as a best practice: this is related to microservice best practices, and we could go down to a specific area. But there are other tests in progress to go into the test suite which could also be written up as related best practices. At which point, on the certification side, you could say, well, we passed three out of four of those related tests — we didn't pass the one-process-type one, or we passed it but missed a couple of others — we're still showing that we're following best practices for microservices. Likewise, if we're looking at what we're doing for documenting and talking about guidance in the working group: we have a CNF Dev document that we're trying to write up and fill out. To me, this would be related to the grouping you can see in other best practice guidelines, where you have a subset of practices that may get into more detail, but you can still talk about the higher level. Like the principle of least privilege — there seemed to be general agreement that, yes, you should try to follow that. Now, when you talk about specific practices to get to least privilege, you may or may not choose to follow those during implementation. But the idea that the least privilege principle is good and should usually be followed, just as a guideline — there was no question on that, including from anyone who would question a specific practice; you could find documentation from any of those groups. I want to start from there: what is our context? 
I know, Watson, you were actively working on those newer related ones, but what do we want our context and focus to be here in the working group? Did you have some thoughts, Victor, since you put this forward? Do we want to focus on moving this forward as a proposal for the working group, so that we're adding it to that doc? Can you open that CNF Dev doc, Victor or Tom? I think this is your screen, Tom. This one? No, the Google doc in the working group repo, for CNF dev best practices under docs. I'm going to share it in chat as well — if I can find the chat; there it is. This one? Yeah. So this has categories and areas, and then we'd be adding best practices under whichever area we think is the best fit, if this makes sense for the categories. But the idea is to add recommended best practices here. We're not talking requirements — I want to restate this; some folks already know. We're not saying hard requirements, that everyone must meet every best practice listed here. This is a guide; it's recommendations for people to follow. The CNF certification is a separate thing. The CNF certification would be looking towards the best practices listed here and saying, well, here are some that we are listing in the certification. If we have different levels of the certification, then some best practices we may say are essential, which are strong recommendations, and maybe get weighted more on whether you're following them or not. And then some are going to be bonus, where we may say: this is good, and if you're doing it, we know it's going to make it easier for deployment or automation or whatever it's for. You've got to step away, Tom? Yeah, sorry. I didn't want to leave mid-sentence, but yeah, I'll put a link to the doc I was just sharing in the chat. But yeah, sorry, I need to go. I understand. Victor, can you take over the screen share? Sure. 
I suggest that we focus on what we're recommending in that CNF Dev best practices doc, which we may expand as we look at things like highlighting practices from Nephio, which is about automation, or GitOps, which is related to a larger part of the life cycle. And it may expand beyond what people think of as just CNF developers — we could think of it as cloud-native telecom best practices: best practices for cloud-native telecom development, something like that, and operations, whatever. But as we're trying to publish something, that's really the point of the working group: coming together and talking about what the challenges are, what the related use cases are for context, and then what we can say are best practices to solve those — the cloud-native and Kubernetes-native practices that we're seeing and would recommend for the community to follow. That's our goal as the working group. So I suggest that, from that GitHub issue, we focus the discussion on how we would recommend it. If we want to have a different discussion about certification, then we should do that as a separate conversation. Okay. For me, this particular proposal seems to be related to the microservice area — I don't know if you have a place to put it, but I think this particular proposal belongs in that category. Yeah, I know that there are different ways to implement it, but I guess it definitely makes managing microservices easier. So what do you think? Could that be a good way to categorize this proposal? I mean, Victor — and I guess Taylor as well — I agree with everything Taylor just said. I mean, it just feels like this is a best practice proposal, and I agree it probably goes under microservices. 
I don't want to say it got derailed, but there's probably a way for us to handle it when comments on a proposal cause the best practice discussion to take a turn — just bring it back to: this is a best practice proposal. And so I guess we don't have to worry about certification or anything like that. I think that's pretty clear to me. So I think it's just a question of how we take this forward now: at what point does this become a best practice? Do we all give it a looks-good-to-me? Or what is it that we need to do to get to that state? Yes, agreed. This proposal — you know, I opened this issue. When I opened it, it was for putting forward a best practice in the working group. I've added a comment in the Google doc. I want to put one in — I've got to fix my login here again on GitHub — but I'm going to add one into this issue: if we're going to do something for the CNF certification, let's open a different issue for that. All right, let's dig into this just as a best practice. And I don't know about responding live for anybody that hadn't seen it — I only saw Ian's message today. But anyone else is free to talk. And Watson, I want to ask if you can give feedback — not on Ian's comments, unless you want to, but on the general reason this was put forward as a best practice. And maybe link the microservice article that you worked on — drop a link to it — which has a lot of references to other people talking about this area. I'm going to stop talking. I'd like for you to speak to this with focus: this is a recommendation. This one-process-type rule is a recommendation. Why are we recommending it? As far as the article — is that what you're asking about, Victor? I'm trying to look for it right now, so I haven't put it in. Yeah, I mean, from my point of view, I think the discussion is fine. 
It's nice to receive different perspectives and opinions about this, but eventually we have to move forward. If this idea is not making sense, that's fine. But I guess even in that case, we have to document it and say why it doesn't make sense — maybe it's an anti-pattern or something like that. If it definitely makes sense, with several exceptions or things to consider, we need to document those. That's basically my idea. I can't refute any of the things that have been discussed here; these are very valid points. I remember we received feedback on LinkedIn — someone also pointing out something about this particular thing. It's nice; it's great to receive this kind of attention and different opinions. But again, eventually we have to reach the point where we deliver something. Can you all hear me? Now I can hear you. I can hear you now. Great. Okay. All right. Yeah. I think it's interesting, or useful, or even mandatory, when you're talking about architectural decisions, to put forward the trade-offs. That's really what we're trying to do. The trade-offs here, from what I can tell: there's a performance trade-off versus, basically, are you writing your own sophisticated supervisor or administrator? If you have multiple process types — not processes, that'd be confusing; process types. To simplify it: if you had a web server and a database inside of one container, those are two concerns and two process types, that is, two kinds of processes. You would have to have something smart enough to know the life cycle of both at the top end. It would basically be the equivalent of what Kubernetes does, as the top process. You have two totally different reasons for restarting, monitoring, and all that. Really, all of the arguments go under that. When you're able to separate all that, you get deployability, and your logging, reasoning, and all of these other things come into play. 
I would say that the only reason this is even a consideration is the performance requirements of networks and network functions. If we were to say, no, we're going to go ahead and put all the process types we want inside the container — whatever, just don't even think about calling it a microservice. The rest of the world is just going to say we're using the word for cool points. That's really what it comes down to. As far as saying it's a best practice — not a requirement, but a best practice — I have no idea how this is even a discussion, really. Another interesting thing is that one of the critiques of the article is actually saying we're not going far enough: how can you possibly think that you would need to even worry about separating the concerns by container? You have pods, so you can have multiple containers. The person didn't even understand the article — they want even more. My position in the article addresses the "oh no, we have to worry about low latency and all this stuff" concern: here are some things that you can do to work within the system, like inter-process communication, that will increase the speed between processes. That's the high level, I would say. For Ian, he probably needs to read the article, because some of the things he's saying — are we talking about only one process per container? No, we're talking about process types, these kinds of things. I also address that there are new scheduler or provisioning or, I guess, configuration capabilities for Kubernetes that allow you to do certain things that before you couldn't. There are lots of things there, but I'd like to see more on the performance side from the network, from telecommunications. So, about the example that you provide — I mean, it's kind of very radical. Obviously, separating the database from the application is very obvious. 
At least for me. But about other cases — for example, what I have seen is using memcache inside of a front-end application. That is a little bit more tricky, right? Because, as you said, you get a little bit more performance, but you have two different process types running in the same container. I mean, even then, for me it's better — I'm in favor of separating these kinds of process types into different containers. Anyway, I'm also in favor of not trying to rewrite everything, like trying to use an additional process manager inside of a container. So do you think we have other particular use cases like that, where it's not clear? Or where the advantage of separating them — the cache portion versus the rest — so, I don't know, do we have other cases where things are very tightly coupled? Are you saying that, in the example of using a cache separate from whatever functional application process you're running — is that a bad example, or are you saying it's a more realistic example? Yeah, I guess it's a more realistic example — or at least one where it's a little bit harder to distinguish the separation and the benefits. So how can we address those particular cases, where we definitely get better performance but maybe at the cost of architectural design or whatever? Yeah, so I think that's a good example to reason about. So you have a functional cache that only you are using — I say "you" as the process that's inside the container — or is it being used by other containers? Yeah, that's a good point as well. Right, so I would just say that if that functional cache — so that's one process — has a life cycle, reasons to restart, all of these things that are different from the actual application, let's say the other process inside the container, then you've got a trade-off, a decision to make. Now, why not have it in another container? 
Well, someone would say: in order to reference it, I need to go out to the network. No — you can use inter-process communication; you don't have to go out to the network. That still works within the environment we're talking about. Another thing I want to make sure of: I think sometimes people, for whatever reason, call threads a process. No, a thread is not a process. New threads are not what we're talking about. Do all the threads you want; you're going to do shared memory, do whatever you want inside of them, fine. But we're talking about whole processes — execution and memory isolated from each other. That's two processes. Sometimes they call it a pthread within the Linux community; that's a thread, not a process. So that's fine, we're not talking about that. For some reason, within these examples, someone is saying: I want to start up a new process, a new OS process. I want to tell the operating system to start it up, allocate new isolated memory and all that, and start it over there — but inside of this container. For what reason? That's what we're trying to figure out: real examples of why you would do that and not have it in another container. Now, there are some slowdowns that happen when you have it in another container. I addressed them in the article. One of them is security: you can do some things where you reduce some of the security provisioning because you want more speed. Again, it's a trade-off. But you're handling all of that within one pod. Yeah — and now that you mention security: one process can have different security requirements than another one. For example, your application maybe requires certain privileges that the cache portion doesn't need, or they need completely different sets of permissions. So managing the security permissions for every single process is a different thing. 
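The in-pod IPC point above can be sketched in Go. This is a minimal sketch, not from the article or the test suite: it assumes two containers in the same pod sharing a volume (for instance an emptyDir mounted in both) and talking over a Unix domain socket on that volume, so no network round-trip is needed. The socket name and the "cached:" reply prefix are illustrative inventions.

```go
// Sketch: an "app" process and a "cache" process exchanging a message over
// a Unix domain socket, standing in for two containers sharing a volume.
package main

import (
	"fmt"
	"net"
	"os"
	"path/filepath"
)

// roundTrip plays both sides: the cache answers one request on the shared
// socket, and the app dials it and returns the reply.
func roundTrip(msg string) (string, error) {
	sock := filepath.Join(os.TempDir(), "cache.sock")
	os.Remove(sock) // clean up any stale socket file

	l, err := net.Listen("unix", sock)
	if err != nil {
		return "", err
	}
	defer l.Close()

	// The "cache" side: answer one request, echoing it back with a prefix.
	go func() {
		conn, err := l.Accept()
		if err != nil {
			return
		}
		defer conn.Close()
		buf := make([]byte, 256)
		n, _ := conn.Read(buf)
		conn.Write(append([]byte("cached:"), buf[:n]...))
	}()

	// The "app" side: dial the shared socket and send the request.
	conn, err := net.Dial("unix", sock)
	if err != nil {
		return "", err
	}
	defer conn.Close()
	conn.Write([]byte(msg))
	buf := make([]byte, 256)
	n, err := conn.Read(buf)
	if err != nil {
		return "", err
	}
	return string(buf[:n]), nil
}

func main() {
	reply, err := roundTrip("GET key")
	if err != nil {
		panic(err)
	}
	fmt.Println(reply) // cached:GET key
}
```

In a real pod the listener and dialer would be separate container images; the point is only that splitting process types into containers does not force the traffic out onto the network.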
If we want to follow the least privilege best practice that we created, I guess the best way to do it is isolating the process types and managing their permissions per process type, instead of multiple processes all having every privilege. I think so. Yeah. From the article, I'll just read out some of the arguments, from the trade-off perspective, for why you would want one concern, one process type, per container. One of them is isolation — that's what you just brought up. Isolation, where processes using the container namespacing system are protected from interfering with one another, so you can lock down security at that level. There's pod-level security, which covers multiple containers per pod, and then there's also container level. Then scalability: scaling one process type is easier to reason about than scaling multiple types. This can be for complexity reasons — multiple process types are harder to scale than one — or because the rate of change is different: one process needs to be increased based on different conditions than the other processes. So again, scalability, for the example of a cache versus the application in the same container: we want to scale up the cache, we need more processes for the cache. Is it hard-coded to one process, or can you increase it? Remember, as far as cloud-nativeness and all of this, one of the arguments is that we want to develop things so that they can scale up — by increasing processes or pods or whatever. And those different processes have different arguments, or rationale, or conditions for scaling up. What are the implications of that? All of that should be declared configuration based on the process. But if they're both inside one container, you're essentially writing your own orchestrator and having to do all that yourself. So again, complexity reasons, or what have you. That's the scalability argument. Then you have testability. 
It's easier to test when the processes are separate. Deployability: when a process's binary and dependencies are deployed in their own container, the deployment is coarse-grained relative to the rate of change of that binary and container, but fine-grained relative to the rate of change of the other processes and their dependencies. This makes deployments adjustable: when a change happens in your dependency tree, you don't redeploy everything in lockstep. Essentially what I'm saying is: you can have a process where all of it is spawned from one binary. In your example, having a cache versus an application — that's two different binaries, and those each have their own dependencies and everything behind them. But if they're in one container, now you're having to deploy them in lockstep with each other. Guess what: when you have a security patch for one of them, you've got to redeploy both, instead of deploying per container. That makes sense. So composability is another reason. Telemetry — someone posted it in here — it's easier: instead of having your logs interleaved with each other based on the processes, they're separate, and you can easily reason about them per container. And as far as orchestration, I would say orchestration is the number one: you're writing your own sophisticated supervisor, something that knows when to restart, the order to restart, all of this other stuff inside of the container. You're essentially rewriting Kubernetes and its declarative configuration, all that stuff, inside of it. I just don't see how — I know in old-school programming, that's what you did, but no, we're trying to get away from that and have agnostic orchestrators. So I'll stop and let people comment. Well, basically, what I was reading about that — I mean, yeah, I agree with you. It doesn't make sense to reinvent the wheel, like trying to put everything in there. 
So probably the recommendation, besides just one process type per container, was making sure that your application manages OS signals properly. I guess it's something to keep in mind as a developer: okay, if the OS is going to raise this particular signal, I need to catch it and propagate it properly, and cooperate with the orchestrator, which in this case is Kubernetes — I'm not saying it has to be these internal supervisors. So probably that could be considered as part of the best practice: okay, once you separate these things into one process type per container, also make sure that process handles signals properly; if not, you're going to lose some of the capabilities. But maybe this is something a little bit different, or another best practice to be considered, associated with this particular case. I think so. Yeah, I addressed those in the paper too. You have to do those as well — it isn't either/or. So those aren't the sophisticated supervisor I'm talking about, even though supervisord, runit, monit, and all that stuff — those will do two things. One of them is proper signaling. So if you say, I want to do a graceful termination, or Kubernetes says, hey, you over there, I want you to gracefully terminate, then PID 1, the main process, accepts that signal and is able to communicate it on to the other processes: please shut down nicely. Now, what are the implications of that, just to make sure everyone's on the same page? If you have files open and you don't shut down cleanly — you don't close those files — you will corrupt the data in those files. That's what's at stake here. So you need to handle graceful termination, or you will have a serious problem. Again, we could go off into a rabbit hole here: you have things that handle state, like writing to hard drives and all that. 
And then you have things that don't, and having those all interleaved is another problem. But we're not talking about that now. So, handling signals: graceful termination, having the capability of doing that, means PID 1 needs to be a process manager, something capable of passing on signals. So supervisord, runit, monit, tini, dumb-init, s6, those types of things, like you talked about in the paper. That's one problem. Another problem is zombies. PID 1 needs to be smart about a process dying, so that it doesn't just stay out there. It needs to run a check; it basically needs to make a call, I forget the name off the top of my head, to reap the process and remove it from the PID table, essentially. What are the implications of that? There's a finite number of PIDs available, and you will blow up the system if you don't reap them. That's the implication of having zombies. So those are two things in addition to only running one process type; you need to do these things.

Most programming languages are pretty good about handling signals. So if you were to run Java as your PID 1, it can handle signals and manage all of its processes and all that. Now, will it handle the zombies out of the box? What we're finding is: oftentimes no, sometimes yes. So those are things where having a PID 1 that you know handles zombies correctly is a good thing. So yeah, you need to do these as well, and it's recommended in the paper. And as an aside, we have these checks now as part of the test suite: checking whether your top-level PID 1 is something within this list, and also whether you handle zombies. We actually create a zombie attached to your process and then kill it, and if it doesn't get erased from the PID table, we know you don't handle zombies. Fun fact: a lot of containers out there are not handling zombies properly. Even Kubernetes itself had problems with zombies.
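The zombie mechanics above can be demonstrated directly; for reference (my note, not the speaker's), the call in question is the `wait`/`waitpid` family, which removes a dead child's entry from the PID table. A minimal sketch, assuming a Unix-like system:

```python
import os

# Fork a child that exits immediately (assumes a Unix-like system).
child = os.fork()
if child == 0:
    os._exit(0)  # child: exit right away, becoming reapable

# Until someone waits on it, the dead child sits in the PID table as a
# zombie; PIDs are finite, so a PID 1 that never reaps eventually
# exhausts them.
pid, status = os.waitpid(child, 0)  # reap: removes the PID table entry
print("reaped:", pid == child)

# A second wait on the same child fails because the entry is gone --
# essentially the check the test suite performs after killing a zombie.
try:
    os.waitpid(child, 0)
    still_there = True
except ChildProcessError:
    still_there = False
print("still in PID table:", still_there)  # -> False
```

A PID 1 that handles zombies does this automatically (typically from a SIGCHLD handler); one that does not leaves the entry behind, which is what the test suite detects.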
And then handling signaling: I have examples in the paper, from GitLab, of the very subtle and insidious errors that happen when you don't handle signaling. Very hard to track down. And all of this, by the way, is exacerbated when you have more than one process type. More work, made work. So there.

Well, we are reaching the top of the hour. So I don't know if anyone else has any additional comment on this, or what action we can take on this particular topic. Can we start writing the best practice? Victor, do you know if we have a draft document for this one? I was trying to look for it. I know we started creating Google draft docs, but I didn't find a link for this one, and I'm trying to see if we have it somewhere. But that would be what I would suggest: let's write it up as a draft, like we were doing for the other practices, so that we can see it. I think part of the reason this has been sitting so long is that the content is linked out to other places, and everyone would have to go read, say, the article that Watson was talking about; there was a lot of other content. This idea of separating concerns is pre-cloud-native; people have talked about separation of concerns for a long time. So if we can take all of that, including going into the article Watson just referenced and pulling out the specific points that address these questions, and write it up into the proposal, I think it'll be easier for folks to come in and see what we're recommending. So that's what I'd say we focus on: write up a draft, source it from the content available in this article and other places, and then link that into this issue. At that point, if we have enough in the draft, or when we have enough in the draft, we can put in a pull request to add it as a best practice recommendation. Okay, well, that sounds like a good plan. Maybe I can start collecting that in the draft and put it in there.
By the way, if anyone has any other particular comment, I invite you to put your thoughts in this particular issue. That way we can also consider them for the draft and make it better, I guess. Taylor, you were going to say something else; I just interrupted you. Oh, it's fine. Everywhere that I'm looking, I'm not finding a draft, so I'm going to create one over in the working group folder and share the link; I'll add the link into the GitHub issue. And then anyone who wants to help can jump right into the document, or ping me on Slack for a working session where we can focus on this. All right. Well, the next meeting is going to be cancelled, I guess, so we'll see you in two weeks. Thanks everyone. Thank you.