Hello, we'll give people a few moments to join while we're waiting. Please make sure you add yourself to the attendees list. Right now we only have five people listed, but we have more than five people here, welcome, all.

A quick housekeeping note before, I presume, Frederick gets started: this meeting is recorded automatically, and it is published to YouTube automatically, so keep that in mind. We have an add-on that announces that automatically whenever it joins. I'm thinking of just recording it and clicking play. Also, we have no control over the recording itself, so we can't stop and start it. Anyway, we need somebody to take the meeting notes. I'm going to add the meeting notes link to the chat; that would be very helpful.

Okay, is there anything people would like to discuss that is not on the agenda yet? I actually have something. Oh, real quick, I did put the link to the meeting minutes in the chat; if folks could go ahead and add themselves to the attendees list, that would be great. There's a little bit of noise on the line as well, so let's go ahead and get started.

Events coming up: we have KubeCon EU. The call for papers for KubeCon EU is now closed; we have multiple topics that were submitted. Do we know when the decisions are going to be made? It shouldn't be long. It would be nice to add that to the agenda. We also have a co-located event at KubeCon EU, the mini summit; the call for papers date is to be determined, I believe. Well, I don't have any information on that yet, so I'll have to look up when the call for papers for that opens. In fairness, no one has information on that yet, but the Network Service Mesh talks went super well there last time, so I expect they will again. Yeah, that was really enjoyable; I would gladly talk there again.
We have Mobile World Congress coming up, so I'm definitely looking forward to seeing what pops out of that. We have ONS North America; the call for papers closed as of yesterday. That conference will be held in San Jose in early April, and it's very telco-, very NFV-centric; I think it always is.

We also have some other interesting events coming up which may or may not have NSM involved, but which are interesting to look at anyway. That includes FOSDEM in Brussels; there's the MPLS SDN NFV congress occurring in Paris; there is Container World 2019 in Santa Clara, which is going to happen in mid-April; and Service Mesh Day, where I need to submit a talk, which is here in the California area. The call for papers is open now if anyone else wants to submit anything; it closes on February 8th.

So, we have a couple of announcements. Nikolai, you have the SDK Go preview; do you want to say a few words about it? Yeah, I can share the current code; you'll see my VS Code. We can see it clearly.
Yeah, okay, good. So as of last week, I believe, we have merged this new concept of how we can write NSEs and NSCs, so endpoints and clients. One of the major interesting things, besides the fact that we have wrapped all the details of how you set up the communication with the NSM manager, is the other thing that came up during the reviews and the discussions: this idea of being able to compose small pieces of software, small pieces of functionality, to produce larger, more complex functions. This is specific to the endpoints, the NSEs.

This is the main of the firewall that we use in our VPN demo, as it looks today. It's very, very simple, as you can see, but the most important part about the composition, being able to compose the small functional blocks, is this one here. Essentially you instantiate, here, the monitor, and then you say SetNext, and the next is the ACL, then you have the cross-connect, then you have the client, and then you have the connection.

As for what the client and connection are actually doing, because I guess the monitor, ACL, and cross-connect are somewhat self-explanatory: the client is essentially a way for this endpoint to connect to the next endpoint. Using the configuration or the environment variables you can configure it, and in this specific case it lets the firewall connect to the VPN gateway.
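The chaining described above can be sketched independently of the real SDK. All type and function names here are illustrative stand-ins, not the actual NSM API; the point is only the SetNext pattern that links monitor, ACL, cross-connect, client, and connection into one endpoint:

```go
package main

import "fmt"

// Composite is a hypothetical stand-in for the SDK's composite notion:
// each block does its own work on a request and may delegate onward.
type Composite interface {
	Request(msg string) string
	SetNext(next Composite) Composite
}

// baseComposite provides the SetNext plumbing shared by all blocks.
type baseComposite struct {
	name string
	next Composite
}

// SetNext returns the next block so calls can be chained fluently.
func (b *baseComposite) SetNext(next Composite) Composite {
	b.next = next
	return next
}

// Request "does" this block's work (here: tag the message), then
// hands the request to the next block in the chain, if any.
func (b *baseComposite) Request(msg string) string {
	msg = msg + " -> " + b.name
	if b.next != nil {
		return b.next.Request(msg)
	}
	return msg
}

func newBlock(name string) Composite { return &baseComposite{name: name} }

func main() {
	// Chain the blocks the way the firewall endpoint is described:
	// monitor -> ACL -> cross-connect -> client -> connection.
	monitor := newBlock("monitor")
	monitor.SetNext(newBlock("acl")).
		SetNext(newBlock("xconnect")).
		SetNext(newBlock("client")).
		SetNext(newBlock("connection"))

	fmt.Println(monitor.Request("request"))
}
```

Each block only knows about its own work and its successor, which is what lets off-the-shelf pieces and custom pieces be mixed freely.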
So this way we achieve endpoint composition, or chaining. This is, let's say, one of the interesting concepts that we have introduced with the SDK. Other than that, you just create a new endpoint, pass the configuration, here it's a context, or it can be nil, and the composites, which are the functions that need to be called once the endpoint receives a request, the chain of composite functions. You just check for errors, then you do Start, defer the Delete of the endpoint, then wait for the signals, and so on. That's how you can do endpoints today. Of course, if someone wishes, they can import the right pieces from NSM and do it all by themselves, but I think this is a nice way to achieve some results quickly.

So, with the SDK we have some of the... Sorry, this is super cool, because what it means is that when you want to make a network service endpoint that does something, you can take lots of small pieces of functionality off the shelf and just chain them together, and then you only have to write the tiny piece of functionality that is different in your case, right? So if you wanted to, for example, construct something that was going to be doing NAT behavior on the way through, you would just have to write a little piece of functionality that configures the right NAT behaviors, and you could reuse all these other components, just compose a different component in.

Yeah, that's actually what I was also going to say. So here you can see how we are composing together predefined components, because all these ones that come from the composites package are something that is distributed with the SDK.
So we have the monitor and the client and the connection; they're already there, and you can chain them with the ACL, for example, which is something that is written only within this specific endpoint example, within the firewall. So yeah, the SDK comes with some of these packages, among them the composites. One of them, for example, is the IPAM, which incorporates all the IP prefix pools that were lately introduced, so you can have IPAM management and all the nice things there. Lastly, I would just like to show you how the client looks, apart from the nice Jaeger tracing, which is another nice thing that we introduced.

Can I just jump in a moment? Yes. The things you're composing there, are they APIs or functionality? So, there is an interface which has a number of methods that you need to implement, Request, Close, and a couple of others, and once you implement that interface you can do this composition. And again, are those APIs or functionality? I mean, is this just a list of the APIs, or is this both the APIs and the functionality you're pulling in to compose?

Yeah, to be clear: the API is simply the network service API, the local network service API defined in the proto. So it's exactly the same API that gets spoken to the server as you go along. These blocks are functional blocks that do things. So the client composite, when it receives a request, goes and asks for a new connection; the cross-connect composite, when it receives a request, plugs it into VPP.

Okay, so let me take a different approach at that, right? We have an ACL composite there, right?
So that's the implementation of ACLs on VPP, with both an API and the code that programs VPP. The reason I'm asking is because it seems to me that we should have an abstract API and a concrete implementation of that API for whatever forwarder we are using, and I'm asking, are they separated?

Here's the thing: the API is the network service request API. The basic pattern here is... Well, the ACLs are not part of the basic request API. I might have ACLs or I might not have ACLs. This is a simple implementation that simply does the work for ACLs for VPP. If I wanted to do ACLs for something else, I would wire in a different component that did the work for those ACLs.

Yeah, I get that, but again, you're saying this is an NSM API, and I fail to understand how ACLs come in as an NSM API, because there's more than one API that could implement ACLs, potentially. We might change it, or somebody might choose to do it differently, whether or not we've got a standard. So again, I'm a little confused about why NSM is getting involved with ACLs.

No, I think your confusion is that you think we have an ACL API here. We don't. Well, you said that's an API and an implementation, so where is it getting its information for associating the... You're associating the wrong semantics with the API. So tell me what the right semantics are. The semantics of the API are simply Request and Close; that's the API semantics. So what's this? This is an implementation that, when it receives Request and Close, will apply ACLs according to whatever it gets, in whatever manner it gets it, which may be from a configuration that's passed into the component at creation time, or it may be pulled from environment variables.
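The distinction being argued here, that the API is only Request/Close, and that ACLs are just one implementation behind those hooks, can be sketched like this. The types below are hypothetical illustrations, not the real NSM interfaces:

```go
package main

import "fmt"

// Connection is a toy stand-in for the NSM connection object.
type Connection struct{ Labels []string }

// Endpoint captures the only semantics the API defines: Request and
// Close. Note there is no ACL concept here at all.
type Endpoint interface {
	Request(c *Connection) (*Connection, error)
	Close(c *Connection) error
}

// aclComposite is one possible implementation behind those hooks. It
// applies whatever rules it was created with (which could come from
// config files or environment variables) and then delegates onward.
type aclComposite struct {
	rules []string
	next  Endpoint
}

func (a *aclComposite) Request(c *Connection) (*Connection, error) {
	// "Apply" the ACLs: a real implementation would program the
	// forwarder (e.g. VPP); here we just record them on the connection.
	c.Labels = append(c.Labels, a.rules...)
	if a.next != nil {
		return a.next.Request(c)
	}
	return c, nil
}

func (a *aclComposite) Close(c *Connection) error {
	if a.next != nil {
		return a.next.Close(c)
	}
	return nil
}

func main() {
	acl := &aclComposite{rules: []string{"allow tcp/80"}}
	conn, _ := acl.Request(&Connection{})
	fmt.Println(conn.Labels)
}
```

Swapping in ACLs for a different forwarder means swapping this one component; the Request/Close contract, and everything chained around it, stays untouched.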
None of that is actually our concern; it's up to the person writing this little piece. So it's effectively a list of hooks on the Request/Close process. That makes sense. So it's super, super simple and super flexible, and it gives us lots of small little reusable components.

Okay, so quickly, let's see just the client. You just establish the client, I mean instantiate it, then you say Connect with an interface name, what type you want, kernel or memif, and some description, and that's it. That's the very, very basic line. Of course you can do slightly different things here; maybe next time we will show the proxy, but that's what the client does today. And that's it. If there aren't any questions, I will stop sharing and we can move on to the next thing. Has this already been merged in, or is this still in progress? No, it's merged.

Any other questions, or do we move on to the OpenTracing demo? Andre is up next to show us the OpenTracing work. Yes, let me please share my screen. So basically what we have is the OpenTracing server, which is deployed as part of our infrastructure in a pod, and all the gRPC calls are also instrumented within the code, in a very simple way. Do you see my screen? Okay, yeah. We just need to add a couple of interceptors to the gRPC servers and interceptors to the gRPC clients. After the infrastructure is deployed, we need to forward the OpenTracing server ports, and then we can play with OpenTracing. There's a request to increase the font size on the terminal, if possible. Yes, I'll try.

Okay, so we have forwarded the ports. Let's see some nice traces; they're displayed in this form. You can also see the payload of the call, like request parameters and responses, and also errors; here, something was probably not deployed on my machine. That's it. Yeah, this should make debugging quite a bit easier. I know before we were scraping through logs trying to correlate things to figure out what's going on.
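The interceptor idea, wrapping every call in a timed span, can be sketched without the real gRPC or Jaeger libraries. Everything below (the `span` type, the `traced` wrapper, the operation name) is illustrative, standing in for what the actual OpenTracing interceptors do around each gRPC call:

```go
package main

import (
	"fmt"
	"time"
)

// span is a toy stand-in for an OpenTracing span: an operation name,
// a start time, and a recorded duration.
type span struct {
	op       string
	start    time.Time
	duration time.Duration
}

// handler is whatever the interceptor wraps, e.g. one gRPC method.
type handler func(req string) (string, error)

// traced plays the role of a unary interceptor: it opens a span,
// invokes the wrapped handler, records the elapsed time, and reports
// the span to a collector.
func traced(op string, h handler, report func(span)) handler {
	return func(req string) (string, error) {
		s := span{op: op, start: time.Now()}
		resp, err := h(req)
		s.duration = time.Since(s.start)
		report(s)
		return resp, err
	}
}

func main() {
	var collected []span
	h := traced("NetworkService/Request",
		func(req string) (string, error) { return "ok:" + req, nil },
		func(s span) { collected = append(collected, s) })

	resp, _ := h("conn-1")
	fmt.Println(resp, len(collected), collected[0].op)
}
```

Because the wrapping happens once, at the point where the server and client are constructed, every call gets a span with its payload and timing for free, which is exactly why the demo only needed "a couple of interceptors."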
So from here we can actually track these things, and we can also get a sense of how long things are taking; it's super helpful. Okay, where are we picking up latency in the system? So this is overall really exciting.

A question: I know that with OpenTracing we can note the time for inner services, but what about other metrics, like, say, how many attempts it took to start a service, or failures, those kinds of information? Can we do that? Yes, with a little code, but not automatically.

A parallel question: obviously being able to look at this stuff live is good, but are there plans to come up with some longer-term debugging tools too, like some log-scraping tools, things like that? Cool, so I think you can actually dump the results of these traces into longer-term storage as well. The thing is, think of logging and tracing this way: logging is a linear collection of events in time coming off of a particular container. Lots of events are happening, but not all of them are directly related; you just have a time ordering of them. Tracing gives you a time-ordered set of events related to particular sets of calls. So it's actually quite a bit better for figuring out, okay, we have errors, what happened in those errors? You're kind of hopeless simply scraping logs in that case, or at least it's a hugely greater amount of work. So that ends up being super useful. And I think the other question was sort of about metrics, right, capturing general metrics on the system, like how many failures we see, etc.
And for that I think we probably want to integrate with something like Prometheus. That was going to be my next question: you want something like a time-series database to consume all this information, right? Like, if you want to get operations groups across the industry to buy into this, they need visibility tools, and they need to be able to do forensics when things go terribly wrong.

Well, this is actually one of the reasons why Jaeger is really cool, because there's a whole community of people working on things like the back ends to store these things in, so that you can keep longer-term runs of them and then poke at them and so forth. So OpenTracing and Jaeger give you a really broad community of tools, and since we're literally just outputting OpenTracing records to Jaeger, you have access to all of those tools from an operational point of view. In fact, the way OpenTracing works, you don't necessarily have to use Jaeger; you should be able to switch quite easily to something that's not Jaeger, if that's your preference. There's more than one kind of tool out there. Yeah, I know there are a bunch of different tools available; Jaeger just happens to be one that's super popular, and it ends up being fairly easy to instrument things that way.

So, Jeff, I would think of it this way: it's kind of useful from an operational perspective, but you can see that what you're getting is traces of the internal bits and pieces, for the most part. Maybe not the ops teams, but the engineering team that backs them up could probably make use of that. I think you would probably enable this when you need to diagnose an issue that has defeated your ops team, rather than necessarily handing it over to the ops team. No, I agree a hundred percent, and it was more of a nebulous question. I mean, this is a great tool for the stuff that I want to look at; I'm just curious.
Is the long-term vision going to be a bring-your-own type of thing? You know, OpenTracing's definition is these API calls for when you want to trace something, so theoretically that means you can bring your own if you want to, or you can simply use Jaeger. Because, as Ed was saying, there's an output phase of this, where you're basically taking the trace and storing it, and then an input phase, where you're taking the store and turning it into something a human can read, so you can do this multiple ways round. It depends how much time you want to spend doing that sort of thing, or whether the default will do for you.

Right, that's what I'm asking: are we looking at a more robust default? I totally get the underlying technology and what we're trying to accomplish here. I'm just asking, when I try to sell this internally, is this kind of a some-assembly-required type of mentality, or is the long-term vision to have stuff a little more ready out of the box for NSM?

For this one, if you were going to consume it in production, I'd be inclined to think you would turn it on when you needed it, on for specific requests. And I've seen that done in OpenStack, for instance, where enabling logs is on a per-API basis rather than — sorry, enabling traces is on a per-API basis rather than across the board. So there are perhaps patterns we could steal. The other thing is, how expensive is the tracing?
Because, and this is something again that operators are going to have to decide for themselves, my general attitude is, if the tracing is cheap enough, just leave it all on. Yeah, my preference when it comes to the tooling is to ship with something that we consider to be usable for the general community, and of course you can always replace it with something that meets your needs if it doesn't. So my hope is that we end up with: you start Network Service Mesh, and it also includes everything else that you need in order to be operational, which includes some form of a system for logging, and to perform this tracing, and to be able to analyze the traces. But you'd have the ability to say, no, I don't want your tracing thing, I'll bring my own, because we have something that we've bought or built in house. By the way, yeah, that's exactly what I was asking.

Note that Jaeger already supports Elasticsearch, and Cassandra, as storage backends, if you want to say, okay, I'm writing all the traces out into a place where I can then access them in all kinds of interesting ways.

Just so you know, from an end-user perspective, being able to have this activated by default, maybe not always on, but being able to do it, is a given. And from our viewpoint, we really need to push so that every application is able to leverage OpenTracing and supports it, which is not the case yet. So if we push it forward, that might be an incentive for applications and workloads to be more OpenTracing-compliant. Yep.

I do actually have one question for the folks in the room who are operators looking at deploying Network Service Mesh. One of the things I've been toying with in my head is, right now we're wrapping the spans in the trace around gRPC calls, so each of these little lines represents the span of a gRPC call.
With the stuff that Nikolai did for putting together the composites, we could in principle give you a span across the composite. So when you drill down into something, you could see, okay, here is where it connected to VPP, here is where it applied an ACL, here is where the cross-connect happened, and you could see those pieces as sort of child spans as well. Do you think that might be useful, seeing some internal spans as well? Yes. When we have application failures, and we've seen this with, for example, home gateways and things like that, being able to have access to all that data is what actually enables faster problem resolution.

Cool. And then I presume you would also want sort of aggregate metrics on some of these things. Right now we've got, you know, the NSMD making a call, and we can see here the time it took, 158.96 milliseconds in this case, and I presume you'd also like to have something that tells you, okay, here are the statistics on the latency for those calls over time. Oh yeah, all that data ends up at some point in the data lake, so we're able to cross-reference it. Awesome.

Let's share the agenda again. Yep, let's go ahead and move on; I'll jump straight into the next topic. So, we have a repository move, and it has been successful.
It was a relatively uneventful event. We performed the move, and some of the tools just worked, like CircleCI. The only major change we had to make is that the linter expects the Git repo URL to match when you do an import, so we had to do a bunch of renaming of things. Beyond that, we haven't run into any other issues since then, so if you run into any problems due to the repo move, definitely let us know so we can get them fixed.

We have also moved over from go dep to Go modules. This has a couple of functional changes: if you have the Git repo checked out, there are a couple of steps you'll potentially have to take; I'll add documentation on this in the repository. First, while GOPATH is still there, the source code does not generally live in your GOPATH anymore. You just check it out where you would normally check it out, as with any other language. When you do go build, it will see the existence of a go.mod file and will then download the versions listed within that go.mod, which acts as a manifest, and it will just compile and build at that point. This should make it a lot easier to onboard people, since we don't have to explain "create your GOPATH this way, download in this specific way," and so on.
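The new checkout-and-build flow described above looks roughly like this. The repo URL and paths are illustrative, and the last step only applies if you keep the checkout inside your GOPATH; check the repository's own docs for the authoritative steps:

```shell
# Clone anywhere -- the source no longer needs to live under GOPATH.
git clone https://github.com/networkservicemesh/networkservicemesh.git
cd networkservicemesh

# go build sees go.mod and fetches the dependency versions it pins.
go build ./...

# Only if the checkout still lives *inside* your GOPATH do you need
# to force module mode on (the value is "on", not "yes"):
GO111MODULE=on go build ./...
```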
Everything should just work and build from that point on. We've already retooled the build as well, so the continuous integration system is now building on top of Go modules; the entire repo is now on modules. If you want to leave it in your GOPATH, then for that specific repo you'll want to turn on a flag: there's a GO111MODULE environment variable that you can set to on, not "yes", but "on". When you set that, it tells Go modules to turn on, and then it'll work. Again, I'll add all of this into a document; I just want to let people know the implications. This should make the overall build process much simpler, and we were also able to remove our vendor directory, which knocks about a million lines of code off of the repo.

So, other than that, if anyone has any questions on this, definitely let us know. Ping us in the chat room if you have any problems and we'll help you through it. One last thing: make sure you update your version of Go to the latest version if you are compiling it on your system. The latest version is 1.11.4. There are differences between 1.11.2 and 1.11.4, and those differences may break your build on any Go module project, so make sure you update. Beyond that, that's pretty much the entirety of the announcements. Are there any questions?

Cool, barring any questions, off to Ed for the GKE progress. Yeah, so I spent some of this weekend beating on trying to get Network Service Mesh working on GKE, and so far the one interesting thing I've encountered there, and we've already merged the fix for it, is that normally, when we're doing kernel interfaces in Network Service Mesh thus far,
we've been using a facility called tapv2, which uses /dev/vhost-net. It turns out in GKE that unless you deviate from the default behavior and run your GKE cluster on Ubuntu, /dev/vhost-net is not available on the basic images that GKE runs with. So we've merged a patch now: if that device file is not available, it will fall back to veth pairs plus AF_PACKET. Now, this is slower, but there's a strong desire to make sure that things always work first, and then you can work on the things you would have to do to make them work faster. So this moves us towards the always-works behavior. Do folks have any questions so far? I'm still sort of plowing through it; we've got quite a few assumptions that we've got to work out in the process.

Yeah, this is John. Side question: do you have to muck with ethtool and TX offload to get AF_PACKET to work? Apparently not, if it's not ringing any bells. We've seen some issues with Google and AF_PACKET, so I'm trying to track it down; I just wanted to hear about outside experience with it. Yeah, so far I've not been seeing issues with AF_PACKET. Okay. Doesn't mean I won't; you may just be further along than I am. So far I'm not hitting anything. Okay; we had UDP checksum errors. Oh, you're hitting checksum errors with AF_PACKET? I can tell you probably what those are; I'll get you offline. That's an old favorite with DHCP, if you've seen that.
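For anyone who does hit the veth/AF_PACKET checksum problem, the workaround John is alluding to is the usual one: disabling TX checksum offload on the interface with ethtool. The interface name here is illustrative, and whether you need this at all depends on your setup:

```shell
# Disable TX checksum offload on one end of the veth pair, so the
# kernel actually fills in checksums instead of leaving them for the
# (nonexistent) hardware -- which raw AF_PACKET readers trip over.
ethtool -K veth0 tx off

# Inspect the current offload settings.
ethtool -k veth0 | grep checksum
```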
I mean, it used to be that the DHCP daemon would manually check the UDP checksum, which didn't fly very well when it wasn't being calculated, because kernel veth interfaces generally don't calculate it. I don't know whether that's your circumstance or whether it's something else.

Yeah, so here's the really quick version, John. A veth pair, in its infinite wisdom, and this is actually smart from the kernel's point of view: there's no reason to get the checksum right across the veth pair, because it's the same host on both ends, and computing it is work. And so if you stick AF_PACKET on one or the other end of a veth pair, and you are not also smart enough to ignore the fact that the kernel is just not going to get the checksum right, you will have issues. Yeah, I'm trying to figure it out. Okay, I'll get you on chat. That's true for a veth pair as well, Ed, so I'm not sure that would necessarily come up as a distinction. Well, with a veth pair, if you're just using kernel interfaces, you would never see it that way; you only see it when you use something like AF_PACKET or tcpdump. Yeah, if you read and write raw packets, which you can do with a veth pair, but yes, fine. It's one of those things that was super clever on the part of the kernel folks, and it costs pain for those of us whose needs differ from the average case.

Anything else on GKE stuff before we move on? Cool. Nikolai, do you want to take the share again on the roadmap stuff, or would you like me to drive? No, you can just open the page. I don't know if we have any big updates here, I don't feel like we do, but maybe we can quickly reiterate what's in it. Yeah, so this was just us trying, as a community, and I would appreciate folks in the community adding to this list, to capture first a brainstorm of the things we think we want to do in 2019, right?
And then to try and work out a basic roadmap. I think, for example, doing a release for KubeCon EU is a super good idea. And then there were various discussions about what cadence we want after that, and the things that folks were wanting to work on for inclusion. I know that we've got various people who are interested in different things. For example, I know we have folks here interested in SRv6, which means that if we can find someone to work on SRv6, I would love to have that in this release. But if you go ahead and add the things that you think need to be there as table stakes for KubeCon EU, then we can have a sensible discussion around them and figure out who is interested in actually working on them.

A lot of these are very cool, like the auto-reconnect stuff for auto-healing. The telemetry and IOAM stuff is super cool, because that would hopefully allow us to carry the debugging that you just saw in OpenTracing all the way down to L3. So you could actually give users the ability to say, okay, gee, I'm getting really slow connectivity on this gRPC call, what's going on? Oh yeah, on the third hop through the network a lot of packets are being dropped. That's bad. That kind of stuff.

How about putting SRv6 under telco features for 0.9, as a hoped-for goal? If you folks are willing to wait that long; I know that we have some people on the call who are very anxious.
I know that we have some people on the call who are um, very anxious to Yeah, I do a plus one on this one I'm personally a huge fan of srv6 So like it makes me happy and I would like to get it in sooner rather than later It's just a matter of getting various people to work on things This is where I keep wanting to talk architecture though because I think we're comparing two things that don't relate to each other sri of e and srv6 Are not the same thing by any stretch of the imagination And srv6 shouldn't be something we have to build into nsm. It's something we should build into a service that nsm can run So i'm a little confused how we've managed to I think we have a problem with separating the layers out here Okay And I think it's a problem we need to talk about because it's not going to get better without talking about it Okay, I think we do have an item further down the agenda for architecture discussion Yeah, it was on last week and we didn't make it last week though. So we'll see whether we do this week Okay Anything else on the road back to gili or Well, just just a call for participation. I mean any ideas notes Whatever these are just examples. I mean it's not set in stone that we need to do this release and it should be 0.1 should be for kubecon. It might be a good idea, but It's it's up to us to decide so the more input we get the better One of the things I would really throw out there is that's the folks in the call Who are actually from operators who are looking to deploy this stuff, right? 
And I presume that, looking to deploy this stuff, you're looking to get it working so you can show it to people. If you could capture some of that, even if it's just pulling a use case out and figuring out what features need to be present to meet that use case, that would also be super helpful, because it gives us something to run towards. It doesn't have to be a hugely fleshed-out use case; things like "we'd like to be able to use SRv6 as a remote mechanism" are certainly the kind of very clear need that people may have.

Cool. The next topic was how to improve onboarding of new community members. So, we have a bunch of you who are new to the community, and many of you, I know, have commented on a desire to actually get your hands dirty. So I wanted to get a sense of what kinds of things would actually make that easier, because we want to make sure that we get the right things into the hands of new community members so they can start contributing.

Yeah, so basically I just got on board a couple of weeks ago, so for the past week I was deploying stuff, and there's one thing I noticed. I talked about this with Nikolai a little earlier today, while most of you were sleeping. So, I noticed something in the quick start.
So I noticed that in the quick start The instructions, um, I found some minor change minor changes or that may cause issues or difficulties for the New community members to get their hands on um, so I don't know whether Because I still need to verify where you got is a general problem or just just me that have that have the problem is uh Apparently if that's the case, um, I don't know whether it is necessary for us to make the changes so that we can always deliver the Most up to the date and the right instruction for deploy the I would say that's actually super important because You know, you want people to be able to come in and be successful out of the game So getting the quick start right and and you know And the thing is even if you happen to have stumbled into a particular corner that most people that stumbled into Um, we still want to document that because you're not going to be the only one who stumbles into that corner Right, so if you know eight out of ten people don't hit the problem you're hitting We still want to get a documented in the quick start. So in the two out of ten do Uh, that they don't get stuck cool So so basically some of my uh, just uh, encourage me is to Like say to open an issue or pull request for that So I don't know, um, any any suggestions for me to do those kind of updates or catch something So documentation pull requests are always super welcome Um, also feel free to open an issue to track it if you'd like um, but it's it's sort of the you know The there's a unique value that you have as a new community member that you will never have again And that is you have no idea what's going on And that's an incredibly important resource that I literally can't reproduce neither can make a lie neither Neither can uh, Frederick and so capturing the kinds of things that you find confusing and turning those into documentation pull requests Is unbelievably helpful. I will stay hungry. Stay foolish, right? 
Absolutely, it's important to stay that way. Cool, thank you. We have ways to become a beginner again, but none of them are pleasant. True, true.

Sorry, were you going over what you were going to say? Yeah, I was going to ask: what other things are newcomers looking for that would make it easier to get up and going and contributing? And once you're comfortable with the repo, bring a friend, so we can keep renewing that resource.

I just think that, and I don't know if this is exactly the first thing that a newcomer will look for, but if we have a clear roadmap with some issues, saying, okay, we want these and these things implemented, even broken down, if possible, into smaller things that people can grab right out of the door: you come to the project, read the quick start, and start doing things.

Okay, we should probably stop here, but maybe we can come back to it at some point. In fact, something that may end up being useful down the line is eventually somebody taking up the mantle of helping people who are new. That could mean: own the documentation, own the quick start process, make sure it works, and try to work out the kinds of things that could be made easier for people joining in, and actually own that as a thing that that person does. Right now I don't think any of us have the resources to take that on, but as the community grows, that's definitely something we want somebody taking on.

Sorry, go on. No, it's just that I know we've got a lot of newcomers here; we've heard from one of them, so it would be good to hear from more folks who are trying to get their hands dirty. Yeah, we don't judge, so feel free to speak up.

Speaking as someone who's not a newcomer, but who is less than fully full-time on this one:
I think the trick is: how can people contribute to the community if they're not able to be full-time developers on this? Some of us, or some newcomers, might want to contribute, but due to other commitments, for example end-to-end workloads, we might not be able to be full-time on NSM. We still want to be able to make progress, because we want this to happen. So how can we make that happen? How can we be part-time contributors and make selective contributions?

Okay, are you thinking about contributions in code, or in the more general sense? Both, I think. My mindset is: if we have use cases, can we work on those part-time, going into the use case and maybe even adding code? That would be a good thing. Sometimes it might be more architecture or use-case work, and sometimes it might be code. But that's the question: how can we be selective, and not have to take on all the pieces just to be able to deliver one piece of the puzzle?

I think that point about use cases is a significant one, because it feels to me that at the moment we're kind of choosing the use cases that the code is able to implement, rather than actually writing the use cases down first. So yeah, use case documentation would be fantastic.

Cool. Well, that would be absolutely awesome, and we have places we can accumulate it. You can certainly send patches to the docs directory in the repo. Or, if we want to start a use cases page on the website, that's also basically done in markdown, and it's a git repo you can send patches to. And then, even if all you're really feeling up to is going into Google Docs and putting together a set of slides for your use case, we can link to that if you make it public. So there's a lot that can be done there to help draw out the use cases. I think,
particularly, and this is a personal opinion, that if the use cases are basically the thing that drives the requirements, then having the use cases in the repo, so that you can say either "this code is usable against this use case" or "there's a bug against it", is kind of useful. But that's a personal opinion. The problem with use cases that are not tracked is that you end up writing code that's true of the use case document as of a certain point in time, but not of the use case document as it currently exists. So you have to be a little bit careful about that.

Yeah, I'm all in favor of putting things in the repo, or even writing issues in GitHub for use cases. I was just thinking in terms of: I would rather have a deck thrown together in Google Docs, because that's what the person writing the use case can face doing right now, than have nothing, right? Yeah, no, absolutely. To be fair, Google Docs is a fantastic place to actually get this prototyped, because it means we can all have a go at it, which means you can ask opinions and get feedback, and you don't have to take notes of what people are saying; you can just let them fix it.

Yeah, so let me put you slightly on the spot here, Daniel. If you could capture, in just a few bullets in a Google Doc or somewhere, the vision I know you have for an SRv6 use case, or some simple version thereof, that would be super helpful for figuring out, okay, what needs to happen for that, and when would we like to be able to do it?

Perfect. And as a comment, I would say that the SDK point that Nikolai added and developed, although maybe not complete,
I think does help part-time contribution, if you want to dig into a specific use case, versus having to understand the full breadth of all the NSM code.

Another thing that is going to be very helpful in the near future: as we get the roadmap and start to produce actual architectural design documents from it, to help people join in and start helping, any kind of review or comments on those, especially for use cases you care about, if they affect your use case, will be immensely helpful. The three of us are trying our best to keep it as flexible as possible, but having more context from other people on how they actually want to use it helps tremendously. And there are also experts in various technologies on this call who can help us get the semantics right and get the details right. So even if you're not able to contribute full-time on this, even being able to spend an hour or two of your time reviewing those documents and commenting on them would be immensely helpful. Good to know.

Yep, cool. So, one of the things that I've heard a couple of people complain about on a number of occasions, and that I'm also hoping will be easier soon: right now we've got vagrant setups that people can stand up to do development and test things, and that's great, but I have heard people complaining that running a couple of Kubernetes VMs to run all of this on can exercise the fans a little bit on the laptop. So hopefully, as part of the GKE work, we'll have an option to offload all of this to the cloud.
Up to and including the image builds, which hopefully will also make things much easier, because then it's "have GCP account, will travel". And of course we're open to doing the same for AWS and Azure as well, if folks are interested in working on that.

We were also talking to Taylor and others yesterday on precisely this subject, and rather than reinvent the wheel, I wonder whether we can use some of the CI stuff, so that if we change our own deployment system, we improve the CI at the same time, because obviously it has to do exactly the same job.

Yeah, that's how the mechanisms work. I actually debugged a bunch of the Packet stuff, back when we were first getting Packet online, by effectively just running make commands and targeting the Packet systems rather than my local vagrant systems. So that's totally possible. The one thing we'll have to do is be careful to make sure that continues to work, because it is easy to break.

Yeah, but again, what I was talking about with the other folks was that we're maybe lumping things together, when at least one of these things we should be consuming from outside. We don't need to know how to bring virtual machines up; we need something that will do that for us, and then from there we can take over. And technically speaking, we don't need to know how to install Kubernetes either. That's not necessarily something that should live in our repo.
We should start with the bit that's NSM-specific, and use a library for the other half, if we can find one or build one. Yeah, the two are separate; we tie them together at the top with the Makefile, but they are separate. But they're separate in our repository, is my point. Again, we're not the only people who need a Kubernetes cluster spun up; that code exists elsewhere. Yeah, that's true. We do have a fair bit of modularity around all this in the make files right now; they were sort of designed so that people could pick their poison.

I do apologize, I've got a hard stop at the end of the hour, so I have to drop off. Folks should feel free to remain as long as they'd like, but do remember to sign off at the end, because that's what triggers the flushing of all of this to YouTube; that way we don't get 30 minutes of oddly silent time at the end of the video. Cool.

So we have a little bit of extra time left. I think we should drop the main agenda, and if anyone has any comments or questions that are more freeform, for the rest of the hour we can definitely do that. And with that, thank you everyone for attending the Network Service Mesh meeting. Meeting time will be the same time next week, and we'll make sure to put some of the architecture topics at the beginning of the meeting so we can talk about them. So with that, thank you very much for attending. Thanks, everyone.

Okay, there are no questions, so I'm going to hop off as well. Take care.