I don't actually have any agenda items. Is there anything that anyone wanted to talk about? I have a question I'll ask that I could probably just ask on GitHub instead, but do you know? We're all sitting here, fire away. The adaptive concurrency thing? Yeah. It's one of your coworkers working on it, right? Yeah, yeah. Sorry, go ahead. I'm just curious whether there's a timeline for it, or is it just, when it gets done, it's done? So we just merged the concurrency controller, which is the hard part. He's working on the next set of PRs; he has a couple more PRs to do. There's stats, and then there's the full thing wired up with integration tests. He's working on it as his first priority, so I want to say we're like two weeks away. And then our plan is to get it into staging pretty quickly and get some traffic mileage on it. I'm sure there's going to be issues, but we're really excited about it. Sorry, for people that don't know: Lyft is porting an adaptive concurrency system that Netflix has used for their internal microservice architecture for quite some time. The big TL;DR is that it's basically a circuit breaker controller that works like TCP. That's the easiest way to think about it. It detects failures and latency, then backs off, then does slow start. It's a little more complicated than that, but that's basically the way it works. So in a good situation, it avoids manual configuration of circuit breakers, which tend to go out of date. For high throughput services, Netflix has found it to be very effective for auto-tuning circuit breakers. So we've basically been working with Netflix, took their algorithms and their papers, and are porting it into a filter in Envoy, as part of our effort to generally reduce the number of Envoy knobs that people have to configure, which are very confusing. So we're very excited about this; I think it's gonna be great, from talking to Netflix. For reasons that we don't actually understand yet, because we haven't deployed it, it doesn't work for all services; I think the algorithms don't tend to work as well if the traffic patterns are bursty or irregular. But they have found it to be extremely effective for high throughput services that tend to be fairly steady state and not bursty. And those are the services that tend to self-DoS anyway. So they have found it very effective for reducing most of the self-DoS type situations for high throughput services. So yeah, we're hoping to get that out soon, and then there'll be a long deployment and bake time. But if there are other people out there interested in helping to test that or contribute: I think what we're trying to do is get the basic version working, and then there's quite a bit of future work in terms of either better or more modern algorithms, or being able to bifurcate the concurrency controls based on cluster or priority, et cetera. It's that part that someone here is particularly interested in. Yeah, that's the kind of thing where, if you wanted to shoot me an email, I can introduce you to Tony and connect whoever is interested, and they can chat about it, or feel free just to talk about it on GitHub. Okay, yeah.
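To make the TCP analogy concrete, here is a minimal sketch, in Python, of a gradient-style adaptive limit, loosely modeled on Netflix's published concurrency-limits idea. Every name here is illustrative; this is not the actual Envoy filter code, just the back-off and slow-start intuition.

    class GradientConcurrencyLimit:
        """Loose sketch of a TCP-like adaptive concurrency limit."""

        def __init__(self, initial_limit=20, min_limit=1, max_limit=1000):
            self.limit = float(initial_limit)
            self.min_limit = min_limit
            self.max_limit = max_limit
            self.min_rtt = None  # lowest observed RTT: the "uncongested" baseline

        def on_sample(self, rtt_seconds):
            # Track the long-term minimum RTT, analogous to how TCP
            # congestion control estimates the base round-trip time.
            if self.min_rtt is None or rtt_seconds < self.min_rtt:
                self.min_rtt = rtt_seconds
            # Gradient: near-baseline latency gives ~1.0 and the limit creeps
            # up (slow start); rising latency gives <1.0 and the limit backs off.
            gradient = max(0.5, min(1.0, self.min_rtt / rtt_seconds))
            new_limit = gradient * self.limit + self.limit ** 0.5  # headroom to grow
            # Smooth the adjustment so one noisy sample cannot whipsaw the limit.
            self.limit = 0.8 * self.limit + 0.2 * new_limit
            self.limit = max(self.min_limit, min(self.max_limit, self.limit))

        def can_send(self, in_flight):
            # Requests beyond the current limit would be circuit-broken.
            return in_flight < self.limit

    limiter = GradientConcurrencyLimit()
    limiter.on_sample(0.010)  # fast sample: limit creeps upward
    limiter.on_sample(0.050)  # slow sample: limit backs off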
Wait, I think someone just asked about this on GitHub, is that that person? Yeah, he's, yeah. Okay. Matthew McKean is his name. Yeah, okay. So we can do it on GitHub, or if they want to talk to Tony directly, we can make email intros. He seems pretty hot for it. So we'll wait, I guess, until you get the basic version landing. Yeah, sure. Maybe we'll contact you, or we'll just do it online, either way. Sure, yeah. We would obviously like any additional testing and validation; that would be fantastic. Sure. Thanks. Sure. Anything else? Does anyone else have things they'd like to talk about? I actually wanted to ask, is that Bill Rowe I see, or William Rowe? Bill Rowe. The problem that you're having with the proto generation and command-line length? Yeah. So, and I don't know how this will go, it's the first time I'm trying to use the app on an Android phone. Anyway, the issue that we're having is exactly the problem that we originally had with link.exe and with cl.exe on Windows: we only have 8K, or now 32K, of command-line args, and we have an absurd number of includes to list. It turns out protoc has always supported taking its command-line arguments as an args-file input. So if we can get that wired up, basically at that point we are ready to run a whole bunch of fuzzing and other proto validation that can't be run right now on Windows. I think those changes have to get made in the upstream protobuf-related projects, if I'm not mistaken. That's sort of the conclusion we came to for the OS X bug. And then I think it eventually got fixed, and we never circled back and closed the bug or reported on it there. But I think the problem was protoc-gen-validate, or protoc, whatever it is, the validation one, that was producing the huge command lines. I'm still, by the way, trying to get someone from Microsoft to actually help with this, but not having much luck yet. I mean, this is just straight-up Bazel and... No, sure. It would just be really nice. Hi, Microsoft, if you're out there, it would be great if you could help. So maybe one day. Anyway, I just wanted to let you know, because I think that's where the changes have to get made to fix this. And I think maybe we lost him. Oh. The one nice thing, for when he comes back or listens to this, is that I think now that we're on Azure Pipelines, it should be really easy to get a Windows CI. So I think we could get that going very quickly. Right, just to back up for a moment. When I was looking at Bazel, and not only core Bazel but PGV, it seems like the work that needed to be done to use command-line args files has already been done for Java and PGV, in some of the built-in logic. So like you said, there are a number of these. We have our own Python PGV generator macros that we rolled ourselves. If we can get those mainstreamed onto the typical implementation that's already baked into Bazel, then the work's all done for us. So that was my impression, just looking at the source code two weeks ago.
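For readers following along, the "baked-in logic" referred to here is Bazel's param-file support, which a rule can opt into so that oversized command lines get spilled to a file and passed as @<file>, the convention protoc understands. A hypothetical Starlark sketch (Starlark shares Python syntax); the rule implementation and attribute names are made up:

    def _pgv_gen_impl(ctx):
        # Hypothetical rule implementation for a protoc-based generator.
        out = ctx.actions.declare_file(ctx.label.name + ".pb.validate.cc")
        args = ctx.actions.args()
        args.add_all(ctx.attr.includes, format_each = "-I%s")
        args.add_all(ctx.files.srcs)
        # If the assembled command line would exceed the OS limit (8K or
        # 32K on Windows), Bazel writes the arguments to a params file
        # and passes "@<file>" to the tool instead.
        args.use_param_file("@%s")
        args.set_param_file_format("multiline")
        ctx.actions.run(
            executable = ctx.executable._protoc,
            arguments = [args],
            inputs = ctx.files.srcs,
            outputs = [out],
        )
        return [DefaultInfo(files = depset([out]))]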
As far as Azure, yes, I totally expect Microsoft to support that. And I just wanted to point out, just to give you an idea of Microsoft's attitude toward all this: Visual Studio 2019 actually came baked in with Ninja, came baked in with CMake. They're actually shipping most of the third-party tooling, and I see no reason why, in the next iteration, there might not even be Bazel built into it. Yeah, it's more that they have an ongoing port of Envoy that works on Windows, so I'm just trying to work offline to get them to help with the open source side of things. That's awesome. Which would be really nice. I was gonna tell you, on the Bazel front, and this would be my personal strategy, but you can obviously approach it however you want: honestly, when most people see the type of bugs that you're opening with Bazel, their eyes glaze over and they just run away screaming. It would probably be easier, for some of the specific issues, if I can potentially help route them. I think we can maybe get you to people who can help you a little more directly, just because when you open those kinds of issues or ask about them in Slack, I don't think anyone's actually gonna answer. Well, actually, Lizan's done a great job of routing me throughout this, in terms of redirecting me to the PGV sub-project, the Bazel sub-project. Same with security: again, a great job routing issues. It has been productive, and I'm just leading every question with: which is the right forum, which is the best project. Right, it's more that for some of the more arcane issues, I don't know that even within the Envoy community or Envoy Slack you're gonna get a great answer, and we may have to route you to people that actually work on Bazel. We understand that; we knew that going in. Okay, but what I'd recommend is that for some of the more arcane stuff, I would time-box trying to figure it out and randomly asking people, and if you feel like you're blocked, maybe just reach out to me or Lizan over email, or both of us, and between the two of us we can probably find someone somewhere who can help you. Okay, thank you. Sure. I have something else I wanted to bring up. Sure. So I issued a PR to add an endpoint that would help us see whether or not, in the conversion to stat symbol tables, we still have potential points of contention. And I guess this will only be useful if somebody actually runs it on live traffic at some point. I wonder who might be willing to help do that? Of course that PR has to be iterated on and merged first. Don't you run a cloud service? We do, but we don't have a lot of problems yet. We can do that on ours, but I don't think it'll be that interesting. Yeah, we can potentially do that. I have so many irons in the fire right now that I can't promise when, but I think there are many people out there that could probably help with some smoke testing. So, you know, this is kind of an interesting topic that's worth thinking about: I feel like every month or so we find ourselves with a change, whether it's the buffer change or this change or something else, which is sufficiently scary that we would request that some people do some smoke testing. And I wonder if we could have some better process around this, versus just randomly asking people. I'm not sure what that would be, but I think it's worth thinking about what that would look like. Yeah, and we could ostensibly bribe people with, again, early access to security things or maintainer perks, some of the things that you get for contributing: like, okay, we agree you're willing to burn a bunch of cycles doing this, so you get a little bit more potential sway. I think that's a great idea.
I think that's something for maybe when Harvey's back from vacation, since he seems to be working on most of this. I think Piotr's also on vacation. Maybe when the two of them are back, since they seem to do most of the security policy work. I really like that idea of potentially bribing people to get onto the early distributor list if they're willing to do smoke testing of potentially scary things. Especially since we need early testing for the security stuff anyway, and then it could be used for other things too. Yeah, and it's worth thinking about, just in the sense that we obviously want people who are going to drive enough traffic through it that it's worthwhile. So we need to think about what that would look like. But I do like that idea: they do some work, but they get something pretty nice in return. I think that could help us a lot. Okay, let me make a note, actually. I'm just going to open a GitHub issue on this, and maybe we can do this discussion in public. Sounds good. Yeah, I love that idea a lot. Hold on a second, let me take a note. Yeah, but in terms of this immediate one, maybe the best thing to do for now would just be to email envoy-dev or something. That would at least reach people over email; they might see it, and we could try to find some people. Sure, I'll also see if I have contacts in Istio that can do this. Yeah, I actually haven't looked at that PR yet. My only concern about the admin-only solution is: are there contention stats too? Just because I feel like with only an admin endpoint, no one's going to look at it or notice, you know? Having the admin endpoint for details is obviously fantastic; it just seems like you'd need some higher-level signal that you could monitor, to tell you that maybe you're having some problem. We do have the contention stat, as you noted. It doesn't give you a lot of drill-down information. But that contention stat is only there if you're running in contention mode, right? And you can't even tell. What do you mean by contention mode? Sorry, I think we're talking about two different things. We're not talking about mutex contention; we're talking about something different. Right, so there's the /contention admin endpoint, which actually has the same problem as any admin endpoint: you have to go look at it. But it will tell you if you actually have mutex contention in your live traffic, independent of where it comes from. So that is at least a signal that is available that you could build into some kind of monitoring. Then we could, well, I had considered adding some logging as well, which I can still do, maybe. I thought it would be pretty easy to add logging once we had the basic data there. Yeah, and on the logging point, we also have this thing come up every month, which is: wouldn't it be nice if we had a log type that logged only every one second, or every X seconds. It's very easy to implement; it's just that no one's ever done it. So that would be, I think, another useful thing potentially. Yeah, maybe that would be a motivation. So just doing that basic log primitive, where we could do rate limiting on an individual log basis, would be a generally useful thing within Envoy. So if that would be useful for this effort, it seems like a pretty low-hanging-fruit thing to implement. Sure, that sounds good. I think that could be a follow-on to the current PR. Yeah, all right, let me look through your current PR.
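A minimal sketch of that rate-limited log primitive, in Python rather than Envoy's C++ logging macros; the names are hypothetical. The idea is just per-callsite state: emit at most once per interval and count what was suppressed.

    import time

    class RateLimitedLogger:
        def __init__(self, sink, min_interval_seconds=1.0):
            self._sink = sink                  # e.g. print, or a real logger
            self._min_interval = min_interval_seconds
            self._last_emit = {}               # per-callsite time of last emit
            self._suppressed = {}              # per-callsite dropped count

        def log(self, key, message):
            now = time.monotonic()
            if now - self._last_emit.get(key, float("-inf")) >= self._min_interval:
                dropped = self._suppressed.pop(key, 0)
                note = " (%d similar suppressed)" % dropped if dropped else ""
                self._sink(message + note)
                self._last_emit[key] = now
            else:
                self._suppressed[key] = self._suppressed.get(key, 0) + 1

    log = RateLimitedLogger(print)
    for _ in range(1000):
        # First call emits; the rest within the 1-second window are counted
        # and summarized on the next emit.
        log.log("contention", "mutex contention detected")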
Yeah, I actually really love the way that everything is coming together with this stats stuff. I'll admit that it's still complicated and I'm a little confused. It's not clear to me what the performance cliffs are and how we would know about them. I understand that's the work that you're doing now; it's just that that's where I become a little hazy. When it's working well, I get how it works and it's pretty straightforward, but it's these edge cases that I'm not sure about. Yeah, fair point. I don't think we'll really know until we run it on massive multi-core machines, too. Yeah. So if in the end we decide this whole thing just isn't worth it, we can back that stuff out. I think along the way I found a lot more memory and performance improvements that are already in, that are independent of the actual symbolization. But the biggest thing is the symbolization. Wait, sorry, you're saying to potentially back out all of the symbolization, or just some of the built-in versus other stuff? I meant that if it looks like it's a complete disaster; at least along the way I found a whole bunch of other stuff to make smaller. Yeah, okay. But I'm hoping we don't have to back it out. I don't think we will either. The only thing that I'm curious about, again, is just some of the dynamic stat cases. And I do think that with better learning, either about how to determine some of the built-ins, or configure them, or, I'm not sure. But I feel like... Well, that's certainly where we are now: learning what we have to. Also, there was somebody who has a GitHub name that is a lot of random characters. That person's at Lyft, right? Yes, yeah. I'm reviewing their PR, which adds some stats to Redis. And when reviewing that, I proposed, and he or she agreed, that when we came across a Redis command we'd never heard of, we wouldn't dynamically generate a new stat for it; we would just call it unknown. Right. If we actually go with that kind of pattern, that reduces the risk a whole lot. Yeah, and that might be the way to start, right? Let's just do that and then see how it goes. And if people complain about it, we can iterate. That would certainly make me a lot more comfortable. Yeah, that way we would err on the side of maybe the stats don't give you the detail you want, but you would not run into... Yep. ...the hard case. Yeah, and that's something that, again, we can always change or make configurable later, but that certainly seems like a safer way to start. Yeah. The person, I really don't understand why he picked that GitHub name; it's like a hash of, I don't even know. But his name is Nick. If you want to talk to him directly, just let me know and I can make an intro. Okay, I think that PR is almost good to go. I just had some... Okay, all right. Cool.
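A tiny sketch of the "unknown" fallback pattern being described, with illustrative command names rather than the filter's actual list. Keeping the stat-name set fixed means every name can be interned in the symbol table up front, so arbitrary input can never mint unbounded dynamic stats.

    KNOWN_COMMANDS = frozenset(["get", "set", "del", "incr", "expire"])  # illustrative

    def stat_name_for_command(command: str) -> str:
        cmd = command.lower()
        # Unrecognized commands all share one pre-allocated "unknown" stat.
        return cmd if cmd in KNOWN_COMMANDS else "unknown"

    counters = {}
    for cmd in ["GET", "SET", "FLUSHALL", "not-a-real-command"]:
        name = "redis.command." + stat_name_for_command(cmd)
        counters[name] = counters.get(name, 0) + 1
    # counters == {"redis.command.get": 1, "redis.command.set": 1,
    #              "redis.command.unknown": 2}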
I've been trying hard to post the EnvoyCon 2019 schedule, which has permissions issues. So it's set. Sorry, what was that? Which schedule? The EnvoyCon schedule. Oh, it's posted? No, it's not public yet, but it will be soon, sorry. Oh, exciting. I'm excited. We have people on our team that want to register. We should have them register before that goes public. Oh. Then it'll instantly sell out. I have a calendar reminder for today to actually buy tickets, which got bumped to next week because there's never time for anything. Well, that may become impossible when the schedule goes out. I think there's still plenty of tickets. I think, though, that they're saying that when the schedule goes out, people will start buying tickets. So it's probably a good idea to get yours. Awesome. Anyone have anything else? I don't think so. Okay. See you guys. Talk to you later, bye. Bye.