We're going to do an unsanctioned karaoke session. No. Okay. Hello, everyone. My name is James Strong. I don't think it's moderated, so we can just get started. Yeah, yeah, we can. Awesome. I will self-moderate us. Gotcha. Welcome to our comedy session. Thank you. Open mic. No. I make jokes when I'm nervous, so be prepared.

Hello, everyone. My name is James Strong. I'm a Solutions Architect at Chainguard. What that means is I help people secure their software supply chains and look at the gaps in them, and I also help maintain ingress-nginx. And I wrote a book about networking in Kubernetes, but please don't ask me about your iptables.

Hey, my name is Ricardo. I am one of the maintainers as well. I am a software engineer at VMware. So when I'm not creating bugs in ingress-nginx, I'm creating bugs in VMware products.

So today we're going to talk about understanding the future of ingress-nginx. We did a community survey probably about four or five months ago; if you filled out that survey and you're here, thank you very much. We're going to talk about the stabilization project that we've been working on, have a little discussion about the roadmap, some of the bugs Ricardo's introducing, and a little bit of Q&A if you have questions. This time is the community's time, so if you have questions for us, just go ahead and yell at us, throw things. Ricardo will catch them.

So, the community survey results. One of the first things we asked, I thought, was pretty obvious, but we wanted to understand what people were using and what they thought was most critical in ingress-nginx. And that makes sense, right? Load balancing, and the fact that it's open source. So it's good to know that everyone's using it for what we intended it for. But we need to talk about that guy: he's on our roadmap. For some of you who don't know, ModSecurity is being deprecated, so we're going to have some work to figure that out.

Versions of Kubernetes. Who's running anything older than 1.21? I do, for testing. This is good. We wanted to put this out there because, again, we wanted to understand what folks were using. When we talk about what's supported from our perspective, it comes down to the end-to-end tests that we run. You'll see that we run a lot of end-to-end tests; they take about 45 minutes to run every time we have a PR. So we want to make sure those versions are covered. We're also following the N-3 support policy, and as some of you are probably aware, I think 1.22 is being deprecated soon. I'm looking at our SIG Release folks here; Carlos, who I always have questions for. So we're moving as Kubernetes is moving and looking at what we should be supporting. If you have any feedback on that, please let us know.

What versions of ingress-nginx are people currently running in production? So, of course, we have ingress-nginx versions versus versions of Kubernetes. Really good. Back when we put the survey out, we were on 1.2.1. I think we're getting ready to release 1.5 with a couple of new features, and deprecating some things. But what's going on here, guys? I started with 5.1. I did the 5.1 release. That was a year and a half ago. Yeah, it's not supported anymore.

This is really a good thing. We try to make sure that we're keeping the documentation up to date as new annotations, ConfigMap options, and features get added.
We want to make sure that we're answering those questions there, because it's not fun answering the same question over and over again. And I will eventually get the client IP thing documented, because in the year and a half I've been working on this, we've been asked about it five or six times. (There's a quick sketch of one common setup below.) So we have good answers to questions, so keep asking them. Keep demanding the documentation.

This one is also fun. I would like to talk to the "never" person, but it's always exciting that people like ingress-nginx: they continue to use it and they would recommend it. Again, that's all down to what you're doing, the feedback that you're giving us, and the work that we continue to do. So thank you for that. I actually have a feeling that this part of the survey was just answered by our families. I may have sent it to some Chainguard folks. Yeah, maybe.

This also makes sense. We get a lot of questions about Helm. As you'll see further on in some of the open-ended questions, we get a lot of questions about Helm support, Helm documentation, and the Helm changelog. The multiple-instances-per-cluster thing is something I think we need to discuss and probably document a little more. There are some hidden gems for running multiple instances in one cluster that I know need to be documented and are in our backlog. But for the most part, we're continuing to support Helm and keep it updated.

It's also really encouraging to see from the folks who answered the survey that it's pretty easy to upgrade. But as you'll see in some of the open-ended answers further on, it can be a little time consuming, because there are lots of things you have to test, and we all know about that. There's no easy way to upgrade without reloading, so that's always fun.

Going to the open-ended responses, we see a lot of things in here. Again, the NGINX version: we know we're running a little behind on that. It'll become apparent when we talk about some of the complexities of managing this project. Time and issues: we know that with the stability project we've probably been neglecting some issues, and we're working on making that a little easier. One of the things we're looking to implement is using a GitHub project board to triage issues and work through them.

Before that, who knows what the stabilization project we're running is? Because I think that's something that... Yeah, that's coming up right after this. Okay, sorry. But yeah, no, that's a good question. Who reads the kubernetes-dev emails that they get all the time? No? Okay. That's all good. It also helps to attend the community meetings, because we talked about it there, and we sent it out on kubernetes-dev. If there are other forms of communication that might be more helpful, let us know; I'm looking into getting a Twitter handle for ingress-nginx. I think that might be helpful.

What, if anything, stops you from updating ingress-nginx? As I was saying, there have been some instability issues in the past eight or so months: working on the v1 release, the leader election move to the Lease API, all of the upgrades while trying to maintain those things. We know there have been some issues. IngressClass is the big one. A lot of these things, like the changelog, are a lot of manual work. Part of the stability project is making these automated. And some of the open-ended stuff.
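Since the client IP question keeps coming up, here is a minimal sketch of one common setup. The resource names are assumptions following the upstream Helm chart defaults; check the project docs for the authoritative guidance.

```yaml
# Option 1: preserve the client source IP at the Service level.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # name assumed from the Helm chart defaults
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local  # keeps the real client IP; traffic only reaches nodes running a controller pod
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
---
# Option 2: trust X-Forwarded-For headers from a proxy sitting in front of the controller.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-forwarded-headers: "true"  # only safe when a trusted proxy sets the header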
We saw Gateway API come up a lot in the survey results, which makes sense. Everyone wants a Gateway implementation that's open source. Who wants Gateway API in ingress-nginx? Easier just to ask that here. No? No? Maybe. Yeah, yeah. Well, the survey said otherwise. So that is also one of the things on the roadmap that we'll discuss. Again, for those who filled out the survey, thank you. It gave us an indication that I think we're on the right track with the things we're trying to accomplish in the stabilization project and on the roadmap, and it gave us some direction on things that, working on the project day in and day out, we just don't see. So again, thank you for that.

So, the stabilization project. Not a lot of people know about it, which is unfortunate, but we've been trying to work on some of these issues. We set out to do about nine or ten different things. We put this out in July, and we've accomplished these three things so far. We've put the N-3 Kubernetes support policy out there; I think we've talked about it and put a blog post out, but again, a lot of people aren't seeing that, so we want to make it a little more explicit.

We've also been implementing some of the OpenSSF recommendations and trying to make sure the project is secure. Since our Lua fiasco a little while ago, with that last major CVE we had, we've been trying to focus on security a bit more. We've implemented security scanning so we can see CVEs as they come out. We're not blocking on those yet, but I'm starting to think we probably should. We're also looking at introducing govulncheck, the Go vulnerability checker that just came out. (There's a sketch of what that kind of CI job can look like below.) So, working on making it a little more secure.

And then, from a feature acceptance criteria perspective: release notes. We're using the SIG Release tooling to generate those automatically so we don't have to do it by hand, and we're enforcing that release notes are in the PRs you put together, that the feature is documented, and that it has an end-to-end test. That's the bar for us right now. In my opinion it's a little low, but it's at least what we need. I'm going to let Ricardo talk about this, because he's the one who's been doing a lot of this awesome work. Not introducing bugs, but making it better for us. So, sorry.
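To make the scanning idea concrete, here is a hedged sketch of the kind of CI job that could run govulncheck on every pull request. This is a generic GitHub Actions workflow written for illustration, not the project's actual pipeline; the workflow name and trigger are assumptions.

```yaml
# Hypothetical workflow: run govulncheck on pull requests.
name: vulnerability-scan
on: [pull_request]
jobs:
  govulncheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v3
        with:
          go-version: "1.19"
      - name: Run govulncheck
        run: |
          go install golang.org/x/vuln/cmd/govulncheck@latest
          govulncheck ./...   # reports known vulnerabilities in code paths the module actually calls
```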
I think everybody is aware of the CVEs that we had in the past, and we still have some of those to be announced or in progress. A lot of people are actually asking me, what's wrong with ingress-nginx that you're getting so many CVEs, and should we be concerned, and maybe move to another Ingress or not, right? First of all, if you want to move to another Ingress, I won't hate you. I used HAProxy Ingress a long time ago, and I was one of the people who asked the guy who created HAProxy Ingress to create it, so no hard feelings, folks.

The thing is, when ingress-nginx was created, we made an architectural choice, which was right at the time because we didn't know how this thing was going to scale: the NGINX proxy runs together with the controller. On the same container, yeah. So, if something goes wrong and you can somehow force the controller to write into the NGINX configuration file, you may be able to extract information like the token the controller uses to connect to the Kubernetes API server, right? We did a bunch of explaining on that in one of the community meetings, really walking through it and being transparent with you about how this can actually be exploited, because we want people to let us know if there are other ways of doing that kind of information extraction that may cause problems.

And if you look at it, Ingress controllers are probably the riskiest component you have in your cluster, right? Because you don't expose your API server to the Internet, you don't expose the kubelet ports to your network, but you do expose the Ingress controller, because that's the proxy to your application. So, we are aware of that.

Then we started thinking about how we can raise the bar, looking at how other projects in the community are doing it. We have this amazing ongoing project called KPNG, the new kube-proxy, which has a similar sort of problem. We know the Contour folks and the Envoy folks do the same thing: split privileges. That's the way we should deal with this, and that's what splitting the control plane and data plane is about.

Effectively, we are re-architecting ingress-nginx to have a control plane that is responsible for connecting to the Kubernetes API; it does all of the calculations and creates the right data model for the data plane. The data plane will consume just frontends, backends, certificates, whatever gets updated, but it won't have direct access to the Kubernetes API server anymore. (There's an illustrative sketch of that kind of data model below.) This should, I hope, also get you better performance in your cluster, because you won't have 50 or 100 Ingress controller instances consuming the Kubernetes API anymore. You'll probably have three control planes doing that, with some sort of leader election at work; we haven't decided that yet. Then you have the data planes consuming via gRPC or xDS; we're still figuring out how the wire protocol between data plane and control plane is going to work. And if something gets compromised in your data plane, it will only be able to consume whatever it was supposed to consume anyway, which is the frontends and the backends, the data model, right?

I'm also starting to discuss with the KPNG folks whether, instead of doing all of this work, which is almost done in the Ingress controller (I'm just trying to fix the end-to-end tests, otherwise James won't allow me to merge), we can use what they're doing for KPNG in a layer 7 fashion, and how that can be helpful for other Ingress implementations as well. We're going to discuss the Gateway API later, but could I use the same approach: a control plane for the Gateway API that generates the data model, so backend implementers, NGINX, HAProxy, whatever, just need to consume that control plane and not the whole Gateway API model? That's what we've been looking at, and I'm kind of looking forward to working with the KPNG folks next week as well.
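For intuition, here is a purely illustrative sketch of the kind of minimal data model a split data plane might consume instead of talking to the API server. None of these field names come from the actual implementation, which was still being designed at the time; they are invented for illustration only.

```yaml
# Hypothetical data-plane model; every field name here is an assumption.
frontends:
  - hostname: shop.example.com
    tls:
      certificateRef: shop-example-com-tls
    rules:
      - path: /
        backend: shop-default-8080
backends:
  - name: shop-default-8080
    endpoints:          # resolved by the control plane, pushed over gRPC/xDS
      - 10.0.12.4:8080
      - 10.0.14.9:8080
certificates:
  - name: shop-example-com-tls
    certChain: <PEM pushed by the control plane>   # no Secret or API server access needed in the data plane
```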
Yeah, and then on the NGINX side, there are a couple of things that stop us from going directly to, like, 1.23 and things like that. A lot of the functionality you'll see, probably comparable to the enterprise NGINX version, we implement with Lua and OpenResty. So we can't move to a new NGINX version until all of the, I think, 43 dependencies we pull in support that version. It takes a lot of time and effort to make sure all of those dependencies support that NGINX version and pass all of our tests. I think as of last week that was completed, and it's just sitting in our queue to actually accept, but we're waiting until Tuesday, and those who are paying attention probably know why: we'll have another vulnerability to patch.

Then, as part of streamlining the release process, along with automating the release notes, there's automating the build process. One of the specs we talk about is that it takes about four hours, if we do everything properly the first time, to get a release out. And I don't know about you, Ricardo, but I've not gotten it right once. There's always one little thing that we miss: the changelog doesn't get updated, or there's a version that didn't get updated. So we're working on automating that entire process. And we're looking at a distroless build as well; more to come on that one.

But we didn't get everything finished. We know there are issues with the IngressClass logic we've implemented, so that's on the backlog for after we get the data plane and control plane split worked through. There are lots of modules and lots of dependencies that we use, and we're looking at understanding what we need to remove. Again, for the stability of the project, there are lots of other containers and tools in there that haven't been touched in two or three years, so we're actively removing those from the project and using newer versions of things. And we still have the legacy branch out there, anything pre-1.19, so we're looking at removing that. We haven't done it yet, but it's been discussed. So that's the question of why people are still using 5.1.

What about PodSecurityPolicy? PodSecurityPolicy is on there as well. It wasn't part of the stability project, but it is on the backlog. It's going to be removed. So heads up: if you have PodSecurityPolicies, we are going to remove them from ingress-nginx as well. (There's a sketch of the usual replacement below.)
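Since PodSecurityPolicy is going away across Kubernetes entirely (it was removed in 1.25), the usual replacement is Pod Security Admission via namespace labels. Here is a hedged sketch; the enforcement levels shown are assumptions to tune against your own policy and the controller's actual needs.

```yaml
# Sketch of the PodSecurityPolicy replacement: Pod Security Admission labels
# on the controller namespace (available from Kubernetes 1.23 on).
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    pod-security.kubernetes.io/enforce: baseline   # assumed level; pick what your policy and the controller allow
    pod-security.kubernetes.io/warn: restricted    # warn at a stricter level before tightening enforcement
```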
As part of the roadmap, before we discuss the roadmap itself, I wanted to run through and give folks a little idea of what's in the repository and what we actually maintain. We use all four of these technologies, so when we need to upgrade something, all four of these have to work: the Golang version, the Alpine version, NGINX, and the Lua versions, well, OpenResty in there. When we update one of those, that's the four-hour build time we talked about. And Makefiles and shell scripts. Makefiles and shell scripts, shell on shell on shell. I'm working on getting rid of our shell-ception, as I've been liking to call it.

We also maintain a kubectl plugin. Anybody know that we have a plugin? Some people. Anybody know that it's not working? Yes. Two monitoring frameworks: we support Prometheus, and we actually produce our own Grafana dashboards; we get a lot of questions on those as well. Three third-party plugins. ModSecurity is one of those plugins, and we have to figure out how we're going to replicate that functionality. I'm sure there are other people in the community who have that exact same issue, but we have not engaged anyone in that conversation yet. GeoIP is one of them. And I forget what the third one is. OpenTelemetry; OpenTracing before that. We do have the OpenTelemetry PR completed, its end-to-end tests are passing, and that's probably also going to be part of the 1.5 release. Like I said, it takes four hours to build this, so we do things properly.

We have seven static configurations. Not only do we produce a Helm chart for this, we also support specific implementations for AWS, DigitalOcean, things like that, so we have to maintain those as well. Twelve other container images: we build two container images, the Ingress controller and NGINX, where NGINX is our base image, but we also maintain twelve other container images and have to keep those updated when, say, a new Alpine version comes out. Yeah. Anyone know what the other ones are? We have the echo image, the default backend, a website generator, custom errors... I don't know. httpbin; that's one of them, httpbin. Yeah.

We also support 30 NGINX modules, so when we compile it, that's the bulk of the time: we're compiling NGINX with the 43 dependencies, and we compile and test it across four architectures and three Kubernetes versions, and it all has to work in Helm and the other static configurations. There are 68 ingress-nginx command-line flags for configuring it: what port it runs on, where your certs come from, where your logs go. A hundred-plus e2e tests, and if anybody is doing that math against the configuration options: no, we don't have everything covered, so it's not a full end-to-end test suite. There are 118 annotations, which are how you customize NGINX. There are a couple of different ways, right? There are annotations and there are ConfigMap options, and there are 186 of those. (There's a minimal example of both mechanisms below.) I didn't do the math on what that permutation is, but that's a lot to keep running.

So when you haven't seen your PR reviewed, or we haven't accepted your feature request, or there's an issue or a CVE out there, we just want people to understand the complexity of managing this project. Please be kind, but also, if you have an issue or something's wrong, we have a Slack channel and our community meetings; just come engage with us and help. I was going to wait to ask that later. Okay.
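Here is a minimal sketch of the two customization mechanisms just mentioned, using one real knob (ssl-redirect) that exists both as a per-Ingress annotation and as a global ConfigMap option. The ConfigMap name and namespace are assumptions following the common Helm chart defaults.

```yaml
# Per-Ingress customization via annotation.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"  # override for this Ingress only
spec:
  ingressClassName: nginx
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo
                port:
                  number: 80
---
# The same knob applied globally via the controller ConfigMap.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller  # name assumed from the Helm chart defaults
  namespace: ingress-nginx
data:
  ssl-redirect: "false"
```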
So, from a roadmap perspective, I think we already talked a little about the data plane / control plane split. Once Ricardo gets all of those hundred-plus end-to-end tests passing, we're going to do the same thing we did with v1: we'll put out an alpha, let people test it, and move through alpha to beta. And please, please, please test, because with v1 we did the alpha and the beta and we still got a lot of bugs when it was released as GA. And we said, hey folks... because we didn't get any feedback. Yeah, yeah. We didn't have clusters to test that ourselves. So if you have a cluster where you can test it, even a canary with this ingress not directing your real traffic, at least applying your own annotations; as James said, we cannot test all of the combinations. And if you figure something out, just let us know. It's important for us, because we don't want to find out we broke something only when we've released it as stable. That's not stable.

Once we get that out... we didn't really talk about this, but when that CVE first came out, we added the chroot environment. So we produce two containers, and the chrooted one helps with the CVE vulnerability we talked about. We'll go ahead and remove that once we get the control plane / data plane split to GA. Adding the distroless build as well; I'm about 90% there. I've got the build time down from four hours to about ten minutes now, so that's great; we're not recompiling everything all the time. And as soon as Carlos gets me a GCP bucket, we'll be good.

Again: continuing to improve the release process. OpenTelemetry, like I said, that pull request is there; it's waiting for us to hit the LGTM button. (There's a hedged sketch of what enabling it might look like below.) An explicit deprecation policy, again, making that apparent to people: when we say something is supported, we mean it runs in the end-to-end test suite and we know, as well as we can, that it's working. And then reviewing those third-party dependencies, like ModSecurity. And I think we already hit on the Gateway API and KPNG, unless there's anything else. Anybody familiar with mTLS and gRPC calls? No? Okay, I was just asking.

And then, one of the pieces of feedback we got... I think I made it pretty clear how complex the project is. People get upset about the stale bot and lifecycle/rotten. We're going to remove that bot and move to the GitHub project board, so as issues get added, we make sure we're actually moving them through the project, instead of just trying to churn through the 255 open issues and, I think, 55 pull requests that we have open right now. It's 70 pull requests. Yeah.

Two other things. I went to the TAG Security meeting yesterday; we're going to work through threat modeling, get that evaluated, and understand what else we can do to help secure ingress-nginx. And we're engaging SIG Contributor Experience to understand how we can keep evolving the community.
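Since the OpenTelemetry work hadn't shipped at the time of this talk, here is only a hedged guess at what enabling it could look like, modeled on the OpenTracing ConfigMap switch that already existed. The OpenTelemetry keys shown are assumptions, not confirmed options; verify against the released documentation.

```yaml
# The OpenTracing switch that existed at the time, plus assumed OpenTelemetry equivalents.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  enable-opentracing: "true"                           # existing option
  enable-opentelemetry: "true"                         # assumed future option
  otlp-collector-host: "otel-collector.observability"  # assumed key; would point at an OTLP collector Service
```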
And then this one might be a little controversial: Lua. We have difficulty finding Lua developers. So either we get more Lua developers, or we look at migrating to NGINX JavaScript (njs). Here, I want to use this moment to give a big shout-out. I'm not sure if Elvin is here; he was with me yesterday. Elvin from Shopify is the one who actually implemented all of the Lua stuff, because NGINX didn't support hot reloads, right? He solved a bunch of people's problems with that. The original reason we moved to HAProxy at my previous company was the hot reloads as well: you have a reload, and you keep reloading, and you just break connections. So Elvin and Alejandro are the real people who created all of this, and I want to give a huge shout-out to them. They have moved on to other things; Elvin, I hope, can keep contributing with us as well. I will speak with him, because he is our Lua maintainer, right? He's the person who did all of those things.

But it's hard for us. I do this on my Sundays, really, and I do mostly the Go stuff and the NGINX stuff, and carrying all of this Lua maintenance on top is kind of hard for us. Jintao, the other maintainer, who's in China, does a lot of things as well. But the thing is, if we don't have more people actually willing to maintain the Lua part of the code, we cannot keep accepting features. People ask us, hey, can you add this or that Lua feature, and I don't know the impact of adding that thing. If I don't know, I cannot say, hey, I'm going to maintain that or not, or whether there's going to be a CVE or not. So this is a call for help: we need help on this part of the code. We need help in any part of the code, actually. But the Lua part, for me, right now, is a risk to the project, because we don't have enough people maintaining it and it's the core of the project. So that's what's happening.

And with that, like I said, a call to action: we discuss all of these things in our dev Slack channel; we have the user support channel for folks asking questions; and we have the new contributor docs, where we're working on putting together all of those architectures, how the build works, things like that. There are the meeting notes; we meet every other Thursday at eleven. And there's the survey data if anybody wants it. Thank you for coming, and now, if there's time for questions, we actually want questions. We've got like eight minutes. Let me just put my mask on here.

Thank you very much for the product; I've been using it for probably two or three years and it's very helpful for me. Only one thing I tried to use wasn't supported: we have RabbitMQ and we wanted to route the RabbitMQ traffic through NGINX, and that wasn't supported, so I had to do some stuff to handle it. Other than that, everything was great. One question that I have: there's some sort of interplay with the Gateway API, and I know there's a project right now called Gateway API. Can you guys talk about this? Is it going to be a blend, or how is it going to work together, or is it not going to work together?

I'll take this one, okay, if you don't mind. I'm short anyway. So, we have Gateway API, the project, which is a brand-new API for layer 7, and also layer 4, routing inside Kubernetes clusters; that's what we're targeting to support in ingress-nginx, right? (There's a minimal sketch of what those resources look like below.) And the Envoy community created Envoy Gateway, so I'm not sure if that's the mix-up here. It's kind of a way of creating the same thing we want, but in their case they support just Envoy, the Envoy Gateway project: you have an Envoy control plane that deals with all of the Gateway API objects, and you have an Envoy model for Envoy-backed controllers, right?

In our case, what we're looking at is actually supporting this brand-new API from Kubernetes that Rob Scott and all of the other folks from SIG Network are designing and architecting. I guess it was in this same room yesterday, and it was packed, right? So we know we need to figure that out; we had a bunch of questions about how to support the Gateway API, until all of those CVEs appeared. So my question back is: do you prefer the Gateway API, or a vulnerable Ingress, right? Right now I prefer fixing those CVEs. And again, if someone wants to help implement the Gateway API, help me, and when I say help me, that means helping the community.
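For anyone who hasn't seen the API being discussed, here is a minimal sketch of the Gateway API resources as they stood in v1beta1. The gatewayClassName is illustrative, since ingress-nginx did not ship an implementation at the time.

```yaml
# Illustrative Gateway API resources (v1beta1); the class name is an assumption.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: demo-gateway
spec:
  gatewayClassName: nginx   # hypothetical class; depends on the implementation you run
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: demo-route
spec:
  parentRefs:
    - name: demo-gateway   # routes attach to a Gateway instead of using an IngressClass
  hostnames:
    - demo.example.com
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: demo
          port: 80
```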
But mostly I'm finishing the split, so we need more people. Do you want to speak about the timeline on that? He's like a product manager; he's just going to make promises that I'm going to have to cover. No, like I said, in July we said we were going to complete all of those things from the stabilization project, but we're all working on this part time. I get some time from Chainguard, because Sigstore does use ingress-nginx, so I get some time for it from that. Because, well, nobody likes their certificate keys being stolen. Freezing features wasn't a popular decision, but someone had to take it, and we decided in the community meeting to take the decision of freezing features and having something stable, even if I'm not sure everything will get through.

I just want to say thank you for maintaining the project; we've been using it for, I don't know, five years. I'm not sure: is there an official CNCF status for the project, incubating or something? And the other part: given how many users the project has, can the CNCF help this project become more organized, so it's not just on a few maintainers?

I would say, and I don't think it's a controversial topic, that all of the projects need help with maintainership. We've got 13.7 thousand stars; there are some other depth-of-usage stats I tried to pull. But we are a subproject of SIG Network, so it's not a top-level project like some other things, so there's a differentiator from that perspective. That's why we're working with SIG Contributor Experience and SIG Release to understand how we can make things better, and working with SIG Security, who are always very responsive in helping us figure out how we triage these issues and how we coordinate the communication. So we do get support from the CNCF and the projects, and we work with their product marketing there.

Could you clarify that part? Because I always get confused: is it ingress-nginx or NGINX Ingress? Is NGINX Ingress totally different? I know NGINX Plus is different. That was actually the first topic we broached with NGINX: we have an open issue, they have a blog post, and we've been trying to work through and understand that difference. So we do have an open issue for that, and NGINX is working on helping us support the project.

Yeah, like you said, all the open source projects are desperate for help, and I'm just curious: is there any kind of packet or set of instructions to help IT professionals and IT teams make the case for contributing to open source, rather than each person trying to make that case on their own? Maybe some kind of template or guide to take to upper management. Just an idea to get more contributors.
That is a very good suggestion, and I will take it. I know Paris has been talking about that, I think in SIG Contributor Experience, about how supporting open source helps organizations. We've got to continue to carry that message, but having a solidified message, a template, I think would be really helpful. That's something I'm going to take to SIG Contributor Experience and see what we can do to get it added. As far as helping the project: like I said, we have the new contributor docs, the community meetings are open and available, and the notes are all there from all of our meetings. So if you have any questions, just let me know.

I have a question; I think we have time for one more, sorry. My CEO was reached out to by NGINX, the company, saying that if we purchased a support contract, it would help. That's not us. If you purchase a support contract from NGINX, it's for their NGINX Ingress Controller, not the open source project. No problem, we get that question a lot. I can tell from the version numbers: when someone asks a question and drops the version, that's F5, that's not Kubernetes. We get it all the time.

We are out of time. We are available for questions afterwards. Thank you all for coming. Thank you, folks.