All right, let's do this. Welcome everyone to this webinar, where we're gonna be discussing the Istio Ambient project and, more specifically, how to contribute to it. So I'm really excited about this. I'm joined by Jeremy, and he's gonna be sharing all his trials and tribulations as he's been contributing to the Istio Ambient project. So we think this is gonna be great content, and there's also a demo involved, so it's not just us talking. I think Jeremy's gonna do some hands-on stuff and show off some things that he's been working on. So with that, let's go ahead and do a quick introduction. My name's Phil Gibson. I'm a senior product manager here at Microsoft in our open source org, and I work with the likes of Jeremy and the rest of the OSS team here that's working around service mesh. And with that, Jeremy, tell us about yourself. Yeah, so my name is Jeremy Morris. I'm a software engineer on the upstream service mesh team. I joined back in May. Before I worked at Microsoft, I worked at a company called DigitalOcean, the smaller cloud provider. I worked on their managed Kubernetes product, some container registry stuff, a little bit of billing as well. And before that I worked at a few other companies like Raytheon and an advertising agency called Publicis. So I have a wide array of experience, but as of late, like the past few years, I've been really trying to narrow in on distributed systems to get really good at that. You know, my first exposure to that was really Kubernetes in terms of big, complex distributed systems. And over time, as I gained more experience in that, I wanted to learn other ones, and that's where service mesh came up. An opportunity came up on this team to work on open source full time, and it was still in the cloud native Kubernetes container space, so it seemed like a perfect opportunity, so here I am. Awesome, yeah, we're happy to have you aboard and working with us on this. All right, so let's get into it.
We're talking Istio Ambient, but before we jump into that, let's give some quick community updates in the service mesh world. If you've been living under a rock for the last year, you may not know any of this stuff, but today we're going to be talking about Istio Ambient, and this is a new mode of architecture and operations with Istio. This was announced last year, in April 2022. And then this year, in July, Istio became a graduated CNCF project. So it's met all the criteria, it's battle tested, and people like it. So congrats to Istio for making that graduation. Last, I wanna talk about the Open Service Mesh project. As some may know, this was a project that was heavily contributed to by Microsoft, and since the introduction of Istio into the CNCF community, we decided to join forces with the Istio project and have therefore archived the OSM project. So thanks to all of those who supported OSM, but if you are now looking for a new service mesh, there are several under the CNCF umbrella. We're gonna be talking specifically about Istio, but please do your due diligence and test out the service mesh that's appropriate for your needs. All right, so those are the service mesh updates in the community. Next, let's talk about the dust that has settled on KubeCon North America in Chicago, the greatest city in the world. Tons of great people come from there, wink, wink. If you were out there and met me, it was great to see you, and if not, I hope to see you at the next one. But I found a couple of sessions that were really impactful for me. The first one here is the past, present, and future of Istio, and that's with our friends over at Google and Solo, John Howard as well as Louis. This is a great session that really talks about the whole evolution of Istio.
So if you are looking to understand Istio and kinda wanna know some of the tribal knowledge from through the years, this is a great session to go and view. This next session is actually with a team member of ours, Jackie Elliott. It's not service mesh specific, but what she does is talk about the fundamentals of PKI. And as you know, PKI is a huge component in service mesh, in how authentication happens. So if you just wanna understand, hey, what is this PKI stuff about? How do these mTLS handshakes work? I thought Jackie did an excellent job of really boiling this down for anyone to consume. And then this last one, my hat's off to Keith for putting it together. This is probably one of the best sessions that I've experienced at any KubeCon that I've gone to, and I've been to several. This is the service mesh battle scars session. And I'm not gonna give it away, but look, we've got representation from Google, Solo, Isovalent and Buoyant talking about their projects and products around service mesh, and it gets a little chippy. That's all I'm gonna say. So I'm gonna leave you in suspense about that. Check it out, it is great. And I hope to see more sessions like that, where within a technology stack we can get that diversity of all the projects, and people talk about pros and cons, and people from other projects can ask questions toward those projects as well. So great session, please check that one out. All right, so with that, let's get into Istio Ambient. And this is a hot topic, it's red hot. For those that don't know, Istio Ambient basically takes the sidecar out of the pod. So just as some context: in standard mode, or kind of the original architecture of Istio, you have what's being displayed here, which is your application running in a pod, and then traditionally we deploy a proxy next to it. We use the Envoy proxy here.
And then all the communications that originate from your application traverse through the sidecar proxy, and you've got your control plane doing all the programming of what it's allowed to talk to, what policies apply, et cetera, et cetera. You know, this has been pretty much the architecture for a long time. And it's debatable, but what people are saying is that in a large deployment you just have this proliferation of a ton of sidecars that can eat at resources on your cluster. So with that, Istio introduced what they're calling ambient mode. What was done was the proxy sidecar was taken out of the pod and put on the actual node itself. You'll see here that's depicted as the ztunnel. So now your application's first hop goes to the node proxy, and that node proxy relays the communication over to the other node's proxy, where it will find the service that you're looking for. And this is all done at layer four for the ztunnel. So the next question is, okay, well, what if I've got layer seven policies? I've got some API, some particular path that I wanna either restrict or put some type of controls around. They answered that with what is known as the waypoint proxy. So what you see here is, if you do have layer seven policies, you'll still hit your ztunnel that's local on the node, but then, if there's a policy for you to only be able to get to a certain path of a service, the traffic will traverse through the waypoint proxy. It will validate whether you can talk to this particular service at this particular path, and if everything checks out, it'll pass you on. So Jeremy, please come on and correct anything I might have stated there. And then maybe talk about how you've gotten into this whole new architecture and all the things that you've been working on with Istio.
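As a concrete sketch of how a waypoint is declared: in ambient mode the waypoint proxy is itself represented as a Kubernetes Gateway API resource. This is a hedged example; the `istio-waypoint` gateway class is what ambient uses to recognize a waypoint, but the name, namespace, and exact API version here are illustrative and may differ by Istio release.

```yaml
# Hedged sketch: declaring a waypoint proxy as a Gateway resource.
# The gatewayClassName "istio-waypoint" marks this Gateway as a waypoint;
# the name and namespace are illustrative.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: waypoint
  namespace: default
spec:
  gatewayClassName: istio-waypoint
  listeners:
  - name: mesh
    port: 15008        # the HBONE port ambient uses for mesh traffic
    protocol: HBONE
```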
Yeah, so just a little context, and this relates back to what Phil mentioned in regards to archiving OSM: I joined the team when it was just really ramping up on contributing to Istio. Before then, of course, there were contributions being made and involvement within the community, but a lot of it was internal as it related to OSM. With it being archived, a lot of the focus has transitioned to having a team that really focuses on our participation in the community. So when I said that there was an opportunity to work on open source full-time, I was like one of the first people from the outside to join the team and be a part of that. A lot of the other folks were already on the team when OSM was the main focus, so they went through the experience of transitioning from focusing on internal stuff to focusing mostly on external. I came in as, I guess, the first purely externally focused hire, and now the whole team does that. So that's been a pretty cool experience. If you're a software engineer and you join a new team, there's always a transition, or at least I've experienced this, of going from some type of legacy process or project to a newer, more exciting thing. In this case it's not just going from product to product; it's literally our entire model of how we work as engineers: going from working on something internally facing to something externally facing, where we collaborate with a bunch of different companies. It's a whole new experience, a whole new culture, and we're still learning and growing in that. But I think the progress that has been made since I joined, over the past six or seven months, has been really cool to see and experience.
Before I joined this team, or before I joined Microsoft, I didn't get a chance to work in open source full-time. I had to do it all outside of work. All those Kubernetes contributions that I've done were typically at night hours or on weekends and things like that, trying to find ways to bring it into my day job. Totally different situation here. My day job is contributing to open source. So it's been a bit of a learning curve, surprisingly, even though I really like contributing to open source. Just the whole political aspect of trying to even get PRs in has been a really great learning experience. You know, there was a mention of battle scars as a topic for a panel discussion. I think there's a ton of battle scars to be talked about here too, so maybe I'll do something like that in the future. But yeah, I focused initially, when I joined the team, on ztunnel actually. When I first joined, I had like a little Rust experience. You know, I still don't know a whole lot of Rust. It's a hard language to learn. But one of my first tasks was to get ramped up on ztunnel, and on ambient in general, and learn Rust to be able to contribute to ztunnel. And I started working on that. Some of my first contributions were little things, like, you know, this log filtering doesn't work as expected, it's actually not allowing you to add different filtering levels. So I had to learn what a log level was, what a log filter was. What does that even mean? And what does that mean in the context of Istio and ambient and ztunnel? So I went through the process of learning that and made some PRs to fix it. Those were cool small wins I could get as a contributor. As I progressed through that, I shared my knowledge with the team. We also had an intern on the team who was doing Rust work as well.
So we'd share our experiences with one another, talk about it as a team, and kind of leverage other people's experiences to grow as a team. That's something that's big, I think, for the strategy we're taking as a team that's getting into a community dominated by other companies: you try to roll together. So if I'm working on one area, whether it be ztunnel or maybe waypoint proxy stuff, which we'll talk about soon, you try to gather as much information along the way through your learning process as you can, and then you share it in some succinct form with your team so that they can also gain the knowledge that you've gained, hopefully at least some of it. So that's kind of the strategy we approached it with. And that's fun, but it's also hard. Coming from a team where everyone else was an expert and they'd been working on the product for X amount of years, to now a team where everyone's brand new, myself included, it's interesting. Yeah, yeah. Now, I mean, as a company, Microsoft, we definitely have made the investment to work with Istio, and you're part of that investment as well. It's interesting to hear your take on it. I'm just curious, for those who are looking at Istio: it is a massive codebase of a project, right? Tons of stuff everywhere. And so for those who don't come with, I'll keep using this term, the tribal knowledge, I'm just curious: obviously you were thrust in here, you had a clear objective to actually work on this project, but what would you say was the number one, I wouldn't say obstacle, but something that you had to overcome to really get into the project and get things moving for you?
Yeah, I would say the whole structure of how you communicate and get work to flow through, from ideation all the way to it being merged into the codebase and released with a specific version. That took some getting used to, and I'm still getting used to it, and I think we're making improvements as a team and I'm making improvements myself, but it's not the same as working on, like, when I worked on the Kubernetes product at DigitalOcean, it was very easy to say, okay, my current project is to improve, maybe, the way resources are associated with a cluster, you know, like nodes and load balancers and volume snapshots, volumes, that's another one. I'm already forgetting some things from my previous job, that's how long it's been. But all those different resources that belong to a cluster are backed by services that are owned by different teams, and they're all in the same company. So what do I do? I reach out to that lead, that manager, that product manager of that team, and then I get collaboration that way. Like, hey, say Phil's a product manager of the firewalls team. I could reach out to him and say, hey, I'm trying to add a firewall controller that's gonna interact with your service, is there any rate limiting that I should worry about or anything like that? And then we can collaborate and get the right solution out there. Right. In Istio, or in the open source world, it's not as straightforward as that. I'm not reaching out on some internal chat. I'm reaching out over a Slack channel or a working group meeting or, you know, maybe a GitHub comment on a PR or an Istio issue that was created, and I'm trying to collaborate that way. And that's all you can really do to reach them. You can't really bother their manager or anything like that. You're hoping that the engineer you reach out to will get back to you at some point.
So the best way I think to deal with that obstacle, the communication aspect, is to make your initial communication as compelling as possible. What do I mean by that? I apply this to a lot of things I do. When I go to ask a question, for example, about something I'm stuck on, I try to give as much information as I can to the person I'm asking for help; you're trying to set them up to be able to answer your question as best as possible. If you just go and make a request like, hey, I'm stuck here, or hey, I'd like to work on this random feature, you give no extra information, such as a design doc, the reasoning for adding the feature, or why you're actually reaching out to this specific person. Like, what do they have to offer that made you decide to reach out to them for this particular question? I try to have all those things, it's like a little mental checklist. I try to have all that stuff ready to go so that when the person looks at it, they see I put in the effort, and I feel like people are more willing to put the effort into answering my question that way. And that's how I try to collaborate. I find that even with PRs, my descriptions aren't perfect, but I try my best to fill them out in detail. And then for the people that I tag, I try to write why I tagged them and politely ask for a review of the doc or PR, and I usually get pretty quick feedback. You should also remember, I think, when it comes to open source, that we're all sharing this code together, right? It's not a personal thing when someone rejects your RFC. One of the first things I wrote for Istio was an RFC to allow per-pod DNS settings. I had to learn what that meant and read some specs. I got all excited because I was gonna be able to implement something from a spec, which is apparently a pretty common thing, as I loosely learned.
I had an idea that a lot of technologies were based off of specs and research papers, but I didn't really understand how often that happens, I guess. So a few times we were looking at the Gateway API specs, or looking at this DNS policy spec in Kubernetes and seeing how it could be translated to Istio. That's a pretty cool experience. So I was really pumped about this per-pod DNS setting thing. Someone from another company had made the issue. They said they needed help, and I'm trying to be helpful so I can get some rapport built here and be able to work on other things I might wanna work on in the future, with the help of the people I helped in the past. So I was very excited. I picked it up, wrote the RFC, he looks at it, and he's like, oh, I think this makes sense. But funny enough, just because people make issues in Istio doesn't mean they know all the information. So the person who made the issue had some questions and some confusion too, even after I dove in. At some point it was like, Jeremy, you might know the most about this now, so bring it to the broader community. So after going back and forth with that original person and finding out that we needed the broader community's perspective on this, because I'd kind of gone as far as I could go in terms of my own research, I took it to the working group meeting. And it's at that point that it was like, oh yeah, Jeremy, this is great, but it's actually not the work we wanna prioritize right now. So sometimes that happens, but I think the biggest lesson I want you to hear is that it's not personal. At the end of the day, we're trying to build what's best for the community. That includes all the maintainers and contributors, but of course also the actual users. And if you think about it from that perspective, we're all just working together, we're all one big team. And when I got rejected on that, I just eagerly took on the next thing.
You just move forward and keep on tackling the next thing, and the next thing, asking for help. And I think people also like that persistence. Yeah. As you get better and improve upon your selection of issues that actually matter, you'll find that things are gonna move quick. There are plenty of PRs where I'm like, wow, I actually got approvals as soon as I put it up. That's happened a few times. There are obviously some where I get tons of comments and it doesn't move along as fast as I thought it would. So, yeah. It's about separating, or disambiguating, the whole experience of attaching emotions to the work that you're doing. You know, it's good to be pumped up about it, but don't feel bad if it gets rejected. Everyone's ideas get rejected, PRs get rejected. There's always a bunch of stuff to go through, so keep at it. Yeah, I think you touched on what I think is the biggest cultural shift to be aware of when you have historically been in an enterprise or corporate environment and you now want to go and work in open source: just the different paths and avenues for things to get done. And you mentioned it. When you're in a corp and you need something from another group or another individual, sometimes you can pull that manager card. You can say, hey, look, I've been waiting on this to be done. I'm going to escalate. I'm going to CC the person's manager. And then hopefully that lights a fire and we can move a little more expeditiously. But in the world of open source, you don't have those kinds of corporate alignments and org charts. You're dealing with, what would we say, the committee, the overall committee of the project. And then, like you mentioned, it's not email. It's Slack messages, and it's going back and forth on issues and things of that nature.
So for those who are stepping into this world, just be prepared. It's a different experience working in open source, but a fun experience too, once you get things rolling. So I don't want to just encourage people to do it, but also prepare them that things are done a little differently in the open source world, and with good intentions too, right? So that's awesome, and I think it's really valuable for people to hear your firsthand take on your approach into this specific project. Now let's see some stuff. Do we got the demo? Can you show some things off for us? Yeah, so actually someone recently made an update to the documentation for getting started here, so I figured I'd go through some of that as part of the demo. Just a little context: one of the things I got to work on recently was working with my team to add a targetRef field that basically allows you to set a policy target reference for a particular gateway or waypoint proxy. And you've done that for a few CRDs. Can you break that down a little more? Because if some people aren't close to this, like, targetRef, what is that all about? Can you explain in a little more detail what the value of targetRef is, et cetera? Yeah, so it was the case before that you would have to basically target your workload, or use the workload selector model, to apply a policy to specific namespaces or the cluster or what have you, by leveraging labels. The new way is that you can target the gateway by using this targetRef field that's set within the policy itself. So when you go to apply a policy to your cluster, and there are four that we've updated, AuthorizationPolicy, Telemetry, WasmPlugin, and RequestAuthentication, you can now specify within that spec a targetRef field that names a specific gateway that the policy applies to. So this would be that waypoint proxy. And there's a lot of benefits to this down the line.
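To sketch the difference Jeremy describes (resource names here are illustrative): the older selector model matches workloads by label, while the new field points the policy at a gateway directly.

```yaml
# Older workload selector model: the policy attaches to pods by label.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: by-selector
spec:
  selector:
    matchLabels:
      app: reviews
  action: ALLOW
---
# New targetRef model: the policy attaches to a gateway (e.g. a waypoint).
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: by-targetref
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: waypoint
  action: ALLOW
```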
There are a lot of reasons why we wanted to do this, but one of them, if I remember correctly, is having targeted authorization at the gateway layer. And there are a few other RFCs out there that will benefit from this new way of selecting workloads, beyond the workload selector. But I think this was pretty cool. This was the first time our team got to collaborate together. We had the RFC that someone on our team wrote. I took the lead from there and started making tickets and working with the rest of the folks on the team to figure out what to work on. And then we just got it done. Got it released in 1.20, I believe. And that whole process was our first time going through the ideation: this is the RFC that was created, presented to the working group, and approved. When I say approved, you gotta get it from the right people. So you see some people here like Lin Sun, you have Eric, John Howard, Louis, and some other people like Mitch Connors. All these people are the big head honchos when it comes to the progression of Istio features and the ambient progress we're trying to make. They say yay or nay to a lot of the things we work on. So getting approval from them, that was a big first step. Once we did that, it was actually time to implement it. So I took the initiative of volunteering to make the tickets and tried to treat it like a typical internal product experience I was used to. All right. Sometimes we'll see in Istio that people are just making PRs without a ticket or an RFC tied to them. I did not want to do it like that. So I wanted to tie everything to an umbrella issue and call that our project-level issue. That's an idea, a way of thinking, that I shared with the team from my last job. You have your project-level issue, and then underneath that are all the different tickets that will help you implement and complete that umbrella issue, essentially.
So once we did that, we were able to break out the tickets for specific policies as to what needed to be done to update them. We also had to have some work done on the API side, and Jackie worked on this, Jackie Elliott, the one who presented the session we shared the link for earlier. She worked on actually updating the proto definitions. So if people aren't familiar: you probably understand what a REST API looks like, you know, the typical JSON stuff, all your verbs, your GETs and POSTs. You need to be able to make that available to the callers in your code, right? And there's some API stuff that happens. Well, gRPC is also for APIs, but it's done through what's called protocol buffers. Yep. And this is supposed to be a more efficient way of exchanging data. At my last job, a lot of those gRPC calls were happening internally. So we started out using gRPC, and then the external stuff that was called by front-end applications would be done through REST. In this case, I'm assuming the communication happening internally within Istio is also being done through protobuf because it's more efficient. So we had to update the gRPC endpoints that matter for this particular change. And in this case, the proto definitions that define what a policy looks like needed to be updated with targetRef. And the back and forth on that was a little interesting. It seems like a small thing to just add a field, and I'll get to what targetRef looks like in a spec in a second. But to add that particular thing, there was a lot of back and forth. There was a lot of thought behind which policies we should add this new field to and how it should be put together within the code, like what the right abstraction is. I'll get stuck on words here, but you get the idea.
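To make the proto side concrete, a change like this roughly amounts to adding a shared target-reference message to the policy definitions. This is an illustrative sketch only, not the actual istio/api diff; the real message names and field numbers will differ, so check the istio/api repository for the real definitions.

```proto
// Illustrative sketch; see the istio/api repository for the real definitions.
message PolicyTargetReference {
  string group = 1;      // e.g. "gateway.networking.k8s.io"
  string kind = 2;       // e.g. "Gateway"
  string name = 3;       // name of the targeted resource
  string namespace = 4;  // optional; defaults to the policy's namespace
}

message AuthorizationPolicy {
  // ...existing fields such as selector, rules, action...
  PolicyTargetReference target_ref = 5;  // field number is illustrative
}
```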
No, I think the biggest thing that I'm pulling from that goes back to the difference in culture between traditional enterprise and open source. Because, as a product manager, when I've talked to customers, one of the biggest things that I'll hear from someone in an enterprise environment is, hey, we need this feature. Can this open source project get this feature? And then I'll return to them and I'll say, okay, you want that feature? Create an issue, go. And that's a new concept for a lot of people when they're working with open source. It's not like a traditional ISV that you've got a contract with. And so what you were showing is, hey, you wanted to enhance something, and you took the initiative to do the RFC, present it to, we'll call them the gatekeepers, so to speak, of the project, and then really drive that all the way through to it actually getting merged into the project. Well, just to be clear, Keith wrote the RFC. But another thing to note that I didn't point out: we did the RFC initially and there were some assumptions that were made. I don't remember the exact details, but that's just how it works. Whenever you work in tech, right, you make some assumptions based on the data that you have at the moment. But in open source, and especially Istio, there's a lot of context we don't have. And I remember Jackie's first PR to update the proto definitions for those policies that take the targetRef: there were a lot of assumptions that we didn't think about when we wrote the RFC, and they were quickly addressed by the maintainers. So that was also interesting.
And that's something we've kind of course-corrected on, having a more filled-out RFC, in the sense that we try to think upfront of even more use cases, ones that we're now aware of, I guess. There were different things with the UX, the user experience, that we didn't initially think of, or other things that were in flight that might affect what we were working on. That stuff came to light once we made the PRs. You try to avoid that by making good tickets and getting approval for the actual proposal, but even the maintainers might not know at that time, until the implementation starts happening. Yeah, for us, I feel like the proposal went pretty quickly, and then we started pushing out PRs and all these questions popped up. Yeah, well, thanks for making that a little more tangible. All right, I wanted to show the... Yeah, let's go ahead and jump into it. Then I'll ask: is there any way you can increase your font, so it shows up well on the video? Let's see, maybe if I make this... No, I think it's readable, but anywhere you can make it bigger, great, let's just not waste too much time on it. Oh, I like that. But yeah, talk to us. What are you doing here? What's happening? Yeah, so here's an example of one of those four policies that we updated. This is AuthorizationPolicy. There's a Bookinfo application that is canonically used in the Istio documentation. This Bookinfo app has some different pages reviewing different books and things. If you wanna access that as just an everyday person, behind the scenes there's usually some authorization happening. Maybe a specific page is only for admins, so you can only post to it or delete things if you're an admin.
The authorization policy is gonna enforce that. You apply this policy to your cluster and, in this case, we'll target a gateway. Everything going through that gateway has to align with whatever the policy rules are. If you look at this, there are a couple of different sections. You have the targetRef, you have the action, and then you have the rules. The rules are telling you what we're trying to affect here, I guess. That's one way to look at it. So in this case, there's a sleep service that's running with a service account, and then we have this gateway service account. We want the rules of this authorization policy to apply to those, and the operation, the methods that we care about in this particular policy, are GETs, and we want to allow them. So that's what the action is: we want to allow GET requests for these particular service accounts. And the gateway referenced here is something that we'll add later on. This is a policy that I'm saving locally so I can apply it and not have to rewrite it, but we're gonna have to create this gateway. If the gateway's not there and we apply this policy, it won't work. So that's what we're gonna go through. We're gonna go through the steps to make our cluster an ambient cluster: we're gonna install the Gateway API CRDs, do the necessary steps to set up the Bookinfo project on there, and then we're gonna add this authorization policy. We're gonna do a check before, where we hit those endpoints, add the policy, and then do checks after to see that we're being blocked on certain things, like maybe a POST or a DELETE. And that will be the demo. So right now I have a cluster up already. Is it bad if I go like this and use the shortened alias? Yeah, that's fine. So right now we just have a typical kind cluster set up. I'm gonna do this. Those are my nodes; it's got three nodes, a control plane node and two worker nodes.
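The policy being walked through looks roughly like this. The shape follows the Istio ambient getting-started material, but treat the specific names (namespace, gateway name, service accounts) as illustrative rather than the exact file from the demo:

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: viewer
  namespace: default
spec:
  targetRef:                 # the (yet-to-be-created) waypoint gateway
    group: gateway.networking.k8s.io
    kind: Gateway
    name: waypoint
  action: ALLOW              # only what the rules match is allowed;
                             # other methods like POST/DELETE get denied
  rules:
  - from:
    - source:
        principals:          # identities of the sleep and gateway service accounts
        - cluster.local/ns/default/sa/sleep
        - cluster.local/ns/istio-system/sa/istio-ingressgateway
    to:
    - operation:
        methods: ["GET"]
```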
And this three-node kind setup is actually how they recommend you set it up, right out of the documentation. So it's pretty straightforward. And there's a link at the end of this presentation that you can use to follow the same steps I'm doing. Let me look at my notes. So the next thing you wanna do is install the Gateway API CRDs. We have this cluster up, just a normal local kind Kubernetes cluster. For people who don't know, kind is a tool that you use locally to run or set up Kubernetes clusters for testing and playing around with things. So we have this cluster set up; let's add the Gateway API CRDs. I do that with a big, kind of ugly command. We add this because, and this is something I've noticed with ambient too as a new contributor, there are some gotchas that you have to know. I think Istio does a good job of documenting this, but not all clusters have the Gateway API CRDs installed, or are guaranteed to have them. And if you miss this step, you might get stuck, because the cluster will say it doesn't understand what spec.targetRef is. It doesn't have the appropriate type definition, essentially; it might not be updated for your cluster. So we gotta make sure it is. And then all these CRDs are updated and created. Now we should be able to install Istio, which we haven't done yet. Actually, let's look at the pods that are there now. There's no ztunnel or anything like that, or istiod. Once we install Istio, we're gonna see all those things pop up. And when you go to install Istio, you gotta make sure to set the profile to be ambient if you want an ambient cluster, which I do. So I'm gonna do that. Yeah, you see it's processing the resources for istiod and ztunnel. It's also adding DaemonSets, so we'll check those too, and verify that the pods in the DaemonSet have come up accordingly. So this takes a few seconds.
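A sketch of the commands being run at this point, following the ambient getting-started guide (the Gateway API release URL and version are assumptions; check the guide for current ones, and note this all requires a running cluster):

```shell
# Install the Kubernetes Gateway API CRDs if the cluster doesn't have them;
# without these, fields like spec.targetRef won't be recognized.
kubectl get crd gateways.gateway.networking.k8s.io &> /dev/null || \
  kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/standard-install.yaml

# Install Istio with the ambient profile: this deploys istiod,
# the ztunnel DaemonSet, and the istio-cni node agent.
istioctl install --set profile=ambient --skip-confirmation

# Watch istiod, ztunnel, and the CNI pods come up.
kubectl get pods -n istio-system
kubectl get daemonset -n istio-system
```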
So you said, hey, I'm installing Istio, I have to set the profile to be ambient, and that's what's gonna enable the ztunnel and then allow those waypoints to do layer-seven policies. Can I still do the normal stuff when I'm in this ambient profile? If I don't wanna traverse my traffic through the ztunnel, can I still use the traditional sidecar here, or do I need to install that on a different cluster? That's a good question. I don't know off the top of my head, to be honest. Okay, all right. It's my understanding there are some things you can do. I think the goal is to have it be close to the existing Istio experience. So if you're an Istio user right now and you're trying to transition to ambient, I don't think there should be a huge difference, but there might be some things you can do with the sidecar stuff that you can't do in ambient, or vice versa. I don't know for sure, but I know there's a lot of discussion in the community about what kind of experience we want ambient users to have. Does it need to be the exact same thing as what we'd call legacy Istio, maybe that's not accurate, but we'll say legacy Istio or sidecar Istio? There are a lot of people saying no, it shouldn't be the exact same thing, and there are also opinions that some things need to be carried over. So it really depends on what kind of things you're doing, I guess. But I'd wanna know all the different use cases. So that's another thing too. As far as being an open source contributor, you don't have to be an expert operations person, right? Like, I'm not an expert at operating Istio. That's not my day job; it never has been. I didn't know much about Istio before I joined. I never operated it, never really used it. When I joined the team, that wasn't a prerequisite for contributing. It never is, in most things, right? Even for Kubernetes.
I worked on Kubernetes before I'd actually really used it. So I think that's something to keep in mind. That's actually a really cool point. I think what keeps a lot of people from getting into open source is they feel like they need to be level 500 before they can even do anything. But from your experience, you're like, hey, I knew of it and just started dabbling with it, and then, bam, I'm a contributor. So it's not as scary as people think; you don't need to be an expert in the project before you can start assisting. Yeah, exactly. Just real quick, for context, I'm applying the sample demo files that exist to bring Bookinfo up. This is something the documentation references a lot, the Bookinfo app. The cool thing I noticed with the Istio documentation is that there's documentation for pretty much everything, and it usually has cool examples, or pretty straightforward ones that you can use. It's pretty consistent across the board. So if you want to learn a new topic or component within Istio, you can look at the documentation, follow their steps, probably not do things correctly yourself at first, and learn that way. That's how I've been doing it. Yeah, and in fact, that's also another way you can contribute. I think I actually updated some docs, and those pages are out there as well. Exactly. So if you look, you see that the service accounts have been added. Yep. We have all these things, right? The next thing we need to do is deploy an ingress gateway, so you can access the Bookinfo app from outside of the cluster. It's another big command that I'm not going to type; I'm going to paste it in here. It's a sed command. I'm not an expert with sed, but it does some stuff. And now that particular gateway YAML has been applied. Next we need to actually set some environment variables.
If you don't set GATEWAY_HOST and GATEWAY_SERVICE_ACCOUNT, things don't work properly, so we need to make sure to set those. Let me paste that in there. And there's some waiting that it wants us to do. There's a condition on the gateway called Programmed; we'll look at that in a second. I'm not a hundred percent sure exactly how that works or why it matters. I want to look into that. Actually, we have a learning day today, so maybe I'll spend time learning that. But it's important for us to wait for that condition, apparently. So that's what we'll do. We'll look at the gateway and see when it finishes. You can just do, I think it's kubectl get gtw. I think gtw stands for gateway; that's the short name. So we can look at that and see what gateway is in there. Can you see? That's our gateway there, right? Yep. Programmed: False. Again, I think the documentation might give more detail; I'm not too sure why it recommends waiting on that condition. Maybe there'll be an error later on and we can try to figure it out. So let's move on to the next thing. Now we can test that our Bookinfo app is working, with or without the gateway. We can do these commands. Are we just gonna curl that endpoint? Yeah, and specifically we're gonna specify the deployment that we're curling from, right? So we exec into the sleep deployment and make the curl command from there. And we'll also do it for notsleep, which is another one that we have up. And we're gonna verify the communication is happening, with grep. So this grep command, yeah, we're filtering the HTML coming back for the title that's there. HTML is not my forte, really, but there you go: it's showing up, "Simple Bookstore App." That is correct. And then we can do the same thing without the gateway host, curling the product page service directly. I'll do it like this.
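The checks being run here look roughly like this, per the walkthrough; GATEWAY_HOST is assumed to have been exported in the previous step, and the sleep/notsleep deployments come from the Istio samples:

```shell
# See whether the gateway's Programmed condition has flipped to True.
kubectl get gtw -A

# Curl the Bookinfo product page from inside the sleep pod, via the gateway...
kubectl exec deploy/sleep -- curl -s "http://$GATEWAY_HOST/productpage" \
  | grep -o "<title>.*</title>"

# ...and directly against the in-cluster service, from notsleep.
kubectl exec deploy/notsleep -- curl -s http://productpage:9080/ \
  | grep -o "<title>.*</title>"
```

At this stage, before any policy is applied, both curls should print the Bookinfo page title.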
And so, if I'm following this right, should that fail, or is that gonna work? This should work. Okay, even without the gateway specified. Okay. Yeah. So now, once we've verified that, again, this is before we've applied the authorization policy, and before we've added the application to ambient. So that's the next step. We need to add the application to ambient by labeling. Basically, you can enable all pods in a given namespace to be part of the ambient mesh by simply labeling the namespace. That's straight from the documentation, so that's what we're doing. Now, I think it says to send test traffic. This one is a little weird to me, but I think the reason why it says that is because there's a step in there which we're not gonna do. So, I'm using a VM, and I'm not too sure how to surface it, but there are tools you can use called Kiali and Prometheus. You can use these together to visualize the network traffic going on. Seems really cool. But I couldn't figure out how to get it working; you port-forward to localhost and all this stuff, but I'm running on an Azure VM. I feel like the past few months I'm just now learning how to use Azure properly, but I haven't learned enough yet to be able to show something running on there in my local browser. I'm sure there's an easy way to do it; I just have to spend time figuring that out. So we're gonna skip that step of using Kiali and Prometheus, but I think that's something you should explore if you're able to. I think we can actually skip that send-traffic part too. Now we can allow the sleep and gateway service accounts to call the productpage service. I'll copy something and paste it in here, and we'll talk about that. We're setting the policy now, right? Yeah, so this policy being set is without the waypoint part of it yet, okay?
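The policy being pasted at this point is presumably the layer-four variant from the same walkthrough, something like the following (names again assumed from the Bookinfo sample):

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: productpage-viewer
  namespace: default
spec:
  selector:                  # a pod selector, not a targetRef: no waypoint yet
    matchLabels:
      app: productpage
  action: ALLOW
  rules:
  - from:
    - source:
        principals:          # identities allowed to reach productpage
        - cluster.local/ns/default/sa/sleep
        - cluster.local/ns/istio-system/sa/istio-ingressgateway
```

Because it matches on identity only, enforced at layer four by the ztunnel, it can't distinguish a GET from a POST.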
This is just a basic authorization policy to allow the listed service accounts to communicate with the Bookinfo app as expected. But that policy is not prohibiting any, like... It's not doing any layer-seven stuff yet, right? Yeah. Okay. So now we can call the productpage service from those sleep and gateway service accounts, and we can confirm that some things will succeed and some will fail. The thing that will succeed is the sleep service account, right? That's gonna be able to communicate as expected. But if you notice, we don't have notsleep in there as a service account. So when it goes to make a request, it's actually gonna fail, and it should, because it was not authorized to make that request. And there's a specific error code that we should be looking out for; I think it's error code 56. It's failing because of that, I believe. So just copy that in there. This is something we did before, gateway host, product page. It works, as it should, because we authorized it to. Right. Now let's do it with notsleep. It should fail. See? It doesn't work, okay? Now, here we go, finally, to the thing that I was able to work on with my team: the waypoint proxy. We need to deploy a waypoint proxy for the productpage service. To do that, we use something called istioctl experimental. So istioctl x is the experimental stuff that you get access to by specifying that explicitly. The waypoint stuff is under experimental currently because we're still iterating on it. So that's where it lives. And then we can look at this. Actually, before we do this, just for context, I think we talked about it before, but this targetRef, this is the thing that I was referring to. Okay. bookinfo-productpage, the name of the gateway. Here it is, we're making it right now. So that's how that ties in, if you're wondering, like, oh, where is it?
Where's this bookinfo-productpage of kind Gateway? Where does that come from? We're creating it right now. This is actually how we make the gateway and then have an authorization policy target it. Simple as that. So now we go back over here. By the way, I think it might have been this step, or it might have been the gateway YAML part before, I'm not sure, but if we don't install the Gateway API CRDs, it's around this point, or a little bit before, where it starts complaining, like, I don't know what spec.targetRef is. And that's because the CRDs aren't there, so it's not mapping correctly. I think that's how it broke the last time I did it. So watch out for that. If you get some weird issue where it doesn't recognize spec.targetRef as an actual field and doesn't know what to do with it, it might be because the Gateway API CRDs aren't installed on your cluster. So let's move on to the next thing, now that that's there. Oh, and now you can see that the... okay, here we go. Yeah, so that waypoint proxy's status, Programmed, should be set to true. We looked at that before, so let's see. Oh, not that. I think you could just do... that's just me. You were just listing it out, right? There you go. Yeah, here we go. You could grep for Programmed, maybe. Yeah, see, you can see that Programmed is true. You can also just get the gateways for all namespaces. And true means that it has a policy, right? Okay, I'm asking you, right? True means that it's processing a policy? I think so. That sounds reasonable. Again, I don't know a hundred percent, off the top of my head, exactly what that means, but that sounds reasonable. It sounds like maybe it's being leveraged or something like that. I will look this up, though.
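For reference, the waypoint being created at this step is itself just a Gateway resource; `istioctl experimental waypoint apply` generates something roughly like this (fields assumed from the ambient docs of the time, and subject to change while the feature is experimental):

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: bookinfo-productpage          # what the policy's targetRef points at
  namespace: default
  annotations:
    istio.io/service-account: bookinfo-productpage  # scope to this identity
spec:
  gatewayClassName: istio-waypoint    # marks this as a waypoint, not an ingress
  listeners:
  - name: mesh
    port: 15008                       # HBONE tunnel port
    protocol: HBONE
```

Once its Programmed condition goes True, layer-seven policies that target it can take effect.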
Yeah, we'll follow up on that. Again, you know, being new to the project, you don't need to know every single thing to be productive and contribute. I've merged PRs for ztunnel, this targetRef stuff for ambient overall, stuff for core Istio. And my operational knowledge is probably worse than a lot of the people viewing this right now. I'm still able to contribute. I like that. I like the honesty behind that, because again, I think what prohibits a lot of people from jumping into open source is they're like, hey, I'm not an expert, people are gonna laugh at me if I post a question, or people won't like the way I code, et cetera. But I like that you're just being really candid about, hey, I think that's how it works, but I'm doing this other thing, and I've got that stuff merged, and so on and so forth. Yeah, there's no point in pretending. So, Microsoft has a thing called growth mindset, and it sounds a little corny to mention it, working at Microsoft, but it's something I always think about. You need to go in open to the mindset that you're not gonna know everything. You might be the dumbest one in the room, but that's where you want to be. You want to be in a situation where you're having to grow and learn to get to the level of the other people around you. If you're in a situation where you know the most, it's time to find a new room. That's how I apply it, to life and a lot of different things, but especially work. If I want to grow as an engineer, I need to be comfortable being uncomfortable. You need to be in a situation where you're constantly having to learn. Oh, I don't know exactly what Programmed, true or false, means? Now that's something else to look up. And over time, you're building a knowledge map, right? Diving in, diving back out, diving in, diving back out.
And you do this over time, and that's how you get to be an expert. There are no shortcuts. No one's born just knowing Istio or Kubernetes; that would be a little silly, right? You have to go through the struggles, get the battle scars, what have you. You need to make some projects that stink and don't work correctly. You need to deal with customers getting upset about certain things working at a subpar level, and then you need to improve and learn from those experiences. And then, next thing you know, you're building, I don't know, a full-on Istio abstraction for your company because you're so good at it now. But it takes time to get there. Yeah, it's a journey. While we talk, let's apply this authorization policy, and then we'll test that it has an effect, and that will be the demo. Let me just... Awesome. And then, yeah, when we close this out, obviously this demo that you ran through is documented, so we're gonna share the link if people wanna walk through what you just presented here. And, let's see, is that all working? I see the output there. I think I lost your audio there. Oh, okay. There you go. Yeah, so I was saying that there's an unknown field here, and I thought that was the CRD issue, but that doesn't seem to be the case. I'm trying to see what field it is. Yeah, what should we see? If this is working correctly, what should the output be? Yeah, so I just wanna basically verify, with this command, for example, that because we only allowed... We're basically going into that sleep app and then just curling back to... Where is this? This might be from a previous step. Let me see, just referencing my notes, and I'm gonna look at the documentation to see what I missed here. Hey, live demos. That's what happens. Well, I'll tell you what, let's kind of wrap it up.
I know that you're following what's been posted, and we can come back and figure that out. So as we close this out, hopefully everyone has seen this journey that you've gone through, Jeremy. I appreciate it. I think this is really beneficial to the community, so that they can really see the day-to-day, the process of someone who wants to approach a project and start to contribute. And I really appreciate, again, you being candid, and I wouldn't say vulnerable, but just really showing, like, hey, this is what it is, right? Am I an expert in this particular thing? No. But did I spend enough time to ramp up on these particular things and get things into the codebase? Yes. And I think that's huge. And by the way, I just want to point out, too, this experience of me following the documentation and it not working as expected. Yeah. That's the day-to-day. This is it. I don't understand why it's not working. Maybe it's something I did, but it might also be an opportunity to fix the documentation. Yeah, yeah. We can go through the docs. If you're finding things that aren't working when you try them out, make an issue or make a PR, bring it up in the Slack channel. Ask, hey, is this supposed to work? And sometimes you'll find out, oh, actually, it doesn't work on my end either; go ahead and make that PR, Jeremy. Right, right. Yeah. I think a lot of times you have these really sanitized sessions of things, but hey, we went live, we're doing this as you would do it. But to close it out: hey, how do you get involved in ambient? Beyond the things that Jeremy is showing you here, there's actually a "drive ambient mesh to beta" Google Doc out there. Please go view that.
That doc is tracking all the dialogue and all the conversations that the community has been having around which features we're prioritizing, which components are working, et cetera. So that's a great place to really understand where we're at with the project. And then also, please join the weekly Istio contributor meeting. This happens every Wednesday, 1 p.m. Eastern, 10 a.m. Pacific, and that's the meeting number there. You can also find this on the actual Istio GitHub repo. And then lastly, here's the link that Jeremy was going through, getting started with ambient. There you're gonna find a ton of content on what ambient is; it goes really into detail about the pictures that we showed, the whole architecture. And then there's a demo to help you understand what this whole ambient mode is all about. So with that, we wanna thank everyone who stayed the course with us. I know this went long, but hopefully Jeremy sharing his experience has been beneficial and insightful for you as you embark on your journey. So, again, thank you all. Anything else, Jeremy? Any last thing you wanna share? Yeah, I just wanna add to the point about attending the working group meeting. Yes. I feel like the community's still trying to get better at documenting things and making things more transparent. Obviously, it's not Kubernetes-sized, so it takes time to get to that level of maturity. I think it's still at a point where a lot of the information and context that you'd need to be able to contribute better can be found during those meetings. Yeah. Like these little side conversations, or historical context that isn't documented anywhere, that you'll learn about in the meeting. So if you wanna just sit by, be a fly on the wall, just listen, there's no obligation to talk or anything. You just sit there and listen and learn.
You'll learn a lot. Also, if you have any questions about contributing, whether it's, hey, I'm just a new contributor and I'm not sure where to start, or, I tried following these steps and got stuck, you can post that on the agenda for the meeting, or in the contributors Slack channel for Istio as well. And any proposals that you have would also go through that working group meeting. You just add it to the agenda list; anyone can add to it, or request to add to it. And then you can be a part of that meeting and start contributing like that. Yeah. Anyone can get involved. I think the very first step is to show up at a meeting, whether or not you've got anything to say. Just join the meetings, hear the conversation, kind of understand, and after a while that's gonna help you. There are so many times, again, going back to me being a product manager, people ask, hey, can we get this feature? And I'll say, hey, create an issue, put that issue on the meeting doc, and say, I want this to be an agenda topic, even if it's just for you to learn and have people talk to you about it. Just having that initiative to get involved is an awesome first step for any open source project. So with that, again, thank you to everyone who hung out with us for this long. I hope this was beneficial to you all. And yeah, Jeremy, we've got to do this again. Let's create a series; maybe we could just follow your whole journey through this thing, you know? Yeah, it's fun, it's fun. All right, thanks everyone. And we will see you all later. Take care. Thanks.