All right, welcome everyone. How's everyone doing today? Come on, energy! Yes, energy, let's go. So we have a treat for y'all: we actually brought hardware, edge hardware, on site today. We've got a little Intel NUC here, a Raspberry Pi hanging over here, and a little mini router, and we're going to see some magic going on. So thank you for coming. Our talk is called "Pi in the Sky: Onboarding Edge Workloads into a Service Mesh", or specifically, ambient mesh. And by the way, quick shout-out to Adam, he's right there behind us. Wave, say hi to everyone. Anyways, this talk is about taking workloads to the edge, but without Kubernetes. Because let's be real: we have a lot of types of workloads that exist today. We have Kubernetes-based workloads, VM-based workloads, bare-metal workloads, and tiny services that have to run on small hardware. So how do we make this all work? Let's get into it.

Yeah, so first we'll do introductions. I'm Nina Polshakova, a software engineer on the Gloo platform team at Solo. I work with Istio, onboarding VMs, and some ambient stuff. If you're on the Kubernetes dev listserv, you've probably seen me post a bunch of emails because of the enhancements freeze: I'm also the 1.29 enhancements lead, and luckily we passed code freeze, so we're all good there. I've been on the enhancements team for the past two or three releases, since 1.27. So I also do some stuff on the Kubernetes side.

Fantastic. And before I introduce myself, I just want to say a massive, tremendous thank you to Nina for putting in all the effort and putting this demo together. If you actually go look at the git commits in the repo, you'll see that her chart is like all the way up here, and mine's like all the way down there. So let's give her a round of applause for putting that together. My name is Marino.
I am a developer advocate at Solo, so you'll see me all around the globe talking about a variety of different networking-related technologies: service mesh, API gateways, CNI, and even the regular networking stuff we see in the hardware world, like this little router here. Funnily enough, in trying to get our demo working, we ended up revisiting a lot of former network engineering skills, like using IP routes to make things work. And it's kind of funny, because that's effectively how the lower layers of networking truly work, right? You're using a whole bunch of routes, a whole bunch of layer 2 and layer 3 functionality, to make things move. But a lot of that intelligence doesn't live there; it all lives at the higher levels.

So before we actually dig into the presentation, we have a quick survey for y'all. Go, I didn't clear that, that's okay. Go to that QR code, and we'd love to know what challenges come to mind. I want to see everyone pull up that QR code. I'm not seeing phones come up. Come on, let's go! Tell us what you think the challenges are when it comes to edge networking or edge computing. I think networking is a big challenge, but there are other challenges that exist too, so I'd love to hear your thoughts on that. I'll give you all about 30 seconds to respond, and we should see that word cloud start popping up. And I'm seeing some good words: there's security, there's compute, there's latency. Okay, security is a big one. Scale, updates, disconnected, DIL networking (I'm not sure what that is), heterogeneity, orchestration. Yes, these are all interesting. Yeah, and security happens to be the recurring theme through anything you do. Whether it's networking in the data center, computing in the cloud, whatever it is, security is always going to be at the forefront of it. So we'll actually dig into that. Next question: join the quiz. Where's the quiz?
Okay: are you currently using a service mesh in production? And if you don't know what a service mesh is, think about the way we connect our services together. Are we getting answers? No? Yes? I think you have to end it and go to the next slide. Oh yeah, here we go. Awesome.

So a service mesh provides some key functionality: traffic management, security, and observability. But, you know, our regular networks do this already, so where is this actually happening? When you think about our containers and the way we abstract them and layer them with services, we are effectively providing service-to-service connectivity, but well above that physical layer. And in that fabric, that service mesh fabric, when we have different services that have to transact with each other, we have to find ways to know what's going on whenever there is a service failure. We have to find ways to recover from that, and ways to scale, but also to scale the networking alongside it. And we also have to bake in things like identity, and also encryption, in the way our services communicate with each other. That's effectively what a service mesh is providing you today. You might have heard of the Istio service mesh. In fact, Istio Day is going on just a few halls, or a few rooms, down, with a lot of great talks on service mesh, and you'll also see service mesh throughout the KubeCon conference.

One more question for you, and then we're going to get into the demo. Oh, I think there's one more question here. Yeah, last question for you: what's your current job title or function? We're interested to know who here is actually playing with edge technology. And hopefully it starts. Okay, there's some people typing. I think edge computing can fall under the responsibility of a variety of different teams. It may fall to a branch team.
It may fall to a platform team, and it may fall into the world of DevOps as well; we're seeing a whole bunch of different teams take it on.

Okay, so thank you so much. Let's get into the actual talk, and we've got a demo for you. When we're thinking about the edge, there are a variety of considerations, and hats off to Sergio Mendez, who actually put this image together; it's from his book. There are so many different considerations, from the expertise, to the people, to the type of hardware, to the type of applications, to how many locations, to how many different KPIs you need to capture, to really form that edge strategy, right? So as you decide to begin that journey, you also have to answer why you're doing this. You might have this monolithic app that needs to be broken down into microservices, but you also need to address this latency concern, because you're trying to make sure that services are accessible in a quick manner, a non-latent manner. And so there are so many decision points to consider when building out an edge compute environment. I won't dig into the details, but if you want to know more, come chat with us after this talk.

Having said that, let's jump into our demo. That QR code there is actually going to lead you to a repo. You're welcome to follow along, but I don't think you're going to be able to in the 17 minutes and 30 seconds we have left, because we're actually going to onboard services into the mesh, and then we're going to see some action with some of this magical hardware here. So I'm assuming you've all pulled that up. I'm actually going to pull up a diagram here, and I'm going to pull up my terminal off to the side. This is going to be fun. Do you have the localhost? Yep. Close this, you've got to make this bigger, and I don't know why it's looking like that. It's all pretty small, but I think it's fine. Okay. All right, so... actually, the k9s, if you do... Yeah. All right. So, sorry for my screen real estate.
I'm using a very small MacBook here. But in our little diagram, what we actually have going on is a kind cluster running, with Kubernetes obviously, and Istio installed. But Istio is running in a special mode called ambient mesh, and ambient mesh enables something called sidecarless Istio. In traditional Istio, or normal Istio, you actually have this sidecar proxy that runs alongside your main container. And I can show you this: if you actually look at the output of the pods (we're using k9s, by the way), there is a pod here called notsleep, and we see that it has two out of two containers running in that pod. There's one container that's actually the notsleep function itself, and there's a secondary container that acts as the proxy, the sidecar proxy.

However, there are situations in this world where having a sidecar is actually not valuable for your workloads, and you need this sidecarless approach, because you don't want to be intrusive. You have to think about your operational instances and concerns, and at the same time, onboarding sidecars means more hardware requirements, because of more CPU and memory. So ambient mesh enters the chat, and we can effectively do a lot of different things here. But in our case, we actually have this little setup. The second Raspberry Pi is not here, but there is one Raspberry Pi that will actually be onboarded with something called a ztunnel in Istio. The ztunnel here is like a node-level proxy that effectively proxies all connections for that workload, that edge-based workload, which is a Raspberry Pi. But that Raspberry Pi isn't in the mesh yet; it's not actually inside this cluster. It's outside. It's external.
It's this little device right here. So we're going to onboard this into the mesh very shortly. But before we do, I want to talk about a few other things, and how we get something into the mesh. Inside a service mesh, we expose different artifacts using something called the ingress gateway; if we want to access services inside a mesh, we'll use an Istio ingress gateway. Now, in the case of actually attaching services into a mesh that are external to a cluster, we have to expose our control plane to that endpoint, and we have to use something called an east-west gateway. The east-west gateway is basically a control plane path for us to be able to pick up updates: certificate information for the sidecar or the ztunnel, identity information, policy information, and routing information to know where to get to our services. So what we end up doing is using the east-west gateway to expose our control plane in Istio, to get those updates to the ztunnel that lives on our Raspberry Pi. There are a few artifacts here that I could dig into, but I'm thinking about time, and I think we should move on to the next part, so I'll pass it along to Nina.

Yeah, so as Marino mentioned, we're going to onboard our Raspberry Pi into our mesh. There are a couple of things we need to do to onboard, so let me just make this full screen. There are two parts: the onboarding steps that happen on the cluster side, and then we have to copy some config over and actually run the ztunnel on the Raspberry Pi side. So if I go back... or actually, can we do the split view again? Side by side? Yeah, I can do it for you.
Well, so on the cluster side, we first need to expose istiod, the Istio control plane, like Marino was saying. The way we expose istiod is with two Istio resources: a Gateway resource, which configures the incoming traffic into the cluster, and a VirtualService, which is the basic building block that determines our routing logic. So, our Gateway resource: we've already applied this. It's called istiod-gateway, very aptly, because it's responsible for exposing istiod. And here we define our two ports, 15012 and 15017. These are the two ports that we're exposing in order to let the Pi communicate with our control plane, and we're going to use passthrough mode here.

Because we have passthrough mode enabled, we also need a VirtualService that matches this config. If you go to our VirtualServices, this again is just an Istio CR that Istio provides to build routing logic. We select the Gateway that I showed earlier, and we have our TLS route defined. On the TLS route, again, we have our two ports, and we have the istiod host destination. So that has already been created on the cluster side.

The second thing we have to do is create a service account. If we look at the service accounts, and we go all the way down to the pi namespace that we've created, yep, in the pi namespace we have a pi service account, and this is what we're going to use in order to identify the Pi. Later on in the demo, we're going to apply an authorization policy and use this service account to identify our Pi. So this is, again, created on the cluster side. This service account is also used to create a short-lived Istio token; that's just a JWT that's used in initial bootstrapping.
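For reference, the Gateway and VirtualService from that first step might look roughly like this; it follows the pattern Istio documents for exposing istiod through an east-west gateway with TLS passthrough, and the resource names, namespace, and gateway selector label here are illustrative:

```yaml
# Sketch of the Gateway exposing istiod (illustrative names; ports
# 15012 and 15017 are Istio's defaults for XDS and the webhook)
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: istiod-gateway
  namespace: istio-system
spec:
  selector:
    istio: eastwestgateway        # label on the east-west gateway pods
  servers:
  - port:
      number: 15012
      name: tls-istiod
      protocol: TLS
    tls:
      mode: PASSTHROUGH           # TLS is terminated by istiod itself
    hosts:
    - "*"
  - port:
      number: 15017
      name: tls-istiodwebhook
    protocol: TLS
    tls:
      mode: PASSTHROUGH
    hosts:
    - "*"
---
# Matching VirtualService: since the Gateway is PASSTHROUGH, route on
# the TLS port and forward to the istiod service on the same ports
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: istiod-vs
  namespace: istio-system
spec:
  hosts:
  - "*"
  gateways:
  - istiod-gateway
  tls:
  - match:
    - port: 15012
      sniHosts: ["*"]
    route:
    - destination:
        host: istiod.istio-system.svc.cluster.local
        port:
          number: 15012
  - match:
    - port: 15017
      sniHosts: ["*"]
    route:
    - destination:
        host: istiod.istio-system.svc.cluster.local
        port:
          number: 15017
```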
So we're going to copy that token onto the Pi, the initial bootstrapping is going to happen, and then the ztunnel does a CSR to Istio to actually get its certificate. By default, the short-lived token only lives for an hour; for demo purposes, we've increased the timeout to 24 hours, just to make sure everything stays in a good state. But that's what this is doing here.

And then finally, we need a WorkloadEntry. Again, this is another Istio resource that we have defined, and this is how Istio represents the Pi in the mesh. Here we have the labels that the WorkloadEntry is going to use, and you can see the address is our Pi's address; it's on the same network as our Linux box. And because we're running a ztunnel on the Pi, we also need this special label that says ambient redirection is enabled. This tells our cluster that the ztunnel knows how to speak HBONE, and we're going to use HBONE as our secure connection.

Cool, so that's basically all that happens cluster side. Let's take a look at what's happening on the Pi. Oh, I think we have to SSH in again. So... all those logs are actually ztunnel logs, because we were testing the demo earlier. So we're going to... I think we have to cd into the directory. I think setup... no, no, not that one. setup-ztunnel, yep. Cool. Yeah, so we can see it run live. Let me make this a little bigger. Disconnected again. Yeah, so this is why it's hacky, but at the end of the day, we'll make it work. All right, there, we cd into it. Ah wait, kill it. Oh, I'm sorry, did that again? Yes. Okay, now it's running. So the command I ran, if we scroll back up, is doing all the steps on the right.
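The WorkloadEntry described a moment ago might look roughly like this. The name, address, and labels are made up for illustration, and the exact redirection key and where it goes (label vs. annotation) can vary by Istio version, so treat this as a sketch rather than the repo's exact manifest:

```yaml
# Sketch of the WorkloadEntry representing the Raspberry Pi in the mesh
apiVersion: networking.istio.io/v1beta1
kind: WorkloadEntry
metadata:
  name: raspberry-pi
  namespace: pi
  labels:
    app: pi                               # used by a Service selector later
    ambient.istio.io/redirection: enabled # "this workload speaks HBONE"
spec:
  address: 192.168.1.50                   # the Pi's IP on the flat network (hypothetical)
  serviceAccount: pi                      # ties the Pi to the pi service account identity
```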
So it's copying the service account token and the config files over to the right directories, we're editing /etc/hosts to have that istiod SNI hostname that we saw in the virtual service, and we're also installing the prebuilt ztunnel that I've included in the repo.

A quick side note: ztunnel is a Rust-based proxy that you can actually build on the Pi itself. It takes like 30 minutes. It's super lightweight compared to Envoy; if you've ever tried building Envoy, it's not a fun time. But ztunnel is very easy to build, and you can even build it on a tiny little Raspberry Pi.

The address I have here, though, is actually the east-west gateway address, because, again, we need to get xDS updates from our control plane to our Pi. Because this is a flat network, we're just writing the east-west gateway service IP directly, and that's what we're setting in /etc/hosts. So if we go back to our logs, we can see that we've connected, and we should be able to test connectivity to our service. Yeah.

All right, so... or is this... wait, is this the right one? Okay, yeah, let's check that we've got localhost, because I think I don't see... you don't see an entry? Yeah. Where would I point the browser? To localhost. So ztunnel also exposes an admin panel that has... oh no. What happened? The demo gods. Wait, if you open a new SSH connection with the 15000 one. Yeah, that's what happened. Okay, problem fixed. Yay! All right, here we go. So we are able to get the config dump directly from istiod, which actually tells us a number of things. I'll make this really large for you, and I'll be quick, because we're kind of running out of time. But if I did a Ctrl-F and I looked for "hello"... oh no, it's not right. I think that's the wrong path on the ztunnel. If you go back, that's what it was. So, in the GitHub, just real quick... or, do you know where the GitHub is? I think the file path is slightly different.
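Stepping back from the live troubleshooting for a second, the /etc/hosts edit that the setup script performs boils down to one line. The IP here is made up and stands in for the east-west gateway's service IP on the flat network, and the hostname is whatever SNI host the VirtualService routes on:

```
# /etc/hosts on the Pi (sketch; 10.96.0.26 is a placeholder for the
# east-west gateway service IP reachable over the flat network)
10.96.0.26   istiod.istio-system.svc
```

With that in place, the ztunnel on the Pi resolves the istiod hostname to the east-west gateway and picks up its xDS updates through it.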
I think we're using an old version. Oh no way. Wait, well, just exit then. Let's see. Yeah, it should be... is this the service IP? Let's double check. 26... Yeah, live troubleshooting, y'all. I promise you it literally worked five minutes before we actually got on the stage. I see. Yeah, we can maybe just run the recording, but it looks like it did not connect. It's not connecting because it's not getting the updates from xDS. Oh, interesting.

Okay, so let's talk through what was actually supposed to happen. We were hoping that through all of this fun exercise, you would see these little LEDs light up, but something must have broken. Maybe, I don't know. Anyways, one of the interesting things about how this all functions is that I'm making a call from either my Pi to the kind cluster, or the other way around. And while we're figuring that out: what's actually going to happen is, as I make that call, a curl to, let's say, a service that's running in my kind cluster, I should get a result. I will get a result as expected, because the ztunnel that lives on this Pi is connected to the istiod that runs on this cluster over here. Now, on the flip side, I can run a Python-based server that's actually connected to this LED, and I can run a curl pod on this box. What I would do is just shell into the curl pod and effectively run a curl against the Pi's IP, or the Pi's DNS name, and specifically a path, and that path is basically turning on the LEDs and doing something. That's supposed to demonstrate the bidirectional functionality and communication path between a non-Kubernetes-based workload that lives in the mesh and the cluster. And look, the magic of Nina: it seems to be working. I have no idea, I don't think I changed anything, but... do you want to look at the localhost again? Yeah, let's take a quick look.
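The Python LED server just described runs on the Pi with Flask and the Adafruit NeoPixel library. As a dependency-free sketch of the same idea, here is roughly what the routing looks like using only the standard library, with a FakeStrip standing in for the real LED strip; the /led path, the pixel count, and the response body are all hypothetical, not the repo's exact code:

```python
# Sketch of the Pi's LED server using only the standard library.
# The real demo uses Flask + neopixel; FakeStrip stands in for the strip.
from http.server import BaseHTTPRequestHandler, HTTPServer

NUM_PIXELS = 50  # hypothetical strip length (the demo bug was a stale count)

class FakeStrip:
    """Stand-in for neopixel.NeoPixel: records the last color written."""
    def __init__(self, n):
        self.pixels = [(0, 0, 0)] * n

    def fill(self, color):
        # Set every pixel to the same (r, g, b) color
        self.pixels = [color] * len(self.pixels)

strip = FakeStrip(NUM_PIXELS)

class LedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/led":
            strip.fill((0, 255, 0))  # a GET to /led lights the whole strip
            body = b"hello from the pi\n"
            self.send_response(200)
        else:
            body = b"not found\n"
            self.send_response(404)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo output quiet

def serve(port=8080):
    # On the Pi this would be called to listen on port 8080
    HTTPServer(("0.0.0.0", port), LedHandler).serve_forever()
```

From the curl pod in the cluster, `curl http://<pi-address>:8080/led` would then flip every pixel on, which is the call the Service routes to the Pi.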
I mean, we've got five minutes. So basically, because we're on a flat network, on the kind cluster side we've exposed the pod CIDR and the service CIDR. Our Pi goes through our Linux host, which has an IP, and we're routing everything in the pod and service CIDRs there. So technically speaking, if I did a curl... actually, we'll see. Ctrl-R, curl hello. Yep, that looks good. And if you noticed, you see the successful hello, right?

I'm going to skip the policy, because I think we should just go to the LED. But what the policy was supposed to do is actually block this direct connection and expect a header, and our header would have been "istio is cool". We would have applied it before we made that curl request, and it would have gone through, but all other requests would have failed. And this would have been from the Pi directly to a service in ambient. Now we're going to do the reverse, and I'm going to let Nina take us through the magic here.

Yeah, so on the Pi side: instead of applying a policy on the cluster side, since we saw the Pi can talk to the cluster, we're now going to go in the other direction and have the cluster talk to the Pi. The way we're going to do this is, I'm going to run this Python server on port 8080 on the Pi, and on the cluster side I've created a Service. You could technically use a ServiceEntry here as well, but to simplify the demo we just created a Service, because I think everyone here is familiar with core Kubernetes resources. We're using the WorkloadEntry that I showed you earlier as the selector target here, and we're using port 8080, which is what the LED server is running on. So I'm just going to check that I don't have any authorization policies before I start.
We're going to apply one later And the thing i'm going to do is i'm going to send some traffic from sleep to um our little led server And we send the requests the led slash exciting why is only half of it like Only half of it. Oh, well we can debug that wait. I know I know what's wrong Is it because I changed it to uh, yeah, it's only 50 pixels. So let's quickly debug it I didn't pull down the latest. I didn't do a get pull before we started this. I'm sorry All right, marina marina. Okay, so uh, this is uh, I'm using uh neopixels or like uh, uh, the Yeah, this library here to um run the server to change the the Lights, um, but it's running a flask server. So that's how we're getting like we're sending a get request and changing the the colors um So let's run that again. And uh, now if we send the same request everything should light up live debugging So that's all great, but like that doesn't really show any cool authorization policies, which is to does really well So what i'm going to do now is um on the linux side. I am or is this the right Is this kubekan? Yeah Uh, we're going to apply the not a teapot one wait, uh policies Which one k apply The l4 one so the z tunnel doesn't support l7 policies. So we can't like block on headers or anything We have to block on just the service account So i'm going to apply this and then i'll just cat it to take a look at it Or actually we'll take a look at it in k9s or no never mind. Let's cat it So the thing this authorization policy is doing again, it's a istio resource We're going to deny everything coming from our sleep app So the curl i sent from before was coming from the sleep app So now when i send the the request again, we should be blocked because we have this deny policy So going back, uh If i send it again, I get a connection received failure And if we look at our z tunnel logs, we can see that we got our back rejected because this is an l4 policy We don't get a nice like 403 response. 
We just get the connection rejected. So now, the last thing we're going to do... it's kind of sad that the lights aren't working. If we try this again from a different app, like netshoot, I should be able to send... where is it? Do I have it open somewhere else? I think I have it running here. Yeah, so I should be able to send the same thing from the netshoot pod that I'm running, also in the default namespace, but it's going to use a different service account. So when I run it, we get the same thing, the lights flashing, and looking back at the ztunnel logs very briefly, we see that this is coming from the default/default service account; that's why it's getting allowed in. So we can see that the L4 policy is getting applied, and our lights are correctly working. Awesome.

All right, so let's wrap up. I think we've got like one minute before Stephen comes on stage and tells me to get off. I can't find my deck. Oh, there we go. Okay, so we would love to get some feedback on this talk, because we want to make this better for the next time around. So please go to that QR code and let us know how we can make this demo great for KubeCon Paris next year, which I hope to see you all at. And finally, if you want to know more, like if you want to dig more into Istio ambient mesh, we're actually going to be at the KubeCon booth, D11. Come visit us; we're there for the next three days. We're also going to be at the Cilium booth over at CiliumCon, so come chat with us there as well. Also, I'm doing a talk at CiliumCon today, in literally half an hour, so come check that out. It's called... sorry: "What's smoother than your morning espresso? It's BGP", and bridging gaps. So come check that out over at the CiliumCon session. And we want to thank you all for your time. We hope you have a great rest of your Edge Day, whatever other co-located events you're going to, as well as KubeCon. We hope to see you around. Take care!