 I think we are good to go. Okay. Welcome, everyone. Thank you for showing up. I really appreciate it. My name is Peter. This is me. I'm a community advocate for the OPA project, Open Policy Agent, and I work for Styra. Today I'm not talking about those, though. I'm talking about Telepresence, and about going from network engineer to K8s developer — my journey moving from network engineering into Kubernetes development, and the skills and knowledge that transfer along with that. Network engineering is a lot of troubleshooting, following a network path step by step, and a lot of that carries over into Kubernetes, because you're always going piece by piece to see where things have broken. As I said, I'm Peter O'Neill. You can find me just about everywhere at Peter O'Neill Jr. Tweet me, connect with me on LinkedIn. I'd love to hear from you. Cool. So let's start off with a simple networking path. Once upon a time, website networking was very simple. You have your home network, connected to your ISP, connected to some unknown number of middle hops, and eventually to your website's ISP. A very simple networking path, and so you have a very simple troubleshooting procedure — this is troubleshooting before cloud native. You have your simple website hosted on a public IP, probably served on port 80, unencrypted, maybe just a simple HTML website. Troubleshooting consists of: can I ping this host? Does netcat say the port is open and available to connect to? Does nslookup resolve to the right address — is DNS working for the site? And then your typical fix is either you fix DNS, or you traceroute to the end, figure out which hop the path stops at, and then you call your ISP and say, hey, fix this, it's broken — I'm a network engineer and I can see exactly where the path broke. Very simple, right? Then we move into the cloud-native application.
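That pre-cloud-native triage can be sketched as a handful of shell commands — `example.com` and port 80 here are just stand-ins for whatever host and port you're actually checking:

```shell
# Classic network triage for a simple website (example.com is a placeholder).

# 1. Is the host reachable at all?
ping -c 3 example.com

# 2. Is the port open and accepting connections? (netcat zero-I/O, verbose)
nc -zv example.com 80

# 3. Does DNS resolve to the right address?
nslookup example.com

# 4. At which hop does the path stop?
traceroute example.com
```

If ping and DNS look fine but traceroute dies partway, that hop number is what you hand to your ISP.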
Now we have a lot more things. You're hosted in a big cloud provider. You might have hosted DNS. You might have some elastic IPs assigned to multiple high-availability load balancers. You're serving on 443, doing TLS termination, and passing all of this through into Kubernetes, which might then hit API gateways and ingresses going through a service mesh. There's a lot going on now, so we've increased complexity quite a bit. And now it's not as simple as "where does my networking path stop?" Now it's: is my networking path getting all the way through, and which service is it stopping at? Which connections are breaking? Which things are not working as expected? So the troubleshooting steps aren't as simple as "is the port open?" anymore. Now it's: which port, on which service, in which place? It's a very large, complex system now. So once again, looking back at the network path for the cloud-native application — this is the first half of the journey: home network, ISP, and we can even simplify this and say we're paying for a Direct Connect for our application, connected to the cloud network, through load balancers, to our Kubernetes service. But that's just the first half of the networking path. Or sorry, I jumped ahead of myself here. So this is the first half of the networking path, but let's cover it in a little more detail. As I said: home network, ISP, Direct Connect — but this is connecting you to something called your cloud provider's edge network. This edge network is where a lot goes on, and we'll cover it in a bit more detail in a second. Then from that edge network, you're going to be passed into a cloud region.
So that's a cluster of data centers; then a single zone, a cloud zone, which is a single data center; and then down into your Kubernetes cluster. So let's cover this cloud edge thing in a little more detail. With the introduction of this cloud edge, you're able to bring your application closer to yourself, or to your customers, to your end users — whoever is trying to access this thing. What connecting to your cloud's edge gives you is access to their robust cloud network. You're able to share the large backbone that your cloud provider has. You're able to skip the ASN merry-go-round of being passed from ISP to ISP. There are no miscellaneous MTUs that were set badly, no TTLs that were set badly — you're not constrained by all the mishaps that happen on the wild frontier of the internet. You just need to get to this edge network, and then you can get very reliably to your application. And as a network engineer, this is what we're looking for: five nines of uptime — now it only ever goes down for a couple of minutes, or a couple of seconds, or whatever five nines actually equates to. But it's a lot of uptime. And with that, now that you're a network engineer who has engineered all this software-defined networking so your network is very reliable all the way to your application, what does the network engineer do at a small startup, if you're not constantly troubleshooting broken hops and nodes? And so this is when I figured I would join the dark side and become a Kubernetes developer. You might ask, why is it the dark side? Have you ever looked at the Sith logo and compared it to the Kubernetes logo? I'm just saying, what a coincidence. But anyway.
So now let's say I'm working as a cloud-native developer. Where are my networking problems? Where do I find them? How do I do all my tracing? How do I find out what's going on? Let's look at this networking path to the cloud application one more time. Like I said, this is the first half of the equation: getting that Direct Connect to your cloud provider and getting to your Kubernetes cluster. But that's the first half. Then we have the second half, where the networking path continues but gets more complicated. Now it's not just a linear path. You enter your Kubernetes cluster, you hit the Kubernetes API, you get passed to maybe an API gateway, which hands your user traffic to a set of ingresses. Those ingresses point to any number of front-end services. Those services are intertwined into a mesh with back-end services. And all of these services have n number of pods running n number of containers. There are a lot of places where things can go wrong. And so now something goes wrong — one of these pods, one of these containers, one of these things fails. What do you do? Now you're looking at this from an end-user perspective. You have no idea what's actually happening. You're looking at all of this and saying, oh, it's just not working. It's frustrating. So now we have to do some network plumbing. As any good network engineer does, you start at one end, fish through the pipe, and see what's happening. In traditional network plumbing, you start at one end and go to the other, and these are the tools of the trade: traceroute, MTR, ping, nslookup, netcat, and SSH.
These are helping you identify where along the path things are not working. Then you're seeing how to connect to things. You're checking your DNS. And at the very end there, you have that SSH command, where you know: I want to get as close to the problem as possible. Once I'm there, I SSH to that box, and I can probably fix it. I can open up the ports, I can restart some host-level daemon — I can figure it all out once I can SSH in. But all of this network plumbing doesn't work inside application routing. All this stuff works on the top half, but now that we've handed off to a Kubernetes cluster and we have all this application routing, how do we do this? Ta-da! There's a whole bunch of Kubernetes networking plumbing tools. kubectl is the base command here — I'm sure most of us have experience with it by now. And the three very basic tools we use pretty much every day when troubleshooting or working with Kubernetes are: kubectl proxy, to connect your laptop to your cluster if you want to access the API directly; kubectl port-forward, if you want to do something a little more specific — connect to a pod, connect to a service, or something like that; and kubectl exec -it, which you can point at something like a bash shell. That gives you something similar to SSH. Cool. So let's demo some of these commands. Let's pull up a terminal. That looks good. All right. So here I'm just setting my kubeconfig file; this just lets me access my cluster. Here I'm using shortcuts, so k stands for kubectl, and I'll be using that for the rest of this demo. Let's just do a get all and see what's running on this cluster. It should be empty — I cleared it right before this. So here we see we have an empty cluster. I'm going to cd into my repository. Then I'm going to be looking at... Ooh. What is it? cd. There we go. Okay, it's examples.
So these examples are the ones that are actually on the Kubernetes website. They're pretty widely used — I was just looking for something simple to work with here. You can find these in the Kubernetes repositories under examples. So let's cd into this guestbook-go. And with this... I'm just going to cat out all the config files and shove them right into my cluster. This is going to give me a sample application to work with. And here we see that the sample application has three services. So we just do a kubectl get all. Now we can see that this... Ooh, that's not formatted very well. So now we can see that this application is up and running. I see that the guestbook's front-end service has three pods running, and then it has a database with masters and slaves — or mains and followers. Anyway, this application is running. Now let's actually start working with some of these networking tools. So let's do a kubectl proxy on port 8080. And this is up and running. Remember, as I mentioned before, the proxy here is going to connect our cluster to our local laptop. This gives us access to the Kubernetes API on this port 8080. So we can do things like curl on localhost, on 8080, with /api. And the Kubernetes API is now returning things on localhost — we can see it's showing me some information about the API. Maybe I want something a little more specific. Maybe I want to see the pods — so /namespaces/default/pods. You kind of have to know where things are on this API; it's not exactly the easiest thing to navigate, but you can pipe things through. And kubectl does a lot of this for you — it masks a lot of working with the API. When you want to do a get pods, you can see things. So let me actually show that. Let's see: set my kubectl config, get pods. This kubectl command is also working with the same API, and it's returning the same data.
It's just presented in a way that's supposed to be very easy to use. But sometimes easy to use isn't going to give you all of the information you need. So if we connect directly to the API with the proxy, maybe we can do something like pipe it through jq and get some more information. Maybe I want all the pods' .metadata.name. Yeah, I think that's right. Nope. Say again? Ah, thank you. Thank you. Appreciate that. Here we can see this pulls up that same list of pods. We're accessing the same information, but maybe we want to see more about it. This is going to give us a ton more. It's just another way to look at this information — another way to access specific information that kubectl might be reshaping for us. So with that, let's look at one pod in particular. Let's describe this first pod up here. This gives us some information — very basic stuff about what's going on. Instead of a describe, we can do a get pod with output JSON to really see more about it. This is going to look pretty similar to passing the API response through and asking for the first object in that pod list, because we grabbed the first one off the list over there. So if we scroll to the top here — actually I think I passed it. That's the thing about working with the API: you get a lot of data piped into your terminal. Anyway, here at the top — scroll to the top here — this is just another way to look at this information, a different way to dig into a pod and get more specific information. With that, I'm going to move into the next tool here, which is port forwarding. I'm going to stop my proxy. So, port forwarding. We talked about the proxy being a full connection between the cluster and your laptop, connecting the whole API to your laptop. Port-forward is a little more specific.
This is: I want to forward one specific pod, or one specific deployment, or one specific service to my computer so that I can interact with it in some way — I want to troubleshoot that thing. So let's do a kubectl port-forward. Actually wait, let's do a kubectl get services first. Let's see what's working on here, what exists. So I see there's a guestbook service of type LoadBalancer, which means this is probably our content, and it's running on port 3000. So: kubectl port-forward service/guestbook 3000. Now this port-forward is running. I'm forwarding that service's port 3000 down to my local machine, also on port 3000 — that's a shorthand; if you don't list a local port, it just mirrors the same port. With that, let's go back over to the other terminal. I'm going to clear this, and I'm going to clear this as well. So now this is running, and I can look at what this looks like on port 3000. That http command is HTTPie — just a way to do curl with pretty colors. You can do brew install httpie if you want to try it out. So yeah, I see this is working. Okay, cool. But maybe I actually just want to see the server. So I can pop over to here and pull it up. Looks like this service is working — so whatever problem was being reported is not here. Okay, that seems fine. Maybe I need to dive in a little deeper. So let's go back to our port-forward and say, okay, the service wasn't where my problem was. Let's take a look at one of these pods, or look a little closer into the application. So we'll do a k get, and see what's running, what exists. And I see that the guestbook service is associated with three pods. So there are three pods running, and the problem could be on any of them. So let's check out another one. Here we go. Let's take a look at this.
If we want to do that, it's the same port-forward, but instead of the overall service, let's look directly at one of these pods. This is actually going to look very similar, because it is one of the pods in the service, so they should match. You have this nice changing color effect, but you can see it's connected to the same thing. It's presenting the same data; we're just a little further into the application now. And so, yeah — the last networking command we're going to talk about, the last kubectl command, is kubectl exec. And kubectl exec — exec as in execute — is how we execute commands. Maybe you need to execute a single command. Maybe you need to execute multiple commands. Maybe you need to bring up a shell terminal. This brings us very close to what we had with SSH. And SSH was: I want to get as close to the problem as possible, so that I can log in, look at stuff, configure things, and try to fix the problem. So I'll stop my port-forward here, and let's check this out. Let's run some commands. Okay, get pods. Let's try this first one here. So: kubectl exec, pod name, and let's pass in just an ls. Let's run ls on this. Okay, that looks good. Let's pwd, see what directory we're in. Okay, let's try to actually modify this. Let's create a peter.txt file, and run that ls again. So we can see that we can very easily run individual commands on any of these pods. This gives us a simple way to interact with a pod if we want to test something out — if I think a connection's bad, there's one way to get into the pod and check. And if we don't want to do it as a single command, what we can do is bring up our SSH-style access: we can start a shell inside of this pod. So: k exec -i — standard in.
So -i, standard in, says it's going to read the input from my terminal, and then -t, tty, is going to create a new teletype — a terminal — inside of the pod, so that you don't interrupt the terminal running on process one, which could disrupt the container. And one thing that always got me when I started — oh wait, let me change the pod name — I used to always try to run this with bash, and I used to always hit this problem. One of the things you have to remember is that you're running containers. These containers are normally stripped down, trying to be as lightweight as possible. So bash is normally a little too heavy, and it gets stripped out. But there's always going to be a shell — so changing from bash to just sh will typically get you in. And now we have what looks like a very normal Linux terminal, what we're all used to — traditional Linux troubleshooting, a very simple way to interact and be able to do stuff. With this, we can figure out exactly what's going on. Maybe we want to ping the database, or check out what our connections are. This is giving us a very simple way to interact with, and understand, what's happening at the pod level. And while this is the very traditional way of doing a lot of Kubernetes troubleshooting, maybe there's something better. Now that we're running a cloud-native application, maybe there are cloud-native tools that do a lot of this stuff. So let's actually go back here for a second. Oh yeah, so this is a recap of kubectl troubleshooting: with kubectl proxy, being able to connect directly to the API; with port-forward, being able to connect to specific things; and with kubectl exec, being able to exec specific commands.
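As a sketch, the three kubectl commands from the demo look roughly like this — the guestbook service is from the sample app, and <pod-name> is whatever pod you're poking at:

```shell
# 1. kubectl proxy: expose the whole Kubernetes API on localhost.
kubectl proxy --port 8080 &
curl http://localhost:8080/api/v1/namespaces/default/pods

# 2. kubectl port-forward: forward one service (or pod) to your laptop.
#    A single port means local and remote ports match.
kubectl port-forward service/guestbook 3000 &
curl http://localhost:3000

# 3. kubectl exec: run a command, or open a shell, inside a container.
kubectl exec <pod-name> -- ls
kubectl exec -it <pod-name> -- sh   # sh, because bash is often stripped out
```

The progression mirrors the SSH workflow: proxy and port-forward get you network access, exec gets you a shell as close to the problem as possible.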
And then, as I said: with cloud-native applications, maybe there are cloud-native tools to help us do a lot of this stuff. So let's try Telepresence. Telepresence is a CNCF open-source tool — currently in the sandbox stage, I believe. What it does is: it's a networking tool that allows you to bridge your local laptop's network with the Kubernetes cluster. So similar to how kubectl proxy and kubectl port-forward work, you're able to get closer to your cluster's network, and you're able to do a lot more things. So I'm going to exit this, and do: telepresence connect. It's going to ask me for my password here. Why does it need your password? It's starting up a local daemon on your computer, which creates a networking tunnel to the cluster. We can see this with telepresence status. Here we see the tunnel is up: we see that it's running, we see the version, we can see the remote cluster's IP address — we can just see more information about the daemon running on your laptop and what it's up to. So with that, let's check that this is actually working as expected. Let's try to hit the API and ask for something. The cool thing is, you're now connected directly to the cluster's network, which means you're able to use things like cluster DNS. So here I have the Kubernetes service in the default namespace. And you can see that this is working: it's returning a 403 because I don't have any authentication attached to this request, but you can see the API is responding and letting me know, yeah, I exist at this address, and here's the information you wanted. So then let's look for some services. Maybe I wanted to check this: ping guestbook.default. So I can see that resolution is working.
I'm able to see these things. Most of these don't respond to ping, but it's the easiest way I know to just check an address real quick. And that also lets us do things like hit guestbook.default, knowing it's running on port 3000. So now, instead of setting up a port-forward, instead of setting up some other path, this just works. And it's not just this one thing either — we're connected to everything. So if we wanted to connect to one of these pods now — get pods, let's describe this pod — we can also just dive directly into one of these pod IPs. Pod IPs are easier because pods have strange hostname rules: a pod's hostname doesn't just resolve by default; you'd have to specifically set a hostname in its spec. So the IP address ends up being easier at the pod level. And now we can see — this is much easier than going back and forth, if you're troubleshooting and constantly needing to check a bunch of things. Not having to create multiple port-forwards and hop back and forth between them is just a time saver: one tool that bridges the whole network at once. And with that, we can also pull this up — this DNS works anywhere. Writing the service name and then the namespace here gives us the same thing. And maybe we want to, once again, check that the database is working. So let's see — we see that it's there. Maybe we want to use redis-cli, to actually interact with this in some way. Redis... oh, I forgot the... namespace. So we see that that's operational. Maybe we want to actually get some information back. Oh, there's the entry that I did — that's the actual information from this database service, showing what I had inputted.
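The Telepresence portion of the demo can be sketched like this — guestbook and redis-master are service names from the sample guestbook app, so substitute whatever your cluster runs:

```shell
# Bridge your laptop's network into the cluster (starts a local daemon,
# which is why it prompts for your password).
telepresence connect
telepresence status

# Cluster DNS now resolves locally: hit the API server by service name.
# Expect a 403/401 without credentials, but the API answers.
curl -k https://kubernetes.default

# Talk to a workload directly, no port-forward needed.
curl http://guestbook.default:3000

# Interact with the database over the same tunnel.
redis-cli -h redis-master.default ping
```

The point is that one connect replaces the whole stack of per-service port-forwards, and every service.namespace name resolves from your own terminal.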
So this is the information on the screen, right? Just interacting directly with the Kubernetes services on my command line. Cool. And with that, that was my demo. Thank you all for listening. I appreciate it.