Hi, hello, welcome to Proxies, Pods, and Ports. I'm Aaron Alpar with Kasten by Veeam. Kasten makes backup products, and we're currently number one in Kubernetes backup. I've been working with Kubernetes since approximately 2016. Today I'm going to be talking to you about various ways of accessing pods. kubectl provides various means of getting access to a pod, and I'll be talking about those. This is going to be a wide-ranging presentation, and my focus is going to be on covering most all of the ways there are to do that. Probably almost all of these will be familiar; what I'm hoping is that you'll pick up little bits and pieces that you may not have known along the way. So this will be a good overview of the available methods, as well as getting into some details that may be interesting. Pods have various access points using kubectl. There's access through the API server, and I'll be talking about methods of doing that. I'll be talking about using port forwarding to access pod ports directly. And I'll be talking about the standard streams: standard in, standard out, and standard error. I'll also talk a little bit about logs, just for complete coverage. What I'll be talking about first is the API server proxy. The API server proxy allows direct access to the API server by HTTP proxy: you can use a local HTTP port to access the API endpoints in Kubernetes. This is really valuable for scripting, for debugging, and in some cases for accessing alpha or beta features that don't yet have integration with kubectl. It's similar to using the --raw option on kubectl; if you're not familiar with that, it's okay, I'll be talking about it more. This presentation is going to use plenty of command examples, and those examples are going to be executed against a deployment and a service. Here's the YAML for both the deployment and the service, and here's the deployment.
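As a sketch, the demo manifests being described look something like the following. This is a minimal reconstruction from what's said in the talk (an nginx deployment and a service on port 80, port name http); the exact names and namespace on the slides may differ, and I'm writing the namespace as demo-1 here.

```yaml
# Hypothetical reconstruction of the demo manifests; adjust names to taste.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: demo-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: demo-1
spec:
  selector:
    app: nginx
  ports:
  - name: http        # this port name comes back later in the proxy URLs
    port: 80
    targetPort: 80
```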
A very simple nginx deployment with an nginx container. The service allows network access to that nginx container on port 80; here's the service YAML for that. The first topic I'm going to talk about is the API server proxy. The API server proxy allows direct access to the API server over an HTTP connection from a local port. This is very useful for debugging, or for executing commands that are in alpha or beta and not fully supported in kubectl. Forwarded connections are automatically amended with your authentication state, so there's no need to provide additional authentication when forwarding connections. Here's how you start up an API server proxy. It's very simple: kubectl proxy. There are other options here for specifying the local port or the host to listen on; in this case, we're just going to go with the defaults. This will start up a proxy that listens on port 8001 on localhost. In the example below, I'm testing it out by getting the root of the API server, which will show me a list of paths that are available on the API server. Here I'll show you a couple of examples of kubectl commands and how you might represent those as curl commands once you're using the API server proxy. Here I'm getting all the namespaces listed as a table, and I can do the equivalent by accessing localhost:8001/api/v1/namespaces, which will list out the namespaces. Here's an example of getting a single pod, in this case the nginx deployment pod, and here's the curl equivalent I might use through the API server proxy. Now, the API server proxy not only allows you to access resources, it also allows you to access ports on pods and services, so let's talk a little bit about that. Specifically, I'll be talking about services and accessing service ports first, assuming of course that you have an API server proxy running locally.
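Sketching those commands out (the pod name suffix here is hypothetical, since pod names get a generated suffix; the namespace is the demo's):

```shell
# Start a local API server proxy; by default it listens on localhost:8001
# and reuses your kubeconfig credentials, so curl needs no auth headers.
kubectl proxy &
API=http://localhost:8001

# Root of the API server: lists the available paths.
curl "$API/"

# kubectl equivalent: kubectl get namespaces
curl "$API/api/v1/namespaces"

# kubectl equivalent: kubectl get pod -n demo-1 nginx-deployment-abc123
# (the pod name suffix is hypothetical)
curl "$API/api/v1/namespaces/demo-1/pods/nginx-deployment-abc123"
```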
Here's a curl command to go through that, and you'll see here that I'm accessing a service in namespace demo-1. The service is called nginx-service; this is the service I showed you earlier in the YAML file. On the end, it has :http. For services, this http on the end is the name of the port in the service that I want to proxy through. After that you put /proxy, and that is the API server's indication that you wish to proxy through this port. And below, once I proxy through that port, I'm actually accessing the nginx server that's behind the service. So this is a really handy way of proxying to services through the API server proxy without having to set up a separate proxy. Of course, this is only going to work for HTTP or HTTPS connections. On that proxy URL, you can add any additional URL path segments as well as query parameters. The case I showed uses the YAML from earlier, where the port has the name http, and you can see that it maps to the same name in the URL. Here's the full form for that: if you're using the API server proxy and you wish to access a specific service port, this is how you'd go about it. Again, additional URL path parameters can be added on the end, as well as query parameters. If you'd like to do the same for a pod, you can, and the form is very similar. In this case, I'm using curl against the API server proxy to access port 80 on a pod, a pod in my nginx deployment. Just like the service example, this shows that I'm reaching nginx, which is running on that pod on port 80. And if we actually go and look at that pod, we can see that the container port is exposed on port 80, and that's how I would get access to it.
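Written out in full, the two proxy forms look like this; the generic shapes are in the comments (resource names are the demo's, the pod name suffix is hypothetical):

```shell
API=http://localhost:8001

# Service form: port is referenced by NAME.
#   /api/v1/namespaces/<ns>/services/<service>:<port-name>/proxy/<path>?<query>
curl "$API/api/v1/namespaces/demo-1/services/nginx-service:http/proxy/"

# Pod form: same shape, but the port is a NUMBER.
#   /api/v1/namespaces/<ns>/pods/<pod>:<port>/proxy/<path>?<query>
curl "$API/api/v1/namespaces/demo-1/pods/nginx-deployment-abc123:80/proxy/"
```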
The form is very similar to the one for services, except that you specify a port number as opposed to a port name. So, kubectl proxy offers the ability to access resource definitions in the Kubernetes API server, and not only that, it allows you to access pod and service endpoints, which is very useful for debugging. I'll talk a little bit about that next. You can use kubectl proxy to get access to the API server and basically use it for debugging, and something that's valuable here is kubectl -v 9. Dash v is the verbose option, followed by an integer from zero to nine: the lower the value, the less debugging information; the higher the value, the more. What's special about nine is that it outputs curl commands that can then be modified and used to access resources through your proxy server. So in this example, I'm simply getting all the pods from all the namespaces, and I'm specifying verbose nine on the command line. Here's the output from that: the first few lines are all debugging output, and buried in there is a curl command. This curl command is the equivalent, if kubectl were using curl, of the URL, or the endpoint, that it accessed in the API server. It can be modified and then executed against the API server; once again, you would set up an API server proxy in order to get access to that. So it's very useful to use kubectl -v 9 to see which URLs kubectl is actually accessing in the API server, and then you can do some debugging using the proxy. What this offers you is non-truncated debugging output for kubectl, which is very valuable. It's also useful for getting the curl commands that are the API server equivalents of kubectl commands.
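A sketch of fishing the curl command out of the verbose output; the exact log format varies by kubectl version, so the expected line shown in the comment is only approximate:

```shell
# -v takes 0..9; at 9 kubectl logs each request as a ready-made curl command.
kubectl get pods --all-namespaces -v=9 2>&1 | grep 'curl -'
# Expect something shaped roughly like (server address and headers will differ):
#   curl -v -XGET -H "Accept: application/json;as=Table;..." \
#     'https://<your-api-server>/api/v1/pods?limit=500'
```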
And if you like, you can use values less than nine to output less: eight truncates the output, seven just provides headers, and so on. So that's it for the API server proxy. What I want to talk to you about next is something similar, which is the --raw option on kubectl. Get, create, and delete all offer a --raw option. This allows you to submit API server URLs using kubectl, and it provides JSON output. So I'll show you. Here's an example of using --raw: I'm using kubectl get, but instead of telling it what resource I'm looking for, I give it the --raw option, and then, this should look familiar, /api/v1/namespaces. This is going to list all the namespaces in my API server. If you're not familiar with jq, I recommend you get familiar with it; here I'm piping the output through jq, which is just going to format the JSON nicely for me. And below you'll see that I'm getting a list of namespaces out. Of course, this can be used for basically any endpoint that you could normally use with the API server proxy. Here I'm getting the service nginx-service and piping it through jq again, and there's the JSON for it being output. So those are examples of get. Now I'm going to show you an example of a delete and a create, and to do that, I'm going to be using this service.json file. All this is is the service I had earlier, in JSON instead of YAML. So here we go: I can go ahead and kubectl delete, use the --raw option, and specify the API server URL for the resource that I'd like to delete. I have to specify a full name here, and in this case I'm going to delete the nginx service. I get a JSON response that says it worked. Then I go ahead and recreate it using kubectl create --raw: specify the resource type that I'd like to create, give it the file with the JSON, and it'll create it.
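The --raw commands from this section, sketched out; service.json is assumed to hold the demo service in JSON form, and the namespace is written as demo-1:

```shell
# GET: list namespaces, pretty-printed with jq.
kubectl get --raw /api/v1/namespaces | jq .

# GET with query parameters; quote the URL so the shell leaves '?' alone.
kubectl get --raw '/api/v1/namespaces?limit=2' | jq '.items | length'

# GET a single resource.
kubectl get --raw /api/v1/namespaces/demo-1/services/nginx-service | jq .

# DELETE: the full resource path is required.
kubectl delete --raw /api/v1/namespaces/demo-1/services/nginx-service

# POST: create it again from the JSON file.
kubectl create --raw /api/v1/namespaces/demo-1/services -f service.json
```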
And if it successfully creates it, it'll output the service back to me. You can pass in query options when using --raw; here's an example of using a limit to cap the number of results that come back. So the --raw option on kubectl gives you access to the get, post, and delete methods on the API server from kubectl, without the need for setting up an API server proxy, which is rather handy. You can use URL query arguments to do things like limit the number of records that come back, set continuation tokens, and so on. It's a very good tool to have in your toolbox. That pretty much covers the ways to access the API server using kubectl, either through the API server proxy or by using --raw. Those give you access to basically any endpoint on the API server for getting resource metadata. There's another way of accessing your ports and services, and of course that's port forwarding. I'm sure most of you have already used this; I'm going to give you a very short overview of port forwarding, basically for completeness, to show you that it's here and to discuss a couple of its limitations. Port forwarding forwards TCP ports to pods and services, containers specifically. It can only do TCP; it can't do UDP. It uses HTTP/2 streams to do this, which can create some difficulty if you have a reverse proxy between you and the server you're trying to access. And it's really handy if you want to browse content on one of your pods or containers using a web browser. You can use it to pass through any TCP protocol, not just the HTTP and HTTPS you would get with the API server proxy's HTTP and HTTPS endpoints; any TCP protocol will work. It's very straightforward to set up, and kubectl port-forward is very similar in operation to kubectl proxy. Here it is.
All you do is specify the pod or service that you wish to access on the other end (this is an example with a pod), then a colon and the target port you wish to access, in this case port 80. Note that I'm not specifying a number on the left-hand side of the colon; I'm only specifying a number on the right-hand side, which is port 80. What this will do is assign a random local port to forward to that port 80, and in this case the random local port it selected was 54688. So it's port forwarding that randomly selected port to port 80. I could specify a port on the left-hand side, in which case it would forward that local port to port 80. I'm not going to discuss all the details and options; again, this is provided mostly for completeness. So here I go ahead and run a curl command against that local port using HTTP, and I'm accessing my nginx server on the other end. That, once again, is a very brief overview of kubectl port-forward. The next thing I'm going to cover briefly is logging. Logs in Kubernetes are simply the output to standard out or standard error from your pods and containers. This is routed into the logging subsystem, which is then stored on disk on the node: standard out and standard error are merged together into one stream, and the stream is rotated regularly, typically up to five rotations, either by time or by size. kubectl logs only accesses the last log in the rotation, so that's something important to know. Logs can also be retrieved for the previous container instance, which is handy: if the previous container happened to fail due to an error, you can access its logs to perform debugging. Here are some examples of retrieving logs. The first one is simply getting the logs out for my nginx deployment; in this case it's getting all containers within the nginx deployment, which is handy.
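Sketching the port-forward and log commands from this section (the pod name suffix is hypothetical, and the namespace is written as demo-1):

```shell
# No local port before the colon, so kubectl picks a random one:
kubectl port-forward -n demo-1 pod/nginx-deployment-abc123 :80
#   Forwarding from 127.0.0.1:54688 -> 80

# Or pin the local port yourself, and go via the service instead:
kubectl port-forward -n demo-1 service/nginx-service 8080:80 &
curl http://localhost:8080/

# Logs from every container in the deployment:
kubectl logs -n demo-1 deployment/nginx-deployment --all-containers

# Logs from the previous container instance, e.g. after a crash:
kubectl logs -n demo-1 nginx-deployment-abc123 --previous

# Label matching, with a prefix naming the source pod/container:
kubectl logs -n demo-1 -l app=nginx --all-containers --prefix
```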
Keep in mind that you can use label matching for logging. So in this next example, I'm getting logs for all containers with the label app=nginx. I'm adding a prefix so I know exactly which container each line is coming from, and I get the output. That's handy if you're looking at a group of containers associated with a particular application, for instance. Next I'm going to talk to you about kubectl attach. Like logs, kubectl attach lets you deal with the output from standard out and standard error; unlike logs, it allows you to deal with standard out and standard error separately, and it also gives you access to standard in. So you can use kubectl attach to get access to all three of the standard streams, for terminal access or for separate access to standard out and standard error. You even have the option to allocate a TTY, which allows full interactive access to a shell, for example, running in your pod. Here's a very simple example of kubectl attach: I'm running kubectl attach to redirect standard out and standard error from my nginx deployment pod in namespace demo-1. Now I'm going to take a slight digression and talk about kubectl run, because it's closely related to kubectl attach. In this example, I'm using kubectl run to run a pod; kubectl run is basically a quick way of getting a pod up and running for a specific container, and in this case it's just running busybox. Here I specify the options -i and -t: -i attaches standard in, and -t allocates a TTY. And I can see the effects of specifying -t below: if I do the kubectl run, it's going to drop me into the container, give me a command prompt, and I can go ahead and type tty. Here you can see that I have a TTY allocated on pts/0.
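The run and attach variations walked through in this part of the talk can be sketched as follows; bb is a hypothetical pod name, and the redirection at the end only splits the streams when the pod was started without a TTY:

```shell
# -i attaches stdin, -t allocates a TTY: a fully interactive shell.
kubectl run -n demo-1 -it bb --image=busybox --restart=Never
#   / # tty
#   /dev/pts/0

# -i without -t: stdin is attached but there is no TTY, so `tty` reports
# "not a tty" and stdout/stderr stay as separate pipes.
kubectl run -n demo-1 -i bb --image=busybox --restart=Never

# Run without attaching, then attach as a separate step:
kubectl run -n demo-1 -i bb --image=busybox --restart=Never --attach=false
kubectl attach -n demo-1 -i bb

# With no TTY, attach can split the streams into separate files:
kubectl attach -n demo-1 -i bb > stdout.txt 2> stderr.txt
```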
So if I take a look at the file descriptors for my first process, you can see that all of them are routed to the TTY that I've allocated. This is a slightly different version of kubectl run: you can see here I've left off the -t option, but I've left -i on. So I go ahead and run busybox, it attaches once again, and I can hit enter a few times and then execute commands. Here I run tty, just as I did in the previous example, and you can see: not a tty, since I haven't allocated a TTY. If I look at the file descriptors for the first process, you can see that these file descriptors are all pipes; these pipes presumably lead back over the network to my local client, which is what gives us access. Now, you can actually think of kubectl run as executing two commands: the run command itself and a separate attach. So here I run kubectl run with the same options, but at the very end I say --attach=false. This tells kubectl run not to attach after it has started the pod; it will just run the pod in the background and wait for an attach, which I'm doing here. So I can go ahead and attach to it: again, I specify my pod, and as before I can run commands in that pod. Next, I'm going to show you a couple of examples of the effects that the options on kubectl run have on kubectl attach. Here I'm running kubectl run -it, so I'm attaching standard in and allocating a TTY. I'm going to run that pod in the background with --attach=false, and then later on I'll attach to it, but I'm going to redirect standard out and standard error to separate files. I run that, I hit return a couple of times, and since standard in is attached, I can type commands. So in this first example, I'm going to echo the string "standard out" to standard out.
Then I'm going to echo "standard error" out to file descriptor two, which is standard error, and log out by hitting Ctrl-D. I can then inspect the stdout and stderr files to see what was actually put in there. And you can see that the stdout file got both the output from standard out and the output from standard error. If I cat the stderr file, you can see that it only contains the message "If you don't see a command prompt, try pressing enter." So here, by specifying the -t option, the pod has folded standard out and standard error into the same stream, the standard out stream. If I leave off the -t option when I run the pod, and then do the same redirections and run the same commands, you can see that the output in the files is different. This actually separates standard out and standard error, so I can treat each stream separately. Very handy for debugging. kubectl attach forwards your standard streams to your local client, which is very handy, and if you'd like to be able to split out standard error and standard out, you should not allocate a terminal in the pod, either in the TTY options within the pod spec or when you use something like kubectl run or kubectl exec. So that's it: proxies, pods, and ports. A very cursory overview of what I've covered today: kubectl proxy gives you direct access to your API server, which can be very handy. It not only gives you direct access to the API server, it also allows you to access HTTP and HTTPS endpoints on your pods and services. kubectl -v 9 can be used in combination with kubectl proxy and ordinary kubectl commands for debugging: it outputs curl commands into the log, so you can copy and paste those curl commands for testing.
You can also use kubectl with --raw, which gives you a very similar effect to using kubectl proxy without having to start a proxy. This gives you access to the HTTP get, delete, and post methods, which is nice. As before, it also allows you to get access to HTTP and HTTPS endpoints on your pods and services. kubectl attach allows you to forward the standard streams to your client; if you wish to split out standard error and standard out, remember not to allocate a TTY on your pod or container. And last but not least, there's port forwarding, for TCP traffic, carried over HTTP/2, to a pod or service. The HTTP/2 can sometimes get in the way when there are proxies in between, so be mindful of that, and it will not forward UDP traffic. So that's it. I hope you enjoyed the presentation. Here are a couple of references: one covers proxies, and the last one is quite good; it's all about cluster access, a similar topic to this presentation. So thank you. I hope it was helpful.