So, welcome to our Network Foundations workshop. One of the reasons we built this workshop was because we felt that there was a massive gap between zero and understanding Kubernetes networking. So we wanted to fill those gaps for y'all and really help you understand all those different pieces and layers that add up to allow you to do Kubernetes networking. But I had personal motivations too, because when I joined Solo, the company I currently work for, I actually struggled with learning Istio, the service mesh. But as I started to dig a little bit deeper and relate these concepts back to some of the stuff I used to do back in my network engineering days, I started to map out how this would all flow. And so my partner in crime here, Jason, allowed me to work with him to build this brilliant workshop. So I really hope you all enjoy it.

I'll go ahead and introduce myself. My name is Marina Widget. I am a developer slash platform advocate at Solo. So I work with the community, go out to conferences, and deliver a lot of talks and workshops, while also just engaging with the community to understand what their needs are. And through understanding and asking what they needed, I found out it was networking: give us the foundations so we can better understand how to use a container networking interface, how to do service mesh, and everything in between.

I'll introduce myself. My name is Jason Speck. I also work at Solo. I'm a field engineer based out of Buffalo, New York. And the one thing I'll say right now is that we all have different journeys into Kubernetes, or to how we got to where we are today. So hopefully this Networking Foundations workshop will establish a good foundation so that we can all communicate freely about networking concepts.

Fantastic. All right. So go ahead and scan that QR code. If you plan on participating in this workshop, you have a sandbox environment waiting for you.
So head over there, or you can even use the tiny URL to get to that Solo Academy link. Once you're there, feel free to launch the environment. We're going to launch it. Actually, I should have launched it right now; I don't know what I was doing. I'll go ahead and kick mine off. Mine looks a little different because I'm just using the backend version of the one you're using, which is the same thing, so no big deal. And so, yeah, I'll give you a couple of minutes to just grab that and go ahead and start the workshop too, because it does take about a minute and a half or two minutes to get up and running. But I don't want you to proceed until we begin, because there's a lot of information there. By the way, we're going to be going really quickly, because we have 90 minutes and we want to make sure we cover the critical components of this workshop.

I also want to provide some context and help you understand some of the key areas we're going to focus on. So if you look at this diagram, we have the OSI model, or if you're familiar with the TCP/IP model, it's very similar. The OSI model is a structure that allows us to understand how data moves around a network. And we do this in layers, so that we can understand what's going on at the physical layer, how we identify who's talking to what, how we get across to remote networks, how we resolve the actual names of the different endpoints that are communicating, as well as some higher-level information that we gain at, let's say, the top layer of that stack. Now, if you look at the technologies on the very far right there, you'll notice that Kubernetes maps to those lower layers, because it provides the compute infrastructure and it provides basic networking. But you start to realize that you need more than just what Kubernetes can offer you.
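Before we get to the Kubernetes-specific pieces, the core idea of the OSI model, each layer wrapping the payload of the layer above it, can be sketched in a few lines of code. This is purely a toy illustration: the addresses, ports, and payload are all made up, and real network stacks obviously don't pass dictionaries around.

```python
# Toy sketch of OSI-style encapsulation: each layer wraps the payload
# from the layer above with its own header, forming that layer's PDU.
# All names, addresses, and ports here are illustrative, not a real stack.

def encapsulate(http_payload: str) -> dict:
    segment = {"src_port": 54321, "dst_port": 80, "data": http_payload}            # Layer 4: segment
    packet = {"src_ip": "192.168.1.10", "dst_ip": "10.0.0.5", "data": segment}     # Layer 3: packet
    frame = {"src_mac": "aa:bb:cc:00:11:22", "dst_mac": "aa:bb:cc:33:44:55",
             "data": packet}                                                       # Layer 2: frame
    return frame

frame = encapsulate("GET /get HTTP/1.1\r\nHost: httpbin.org\r\n\r\n")
# To read the application data back, peel the layers in reverse:
print(frame["data"]["data"]["data"].splitlines()[0])   # GET /get HTTP/1.1
```

Peeling the layers back off in reverse order is essentially what tools like tcpdump do when they decode captured traffic for us.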
And so you start to see that there are other components, like the container networking interface, and I've listed Cilium because it's one of the more prevalent CNIs that you'll see out there when you're working in various clusters. To me, a CNI provides a good amount of logic, but also works to move packets very quickly. What is a packet? We'll get to that very shortly. And then finally, you'll see another open source technology at the very top, Istio. Istio is a service mesh that provides connectivity, observability, and security, but at a higher level, or a higher layer. And this is because we can attach certain attributes to our workloads and how they communicate, and we can make policy decisions based on those attributes. That's something we cannot do at the lower layers of that OSI stack. I will come back to this diagram when I'm covering network namespaces, so just take a mental snapshot of it and we'll review it later on.

So in this workshop, we've got seven modules. We'll be moving very quickly; we'll probably spend about 10 to 12 minutes per module. If you have questions, we won't take them. Sorry, I'm kidding. You're welcome to ask questions, and we'll try to answer them as quickly as we can so we can proceed forward. So the team is going to start off by covering the intro to networking and the basics of networking, as well as the different models that we use. Then I'll start talking about routing and how traffic moves around the network, gets to remote networks, and everything in between. I'll also go on to cover why we need DNS and why it's important, because you obviously don't want to sit there and remember everyone's phone numbers or IP addresses. And then I'll also talk about the HTTP layer, because it's important, and it transitions us to the area where we start to interact with applications at that layer, because most of you have probably interacted with APIs. Well, guess what?
We provide services on top of those APIs. And then I'll pass it back to Jason. Jason's going to cover some of those core components that exist in our network: the ones that process traffic, that allow us to implement policy, that enforce those policies, and that allow us to get around the network pretty easily. Then I'll come back and talk about container networking, because that lays the groundwork for how we achieve container networking in Kubernetes. And finally, Jason will wrap up with Kubernetes networking and show you all the different components working together with Kubernetes objects and resources. So without further ado, let's go ahead and get started. Jason, you're up.

All right. So by now, hopefully you've found your way to the sandbox environment. The environment is hosting several VMs, and even some Kubernetes clusters, that you'll be making use of throughout the sessions. And just to get everybody a little bit more familiar, in case you've never used this environment: on the left, you'll see the terminal. You'll have access to run commands. It's going to connect to a specific VM, and that will interface with other VMs. In several subsequent modules, there will be other tabs that allow us to access different things on different networks. And then on the right, you'll be able to follow the prompts, which will give you some information, and hopefully you'll get some value out of the images that are provided there.

So like Marina said, I'm going to take us through the atomic elements of networking, and we'll slowly build our way up to Kubernetes. And then hopefully, throughout the conference, you will build beyond that in various specific domains. And so the reason why we're all here is that networking can be complicated, especially distributed networking across regions, across continents, in home labs, in public clouds, et cetera, et cetera.
But the reason it's so complicated is because there's a lot of sophistication built into it that provides a lot of functionality. To start off, we have two different models. Marina mentioned and showed the OSI model, and to the right, you'll also see the comparative TCP/IP model. They're color coded, so let me walk through the colors for a bit. The green layer is the physical layer; this is when you're basically plugging things in and disconnecting things. The blue layer is the layer that's going to connect things beyond just the physical plugs. The red layer is the transport layer. And the yellow layer is the actual data that you're sending throughout the network. You'll see that described here down below. In each of those layers, there is going to be a corresponding protocol data unit. Sometimes you'll hear me say protocol data unit, sometimes you'll see PDU; they're interchangeable.

The next example is going to generate some traffic. We've all probably used curl at this point in time. This is going to send some traffic to a service called httpbin.org on the /get path. And then we'll be able to introspect that traffic using the PDUs that we just saw for the different layers. And so, no surprise here: if we run that curl command, eventually, assuming that the network agrees with us, we'll get the result we expect. We'll get the response headers and some basic information. But there's a lot more information being sent on the wire that we're not made aware of at this level. And so, for time constraints, I am going to skip ahead to the full view of the frame. In that frame element, we'll see, color coded once again, all of the data that is transferred. So we already got a lot of the data from the top layer through curl: you can see that we made a GET request, we're using HTTP/1.1, and we're going to a specific host.
Going down a layer, we can see that we're picking out the ports, the source port, the destination port, the sequence number, and the checksum to do some validation of the network traffic. Going down another layer, we see the source and destination IPs, so that we actually know between whom this conversation is happening. And then the last layer is the MAC address. The MAC address, as we'll explain in a later session, is going to be more apparent at the physical level.

We're all here to do more code. We brought up these VM environments to do some things. And so we'll spend a little bit of time here using a tool that I'm pretty fond of called tcpdump. tcpdump is far more feature rich than the use we're going to make of it today, but it is going to provide us some good examples of the information that we're transferring on the network across all of the layers. I have here on the screen some of the flags that you can pass into it. If you are familiar with tcpdump, you can just ignore those. The important thing here is that we're going to mirror what we have just described. So if we see here, I'm going to run a loop, and that loop is going to curl httpbin repeatedly. That's just so there's traffic to capture while we work in the same shell session. The first command we're going to run is a tcpdump. We'll wait for the next iteration of that loop to pick up, and then we should eventually see some output on the screen. If we don't, I'll just cancel things and restart them. But the curl loop, I do believe... oh, I just scrolled a little bit too far. Let's start that loop again, and tcpdump again. And hopefully, eventually... yeah, there we go. The network agrees with us. It's a little small on the screen; I could make it bigger, but if you are on your personal screens, you can probably zoom in. We get some of the basic HTTP information. But there's a lot of information up above.
That's not very human readable, so we'll parse it down below once we run one more request. We're going to do another request, and we're going to filter for some of the other information; in this case, we're adding an src filter. We can see a lot of the same information from the first request in the first few lines, but then we're seeing more advanced Layer 7 information. We're seeing the actual content from the HTTP request, the 504 Gateway Timeout, and the several headers: the server, the date, the content type, the content length. So if I make this bigger, what we're supposed to see, assuming the network agrees with us, is content from the frame. We'll see MAC addresses. The MAC addresses on my screen most likely won't align with the MAC addresses on your screen, and the same goes for the other information available here. Then the packet gives us the source IP and the destination IP, and the time to live, 64. We're going to see the segment: that's got the source and destination ports and the sequence number. And then lastly, we have the data. The data is the Layer 7 information, the stuff that we've seen several times already.

Like I said, the tcpdump command is going to be useful in a lot of other domains, and you might even use it later today. But for now, we're just going to move ahead, having covered the PDUs, the information that's being transferred on the network, and the layers those correspond with. This pkill is just going to clean up the running commands so that the VM is not overloaded for the next sessions. If, on your own time, you do want to get a more comprehensive view of networking tools, you can go through this other commonly used tools category. We do the same thing there that we did with tcpdump: explain what content is available within them and why they would be used.
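As an aside, the layer-by-layer decode that tcpdump just did for us can be reproduced by hand. The sketch below packs a minimal Ethernet + IPv4 + TCP header with Python's struct module and then parses the fields back out. Every address and port in it is made up for illustration, and real headers carry more fields (TCP options, valid checksums) than this toy does.

```python
import struct

# Toy reconstruction of what tcpdump decodes for us: pack a minimal
# Ethernet + IPv4 + TCP header by hand, then parse the fields back out.
# All addresses and ports here are invented for illustration.

eth = struct.pack("!6s6sH",
                  bytes.fromhex("aabbcc334455"),   # destination MAC
                  bytes.fromhex("aabbcc001122"),   # source MAC
                  0x0800)                          # EtherType: IPv4

ip = struct.pack("!BBHHHBBH4s4s",
                 0x45, 0, 40, 0, 0,                # version/IHL, TOS, total length, id, flags/frag
                 64, 6, 0,                         # TTL=64, protocol=6 (TCP), checksum (zeroed here)
                 bytes([192, 168, 1, 10]),         # source IP
                 bytes([10, 0, 0, 5]))             # destination IP

tcp = struct.pack("!HH", 54321, 80)                # source port, destination port

frame = eth + ip + tcp

# Parse it back, layer by layer, like tcpdump prints it:
dst_mac, src_mac, ethertype = struct.unpack("!6s6sH", frame[:14])
ttl, proto = frame[14 + 8], frame[14 + 9]
src_ip = ".".join(str(b) for b in frame[26:30])
dst_ip = ".".join(str(b) for b in frame[30:34])
src_port, dst_port = struct.unpack("!HH", frame[34:38])

print(src_ip, dst_ip, src_port, dst_port, ttl)     # 192.168.1.10 10.0.0.5 54321 80 64
```

The fixed byte offsets (14 for the start of the IP header, 34 for TCP) are why tcpdump can label each field: the layers sit at well-known positions in the frame.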
The commands we go over there are the ip command, the dhclient command, the route command, and several others. You'll see netcat, you'll see netstat, ping, et cetera, et cetera. And the important thing here is that some of those commands are more or less read-only, while others are actually able to make changes to your network. We'll use some of them later on. Assuming that you have a cursory understanding, or that you trust the content we'll give you throughout the rest of the course, we can skip through that. And that'll bring us to Marina's next chapter, which is awesome.

Fantastic job there, Jason. All right, so Jason gave us a great overview of the different tools, and of the OSI stack and the TCP/IP model in terms of how traffic flows and gets encapsulated as it moves through those layers. Now, how many of you have actually ever run into a situation where you had duplicate IPs? Come on, don't lie to me. I know a lot of you have, but you just don't want to admit it. That's OK. Now, what's very fascinating is that every single endpoint that you've ever interacted with, whether it be a phone, a laptop, maybe a server at home, maybe a little NUC device, whatever it is, it has something called a MAC address. How many of you are familiar with a MAC address? Just so I can gauge the room. Awesome. All right, so I don't really need to explain it. But it's a burned-in address that lets us easily identify, in a local area network, who owns that device: what that device is, who the manufacturer is, and what it's trying to do on the network. Except we never really use MAC addresses to communicate with each other. Because when you think about it, your name is like your MAC address; but in order to contact you, I need a phone number or some other form of communication, like an email. And the thing is, if we're talking about phone numbers for a second, we're never going to be able to remember everyone's phone number.
That's why we have phone books, or a contact list on our phone. And so I'll talk about DNS in the next module, but you'll start to see how a MAC address gets translated into an IP address, which gets translated into a DNS host name. Now, ooh, I blew this up way too big; I didn't realize that. Now, what we're going to do is understand a little bit about how devices can communicate across the network. So I talked a little bit about Layer 2 MAC addresses. Every device has one. Even VMs have one. Interestingly enough, containers do too. But we never really interact with those MAC addresses, and the reason behind that is that we only really care about assigning labels to our pods, knowing the names of those pods, and working from that going forward. And then there are some other artifacts within the realm of Kubernetes that allow us to communicate with those pods, through something called services, which we'll cover at the very end. But it all begins with a device having an IP address and being able to communicate with another device on the network.

So I have a nice little diagram here, which I'll show you right here. OK, let's bring this up. If we look at this diagram for a second, we notice that we have two applications on the left and two applications on the right. Now, each of these applications has an IP address. It could be anything, except for the address you see next to that little circle with a bunch of arrows going into it, and I'll explain that in a second. Now, let's say app one needs to talk to app two. Because they are on the same local area network, there is no intermediary device that needs to be involved to transport that payload anywhere else. So the local fabric that these two devices, or these two applications, or let's say even virtual machines, are connected to would be some sort of switch, some sort of hub, some sort of device, or a bridge for that matter.
And all this bridge or switch is doing is receiving signals from these endpoints, well, in the case of virtual machines, not really electrical signals, but some form of communication, and then carrying them over to the destination. Except sometimes we have to go to a remote destination. Now, we have two networks in this diagram: one on the left and one on the right. If app one wants to talk to app three, we have to involve something called a router. Now, this router is extremely important because it's able to transport us across to various remote networks. Because, again, we have all of these IP addresses, and they're tied to, or bound to, one single network. If you start to think about it, all of you at home probably have a single subnet; it's probably all 192.168.x.x. And that's fine. But some of you might be running home labs, where you have a separate subnet, and sometimes you might need to bridge those two. You might be using a router to achieve that, or you might be using some sort of proxy device to get there. But in the case of this environment here, the network on the left doesn't know about the network on the right. The network on the left knows about that router in the middle, and that router is going to be the one responsible for saying, hey, I know how to get you to this destination. So in certain situations, you might statically define this: you might put in static routes, which I'll discuss in a second. These static routes effectively tell us where a remote network is, how to get to that remote network, and which interface to use to get there. So if either of these two apps here were trying to get to that remote network on the very far right, it would say, hey, 192.168.52.1, can you tell me how to get to the 10.13.37.x network? That router will say, yes, I can. Let's go over to the other side.
And then there'll be bi-directional communication, so that app three or app four can provide the response back through that router. Now, I mentioned that everything needs an IP address. How do we get IP addresses? There are a couple of ways. You could assign one yourself statically, or you can use IP address management. When we're talking at scale, we definitely don't want to be sitting there manually assigning IPs to every endpoint. We actually want to use something like DHCP, the Dynamic Host Configuration Protocol. What that does is, any time a device is connected to the network, that device is going to solicit the server and say, hey, I need an IP address, can you give me one? Now, this DHCP server is going to determine whether it has enough allocatable IP addresses to hand out. And if it does, it gives that endpoint an IP address, something called a subnet mask, and a default gateway. The default gateway is something I previously described: it happens to be that router's interface on that network. And then there are a couple of other things that are assigned, like maybe a DNS record or name, as well as a DNS server, because we obviously need to be able to communicate with destinations that we don't know the IPs for.

Now, once all devices have their IP addresses and they're able to freely communicate with each other, we also start to think: okay, how do we manage this at scale? How do we find ways to get to these remote networks? Obviously you're not going to be sitting there deploying what we call static routes anymore; you would want to use a dynamic routing protocol. How many of you are familiar with BGP, or Border Gateway Protocol? Perfect, okay, so I don't need to dive too deep into it, but it's the protocol of the internet and the protocol of the WAN. Basically, BGP allows us to take multiple remote networks and advertise them across the globe to various other networks as well.
We have something called neighbor relationships that need to be formed, so that different routers running this BGP process can communicate and exchange those routes. So when you do this at scale, you start to realize, okay, it's manageable, because these different neighbors are able to form their relationships, they're able to trade information, and they can work around situations where a neighbor might be unavailable. That's the power of a dynamic routing protocol. We're not going to sit here and dive too, too deep into it, but I bring it up because it's likely that you will be using it in your cloud environments, your data center environments, your edge environments, and anywhere else.

So what we're going to do is actually go ahead and configure routing. If you skip to this little section here that says let's configure routing, pop that open, and we're going to go ahead and install some net tools. These net tools will allow us to configure our network namespaces, which we'll cover in a later module, as well as several other IP-related tools, so that we can configure static routing. We can even configure routing within a namespace, and we can even forward traffic. Now we're going to run another command here: sysctl -w net.ipv4.ip_forward=1. All that's saying is, if I'm going to turn on routing capabilities, I need to make sure that I can actually route. So it's just turning on the flag for routing at this point, so that I can effectively route between two different networks. Now, when I say networks, I may also use the word subnets, because in the world of IP networking, every network has a subnet address, and every host in that network has an IP address within that subnet. Now, I will say something quickly about IP addressing and subnetting. I'm not going to explain it all to you, because there's not enough time. I'm not going to go into the details; I'm not going to subnet with you right now.
But when you start to build these networks, you're going to realize that everything can't get a /24, right? Because guess what? You'll have IP address wastage. When you use a /24 subnet mask, which is 255.255.255.0, what you'll find is that you have 256 addresses in that subnet, but only 254 usable addresses, because the network and broadcast addresses are reserved. One of those usable addresses then needs to go to a gateway so that you can get to remote networks, and the others are assignable. But let's just say I handed every one of you an IPv4 /24 subnet. You would probably only use a fraction, maybe 10%, of the addresses. So I have wastage. So what we tend to do in the world of networking is subnet further: we take those 256 addresses and break them down to maybe 128 or 64 or 32, shrinking the actual network size so that we have less unnecessary cross traffic, less IP address wastage, and a lot more efficiency in how we build our networks.

So I'm going to turn on routing and forwarding, and the two commands netstat -nr and route provide the same output. What you're looking at right now is a routing table. A routing table basically has a list of different destinations, which say: I know how to get to this remote network, through either this interface or this IP address. As long as I can reach that IP address, it's going to take me where I need to go. In a lot of hosts, like this Linux operating system you're looking at right now, you won't have many entries in the routing table. But if you looked at a real router, you might find many more entries, probably in the 50s to 100s of different kinds of routes.

So let's go ahead and configure some more items here. We're going to take a look at the ip address command, and it's going to tell us what our interface IP address is. We see that it is 10.5.1.132/32. Ignore the /32 for now.
This is actually a host routing mechanism that environments tend to employ. Quite honestly, it doesn't really matter that I explain it right now, but you will find it in a lot of different environments, because it isolates that network to that singular host, which makes routing interesting. You might find this in VPN setups at times, especially if you have remote access users using tunneling to get back to their office, which is less and less of an occurrence these days. Anyways, what we're going to do is take that IP address, and, as you see there, I'm creating an environment variable that just captures the IP address of that interface. And we can see that it's reflected here.

We're going to quickly create a static route. For this static route, I call on the command ip route add. So I'm adding a route to the table, and I'm specifying the destination network of 10.13.37.0/24, via, or through, the IP address I just assigned to that environment variable. So now you can see that that new network is in the table. This reference here that you see under Gateway is a reference to the interface that it's exiting on. But we're not going to do anything with this, because it goes nowhere; it doesn't even lead to that network. So we're going to go ahead and delete it.

We're now going to proceed to build two different networks and route between them. If you copy this command block here, it's pretty long, and I'm going to clear my screen so you have some full real estate. Let's walk through this quickly. I'm creating two network namespaces: one is called sleep and one is called webapp. I'm effectively simulating a virtual network with each namespace. And within each virtual network, I can assign a subnet address. Through those subnet addresses, as long as I have the right routing in place, they can communicate with each other.
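Under the hood, when a routing table holds several candidate routes for a destination, the most specific one, meaning the longest prefix, wins. Here's a toy sketch of that lookup using Python's ipaddress module. The routes and next hops are invented, loosely reusing the addresses from this demo.

```python
import ipaddress

# Toy longest-prefix-match lookup, mimicking how a routing table picks a
# route. The routes and next hops below are made up for illustration.
routes = {
    ipaddress.ip_network("10.13.37.0/24"): "192.168.52.1",    # static route via the router
    ipaddress.ip_network("10.13.37.0/25"): "10.5.1.132",      # more specific route
    ipaddress.ip_network("0.0.0.0/0"):     "192.168.52.254",  # default gateway
}

def next_hop(dest: str) -> str:
    addr = ipaddress.ip_address(dest)
    # Among all matching routes, the most specific (longest prefix) wins.
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("10.13.37.5"))     # 10.5.1.132 (the /25 beats the /24)
print(next_hop("10.13.37.200"))   # 192.168.52.1 (only the /24 matches)
print(next_hop("8.8.8.8"))        # 192.168.52.254 (falls through to the default route)
```

A real kernel does this with far more efficient data structures (a trie rather than a linear scan), but the selection rule is the same.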
But what we're going to do initially is create the namespaces, assign them interfaces, assign those interfaces a network, bring the interfaces online, and then attempt to communicate. And then we'll see what's actually missing. So if I run ip netns, I should see that my two namespaces are there, which means things are looking good. You don't have to follow along with these commands if you don't want to; I'm just doing this for verification. Let me check the links... ip netns ip link... oh, that doesn't work; I forgot the exec subcommand, that's probably why. ip netns exec sleep ip link. Okay, forget about that command. Let's go run this ping command instead. What this ping command is going to tell me is whether there's actual connectivity here. ip netns exec... by the way, have you ever used kubectl exec before? Oh, okay, all right. This is going to come up later on.

Okay, so the network is unreachable. Why is the network unreachable? If you look up here at these address spaces, I have a 10.13.37.0/25, and then I have a 10.13.37.128/25. This implies I broke down my /24 subnet, which was 10.13.37.0, into two subnets. So it was a /24, and now each half is a /25. Every time you divide by two, you're shrinking that network space. That's what I did here, just for the sake of this example. Now, if I try to ping from the webapp to the sleep interface here, it's obviously not going to go through. What we need are static routes. Those static routes, the moment I add them, effectively say, in each respective namespace: if you need to get to this other namespace and this other network, here's the route you will take. Now, if you notice here, I'm actually specifying the device interface.
I deliberately do that because, let's assume I don't know the IP address: I know the interface that traffic is going to exit on, and I know it's connected to the destination network, so this should work just fine. So, I've gone ahead and plugged in those routes; let's go verify that those routes exist. And if you see here, this is the first namespace, the sleep namespace, and you can see that the destination network is present and that we're going out through the namespace's interface. Now, what's interesting here is that it doesn't have a true default gateway, because we're not going outside to another router; we're using our local interface to route to this other network. Now, if I try this ping, what do you think's going to happen? I mean, yeah, you see that it works, right? And it should work, because we do have routing between the two. We have the routes that are necessary to communicate with these destination networks.

Now, before I proceed, one thing I need to point out: this, by the way, is the basis for container networking. But if you try to do this at scale, you will fall over, primarily because, when you think about how many pods you run in your environment, you can't do this by hand for every single pod in your Kubernetes cluster. So you use tools like, let's say, the Cilium CNI to provision IP addresses for pods as they come online, and to maintain a database of who owns each IP address, which also serves other security needs and uses. It provides IP address management too: every time a pod disappears, that IP goes back into the pool and can eventually be reused, though not immediately, for obvious security reasons. All right, so it works. We know that pinging is working between two different networks. Let's proceed to DNS. I'm not going to cover BGP in depth; I kind of talked about it already. Let's go to DNS.
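As a quick sanity check before the next module, the subnet split we just used, one /24 carved into two /25s, can be verified with Python's ipaddress module. The module is standard library; the numbers below simply mirror the demo.

```python
import ipaddress

# Checking the subnet split used in the namespace demo: one /24 carved
# into two /25s, and how many host addresses each half actually gets.
parent = ipaddress.ip_network("10.13.37.0/24")
halves = list(parent.subnets(prefixlen_diff=1))

print([str(n) for n in halves])       # ['10.13.37.0/25', '10.13.37.128/25']
print(parent.num_addresses)           # 256 total addresses in the /24
print(halves[0].num_addresses)        # 128 per /25
# Usable host addresses exclude the network and broadcast addresses:
print(len(list(halves[0].hosts())))   # 126
```

Note the 126 rather than 128: each /25 still reserves its own network and broadcast addresses.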
All right, while the module's loading: BGP is such a useful protocol. If you have ever configured it, you know that firewalls tend to play havoc on BGP connectivity, because BGP uses TCP port 179 to form neighbor relationships with other BGP speakers. Now, in the world of BGP, you'll notice that all it takes, especially on the internet, is one misconfiguration for poisoned routes to be injected into the public internet and make certain websites unavailable. We've seen it happen before, and we've seen how it can take DNS down with it: if I can't get to the network where DNS lives, I don't have any resolution, and I'm not getting to that website.

And this is critical, this is important, because think about what actually goes on between the DNS and routing components. When I issue a command, or let's just open up a web browser and go to httpbin.org, what has to happen first is that httpbin.org has to be translated into an IP address. Remember the phone book analogy, right? You don't want to remember everyone's phone number, so you have a phone book with their contact information. It's the same case for DNS. Now, with respect to DNS, we have a very interesting flow, and you'll see it in a lot of environments. You might even use a DNS cloud service or something to give you the ability to create records. But in this flow, you have a network, let's say, that you've built in your data center, and you might have a local DNS server that will respond to immediate queries, because it's going to have a bunch of cached entries, and those entries say: hey, I know what this IP address is. And once you have that IP address, your network stack can figure out how to route to it. So httpbin.org translates into an IP address, which we then need to figure out how to get to, and we're going to use the routers in between to get there.
So without DNS, this is gonna be a problem, because we can't translate that name and we ultimately can't get to that destination. But in this flow, there are actually a few types of DNS servers. There is the DNS resolver, and that's the immediate server that you'll probably assign to your hosts or your endpoints. And then there's the root name server. The root name server basically provides a referral to whoever owns the top-level domain. So let's just say you go to httpbin.org. The top-level domain is .org. So .org means I have to figure out: if I go to this top-level domain server, what is it gonna tell me? Who is it gonna tell me to go to? Once the root name server points me at the .org top-level domain server, that TLD server tells me which name server actually holds the records for httpbin.org. Now this is generally a public DNS server that some organizations might have access to to make entries into, or you might roll your own DNS server that you contain, manage, and lifecycle yourself. Now what's interesting is that there's one more component called the authoritative name server, and it's actually the final stop. It holds the real records for the domain, and it passes the translation back to your local DNS server, which passes it along to the host. When the host knows this translation, it's able to go out and say, look, all right, let's get to this destination. Hey, router, can you take me to 1.2.3.4? The router says, yes, I know how to do that, takes you there, you get your response, and everything is good. Now, this is why you need to have network accessibility to this DNS server regardless of where it is. If you don't, well, what you normally do in production environments is always have redundancies.
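The resolver walk described above can be modeled in a few lines of Python. Every server name and IP below is made up; the point is the chain of referrals (root, then TLD, then authoritative) and the caching the local resolver does:

```python
# Toy model of recursive DNS resolution. All names and IPs are fictional.
ROOT = {"org.": "tld-org-server"}                         # root: TLD -> TLD server
TLD = {"tld-org-server": {"httpbin.org.": "auth-server"}} # TLD: name -> authoritative server
AUTHORITATIVE = {"auth-server": {"httpbin.org.": "1.2.3.4"}}  # holds the real A record
CACHE = {}                                                # the local resolver's cache

def resolve(name: str) -> str:
    if name in CACHE:                      # cached answer: no upstream trip needed
        return CACHE[name]
    tld = name.split(".")[-2] + "."        # "httpbin.org." -> "org."
    tld_server = ROOT[tld]                 # 1) ask a root name server for a referral
    auth_server = TLD[tld_server][name]    # 2) ask the TLD server who is authoritative
    ip = AUTHORITATIVE[auth_server][name]  # 3) ask the authoritative server for the record
    CACHE[name] = ip                       # cache it for the next client
    return ip

print(resolve("httpbin.org."))  # walks the whole chain the first time
print(resolve("httpbin.org."))  # answered straight from cache the second time
```

If any hop in that chain is unreachable, say because of a bad route, resolution fails, which is exactly the BGP-breaks-DNS scenario above.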
So if you don't have your primary server, your secondary will respond and provide answers to any DNS queries that you have. Now, when working with DNS servers, there are a few types of records that you might encounter and work with. One is known as the A record, which maps a host name to an IPv4 address. So if you have an IP address and you wanna create a host name entry for it in a DNS server, you'll use an A record. And then there's something called a PTR record, or pointer record, which does the reverse: you have an IP address, and you wanna resolve the host name that it belongs to. In certain situations, you may not see PTR records on DNS servers, primarily because you might have hosts sitting behind load balancers, and you might have multiple load-balanced IPs that are round-robined, or geolocated, so that based off of where you are, you resolve against the DNS server that's close by. There are a few other types. There's the AAAA, or quad-A, record, and it does the exact same thing as an A record, except for IPv6. And oh, by the way, I know I didn't talk about IPv4 versus IPv6, but let me give you a quick note about it. About 20 years ago, we started to realize that we were hitting public IPv4 address exhaustion, meaning we were running out of public IPv4 addresses. So a bunch of standards bodies took the initiative to build out another IP stack called IPv6, which generated a gazillion addresses and made them all routable. And they designed it in such a way that basically every one of you could have multiple trillions of addresses if you wanted to, to assign to whatever you want. And this was an excellent fit for the fact that the number of devices was growing. We were seeing so many more devices on our home networks. And if you think about all those containers, yeah, they need IP addresses too.
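One concrete way to see what a PTR record actually looks up: reverse lookups use a specially formatted name under in-addr.arpa (IPv4) or ip6.arpa (IPv6), and Python's stdlib ipaddress module will show you that name without touching the network. The addresses are just examples:

```python
import ipaddress

# A records answer "name -> IPv4", AAAA records "name -> IPv6", and a PTR
# record answers a query for this reversed name form.
v4 = ipaddress.ip_address("93.184.216.34")
v6 = ipaddress.ip_address("2606:2800:220:1::1")

# The name a resolver would query to find the PTR record for each address:
print(v4.reverse_pointer)  # 34.216.184.93.in-addr.arpa
print(v6.reverse_pointer)  # nibble-reversed name ending in ip6.arpa
```

So "reverse DNS" is really just a forward lookup against a mechanically derived name, which is also why it's easy for load-balanced or geolocated IPs to simply not have one.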
However, the reason why IPv6 adoption has been massively slow is because of the band-aid solutions we stuck on top of IPv4. How many of you are familiar with network address translation? That is the reason why. And we still use NAT today, even in Kubernetes, as funny as that sounds. I will say, though, in Kubernetes you do have dual-stack functionality, so you can do IPv4 and IPv6. But this just gives you a bit of context as to where we are in this space. You might find a lot of websites are actually using IPv6 today but still support IPv4, primarily because of some legacy applications that exist out there. Now, one more point I'll bring up is about fully qualified domain names. A fully qualified domain name is the complete path for a host name. So when I go to httpbin.org, that is the fully qualified domain name for that site. But I might also have other subservices that exist in that same location, and one might be version1.httpbin.org, and that's the fully qualified domain name for that. Now, when you think about Kubernetes, how does Kubernetes do DNS resolution? In Kubernetes, we deploy things into namespaces. And in those namespaces, we'll have pods and services. Now, if I'm trying to access a pod or a service within the same namespace, I just need to reference it by its pod name or service name. But if I'm going between namespaces, or coming in from the external side, I might want to use the fully qualified domain name, which is the service name, then the namespace, then svc, and then the cluster domain. It could be cluster.local, it could be whatever you call it. The reason we use this format is for remote access as well. Now, what we're going to do is use two tools very quickly, and you'll get a sense of how you can use DNS and interpret certain types of data. So a very useful tool... oh, what happened to my terminal? There we go.
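The Kubernetes service name format above is mechanical enough to sketch in a one-line helper. The service and namespace names here are hypothetical; cluster.local is the common default cluster domain, but it is configurable:

```python
# Kubernetes service DNS names follow <service>.<namespace>.svc.<cluster domain>.
def service_fqdn(service: str, namespace: str,
                 cluster_domain: str = "cluster.local") -> str:
    """Build the fully qualified domain name for a Kubernetes service."""
    return f"{service}.{namespace}.svc.{cluster_domain}"

# Within the same namespace you could just say "httpbin"; across namespaces
# or from outside, you'd use the full form:
print(service_fqdn("httpbin", "default"))   # httpbin.default.svc.cluster.local
print(service_fqdn("sleep", "team-a"))      # sleep.team-a.svc.cluster.local
```

CoreDNS (covered shortly) is what actually answers queries for these names inside the cluster.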
A very useful tool that we used to use back in the day is something called nslookup, which was a very easy way to translate a domain name or a host name to an IP address. These days, however, we use something called dig. You basically specify the dig command, then you target your server, and then you can target a certain set of features that you're looking for. So if I just run a simple dig against httpbin.org, I'll get similar information to what I got with nslookup. What you'll notice, though, is that it's actually spitting out a lot more verbose information. I can shorten this to give me just the specific information I'm looking for about httpbin.org. But if you notice here, there are four addresses. There are four A records for httpbin.org. And the reason why is load balancing. It round-robins the connection between any one of those IP addresses. So if any one of those were unavailable for any reason, well, another one is going to respond. And this is very powerful, especially when you're thinking about high availability, redundancy, and ensuring resiliency in your applications. DNS tends to be very useful in those particular circumstances. Think about disaster recovery for a second. If you had two environments, and you had apps behind those environments, and you wanted to ensure that your apps were always available, well, you'll have other mechanisms to proceed with disaster recovery. But DNS preserves the name, so that folks accessing that service or application can continue to do so without any sort of disruption. So you can use dig to shorten the output, which will only give you, let's say, the A records in this case. You can use dig to give us very interesting details about who's mapping to those addresses and what type of record you're actually using. In this case, we're using A records.
We could be using CNAMEs, which stands for canonical name, which is just a mechanism to map a host name to another host name. Or there are various other options. If you use dig with the @ flag, that allows you to specify your own DNS server. So let's just say you're using dig without the @ flag. What that means is you're just gonna use the local server that you're connected to or that you've been assigned, meaning it could be something local to your network. But sometimes that local server may not have the answer you're looking for, and so you have to specify another server. So if we use 8.8.8.8, which is one of Google's DNS servers, it will resolve something that our local DNS server may not be able to resolve. It does it in a very similar manner to the local DNS server, except sometimes you might find that the local DNS server takes a little bit longer, because it has to go through the process of discovery through the upstream servers that it communicates with to get that answer. So there's plenty more we could dig into, but what I really wanna emphasize is that DNS is extremely critical, especially when it comes to working with containers and Kubernetes. dig is an excellent tool to help you decipher whether or not you're getting the right translations from host names to IP addresses. But it's not just about using dig. There's so much more. We're gonna dig into some other areas as well. What's up? Are you gonna dig into me? I'm gonna dig into... no. Now we're gonna dig into curl. So let's go ahead and move on to the next module. Actually, before you do, I will point out something about CoreDNS. How many of you have worked with CoreDNS before? Perfect. All right, for those that haven't, CoreDNS is the DNS server inside of your Kubernetes cluster.
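If you don't have dig handy, you can ask the system resolver the same "what does this name map to" question from Python. Note the difference: dig speaks DNS directly to a server you choose, while the stdlib call below goes through the operating system's resolver, which consults the hosts file and whatever DNS servers the machine is configured with. Using localhost keeps the example offline-safe:

```python
import socket

def lookup_a(name: str) -> list[str]:
    """Return the IPv4 addresses the system resolver gives for a name."""
    infos = socket.getaddrinfo(name, None,
                               family=socket.AF_INET,
                               type=socket.SOCK_STREAM)
    # getaddrinfo may return duplicates across socket types; dedupe them.
    return sorted({info[4][0] for info in infos})

print(lookup_a("localhost"))  # typically ['127.0.0.1'], answered by the hosts file
```

A name with several A records (like the four for httpbin.org in the demo) would come back as a multi-element list here, which is the same round-robin pool dig shows you.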
What's really interesting is that this process is automated, so you never have to sit there and configure CoreDNS, because it comes with the control plane. Any time a service or a pod comes online, an entry is automatically created in CoreDNS so that it becomes referenceable across the entire cluster. So any resource that needs to know about this name, translate it, and find a way to communicate with it will reference CoreDNS for the answer. You could try it out at home if you wanted to: just deploy CoreDNS as a container, exec into it, and then mess around with the local hosts file and start adding your own entries. By the way, even without a DNS server, every computer here has something called a hosts file. That hosts file allows you to statically map host names to IP addresses. This is great if you're doing testing, but highly dangerous if you're trying to go into production. So just keep that in mind: it's a great tool, but sometimes you might forget that you left an entry in there, and then you can't communicate with a certain resource. All right, let's move on to curl and HTTP. And I need some water. All right, how many of you are familiar with API communications, working with the HTTP layer, and using curl? All right, can I just skip this module then? No, let's go through it very quickly. So curl is an excellent utility for interacting with HTTP applications, and it will give us different types of responses. As Jason showed you earlier, we ran curl against httpbin, and what we got as a response was an HTTP 200 success code. Now, when it comes to working in Kubernetes networking, we care so much about status codes at the HTTP layer, because they give us an indication of whether or not services are accessible, or if they're rate limited, or if there are too many requests coming inbound.
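The hosts file format the speaker warns about is simple enough to parse yourself, which also makes the failure mode obvious: a stale static entry silently overrides DNS. This sketch parses hosts-file syntax from a string; the entries are invented for illustration:

```python
# A made-up hosts file, in the same syntax as /etc/hosts on Linux.
SAMPLE = """
127.0.0.1   localhost
# 10.0.0.5  old-test-entry    <- a forgotten entry like this causes outages
10.0.0.9    httpbin.test httpbin
"""

def parse_hosts(text: str) -> dict[str, str]:
    """Map each host name in hosts-file text to its static IP."""
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blank lines
        if not line:
            continue
        ip, *names = line.split()             # one IP, then one or more aliases
        for name in names:
            table[name] = ip
    return table

print(parse_hosts(SAMPLE))
```

The resolver checks these static mappings before ever asking a DNS server, which is exactly why a leftover test entry can make a perfectly healthy service unreachable.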
Now, the HTTP layer is something that becomes much more prevalent when you're working with a service mesh, because that's how we interact with services and apply policies against those services. Now, what should I talk about? Yes, okay, so with HTTP, there's a structure, right? You issue a message to a server, and that server gives you a response, but you can actually run several different types of operations, or request methods, against a server. The obvious one that you've probably all worked with is the GET method, which allows you to just retrieve information, the output of a certain website, for example. There are other methods, like the POST method, which allows you to create resources inside of your application. The PUT method is very similar; it allows you to update resources. And then there are a few other methods, notably the DELETE method, which allows you to remove unnecessary resources. The reason I bring this up is because when you start thinking about layer-seven authorization between your services, you may want to allow only GET methods but disallow the DELETE method. This is important, especially as you're trying to establish your security posture in your environment, and you start implementing policies to ensure only certain services can communicate with others and only run certain actions against them. You'll likely use something beyond a service mesh to help with the policy development. You might have heard of OPA, or Styra, the company that helps develop OPA; they provide the authorization mechanisms and engines that allow you to get really granular with your security. Anyways, enough about that. Now, there are various status codes. There are the 100-level status codes, which provide you informational responses. There are the 200-level status codes, which tell you that the request was successful and you will receive a response.
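Method-level authorization can be demonstrated end to end with nothing but the Python standard library. This is a toy stand-in for what a mesh proxy or gateway would enforce: a local server that allows GET but rejects DELETE with a 405. Everything here (the policy, the server) is invented for illustration:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_METHODS = {"GET"}  # hypothetical layer-7 policy: GET only

class PolicyHandler(BaseHTTPRequestHandler):
    def _handle(self):
        if self.command in ALLOWED_METHODS:
            body, status = b"ok", 200
        else:
            body, status = b"method not allowed", 405
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    do_GET = do_POST = do_DELETE = _handle   # all methods go through the policy

    def log_message(self, *args):            # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), PolicyHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

def request(method: str) -> int:
    conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
    conn.request(method, "/")
    status = conn.getresponse().status
    conn.close()
    return status

get_status = request("GET")
delete_status = request("DELETE")
print(get_status, delete_status)  # the GET passes, the DELETE is denied
server.shutdown()
```

In a real mesh the policy lives in the proxy, not in the application handler, but the decision being made is the same one.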
There are the 300-level codes, which tell you that your request is being redirected to another system or another server, or maybe through a proxy. The 400-level status codes are client error codes. Interestingly enough, these are the messages that we return back to a client when we want to disallow something, or when the client is unable to, let's say, find a website. Have you ever heard of 404 Not Found? That's one of the client-side messages that you would receive if a particular server or website was inaccessible. And that could be due to a number of things: a lack of network connectivity, DNS not resolving, but it's probably something on the client side. Now, when you receive a 500-level response, chances are it's the server saying this resource is not accessible, or there's a problem trying to access this particular resource. So I've listed out a bunch of different status codes that you might encounter. Some of the more notable ones are 401 Unauthorized and 403 Forbidden. And a 403 Forbidden is usually tied to a policy that you might have put in place to disallow a certain resource from accessing another resource based off of a certain attribute. What's interesting is that if you go through this list, you'll start to realize that some of these status codes show up when you're working with service mesh technologies or API gateway technologies. Now, we're just gonna use the curl utility, and I have a very strong feeling a lot of you have used curl before, so I'm not gonna get too deep into this, but curl is a great utility for interacting with a server. It allows us to provide various options, provide authentication against a secure server, so we can present, let's say, a certificate as we're making that request. And if that certificate is a valid one, then we're able to authenticate into the system and run whatever action we want to.
But there are common flags that you'll use, right? You'll use -X, capital X, which allows you to specify the request method that you're gonna use. You might add -H, capital H, which is a header, and what you might be throwing in there is an attribute. That attribute might be a key-value pair, like user=marina or user=jason, and based off of that header information on the policy side, if the policy recognizes that header you've placed in your request, then the request might be allowed or disallowed. There are several other options too. For example, -k, where you're trying to make an insecure request to a secured resource, and if that resource allows it, then for sure you can get through. So we're gonna curl ipinfo.io and just see what it gives us. And if you can see here, it gives us a good bit of information. For example, it gives us the IP address that is owned by that host name, where it's located, and several other bits and pieces that might be useful to help us determine if something is going wrong in the environment. So if I run the second command, this curl command, which has a GET method in it, and I apply it to httpbin.org and use the header Accept: application/json, this is going to work, I think. I could be dealing with a connectivity issue. Okay, maybe that didn't work. That's because all of you are trying to access it. It's receiving too many requests. What is it, HTTP 429? Yes, that's too much, man. All right, we'll skip that one. We'll try this test domain. One thing with this curl command that I'm gonna run: the capital -O allows me to download a file and output it to whatever download folder I've specified. So if you see here, I downloaded a testfile.tar.gz. I can do this with another flag, download the same file, but the lowercase -o allows me to rename the file as I'm downloading it.
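The header-based policy idea behind `-H` can be sketched as a pure function: given the headers a client attached, decide which status code the policy side would return. The allowed-user list and the header name are hypothetical, chosen only to mirror the user=jason example above:

```python
# A toy header-based authorization check, the kind a proxy might enforce
# when a client sends curl -H "user: jason" ...
ALLOWED_USERS = {"jason", "marina"}  # hypothetical policy

def authorize(headers: dict[str, str]) -> int:
    """Return the HTTP status a header policy might produce for a request."""
    user = headers.get("user", "").lower()
    if not user:
        return 401  # Unauthorized: no identity presented at all
    if user not in ALLOWED_USERS:
        return 403  # Forbidden: identity presented, but the policy denies it
    return 200      # allowed through to the upstream service

print(authorize({"user": "jason"}))    # 200
print(authorize({}))                   # 401
print(authorize({"user": "mallory"}))  # 403
```

Note the 401 versus 403 distinction from the status-code list: 401 means "who are you?", 403 means "I know who you are, and no."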
So if you think about it, let's say you're downloading a binary, let's say it's doctl, and you want to call it something else; this might be useful for that. And that just brings us to the very last section, HTTP/1 versus 2 versus 3. Now, HTTP/1 was one of the first protocols that allowed us to communicate across the internet between client and server, but we had to take that a step further and implement security capabilities, basically encryption and authentication, whenever we were interacting with web servers. So this is where we implemented SSL, and then TLS, and that eventually brought us to HTTP/2. Now, HTTP/1 and 2 are based off of the TCP protocol. I didn't talk about TCP or UDP yet, so let me bring it up now. TCP and UDP are two protocols that live at the transport layer of the OSI model. With both TCP and UDP, we're really shooting traffic over certain port numbers, but there's a very distinct difference between the two. TCP is something called connection oriented, which means when a client forms a connection with a server, it's looking for an acknowledgement. The server will send some data, the client will acknowledge it, and this is a repetitive process to make sure that that communication is reliable. With UDP, however, we just fire off packets and keep going, and the interesting thing is that the sender doesn't wait, right? It doesn't look for an acknowledgement, and this is very fast. So you have one system that acknowledges every single flow but is slower, and another system that just keeps firing away and effectively cares about speed. Now, why this is important is that there are a lot of applications that are sensitive to latency in the network, and TCP, while an excellent fit for traffic that needs guaranteed delivery, can get in the way for them.
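The connection-oriented versus connectionless difference shows up immediately with plain sockets. TCP's connect() performs a handshake and fails loudly if nobody is listening; UDP's sendto() just fires the datagram off regardless. This sketch assumes nothing is listening on localhost port 1, which is almost always true but is an assumption:

```python
import socket

DEAD_PORT = 1  # assumption: nothing listens on 127.0.0.1:1

# TCP: connection oriented. The three-way handshake must complete before any
# data flows, so connecting to a dead port raises an error right away.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.settimeout(1)
try:
    tcp.connect(("127.0.0.1", DEAD_PORT))
    tcp_ok = True
except OSError:            # connection refused: the handshake never completed
    tcp_ok = False
finally:
    tcp.close()

# UDP: connectionless. sendto() succeeds whether or not a receiver exists;
# the sender gets no acknowledgement and doesn't wait for one.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = udp.sendto(b"video frame", ("127.0.0.1", DEAD_PORT))
udp.close()

print(tcp_ok, sent)  # TCP failed to connect; UDP "sent" its bytes anyway
```

That fire-and-forget behavior is exactly why streaming media favors UDP, and why HTTP/3 layers QUIC on top of UDP to add back the delivery guarantees where they're needed.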
Now, on the other hand, if I'm having a voice conversation, or we're live streaming, you can't acknowledge every single packet that you send, because the receiver, whoever's watching on the other end, will see that conversation get garbled and also very slow, because it's always waiting for an acknowledgement. Whereas if I use UDP, I keep firing away my video packets and the receiver receives them. There's something else taking care of delivery at a higher level, and we're not gonna get into that now, but I bring this up because the next generation of HTTP, HTTP/3, actually uses UDP plus a higher-level protocol to guarantee delivery. So UDP optimizes the flow of traffic, and, by the way, Jason made this diagram, it's a fantastic one, so please give him a thanks and clap your hands at the very end. Okay, there you go, let's clap. Yeah. All right, I'm not gonna dig into all of these flows here, but the reason why we use UDP is because it's not connection oriented. It moves very quickly and allows us to move data quickly between server and client. Additionally, the higher-level protocol, something called QUIC, allows for that guaranteed delivery altogether. And furthermore, there is some inherent security built into HTTP/3 and the QUIC protocol, so you start to be less concerned about things like encryption and identity, although they're still important as you go along. And that brings us to the end of the HTTP module. Now I'm gonna pass it back to Jason; let me, you know, get some water, and he's gonna talk about load balancers, firewalls, and gateways. Yeah, now we're really going to start stress testing our VMs and our network, so this is where everything breaks. And hopefully not... whoop, whoop, can you switch it back? Oh, that's okay.
I'm a Windows user and we're doing it on his laptop. They do exist in the Kubernetes space. Okay, so this session is going to... this, okay. So this session is going to cover firewalls, load balancers, and proxies, and it's really dense. But the main motivation here is that over the last couple of years I've been diving deeper and deeper into how Kubernetes makes networking available. You've all heard that kube-proxy can be replaced; in order to understand how kube-proxy can be replaced, you really need to understand what it's doing in the first place, and how the underlying networking happens amongst your containers, amongst your various workloads. The three different sections here, firewalls, load balancers, and proxies, are all listed in this module because they have some underlying and overlapping technologies. The main idea is that we've established how to make a connection. We know that we can communicate across networks between different services in a very distributed way. But a lot of what we've talked about up to this point isn't providing much security. It's not providing much reliability. Those are things that we need in order to hit however many nines of uptime you need in your environment. We'll start with security, and specifically security with firewalls. And the technology that we're going to use, I'll make this a little bit bigger, is going to be iptables, no surprise here. We could eventually talk about nftables, or we could talk about the future, but let's talk about what we have available now. If you spin up a kind cluster or a k3d cluster, chances are you're going to be messing with this. The important facts about iptables are that there are five tables: the filter table, the NAT table, the mangle table, the raw table, and the security table.
Each of those has a different focus, but you do have the ability to interact with any of them, and it can make everything kind of a jumbled mess. And to show an example of that jumbled mess, this is a generalized flow of how traffic goes through iptables chains and tables. You can see on the top, in most of the colored boxes, the name of the table that you're in. In the green box we have the raw table, in the blue box the mangle table, in the reddish-orange box the NAT table, and in the purple box the filter table. All of the rest of them are external services or external parts of the networking stack. Underneath, in the box that's surrounded by everything, you can see the chain that you're going through: prerouting, postrouting, input, and a couple of others, which are output and forward. That, fundamentally, is what we're going to see in our Linux environment, and there are going to be several things that we inject to make that flow even more complicated in order to support Kubernetes, or to support containers. So the first thing, and I want to make this as practical as possible, the first thing we're going to do is install Docker, and we'll see that Docker itself starts adding things to those tables. It shouldn't take too long. I know we're all spamming the network, probably, but I'll clear this out. And then we'll run iptables, selecting the filter table in this case, though we could select any of the tables and see what's there. We're going to list everything, and we're going to list things numerically. This is pretty small, but I'll scroll through some of it. So from the top of the output, you can see that we have the input chain, the forward chain, and the output chain. You can see that there are now Docker targets for some of the various destinations. We haven't actually installed any Docker containers yet. We will in a second here.
But you can see that there are also a couple of different chains in the filter table here that are basically just sending everything back, dropping some things here. We scroll down. We are going to add three different containers. Those containers are just echo containers; they give me some flexibility on how we can confirm which container is being hit, without having to go into network namespaces or do anything like that. Let's spin those up. Hopefully we should be able to pull them down pretty quickly. And if I run the same command, now looking at the NAT table, we should see additional rules created for those containers in the NAT table. And as expected, we can see the lower three correlating with the specific applications that we just deployed: 172.17.0.2, .3, and .4, and those are all going to destination port 80. There's a lot of IP information that I'm skipping over temporarily, because the main idea here, remember this is the firewalling section, is that I want to show that we have the ability to direct traffic and to reject traffic as necessary. So just to check that everything works, we are going to run some curl commands again. Here we can see we're going against each one of the different containers: 8001, 8002, 8003. And I will note, for everybody following along, pay close attention to the hashtag comment at the top. Otherwise, the command will give you unexpected output; you might start applying iptables rules to different servers. So just be aware that wherever you need to run the command is identified in the command. In this case, the output of each of the commands is to dump the port that it's listening on. In this case, 8001, 8002, 8003 go to those respective ports. All right, so now we actually want to get our hands dirty. We want to modify some of the behavior of the Docker containers we have. And so what we're going to do is run and add an iptables rule.
We're going to reject anything that's going to destination port 5678 with protocol TCP, and we're actually going to insert it as the first rule in the DOCKER-USER chain. Remember, we're running this from source, so make sure that you're on the source tab. That has been applied. There's a question there. Oh, he should be able to get you the microphone here quickly. All right, this is from a couple... can you hear me? No. No, it's not okay. You want to try to do something? Oh, there you go. Hi, sorry, from a couple of steps ago, when you ran the Docker command and those IP addresses got added to iptables, what mechanism does that? Is that part of Docker? So, we'll see this in the next module: Docker is controlling the IP generation and assignment. Okay, got it. Cool, thank you. Thank you. Forge ahead. All right, so up to this point... thank you for the question. That's the first question. If we have more questions... I'm trying to speed through this so that we can get everything in a full view, and then we'll have time for questions later. But if you have anything, please feel free to ask. All right, so with that rule, we would expect that this curl should not work anymore, because... oops, I'm not following my own rules. I need to run that on destination. And as expected: ultimately, the traffic is going to port 5678, not the published port 8001; eventually it goes through the chain to get to where it is destined, the 5678 port, and we are rejecting all of that traffic. Cool. And so just to prove that iptables is actually doing what we want it to do, we are going to use xtables-monitor, and that's going to give us more of a machine-readable output, but it will confirm the flow of the traffic. So if we go back to source... controls, yeah.
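The key behavior behind that demo is that iptables evaluates a chain's rules top to bottom and stops at the first match. Here's a toy model of the one-rule DOCKER-USER chain we just created; the rule structure is simplified for illustration and isn't the real iptables data model:

```python
# First-match-wins evaluation, the way an iptables chain works.
# Our toy chain mirrors the demo rule: reject TCP traffic to port 5678.
CHAIN = [
    {"proto": "tcp", "dport": 5678, "target": "REJECT"},
]

def evaluate(proto: str, dport: int, default_policy: str = "ACCEPT") -> str:
    """Walk the chain; the first matching rule decides the packet's fate."""
    for rule in CHAIN:
        if rule["proto"] == proto and rule["dport"] == dport:
            return rule["target"]  # first match wins, traversal stops here
    return default_policy          # no rule matched: fall back to the policy

print(evaluate("tcp", 5678))  # REJECT, just like the failed curl
print(evaluate("tcp", 80))    # ACCEPT, traffic to the containers still flows
```

Inserting the rule first in the chain matters for the same reason: anything placed after a matching REJECT never gets evaluated.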
So run that again, we'll run the monitor, and once we have the monitor running, we will be able to see that when we make that request from the destination, it goes through several chains, and it will spit out those chains. Here, like I said, it's more machine readable, so you should probably go through it in your own terminal, but you can see several of the chains listed in all caps. We can see a lot of PREROUTING kind of in the center of the screen. We can see there's a jump to Docker, so it's going to jump over to the Docker chain. It's going to leverage the DOCKER-USER chain, and eventually, down towards the bottom, if I am at the bottom of the screen, we can see that it is hitting a reject rule. So just keep in mind, if you do need to, for some reason, get into iptables, you do have the ability to audit it. If you are in the position where you need to audit iptables, though, I apologize. Chances are, we're all here because we understand that containers and all of these abstractions are here to help us, so that we can basically get rid of static manual manipulation and start trusting the abstractions. Okay, so back on this host, I'm going to end the command, and then I am going to skip over the next iptables command, just for time. What it's going to do, though, is load balance the traffic to various ports, and if you wanna run it, you can. But that brings us to the second part, which is load balancers. iptables, like I said, does overlap functionality between firewalls and load balancers, because you can, in fact, route between different services and control the way that they are routed. But there is, I don't know if it is still lesser known, another option. One option that you have within Kubernetes, for instance, is going to be something called IPVS. Where iptables is basically just a choose-your-own-adventure mapping of chains and tables and chains and tables...
...IPVS is going to be a little bit more of a programmatic approach. It's going to leverage the kernel to make routing decisions, and it's not going to be, I guess, as directly tied to a map. It gives you more flexibility as far as how you can load balance, because it is more programmatic. You saw that in iptables we had the ability to do round robin. Down below in this bulleted list, we can see weighted round robin, least connection, weighted least connection, locality based, and so on and so forth. There's a lot more flexibility, and ultimately, depending on your scenario, it could be more performant. Okay, but in order to leverage this on the same system, if you're following along, just make sure that you clean up iptables before you start running IPVS, so that we don't have overlapping rules trying to interact with one another. I think we're good, because I skipped the previous command, but we'll find out here in a second. As with several of the other examples, we need to install the command; it's not built into the VMs from the start. But instead of a chain or a rule that's basically just one long CLI command, we have a new rule that we're deploying on source, and you'll see that we have several ports. They're going to load balance across the three containers that we have, the .2, the .3, and the .4. We'll pass that in to the command. And then once again, we'll switch over to the destination server. And if I properly cleaned up the reject rule, which I don't know if I did, things should route successfully. Ah, wonderful. They route to each one of the different ports, or each one of the containers that corresponds with each one of the ports. And on the left, you can see we did a fair bit of command-line configuring, but basically all I'm doing is counting how many times we hit 8001, how many times we hit 8002, and so on and so forth. So we see we have a rough split; it's split evenly.
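Round robin, the scheduler in this demo, is simple enough to model in a few lines. The backend addresses mirror the three demo containers, but the code is a standalone sketch, not how IPVS is implemented in the kernel:

```python
import itertools

# Round robin in miniature: each new connection goes to the next backend.
BACKENDS = ["172.17.0.2:80", "172.17.0.3:80", "172.17.0.4:80"]
_rr = itertools.cycle(BACKENDS)

def pick_backend() -> str:
    """Return the backend for the next incoming connection."""
    return next(_rr)

# Nine connections split evenly, three per backend, like the demo's counts.
counts = {b: 0 for b in BACKENDS}
for _ in range(9):
    counts[pick_backend()] += 1
print(counts)
```

The "fair bit of command-line configuring" on the left of the demo screen is doing the same thing as the counting loop here: tallying which backend answered each request to verify the even split.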
And that's what we designed above; you can see that we have plain round robin. All right, so if we want to clean up and show a little bit more of a weighted example (this is, I think, the last example in this section that I'll run), we can now switch it. As you might expect, we can see 1, 1, and 98, meaning that we're going to send the majority of the traffic to the .4 container. And hopefully that's reflected in the requests from the destination server. Oops. Oh no, we're good. And there again you see the tabular format, where we're sending one to 8001, one to 8002, and 98 to 8003. So the question, I believe, is how we're implementing this, or how we're actually creating shifts in the weights. Okay, yeah. So if you look at the command on the right, we're passing in the -w, and that is the weight flag. So we're saying that the weights here are 1%, 1%, and 98%. One of the things I didn't mention, again for time, is that iptables does have the ability to do this too, but the logical reasoning behind how you know you're load balancing with specific weights gets a little complicated, because you're going through the chains and the tables. Good question, thank you. All right, and so there are additional algorithms. I've listed them here: direct routing, tunneling, NAT masquerading. And like I said previously, IPVS is going to be, in some respects, a more robust solution. But ultimately, if you spend some time digging into Cilium, you'll see that there's another option entirely, in that you can start digging into eBPF and use the tables it provides to make routing decisions that allow you to provide security, or resiliency, with load balancers. That brings me to the last section, which is proxies.
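Switching that same illustrative service over to weights is a small change; the 1/1/98 split matches what's shown on screen:

```shell
# Edit the virtual service (-E) to weighted round robin, then edit each
# real server (-e) with a weight: roughly 1% / 1% / 98% of the traffic
sudo ipvsadm -E -t 10.13.37.1:8080 -s wrr
sudo ipvsadm -e -t 10.13.37.1:8080 -r 10.13.37.2:8001 -m -w 1
sudo ipvsadm -e -t 10.13.37.1:8080 -r 10.13.37.3:8002 -m -w 1
sudo ipvsadm -e -t 10.13.37.1:8080 -r 10.13.37.4:8003 -m -w 98

# Confirm the scheduler and weights took effect
sudo ipvsadm -L -n
```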
And for proxies, I don't have any practical examples, but know that proxies are a layer that sits in front of either your inbound or your outbound traffic, and they allow you to programmatically implement firewalling rules, authorization, authentication, and so many other things that you can do from a network perspective without having to implement them directly in your code base. And so, for those of you who might be familiar with what a service mesh does, you'll notice that proxies are paramount, in a lot of ways, to what you have to do to enable that functionality. I know there's a lot of talk about various ways to avoid the sidecar proxy, and you can talk to me directly later; I'd love to talk with you about those. But for the purposes of this foundations course, I want to make sure we're all understanding the basics, and then we can talk later about more complicated things. As far as common proxies go, I did list a few. One of my favorites is Envoy. You also have the ability to run (oh, there's a typo here) HAProxy, and then also Nginx. You can choose whichever one you want, whichever one fits your standards. Some are going to be included in the solutions that you choose. And so we'll kind of speed ahead into Marino's talk, where we get into containers, and then stay here for my Kubernetes talk, because that's also good. Awesome. All right, so we use proxies and load balancers and firewalls to control the flow of our traffic. You know, proxies are gonna be present in a lot of Kubernetes environments, especially if you're using a service mesh. But what's interesting is that a proxy contains all of the traditional networking and routing and firewalling that you would see in traditional devices like a physical router or a physical firewall. We've just slimmed it down into this little virtual artifact that runs alongside your main container, or can run at the ingress level, at the entry point.
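As a rough illustration of that "programmatic layer in front of traffic" idea, here is a minimal Nginx reverse-proxy sketch; the listen port and upstream address are made up for the example, and the config path assumes a stock Nginx install:

```shell
# Sit Nginx in front of a backend: stamp a forwarding header on every
# request, then pass it along -- auth, rate limits, etc. would slot in
# at this same layer, outside the application code
cat <<'EOF' | sudo tee /etc/nginx/conf.d/edge-proxy.conf
server {
    listen 8080;
    location / {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://10.13.37.2:8001;
    }
}
EOF
sudo nginx -s reload
```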
And it's very important to know this, because it's more than likely that you're gonna work with Ingress, or the Gateway API spec, or even a service mesh. Now, containers. I'm not gonna describe containers to y'all, because we're at KubeCon and I think most of you know what they are, but for those that don't: a container is a mechanism to isolate a process and give it access to disk, network, memory, and a compute environment it can run in. Except, if you had to sit there and create the network for every single container, you would be wasting a lot of time. Now, you could automate this using Bash, and it has been done before, but Docker was the first to go out and actually create a flow that allows a container to get access to the network the moment it comes online. Kubernetes took that to another level and provided this mechanism through the Container Network Interface spec, or the CNI spec. And as you can tell when you go to a variety of Kubernetes environments, you'll encounter the traditional CNIs that you're so used to, like Calico, Flannel, and now Cilium, because what they're doing for you is this: the moment you power on or deploy a pod, that container networking interface layer knows about it, because it's communicating with the kube-apiserver, and it understands that this workload needs an IP address so that it can have access to the network and communicate with other pods. Now, once it has an IP address, it can go do what it needs to do, but again, we use DNS to make sure we can communicate with other pods. But what happens when that container, or that pod, disappears? If we didn't have Kubernetes and we weren't using Docker, we would have to manually remove all the configuration for that container. So let's take a look and see what the configuration for that container, or networking namespace, actually looks like.
So a network namespace, as I brought up before, is actually an isolation mechanism. You're effectively giving a process access to network space, and you'll have many of these; each one can be akin to a pod, effectively. So a pod is like a network namespace at this point. Now, when you think about deploying network namespaces, we'll go ahead and do this right now with our net tools, and I've already got the namespaces configured from the previous module, so if I do an ip netns, we'll see that webapp and sleep are there. Now, I'll display a diagram that I showed much earlier on in our slide deck, which will start to let you understand where all these pieces fit together. What we have to do is give each namespace a process and an IP address, a link to that IP address, and a link into what we call a bridge, so that these two network namespaces are in their own networks and can route to and communicate with each other. So if we actually go and run one of these commands (we don't have to run both), you'll notice that we still have our previous interface from before. It just doesn't have an IP address, so we're gonna assign one to it very shortly. You can also notice, if you run this ARP command, that ARP is actually a translation from IP address to MAC address. So if you notice, you see that we have 10.13.37.128, and it translates to this hardware address, or MAC address. What that's actually implying is that if you were to do this with Kubernetes, the container that lives inside of that pod actually does have a MAC address. That's all I have to say about that. All right, so what we're gonna go ahead and do, we're also gonna check the routes that we're aware of, and you'll notice that the previous routes we injected through those static route configurations in module two are still present here, and that's fine, because we're gonna be using different subnets.
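As a sketch, the namespace, ARP, and route inspection just described boils down to commands like these; the lab already has the webapp and sleep namespaces, so the add lines are only for anyone starting from scratch:

```shell
# Create two namespaces standing in for pods (skip if they exist)
sudo ip netns add webapp
sudo ip netns add sleep
sudo ip netns list

# ARP translates IP addresses to MAC addresses; inspect the neighbor
# table inside a namespace to see the hardware addresses it has learned
sudo ip netns exec webapp ip neigh show

# And the routes the namespace currently knows about
sudo ip netns exec webapp ip route show
```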
By the way, you can configure multiple subnets within a network namespace without any issue. Obviously you wouldn't wanna do this manually, because you're gonna be managing thousands of containers; it doesn't make sense to do this by hand. So one of the first things we have to do, in order to connect these network namespaces and isolate them as separate networks, is create a bridge. We're gonna call it appnet0, and we're gonna bring it online very shortly, but you'll notice that appnet0 is right here. This is the bridge that we're gonna connect the network namespaces to. The reason we have to do this is because we have to use the physical interface on the host, and in our case that's ens4, which is up here, to be able to get to the outside world. Previously, all I did was route between two subnets in the private network, but if I need to communicate with the outside world, I have to expose these network namespaces somehow to the physical interface, the outside-world interface. So we go ahead and create the appnet0 bridge, we bring the bridge online, and now we're gonna attach these network namespaces to the bridge using virtual Ethernet (veth) interfaces. So we're gonna create one using the ip link command, called veth-sleep, and we're gonna peer it, which gives us two ends of a link. One end of that link lives in the namespace; the other end lives on the bridge that we just created. You're gonna start to see this visually as we get to the end. The second thing I need to do is the exact same thing for the webapp namespace: create a virtual Ethernet interface for that namespace, and then create the other end of the link, which gets attached to the bridge. And now we have two different namespaces attached to this common bridge.
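The bridge and veth creation described above can be sketched like this; the -br suffix for the bridge-side ends is an illustrative naming choice, not necessarily the lab's:

```shell
# Create the bridge both namespaces will hang off of, and bring it up
sudo ip link add appnet0 type bridge
sudo ip link set appnet0 up

# One veth pair per namespace: each pair is two ends of one link. The
# -br end will attach to the bridge; the other end will be moved into
# the namespace in the next step.
sudo ip link add veth-sleep type veth peer name veth-sleep-br
sudo ip link add veth-webapp type veth peer name veth-webapp-br
```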
Furthermore, I have to go ahead and assign the Ethernet interfaces to their respective namespaces. I then have to attach the veth interfaces, or more precisely the bridge side of each pair, to the actual appnet0 bridge, and I do this for both namespaces. Then I'm gonna go ahead and assign an IP address to the two different links, one in each namespace, and then we're gonna see if we can communicate with the outside world. We also have to add an IP address to the bridge itself, because it's gonna be our exit point to get to the physical network, the actual host network. So let's go ahead and assign an IP address to that bridge, and we're also gonna bring the virtual interfaces on the bridge side online. Because remember, each network namespace is gonna have one end of a link that lives in the namespace and one end that lives on the bridge. So we have to do this for both webapp and sleep. So we've gone ahead and done that. Now we're gonna try to ping, and we should be able to ping between the namespaces without issue; they're technically on the same network. You can think of these as two containers on the same network. And if you notice here, they're pinging just fine, so I'm pinging from one to the other. Actually, no, I'm not; I'm actually just pinging from my local host. Remember, my local host here, this Linux operating system, has access to the bridge we just created. So if I type ip link, notice that bridge right there, which is number 12. That's the link we're actually communicating with; because it's local to us, we can verify that it's up and online with a ping command. Now, if I try to use ip netns exec webapp ping directly toward an external address (23.185.0.4, which I think resolves to solo.io, if I'm not mistaken), it actually won't go through, and you can see that right here: network is unreachable. So what do we need to get to an unreachable network? Someone said it: a default route, exactly.
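Pulled together, the wiring steps just described look roughly like this; the interface names and the 192.168.52.0/24 addresses are illustrative:

```shell
# Move one end of each pair into its namespace...
sudo ip link set veth-sleep netns sleep
sudo ip link set veth-webapp netns webapp

# ...and attach the other end of each pair to the appnet0 bridge
sudo ip link set veth-sleep-br master appnet0
sudo ip link set veth-webapp-br master appnet0
sudo ip link set veth-sleep-br up
sudo ip link set veth-webapp-br up

# Address the bridge (our exit point toward the host network) and both
# namespace ends on one subnet, then bring the namespace ends online
sudo ip addr add 192.168.52.1/24 dev appnet0
sudo ip netns exec sleep ip addr add 192.168.52.10/24 dev veth-sleep
sudo ip netns exec sleep ip link set veth-sleep up
sudo ip netns exec webapp ip addr add 192.168.52.11/24 dev veth-webapp
sudo ip netns exec webapp ip link set veth-webapp up

# The two namespaces can now reach each other across the bridge
sudo ip netns exec webapp ping -c 2 192.168.52.10
```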
We need a default route, a default gateway, to get to that remote network. So let's go ahead and set that up here. We're gonna specify the 192.168.52.0 network and NAT-masquerade it, because here's the other thing I have to point out. Earlier on, Jason mentioned the concept of NAT, and so did I, much earlier on as well. And the reason we use NAT is, remember the whole thing about IP address preservation in the public space? We use NAT to be able to work around that. Well, we're kind of doing the same thing here, to a degree: we're actually overloading the interface that lives local to us, the one that communicates with the outside world, and sharing it with the two network namespaces we just created. So if I go ahead and create this NAT rule and I try to ping, it still doesn't work. We're still missing something. You mentioned it earlier: we're missing the route, right? So we're gonna go ahead and add the route, and once we add the route and we enable IPv4 forwarding (ip_forward), technically speaking, this should ping without a problem. And here we are; we're able to get to the outside world. What I've just done for you is create two different network namespaces to represent containers. So you could think of webapp as one container and sleep as another container. They have their own respective IP addresses, and they're on the same network. They have access to a bridge, which has access to the physical host that we're working off of, and that physical host has access to the network. We're overloading that physical interface so that both of these network namespaces, or containers, can communicate with solo.io, and that's what we're seeing right here. However, one thing I will point out: if you look at this diagram, I'll bring it up full screen, right? This is exactly what we just did. This is one network namespace that we created; this is another one. You can notice that they're on the same network.
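The NAT, routing, and forwarding steps above, as one sketch; ens4 is the lab host's outbound interface, and the subnet and gateway address are the same illustrative values as before:

```shell
# Masquerade the namespace subnet behind the host's outbound interface
# so replies from the outside world find their way back
sudo iptables -t nat -A POSTROUTING -s 192.168.52.0/24 -o ens4 -j MASQUERADE

# Default route in each namespace points at the bridge address
sudo ip netns exec webapp ip route add default via 192.168.52.1
sudo ip netns exec sleep ip route add default via 192.168.52.1

# Let the host forward packets between appnet0 and ens4
sudo sysctl -w net.ipv4.ip_forward=1

# Now an external address is reachable from inside a namespace
sudo ip netns exec webapp ping -c 2 23.185.0.4
```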
You can notice that there's a virtual Ethernet interface on this network namespace with this address, and the other end of the link is connected to the bridge we created. The bridge itself has an address. Actually, it should be one. Yeah, is that the right address that we assigned to the bridge? I don't know, I can't remember. But you can assume that's the default gateway. That default gateway allows us to get to the host side of the network, which basically NATs all of our connections to get to the outside world. So these two network namespaces are using iptables and the physical interface of the host to get to the outside world. Now, here's the problem: could you sit there and do this a hundred times over, a thousand times over, ten thousand times over? No, you don't wanna be doing that, because if you had to go and undo all of this, it would take the same amount of effort to un-configure the network namespaces and remove the links. You can't just simply delete the object and everything's good. This is why we have container network interfaces, because everything I just did is exactly what a CNI does. It automates the process: the moment a pod comes online, it gets an IP address and access to the host network, so that it can actually get outbound and communicate with the outside world. This is why you wanna consider a CNI like Cilium or Calico, or even come talk to us and we can tell you a little bit more about ours. Now, this completes the partial picture, but what it sets you up for is how we do networking in Kubernetes. So I'm gonna pass it along to Jason to effectively talk us through Kubernetes networking, cover some key components, and close us out. Go ahead, Jason. Thank you. And I will be brief; I know we only have six minutes left. There's a lot of good content in here. If you see something in there that you want me to talk about and I don't cover it, I certainly can later. But for now, I'll gloss over some of it.
So I definitely don't need to do the "what is Kubernetes" part; a lot of us are here for that reason. But what do we have available in this virtual environment so that we can start to play around with what Kubernetes is actually affording us on the network? In your environment, if you do a kubectl get nodes, you should see that you have three different servers, or three different nodes, in a cluster, and you know you have access to the cluster by virtue of the fact that the kubectl command responds. It is making a network connection to the Kubernetes API server, even though it is, I think, all locally addressed. Some of the things I would talk about a little more, if I had a lot of time, are the different kinds of value that different CNIs can provide to your Kubernetes environment. They all have to provide the set of functions required by the spec, but I don't think there are more than one or two CNIs that actually limit themselves to just those functions. Most CNIs are going to provide you many layers of functionality on top of that. You're going to see things like VPNs, you're going to see things like BGP, you're going to see features that let you push higher bandwidth through your cluster, and so on and so forth. And so hopefully, coming out of this session, you'll be able to understand, or at least communicate, the requests you have for that functionality with a CNI representative or someone who might be provisioning clusters for you. In this case, we are running Flannel. You can output some of that information here; use the ip command and you'll see it. But what I want to go over is a brief overview of the ways that Kubernetes networking works in general. There's intra- and inter-pod networking.
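A quick sketch of that inspection; the flannel.1 interface name assumes Flannel's default VXLAN backend, so it may differ in other setups:

```shell
# Confirm the three-node cluster answers through the API server
kubectl get nodes -o wide

# Inspect the overlay interface the CNI created on this node
ip -d link show flannel.1
```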
So a lot of what Marino just talked about is just the containers within the nodes, within different namespaces, and then extending that into multi-node clusters: being able to communicate across nodes, and what that means for the container from a security perspective, from a resiliency perspective, and so on and so forth. And then lastly, which I probably won't have time for today, there's ingress. Ingress, for anybody who might not be familiar with the term, is how you handle the north-south boundary of your cluster. Being able to get traffic into your Kubernetes cluster is usually required in order to get value out of it. Not in all cases. But we could have given another ninety-minute talk on the movement in the ingress space and handling things in a north-south sense. I'll just run through a couple of the examples that we have. In the examples, all we're doing is applying a YAML. This is one of the many, many YAMLs that you'll see this week. We're going to create pod specs, and in those pod specs we're going to have specific information. In this particular one, we can see the container port and some information about the protocols and what's available from that container. And we can see that we are running two containers in the same pod definition. And so, as you might expect, we're going to see that they can network amongst themselves. I should apply it first. Apply it, then we'll run it. I've got to wait for the container to load. So we can get some information about each of the containers; this is actually just exec'ing the ip a command. And there we can see the IP information for each container: 10.42.0.5 and 10.0.5. So I think that doesn't even overlap with the example from before, but I think it's close. All right, so that's within a specific pod. We can go one step further and apply a pod to a separate node. Here we can see we're using the nodeName key.
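A minimal version of the two-container pod being described might look like this; the pod name, container names, and images are illustrative, not the workshop's actual manifest:

```shell
# Two containers in one pod definition share a single network
# namespace, so they share the pod IP and can talk over localhost
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
      protocol: TCP
  - name: sidekick
    image: busybox
    command: ["sleep", "3600"]
EOF

# Exec the ip command in either container and you should see the same
# eth0 address, i.e. the shared pod IP
kubectl exec two-containers -c sidekick -- ip addr show eth0
```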
That's deploying container one and container two, much like we have here, and that allows us to run across different nodes in a cluster. I'm not going to spend too much time on this, because this is the core functionality of Kubernetes; this is why we've chosen to use this distributed system. The next level on top of that, going back to the DNS talk, is being able to address things without having to manually store all of the IP addresses. That's what service networking is going to provide you. Mostly it's going to be internal to your cluster. You're all probably familiar with the different types of services. We've already talked about the DNS that's accessible within your cluster to access workloads. There are some examples in here on how you would leverage it, and on the fact that it is, in fact, using iptables and DNS in this case. And once again, we can see the magic DNS that's available through Kubernetes. There's also an iptables command that we can look at there. To the iptables comment I made earlier, this is how the load balancing is going to take place, and each subsequent rule that you add to iptables is going to take a fractional amount of the traffic that gets sent there. All right, and since that brings me to time, I'm just gonna leave the rest as an exercise for the audience. If anybody is interested in anything that comes after what we've gone over today, please feel free to join myself or Marino or any of our coworkers at booth G9, I believe; G9 in the rose zone. And I will say, I know we weren't able to get to that ingress section, but this lab is gonna be accessible to you forever. So as long as you use that link, refresh, you'll be able to take it any time you want. I wanna welcome you all to our booth, G9 in the rose zone. Come chat with us, because we've given you the foundations for networking.
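For the curious, a sketch of the service load balancing just mentioned, as seen from iptables in kube-proxy's iptables mode; the KUBE-SVC chain names are generated hashes, so yours will differ:

```shell
# Peek at the DNAT rules kube-proxy programs for Services
sudo iptables -t nat -L KUBE-SERVICES -n | head

# For a Service with three endpoints, the per-endpoint rules typically
# use "statistic mode random" so traffic splits roughly evenly:
#   --mode random --probability 0.3333  -> endpoint 1
#   --mode random --probability 0.5000  -> endpoint 2 (half of the rest)
#   (unconditional fall-through)        -> endpoint 3
```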
We want you to come and embrace things like Cilium and the Cilium CNI and the Istio service mesh, and see how these two components work together. And you'll start to realize that your network is getting complex, and you need a platform like Gloo Platform. So come chat with us. Thank you so much for your attention and time. We just wanna say: have a great KubeCon, enjoy Amsterdam, and we'll see you around. Thank you.