Hi, and welcome to the talk "Containerlab: a modern way to deploy network topologies for labs, CI, and testing." My name is Roman. I'm a systems engineer with Nokia, where I primarily work on network automation with a model-driven angle. I love open source communities and try to contribute to them as much as possible. If you want to talk to me, you can look me up on LinkedIn or ping me on Twitter; my Twitter handle is ntdvps.

So let's talk about labs. They are no longer a nice-to-have feature; they are a necessity for every networking team. Often the labs are the only way to tell that things work as they should, or that they don't when they shouldn't. So being able to run those labs with virtual networking topologies is really important. Let's recap what options we currently have to run those labs, what advantages they have, and maybe identify some disadvantages as well.

Of course, it makes sense to start with the elephant in the room: the network emulation software that has been specifically built to answer the need to run networking labs. Projects like EVE-NG and GNS3 are the prime examples here. They are proven; we've been running them in our labs for years, and we know how they work. They are free and available for everybody to download, although some premium features of EVE-NG might require some extra bucks. They have a nice UI, so you can create a topology by dragging and dropping nodes onto the pane and interconnecting them with your mouse. That is really nice to have. On the flip side, though, these projects are really VM-centered, which means they have weak container support, and I will talk later about why I think container support is really important for the network emulation of today and tomorrow. They are also quite heavy and not fully open. If we need to install EVE-NG or GNS3, we need to allocate a server or a bare-metal machine and keep it allocated for as long as we need to run the labs, so we cannot really deploy and destroy labs on demand with resources allocated dynamically. The nice UI these tools have can even be seen as a disadvantage: if you want to generate your topology, you need a text file that defines it, and if you want to push your topology to a Git repository, the UI might stand in your way as well.

What else do we have? We have a cluster of projects which are basically VM orchestrators, such as OpenStack. They are very common in medium and large enterprises, where they are used to run VMs, and it is quite tempting to reuse those orchestrators to run networking labs. But here the problems begin. Using those projects as network emulation software requires quite a lot of integration effort: you need to know how to work with the primitives these projects expose. They are, of course, still VM-centric, because they are VM orchestrators. Sometimes it can even be challenging to get a clean data path: for example, if your network topology uses Linux bridges in the data path, it is probably impossible to have LACP frames flowing, or even simple things like LLDP. And these projects have neither a UI nor a CLI specifically made to deploy labs or networking primitives. They do have UIs and CLIs for generic purposes, like working with snapshots and images or deploying instances, but those are not built to run network topologies.
And of course, we can have custom automation solutions. Those bespoke scripts of thousands of lines of bash that deploy libvirt VMs or create containers can really answer your needs; they can be a silver bullet. The only problem is that they are tailored only for you, so reusing those scripts in a customer's environment, a partner's environment, or even a different department's environment might be really challenging.

As you see, I mentioned weak container support as a downside of both the network emulation software and the VM orchestrators. Let me explain why moving from VMs to containers can bring substantial benefits to networking labs. First of all, by making containers first-class citizens of our networking labs, we welcome all the containerized network operating systems, such as Nokia SR Linux, Arista cEOS, Juniper cRPD, and others. Next to that, by using containers we are inherently going light and fast: containers are typically lighter than VMs, so the same topology will consume fewer resources and will be very fast to deploy. Also, by using containers, we are bridging the gap between IT workloads and networking topologies, because we can leverage the same familiar container features such as bind mounts, port exposure, environment variables, and so on.

When your lab is defined in a declarative fashion in a text file, it becomes really Git-friendly. You can push it to a Git repository and use all the familiar Git features such as collaboration, PR reviews, versioning, and so on. Using Git to store your networking lab topologies again bridges the gap between normal IT workloads and networking labs.

One of my most loved consequences of using containers for network topologies is the new way we work with network operating system images. Before, we used to work with QCOW images, deploy VMs out of them, and upload those images to Dropbox, OneDrive, or FTP to share them with somebody. Now that we are using containers, we are inherently using container images, and container images are pushed to container registries. So you can version your images and push them to a container registry, and everybody else can pull them if they have the necessary rights and authorization. And by running networking labs as containers, you make it really easy to integrate those topologies into your CI/CD pipelines. Really, it is just a bunch of containers and links between them, so you deploy them pretty much like any other IT workload in your CI/CD pipelines. It is fast to deploy and lightweight, and thus really easy to integrate network topologies into whatever CI/CD pipelines you might have.

And that is where Containerlab comes into the picture. It tries to bring all the benefits we just briefly discussed and provides a CLI for setting up networking labs with container-based nodes. It can be used to deploy complex topologies like the one you see on the left-hand side, or to create the small topologies you need for ad hoc testing, dev integrations, and similar tasks. We have a Containerlab documentation site, which you see at the bottom of this slide, that will probably answer all the questions you might have about this project. It's quite a comprehensive documentation portal, so please do check it out. But although we invested a lot in the documentation, I really believe that learning by doing is the best way forward.
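Speaking of doing: the registry-based image workflow I described is just the familiar Docker routine. A minimal sketch — the SR Linux image path is the public one mentioned later in this talk, while the registry host and version tag are placeholders of mine:

```bash
# Pull a containerized NOS image from a public registry
# (SR Linux is distributed via the public GitHub Container Registry).
docker pull ghcr.io/nokia/srlinux

# Version it and push it to your own registry so teammates and CI can pull it.
# registry.example.com and the 21.6 tag are illustrative placeholders.
docker tag ghcr.io/nokia/srlinux registry.example.com/lab-images/srlinux:21.6
docker push registry.example.com/lab-images/srlinux:21.6
```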
So I'd like to take you on a journey where we will first install Containerlab, then get to know the topology definition syntax Containerlab uses to define and deploy labs. We will then, of course, deploy the lab and manage it. Then we will see how to access and configure the nodes that are part of our lab. We will talk about configuration persistency, because it's quite important to understand how to work with configuration: how to save it, how to retrieve it, and how to make your nodes start with a predefined configuration. And the last step will be to package our lab and push it to a Git repository.

The lab I chose for this demo is a three-node topology that demonstrates a route reflection use case. We have three BGP speakers here. The GoBGP Linux container will inject a route, 192.168.10.1/32, towards a containerized network operating system, Arista cEOS. cEOS will act as a route reflector and will reflect this route towards another containerized network operating system, Nokia SR Linux. SR Linux should receive this route in its routing table, and we would like to verify that. You can see the addressing information and the timeline of this lab on the left-hand side.

Installing Containerlab is super easy. All you need to do is type the one command highlighted in blue, and the Containerlab binary will be installed on your operating system. We support three major operating systems: Linux, Windows with WSL2, and macOS. For other installation options, you can check the documentation site, where we outline everything available. The only thing you need to have on your system is Docker; the rest is packaged into the Containerlab binary itself. So let's install Containerlab on our system. First we go to the Containerlab documentation site, where we navigate to the installation section to get the installation command. We choose the auto-installation script, go to our system, paste it, hit Enter, and in three seconds Containerlab is installed. It is really as easy as that.

With the installation step out of the way, we can see how Containerlab really works. It all starts with writing a topology definition file. The topology file contains the links, the nodes, and the parameters your lab needs to have. Containerlab then takes this file, deploys the real nodes, and wires the links between them. Let's start writing this topology definition file. We will create a file named rr.clab.yaml, where rr stands for route reflection, and this file is written in YAML syntax, so we will follow the basic YAML rules. First we need to give our lab a name, and that will be rr. Then we have to define the topology container, which has a nodes subcontainer, and the nodes subcontainer hosts all the nodes we need to deploy in our lab. Now, if you remember, we'll have three nodes, and I will start with just the first one, the Nokia SR Linux node. Each node must have a name, which is basically an arbitrary string you give to your node to distinguish it from other nodes. Another mandatory parameter for every node is its kind. Kind basically tells Containerlab what this node is: is it an SR Linux container, a cEOS container, a basic Linux container? You provide this information with kind. And of course, since we are working with containers, we need to specify the container image this node will use to start.
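A minimal sketch of that first step — assuming the kind name srl that Containerlab used for SR Linux at the time of this talk, and a node name of my choosing:

```yaml
# rr.clab.yaml -- the beginning of the topology definition
name: rr

topology:
  nodes:
    srl:                            # arbitrary node name
      kind: srl                     # tells containerlab what kind of node this is
      image: ghcr.io/nokia/srlinux  # container image the node starts from
```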
The image is really the same string parameter you would use in a docker run command or in a Kubernetes pod specification. Now that we know how to define a single node, let's take a step further and define all three nodes our lab must have. As you see on the left-hand side, we defined three nodes: the SR Linux one we defined in the previous step, then we added the ceos node, which is the Arista cEOS container, and we also added our GoBGP node. Notice how the ceos, gobgp, and srl nodes all have different kinds: these are three distinct containers with three different sets of rules for starting them, hence the different kinds. Of course they use different images, and that is also reflected here.

Our logical view at this moment has three nodes deployed, but no links whatsoever. To interconnect the nodes of your lab, you work with links. We specify a links section in the topology file, which consists of a list of endpoints. Each endpoints element in turn defines the beginning and the end of a link. Putting this into context: if we need to connect SR Linux interface e1-1 to cEOS interface eth1, we create an endpoints entry that says my first end is srl's e1-1 and my remote end is ceos's eth1. The same goes for the other link in our topology, between cEOS interface eth2 and GoBGP interface eth1.

The cool part about Containerlab is that those 17 lines we defined here are everything we need to start the lab. To deploy a lab, there is the containerlab deploy command, where you supply the topology file you created; Containerlab immediately gets to the deployment process and shows you a summary table once it's finished. But instead of looking at slides, let's really go and see this live. I have created the ones2021 directory, where I have my topology file rr.clab.yaml. If we look at that file, we see it is basically the same; I just copy-pasted it from the slide. It's exactly the same content: we have three nodes, we have two links, and that's all we need.

One important thing I would like to articulate specifically is that those images need to be available to Containerlab in order to deploy the labs. So how do you get those images? For Nokia SR Linux it's really easy: you can pull the image directly from the public GitHub Container Registry without any registration or licensing agreements. With Arista cEOS it's a bit different, because you need to have an account with Arista, but you can download the archive from their site freely; then you load this archive as a Docker image and you are done. The GoBGP container is really just a basic Linux container with GoBGP installed, hence the kind linux here.

I wanted to show you that I have the images already pre-pulled. I have the Nokia SR Linux container pulled from the public GitHub Container Registry, and I have downloaded the cEOS tar archive and loaded it as a container image, but I do not have the GoBGP container, for example. The reason is that I wanted to show you that Containerlab can pull images on the fly when you start to deploy the lab. So let's try and deploy this lab. I type containerlab deploy -t, specify the path of my topology file, and hit Enter. See, right now Containerlab is actually pulling the network multi-tool container image, and since it's hosted in the GitHub Container Registry it can do that, because my Linux machine has access to the internet.
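For reference, here is the complete topology file as I reconstructed it from the talk, together with the deploy command just issued. The image tags and the exact multi-tool image path are assumptions of mine; the kind names (srl, ceos, linux) are the ones Containerlab used at the time:

```yaml
# rr.clab.yaml -- the complete three-node route reflection topology
name: rr

topology:
  nodes:
    srl:
      kind: srl
      image: ghcr.io/nokia/srlinux
    ceos:
      kind: ceos
      image: ceos:4.26.0F                       # tag is a placeholder; image loaded from the arista.com archive
    gobgp:
      kind: linux                               # plain Linux container with GoBGP inside
      image: ghcr.io/hellt/network-multitool    # multi-tool image path is an assumption

  links:
    - endpoints: ["srl:e1-1", "ceos:eth1"]
    - endpoints: ["ceos:eth2", "gobgp:eth1"]
```

```bash
containerlab deploy -t rr.clab.yaml
```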
A few seconds later, once all the images are pulled or found in the local image store, Containerlab proceeds with creating the containers and the wires between them. The last step is to wait until the Arista cEOS node comes online, and Containerlab configures the management address for this particular node. Okay, about 40 seconds after the deployment started, Containerlab shows us the summary table. In the summary table we see that three containers, or lab nodes, were created. Each name consists of three elements: the prefix clab, the lab name rr, and the node name. All these parts except the prefix come from the topology: we named our lab rr and we named the cEOS node ceos. We have the container IDs, we have the images used to spin up these containers, we see the kinds, but most importantly we have the IP addresses assigned to the management interfaces of those containers. And this brings me to my next section: how do we actually get access to those nodes?

There are two ways to get access to the nodes. First, you can execute a process inside the container; typically you would do this with the docker exec command. For example, to execute the sr_cli process, which is the CLI application inside the SR Linux container, you can do docker exec -it, the name of the container, and then the name of the process. But we can also connect to the SSH server that runs inside these containers, because these containers are really network operating systems: they do run SSH servers. So we can use an SSH client and connect to those nodes directly, using either the container name, such as clab-rr-ceos, or the address from the summary table that Containerlab shows at the end of the deployment process.

Let's now connect over SSH to the SR Linux container. I use ssh admin@, since admin is the default username for the SR Linux container, and I copy-paste the node name. And just like that, I get access to the SR Linux CLI, so I can now configure the node and save its configuration for future use. To connect to the cEOS node, I can use the other trick: the exec approach. I do docker exec -it, copy-paste the name of the container, and call the Cli application, which is the CLI process inside Arista cEOS. And just like that, I get access to the cEOS CLI.

Let's talk about how we can configure nodes once the lab is deployed. We already saw that we can get access to the CLI of the nodes, and that is probably the most obvious and simple way to configure something on a network operating system. But apart from the CLI, we also try to enable all the programmatic interfaces on the nodes Containerlab supports, such as gNMI, NETCONF, et cetera, so you will be able to use those interfaces on the nodes you deploy with Containerlab. We can also use something that comes from the Docker land: configuration bind mounts. We can mount files from the host into the container, and if the operating system expects to see a configuration file at a certain path, we can use those bind mounts to make the network operating system read this configuration. We can also, of course, work with third-party configuration management tools such as Scrapli, Ansible, Nornir, et cetera: because your nodes are directly reachable from the host, you can use those tools without any issues. And soon we'll have an embedded configuration engine inside Containerlab, so you will be able to generate configuration for many of the supported nodes using variables you define as part of your topology.
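To recap those two access options in command form — a minimal sketch using the clab-<lab>-<node> names from this lab and the default SR Linux admin user:

```bash
# Option 1: SSH to the NOS running inside the container
ssh admin@clab-rr-srl                  # lands in the SR Linux CLI

# Option 2: exec the CLI process directly inside the container
docker exec -it clab-rr-srl sr_cli     # SR Linux CLI application
docker exec -it clab-rr-ceos Cli       # Arista cEOS CLI application
```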
All right, now that we have our lab deployed, we could jump right into the configuration of BGP and route reflection. But before we do that, it's really handy to remind ourselves which nodes we actually have in our lab, and for that you can use the containerlab inspect command. When it is provided with the --all flag, it lists all the labs that are currently deployed. We have just one lab, which consists of these three nodes.

I start with configuring cEOS first. I copy-paste its name, SSH into it, and just paste the config I have prepared. This configuration basically sets up the link IP addresses and configures BGP. I save this configuration and proceed with the SR Linux config. The same way, I SSH into it, copy-paste the prepared SR Linux config, and save it. What I would like to do now is test whether my BGP peering between SR Linux and cEOS is up and running. For that, I put a watch on the show command for the BGP neighbors of the default network instance (show network-instance default protocols bgp neighbor). There is a single neighbor, the link IP of cEOS, and I want to see whether I receive any IPv4 routes. Right now, you see that SR Linux reports no routes received from the cEOS neighbor: the neighbor is up and running, it just happens to have no routes to send yet.

Okay, so now it's time to configure GoBGP, our injector of the route in this route reflection topology. To connect to GoBGP I can also use the container name, but the GoBGP container doesn't have any SSH server running. So what I do instead is leverage the execution of a process inside the container: I paste the name here and just execute the bash shell. Now that we are in the bash shell, we can start working with GoBGP and actually inject the route. Again, I copy-paste the GoBGP configuration here. As you see, the GoBGP configuration is a bit more involved, because you need to create the GoBGP YAML file that GoBGP uses to configure itself, and then we create the announcement by adding the address with the attributes I chose to use here. We save this and execute it, and in a few seconds GoBGP starts to announce the route. Now, if I switch back to SR Linux, you see that the watch reports that we now have a single route received from cEOS.

So just like that, we deployed a three-node lab really fast; it doesn't consume many resources at all compared to VMs, and we configured the route reflection demo use case.
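For reference, here is that GoBGP step condensed into commands — a hedged sketch where the ASN, router ID, and neighbor address are illustrative placeholders rather than the exact demo values, while the gobgpd/gobgp invocations are the standard GoBGP CLI:

```bash
# Inside the gobgp container (reached via: docker exec -it clab-rr-gobgp bash)

# Minimal daemon config; AS number, router-id, and neighbor address are
# placeholders -- in an iBGP route reflection setup all speakers share the AS.
cat > /gobgp.yml <<'EOF'
global:
  config:
    as: 65001
    router-id: 10.0.0.2
neighbors:
  - config:
      neighbor-address: 10.0.0.1   # cEOS link IP (placeholder)
      peer-as: 65001
EOF

# Start the daemon with the YAML config, then announce the /32 route
gobgpd -f /gobgp.yml -t yaml &
gobgp global rib add 192.168.10.1/32 origin igp
```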
But it was a bit too involved: we typed quite a lot of commands on the CLI. Can we do this in a more declarative form, where we would just specify the resulting configurations and the nodes would pick them up? Yes, we can. Containerlab has a state directory where it keeps all the state information for the nodes of a certain lab. This state directory is called the lab directory, and it is named clab-<lab-name>; for the case of the route reflection lab, Containerlab creates a directory called clab-rr. In this directory we have the state kept for the nodes of this lab, and we can use it to get the resulting configs out.

But before we do that, we can actually save the running configuration to the startup configuration. For that, we created the containerlab save command, which executes the appropriate command for every supported network operating system to save the running configuration to the startup file, and we can then get the startup file from the lab directory. Let me show you how it works. On our container host, I switch to the directory and run containerlab save -t with our topology file. Containerlab performs the configuration save for both cEOS and SR Linux, because it knows how to do that for each of them. Now, if I go to the lab directory, clab-rr, I have two subdirectories, for ceos and srlinux. If I drill into the ceos directory, there is a flash directory created by cEOS, and in it a startup-config file. You can see that this is actually the startup configuration of the node, and it already contains all the configuration we provided through the CLI, such as BGP and the interface addresses. So now we can take those files and save them to our own directory, with one purpose: we want to use these files as startup configuration the next time we deploy the lab.

Once we know how to save the configuration and extract it from the lab directory, we can modify our topology file and specify that we would like our nodes to start with a certain startup configuration. Using the startup-config parameter of a node, we specify a file present on our container host that Containerlab will use to deploy the node with that startup configuration. For the GoBGP node, which is of the linux kind, we cannot use that, but we can use the binds instruction and say that the shell script we have on our container host should be mounted into the container namespace, so the container can use that script, as shown in the sketch below.
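A sketch of what those node definitions might look like with the extracted artifacts wired in — the file names here are illustrative placeholders matching the structure just described:

```yaml
# rr.clab.yaml -- nodes now reference the extracted config artifacts
topology:
  nodes:
    srl:
      kind: srl
      image: ghcr.io/nokia/srlinux
      startup-config: srl.cfg              # config saved earlier via `containerlab save`
    ceos:
      kind: ceos
      image: ceos:4.26.0F
      startup-config: ceos-startup-config  # the flash/startup-config pulled from the lab directory
    gobgp:
      kind: linux
      image: ghcr.io/hellt/network-multitool
      binds:
        - gobgp.sh:/gobgp.sh               # host script bind-mounted into the container
```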
Now, what does this give us? It creates a topology file that has everything already embedded in it, and that's one of the very nice benefits of using startup configurations within a Containerlab topology. Anyone who pulls this lab, with those files as part of the repository, will be able to deploy it with all the configuration already done, and will be able to demonstrate this use case or test it in CI without doing any configuration over the CLI. And that is exactly what I did here: I created a repository on GitHub named ones2021. As you see, this repository contains all the files we previously extracted from the running nodes, and it explains how to deploy the lab, how to execute the use case, and how to verify the operations. Now you can pull this repository, and with just two commands you can perform the full use case. That is really valuable, and it answers the infrastructure-as-code claim.

So far we have been working with containerized network operating systems in our lab, but Containerlab supports not only containerized systems but also the traditional VM-based network OSes. Currently we count nine different vendors and 15 different network operating systems that Containerlab supports, and the split between containerized and VM-based network operating systems is currently at a 60/40 ratio. This slide shows which vendors and their corresponding kinds are supported by Containerlab. As you see here, we make a distinction between the containerized network operating systems and the traditional VM-based ones. To help you get started with your favorite network operating system, you can go to the Containerlab documentation site and open the lab examples section, where you will find lab examples for each operating system we support.

Now, if we put the traditional lab emulation software projects and Containerlab head to head, we can see the differences between them. Containerlab focuses on using your labs as code and working with them as you would work with code. It allows you to create your labs and push them to Git repositories, unlocking all the collaboration features Git offers. It also enables repeatable labs, because you now work with container images and your lab is an artifact you can store somewhere. It has a very small footprint and is light and fast to deploy, which also plays quite an important role in CI/CD pipelines. At the same time, it supports fewer network operating systems: we do support the major ones, but we are not trying to bring every other network operating system into Containerlab, although we are open to external contributions. And probably one of the most distinctive features is that Containerlab is UI-less. We do not have any UI, because we think the way to work with Containerlab labs is through textual files, working with them as you would work with code.

Okay, so if Containerlab sounds interesting and you feel you could use it in your environment, I think the first step would be to explore the containerlab.srlinux.dev documentation site, because it has a lot to offer; there is much more in Containerlab than I covered here. Then you can try to create a lab with a network operating system of your choice and see how it plays out. If there is a missing feature or you have a nice idea, please go to the GitHub issues or discussions, and we can talk to you there. If you want more real-time communication, we have a Discord server specifically for Containerlab, so do go and join it. And if Containerlab proves useful to you, you can always thank us by starring the repository on GitHub. With that, I thank you for listening, and see you next time.