Hi folks, welcome to our presentation about AirGap. In the next few minutes we want to give you a short overview of air gap: what it means and why it isn't as complicated as many people think. So let's look at the agenda. First, a short intro on who we are. Then we will discuss a standard environment. After that, we come to the question: what is air gap? And we will show you some important topics around it. Next, Vincent will give us a short demonstration, and we finish with a conclusion. So, who are we? We are from KubeOps, a company that has been working with microservices and Kubernetes for several years. Our portfolio includes training, workshops, and project support for the installation and operation of Kubernetes. Furthermore, we are the producer of a Kubernetes distribution for production environments. You can find our software products on our website, kubops.net. As you see here on the slide: on the left, Tobias, working as a DevOps engineer; in the middle, myself, working as a Kubernetes trainer; and on the right, Vincent, working as a developer. OK, so let's talk about Kubernetes. I want to start with this funny quote from a user. He said: "I barely understand my own feelings. How am I supposed to understand Kubernetes?" Kubernetes is not a flash in the pan. It is here to stay, and its prevalence keeps growing. In the next minutes I want to give you a short overview of some topics around Kubernetes and air-gapped environments. Organizations in finance, health care, public-sector agencies, and other highly regulated industries have additional security and compliance requirements. In these cases there is a need to balance the advantages of highly available, scalable, and redundant cloud-based Kubernetes environments with additional infrastructure restrictions, such as no public internet access or high security standards. Cloud-native does not mean cloud-bound.
Increasingly, companies are seeking to take advantage of these cloud-native features in their own secure data centers. While challenging to implement, it's not impossible. Building an isolated Kubernetes environment starts with infrastructure and dependency planning. One viable option for air-gapped Kubernetes deployments is to develop a homegrown solution, but that requires on-premise deployment knowledge as well as specific planning and expertise. You see here in this picture an example of a typical standard implementation of a Kubernetes cluster: every node or pod can connect to the internet. To prioritize your goals, try to understand the potential of Kubernetes and imagine how your company might be using it in five years. Kubernetes is a great way to run modern, microservice-centric applications. Images are often downloaded directly from internet sources and used in the production environment without further verification. You cannot verify what the creator of the image has put into it. This means that unwanted additional data may have been built into the image. Images should be rebuilt in a controlled environment as much as possible. Each step in building the image should be controlled and scrutinized with a focus on security. Artifacts, meaning any files in the images, like the images themselves, are of unknown origin; it is not possible to control how the artifacts were created and what they contain. OK, now I just want to give you some facts about Kubernetes. These facts come from a survey by Canonical published by Forbes. Kubernetes implementations are growing every year, with more and more machines. The biggest challenge in the CI/CD area is the lack of educated manpower; Kubernetes is not a simple Windows 11 installation. Another problem is the negligence of Kubernetes updates: a lot of IT folks are implementing Kubernetes without a concrete plan for how to update Kubernetes regularly.
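The controlled rebuild-and-verify workflow described above could look like this on the shell. This is just a sketch: Trivy is used here purely as an example scanner (the talk does not prescribe a tool), and the internal registry host is a placeholder.

```shell
# Pull the image into a controlled environment instead of using it
# directly in production.
docker pull nginx:1.25

# Scan it before admitting it to the internal registry.
# (Trivy is an example scanner, not part of the talk's tooling.)
trivy image --severity HIGH,CRITICAL nginx:1.25

# Only after review: retag and push to the internal, trusted registry
# (registry.internal.example is a placeholder host).
docker tag nginx:1.25 registry.internal.example:5000/nginx:1.25
docker push registry.internal.example:5000/nginx:1.25
```

From then on, all deployments reference the internal registry copy rather than the public image.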
Many pros by now know what you can do with namespaces: isolate apps and make Kubernetes more secure. What's interesting is that only about half of the respondents are running a highly available Kubernetes cluster. So what do the other nearly 45% do? That's a good question. This slide shows you a scan of a package on Docker Hub. In this single image, 148 vulnerabilities were found. Misconfiguration is the most common security incident. Users are too trusting and should be a bit more critical in this area. In order to locate and share Docker container images, Docker offers a service called Docker Hub. Its main feature, repositories, allows the development community to push and pull container images. With Docker Hub, anyone in the world can download and execute any public image as if it were a standalone application. Today, Docker Hub counts over 4 million public Docker container images, with 8 billion pulls (downloads) in January 2020 and growing. Analyzed image pulls should top 100 billion this year. What a big number. When you develop a lot of images yourself, you need a good base image to rely on. The base image should be small, capable, and secure. Now, what is a base image? A base image is the image that is used to create all of your container images. Your base image can be, for example, an official Docker image, such as CentOS, or you can modify an official Docker image to suit your needs, or you can create your own base image from scratch. Container images are built by applying layers onto previous images. Each file system layer represents a point-in-time record of the file system state after certain actions. Images that share common file system layers in a registry allow for reduced overhead and greater consistency between images. So that means: minimize the attack surface, because what's not included can't break. That's one of the most important rules for a base image. Make sure that only the software that is actually needed is included.
And as a consequence of this, make sure that you really know which software is included and how it works. For checking the images you can use, for example, the quay.io security scanner, part of the Quay container image registry. And the most potentially harmful container images are coin miners; they account for over 44% of such containers. OK, so let me say something about Kubernetes security. First, enable Kubernetes RBAC, Role-Based Access Control. RBAC can help you define who has access to the Kubernetes API and what permissions they have. RBAC is usually enabled by default, but when you enable RBAC, you must also disable the legacy attribute-based access control, ABAC. Second, use third-party authentication for the API server. It is highly recommended to integrate Kubernetes with a third-party authentication provider, for example GitHub. This provides additional security features such as multi-factor authentication and ensures that the kube-apiserver does not change when, for example, users are added or removed. If possible, make sure that users are not managed at the API server level. The next one is process whitelisting. Process whitelisting is an effective way to identify unexpected running processes. First, observe the application over a period of time to identify all processes running during normal application behavior. Then use this list as your whitelist for future application behavior. Next, turn on audit logging. Make sure that audit logging is enabled and that you are monitoring unusual or unwanted API calls, especially authentication failures. These log entries display a so-called Forbidden status message. Failure to authorize could mean that an attacker is trying to use stolen credentials. The next one is also important: keep your Kubernetes version up to date. You should always run the latest version of Kubernetes, so always plan to upgrade your Kubernetes version to the latest.
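The RBAC idea mentioned above can be illustrated with a minimal manifest: a Role that only allows reading pods in one namespace, bound to a single user. Names here (`demo`, `pod-reader`, `jane`) are placeholders, not from the talk.

```yaml
# Minimal RBAC sketch: read-only pod access in one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: demo
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: demo
  name: read-pods
subjects:
- kind: User
  name: jane        # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Anything not explicitly granted this way is denied, which is exactly the restrictive mindset the talk recommends.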
Upgrading Kubernetes can be a complex process, because you get three or four new releases from Kubernetes per year. And the last one here is: lock down the kubelet. What is the kubelet? The kubelet is an agent running on each node, which interacts with the container runtime to launch pods and report metrics for nodes and pods. Each kubelet in the cluster exposes an API, which you can use to start and stop pods and perform other operations. If an unauthorized user gains access to this API on any node and can run code on the cluster, they can compromise the entire cluster. OK, the kubeconfig, the kubelet config, and the kubeadm config contain important information about the cluster. Besides information gathering, modifying these configs can cripple the cluster. By default, the following directories contain important information. These directories are only relevant for troubleshooting; therefore, there should be no access rights for non-admin users to these paths. Since all paths are immediately visible with systemctl cat kubelet, systemctl should only be usable by admins in the cluster. Some platforms will go so far as to evaluate your cluster against the CIS (Center for Internet Security) Benchmarks for Kubernetes security; here you see the link to that. In addition to platform-level security, there is a rich ecosystem of organizations that focus on container security specifically, such as Aqua Security, Twistlock, StackRox, and NeuVector. So let me show you an example security concept called whitelisting. This concept takes a restrictive approach: the goal is to close all contact points ("deny all") and to enable only explicitly desired or permitted actions. Depending on the criticality, this can be passed on to other departments (segregation of duties) and may have to be checked from a security perspective.
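Inside the cluster, the "deny all, then whitelist" approach can be expressed directly as a NetworkPolicy. This is a generic sketch (the `demo` namespace is a placeholder): it blocks all ingress and egress for every pod in the namespace, so each allowed connection must then be whitelisted with its own policy.

```yaml
# Default-deny for one namespace: every connection must be
# explicitly allowed by an additional NetworkPolicy.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: demo
spec:
  podSelector: {}     # selects all pods in the namespace
  policyTypes:
  - Ingress
  - Egress
```

Note that NetworkPolicies only take effect if the cluster's CNI plugin enforces them.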
For this purpose, a process can be established for requesting certain resources or activations. Kubernetes offers many levels for securing the cluster and cluster operations, and most of the default security settings are not focused on maximum security but are designed for fast deployment and use. The rule of thumb here is: an insecure container can be contained by a secure cluster, but the reverse does not apply. All right, now that we have an overview of the Kubernetes environment and Kubernetes security, let's move on to the question: what is an air-gapped environment? The term air gap means the complete isolation of a device or system from the internet. Air-gapped networks and computers are used when the highest level of security must be provided for the system or the data stored within it. The air gap protects the system from malware, keyloggers, ransomware, and other unwanted access. In a Kubernetes air-gap environment, internet connectivity is severely limited by a firewall; the cluster has very limited access to software repositories or registries. A common solution is to whitelist access to software repositories or registries through an outbound proxy and keep all other connections closed. In this way, the cluster is cut off from the outside world. In an air-gapped Kubernetes cluster, you cannot reach control plane endpoints over the internet. And in a secured environment, a technical user is used instead of the root user; this user has restricted rights on the operating system. Running Kubernetes in offline, air-gapped environments means having private registries and repositories in place for Kubernetes, Docker, and all of the open-source components an organization needs to run Kubernetes in production. It will need to configure all of its Docker images to pull from its internal registries and repositories.
And all of its software and open-source components will need to be tightly integrated, secured, tested for vulnerabilities, and made locally accessible to its application and deployment environments. Here's a list of common problems that occur in daily business. You want to install Kubernetes but don't have access to yum. You also don't have the required sudo permissions. There are also restrictions on the permissions for certain directories: these directories are only relevant for troubleshooting, therefore there should be no access rights for non-admin users to these paths. One practical example: a specific Kubernetes version is not compatible with an upgraded version of iptables. In order to downgrade to an older version, specific yum commands are required, which must first be enabled, to install a specific iptables version. Another example: a request is needed to access the registry through the outgoing proxy so that we can retrieve all the images needed for the cluster and the application. One last practical example: you want to migrate your storage solution from NFS to Longhorn, so the disks need to be mounted and integrated into Longhorn. This requires many sudo commands, but they have to be enabled first. Let's move on to the pros and cons of an air-gapped, secured environment. In such an environment, the advantages in terms of security most likely go hand in hand with disadvantages in terms of productivity. While the limited internet connectivity may protect you from downloading malicious data or from third-party attacks, on the other hand you may lose productivity, and the effort and cost of deploying and maintaining your cluster may increase. Not only does offline installation add complexity during installation, but so do cluster management operations such as machine maintenance, disaster recovery, upgrading to newer versions, deploying security patches, and more.
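The "sudo commands have to be enabled first" situation described above usually means an entry in the sudoers file. As a rough sketch (the user name and command paths are placeholders, not from the talk), a whitelist for the technical user could look like this:

```
# /etc/sudoers.d/techuser -- allow only the specific commands the
# technical user needs; everything else stays forbidden.
# (user name and paths are placeholders)
techuser ALL=(root) NOPASSWD: /usr/bin/yum downgrade iptables*, /usr/bin/mount
```

Each new requirement, like the Longhorn disk mounts, then becomes a request to extend exactly this list, which is the segregation-of-duties process mentioned earlier.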
Ultimately, you will never be 100% secure with an air-gap-only environment. For example, threats from within are still possible. Let's look at a typical example of a daily task: we want to deploy Elasticsearch in a Kubernetes cluster with Helm. For any common Helm chart, the process would be the same. Installation usually runs in three steps. First, a Kubernetes cluster is needed. Second, we need to install Helm, so we run the curl install script from GitHub on the machine. The problem is that we don't even know what exactly we are running. The third step is the deployment of Elasticsearch: you add the Helm repo, pass a values YAML file, and install the Helm chart. This approach can be risky if you are working in a security-oriented production environment. Many Helm charts include images and containers that aren't always necessary, or worse: you really don't know which images are even installed or how many critical vulnerabilities they have. Especially in a Kubernetes cluster, you should make your containers as secure as possible to minimize the chance of outside attacks or privilege escalations from the containers. Otherwise, an attacker could take control of your cluster and use it for their own purposes, for example mining bitcoins. Let's continue with our example. Here, we call the Helm installation script from GitHub in an environment with an internet connection, and as you can see, it works fine. Then we can run the script and install Helm. In an air-gapped environment, we already fail, because the firewall blocks the URL and we can't access GitHub. Let's get to the deployment of Elasticsearch with Helm. You add the Helm repository, pass the values YAML file, and install the chart. You can see that it works well. However, in an air-gapped environment, the repositories cannot be accessed, so the chart cannot be installed. The same scenario applies to Kibana as well.
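On the shell, the three online steps just described might look like this; in an air-gapped cluster, both downloads are exactly what the firewall blocks:

```shell
# 2. Install Helm via the official install script from GitHub --
#    piping an unreviewed script into the shell, and the call that
#    fails behind the air-gap firewall.
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# 3. Deploy Elasticsearch from the public chart repository --
#    also blocked without internet access.
helm repo add elastic https://helm.elastic.co
helm install elasticsearch elastic/elasticsearch -f values.yaml
```

The `values.yaml` here stands for whatever cluster-specific overrides you pass; the repository URL is the public Elastic Helm repo.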
Now, let's move on to the installation of Helm in an air-gapped environment. Of course, the required commands must be in the sudoers file. First of all, we need to install Helm in our environment. To do this, we download the Helm binary on a local machine and move it to the technical user's directory on the admin node, for example using scp. Then we unpack the file and make it executable, as you can see here. Next, we deploy Elasticsearch with Helm. Now that Helm is installed, we download the Helm chart to our local machine and validate it. This means that we check the chart files for unnecessary or unwanted containers and images. To do this, we look through the chart files and find all the images. Then we download the images to our local machine and check them for vulnerabilities, and if necessary, the images are hardened. Then they are tagged and transferred to our registry. Next, we change the values file to use the images from our registry, and usually we will adjust the values for the cluster there as well. Afterwards, the customized Helm chart is transferred to the admin node, for example with scp. Now we are ready to deploy the chart with Helm in our cluster, and as you can see, it works. Okay, thank you, Toby. So let's move on to Vincent; he will give us a short demonstration. Thanks, Ralf and Toby. I will show you how to set up an air-gapped cluster with a simple proxy server. First things first, I'll show you my setup. This is the admin machine. It's not a direct part of the cluster itself, but it manages it. The cluster contains two nodes, one master node and one worker node. All three nodes have very limited access to the outside world. The only available ports and addresses are port 443 for HTTPS communication, 6443 and 7443 for communication inside the cluster, and port 80 for HTTP connections. I have already prepared port 32454 as well, which I will use for the Docker registry.
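A node-level port whitelist like the one in this setup could be sketched with firewalld, assuming the default zone already denies everything else (the tool choice is an assumption; the talk does not name one):

```shell
# Open only the ports the cluster actually needs; everything else
# stays closed by the default-deny policy of the firewall.
firewall-cmd --permanent --add-port=443/tcp     # HTTPS
firewall-cmd --permanent --add-port=80/tcp      # HTTP
firewall-cmd --permanent --add-port=6443/tcp    # Kubernetes API server
firewall-cmd --permanent --add-port=7443/tcp    # internal cluster communication
firewall-cmd --permanent --add-port=32454/tcp   # NodePort for the Docker registry
firewall-cmd --reload
```

Outbound whitelisting of single addresses, like the package hub, would typically happen on the upstream firewall or proxy rather than on the nodes themselves.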
And the only available address is hub.kubernative.net, which is the package hub, the place where all packages are stored, and it is available to all the machines in the cluster. But that's it. I am ready, and my cluster is ready as well for the deployment. So, I will deploy a Sina package registry into my cluster. All I need to do is call sina search. Sina search gives me an overview of all the packages available in the hub. As you can see, those are quite a lot of packages. To filter for the package I want, I can call sina search with the --ps flag, and I search for Vincent, since that is my account. Now there are only a handful of packages left, and the package I want to deploy is the package docker-registry in version 2.7.1. In order to install this package, I copy the value in the install row, paste it after the sina install command, and pass the values file, which I will explain later on. Now the package gets downloaded, and it will deploy the registry into my cluster. If you want to know what exactly happens here, I will show you very soon, but I can tell you this much: it's not only a deployment, it's much more. So, the installation is finished. We take a look at the pods: there's the registry pod. If we take a look at the services: there's the service for the registry. Keep in mind, I already prepared port 32454, and I will show you one thing more. If I curl the catalog by curling the local registry, there is already one repository containing one image. And I will show you how I did it and how easy it is to create such a Sina package. Now, what you need to create your very own Sina package is a machine with access to the worldwide web, as well as Helm, Docker, and Sina installed; in short, this machine. This machine is totally clean; it has no images available right now, so we start from the very first point. I like to create a folder when I create a Sina package, to keep things clean and tidy. Let's call this one kubecon.
And in this folder, I simply have to type in sina create, which will create two files. The first file is the package.yaml, which is like a configuration file for the Sina package. The second file is the template.yaml, which enables us to pass user values through the installation process. But since we want to make a high-security deployment, we need all the dependencies available on our machine. Since I used a Helm chart for the Docker registry, I need to pull the Helm chart first. Now it's on the machine. This chart uses an image, the Docker registry image in version 2.7.1. So what we do next is docker pull registry in version 2.7.1. Now it is available, but since we want to include it in the package itself, we have to store it in a file. We can do this by calling docker save with the image name and the version, and we define an output file, registry.image. Now we have the image as well, and all the dependencies are in this folder. So let's take a closer look at the package.yaml. The package.yaml defines our Sina package, as I said before. For example, the name can be set to kubecon-registry, the description to "registry package". The version is changeable as well; I like to take the version of the included image, in this case 2.7.1. Now we have to list all the files included in the package. We want the Helm chart included; you can name it whatever you want, I simply like to call it helm, and the file name is docker-registry with version 1.3.2. And we also add the image, with the name registry.image. Now, we could also take images from an online registry, which we would do in the field "containers". But since we operate in a high-security cluster, we don't want any more outside contact than we need, so we simply delete these lines, as well as these in the installation part. The installation part should also include the Helm chart and the image, so we list them there as well.
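Put together, the package.yaml described here might look roughly like this. The field names are inferred from the spoken walkthrough, not from Sina documentation, so treat this as an illustration of the structure rather than the exact format:

```yaml
# Sketch of the package.yaml as described in the demo
# (field names inferred, not authoritative).
name: kubecon-registry
description: registry package
version: 2.7.1
files:
  helm: docker-registry-1.3.2.tgz   # the pulled Helm chart
  image: registry.image             # the saved Docker image
install:
  - helm
  - image
```

The deleted "containers" field is the one that would pull images from an online registry, which is exactly what we avoid here.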
And now we come to the most interesting part of Sina: the task list. The tasks get called in order every time the package gets installed. So if we called sina install kubecon-registry now, it would first print the example message from sina create, then use the template engine to merge the template.yaml and the given user values into a result.yaml, and then print another message. We want to take it one step further and create a really cool package, so we need to add some lines. Since we want to deploy a Helm chart which uses an image, we need to make that image available on our machine. In order to do so, we call the cmd plugin, which simply executes a command in your shell, and load registry.image with docker load. This makes the image available locally. Next, the template plugin gets called, so that we can change the values for the installation of the Helm chart afterwards. After both are done, we can finally install the Helm chart. To do so, we call helm install, we define which Helm chart should get installed, in this case docker-registry 1.3.2, and the values which can change the deployment are passed in the result.yaml. Okay, after this we should have a deployed and up-and-running registry. But we can't connect to it yet, since it has no certificate and we have no way to connect to it via HTTPS. So we have to add it to the insecure registries. To do so, we have to edit a file, so we call the edit-file plugin. The operation we want to perform on that file is an override; the file type we want to override is a text file, and the file path is /etc/docker/daemon.json. We want to override the very first line, and the value we want to pass is "insecure-registries", where the insecure registry is called localregistry on port 32454. Okay, now we are able to reach the registry, so let's use it and push an image to it. What we need to do first is tag the image. To do so, we call the docker plugin with the option tag.
The source image is the image we loaded here, docker.io/registry in version 2.7.1, and the new image name should be localregistry:32454/docker.io/registry:2.7.1. Okay, in order to push it, we need to give the registry a little bit of time to come up, so we call another command, namely the sleep command, for 10 seconds, just so the registry gets enough time to get up and running. After that, we call the docker plugin once more, this time with the option push, and we push the image localregistry:32454/docker.io/registry in version 2.7.1, and that's it. Now let's take a closer look at the docker-registry Helm chart. To do so, we simply extract it, go into the folder, and take a look at the values.yaml. And right there is a big problem in the service: the service type is ClusterIP, which is not ideal, since we want to reach the registry directly. To do so, we have to change the type to NodePort and define a node port. We could do this directly here in the values.yaml of the Helm chart, but there is a better way: the Sina way.
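For comparison, the direct edit in the chart's values.yaml would be just this change:

```yaml
# docker-registry chart values.yaml -- expose the registry via NodePort
service:
  type: NodePort      # was: ClusterIP
  nodePort: 32454
```

The drawback of editing the chart directly is that the change is baked in; the template approach shown next keeps the chart untouched and lets the user supply these values at install time.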
Again, we take a look at the template.yaml. Here we can define a template which gets merged with the user values. Our template should have the same structure as the part of the values.yaml we want to change. So we say we want to change the key service, and within it the type, and we give a reference to the user values by writing values.type. We do the same for the node port with values.nodePort. So if we install this Sina package and pass a values.yaml with the key type and a value, and the key nodePort and a value, those get placed in right here. Okay, we are ready to go. In order to wrap it up, we simply call sina build, which takes all the given files and puts them into the package. In order to share the package, we need to log in. To do so, we say sina login -u with the username and the password, and that's it. Now we take a short look at the package.yaml, just to make sure: the name is kubecon-registry and the version is 2.7.1. Okay, the last action, to share it with the world and everyone: we say sina push. Now, if you have packages that you can't share with everyone but still want to use Sina, don't worry, we also offer a private hub for you to use. If you're interested, feel free to contact us after this presentation. And that's it, our package is shared. We go back to our cluster. What we do first is delete our old deployment, by typing helm delete and the deployment name, so that our cluster is clean again. We also remove all images, just to prove that we take the images from the Sina package. Okay, and again we say sina search, filter for my name, find the kubecon package in version 2.7.1, and we say sina install. But before that, we just take a look at my values.yaml: there is the node port given and the type given, perfect. So: sina install -f values.yaml, and that's it. Again, we can check the services: it is running on port 32454. We take a look at the catalog, and there it is, the image. And that's it, that's how easy it is
to create an air-gapped environment with Sina, to share packages, and to automate everything you want to do. So, back to you, Ralf and Toby. Thank you, Vincent, for this great demonstration. Let's move on. One of the latest State of Kubernetes and Container Security reports by Red Hat found that 24% of serious container issues were vulnerabilities that could be remediated, almost 70% were, as we said, misconfigurations, and 27% were runtime security incidents. Here's an overview of our so-called five-level hardening procedure that we offer to our customers. If you want the details, please visit our website or contact us about air gap and hardening. Now we come to the conclusion. Kubernetes can be installed quickly, but please take care: Kubernetes has some open doors, and you certainly don't want everyone coming in through those doors. That means a well-designed and safe security concept is mandatory before you install anything. And air gap is a really tough topic, definitely worth looking into. It cuts both ways: yes, air gap reduces productivity, and air gap increases effort and cost, but a hacked environment costs more. How much did Log4j cost? Air gap protects against malicious data downloads and third-party attacks. So now we are at the end of this video on air gap. We would like to thank you very much for your time. If you have any questions about Kubernetes in air-gapped environments, don't hesitate to drop us a line or contact us via our social media accounts; we are looking forward to answering your questions. You can also reach us through our website, kubops.net. Thank you very much, and bye bye.