My name is Manuel Pascuale. I'm the co-founder and CTO at USAIS and the organizer of the DevOps Meetup in Turin, Italy. Welcome to the Open Source Summit Europe 2020. This session is an introduction to containers and orchestrators. Here you have my details, including my Twitter account, in case you want to reach me to discuss containers, Docker, Kubernetes, or DevOps culture and practices in general. The goal of this session is to explain the basic concepts of container technologies and orchestrators, Docker and Kubernetes, so that they can guide you through further study, analysis, and exercises to practice with these technologies.

Let's start with containers. There are different ways to define a software container. A container is a standard unit of software that packages application code and its dependencies; it is operating-system-level virtualization. We'll see in a later slide how container technology compares with the virtual machine virtualization we have been running for decades now. A container, in the end, is a group of processes. In the image on this slide you can see that the container contains a set of processes, but if we inspect the processes from the host operating system, we see exactly the same processes that are inside the container. It's a group of one or more processes restricted to a private namespace. Namespaces and cgroups are features of the Linux kernel that allow isolating a group of processes so that they see only a subset of the machine's resources.

This is what makes the clear distinction between virtual machines and containers. Virtual machines are based on hypervisors that virtualize the hardware: you have a physical machine, a physical server, with a hypervisor installed.
The hypervisor lets each virtual machine see virtualized hardware inside the physical one, segregating and distributing the physical resources into groups of virtual resources, where each virtual machine runs its own operating system and its own complete setup in terms of libraries, applications, and services.

Containers, as we said before, can be seen as operating-system virtualization instead of hardware virtualization. You have one host, physical or virtual (in most cases, on public cloud services, what you're using are already virtual machines rather than physical machines), where on top of the operating system a container engine such as the Docker Engine allows you to virtualize independent spaces inside the operating system for independent applications. Shared libraries are not duplicated but are used by each container from the same source; at the same time, if you need to isolate a specific version of a library in one container, it will run without conflicting with the libraries installed and available in other containers, because they are isolated from each other using the namespace and cgroup technologies we mentioned before.

Does container mean Docker? This is a question we got in various cases when talking with people who were starting to approach this technology. Docker is by far the best-known container technology, but it's not the only one. rkt and runc, but also LXD, LXC, or Hyper-V containers in the Windows area, are different technologies for running containers inside an operating system.

Okay, we have containers, but this session is an introduction to containers and orchestrators. So why an orchestrator on top of containers? With a container I can isolate my application and create a package that is easy to deploy and easy to distribute. You can run it and be sure that you will always get the same set of libraries; you will not have conflicts in terms of configuration or software versions.
But when you start running many containers, not two or three containers per machine but hundreds of active containers, running on a varying number of virtual machines that can be started, stopped, and restarted, then without an orchestrator you can find yourself in trouble trying to balance which container needs to run on which machine, which container needs to be restarted, which needs to be left stopped in case of an issue, and which containers you need to scale.

What does an orchestrator do? The orchestrator looks at the status of the containers on the different machines that constitute the cluster it manages. Following the rules you gave it, the configuration describing the state you want, the orchestrator takes ownership of stopping, starting, and moving containers between the available nodes, to ensure that you always have the specific number and kind of containers you want up and running, independently of the state of the individual nodes.

As we asked before for containers and Docker, the same question arises for orchestrators and Kubernetes. Does orchestrator mean Kubernetes? Not necessarily. Kubernetes is again the best known among the orchestrators, but it's not the only one. Docker has its Swarm technology; you have Nomad and Mesosphere as other alternatives; and you also have proprietary alternatives from the different cloud providers that can orchestrate containers running on their platform as a service.

So why are Docker and Kubernetes the ones mentioned before, and the ones we refer to in this session? At USAIS, as at other companies that made the same kind of consideration, we selected Docker and Kubernetes as our reference for container orchestration because they are the solutions with the largest community.
They are fully supported by all major cloud providers, and they are fully supported for on-premise configurations as well. They are part of the Open Container Initiative and the Cloud Native Computing Foundation. Docker supports Kubernetes: if you install the Enterprise Edition it includes Kubernetes, and Docker supports migration from its own orchestrator, Swarm, to Kubernetes. And Google's Borg is the foundation of Kubernetes, giving it a history that is longer than the history of the Kubernetes name itself.

Let's get to the basics of containers: how a container is created, and which terms we should use. I was using the term container a lot; a container is the running instance of an image, and what you basically need to run a container is the image. The image is the immutable package of the application and its dependencies. It is composed of multiple layers, so you can reuse the same layer in different images without having to duplicate it if it's exactly the same subset of libraries and code. It is created through a Dockerfile, which is basically the set of instructions used by the Docker daemon to build images and save them. These images can then be distributed through a registry, a repository of images that can be pulled to a system in order to execute them.

The image is the immutable package; the container is the running instance. The running instance is not immutable: you can enter a container and make changes inside it, as you can do when you connect to an operating system. But the container derives from the image. It means that if the container is stopped and you start a new container, the new one will not have the changes you made in the running container you were connected to before.
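As a sketch of what such a Dockerfile can look like (the base image, file names, and command here are illustrative assumptions, not from the talk):

```dockerfile
# Start from an existing base image; each instruction below adds a layer,
# and identical layers can be shared between images.
FROM python:3.9-slim

# Copy application files from the build context into the image.
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# Default command executed when a container starts from this image.
CMD ["python", "app.py"]
```

Building this with `docker build -t myapp:1.0 .` produces an immutable image; `docker run myapp:1.0` then creates a new, mutable container instance from it.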
It will start again exactly from the image as it was packaged, with all the content in terms of configuration, code, and libraries that was in its immutable package. If you have to change the image, you don't change that image: you create a new one with the changes you want to apply.

A Dockerfile is a set of instructions to build a container image. It is processed in sequence and lets you define which actions, in terms of installation, configuration, or copying of files, you perform inside the image. You can copy files from your system to the machine where the image build is running, or you can copy files between images. You can also create your own image starting from an existing image. This allows you to create a multi-stage build, which lets you optimize the image creation process and the image size. For example, you can run your build steps in an image that includes the SDK to build the application inside the container and the testing tools to validate what you built, and then, once the tests have passed, copy only the built solution to another image where the SDK and testing tools are no longer present, in order to have an image that is as small and fast as possible to run.

Let's move forward from images and containers to orchestrators and Kubernetes. The origin of Kubernetes: the name Kubernetes is a Greek term meaning helmsman. It is an orchestrator for containers, born for Docker containers but also supporting other container technologies. It can run on any cloud provider, on bare metal, or in virtual machines. It is inspired by Google's experience, it is a 100% open source project written in Go, and it was created by three Google employees during 2014. Version 1.0 was released in 2015; at the time of this talk, version 1.19 is the latest production-ready version.
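To make the multi-stage build described earlier concrete, here is a sketch (the Go toolchain, file names, and base images are illustrative assumptions, not from the talk):

```dockerfile
# Stage 1: build stage with the full SDK and test tooling.
FROM golang:1.15 AS builder
WORKDIR /src
COPY . .
RUN go test ./... && go build -o /out/server .

# Stage 2: minimal runtime image; only the built binary is copied over,
# so the SDK and test tools are not shipped in the final image.
FROM alpine:3.12
COPY --from=builder /out/server /usr/local/bin/server
CMD ["server"]
```

The final image contains only what the second stage declares, which keeps it small and fast to pull and start.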
You often see K8s used as a name to shorten Kubernetes: it is just the K, the S, and the number of letters in between, which is 8.

Kubernetes started with Google's experience of the problem of managing the size and scale of its own solutions, and with the idea of the data center as a computer. What does the data center as a computer mean? It means that you have to abstract completely from the hardware, so you move to a software-defined data center; you have to abstract from the network, so you move to a software-defined network; and you have to implement declarative application deployment, so the deployment itself is the configuration of the deployment. You also have to put in place a system with self-healing and autoscaling functionality, in order to reduce the amount of operational support you need to keep your applications up and running when your scale becomes unmanageable by manual operations or by reactive operations done by a support team.

Kubernetes has been designed to be multi-project. Initially the term multi-tenant was used in presentations, but that is not really correct: Kubernetes is not perfectly suited for multi-tenancy in the sense of multiple owners of the same cluster. There are projects that enhance its functionality to make this stricter and more secure, but it is definitely designed for multi-project use, meaning a single owner of the cluster, but a cluster that is shared between different projects, each in its own subset of resources. And it has been designed for integration, with an API-first approach, which makes it the solution that has been extended the most, including integrations with all the different public cloud providers.

This image, taken from a blog post on the VMware website, compares vSphere and Kubernetes; so again we are comparing virtual machines with our subject, in this case on the orchestrator side.
As for a vSphere virtual machine cluster, you have a controller designed to keep track of the status of the cluster itself: the vCenter Server in the case of vSphere, the Kubernetes master nodes in the case of Kubernetes. Then you have the resources available to run your applications: in the case of vSphere these are the different hosts running the hypervisor; in the case of Kubernetes these are the different worker nodes running the kubelet.

In vSphere you can define different resource pools, segregating your resources across the entire cluster in order to assign them to groups of virtual machines. In Kubernetes you can do the same kind of segregation through the namespace concept, where you can assign limits and quotas on the resources that can be used, and then deploy your applications, your containers, inside that namespace. vSphere takes care of integrating external resources such as storage and networking; the same is done by Kubernetes, which integrates those resources and makes them visible and usable by the applications inside, without the applications having to deal with them directly.

If we go deeper, down to the virtual machine, which is the base unit of work in vSphere, on the Kubernetes side we talk about pods. Pods are the unit of work you have to consider inside Kubernetes. We were talking about containers before, but when we talk about Kubernetes you will hear the term pod more than container: the pod is, in a certain way, the equivalent of what a virtual machine is in a virtualization environment such as vSphere. A virtual machine includes its own operating system and its applications inside it, and it has its own IP to connect to the network with the other virtual machines.
A pod is the environment where the containers are activated. Every pod gets an IP internal to the cluster, and this IP, as well as the volumes, is shared between all the containers running inside the pod. You can have a pod that runs a single container, as in this example the MongoDB container in blue on the right, but you can also have a pod that runs multiple containers. This can be because you have two applications that run side by side, and there are different patterns that can be used. You can have an init container that executes some operations to facilitate the startup of your application, for example running certain configuration checks before the application container starts. You can have a sidecar container, a container that runs side by side with your application container to provide some service such as log collection or managing connectivity. And you can have multiple containers running together simply because they need to share the same resources in terms of volumes and IP.

The architecture of Kubernetes is based on these important layers: resource management, scheduling, and service management. The scheduler in particular is one of the strengths of the Kubernetes solution, for its ability to distribute the available resources to execute the different applications, the different pods, basing its decisions on the resources required by each pod, the resources available, the current status, and so on. The underlying layers are the infrastructure, the hardware and the operating system where Kubernetes is installed, and the container runtime, which must exist on the machines where Kubernetes is running and which Kubernetes interacts with in order to start container images as actual container instances.

As the main concepts of the architecture, you have the different nodes composing a cluster: the Kubernetes control plane, the master nodes, and the Kubernetes nodes, or worker nodes, where the applications are actually executed.
The control plane is composed of the etcd database, the key-value store that is basically the database Kubernetes uses to store the desired configuration and the status of the system at any moment. The API server is the core of all the interactions between the different components of Kubernetes. The scheduler takes care of scheduling the execution of the different deployments that are required. The controller manager and the cloud controller manager take care of the interactions with the other internal or external services.

In the worker nodes you have the container runtime, typically Docker but it can be rkt or runc, and you have the kubelet, the service that communicates with the master components to authenticate to the cluster and receive commands, and that communicates with the container runtime to establish the running status and configuration of the different pods. kube-proxy provides an abstraction of the network, allowing the different applications running inside the Kubernetes cluster, distributed across multiple nodes, to communicate with each other without having to know the actual network topology of the machines where Kubernetes is running.

Some of the basic concepts and terms we have already mentioned; I'll try to go as quickly as possible on this slide. This is quite a long deck that will be made available; you can also check on my accounts for other presentations on this subject, and you can easily find a lot of documentation on the internet. In particular, the Kubernetes website itself has very clear and extensive documentation of all the terminology and all the different operations.

The cluster is the collection of hosts that aggregates the available resources: CPU, disk, memory, network connectivity. In the cluster you have the two kinds of nodes we mentioned before: the master nodes and the nodes, or worker nodes.
The master nodes host the collection of components that make up the control plane of Kubernetes; these are responsible for the cluster decisions, scheduling new executions and accepting new deployments and definitions. The nodes, or worker nodes, are the different operating systems, physical or virtual, where the kubelet interacts with the container runtime in order to execute the different pods, and so the different containers inside the pods.

A namespace is a logical segmentation inside the cluster that allows you to segregate and create a specific scope for each deployment and its set of pods and services.

Labels are a key element inside the Kubernetes world: you can label objects with key-value pairs that allow you to describe and group together different objects, deployments, pods, and services. Labels can be used, and actually are used, to operate selections through selectors, which allow finding which group of objects is subject to a certain action, not on the basis of the objects' names but on the basis of their labels.

Annotations are also key-value pairs, but they are not used by Kubernetes itself to operate selections and define the targets of operations; they are read by operators running on Kubernetes, which use the annotations as instructions to perform certain activities. For example, annotations are used to instruct the certificate manager to create a new certificate request for a service serving a website that you deploy inside Kubernetes. This is not done by Kubernetes itself, but it is the way Kubernetes allows an application running inside the cluster to read information and react to it.

The pod, as we said before, is the smallest unit of work and management: it is the set of resources, one internal IP, volumes, and then CPU and memory, associated in order to run one or multiple containers.
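A minimal pod manifest following the sidecar pattern described above might look like this (the image names and the shared volume layout are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-sidecar
  labels:
    app: web          # labels like this are what selectors match on
spec:
  volumes:
    - name: logs      # volume shared by all containers in the pod
      emptyDir: {}
  containers:
    - name: web                # main application container
      image: nginx:1.19
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-collector      # sidecar reading the same shared volume
      image: busybox:1.32
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /logs
```

Both containers share the pod's IP and the `logs` volume; applied with `kubectl apply -f pod.yaml`, they start and stop together as one unit of work.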
ReplicationController, ReplicaSet, Deployment, StatefulSet, and DaemonSet are all kinds of pod deployments: different methods to declare one or multiple instances of a pod that can be replicated and distributed in the cluster.

In particular, we have to consider the difference with a DaemonSet: with a DaemonSet you are telling Kubernetes that you want one instance of the pod you declare running and active on each node of the cluster, so if you add a new node, a new instance of that pod will be executed on that node.

A StatefulSet is the way that has been identified to run stateful applications in Kubernetes. Kubernetes has been used for, and is naturally designed for, stateless applications, but with a StatefulSet you can define that you want your pods to be executed with a specific naming, with an incremental numeric suffix on the pod name, and that you want your pods to be started and stopped always in a specific order, in order to maintain a certain state and identity.

The Deployment, meanwhile, is the declarative method to manage stateless pods in a ReplicaSet: you define how many instances you want, or which autoscaling rules should decide that, and you can define which kind of deployment approach you want, replacement or rolling upgrade of your instances. But your pods in this case are completely stateless: their naming is variable, and the order in which the pods are stopped and started is not predefined.
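A Deployment declaring replicas and a rolling-update strategy, as just described, could be sketched like this (the app name and image are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # desired number of identical, stateless pods
  revisionHistoryLimit: 5    # how many old revisions to keep for rollback
  selector:
    matchLabels:
      app: web               # the Deployment manages pods with this label
  strategy:
    type: RollingUpdate      # replace pods gradually instead of all at once
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.19
```

The scheduler decides on which nodes the three pod replicas run, and recreates them elsewhere if a node becomes unavailable.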
Services, ingress controllers, and ingresses: we have a specific slide about these. A service is the method to expose pods, selected through a label, in order to receive connections from inside or outside the cluster. The ingress controller is one of the most used methods to expose services over HTTP or HTTPS to the outside world without having to expose each service individually, opening an external port for every single service. With an ingress controller you practically have a reverse proxy inside the cluster that operates as a central point to receive connections from outside and distribute them to the services.

An ingress controller can be based on NGINX, or it can be Traefik or HAProxy; there are different technologies available, and in all cases it operates as a reverse proxy and load-balancing router that routes requests inside the cluster. It is configured through a resource called an ingress. This is a common misunderstanding and source of difficulty, the confusion between the terminology of the ingress controller and the ingress. The ingress controller is the actual deployment that runs as a reverse proxy managing the connectivity from the external world to your services, while the ingress is a definition of rules to configure the ingress controller automatically for your own application. You can have only one ingress controller per class, so one NGINX ingress controller or one Traefik ingress controller, but you can have both ingress controllers in the same cluster if you want, and you will have as many ingress definitions, or ingress rules, as there are deployments you want the ingress controller to be used for.
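As a sketch, here is a service selecting pods by label, plus an ingress rule that configures the controller to route a host and path to that service (the host name, ports, and ingress class are illustrative assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web            # routes to all pods carrying this label
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx      # which ingress controller handles this rule
  rules:
    - host: web.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web      # the service defined above
                port:
                  number: 80
```

The ingress itself runs nothing: it only declares rules that the ingress controller of the matching class picks up and applies to its reverse-proxy configuration.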
Coming to storage: a volume is storage tied to a pod's lifecycle, consumable by one or more containers inside the pod. A persistent volume represents a storage resource defined inside the cluster that can be claimed by a pod, through a persistent volume claim, in order to attach the volume to the pod. This is done to allow provisioning or pre-provisioning storage and reusing it, or to create storage resources that survive the unavailability of a pod so they can be reattached when the pod starts again, for example storage containing the database files that need to be reused when a database pod is destroyed and recreated. A storage class is a structure on top of the storage resources that defines the provisioner and the attributes describing how a certain kind of storage can be provisioned inside the cluster.

A config map is a way to manage configuration externally from the pod itself. A config map can be referenced through a command-line argument, it can be used as an environment variable, or it can be injected into the pod as a volume, as a file or a folder. A classical example: you have a pod containing an NGINX container, and the NGINX configuration is in an nginx.conf file. Instead of existing inside the pod, this file is defined inside a config map; it is a separate object inside the cluster that can be deployed and updated separately from the pod, and one config map can be used by multiple instances of the pod. Inside the pod you mount that config map as the nginx.conf file, so when the NGINX application starts, it finds the file and uses it as if it were an internal file of the container itself.

A secret is very similar to a config map, but it stores base64-encoded content and can be encrypted at rest; this depends on your Kubernetes deployment, the features you have configured, and also
the third-party applications you can integrate inside Kubernetes.

You can define roles in order to specify which users or service accounts have access to which operations. You define roles or cluster roles; what is the difference? The difference is that a role exists inside a namespace, while a cluster role exists across all namespaces in the same cluster. The syntax is the same: you define which APIs the role can access and which verbs are allowed for the role on those specific APIs. With a role binding, or a cluster role binding if it is cross-namespace, you then assign the role to a specific user or service account. The service account is the identity that a pod or an external service can use to interact with the cluster directly. You can also define pod security policies, that is, sets of conditions a pod must comply with in order to be executed inside the cluster.

Two keywords are particularly important when it comes to the adoption of Kubernetes: immutable and ephemeral. They look to be in conflict, but in order to use Kubernetes correctly it is important to understand the key value of these two words together. Immutable, because we are running container images that become active container instances; but they run inside pods that are ephemeral. What does that mean? It means that the Kubernetes scheduler, considering the state of the cluster in terms of the resources used and consumed by the different pods and the resources available on the different nodes composing the cluster, can decide to stop and evict a specific pod and restart it, maybe on another node, because the resources, CPU for example, available on the original node are no longer sufficient to run that specific pod, so the pod has to be moved to another node. This makes these two concepts something important to consider together. You have to consider, when
you run Kubernetes, that your application is ephemeral. At a certain moment your application will be stopped and restarted automatically for some reason: a crash, the need to move between nodes for maintenance, or resource consumption by another application. The cluster itself can decide to stop and restart your pod, so you should not consider the pod as something permanent. It is not a permanent machine: when it restarts, it starts a new Docker (or runc or rkt) container instance, created from its container image. It means that if you enter a pod and make any modification inside it, the modification will be lost the moment the pod is destroyed and restarted on another node, or even on the same node, because it will start again exactly from its own image.

On the usage of namespaces: I said before that namespaces are meant to segregate resources logically inside the cluster, and a namespace by default does not have any kind of limitation. There are some mechanisms tied to the namespace, for example you cannot make a secret in one namespace accessible to an application in another namespace, but by default there is no limit on the resources available to an application running in one namespace compared to another. However, it is possible to add attributes to the namespace. You can create rules on the namespace in terms of resource quotas, so that your namespace cannot have more than a certain number of pods running, or cannot exceed a certain CPU or memory consumption. You can also create network policies, defining which network connections are allowed or not inside the namespace and across the different namespaces. And you can define defaults that the deployments executed in the namespace will have to comply with, or that will be imposed on them, in terms of resource allocation. The
permissions also, as I said before, are namespaced: you have roles that are defined inside the namespace. This allows you, for example, to create in the same cluster a namespace for staging and a namespace for production, where you limit the CPU and memory resources the staging namespace can use in order to better safeguard production availability. Or you can have different roles, so that a certain group of operators or developers can perform certain activities in the staging namespace but cannot perform the same activities in the production one. Or you can have different project teams, each running in its own namespace, able to create and destroy resources only inside its own namespace, without even being able to see the list of resources in the other namespaces.

Networking: the containers in a pod exist in the same network, so they can talk to each other on localhost, and the pod is where the IP inside the cluster is assigned. Each pod is given a unique IP inside the cluster for its own lifecycle, and pods can talk to each other through these IPs. Services are also given a persistent, cluster-unique IP, one that persists beyond the pods in the backend of the service, the ones selected through the selector configured in the service and the labels on the pods. There is also automatic DNS name resolution, which allows your applications to refer to services for their connectivity; this will connect to the active pods that are available, without needing to know which pods they are, on which nodes they run, or which IPs those pods have been assigned.

External connectivity can be managed through services that expose a port on the public IP of the cluster, and the ingress controller with ingress rules can be used to centralize and minimize the number of external connections and externally exposed ports that you have in
your cluster integration are available in the different cloud provider but also promise with different vendors in order to automate the configuration of load balancer services in the moment you define a service of a load balancer kind inside the cluster the service as i said before is done to expose a set of ports it can expose them internally to the cluster or it can expose externally for example in the diagram here through a load balancer that go through a specific port that is mapped to a service in the external interface of the nodes of the cluster when you expose a service directly you are exposing directly the connection to the ports that are selected by the service but if you need then other services and other deployment to be exposed this way you will have to create a new service that has a different port to expose it externally and the different load balancer configuration in order to simplify this configuration the ingress controller has been introduced the ingress controller has his own service and is exposed and it expose acp and acps by default but can be different configuration to all also udp and not tcp connection or to have a different ports but as a standard it is used for acp and acps it acts as a reverse proxy and then it will route the request to the different services internally to the cluster without the need to expose each service externally it can also act in order to get SSL TLS offload so you have an acps terminated to the ingress controller and then from the ingress controller to the service it will go in htdp and it can also have different routing rules in order to route requests to different services on the path of not only on the os name but also in the pace of the path and you can configure different rules as course a header or authentication ip filtering in order to be managed all this at the ingress controller layer application deployment the applications can be done as we see before in different way deployment state will set even 
Moving on to application deployment. Applications can be deployed, as we saw before, in different ways: as a Deployment, a StatefulSet, or a DaemonSet — which we covered right before — and there are also the Job and the CronJob: a Job is really similar to a batch that is executed once, while a CronJob is executed regularly on a certain schedule.

When you make a Deployment, you define the number of replicas — how many instances of a specific application you want up and running on your cluster — and the revision history limit — how many previous versions you want to maintain in order to be able to roll back to a specific version. You can also define your strategy for updates. It can be Recreate: the moment you deploy a new version, the previous one is destroyed and replaced with the new one, for example because there is no compatibility between the two versions running at the same time. Or it can be a rolling update, which is more gradual: new resources are created with the new version side by side with the old one, and only once the readiness or liveness checks of those resources succeed — so the cluster knows the new deployment is working — does it start destroying the old ones.

These checks are done through probes. A probe can be an exec action (executing a specific command inside the container), a TCP socket connection, or an HTTP operation (a GET on an HTTP path, checking the response code), and this applies to three different kinds of probes. The startup probe indicates to Kubernetes when the application has completed its startup operations: it is useful if your application takes a long time before completing its start. The readiness probe tells the cluster whether the container is ready to accept connections for the service: if the readiness probe is failing, the Pod still exists, but the Service will no longer route requests to it. The liveness probe, instead, tests whether the container is running correctly: if the liveness probe fails, Kubernetes will kill the container and restart it to recover from the non-working state.
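Put together, those attributes might look like this in a manifest (name, image, and paths are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # desired number of instances
  revisionHistoryLimit: 5      # previous versions kept for rollback
  strategy:
    type: RollingUpdate        # gradual replacement (the alternative is Recreate)
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
        readinessProbe:        # stop routing traffic to the pod when this fails
          httpGet:
            path: /healthz
            port: 80
        livenessProbe:         # restart the container when this fails
          tcpSocket:
            port: 80
```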
In your Deployment you also define the resources you are requesting and the limits you want to set on them. Like the probes, these are not mandatory, but all these configurations are really important in order to correctly instruct the scheduler to take its own decisions. When we talk about resources, it is also important to know some of the rules the scheduler uses to decide which active Pod to evict — delete or destroy — when the node where the Pod is running goes low on resources. One of the rules looks at the difference between the requested resources and the actually used resources. If I don't declare any resource request — for example, no memory request — it means my request is zero. So if my application has no request and is using 200 MB of memory, while another Pod has a request of 500 MB and is using 600 MB, the difference for that one is 600 minus 500, that is 100 MB, while for my Pod it is 200 minus 0, that is 200 MB: my Pod with no request is the one selected first for eviction under this specific rule. It is therefore very important, during the development phase, to test your application, determine which resources are required to run it, and set them explicitly. Limits are then set to prevent your application from consuming more than a certain amount of resources, saving resources for the other applications running on the same cluster.

Taints and tolerations are metadata that can be assigned to nodes and to Pods in order to instruct the scheduler on how and where to execute a specific Pod containing a specific application: you may not want your Pod executed on a specific node, or you may want it executed only on certain nodes.
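An explicit requests/limits block, as a minimal sketch inside a container spec (the values are illustrative):

```yaml
resources:
  requests:            # what the scheduler reserves for the container
    memory: "256Mi"
    cpu: "250m"        # a quarter of a CPU core
  limits:              # hard cap the container cannot exceed
    memory: "512Mi"
    cpu: "500m"
```

With the request set explicitly, the eviction rule described above measures usage against 256Mi rather than against zero.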
For example, you may have an application that requires GPU capability, and you want that application to run on a node that has GPUs installed — physical or virtual — and not on a node without that kind of hardware. You can also set node selectors and affinity rules to drive the scheduler's placement of your Pods: for example, you want to be sure that two Pods running two different kinds of containers — say, the application and its database — run on the same node; or, the other way around, you have two applications and you want to be sure they never run on the same node at the same time. You can try to instruct the scheduler by configuring all these different rules. Just a note — pay attention to this, because it is always a bit complex: you have to be very consistent when you label your resources if you want to make use of these advanced features, and you also have to keep an eye on the number of rules and Pods, so as not to create excessive load on the scheduler as it works out the placement of the different resources — even if the impact really only matters in deployments with several hundreds of nodes.

We have been talking about deployments of applications inside Kubernetes and all the different attributes you can set on a Deployment; but when it comes to actually deploying your application into Kubernetes, there are different ways of doing it and different tools — kubectl, Kustomize, and Helm are among the most common ones, and we will see them in these slides. First of all, though, we need a little clarification about two important terms: imperative versus declarative. You can interact with Kubernetes in both patterns. You can use kubectl in an imperative way: you execute a command such as kubectl run to create a Pod, kubectl create to create a Deployment, or kubectl scale to change the horizontal scaling of a Deployment, for example.
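The GPU scenario above might be expressed like this (the taint key and node label are hypothetical):

```yaml
# First, taint the GPU node so that only tolerating pods can land on it:
#   kubectl taint nodes gpu-node-1 gpu=true:NoSchedule
# Then, in the pod spec of the GPU workload:
spec:
  tolerations:
  - key: "gpu"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"   # lets the pod be scheduled onto the tainted node
  nodeSelector:
    accelerator: nvidia    # hypothetical node label; pins the pod to GPU nodes
```

The taint keeps ordinary workloads off the expensive node, while the toleration plus selector steer the GPU workload onto it.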
Or you can use a declarative way of working: you use a JSON or YAML file in which you describe your desired state, and you give it to the Kubernetes API through kubectl, delegating to Kubernetes all the actions required to match the desired state; you can then do an update by submitting a new version of your declaration, asking the cluster to change its state to your new desired state.

But why are we talking about CI/CD when we talk about Kubernetes? Because — as Kelsey Hightower put it — "Kubernetes is a platform for building platforms. It's a better place to start; not the endgame." When we talk about a deployment inside Kubernetes, we are not talking only about an application. When we talk about a Deployment, we have something similar to this schema: we have the actual Deployment object, which contains Pods, but also ConfigMaps and Secrets that can be used by your Pods. The Deployment itself happens inside a namespace, where you may have set limits, quotas, or different policies you need on the namespace — and these attributes can differ depending on the application you are going to deliver, or on the environment where your application has to be delivered. You then have the Service and Ingress definitions for your deployment, but these imply the existence of, for example, an Ingress Controller and of external services such as the public DNS or the load balancer your Service or Ingress Controller has to interact with. cert-manager can be something related to the way you use Secrets, for example to provision and renew certificates, and in this case it is also linked to your Ingress definition. And you have the storage volumes that you want to attach to the different Pods, which depend on the StorageClasses that have been defined in your cluster and on external storage providers.
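The imperative versus declarative distinction above, as a minimal command sketch (resource names and file paths are hypothetical):

```shell
# Imperative: tell the cluster what to do, step by step
kubectl create deployment web --image=nginx:1.25
kubectl scale deployment web --replicas=3

# Declarative: describe the desired state in a file and let the
# cluster reconcile towards it; edit the file and re-apply to update
kubectl apply -f web-deployment.yaml
```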
Those external providers are the precondition for understanding which kinds of StorageClass you can define: on a certain cloud provider, or with an on-premises solution, you can have different storage providers available — and others not available — that you have to consider.

You also have to consider that even if your application is one application, you can have different kinds of needs in terms of deployment. We all know the desired state is to have different environments that are absolutely equal to one another, but we also know that in the real world this is usually not the case: you have different needs and different situations between the development environment and the production environment. In production you could have, for example, a dedicated registry to store the production images, maybe a multi-region setup, and multiple replicas of the deployment in order to guarantee uptime, scalability, and performance. In staging, maybe you have a single namespace and just a different kind of storage or a different sizing. And in development you refer to a different registry, because you are also using unstable versions of your images that are not published to the production registry; maybe you have just one replica per deployment, and a different namespace — or multiple namespaces, because you have feature branches and one development namespace for each of them. This means that your deployment definition and your deployment action are not exactly the same — not only between different applications, but also for the same application depending on where you are deploying it: you could be running different kinds of deployment, or need different attributes, or different values for the same attribute, depending on where you are deploying the same application. In terms of deployment tools, we have different options, as I said before.
The starting option to consider is kubectl with plain manifest files, in the form of JSON or YAML. This is the official tool, the standard one you get when you start dealing with Kubernetes: there is no need to install any additional package, and it is the simplest of the solutions presented here. In any case, if you are dealing with Kubernetes, you will need to know how this tool works — how to operate in an imperative or a declarative model with kubectl, and how to define manifest files for the different kinds of resources and objects you can deploy in Kubernetes. This is a must-have in your knowledge the moment you are working with Kubernetes. So you can certainly manage your deployments with it, but it does not directly support variables: you cannot really reuse the same manifest file across different applications, or across different environments of the same application. What you can do is create a template and clone it, making changes copy by copy — but that becomes difficult to manage as soon as the number of applications or environments grows — or put placeholders inside the file and have your CI/CD pipeline replace them with values that vary per application and per environment. It also does not manage dependencies. There is a GitHub project I made available — linked in the comments of this slide — with some samples: when you execute kubectl apply, you can apply a single file or a whole folder with all the files inside it. The files in a folder are applied in alphabetical order, which means there is no dependency check: if the file order does not match the dependency order, some of your apply commands will simply fail because of missing dependencies.
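For instance (file names hypothetical — note how numeric prefixes force a dependency-compatible alphabetical order):

```shell
# Apply a single manifest file
kubectl apply -f 02-deployment.yaml

# Apply a whole folder: files are processed in alphabetical order,
# so name them to sort dependencies first, e.g.
#   00-namespace.yaml  01-configmap.yaml  02-deployment.yaml  03-service.yaml
kubectl apply -f ./manifests/
```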
Still, being a declarative mechanism, you can apply it again as many times as you want: it is idempotent, so you will just reach the desired state, it will not change things whose definition you have not changed, and you will recover from the missing dependencies one run after the other — but that is definitely not how you want to manage your deployments. You could define your execution order explicitly, but again, in complex deployments that becomes additional effort to maintain.

To solve some of these problems there is Kustomize, a project that started independently and has evolved to become an official part of the Kubernetes tooling: you don't need to install any additional package, and you use Kustomize through your kubectl command line. It supports variables, patching, and composition: you can create snippets and fragments of manifest files and merge those manifest files with one another, so you can have something reusable across different deployments, different applications, and different environments, while still managing the variations between them. You can also use it to reuse common modules. Imagine a structure where you have a dedicated department taking care of the observability solution used by all your applications: the observability stack is deployed in Kubernetes, externally to the applications, but in order to interact with each application — to grab metrics and logs — it needs the application's deployment to carry a specific annotation, from which the tool reads all the required metadata. That annotation can be created as a module that is then merged into any application deployment by Kustomize: the application teams don't need to know, be aware of, or remember to insert those annotations — they are injected at build time, when the command is executed — and a separate team can take care of just that specific module, which is then injected into every deployment.
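The annotation module described above could be sketched as a Kustomize overlay (the paths and the annotation key are hypothetical):

```yaml
# overlays/staging/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base                             # the application's shared manifests
commonAnnotations:                       # merged into every resource at build time
  observability.example.com/scrape: "true"
patchesStrategicMerge:
- resources-patch.yaml                   # per-environment requests/limits overrides
```

Rendered and applied with kubectl apply -k overlays/staging, so the application team's base manifests never mention the annotation.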
I made the example with annotations, but the same can happen with different attributes: for example, resource requests and limits can differ between environments, and you keep the same YAML file while merging different modules for the different options. You still need to manage file and folder order — Kustomize, like kubectl, has no concept of dependency ordering, as we said before — or, as also said before, you can manage the order of execution by constructing your pipeline appropriately.

A very popular way to manage deployments inside Kubernetes is Helm. Helm is not part of Kubernetes per se; it is an additional tool, but with version 3 of Helm you no longer need to install anything inside the cluster: in previous versions you had to install the Tiller component in the cluster for the client–server communication, but with version 3 this is no longer needed, and you execute everything through, again, a command line. What you have with a Helm chart is a template definition where you can set variables: it is exactly a plain Kubernetes manifest, but inside the manifest you can put variables and if conditions in order to execute or not certain parts of your deployment. It does not support composition — unlike Kustomize, you cannot compose files, so in the annotation case I mentioned before, with Helm those annotations have to be inserted into the template file, into the manifest itself — but it does support dependencies: it checks the different objects and manages the dependencies between them, and it has the concept of a package of the application.
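A fragment of a hypothetical Helm template showing a value placeholder and an if condition (the names and registries are illustrative):

```yaml
# templates/deployment.yaml (fragment)
    spec:
      containers:
      - name: app
        {{- if .Values.useStableRegistry }}
        image: "registry.example.com/stable/app:{{ .Values.image.tag }}"
        {{- else }}
        image: "registry.example.com/unstable/app:{{ .Values.image.tag }}"
        {{- end }}
```

Values come from the chart's values file or from the command line, e.g. helm install my-app ./chart --set useStableRegistry=true.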
What you have in a Helm chart is a template grouping together the different kinds of resources your deployment needs: if your application is composed of a Deployment object plus Service, ConfigMap, Secret, Ingress, and PersistentVolume and PersistentVolumeClaim definitions, you can package them all together into a Helm chart, and every time you execute it, they will be executed together. You have your placeholders and your if conditions to define different variables: for example, in the same manifest you can declare the image repository for both your unstable registry and your stable images registry, and have an if condition so that, on the basis of the value you pass at execution time, one of the two is selected — without having to create two different deployment files. You set the values for the different variables in a file, the values file, or you can give them on the command line as parameters.

Another option is Terraform and the HCL language. Terraform has two different options: it can deploy the resources directly through the Kubernetes provider — Terraform interacts with Kubernetes and executes the deployment — or it can use the Helm provider to deploy a Helm chart: you still have a chart, but instead of deploying it through the Helm command line, you ask Terraform to deploy it for you. In the case of Terraform you use the same HCL syntax you are using for your other Terraform resources, such as your public cloud resources. The Terraform provider manages dependencies for you, and as part of the HCL language you also have the option to declare dependencies between the different resources, or the order of execution, explicitly. Variables are supported, because that is part of HCL's syntax and capabilities, and you can integrate your Kubernetes deployment into your Terraform workflow — validate, plan, and apply — keeping your Kubernetes deployment in the Terraform state together with the rest of your Terraform deployments.
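A minimal sketch of the Helm-provider route in HCL (names, paths, and values are hypothetical):

```hcl
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"   # reuse the local kubeconfig
  }
}

# Deploys the chart as part of the normal validate/plan/apply workflow,
# keeping the release in the Terraform state
resource "helm_release" "app" {
  name      = "my-app"
  chart     = "./charts/my-app"
  namespace = "staging"

  set {
    name  = "useStableRegistry"
    value = "true"
  }
}
```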
For this reason, if you are proficient with HCL and are already using Terraform across your different providers, you could also consider this option. Note that it is the only option among the ones I presented where you are not dealing directly with the Kubernetes manifest syntax — so it is, in any case, something more to learn if you are not already proficient in HCL.

So, what should you use? There are other options besides the ones I mentioned, which are among the most popular; it depends a lot on your context and your needs. If you already have proficiency in HCL and are using Terraform for everything, you could consider Terraform as your deployment tool into Kubernetes — considering that in any case you will have to learn the syntax and format of plain kubectl manifests. If you want to package your applications and you have self-contained teams that each manage an application in all its components, a Helm chart can be your preferred solution. If your company structure is such that you have teams dedicated to specific cross-application areas — a security team working on security aspects across all applications, an observability team on monitoring and observability, a storage team managing all the storage aspects — then Kustomize is the one that may fit your structure and organization better. It also depends a lot on how much you are already using automated pipelines: if you are using pipelines, dependency and ordering complexity can be managed at the pipeline level, whereas if you are executing things manually, from an operator's computer — something we would not suggest as the preferred way to operate, though some companies do it, and there may be good reasons in some circumstances — a tool that does not manage dependencies directly can become additional complexity to manage there. What we would suggest is to start simple and gradually add complexity. What does starting simple mean? First, you will need to learn plain manifests and kubectl: how to use kubectl in an imperative and a declarative way.
This is in any case needed for your way of interacting with a cluster. When that is done, you can start exploring Kustomize and Helm — one first, then the other, depending on your context; as I said before, if you have teams taking care of all the aspects of an application, Helm is probably the solution that will suit you most. You can also combine things: you can run a helm template command to produce as output a YAML file containing the resources in the right dependency order, with the variables populated from the values set at Helm execution time, and then execute a Kustomize command on top of that output to add the additional features. This combination is becoming quite popular in different contexts, because you take the benefits of both: from Helm, the application packaging, the conditionals, and the selection and usage of variables, plus the management of dependencies; from Kustomize, the ability to compose and merge values from the command line.

In the presentation there is a series of links available for you to get additional documentation and information. The different links often point, as in this case, to the official Kubernetes documentation, the Kubernetes project on GitHub, and the blog; as I already said, these are very important sources of information — very detailed, up to date with the latest version every time, and very clear and rich in examples of how to do things. So, this is the end of the presentation. As I said, it has been a very quick run from the initial concepts — what a container is, why containers are needed, why once you start using containers you will find yourself in need of an orchestrator, why Kubernetes has been selected as the orchestrator by many companies — through the basic concepts, the architecture, and the resources and elements that compose a Kubernetes environment.
I am available online after the video for Q&A; at the beginning of the presentation you also have my contact on Twitter, so feel free to reach me with any question or comment. As I said, in the comments of this slide there is also a reference to the GitHub project with samples of the different kinds of deployment: a sample of imperative and declarative kubectl, and a sample deployment of a MongoDB replica set — exactly the same deployment defined using kubectl, Kustomize, Helm, and Terraform. It is a sample, not a production-ready deployment, so please take care not to use it in any production environment, but you can play with it to understand the different deployment options. I will also put the link to the GitHub project directly here on the video — you should see it below. That is the project I was mentioning before, which you can freely access to try the samples and exercise the different methods of deployment.

If you are interested in the specific case of building up a Kubernetes cluster, you can also refer to the other project you see as a banner, "Azure Kubernetes Service via Terraform": a Terraform project that deploys AKS, an Azure Kubernetes Service cluster. There are two different modules — one deploying without Azure Active Directory integration, the other with it — and it uses the Terraform Kubernetes and Helm providers, one to deploy the ingress controller (the NGINX ingress controller) and the other to deploy cert-manager, the service that takes care of the automated creation and renewal of certificates for your applications, so that also in this case you can see the different samples and options.

I thank you for your time. This has been even quicker than planned and expected — apologies for my English, for the errors I made during the presentation, and for having been so fast, but I was really trying to save time for any Q&A session we may have online. Thanks for your attention, and enjoy the rest of the conference.