Hello everyone, my name is Katie Gamanji and currently I am an ecosystem advocate for the CNCF. I have joined the CNCF quite recently, and my responsibility within this role is to grow the end user community, but more importantly to close the gap between the end user feedback and the projects currently residing within the CNCF landscape. Today, I would like to talk about the building blocks of DX, more specifically the Kubernetes evolution from the CLI to GitOps. To do so, I would like to firstly introduce the cluster CLI known as kubectl and how it provides a native way of interacting with Kubernetes. I would like to follow that with the DX enhancement section, which is going to focus on how we can further extend the kubectl commands and wrap them in graphical representations such as web and terminal UIs. And lastly, I would like to conclude by introducing how we can automate most of the operations within the clusters by talking about application ops. In this section, I'm going to mention ClickOps, GitOps, and even SheetOps, which is a way to manage your clusters using Google Spreadsheets. Before I move forward with the topic, I'd like to introduce the ecosystem which allowed this evolution all the way from the CLI to GitOps. If we look six years back, the container orchestrator framework space was heavily diversified. We had tools such as Docker Swarm, Apache Mesos, and Kubernetes, and all of them provided a viable solution to run our containers at scale. However, Kubernetes took the lead in defining the principles of how to run these containerized workloads. Nowadays, Kubernetes is known for its portability and adaptability, but more importantly for its approach towards declarative configuration and automation. And this can actually be seen in numbers. If you look at the CNCF survey from 2019, more than 50% of the companies are using Kubernetes in production.
The other 42% are actually prototyping Kubernetes as a viable solution to move forward with. When we transition towards the contributor community, there are more than 2,000 engineers actively collaborating on feature build-out and bug fixing. And when we look into the end user community, more than 23,000 attendees were registered at the KubeCons around the world. Now, these were the in-person events which were held in Europe, China, and North America. And this flourishing community around Kubernetes has been extremely beneficial, because over time multiple tools were built around it to extend its functionalities. And this created what today we know as the cloud native landscape, which resides under the umbrella of the CNCF, or Cloud Native Computing Foundation. Now, if you look into all of this tooling, some of it is going to provide similar capabilities, such as service mesh, networking, storage, runtime, and many more. And more diverse tooling means we're going to have a more diverse user base as well, because engineers with different capabilities will need to interact with the cluster, either to manage the cluster or to deploy to the cluster. As such, we can identify three main personas, or three main user groups, here: the application developers, the application operators, and the infrastructure operators. The application developers are the engineers who don't necessarily need to interact with the cluster. However, they will need to be aware that the application is going to be containerized. As such, they will focus on the business logic, but at the same time they'll need to be aware to develop components such as liveness and readiness probes. The application operators, on the other side, are the engineers who will interact with the cluster and deploy the application. They will be aware of all the resources within the cluster and any components that are necessary to make sure that the application is up and running at all times.
And the infrastructure operators are the engineers who will not only understand all the resources within the cluster, but will know how to create, manage, configure, and delete the cluster from scratch. Now, generally speaking, the application developer and application operator roles sometimes overlap. However, that is not always the case in current organizations, hence we need to keep them separate for the moment. At this stage, we require one set of tooling which will allow us to interact with the cluster across all of these end user groups. And currently, this is provisioned by the cluster CLI known as kubectl, which provides a way to interact with and manage Kubernetes objects. We can think about kubectl as a way to close the gap between the end user terminal and a cluster, whether it's running locally, in a public cloud, or in a data center. And to connect to those clusters, we'll require a kubeconfig file, which will have all the metadata necessary, such as the API server endpoint and authentication data such as certificates or tokens. Usually the kubeconfig file can be found in the .kube folder, or it can be specified via the --kubeconfig flag, or alternatively it can be consumed as an environment variable. When we refer to kubectl, we refer to SIG CLI, under whose umbrella it currently resides within the Kubernetes community. Currently, kubectl offers more than 40 operations, and these are going to be actions we can perform on top of the resources, such as create, delete, run, get, and many more. And these operations are associated with more than 70 flags, which allow a tailored and custom experience. We'll be able to provide more information when creating a resource, or to specify more details when we try to get information from the cluster. But more importantly, with kubectl we have support for both declarative and imperative management techniques.
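The three kubeconfig lookup options above could be exercised like this; the paths are illustrative, and this sketch assumes a reachable cluster:

```shell
# 1. Default: kubectl reads ~/.kube/config if nothing else is specified
kubectl get nodes

# 2. One-off override via the --kubeconfig flag
kubectl get nodes --kubeconfig=/path/to/staging-config

# 3. Environment variable, which applies to the rest of the shell session
export KUBECONFIG=/path/to/staging-config
kubectl get nodes
```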
When we refer to the imperative technique, that means that we're going to operate on the live objects straight away. So an engineer will either create or delete a resource, and this is going to be immediately visible within the cluster. The good thing about this approach is the fact that it offers a very quick upskill for our developers: they'll be able to deploy their application to the cluster immediately. The downside of this approach is the fact that it has no source of truth for the configuration. So if we'd like to share this configuration with other teams or with the wider organization, it's going to be difficult for us. And that's why we have the declarative approach, which allows operations on files stored locally. So we have the full representation of our manifests in YAML, and we'll be able to apply them to the cluster. The good thing about this approach is the fact that it allows directory-level operations. Usually when we deploy an application, we're going to have a deployment, and that's going to be associated with a service, a service account, and a config map. Maybe we're going to have a secret and an ingress, and that's already six components we require. With the imperative approach, we'd have to create all of these resources manually. However, with the declarative approach we have directory-level operations, which will allow us to create all of these resources with one command. The downside of this approach is the fact that we have quite a high learning curve, because it requires an in-depth understanding of the nested YAML configuration, and it can present some challenges while we debug and troubleshoot our manifests. Now, kubectl is a great way to explore the cluster assets. However, it is generally, and wrongly, assumed that everyone interacting with Kubernetes is going to be fluent in kubectl.
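As a quick sketch of the two styles against a live cluster (the resource names here are illustrative):

```shell
# Imperative: act on live objects directly, with no local source of truth
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=3

# Declarative: the manifests on disk are the source of truth, and one
# command applies the whole directory of YAML files at once
kubectl apply -f ./manifests/
```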
From this point, a clear path was discerned by the community to further enrich and extend the cluster DX, to make it accessible and inclusive to engineers of all levels. As such, when we mention DX enhancement, we can refer to four main principles: simplicity, optimization, transparency, and durability. Simplicity means that any changes or any new tooling that we introduce to interact with the cluster needs to remove some of the complexity, rather than making it more difficult for our engineers to get information from the cluster. Optimization is all about finding patterns and trying to simplify them. For example, if engineers always need to look for the CPU and memory of an application, instead of grabbing the YAML of a pod and looking for the right configuration, we'll be able to write, for example, a plugin which will get it for them with one command. Transparency is a principle which should be true at all times. When we introduce new techniques to interact with the cluster, our engineers should always be aware where the application is deployed and how they can connect to it. It is going to be a major blocker if we don't have transparency, because we're going to become the bottleneck when the application team needs to troubleshoot, for example during a production incident. And the last principle I would like to mention is durability. Any changes that we introduce should be the building blocks of the interactions in the future. It should be something which is easily extensible, but more importantly, it's something which is going to stick with our developers. It's quite important to introduce these changes frequently, but with a good pacing, to avoid developer fatigue as well. When we refer to DX enhancement with kubectl, we can identify two main areas: extensions and wrappers. The extensions are going to be represented by kubectl plugins, while the wrappers are going to be represented by web and terminal UIs.
I would like to dive a bit more into both of these areas. Now, as mentioned, if you'd like to extend your kubectl commands, then you're going to write a plugin, which provides these custom extensions for your end users to consume. The important thing about plugins is the fact that they can be written in any language; underneath, they are just a binary, and as long as they have the right file name and are in the right location, they'll be identified by kubectl and the end users will be able to utilize them. But the key point about plugins as extensions is the fact that we can really provide a custom DX which focuses on our simplicity and optimization principles. Let's look at how a kubectl plugin could be written. Here I have an extremely simple shell script, which is going to have an echo command. However, I'd like to draw your attention towards the path and the file name of this particular plugin. The location of the file should be in one of the binary folders defined in the PATH environment variable. As such, in this example, I have it under /usr/local/bin. And the file name for a plugin should always be prefixed by kubectl-, followed by the actual command. So here we create a greetings plugin, and the file name is obviously going to be kubectl-greetings. Once the file is in the right path with the right naming, we don't need to do any install; it will be automatically identified by kubectl. And we can validate that by listing our plugins, and you can see the first result is going to be our greetings, followed by any other plugins we have installed on our machine. And to test that our plugin is working, it's just enough for us to run kubectl greetings. This will go to our shell script, execute it, and print our echo command. As simple as that. Now, at this stage, we could extend or share some of these plugins with the wider organization; however, over time there are plugins which were found useful across the whole community.
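For reference, the full flow sketched above could look like this; I'm using /tmp/bin rather than /usr/local/bin so no root access is needed, since any directory on PATH works:

```shell
#!/usr/bin/env bash
# Create a directory for the plugin and put it on PATH
mkdir -p /tmp/bin
export PATH="/tmp/bin:$PATH"

# The plugin itself: any executable named kubectl-<command> on PATH
cat > /tmp/bin/kubectl-greetings <<'EOF'
#!/usr/bin/env bash
echo "Hello Cloud Native!"
EOF
chmod +x /tmp/bin/kubectl-greetings

# kubectl would now list it under `kubectl plugin list` and run it as
# `kubectl greetings`; invoking the binary directly shows the same output:
kubectl-greetings
```

Running the script prints the greeting, exactly as the `kubectl greetings` call in the demo would.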
And it was required to have a centralized mechanism to collect, index, and distribute these plugins, and currently this capability is provisioned by Krew. You can think about Krew as a service discovery for plugins. Currently, it allows the install of more than 90 custom commands for kubectl. And a cool feature about Krew as well is the fact that it has its own version manager, so you can have a separate way of managing your plugin versions for Krew, very similar to your project versioning. And if you look into the most widely used plugins at the moment, there are plenty of them, but as you can see, cert-manager has taken quite a lead, I would say. It has more than 700,000 downloads at the moment. And cert-manager is a plugin which allows the management of TLS certificates for a Kubernetes cluster. Ingress-nginx is another plugin, which allows the inspection of your NGINX controller within Kubernetes. Krew, which is our plugin to install other plugins, so we have a slight inception here, has been downloaded more than 150,000 times, an astonishing number. Tail is another plugin which allows the streaming of logs from a collection of pods, while score is another tool which allows the static analysis of our YAML manifests. Now, if we'd like to distribute our greetings plugin with Krew, we'll require to write a plugin CRD, or custom resource definition. Now, this plugin manifest is going to have two main areas of information, and this is a trimmed output of our CRD. The first section is going to be informative, while the second one is going to be a guide on how to install the plugin on different operating systems. As such, in the informative part, we're going to have the version and the homepage of our repository, along with a short description. While on the platforms side, we're going to have information on how to install our plugin. So, for example, we're going to point it to the right binary, and we're going to point it to our shell script as well.
And we'll be able to see that in more detail during the demo, which follows now. So, let's see how we can distribute our greetings plugin. To do so, I'm first going to list all the plugins I have currently available on my machine. I don't have the greetings plugin, and I can further validate that by doing a krew list, which will list all the plugins which were installed with Krew. The greetings plugin is not there either. Now let's look at our plugin CRD. As mentioned, we're going to have two main areas. This is going to be the informative part. So, as mentioned, we're going to have the version, the homepage, a short description, and a long description, which can be useful if you have multiple commands for your plugin, and you can have this help message described over here. And the second part is going to be how we can install our plugin on particular operating systems. Here we're going to focus on the Darwin and Linux operating systems. Mainly, we're going to point it to our binary, so this is going to be a zip folder of all our files. And we're going to have a checksum to make sure that the integrity of the files is going to be kept at all times. Because underneath it does an extract operation from our zip, we can specify what we want to extract from that folder, and of course we need to specify the exact binary that's going to be executed for our kubectl command. Now let's install this particular plugin with Krew. For that, I'm going to do a kubectl krew install, and I'm going to point it to the manifest that we have seen, which is plugin.yaml. Cool. And to validate that, I'm just going to do a krew list. We see our greetings plugin with its version as well. And of course I can do a plugin list to see it identified as well, just over here. And to further validate that our plugin works, we're just going to do a kubectl greetings. It will go to our shell script and output our Hello Cloud Native message.
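A trimmed Krew manifest along these lines might look like the following; the names, release URL, and checksum placeholder are illustrative, not the exact manifest from the demo:

```yaml
apiVersion: krew.googlecontainertools.github.com/v1alpha2
kind: Plugin
metadata:
  name: greetings
spec:
  # Informative section
  version: v0.1.0
  homepage: https://github.com/example/kubectl-greetings
  shortDescription: Prints a friendly greeting
  # Platforms section: how to install on each operating system
  platforms:
    - selector:
        matchExpressions:
          - key: os
            operator: In
            values: ["darwin", "linux"]
      # The zipped release archive and its integrity checksum
      uri: https://github.com/example/kubectl-greetings/releases/download/v0.1.0/greetings.zip
      sha256: <sha256-of-the-zip>
      # What to extract from the archive
      files:
        - from: "*"
          to: "."
      # The exact binary Krew executes for `kubectl greetings`
      bin: kubectl-greetings
```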
Now, this pretty much concludes the demo for the Krew plugin. Before I move forward, I would like to leave a few takeaways about kubectl plugins. The most important fact about kubectl plugins is that we can further extend our CLI and provide this tailored experience for our end users. Another quite widespread practice is to use kubectl plugins in association with aliases, so we can have shorter commands to interact with the cluster. Another point to mention is the fact that plugins currently cannot override the existing kubectl commands. So kubectl get pods is always going to perform the same operation; we'll not be able to change that. And we have a bunch of plugins which are just one install away with Krew. Now, when we refer to the wrappers, these usually focus on a graphical representation of our cluster state. The wrappers are about how we can visualize some of the resources by wrapping the kubectl commands. And as mentioned, it's all about this operational view of the cluster state, so it's going to be easier for our end users to understand what's happening with our resources. You can think about kubectl wrappers as a point of presence for the clusters. And this is quite important because currently, within a team, we're not going to have just one Kubernetes cluster. We're going to have three clusters per region, so we're going to have a sandbox, staging, and production, and this is the minimum. And this can be multiplied by the number of regions we'd like to distribute our clusters to, so we can have 10 to 15 clusters nowadays. With the web and terminal UIs, we'll be able to quickly interchange between these clusters and interact with them. Another cool feature about the wrappers is the fact that they are developer-first. Now, when you have a UI of your cluster state, you have a finite amount of clicks that you can actually perform within your dashboard.
As such, it can really provide this developer-first experience, because our engineers will be able to explore what's happening in the cluster through a finite set of actions. But more importantly, most kubectl wrappers currently provide a basic coverage of the kubectl operations. Mainly, they're going to focus on the read-only commands such as kubectl get; however, they provide some support for operations such as scale, delete, and port-forward. And currently in the ecosystem, there is a plethora of tools provisioning the UI capabilities, of which Octant is one of the most prominent. It has been developed by VMware, actually by their SRE team. And in addition to the presentation of the cluster state for the developers, it has a section for the Kubernetes operators as well, so they'll be able to see the nodes and any information associated with them. K9s is another tool, a terminal UI actually, which has a wrapper around all the kubectl commands. And Lens is another tool, which is a Kubernetes IDE, and it can be installed on any operating system as a standalone application. Now, at this stage, we can extend and visualize the end user journey within the cluster. However, all the operations we've been performing have been manual. To introduce plugins, you'd have to manually type the commands to interact with the cluster. And if you'd like to visualize things through a dashboard, you still have to click through different menus to understand what's happening underneath. When we're talking about application deployment, we need to think about automation, and how we can automate most of the operations for our applications to be live in the cluster. And for that, the next organic step is to actually introduce templating, because as mentioned, we're going to have multiple clusters and we will require a way to tailor the configuration for those clusters. And currently, these capabilities are provisioned by configuration managers such as Helm and Kustomize.
Now, these two tools have completely different ways of provisioning the templating capabilities. Helm at the moment uses a template-driven approach, which means we're going to have the YAML representation with the option to inject values into our templates. Controversial idea here, but Helm goes slightly towards the imperative approach. And this is because if you think about a chart that actually includes other charts, like a Helm umbrella chart which utilizes other charts as well, you're not going to set every single value. Because of that, most of them are going to be set to their defaults, which leans towards the imperative approach. And the good thing about Helm is the fact that we have a collection of upstream charts, which allows the install of many cloud-native applications nowadays with just one command. Kustomize, on the other side, uses a template-free approach, meaning that we're going to have the full representation of our manifests, and that's why it leans towards the declarative approach. And the way Kustomize works, if you'd like to tailor your configuration, that's going to be done through deep merges. Now, to tailor your application or your deployment through Helm, you'll have to use multiple values files, while with Kustomize, you're going to have a base and multiple overlays. And once we have our templating layer introduced, that means that we can automate the configuration moving forward. And this is provisioned by techniques such as ClickOps, GitOps, and even SheetOps. ClickOps is a technique which is represented by a collection of clicks through a set of menus. The most important feature about the ClickOps technique is the fact that it provides a very powerful DX. Our engineers do not need to care what's happening on the infrastructure side; all of this is going to be abstracted by the UI.
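As a small illustration of the base-and-overlays layout mentioned above (the file names and the patch are hypothetical):

```yaml
# overlays/production/kustomization.yaml
# Points back at the shared base and deep-merges an environment-specific patch
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: replica-count.yaml  # e.g. bumps the deployment's replica count
```

Applying the tailored configuration is then one command: `kubectl apply -k overlays/production`.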
However, it's very important to keep a balanced abstraction level, because if we give our developers too much to configure, or too little, that can actually distort their experience of interacting with the cluster. And with the ClickOps approach, it's quite important to think about operations such as configuration storage, rollbacks, and roll-forwards. For example, if something goes wrong in production and you'd like to roll out a new version, or revert to an older version, is this going to be represented by a myriad of clicks, or is it going to be something you'll be able to restore from a state store somewhere? So this is an area to be thoughtful about with this approach. With the GitOps technique, some of these issues are actually tackled. The GitOps approach is represented by a set of Git repositories which represent the desired state of our application. The key point about the GitOps technique is that it is PR-based rollout. That means that the delta between our developer terminal and the cluster is just one PR away. So here we have a more powerful automatic reconciliation. The GitOps model works with a pull approach, which means that if there are any changes to the image repositories, it will be able to identify them and apply them to the cluster. But more importantly, we're going to have a versioned state of our cluster, meaning that we can always revert, using Git, to a version of the cluster which was up and running with no issues in production. And currently, the GitOps capabilities are provisioned by tools such as Flux and Argo CD. Flux is a tool provisioned by Weaveworks and is currently a sandbox CNCF project, while Argo CD has recently moved to the incubation stage. And to end on a lighter note, I would like to introduce quite a controversial technique, which is SheetOps. SheetOps aims to control the Kubernetes state using Google Spreadsheets.
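To make the pull-based model concrete, here is a sketch of what an Argo CD Application manifest can look like; the repository URL, paths, and names are illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    # Git repository holding the desired state of the application
    repoURL: https://github.com/example-org/my-app-config
    targetRevision: main
    path: overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:        # pull and reconcile automatically
      prune: true     # delete resources removed from Git
      selfHeal: true  # revert manual drift back to the Git state
```

With this in place, merging a PR to the config repository is all it takes for the controller to roll the change out to the cluster.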
It's a very new technique, and it hasn't been developed that much, but at the moment its mission is to replace YAML with spreadsheets. So if you have heard your engineers complain about YAML, you can allow them to manage your Kubernetes state through spreadsheets. And to actually showcase this particular model, I would like to use the SheetOps technique to scale an application now. So at the moment, let me just move this slightly, I already have my cluster linked to a Google Spreadsheet where, as mentioned here, I can see all of my namespaces. And currently, in my terminal, you'll be able to visualize all my pods in my Argo CD namespace. At the moment, SheetOps actually only allows the scale operation; however, it's something that could be extended to support other operations and techniques as well. So if I'd like to increase my Argo CD Redis amount of replicas to three instead of one, I'm just changing it to my desired state in my Google Spreadsheet. And this, as you can see, is already applied to my cluster. And the same goes for the scale-down operation. I can scale it to two, it will be identified, and it will delete a pod straight away. Now, this pretty much concludes the evolution all the way from the CLI to techniques such as GitOps. We started by introducing the native way of interacting with the cluster by using kubectl. And more importantly, we focused on how we can further enrich and extend that with the kubectl plugins and the wrappers, represented or translated through web and terminal UIs. But more importantly, we have seen that there are multiple techniques nowadays which really try to automate most of these operations, such as ClickOps, GitOps, and even SheetOps.
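For comparison, the same scale operation done natively with kubectl is a single imperative command; the deployment name below is my guess at what the demo cluster calls the Argo CD Redis deployment:

```shell
# Scale the Redis deployment in the argocd namespace to three replicas
kubectl scale deployment argocd-redis --namespace argocd --replicas=3

# Watch the new pods come up
kubectl get pods --namespace argocd
```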
And all of this evolution has been possible because Kubernetes has a declarative configuration schema that really advocates for extensibility and the optimization of the end user journey within a cluster. If you have any questions in regards to this talk, please reach out; I'm going to be available on social media such as Twitter and LinkedIn. As well, you will find a more detailed blog post about this particular talk on my Medium account, where I also write blogs on the new tooling that I'm exploring. Enjoy the rest of the conference. Thank you.