Hello everyone, my name is Katie Gamanji, and currently I am the Ecosystem Advocate at CNCF. I have joined this role quite recently, and my responsibility is to lead the end user community while making sure to close the gap between the practitioners of cloud native and the projects within the ecosystem. As well, currently I am one of the members of the advisory board for Keptn, which is a CNCF sandbox project, and I'm working with OpenUK to ensure that open standards are used fairly and are actually created across data, hardware and software. I have mentioned the end user community, and I'd like to give a brief introduction of what it actually represents. The end user community is composed of more than 140 vendor-neutral organizations across different industries and sectors. These organizations are using cloud native technologies to build and distribute their services; they are not selling cloud native technology, which is quite an important distinction. It is the largest end user open source community, and it stands at the center of CNCF's end-user-driven open source. That means that we're looking towards these organizations to define the production experience and growth of cloud native technologies. Pretty much, these users and these organizations are the ones who adopt technologies in their production systems, and based on their feedback we'll be able to grow the ecosystem organically. If you'd like to find out more about the end user community, and how you can showcase the usage of cloud native tools as an end user, please visit cncf.io/enduser, where you're going to have more information on how you can join and be part of this community. Today, however, I'd like to talk about open standards, and more specifically how these are anchoring extensibility within the cloud native ecosystem. And to do so, I'd like to start by talking about containerization.
In this section, I'm going to focus on the Open Container Initiative and Docker, and how these pretty much helped containers gain wide adoption across the industry. Next, I'd like to focus on standards and interfaces; this is more around the Kubernetes ecosystem and how these particular initiatives helped communities to grow and develop the cloud native landscape. In this particular section, I'm going to talk about the container runtime interface, the network interface, and the storage, service mesh and cloud provider interfaces. As well, I'm going to briefly talk about the observability stack and the open standards across metrics and traceability. And I'm going to talk about the open standards within application delivery, talking about KubeVela and Crossplane. And lastly, I'd like to conclude on the impact the open standards and interfaces have had on the vendors, the community and the end users. If you look back seven years ago, the container orchestrator framework space was very heavily diversified. We had tools such as Docker Swarm, Apache Mesos, CoreOS Fleet and Kubernetes, and all of them provided a viable solution to run containers at scale. However, Kubernetes nowadays is known for its ability to define how to run containerized workloads; it is known for its portability and adaptability, but more importantly for its approach towards declarative configuration and automation. And this has been extremely beneficial for Kubernetes, and we can even see the numbers: based on the CNCF survey in 2020, more than 83% of the companies are using Kubernetes within a production system. When we transition towards the development community, more than 2,500 engineers are actively collaborating towards feature build-out and bug fixing. When we look into the end user community, or the practitioner community, more than 41,000 attendees were registered at the KubeCons around the world last year, and this encapsulates KubeCon Europe and North America.
And this flourishing community around Kubernetes pretty much focused on extending its functionalities, and this created what today we know as the cloud native landscape, which resides under the CNCF umbrella, or Cloud Native Computing Foundation. However, the community around Kubernetes was not always as flourishing, and more importantly, it was not always as engaging; the picture at the beginning was quite different. Nowadays, Kubernetes is known for its adaptability and flexibility to run containerized workloads with predefined technical requirements. It will provision the ecosystem for application execution, while shrinking its footprint in the cluster. However, to reach this state of the art, multiple challenges required solutions, such as the isolation of containers. To run effectively and successfully, a container requires two main primitives: cgroups and namespaces. The cgroups are used to impose limits on what resources a process can use, and this refers to CPU and memory. The namespaces, on the other side, make sure to control what a process can see; this includes other processes, mounts, network interfaces and so forth. However, these two primitives have been within the Linux kernel since 2008, and a natural question emerges: why did containers become popular only in recent years? The answer to this is that technology quickly became the competitive edge for any business out there, and they had to find solutions to application complexity, fast deployments and environment uniformity. Application complexity refers to the fact that we have a service-oriented architecture, where multiple microservices construct one bigger application. While this introduces simplicity and standardization, it is still a challenge how we can deploy and keep all of these components up to date within the production system.
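To make the two primitives above a bit more concrete, here is a minimal, purely illustrative Python sketch of how a cgroup v2 resource limit is expressed: a container runtime would write values like these into control files under `/sys/fs/cgroup/<group>/`. The helper function and its names are hypothetical, not part of any real runtime.

```python
def cgroup_v2_limits(memory_bytes, cpu_quota_us, cpu_period_us=100_000):
    """Return the cgroup v2 control-file contents a runtime would write
    to cap a process's resources. Illustrative helper only."""
    return {
        "memory.max": str(memory_bytes),               # hard memory ceiling
        "cpu.max": f"{cpu_quota_us} {cpu_period_us}",  # CPU quota per period
    }

# Cap a process at 256 MiB of memory and half of one CPU
# (50,000 microseconds of every 100,000-microsecond period).
limits = cgroup_v2_limits(memory_bytes=256 * 1024**2, cpu_quota_us=50_000)
print(limits)
```

Writing these values (as root, on Linux) and moving a process ID into the group is, at its core, all that "limiting what a process can use" means; namespaces similarly restrict what the process can see.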
As well, fast deployment is an imperative nowadays: the faster you deploy new features, the more competitive an offering you have for your customers. And the other challenge was environment uniformity: how can the differences, or the delta, between the development environment and the production environment be reduced or completely eliminated? To solve all of these challenges, Docker was introduced, and it had a wide adoption within the community. Docker pretty much packages and executes an application in a loosely isolated environment called a container. With Docker, of course, by default there are a couple of advantages. The first one is that the application environment is defined as code: within the Dockerfile you'll be able to specify how the application can be executed, but more importantly, you'll be able to predefine all of the dependencies, and these are going to be consistent across all of the environments. As well, Docker simplified the testing process. That means that you have a reproducible environment between your development stage and production, and you can have more confidence in what exactly is going to be deployed to every single cluster, or every single environment, at the time. And of course, we're going to have frictionless deployment: as long as you have the Docker engine running within an environment, you'll be able to deploy your application with minimal effort. So the introduction of Docker pretty much made sure that containers are widely adopted across the industry. But there was not only Docker; we can see that multiple initiatives at the time were diversifying the ecosystem of runtimes, images and registries, and it was clear that it's necessary to have a set of standards to make sure that all of these container solutions are compatible with each other. And this pretty much prompted the Open Container Initiative, or OCI, to be introduced.
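The "environment as code" idea above can be sketched in a few lines: every dependency and the execution command are pinned in one reproducible recipe. The generator function below is a hypothetical illustration (real Dockerfiles are written by hand or by build tooling), but the output is a valid minimal Dockerfile shape.

```python
def render_dockerfile(base_image, dependencies, command):
    """Build a minimal Dockerfile as a string: base image, pinned
    dependencies, and the command to run. Illustrative sketch only."""
    lines = [f"FROM {base_image}", "WORKDIR /app", "COPY . /app"]
    lines += [f"RUN pip install {dep}" for dep in dependencies]
    cmd = ", ".join(f'"{part}"' for part in command)
    lines.append(f"CMD [{cmd}]")
    return "\n".join(lines)

print(render_dockerfile("python:3.11-slim",
                        ["flask==3.0.0"],
                        ["python", "app.py"]))
```

Because the base image and every dependency version are written down, the development, testing and production environments are built from the exact same recipe, which is what closes the delta between them.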
The Open Container Initiative is a governance model which focuses on the industry standards across container formats and runtimes. It was established by Docker in 2015, together with other industry leaders, and it mainly focuses on three main core principles: composability, decentralization and minimalism. Composability pretty much focuses on the fact that every single container should run independently, but at the same time should be portable and executed securely. Decentralization focuses on the fact that every single container should run similarly on different platforms and different environments. And when talking about minimalism, it is the fact that one container, or the application within a container, should run a simple process that can be easily adapted to different processes or be experimented upon. And when we're talking about the OCI specifications, there are four of them, from which two are most widely known and used: we have the image and the runtime specifications, plus distribution and artifacts. The image specification pretty much defines how an image for a container should be constructed; it will contain the image index, layers, configuration files, any filesystems and so forth. The runtime specification, on the other side, looks into how to initialize and execute a container. Distribution focuses on how these images can actually be distributed, and it's going to be focused on the construction of registries; pretty much, it will look into operations such as how to pull and push an image, how to list tags, delete an image and so forth. And artifacts is another specification which focuses on how to distribute artifacts which are not container filesystem bundles. At this stage we can see that the Open Container Initiative pretty much introduced standardization when it comes to the adoption of containers on an industry level.
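To give a feel for what the image specification standardizes, here is an illustrative OCI image manifest: a config blob plus an ordered list of filesystem layers, each addressed by a content digest. The media types follow the OCI image-spec; the digests and sizes below are placeholders, not real values.

```python
import json

# Illustrative OCI image manifest, shaped like the image-spec schema.
manifest = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.manifest.v1+json",
    "config": {
        "mediaType": "application/vnd.oci.image.config.v1+json",
        "digest": "sha256:<config-digest>",   # placeholder digest
        "size": 7023,
    },
    "layers": [
        {
            "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
            "digest": "sha256:<layer-digest>",  # placeholder digest
            "size": 32654,
        },
    ],
}
print(json.dumps(manifest, indent=2))
```

Because any compliant registry and any compliant runtime agree on this shape, an image built by one tool can be pulled, unpacked and executed by a completely different one.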
However, at the same time, Kubernetes emerged within the ecosystem, and it slowly but surely gained adoption from the community, and it was clear that it is necessary to introduce a set of standards within the Kubernetes ecosystem as well. The first notable one was the runtime component. The runtime component is pretty much the one which intercepts any request from the kubelet and makes sure that the containers are created on the node with the right specification. These functionalities, at the beginning, were provisioned by Docker and rkt. One of the challenges with these two runtimes is the fact that their logic was very deeply ingrained in the Kubernetes source code. This imposed quite a few challenges. The first one was a low rate of feature development: if you'd like to introduce new features for these runtimes, the release process is tightly coupled with the Kubernetes release process, which in itself is quite lengthy. This also imposed a high bar for new runtimes: if you'd like to create a new runtime and pretty much introduce it to Kubernetes, the developers would require a very in-depth knowledge of the Kubernetes source code. It was clear that a runtime interface was necessary to pretty much abstract the integration of runtime capabilities; more importantly, most of these runtimes were already using the OCI, or Open Container Initiative, standards. The CRI, or container runtime interface, provides an abstraction layer over the integration of container runtimes, from which Docker and rkt would be just some of them. And when you look currently into the ecosystem, there is a plethora of tools provisioning these capabilities, from which we have tools such as Kata Containers, CRI-O, gVisor, containerd, Firecracker, and many more. containerd currently is a graduated CNCF project, and it's known for its ability to provision these runtime capabilities as an industry standard.
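The abstraction-layer idea behind the CRI can be sketched as a simple interface with interchangeable implementations. Note this is a toy analogy: the real CRI is a gRPC protocol between the kubelet and a runtime daemon, not a Python class, and the class and method names here are invented for illustration.

```python
from abc import ABC, abstractmethod

class ContainerRuntime(ABC):
    """Toy stand-in for the CRI idea: the kubelet talks to one
    interface, and any compliant runtime can plug in behind it."""

    @abstractmethod
    def run_container(self, image: str) -> str: ...

class ContainerdRuntime(ContainerRuntime):
    def run_container(self, image: str) -> str:
        return f"containerd started {image}"

class KataRuntime(ContainerRuntime):
    def run_container(self, image: str) -> str:
        return f"kata launched {image} in a lightweight VM"

def kubelet_create(runtime: ContainerRuntime, image: str) -> str:
    # The "kubelet" only knows the interface, never the implementation,
    # so swapping runtimes requires no change to this code.
    return runtime.run_container(image)

print(kubelet_create(ContainerdRuntime(), "nginx:1.25"))
print(kubelet_create(KataRuntime(), "nginx:1.25"))
```

This is exactly why a new runtime no longer needs in-depth knowledge of the Kubernetes source code: it only has to implement the interface.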
CRI-O as well is a lightweight version of a runtime, and of course it's known for its compliance with Open Container Initiative standards. As well, it's only natural to make sure that the containers can be created on any infrastructure, and this means enabling the cloud providers to use their own APIs and libraries to create containers on their infrastructure. As such, Google comes with their own runtime, which is going to be gVisor, and there is another one which comes from AWS, for example, which is going to be AWS Firecracker. As well, during the same time when the runtime interface was introduced, there were a lot of initiatives to democratize the networking tooling around Kubernetes. And this prompted the container network interface to be introduced, which focuses mainly on the connectivity of containers within a cluster, or within a distributed amount of machines. As well, here we have a lot of tools and diversification of tooling, from which we have Calico, Flannel, NSX from VMware, Open vSwitch, Cilium, and many more. Flannel has reported a lot of success from the end user community, and it's known for its simplicity in provisioning the network overlay for a cluster. Calico, in addition to provisioning the network overlay, will come with a network policy enforcer, which ensures that we have fine-grained access control to the services within the cluster. And Cilium has gained a lot of momentum as well, and it's because it allows this transparency of network packets at the networking and application levels. The runtime and network interfaces were extremely important because they made the adoption of containers with Kubernetes more feasible. And from now on, the community concerns itself more and more with how to extend Kubernetes rather than choosing and using very specific tooling. And this can be confirmed by the appearance of other interfaces around storage, service mesh and cloud providers.
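To illustrate what the container network interface standardizes, here is an example CNI network configuration for the reference bridge plugin, expressed as a Python dictionary. The field names (`cniVersion`, `name`, `type`, `ipam`) follow the CNI specification; the network name, bridge name and subnet are example values I chose for the sketch.

```python
import json

# Example CNI network configuration: any CNI-compliant runtime can hand
# this to any CNI-compliant plugin to wire up a container's interface.
cni_conf = {
    "cniVersion": "1.0.0",
    "name": "example-net",
    "type": "bridge",          # which plugin binary to invoke
    "bridge": "cni0",          # Linux bridge the containers attach to
    "ipam": {                  # IP address management sub-plugin
        "type": "host-local",
        "subnet": "10.22.0.0/16",
    },
}
print(json.dumps(cni_conf, indent=2))
```

Swapping `type` to another plugin (Calico, Flannel, Cilium and so on) changes how packets flow, but the contract between Kubernetes and the plugin stays the same.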
The CSI, or container storage interface, was introduced in Kubernetes 1.9, and it moved to general availability in 1.13. Pretty much, this interface focuses on how the services within the cluster can consume storage outside of the cluster. And here we have, again, many tools provisioning these capabilities, and it's actually one of the most developed areas within the cloud native landscape, because more than 60 providers are actively collaborating and integrating with CSI, from which we have Rook, Ceph, OpenEBS and many more. It deserves a mention here that Rook is a graduated CNCF project, and it's known for its simplicity in dynamically allocating storage for an application. As well, there were a lot of initiatives to democratize the introduction of the service mesh within the cluster, and this prompted the service mesh interface, or SMI, to be introduced. Currently the SMI integrates with tools such as Istio, Linkerd, Consul, Open Service Mesh and many more. It is worth mentioning here that Linkerd is an incubated CNCF project, and it's known for its simplicity in provisioning the service mesh capabilities for a cluster. And the last interface I'd like to talk about is the cloud provider interface, which is heavily used by Cluster API. Now, the perspective on interfaces is completely changed in this case, because when I talked about the runtime, network, storage and service mesh interfaces, all of these reside within the cluster. However, with the cloud provider interface, this takes the idea of standards a step further: it defines how the Kubernetes cluster can be created across different cloud providers using the same standards, or in the same manner. And currently this cloud provider interface integrates with providers such as GCP, AWS, VMware vSphere, Azure and many more. Now, there are a lot of initiatives as well to introduce open standards when it comes to the observability stack.
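The dynamic-provisioning idea behind CSI can be sketched as a tiny controller with create and delete volume calls. This is a toy model only: the real CSI is a gRPC service with `CreateVolume`/`DeleteVolume` RPCs implemented by each storage driver, and the class below is invented for illustration.

```python
class ToyCSIController:
    """Toy sketch of the CSI controller idea: the cluster issues
    create/delete calls against any storage backend through one
    interface. Not a real CSI driver."""

    def __init__(self):
        self.volumes = {}  # volume_id -> volume record

    def create_volume(self, name: str, capacity_bytes: int) -> dict:
        vol = {"volume_id": f"vol-{name}", "capacity_bytes": capacity_bytes}
        self.volumes[vol["volume_id"]] = vol
        return vol

    def delete_volume(self, volume_id: str) -> None:
        self.volumes.pop(volume_id, None)

csi = ToyCSIController()
vol = csi.create_volume("data", 10 * 1024**3)  # ask for a 10 GiB volume
print(vol["volume_id"])
```

Because every backend (Rook, Ceph, OpenEBS, a cloud disk service) answers the same calls, an application can request storage without knowing which system actually provisions it.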
When we're talking about observability, usually it encapsulates how we can simplify the instrumentation while at the same time lowering the cost of data aggregation, and introducing standards into the formats and the frameworks to ensure visibility across the stack. Pretty much, this is going to be focused on how we collect data points from our application to ensure that we have transparency into what happens within the systems. And this is translated into collecting metrics, events, logs and traces. Metrics are low cost to collect, but they are the most efficient to diagnose and verify, or actually to define, the state of an application. Events are collected from the application to provide these data points, or historical data points, of when exactly an action happened within the system. However, events are supposed to be at a high abstraction level, just to indicate that something either has happened or something went wrong. For an actual debugging process, logs are necessary, which present high-fidelity data, and the engineering team will be able to recreate step by step what exactly are the functions called within the system. And traces are necessary, especially nowadays in a service-oriented architecture, where to serve one request it is necessary to invoke multiple microservices. Now, with these traces, we'll be able to put all of these journeys together, and we'll be able to recreate the full end-to-end path of what exactly the end user experienced. When we're looking into the ecosystem, we have OpenMetrics, which pretty much focuses on how we can introduce standards into consuming metrics at scale. However, when we're looking into logs, tracing and metrics as well, we had OpenTracing and OpenCensus; however, these merged in 2019 into OpenTelemetry.
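To show what a metrics standard looks like on the wire, here is a sketch of a single sample in the Prometheus/OpenMetrics text exposition format: a metric name, a set of labels, and a value. The helper function is hypothetical; the output line format is the real one that any compliant scraper can parse.

```python
def render_metric(name, value, labels):
    """Render one sample in the Prometheus/OpenMetrics text exposition
    format: name{label="value",...} value. Illustrative helper."""
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return f"{name}{{{label_str}}} {value}"

line = render_metric("http_requests_total", 1027,
                     {"method": "get", "code": "200"})
print(line)  # http_requests_total{code="200",method="get"} 1027
```

Because every backend agrees on this format, you can instrument a service once and later swap or combine collection systems without touching the application.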
And OpenTelemetry now is focused on how we can identify, gather and collect traces, logs and metrics around an application, to make sure that we can have an indicator of the performance and the behaviors of the system. And in addition to observability, there were initiatives to introduce standards within application delivery. Now, this pretty much focuses on how an application can be deployed, but at the same time, how we can enrich the overall developer experience. And these initiatives have been crowned by tools such as KubeVela and Crossplane. KubeVela has been introduced, or actually it was revealed, at KubeCon North America in 2020 by the Open Application Model community. And it focuses on how it can abstract the deployment of an application completely from an end user perspective. That means that if, as a developer, you'd like to deploy using KubeVela, you don't need to be aware of the resources behind Kubernetes, such as deployments, services, ingresses and so forth. You'll just be able to provision a configuration file specifying what functionalities or what variables you'd like for your application, and this is going to pretty much deploy the application in the background. As well, Crossplane has gained a lot of momentum from the community lately, and it is because it extends the Kubernetes API and makes it possible to deploy an application on any platform by using the power of operators and custom resource definitions. Now, we have seen a lot of initiatives to introduce these open standards and interfaces amongst the cloud native ecosystem, and as a result, the cloud native ecosystem transformed its identity multiple times. And this overall has been possible because Kubernetes overall is not opinionated. Of course, it's going to be opinionated when it comes to the networking model, for example; in this case, that every single pod should have an IP.
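The tracing idea that OpenTelemetry standardizes can be sketched with a toy span record: every hop in a request shares one trace ID and links to its parent span, which is what lets a backend reassemble the end-to-end journey. The record shape below is invented for illustration, not OpenTelemetry's actual data model.

```python
import uuid

def new_span(trace_id, name, parent_id=None):
    """Create a toy span record. All spans of one request share a
    trace_id; parent_id links a span to its caller. Illustrative only."""
    return {
        "trace_id": trace_id,
        "span_id": uuid.uuid4().hex[:16],  # random 16-hex-char span id
        "parent_id": parent_id,
        "name": name,
    }

# One request travels through two services: the child span keeps the
# same trace_id and points back to the root span.
trace_id = uuid.uuid4().hex
root = new_span(trace_id, "GET /checkout")
child = new_span(trace_id, "charge-card", parent_id=root["span_id"])
print(child["parent_id"] == root["span_id"])  # True: linked into one journey
```

Walking the parent links from any leaf span back to the root reconstructs exactly the full path the end user's request took across the microservices.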
However, it's not going to be as such when it comes to the underlying tooling that you're going to run on top, or what kind of plugins you can introduce to ensure a functional cluster. And this had a huge impact; when you put it into perspective, it had a huge impact on the vendors, the end users and the community. When we're looking into the vendor community, the introduction of open standards and interfaces means innovation. As a vendor, you don't have to concern yourself with how to integrate your services within Kubernetes; usually these standards and interfaces are already going to be there, so as a vendor you can focus on how to deliver customer value with minimal effort. When we're looking into the end user community, the emergence of open standards and interfaces means extensibility. It was never as easy as it is today to benchmark different tools with the same capability. As an end user, you have the leverage to further empower or create focus on your product, and choose the right tool for your problem. When we're looking into the community, the emergence of interfaces and open standards means interoperability, because we've created this canvas of different tooling, where multiple solutions for the same problem are actually embraced. And this has been extremely beneficial for Kubernetes, because over time multiple tools were built around it to extend its functionalities, and this created what today we know as the cloud native landscape. And this has been possible because the open standards and interfaces are the central engine for innovation that anchors extensibility; but more importantly, it's an ecosystem that embraces contrasting solutions for the same problem. If you'd like to find out more about today's talk, please visit my Medium account; I'm going to have an article written about this talk in a bit more detail, so you're going to find more resources there.
And if you have any more questions, I'm going to be, of course, available on social media, such as Twitter and LinkedIn. Enjoy the rest of the conference. Thank you.