Hello everyone, and welcome to KubeDay Japan. We're excited to be here and actually connect with the local Japanese community. My name is Katie Gamanji, and I am a Senior Field Engineer at Apple, and also I am a TOC, or Technical Oversight Committee, member for the CNCF. My name is Kohei Ota. I'm also a Senior Field Engineer at Apple, and I am a local Kubernetes CNCF ambassador in Japan. Today, we'd like to invite you to reverse engineer cloud native and, more specifically, look into interoperability and community. To do so, firstly we're going to look into the emergence of interfaces. This is pretty much at the basis of our landscape: we embrace multiple solutions for the same problem. We're going to look into the networking, runtime, storage, service mesh, and cluster provisioning interfaces. Next, we're going to focus on how we can build a sustainable community, and here we're going to look into different practices that we can establish to create a safe space for the next generation of cloud native practitioners to operate in. And lastly, we're going to put all of this in perspective and see the impact that cloud native has had on vendors, adopters, and the wider community. Now, let's start off by talking about Kubernetes. We're all here for KubeDay; we all know Kubernetes. The Kubernetes community is getting larger and larger day by day and impacting a broader range of industries, and we can see it in the statistics. This is one of the results of the CNCF survey, and according to it, 96% of the people who answered are either already using Kubernetes or have evaluated it for their platform. We have over 62,000 Kubernetes contributors, and we had over 35,000 KubeCon attendees in 2022, which shows how large our community has become. However, the community around Kubernetes was not always as flourishing, and more importantly, it was not always as engaging. The picture at the beginning was quite different.
Nowadays, Kubernetes is known for its ability to provide the environment for application execution while shrinking its footprint in the cluster. It's all about efficient management of resources. However, to reach this state of the art, multiple challenges required solutions, such as support for different networking and runtime systems. This is where we had the CNI and the CRI promoted within the landscape: the Container Network Interface and the Container Runtime Interface. We'd like to dive a bit deeper into these interfaces, which were quite important when it comes to the increased adoption of Kubernetes. Now, let's focus on networking first. Exploring the networking fabric within a Kubernetes cluster is quite a challenging task. Kubernetes is known for its ability to schedule workloads on a distributed set of machines while preserving the connectivity and reachability of these workloads. As such, the networking topology is highly assertive, and at its gravitational core is the idea that every single pod should have a unique IP. This particular principle dismisses the need for dynamic port allocation, but it brings to light new challenges, such as how containers, pods, services, and users are able to access our application. Now, to actually see where exactly we need an interface when it comes to networking, I would like to showcase how a packet is sent between two different applications in inter-node communication. We're going to have an application A on a node, and we're going to send a packet to application B on a different node. The first thing that's going to happen is that we're going to look inside the pod to see if any containers are able to serve our request. For the sake of the example, we will not be able to do so, which means the request is going to go outside of the pod to the root network namespace of the device. At this stage, we have visibility of all pods within our machine.
Again, we will not be able to serve our request, which means we're going to go outside of the device, outside of the node, through the ethernet device towards the routing table. The routing table is quite an important element here because it has the mapping between every single node and the IPs to be allocated on that node. That means that with minimal hops, we will be able to identify the node we need to reach out to, and in a reverse manner, we're going to go through the ethernet device towards the root network namespace and the pod which will be able to serve our request. Now, the networking topology dictates that every single pod should be reachable via its IP. As such, we needed inclusivity for different networking systems, and here is where the CNI, or Container Network Interface, was introduced. Pretty much, it introduces the network overlay for a cluster, and it has two operations: addition and deletion. It will make sure to allocate an IP to a pod when it's created, and it will ensure to remove any resources when the pod is not going to be there anymore. Now, when we look into the landscape, we have a plethora of tools solving the networking problem, and this is where we see interoperability: multiple tools for the same problem space. Flannel here is known for its simplicity in providing that network overlay for a cluster. However, if you want more fine-grained access control to services within your cluster, you might look into a tool such as Calico, which comes with a network policy enforcer. And if you want to tackle security as part of the networking, you might look into a tool such as Cilium, which gives you visibility at layer three and layer seven, so pretty much the networking and application layers. When Kubernetes was born, the community decided to use Docker as the primary container runtime, and then CoreOS introduced rkt as an alternative.
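To make that addition and deletion contract a bit more tangible, here is a minimal sketch of the kind of network configuration file a CNI plugin reads (the name and subnet are illustrative; this example uses the reference bridge plugin with host-local IPAM, which the kubelet would pick up from /etc/cni/net.d/ on a node):

```json
{
  "cniVersion": "0.4.0",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.1.0/24"
  }
}
```

When a pod is created, the runtime invokes the plugin with the ADD command and this configuration on stdin, and the plugin returns the IP it allocated; on pod deletion, the DEL command releases that IP and tears down the interface. Tools like Flannel, Calico, and Cilium each ship their own plugin binary and configuration, but all speak this same contract.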
However, because there was no standardization back then, the implementations of the API client libraries for Docker and rkt were embedded directly in the Kubernetes source code. As the project grew, it became problematic to keep maintaining that code within the same codebase, even though they are completely separate projects. Now, we do have a standardized interface called the CRI. The CRI basically gives Kubernetes an abstraction layer; it lets runtime providers make changes more flexibly, and Kubernetes core no longer depends on the different projects as external libraries. There are two major implementations of CRI runtimes in the community. One is containerd. containerd was born from Docker as part of the open source Moby Project, and it still powers container creation and deletion for Docker as well. CRI-O, originally created by Red Hat, provides the same functionality with simplicity and security as a CRI-native container runtime. Thanks to the standardization, you can also choose different approaches to make your containers more secure, for example by using gVisor or Firecracker as the OCI runtime. Now, when we look into networking and runtime, these two interfaces were important to increase the adoption rate of Kubernetes. From then on, the community didn't focus on settling down on very specific tooling, but on extending further and making Kubernetes a pluggable system. We can see this through the innovation wave, pretty much the appearance of new interfaces, such as the Service Mesh Interface, the Container Storage Interface, and the Cluster API when it comes to infrastructure provisioning. Now, the Service Mesh Interface was introduced at KubeCon Barcelona in 2019, and it provides a solution to integrate a service mesh within your cluster.
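To show how this runtime pluggability surfaces to an end user, here is a sketch of the RuntimeClass mechanism for running a pod under a sandboxed runtime such as gVisor. It assumes the node's CRI runtime (for example, containerd) has already been configured with a handler named runsc; the pod and class names are illustrative:

```yaml
# RuntimeClass mapping a name to a runtime handler on the node.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc          # gVisor's OCI-compatible runtime binary
---
# A pod opting into the sandboxed runtime.
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-nginx
spec:
  runtimeClassName: gvisor
  containers:
  - name: nginx
    image: nginx
```

Because the CRI decouples Kubernetes from any one runtime, swapping runsc for a Firecracker-based handler is a node configuration change rather than a change to Kubernetes itself.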
Now, for anyone new to service mesh, this is pretty much a dedicated infrastructure layer that concerns itself with the traffic sent across different services within a cluster. When a request is issued by a user, it is actually going to traverse different microservices, and with a service mesh, you'll be able to fully recreate that path. The SMI currently integrates with tools such as Linkerd, Istio, Consul from HashiCorp, and many more. It is worth mentioning that Linkerd is currently a graduated CNCF project, and Istio recently became an incubating CNCF project. The CSI was introduced in Kubernetes 1.9, and it provides flexibility to Kubernetes and the maintainers, just like the CRI and CNI, but specifically for storage drivers. It doesn't just support open source implementations, such as Rook and Ceph or OpenEBS; major public cloud services also support this type of platform. Cluster API is a project that lets you create your Kubernetes cluster and provision it as a Kubernetes custom resource, just like managing your manifests in YAML files. Cluster API has an interface called the provider interface, which gives you the option to choose on which cloud provider or infrastructure layer you want to provision your cluster. Now, when we put this into perspective, all of these interfaces created the landscape that we know today, with its multiple tools. While technology is the gravitational point of cloud native, nothing would be possible without the community around it. From the beginning, the Kubernetes community has operated on the principle that inclusive is better than exclusive. We try to build communities through open governance, transparency, and a diligent code of conduct. This is what invites everyone to contribute and, more importantly, pushes innovation forward.
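As a small illustration of the SMI idea, here is a sketch of a TrafficSplit resource, the SMI API for shifting a weighted share of traffic between service versions. The service names and weights are hypothetical, and the exact apiVersion varies across SMI spec releases; a conforming mesh such as Linkerd would act on a resource shaped like this:

```yaml
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: checkout-split
spec:
  service: checkout        # the root service clients address
  backends:
  - service: checkout-v1   # 90% of traffic stays on the current version
    weight: 90
  - service: checkout-v2   # 10% canaries onto the new version
    weight: 10
```

Because the resource is mesh-agnostic, the same manifest can drive a canary rollout whether the mesh underneath is Linkerd, Istio (via an adapter), or another SMI-compatible implementation.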
Now, if you want to contribute or be part of any project that we've showcased today, or any project within the landscape, I definitely invite you to go to contribute.cncf.io. You'll find more details on how you can contribute right now to your favorite project. However, one thing I would like to highlight is that you do not need to write code to be an active member of the community. You can actually get involved through the community itself: we have SIGs, or special interest groups, and TAGs, or technical advisory groups. Within the CNCF, we have seven TAGs that focus on very specific areas, some of which we already talked about, such as networking, runtime, storage, observability, and many more. I definitely invite you to check out these TAGs; all of the information is publicly available on GitHub, such as the meeting notes and the meeting invites. Each individual can grow their career and experience. The CNCF provides certification programs for developers and experts to prove their expertise: the Certified Kubernetes Application Developer, Certified Kubernetes Administrator, and Certified Kubernetes Security Specialist. There's also a new program for beginners and non-technical cloud-native contributors called the KCNA, or Kubernetes and Cloud Native Associate. Now, as a community, we should not only focus on solving the problems that we have now. We should focus on solving the problems of the future and actually create a safe space for the next generation of cloud-native practitioners to operate in. As such, I would like to encourage everyone to continue building a diverse community. We need to further diversify our executive leadership, governing and advisory boards, and technical committees. We need to create a safe space where different feedback and ideas are valued, but more importantly, where the next generation of cloud-native practitioners will be able to grow and collaborate. So, here are the takeaways we want to highlight today.
So, the cloud-native community has grown a culture where solution providers and vendors can focus on their innovation by having standardized interfaces. When we look into the end-user or adopter community, cloud native translates into extensibility: it was never as easy as it is today to benchmark different tools for the same problem. As an end user, you can choose the right tool for your technical stack with minimal compromises. And when we look into the community, cloud native is all about sustainability. We still have a lot of work to do when it comes to the diversification of our community, but at the moment we should focus on solving the problems that we're going to have in the future and create that safe space for everyone to contribute. So, that is all from us. Please reach out if you have any questions; we are here, and we are also on Twitter. And Apple has some openings as well, if you're interested; please visit jobs.apple.com. This is Katie Gamanji, and we're looking forward to seeing how you can shape the cloud-native ecosystem. Thank you, and enjoy the rest of the conference.