Welcome to the SIG Network update and directions. We will be going over the major things that have changed in SIG Network since the last KubeCon. The Network special interest group is responsible for the Kubernetes networking components. This includes pod networking within and between nodes, such as CNI and IPAM; ingress and egress traffic; service abstractions, which covers service discovery and load balancing at L4 and L7; network policies and access control, which is essentially network security within a cluster; and all of the APIs associated with these functions, including Pod, Node, Endpoints, EndpointSlice, Service, Ingress, Gateway, and NetworkPolicy, to name a few. We have a Zoom meeting every other Thursday, as well as a very active Slack channel; the link to our community page is shown below. If you are new to Kubernetes or need a refresher, we have covered the basics in previous intro videos at KubeCons. This presentation will focus mostly on the cool stuff that has been happening in the past few months.

The SIG has been paying special attention to our backlog. We would like to make sure that proposals in alpha and beta make it to GA and stable, rather than staying in a half-complete state for extended cycles. The SIG is also focused on a couple of major projects. The first is dual-stack support, which has landed as GA in 1.23. The second is the Gateway API for L4 and L7. And the third is network policy improvements.

Now let's take a look at some of the proposals and updates that have happened since 1.21. A couple of smaller improvements have graduated to stable. Disabling load balancer node ports is now GA in 1.23; it lets the user avoid allocating node ports when creating a Service of type LoadBalancer. IngressClass can now reference a parameters object that is namespace scoped, which takes care of some of the common use cases for in-cluster proxy deployments and self-service ingress deployments.

The new topology aware routing feature is now beta, and along with this comes the deprecation of the old topologyKeys-based API. The new topology hints mechanism is more flexible than the previous iteration and gives implementations more leeway in determining traffic routing when the user specifies that they want their traffic to be topology aware. The key thing to note here is that the topologyKeys field is now completely deprecated; the feature has been reworked as topology aware hints.

DNSConfig now lets you specify more than five search entries, matching the behavior of modern libc implementations. The new limits are 32 search path elements, with a total path length of at most 2048 characters. This is alpha, so please try it out and let us know if you have any problems.

Also in alpha is the ability for the node IPAM controller to allocate IPs from multiple non-contiguous CIDR ranges. This allows the cluster admin to dynamically manage IP ranges using the API, adding and removing valid ranges from the cluster. It is especially pertinent to people who use the built-in IPAM rather than getting IPs from their CNI provider. Note that this does not change the behavior of the node podCIDR: once a node has been assigned a slice of IPs, that slice cannot change for the lifetime of the node.

Finally, we would like to call your attention to a CVE that was discovered in the 1.22-1.23 cycle. The vulnerability revolves around the power of the Endpoints and EndpointSlice APIs to direct traffic with unintended effects. If a malicious user is able to create or edit EndpointSlices in the API, they could direct traffic to arbitrary backend IPs in the cluster. For example, if your ingress or load balancer implementation is shared between namespaces, network policy is not able to distinguish traffic to a destination based on just the source IP of the ingress or LB. In this case, it may be possible for a malicious user to send traffic to your backend, bypassing security controls such as network policy and load balancer source ranges. The mitigation for this issue is to treat the ability to create or modify Endpoints and EndpointSlices as a privileged operation and remove this capability from the ordinary users of your cluster.

On to the major projects ongoing in the SIG. First, we're happy to announce that the wait is over: IPv4/IPv6 dual stack is going GA in 1.23. What this means is that Services and pods now support both IPv4 and IPv6, in either single-stack or dual-stack modes. There are specific APIs designed around migration of existing Services between single stack and dual stack, within some reasonable limits. Dual stack is also possible with load balancer Services, and this supports any combination of the stacks. Previous IPv4/IPv6 semantics remain unchanged with this support, namely the egress behavior, as well as having a single IPv4 and IPv6 address on a pod; that is, there are no multiple IPs per pod within the same family. As part of the dual-stack effort, we are now also making the API server service a dual-stack enabled service. There are some subtleties to take care of, namely which of the features need to be enabled without breaking legacy apps. Dual-stack API server endpoints will be published using EndpointSlice. The client code will be updated to understand dual stack, but environment variables such as KUBERNETES_SERVICE_HOST will remain the same to avoid breaking existing applications.
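To make the Service side of this concrete, here is a minimal sketch of a dual-stack Service using the GA fields; the name, selector, and port numbers are placeholders:

```yaml
# Minimal sketch of a dual-stack Service (GA in 1.23).
# "my-app" and the ports are placeholder values.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  ipFamilyPolicy: PreferDualStack   # SingleStack | PreferDualStack | RequireDualStack
  ipFamilies:                       # optional; order expresses preference
    - IPv4
    - IPv6
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

Within the limits mentioned above, migrating an existing single-stack Service is expressed by updating ipFamilyPolicy on the live object.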
The Gateway API has also made significant progress during the 1.22-1.23 time period. Gateway API is the next version of the Kubernetes APIs describing L4 and L7 services in the cluster. It aims to be role oriented and extensible, allowing users to configure services with modern functionality, split across ownership boundaries. We have been making steady progress towards v1alpha2. The intention is that if v1alpha2 does not uncover major issues with the shape of the API, it can lead directly to v1beta1 and GA. From v1alpha2 onwards, we expect to maintain as much backwards compatibility as possible.

Detailed release notes can be found at the link below, but here are some highlights. First, we have moved to the official API group for the project, which is gateway.networking.k8s.io. Second, one of the major changes was the simplification of the way gateways and routes bind to each other. In v1alpha2, gateways can select routes by kind and namespace, with the default being routes in the same namespace as the gateway. This enables ease of use for simple deployments while giving flexibility for more complex cross-namespace relationships. Routes directly reference the gateways they attach to; we previously had a more complex label selection mechanism, but that turned out to be too complicated to understand and support. The third improvement is a separate resource called ReferencePolicy that governs whether or not a given resource is allowed to be referenced from another namespace. This makes cross-namespace resource sharing safe, and we are seeing other areas of the API, not just within networking, look to this pattern.

We have also come up with a common design pattern for policy attachment and inheritance in the GatewayClass, Gateway, and Route resource graph. Because of this, the BackendPolicy resource has been removed in favor of the more generic policy attachment scheme. Finally, routes no longer contain certificates or TLS information; while working through the design, we found that it had too many edge cases and is probably handled better by an API outside of the routes. There are many other improvements that can be found at the release link shown below, and you can also try out the v1alpha2 API for yourself with one of the implementations on this page.
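To give a feel for the new attachment model and for ReferencePolicy, here is a hedged sketch in v1alpha2; the GatewayClass, names, and namespaces are all hypothetical:

```yaml
# Sketch of v1alpha2 attachment: the Gateway controls which routes may bind,
# and routes reference the Gateway via parentRefs. All names are placeholders.
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: Gateway
metadata:
  name: shared-gateway
  namespace: infra
spec:
  gatewayClassName: example-class        # hypothetical GatewayClass
  listeners:
    - name: http
      port: 80
      protocol: HTTP
      allowedRoutes:
        namespaces:
          from: All                      # default is Same (same namespace only)
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
  name: app-route
  namespace: app-team
spec:
  parentRefs:                            # routes directly reference gateways
    - name: shared-gateway
      namespace: infra
  rules:
    - backendRefs:
        - name: app-backend              # a Service in another namespace
          namespace: backend-team
          port: 8080
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: ReferencePolicy                    # permits the cross-namespace backendRef above
metadata:
  name: allow-app-team-routes
  namespace: backend-team
spec:
  from:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      namespace: app-team
  to:
    - group: ""                          # core API group, i.e. Services
      kind: Service
```

The route attaches to the gateway through parentRefs, and the cross-namespace backendRef only resolves because a ReferencePolicy in the target namespace allows it.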
The network policy working group has been hard at work on improvements and extensions to the current API. There are a number of threads that are close to becoming KEPs. The first is the use of DNS names, i.e. FQDNs, as destination selectors. The second is the use of services as a selector for the source or destination, versus just using pod selectors.

We have also brought a couple of network policy features to beta in 1.23. The first is the addition of port ranges to the NetworkPolicy API, which lets you specify an entire range of ports rather than one port at a time. The second is the availability of a default namespace label carrying the namespace's name, which enables the common case of selecting a namespace in a policy by name.

One of the biggest changes being proposed to network policy is cluster network policy, which allows operators of a cluster to create safe-by-default and guardrail policies for their users. Here we're looking for a happy medium that provides the needed functionality without adding complexity. One of the biggest questions we are currently looking at is whether there is some inherent complexity here that makes the most generic answer, giving network policies priorities, the one that makes the most sense. Below we have a couple of choices from the proposals. Choice A applies a first set of rules to empower users, then a fixed deny, then an allow that punches holes in the deny, and then the existing network policy rules. Choice B reverses the order of the allows and denies relative to the existing rules. We can see that these two choices, A and B, are in tension with each other. Choice C, where we use priorities, can actually express both A and B. Up to now we have been trying to avoid adding priorities because of the complexity, but that complexity may be inherent in trying to write policies that work for all situations. We would love to hear feedback about this from the community.

Finally, let's talk about some of the other activity in the SIG. First, there is a working group looking at modernizing the kube-proxy implementation. This is KPNG, or Kube Proxy Next Generation. They are looking at moving kube-proxy out of tree, cleaning up the code, and exploring potential ways to add new functionality. Another large group is looking at rebooting the community around Ingress NGINX, which is one of the largest and most commonly used ingress implementations. With a new set of community maintainers, we're hoping to carry forward some of the feature work and support for the NGINX ingress implementation.

So there's a lot going on in SIG Network, and much help is wanted, especially to move KEPs that are in progress from alpha to GA. Thank you for attending, and now it's time for Q&A.