Hi, my name is Stephen Higgins. I co-founded a company called XigXog with John Long. XigXog's first product is KubeFox. Today, we're going to chat about one of KubeFox's superpowers: virtual environments. I'll sometimes refer to them shorthand as KubeFox VEs, or simply VEs. KubeFox VEs enable you and your teams to focus on application development and not be hamstrung by DevOps. We'll start by walking through the basic challenges and desires of teams as they collaborate on the development of applications for Kubernetes. Then we'll talk about the needs of companies from a similar perspective. We'll then walk through some hypotheticals for the enhancement of a simple application, how that would look with bare Kubernetes, and how it would look with KubeFox.

I'd like to mention that we're very interested in your feedback and welcome your participation in our journey. We're just getting started, so it's a good time to check us out. A great way to get started is through the QuickStart, which we just ported to Azure, by the way. You can see links for the KubeFox repo and QuickStart here at the left, as well as a general email address for XigXog, and finally, direct email addresses for John and me. We also have a livestream coming up in March, where we'll walk through some coding exercises with Hasura so you can see KubeFox in action.

Without further ado, let's chat about team and company desires. What do teams want? Simply put, teams want to innovate without friction. They want to prototype rapidly, compare versions for functional and behavioral differences, easily test, and do so without worrying about provisioning. Low friction also extends to an absence of bureaucracy: being able to quickly get things up and running and make changes. For instance, modify any app component, switch data stores, trace an individual component, iteratively redeploy and test, et cetera. And they want to be able to do all of this without DevOps involvement.
The idea would be independent sandboxes for every team and even for individual developers. Sandboxes that could be stood up, torn down, and run side by side to compare and contrast. And where no analysis or other team involvement was necessary to wire components together. Finally, developers want to focus on the application, not security, not telemetry, and not other infrastructure.

What do companies want? Companies share the desire for innovation. Companies must innovate rapidly to drive competitive advantage and build technical barriers to entry. And they want to be as responsive as possible to their customers. This gets tricky, particularly with high-end vertical applications. If an application is being enhanced for different customers in different ways, maintaining a cohesive architecture becomes challenging. While companies aspire to these goals, they're also obligated to keep an eye on costs. Cost controls always take the form of resource utilization controls, where those resources are the usual suspects: people, hardware, and software. And in today's world, where developers are highly valued, companies want to be known for advanced technology and positive cultures to ensure they attract and retain the best talent. That's even more important when reputations travel at the speed of social media.

These factors conspire to create inherent conflicts. As noted before, cost controls manifest as resource utilization controls, and that means controlling what people and resources are made available to a project. Managing provisioning requires coordination and collaboration to distill resource utilization to the minimum. That task usually falls to DevOps. Teams work with DevOps to review what can and can't be shared, and that leads to constantly changing configurations. These exercises quickly become ingrained in how software is developed, and the bureaucratic nature of them can slow progression of a technology significantly.
Due to the difficulties and inefficiencies imposed, these factors can and often do lead to segregation and duplication of resources. When we segregate, it becomes very easy for applications to diverge and increasingly difficult to maintain architectural integrity.

Let's take a look at a simple retail application. We have three modules, a web UI module, a catalog module, and a fulfillment module, each of which comprises two components. And we have three shared components. Suppose I have requests from three customers for features to be added to the application. I decide to assign three different teams to develop these features. I want each team to work in parallel and get through the work quickly and efficiently, so they need to be able to function with as little overhead as possible and without conflicting with the other teams.

Let's drop back into developer utopia again. It would be very desirable if each team could launch into their projects with no concerns around provisioning or resource management. Even within a team, we'd like to empower engineers to prototype quickly without impacting their teammates. One way to achieve this with minimal resource utilization would be to namespace each team. They could then share the components that are not changing. However, that does take configuration to achieve. Segregation by namespace may work if we're talking about a single component. Configuration would be required, but the balance of the components could be shared across teams. And segregation may work if each team is working within a single module. The pattern here is probably clear: the larger the subset of shared components, the easier and more efficient it will be to namespace each team and wire things together in a rational way. But these are really not realistic scenarios. What is far likelier is that each team is working on different components or a mixed set of components.
The tipping point where it becomes impractical or too painful to share resources actually happens pretty quickly. Even enabling a single engineer to rapidly innovate is difficult. Again, what we really want is for each team to function as if they were the only team working on the app. They'd be able to quickly and easily prototype and test various versions without any overhead. And ideally, we'd be able to extend those benefits to individual engineers within those teams.

The reality is that the use of namespaces requires significant DevOps involvement. Let's delve into that. Using namespace segregation means that we need to determine who's working on what. This is imperative, as it dictates how we structure the system or systems. We need to duplicate ConfigMaps and Secrets, which cannot be shared across namespaces. That necessarily means remembering to copy those resources across namespaces when they change. We need to configure deployments, ingress, routing, and telemetry. Routing requests through different versions of the app in and of itself can present some significant challenges. Each team will be working on different components and determining how they'll test, and the results of those tests will be dependent on correct configuration. We'll probably also create an additional namespace for integration testing, for instance, for smoke testing. Sometimes we end up going down the brute force route and creating separate clusters. Of course, there are significant trade-offs, but in some cases the brute force option imposes the lowest DevOps overhead and is the least bureaucratic. A clear drawback here, though, is obvious: cost.

How do things change with KubeFox in the picture? The changes with KubeFox are pretty significant. Inter-team coordination becomes unnecessary, and DevOps doesn't need to be burdened with day-to-day logistics around individual team efforts. We don't need to configure ingress. We don't need to configure namespaces.
We don't need to duplicate and manage ConfigMaps and Secrets. We also don't need to configure deployments or telemetry. In fact, in some cases deployments are not even necessary. By the way, regarding ConfigMaps and Secrets, we can use KubeFox VEs to customize things. For instance, we can switch the database we're accessing and layer in different credentials. And we can do so for the same deployment and without deploying any new code.

Developers can make changes to the application, and when they're ready, they can publish the app. They don't need to configure CI/CD to do so. This is important to emphasize: developers are largely removed from CI/CD logistics. They don't need to worry about or manage deployments for individual components. KubeFox handles this for them. KubeFox, not teams or developers, evaluates the repo and determines what has changed, then containerizes and distills the deployment to only changed or new components. Developers always interact with KubeFox at an application level. Teams and developers can deploy multiple versions of the application, and those versions can coexist side by side with prior versions. And developers can make changes and test those changes against prior versions of the app. And they can do so without DevOps, without wiring things together, and without additional configuration.

One reason KubeFox can achieve this is that KubeFox routes dynamically at runtime. Requests are evaluated at each component-to-component transition and routed to the correct version of a component for that version of the application. Telemetry is available by component, by application, or across the cluster.

Remember our messy example? Let's take a look at how teams would function in a KubeFox world. Team A is working on a single component, the web UI. When Team A is done working on the web UI, they commit their changes to the repo. Doesn't sound very much different at this point, does it? When ready, Team A publishes the application.
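To make the dynamic-routing idea above concrete, here is a minimal Python sketch of the concept. This is my own illustration, not KubeFox source code, and the VE and component names are hypothetical: each VE pins only the component versions it changed, and anything unpinned falls back to the base release, so unchanged components are shared.

```python
# Conceptual sketch of per-VE dynamic routing (not KubeFox source code).
# Each virtual environment pins only the component versions it changed;
# everything else falls back to the base release and is shared.

BASE_RELEASE = {"web-ui": "v1", "catalog": "v1", "fulfillment": "v1"}

VE_OVERRIDES = {
    "team-a": {"web-ui": "v2"},  # Team A changed only the web UI
}

def resolve(ve: str, component: str) -> str:
    """Pick the component version that serves a request in this VE."""
    return VE_OVERRIDES.get(ve, {}).get(component, BASE_RELEASE[component])

# In Team A's VE, requests hit their new web UI but the shared catalog;
# a VE with no overrides sees the original app end to end.
assert resolve("team-a", "web-ui") == "v2"
assert resolve("team-a", "catalog") == "v1"
assert resolve("prod", "web-ui") == "v1"
```

Because this resolution happens per request at runtime, no per-team ingress or routing configuration has to be written ahead of time.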
KubeFox looks at the repo, determines that only the web UI changed, and containerizes and deploys only the new version of the web UI to the cluster. Remember, Team A is publishing the application. They're not publishing the component. KubeFox is the one that figures out what changed. After Team A deploys, two versions of the retail app are available: the original version and the updated version that uses version two of the web UI. It's that simple.

Let's take a moment to delve a bit deeper into KubeFox capabilities and how VEs help simplify engineering workflows. This is a sample of what the YAML looks like for the Team A VE. It's surprisingly simple, and you use kubectl apply to write it to the cluster. When Team A deploys, they deploy to the Team A VE. Traffic is routed to the Team A VE by subpath, so it's extremely simple for Team A to test their changes without being impacted by the activities of their Team B and Team C colleagues. When teams deploy, they deploy to a KubeFox VE that they define and control.

Team A can optionally release the retail app. When an app is released to a VE, KubeFox defaults any traffic directed to that VE to the released version of the app. So Team A can modify and release iteratively without changing any of their unit tests. Versions of the app that are not released can be accessed via query parameters. A common workflow is for a team to repeatedly iterate and test their changes against a prior version, for example, a production version. KubeFox makes life simple for developers and QA by enabling them to rapidly iterate their changes and retest. It's important to emphasize that developers and teams can accomplish all of this independently. For instance, there could be an individual engineer in Team A who creates their own VE to experiment with a change. This is a simple overview of the composition of the app and how traffic would be handled when Team A releases version two of the web UI to the Team A VE.
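For reference, a VE manifest along these lines might look roughly like the following. This is a sketch based on KubeFox's VirtualEnvironment resource; treat the API version, field layout, and variable names as approximate rather than authoritative, and check the KubeFox docs for the exact schema.

```yaml
apiVersion: kubefox.xigxog.io/v1alpha1
kind: VirtualEnvironment
metadata:
  name: team-a
  namespace: kubefox-demo
spec:
  environment: dev            # the parent environment this VE derives from
data:
  vars:
    subPath: team-a           # traffic under /team-a/... routes to this VE
```

Writing it to the cluster is a single `kubectl apply -f team-a-ve.yaml`; Teams B and C would each apply an analogous manifest with their own name and subpath.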
Default traffic would use version two of the web UI, and it would appear that the only version of the retail app were this one. The prior version would still be available, though. Unreleased or prior versions of apps can still be accessed via query parameters. We can run and access multiple deployments simply by specifying the name of the deployment, as shown in the URL here, and the KubeFox VE. We can deploy multiple times and access those deployments as if they were running in their own private sandboxes. Team A can access the original version of the retail app after deploying the new version. It will appear as if nothing changed. No deployment or other configuration changes are necessary to achieve this.

Let's get a little bit more involved. Let's say that Teams B and C are going to make their own changes, iterate, and test. As before, each team commits their changes. Teams B and C operate as if they're independent and have autonomy over the cluster. They each publish their changes, just like Team A did, to the Team B and Team C VEs, respectively. Again, KubeFox distills the deployments to changed and new components only, containerizes them, and loads them into pods in the cluster. For each team, only three components changed, so when each team deploys its new version of the app, three additional pods will be created on the cluster. Note that when Teams B and C deploy, the cluster changes only differentially, meaning that the cluster has only seven additional pods running: one for Team A and three each for Teams B and C. However, there are four different versions of the app running: the original version, Team A's version, Team B's version, and Team C's version. This is a key value of KubeFox: deployment distillation. KubeFox creates new pods only when there is a container that has not already been loaded into the cluster. Deployment distillation, along with dynamic routing, enables KubeFox to maintain the minimum subset of pods in the cluster.
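The deployment distillation just described can be sketched in a few lines of Python. This is an illustration of the idea, not KubeFox code, and the component and image names are hypothetical:

```python
# Conceptual sketch of deployment distillation (not KubeFox source code).
# Only components whose container images are not already running in the
# cluster need new pods; everything else is reused across versions.

def distill(proposed: dict[str, str], running: set[str]) -> dict[str, str]:
    """Return only the component -> image pairs that need new pods."""
    return {name: image for name, image in proposed.items()
            if image not in running}

# Images already in the cluster: the original app plus Team A's web UI.
running_images = {"web-ui:v1", "catalog:v1", "fulfillment:v1", "web-ui:v2"}

# Team B's version of the app changes three components.
team_b_app = {
    "web-ui": "web-ui:v1",          # unchanged, shared with the original
    "catalog": "catalog:v2",
    "search": "search:v2",
    "cart": "cart:v2",
}

# Only the three changed components get new pods.
assert distill(team_b_app, running_images) == {
    "catalog": "catalog:v2", "search": "search:v2", "cart": "cart:v2",
}
```

The same comparison explains why a later deployment composed entirely of already-running components creates no pods at all.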
In a sense, KubeFox is regulating provisioning automatically. Just as with Team A, Team B can interact with the application as if they were working in a private sandbox. Behind the scenes, KubeFox is routing traffic for Team B as if the only changes were Team B's changes. Team B's VE is analogous to that of Team A. The VE, and deploying to that VE, are all that is required to partition traffic for Team B as if their changes were the only changes that had been made. And this pattern is identical for Team C, with the same simple YAML.

Let's chat briefly about smoke testing. What we want is for QA to have the latest version of changes from all three teams. So we want a version of the retail app that is composed like we see here. We'd like to be able to do this with minimal effort and overhead. So how are we going to go about it? The first thing we'll do is create a new virtual environment. We'll call it QA Smoke. Then we'll determine what we want to smoke test. We'll pick a commit that represents the point in time at which we want to test the application, and we'll deploy that to our QA Smoke VE using the Fox CLI.

With KubeFox, we can deploy an application by commit or by tag. When we're iterating quickly, as in dev or when we're smoke testing, we're focused on the speed of testing and not so much on formal versioning around release, so we'll typically work with commits. When we promote to higher environments, we want a stable, immutable record representing what we think will be a release candidate, so we'll tag. KubeFox handles both of these scenarios gracefully. So now we're going to deploy the commit that captures all changes to components from each of our teams. So here's a question: what would KubeFox deploy? I've kind of given it away, haven't I? It won't deploy anything.
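As a rough sketch of that workflow, deploying by commit versus by tag might look like the pseudocode below. The command names and flags are illustrative guesses, not documented Fox CLI syntax; consult the Fox CLI help for the real invocations.

```
# Hypothetical session; command names and flags are illustrative only.
kubectl apply -f qa-smoke-ve.yaml          # create the QA Smoke VE

# Fast iteration: deploy the app as of a specific commit.
fox publish --commit 4a7f2c --ve qa-smoke

# Promotion: deploy by tag for a stable, immutable release candidate.
fox publish --tag v1.2.0-rc1 --ve qa-smoke
```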
KubeFox compares the state of the deployed components with the composition of the proposed deployment and determines that all the components necessary to run the proposed deployment already exist in the cluster. KubeFox will provide routing information so the runtime knows how to route requests within the QA Smoke VE. QA sees the application with the components as they existed at the point in time of the commit we deployed. In this case, that's the version two changes from each of Teams A, B, and C. That's exactly what we want from the smoke test.

Suppose that as part of Team C's project, they would like to test with two different Postgres instances fronted by Hasura. The first thing we're going to do is create two new VEs: Team C Hasura 1 and Team C Hasura 2. These are not backend-specific VEs, by the way. The -1 VE could comprise the original deployment, with no component changes, coupled with the original backend. And the -2 VE could comprise Team C's changes coupled with the updated backend. As before, our -1 VE is very simple. KubeFox is capable of injecting the configuration and credentials, including Secrets, necessary to provide Hasura what it needs to connect and interact with our original data store. The -2 VE is old hat to you by now. The -2 VE directs KubeFox to inject a different set of configuration and credentials and connect with our new data store. We could do what we did before with smoke testing and enable QA to test all changes with the new data store. And remember that there is no modification to ingress or internal routing, and all of these components still reside within the same cluster.

I hope this piques your interest and you decide to check out KubeFox. We welcome you to do so, and we invite your thoughts and feedback. The spirit behind XigXog is simplification. That was our original idea around the company: to try to make things as simple as possible in the Kubernetes space. To that end, we think you'll find KubeFox easy to work with.
If you don't, we would like to know about it. The QuickStart is a great way to get started and gives you a peek at the power of KubeFox. We recently ported it to Azure, and you can run it on your local system on a kind cluster or run it in Azure. Thank you very much for your time.