Okay, so yeah, welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm your host, Shahrir, and I'm a CNCF ambassador. Every week we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer your questions. In today's session, I am stoked to introduce Stephen and John, who will be presenting on rapid code and prototyping with KubeFox virtual environments. This is an official livestream of the CNCF and as such is subject to the CNCF code of conduct. So please do not add anything to the chat or questions that would be in violation of the code of conduct. Basically, please be respectful of all of your fellow participants and presenters. With that, I will hand it over to Stephen and John. Hey Stephen and John, can you hear me? Stoked to have you both. How are you guys? Doing well, doing well. Awesome, really great to have you. So I think, Stephen, you can start by introducing yourself, then John can join in, and then we can start the session. Yeah, thank you very much. My name is Stephen Higgins. Along with John, I co-founded ZigZog. We've been going at it for a bit now, and I think the tech is in a really interesting place. We're really looking forward to people trying it, working with it, and giving us feedback; all feedback is appreciated. And yeah, we're excited to be here on this CNCF event, walking through and showing you some of the power of KubeFox. Great. So yeah, John? Hi, everyone. Yeah, we've been making releases and we're excited to introduce new features.
We're also featuring Hasura, which is a unique product. We've done an integration with it, and more generically with any GraphQL endpoint, just to make it very easy to use with KubeFox. So I'll be diving into some code and showing you how that all works. Okay, great. So let's start the session. Okay. So, hey everyone. Again, my name is Stephen Higgins. I'm here with John. We founded ZigZog some time ago. KubeFox is ZigZog's first product. Today, I'm going to walk you through some of the high-level KubeFox concepts. Then John is going to take the reins and go through the coding portion. And don't fret, the coding piece composes the bulk of today's livestream. Hopefully you've gone through the KubeFox quickstart. If you have, you should have all that is necessary to go through the exercises today. If you haven't, no worries; go to the link above and it will walk you through the steps necessary to bring you to the point where you can work along with John as he goes through the coding portion. Just to note that I'll be walking through things very quickly: A, because I'm confident you don't want to listen to me blather on, and B, because I want you to know that I know you want me to get to the coding piece as quickly as possible. You can learn more about these KubeFox concepts at our GitHub site. Finally, keep in mind that livestreams are recorded and made available on YouTube, so you can always walk through these steps at your leisure. So what's ZigZog all about? ZigZog's mantra is Kubernetes simplified. I was chatting with a former colleague who is joining us part-time. He told me that in his day gig, he's dealing with data scientists who are not interested whatsoever in the infrastructure. They just want to run their scenarios without jumping through hoops to do so. It reminded me of something I learned long ago: people will suffer your technology to accomplish their business.
That includes developers, particularly when infrastructure considerations hamper their efforts. Developers would like to focus on the application, not all the logistics around deployments, cluster configuration, et cetera. With KubeFox, teams don't need to worry about or manage deployments for individual components; KubeFox handles this for them. KubeFox, not teams or developers, evaluates the repo, determines what has changed, then containerizes and distills the deployment to only changed or new components. Developers always interact with KubeFox at an application level. Teams and developers can deploy multiple versions of the application, and those versions can coexist side by side with prior versions. Developers can make their own changes and test those changes against prior versions of the app. They can do so without DevOps, without wiring things together, and without additional configuration. When apps are published, KubeFox distills deployments to only those components which have changed, loads those changed containers into pods in the cluster, and makes each version available as though those versions were running in independent sandboxes. There's actually a lot going on on this slide. This is an abridged explanation of dynamic routing and deployment distillation. Again, John will show you a practical application. What I've done here is deploy three versions of my app to the same cluster. You can see the deployments mapped on the right. For the first deployment, I deployed version one of the app, which correlates to version one of each of the constituent components. For the second deployment, I changed the shared component, the API server. Now I have version two of the app, which of course includes version two of the API server. When I deployed version two, KubeFox deployed only the new version of the API server. For the third deployment, I changed the order UI and added a review component.
When I deployed version three, KubeFox distilled the deployment to only the updated version of the order UI and the new review component. KubeFox makes each of these versions available via dynamic routing and virtual environments, VEs for short. VEs provide sandboxes for teams or even individual engineers, where new code can be deployed without affecting the efforts of colleagues. Each team can test their own version of the app. Because KubeFox routes dynamically at runtime, it can shape traffic to specific deployments. No configuration of routes is necessary, so there's no bureaucratic overhead or DevOps involvement in this spinning up of sandboxes. Requests are evaluated at origin and at each component-to-component transition, and routed to the correct version of a component for that version of the app. The yellow-highlighted data path shows the components that would be used for version three traffic. Because KubeFox distills deployments to only unique components, provisioning is automatically kept to a minimum. If we have a 20-component application and three teams are modifying the same four components, we can run all four versions of the application on the same cluster with a total of 32 components deployed: the original 20 components, plus four additional components for each of the three teams. Imagine the implications. VEs can, for instance, facilitate the delivery of a version of an application in UAT to a particular customer without requiring separate resources. Hasura abstracts data sources from developers, builds APIs, and provides access through a GraphQL HTTP endpoint. That's a short sentence that belies the sophistication of the technology. Abstracting away the data sources and enabling developers to think about the data they need, as opposed to all the wiring complexities normally associated with getting access to it, is a powerful thing. There's an analog here between KubeFox's perspective on development and Hasura's perspective on database access.
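To make the provisioning arithmetic above concrete, here's a tiny sketch (illustration only, not KubeFox code) of how deployment distillation keeps the component count down:

```go
package main

import "fmt"

// distilledCount returns the total number of component instances needed when
// `teams` teams each modify the same `changed` components of a `total`-component
// app: the original full set runs once, plus one fresh copy of each changed
// component per team. Unchanged components are shared, not duplicated.
func distilledCount(total, teams, changed int) int {
	return total + teams*changed
}

func main() {
	// The example from the talk: a 20-component app, 3 teams,
	// each modifying the same 4 components.
	fmt.Println(distilledCount(20, 3, 4)) // 20 + 3*4 = 32
}
```

Without distillation, the same four versions of the app would need 4 × 20 = 80 components; sharing the unchanged ones brings it down to 32.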
That being: let developers focus on application development, simplify CI/CD, remove DevOps barriers; and Hasura's perspective on database access: let developers focus on application business logic, simplify and unify access, abstract away the connectivity infrastructure. Both products enable developers to work at an application-focused level. Suppose that as part of team C's project, they would like to test with two different Postgres instances front-ended by Hasura. The first thing we're going to do is create new VEs, dash-one and dash-two. The dash-one VE could comprise the original deployment with no component changes, coupled with the original backend. And the dash-two VE could comprise team C's changes coupled with an updated backend. What we'd like is for developers to interact seamlessly with either of the data sources and share common components across the cluster. In fact, I can test each of these versions simultaneously, without DevOps, without wiring things together, without going through a process of namespacing common components, and without any other overhead. I make my changes, deploy my app, and KubeFox does the rest with virtual environments. John's going to show this to you in practice now. John, take it away. Hey, everyone. So if you want to bring up my screen... I have a tendency to get excited and go fast, so if I'm skipping over something, or if you want more explanation of something, just reach out on the chat and I'm happy to answer your questions. And I don't mind getting interrupted. So to get started, we're just going to do some setup. We're going to create a local Kubernetes cluster to work on. I use kind constantly. It basically starts up a Kubernetes cluster inside of a Docker container, so it's very easy to start and stop. I just wipe out these clusters and reset them all the time, and it's very lightweight.
So if you do a lot of development and use Kubernetes, I highly recommend checking it out. So when I hit enter, it started up a cluster for us. The next thing I'm going to do is set up KubeFox. We have an operator that you can install with a Helm chart, so I'll just go ahead and do that now. GitHub is hosting our chart repo, and these commands are available on the documentation page as well. So I'll go ahead and get this started. Again, this is setting up our operator, and it's also automatically setting up an instance of Vault that we use for our secret store. We're not going to dig too deep into secrets today; we will in a later session, but just a heads up to let you guys know what's going on. The next thing we're going to do is make sure that we have Fox installed. Fox is our command line interface, and it helps deal with deployments and getting things set up. So if you have Go installed, you can just install it using go install. Otherwise, there are releases on our GitHub pages that you can download manually. So we'll go ahead and do this, and you want to make sure you are on version 0.9. Great. The next thing I'm going to do is make a directory and initialize it for our little tutorial today. So go ahead and do that. Fox has an init script built into it, and we're going to be looking at our GraphQL project today, so if you just pass that flag, what it will do is set up... I knew I was going to forget to do this. Sorry, I was supposed to set a flag so it prints out some more information. So right now, what it's doing underneath the covers is setting up a platform for us. It's doing an initialization step, as well as outputting some files to the file system and setting up a git repo for us. So I'll go ahead and launch VS Code here, so it's a little bit easier to explore the code. Okay, let me make this a little bit bigger. And let me do this now before I forget it.
Any of the Fox flags you can set as environment variables as well, so I'm going to do that just so it'll output some extra information as I type commands. Okay, so what this initialized for us is the standard setup of what a KubeFox repository is going to look like. In it we have a single component, which is server. And in the root we have an app.yaml, which is just a description of, or metadata about, the app we're deploying. As Stephen was saying earlier, KubeFox always works on an app level, which is just a set of components. The easiest thing to do is have a one-to-one mapping with a GitHub repo, one app per repo, but you could actually nest them depending on how you want to set it up. I'd also like to show you real quick what's running. I normally use K9s, but I'll try to use this to make it a little bit easier. So what's going on is the Helm chart created this kubefox-system namespace, and you'll see here what we've got running... sorry, I went to the wrong one; kubefox-system is the one we want. You'll see what's running is just the operator and Vault. Additionally, when we did the fox init, it created a platform for us, which is kubefox-demo, and here you'll see a couple of our services running. KubeFox is event driven, and we use NATS as our event bus. The HTTP server is an adapter, which I'll go into in more detail. And the broker is kind of the meat: it takes care of most of the cool stuff that KubeFox does, including the dynamic routing that we'll dig into. In addition to all this, we're also going to set up Hasura. Hasura is a really neat project. We're going to be using just their open source version; they have enterprise features and a bunch of hosted features as well that I'd highly recommend checking out. It makes doing development a lot easier. But in a nutshell, what Hasura is doing is inspecting a database.
They started off with Postgres, but they support a whole bunch of databases now, and they generate a GraphQL API for you automatically. So there is a YAML file here called hasura.yaml, and it's just a bunch of pods that I packed together. The first time you run this, it might break; that's just because the GraphQL engine gets mad when Postgres isn't ready yet. But you'll see we have three instances here. We have hasura-prod, and there are two containers in each one of these pods: there's Hasura, which is the GraphQL engine, and then there's a Postgres instance. And this Postgres instance is actually pre-loaded with our database, which I'll dig into later. So we have a prod instance, we have a dev instance, and then we have a john instance. And this john instance is mine. These link up to the virtual environments. So you can think of prod as your traditional prod environment, your stable release version. Dev could be your integration environment; CI/CD might be updating it from the main branch. And then john, like I said, is my personal playground, my private sandbox to test things with. And we'll show how we're able to switch between these three different databases using virtual environments. So what we'll go ahead and do is apply this file, and it'll create those three pods for us. I'll go ahead and open up K9s. This is another tool that I use often; it uses vi bindings to help you drive an interface to Kubernetes. So like I said, it might crash the first time because Hasura is mad, but these pods should start up eventually. So this started our three pods for us, and again, there's a Hasura instance and a Postgres instance in each one of those. So the way this all works is... sorry, I'm just looking at my notes here to make sure I don't go out of order. Yeah. Okay.
So we're going to set up the three virtual environments that align with those three instances, the three databases that we just set up. These are the custom resources that we use to define virtual environments. And you'll see there are also environments. Environments are cluster scoped, and they're kind of a shareable container: there can be a shared set of variables, for example, that are inherited by any of the virtual environments, which are namespace scoped. Each virtual environment must have an environment defined for it. So for example, here you'll see under the data block there's a db var, and that will automatically be inherited by any virtual environment. But then you can see here, we can also override it for a virtual environment. And these variables are what's allowing us to switch dynamically between the databases and routing, which we'll go into a little bit more. So there are these two pieces that KubeFox needs in order to service a request: a virtual environment, which is pulling in the environment variables, and a deployment, which would be an app deployment. With these two pieces of information, which we call the context of a request, it's able to route. Traditionally, what you do is have dedicated hardware for each one of your environments and deploy a particular version of your software into those environments, which ends up with a lot of duplication. What KubeFox is doing is dynamically routing to allow us to share deployments. These two pieces of information are either resolved at request time or provided explicitly when you're doing development, and I'll show you what that looks like. Our main application here has just a single component, to keep things simple. And we have three routes defined here. You'll see, for example, this is the main route we'll be playing around with.
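As an illustrative sketch of the inheritance just described (not KubeFox's actual implementation), variable resolution amounts to a map merge where the virtual environment's overrides win over the cluster-scoped environment's shared values:

```go
package main

import "fmt"

// resolveVars merges the cluster-scoped environment's shared variables with a
// virtual environment's overrides. Every key from the environment is inherited
// unless the VE defines the same key, in which case the VE's value wins.
func resolveVars(env, ve map[string]string) map[string]string {
	out := make(map[string]string, len(env)+len(ve))
	for k, v := range env {
		out[k] = v // inherited from the shared environment
	}
	for k, v := range ve {
		out[k] = v // VE-specific override
	}
	return out
}

func main() {
	// Hypothetical values mirroring the demo's setup.
	env := map[string]string{"db": "hasura-prod", "subPath": "prod"}
	john := map[string]string{"db": "hasura-john", "subPath": "john"}

	fmt.Println(resolveVars(env, john)["db"]) // override wins: hasura-john
	fmt.Println(resolveVars(env, nil)["db"])  // inherited: hasura-prod
}
```

The key names (`db`, `subPath`) are assumptions for illustration; the point is the inherit-then-override semantics between the two resource scopes.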
You'll see within the route definition, we're able to use variables from our environment. These can be injected at request time, which allows us to do this dynamic routing. So this subPath variable is going to give us unique paths into this component in the different environments. And we'll dig a little bit more into this code as we go forward. So the first thing we're going to do is set up these environments. We'll go ahead and do a kubectl apply and get those environments in there. So we created all of our environments. Now we'll go ahead and do a deployment. Fox has a command built in that will automatically build the component, push it out to a container registry, and then push out the app deployment to Kubernetes, and it'll start the pod. So go ahead and do that. And there's a nifty little wait command... let me make sure I did this... that'll just wait for everything to be running. So we'll go ahead and do that. And you'll see the first thing it's going to do is build the component image here. I had actually built this already, so it was cached for me, which is why that was so fast. Worked out well for the demo. So if we switch over now, you'll see that there is a pod running, whose name is the name of our app plus the name of the component, and then a hash of the changes. And what KubeFox did was generate this app deployment for us. This app deployment is the definition of the app and what needs to get run. Additionally, you'll see that, because of the way we wrote the code, it's able to find a bunch of dependencies that this component requires. So here you'll see our routes that we took a quick look at. It inspected them and saw that we're using subPath in our routes, so it's marking that in the app deployment: in order for this to be functional, the environment is going to have to have the subPath variable.
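The route placeholders follow Go's template conventions. As an assumption for illustration, suppose the route pattern is `/{{.Vars.subPath}}/heroes`; resolving it per environment with the standard library looks like this:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderRoute expands a route pattern such as "/{{.Vars.subPath}}/heroes"
// using the variables resolved for a given virtual environment. This mimics
// the variable injection described in the talk, using only the stdlib.
func renderRoute(pattern string, vars map[string]string) (string, error) {
	tmpl, err := template.New("route").Parse(pattern)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, struct{ Vars map[string]string }{vars}); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// The same pattern yields a unique path per virtual environment.
	for _, ve := range []string{"dev", "john", "prod"} {
		p, err := renderRoute("/{{.Vars.subPath}}/heroes", map[string]string{"subPath": ve})
		if err != nil {
			panic(err)
		}
		fmt.Println(p)
	}
}
```

This is why the same deployed component answers on `/dev/heroes`, `/john/heroes`, and `/prod/heroes` without any route configuration per environment.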
Additionally, you'll see it has two dependencies on other components, and these are defined here at the top. So we're saying that these two adapters are going to be used later in our code, which again we'll dig into a little bit more. But just by doing this, it's letting KubeFox know that these are dependencies that have to be present when you want to run your app. So we're deployed now. We have something running and we have our virtual environments set up, so we're able to make a request now. Normally, you'd have a load balancer set up and things like that, but for this, we're just going to proxy into our HTTP server using Fox again. This is basically a shortcut for a kubectl proxy that's a little bit pre-wired for KubeFox. That's going to start up a local server for us on port 8080. So now we can go over here and do a curl request and see what happens. Like I said, KubeFox needs those two pieces of information: the deployment and the virtual environment. Right now, all we've done is a deployment; we haven't released anything. When you do a release, that's when you're telling KubeFox to activate these routes for a particular environment, and it'll automatically match routes for you. But since we haven't done that yet, we can explicitly specify the deployment and the virtual environment that we want. You can use headers, or you can use query parameters; we're using query parameters here. So we'll go ahead and run this command. And you'll see what happened: we actually got a 400. The reason for that is KubeFox is telling us that our event context is invalid. It's using those dependencies that were defined in the app deployment, looking at the context, and saying: I can't fulfill this request. And it saw that you're manually specifying the query parameters, so it knew it was a development request.
And then it'll give you this output that shows exactly what's going on. So it's saying these routes are looking for subPath, but it's not part of the environment. And it's also looking for these two adapters, which we haven't set up yet. So let's go ahead and fix this. The first thing we'll do is uncomment this so that the dev virtual environments have a subPath. The other thing we're going to do is set up these two HTTP adapters. What an HTTP adapter is doing is acting as a proxy. Everything in KubeFox is modeled as an event, and our events are fairly simple structures. What an adapter does is translate a KubeFox event into some external protocol, in this case HTTP. And because we're taking these events and translating them through the adapters, we're able to do that dynamic switching without you having to change any of your code. So in your code, all you're saying is: I need to use this adapter. And then depending on which environment you're using, it's going to automatically switch the endpoint that it's using, based off of the environment variable that gets injected at request time. So let's go ahead and fix up and deploy the things that we're missing. We'll do our HTTP adapters, and then we'll redo our environments so that they have the new variables. Okay, so now if we do that request again, we should get a valid result. Great. And this is what we're looking for. This is an HTML page, obviously a very long one. So what we can do is move this over to a browser and take a look at what's going on. If we run this command, you'll see we're getting our web page. But if you look under the covers at what's going on within this web page, it's referencing a style sheet and a favicon.
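To make the adapter idea concrete, here's a toy model using hypothetical types (not the actual KubeFox API): the request is a plain event carrying the variables resolved for its virtual environment, and the adapter picks the external HTTP endpoint from one of those variables, so the component's code never hard-codes a URL.

```go
package main

import "fmt"

// Event is a toy stand-in for a KubeFox event: just a path plus the
// environment variables resolved for the request's virtual environment.
type Event struct {
	Path string
	Vars map[string]string
}

// endpointFor mimics what an HTTP adapter does: it builds the external
// endpoint from an environment variable, so the same component code targets
// a different backend per virtual environment. The variable name and URL
// shape here are assumptions for illustration.
func endpointFor(ev Event) string {
	return fmt.Sprintf("http://%s/v1/graphql%s", ev.Vars["hasuraHost"], ev.Path)
}

func main() {
	dev := Event{Vars: map[string]string{"hasuraHost": "hasura-dev"}}
	john := Event{Vars: map[string]string{"hasuraHost": "hasura-john"}}

	// Identical component code, different backends, chosen at request time.
	fmt.Println(endpointFor(dev))
	fmt.Println(endpointFor(john))
}
```

The real adapter does this translation generically for any KubeFox event; the point is that endpoint selection happens at request time from the event's context, not at build or deploy time.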
But you'll notice that in our original request, we're passing the query parameters, so KubeFox understands how to route it. In these subsequent requests, they're not present, because the browser is making those requests directly, and so KubeFox isn't able to resolve them without a route. There are a couple of ways to deal with this while you're doing development. The first way, as a developer, is to give Fox proxy some more information. You can explicitly specify the virtual environment and the app deployment that you want to inject while you're using Fox proxy. When you do that, you no longer have to specify this information; it'll automatically be injected. And now things work as expected: our style sheet and favicon are coming back, and it's not quite so ugly. The other, more recommended way to view this is to just make a release. So we'll do that now. We're going to tell Fox to release our app deployment, which was automatically named from the app name and the branch that you have checked out, and we're going to tell it to release it to the virtual environment dev. So that was a release. And I don't know if you noticed, but I reset the Fox proxy, so the context is no longer specified. Now that we've done this release, you'll see that the routing works as expected. There's no magic going on here anymore. This would be like a live release, where KubeFox is routing automatically. If we look at the john environment, you'll see that my subpath here is john. And if we go and try to use this now, we run into the same problem, because again, we haven't done the release and we're not using Fox proxy. So let's go take care of that. Great.
So what I'm going to go ahead and do is make releases for all of our environments, just so we can start playing around with the database and show you how the dynamic switching of database access works with the VEs. So I'll go ahead and release to... I'm sorry, Sebastian says the banner at the bottom of the video is hiding things. Oh, I see. Let me just move this over here. My name is so long... let me just fix that. Is that better, Sebastian? I'm going to assume yes. Sorry about that. Okay, so I released the dev and john environments, and we're also going to release to prod. One of the things it's recommended to do when you release into a stable environment is to publish a versioned copy of the app deployment. The difference between what we've been doing up to now and a versioned one is that versioned app deployments are immutable. Fox again can help us do this. So we'll go ahead and do another publish command, and this time we're going to specify a version for the app deployment, and we're also going to have it create a git tag for us. So... Fox uses git for versioning, and since we have uncommitted changes, it's yelling at us. So we'll just go ahead and commit these changes and run this command again. Right. So it noticed that the version of our component that we're trying to deploy already existed, so it skipped the build, and it went ahead and made another app deployment. In this case, it's called graphql-v1. So if we go over and look, you'll see that we have two app deployments defined here, but they're both using the same version of the component. And if we go back and look at our pods, you'll see that there's still only the one version of our component running.
But now if we go here, you can see that dev is working, the john environment is working, and prod is... oh, no, it's not, because I haven't released yet. So we created a versioned app deployment, and now we're going to release that to prod. We'll do a release and specify that we want to release version one to the prod environment. So now... So John, I'm sorry, maybe touch on the selection of a deployment with query parameters and what transpires with the release. Yeah, so what I was showing you earlier with these curl commands, we can use query parameters to manually specify our context. When you do a release, you're obviously having to specify the environment. So what the release is doing is taking these routes, compiling them, and injecting the variables from that particular environment, and then it will actively match any of those routes. In the curl case, we're explicitly telling it the context, so it's compiling those routes dynamically at request time. Whereas with a release, it goes in and compiles these ahead of time, and it has this master index of all the routes that it iterates through, trying to match a request when it comes in. So in this case, we have our subpaths for the three different environments, and we're switching between the three of those. And now we'll dive into Hasura real quick and show you that we are in fact switching, and not only between the virtual environments; because we're switching virtual environments, we're changing our database. So if we go in and just remove heroes from the path, it'll bring up the Hasura console for us. And this is just a bit of magic that I added into this component, and you can see it here.
So this is just telling it to forward anything that's not static or heroes over to Hasura. And this is our very simple forwarding logic that we're using with KubeFox. Since KubeFox is event driven, the HTTP request coming in is modeled as an event, so it makes it very, very easy to just clone the HTTP event, fix up the path a little bit, and then send it over to the Hasura adapter. And you'll see that's exactly what's going on here. We're asking the Kit context to clone the event here and get it set up to send to the Hasura adapter. Then we send it, we get a response, and we just forward that response back. So it's a really simple little proxy, and it makes it easy to switch between the Hasura consoles. So this is the main interface to Hasura, and we'll go really quick to show what's going on here. This is our data structure. The main table here is superhero. You'll see here we have our superhero name, we have the real name, and then we have a bunch of foreign keys to several different categories. And you can see what happened is that Hasura went in and inspected all these relationships, and it automatically created a GraphQL interface that reflects them. So what we can do is actually go ahead and just perform a GraphQL query. They have a little explorer built in where you can build the GraphQL query. So this was generated for us by Hasura when I clicked that button. And you'll see there's a result. So we ran our GraphQL query, it went into Postgres, and it pulled out all the data requested. And it automatically creates these associations. So for example, right now we're getting the alignment ID, which is not very helpful to us. What we really want is the actual name of the alignment.
So you'll see it has autocomplete, and it created that sub-object for us in GraphQL. And if we do that, it'll pull back the word instead of the ID. So that's a very, very quick overview of Hasura. What we're going to do real quick is go into our superhero table, and... it's not a well-known fact, but in fact, I am 3-D Man. So we'll go ahead and update that. And now if we go back to our heroes page in the john environment, you'll see that 3-D Man's name has been updated. But if we switch back to dev, it's back to Charles, because we didn't update it in that database. So this is proof that we're switching back and forth between the databases dynamically with these requests. But again, there's only a single pod running for us. Even though we have these three releases and these three virtual environments, it's a single pod that's dynamically being switched using the adapters. So that's kind of the high level, showing some of the magic of KubeFox. What we're going to do now is update our code. Right now in our table, we're pulling in the name and the alignment, and all that's great, but let's say we want to add the gender to the table. So we can come into our code here. What I usually do is figure out how to get the gender out of this first. So if you go back into the query, you can use the autocomplete to make this a little bit easier. These names are a little bit redundant; it's just how the database was created. This was just a database I found online. Normally, this could be name or value or something like that instead of having it duplicated. But if we run this, we'll see that we're getting back what we need. The gender sub-object is what we're looking for, and gender is the column name.
If you look at the database, you'll see under gender we have the three columns, and the table and the column are both named gender, which is why you're getting that redundancy. So if we switch over, this is our code that performs the GraphQL query and generates our HTML table for us. You'll see what we're doing here is creating a GraphQL client dynamically with Kit and the GraphQL adapter, which is the HTTP adapter that we set a dependency on. If you look at this adapter, all it's really doing is specifying the URL, and this is the URL to the GraphQL endpoint of the Hasura pods that we deployed. Each Hasura pod has a service, in this case hasura-dev and hasura-john. By specifying the adapter this way, we're able to inject the environment variable and have the adapter switch dynamically. So here we're saying: okay, Kit, I'm processing this particular request; give me the HTTP adapter that is specific to this request, and it provides a client. This is a super lightweight operation; nothing is actually being created here. It just allows us to get a client for the correct virtual environment. And this is basically a very shallow copy of Hasura's Go GraphQL client library. All we're doing under the covers is, instead of making an HTTP call directly, generating an event and proxying it through the adapter, which allows us to do that dynamic switching. So this is the query we currently have, and what we need to add is gender. Similarly to alignment, we'll go through and add gender. And just to make it a little more sensible, we're going to rename what is actually the gender column to value, just to make it a little simpler to read.
So basically we're just extending our GraphQL query in Go to match this query here, so that it returns the gender as well as the name and alignment. The next thing we need to do is update the HTML template. You'll see here we're using a helper function from Kit, which is our SDK, by the way, I should have mentioned that, and which is able to generate templates using the built-in Go HTML templates. If we pull up our template, we'll see here is where we're producing the HTML table. If you're not familiar with Go templates, they're used heavily in Helm, which is how I got my first look at them. And if you like templating, which some people do and some people don't, they have a pretty nice system set up. Anyway, we'll go ahead and add the changes: what we want now is gender, and we'll also add the column for gender. I'm glad you got that, Sebastian; you're the first person to notice it. A kit is a baby fox, for anybody who has no idea what we're talking about, and Kit is also the name of the SDK. Works out well. Anyway, we'll go ahead and make these changes. So again, we just extended our query to pull back gender and extended our template to pull that into our table. So we'll go ahead and commit this. And what we're going to do now is, oops, I was supposed to do this first, sorry, check out a feature branch. I was supposed to do that before the commit, but it's okay, we'll figure it out. We're going to switch over to this feature branch because adding gender is a new feature. And then we'll do a fox publish from this branch. Again, this is going to build the app deployment for us. And you'll notice a couple of things. First, it's actually going to build the component, because we made changes to it. And second, it automatically uses the branch that you're on for the app deployment name.
You could specify it explicitly, but it often works out well to just use the branch name or tag name that you're on. So you'll see it created that new app deployment called graphql-feature. And if we go over, you'll see that now we have two GraphQL servers running, because there are two different versions of this. Right now, if we go to John, you'll see there is no gender column, because we're still using the old version. So now what we'll do is a release. By the way, these examples are all in the main KubeFox repo under the examples folder, and we're working on some documentation, Sebastian, to walk you through in text what I'm going through right now. If you look at our KubeFox GitHub page, xigxog/kubefox, there's this examples folder that has this GraphQL code, as well as our quickstart code, which has a couple of components. The quickstart guide is a little more fully featured and walks you through all this stuff. So all of that is there; just clone our main repo. Or, like I showed you in the beginning, you could use Fox to initialize these locally. And Sebastian, do me a favor and email us your address; we're putting together a complete walkthrough of the Hasura stuff, and I can send it to you when it's ready. So what we're going to do now is release that new app deployment to the John dev environment. Now if we go over and look back here, we'll refresh this page, and magically we're using our new version: the gender column is showing up. But if we go back to prod, it's still the old version, and dev is still the old version. You'll also notice again that John is using my database, where I updated the name here. So great, we have our new version. Everything looks like it's supposed to look, and gender is coming back the way we want it. So now we can go back and check out main. Normally you'd merge in your feature branch, although I forgot, so let's pretend we just merged that in.
And now we'll just do our fox publish again. What this is going to do is update the graphql main app deployment. And since dev is already released to that, what we should see is that dev is now using the new version. Because it's released into a development-style environment, you're able to update these app deployments under the covers, and it's automatically updating live as you make these changes. The reason that's all possible is that we've explicitly marked these environments as being capable of that. This release policy here indicates that you're able to make these live updates, so changes to the virtual environments as well as changes to the app deployments are realized immediately. I didn't test this, but let's give it a try. For example, we can change our subpath for dev to be something else; let's just call it "something". If we go ahead and apply the environments again, it's going to update the subpath for dev. And what we should see is that when we go in and refresh this, it's no longer found, because we changed our subpath. Now if we hit it here, it comes back up. All of that is happening automatically under the covers, because we've explicitly said it's okay for this to happen in this environment. Now, if you look at the prod environment, you'll see that it has the stable release policy, and that's the default, so that won't happen in prod. When you make a release into a stable virtual environment, it actually creates an immutable copy of the environment variables, secrets, and the app deployment that you're releasing, so any changes under the covers to those things will not affect the live release. So, just some little details there. Let's go ahead and undo that. What we're going to do now is make another release and publish our changes.
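The testing-versus-stable behavior described above is driven by the release policy on the virtual environment resource. As a rough sketch only; the `apiVersion` and field names below are assumptions, so check the KubeFox documentation for the real CRD schema:

```yaml
# Illustrative only: field names are assumptions, not the exact CRD schema.
apiVersion: kubefox.xigxog.io/v1alpha1
kind: VirtualEnvironment
metadata:
  name: dev
spec:
  releasePolicy:
    type: Testing   # live updates allowed; prod uses Stable, the default,
                    # which snapshots env vars, secrets, and the deployment
```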
Sorry, we're going to publish a new version of our code here with the gender changes. Again, this just creates a version of the app deployment, which is immutable and is required to release it to prod, because prod has a stable release policy. And now we'll release our version to prod. I can hear them playing the music to get me off stage. So prod, dev, and John are all now running this new version. And if we go back here, you'll see we still have two pods running, and that's because the app deployment for version one is still there. You can see our feature app deployment, our main app deployment version one, and version two. So if we delete the version that's no longer being used by anybody and go back to our pods, you'll see it distilled everything, and now it realizes there only needs to be one GraphQL server instance running.

So I'm going to finish there. Well, I'll give you one sneak peek really quick. We've been working on integrating... oh, sorry, never mind, I didn't set it up properly. We've been working on getting traces working, and the primary stuff is there; look for a release in the next couple of weeks. We're doing a whole bunch of really cool automatic tracing and linking with logs, so we'll probably show that in our next screencast as well. Okay, I'm done. Sorry, it took a lot longer than I was expecting.

Yeah, no worries. So yeah, folks, you can ask your questions if there are any, and we'll answer them. Yeah, maybe throw up my screen, Mitchell, and I'm just going to avoid going over it to make sure that we have time for questions. Sure. If we have any; if we don't, I can actually crank through it. I think you can go to the slide for folks to ask questions. Yeah, so first I want to thank John for going through that.
So John is the guy responsible for this tech, and he's done a lot of great work. So how do things change with KubeFox? The changes are quite significant. Inter-team coordination is tremendously eased; teams and even individual engineers can rapidly prototype and test their changes, all without bureaucracy or overhead. Developers deploy and test the app as if they had dedicated resources to do so. At the same time, provisioning is automatically controlled; John was showing you that at the end with the distillation of the deployment down to only those components that are unique. DevOps doesn't need to be burdened with the day-to-day logistics around individual team efforts, because these things and the tools in KubeFox are available to the teams themselves. We don't need to configure ingress. We don't need to configure namespaces. KubeFox provides a service mesh. We don't need to duplicate and manage ConfigMaps and Secrets; we can actually use KubeFox virtual environments to customize things, for example to switch databases and layer in different credentials, and John showed some of that to you. The switching of data stores is particularly easy with something as nice as Hasura. We also don't need to configure deployments or telemetry; in some cases deployments aren't even necessary to accomplish some of the things that were described. And I just want to be clear that KubeFox runs on any cluster. Here's some contact information. So Sebastian, don't forget to send your information to me, and I'll be happy to get you some information on the stuff we're doing around Hasura and GraphQL; we want to round out that tutorial. I hope this piques your interest and you decide to check out KubeFox. We very much welcome you to do so, and we very much welcome and invite your thoughts and feedback, good, bad, or indifferent. We really want to hear what you have to say and where you think we could improve things. The spirit behind XigXog is simplification.
To that end, we think you'll find that KubeFox is easy to work with, and if it's not, again, we want to know. The quickstart is a great way to get going and gives you a sneak peek at the power of KubeFox. We recently ported to Azure, so you can run the quickstart either on your local system on kind, or run it in Azure. Sorry, I want to make sure we get to Sebastian's question, because I'd like to answer it, and thanks for all the chat messages, Sebastian; I really appreciate it. Istio was actually something I used in the past, and it was kind of a love-hate relationship. There are features from Istio that we're implementing. One of the things that came out of Stephen's and my last experience, managing a larger medical application on a Kubernetes cluster, is that we ended up stringing together things like Istio, a CI/CD tool whose name I just forgot, and an API gateway. We had all these open source projects, which are awesome open source projects, but we ended up spending more time getting them to work together, managing them, and upgrading them. The genesis of KubeFox was: let's pick one way of doing everything, instead of building a kitchen-sink API gateway or a kitchen-sink service mesh like Istio. We're going to do things one way, and we're going to build the whole software development life cycle into a single product and make it very, very simple. So the short answer is yes, KubeFox would be a replacement for something like Istio. One of the things that's really cool is that we're actually doing edge routing, so you can match on hosts and things like that that you'd normally do with an API gateway; you're able to write all of that in those routes, which gives you a lot more visibility. And yes, it was Argo, thank you; we were using Argo, which is really cool. But anyway, that's the long and short answer. Hope that answered it for you.
Okay, so yeah, we're about to end the session; we're out of time now. So thank you so much, Stephen and John, for your awesome session on KubeFox. And see you again soon, right? Yeah, thank you. Thank you. Okay. So thanks, everyone, for joining the latest episode of Cloud Native Live. We enjoyed the interaction and questions from the audience, and a special thanks to Stephen. Thanks for joining us today, and we hope to see you again.