Good afternoon, everybody, and welcome to This Week in Cloud Native, episode number six. This week we're gonna get a little more into API deprecation and removal. We're gonna work our way through the blog post that talks about it, we're gonna evaluate some of the tools that are out there for testing whether we still have that problem, and we're probably gonna build some kind clusters and tear some kind clusters down, that kind of thing. So I'm glad you're here; we're gonna have a bunch of fun. If you're in the chat, go ahead and say hello and I can highlight you; it'd be great to know that you're out there. So welcome, welcome to This Week in Cloud Native number six. Let's take a look at the news this week, of which there is quite a lot. If you go here on the left-hand side, there's a HackMD link; if you go there, you're gonna find the latest notes, and if you have anything you'd like to share or link in there, feel free to do so. Hey, how's it going? Good to see you; glad you're here. So this week is number six. We should be starting off these broadcasts with a reminder of the CoC: this is a Cloud Native Computing Foundation video broadcast, and with any of those things it's important to remember the CNCF code of conduct. So please don't throw anything into the chat, or questions, that might be in violation of that code. Basically, please be respectful of your fellow participants and presenters. Registration for KubeCon + CloudNativeCon 2021 is open for in-person and virtual, so definitely check it out; I'm gonna be there and I hope to see you there. That'll be a lot of fun. Cloud Native TV has a bunch of different shows this week. Coming up, we have Cloud Native LatinX en vivo on Tuesday, we have Cloud Native Live on Wednesday, and we have Field Tested: capture the flag in Kubernetes, and I'm actually really curious which one Kaslin is gonna take on there.
I wonder if it's gonna be kCTF or if there's some other thing they'll be playing with on that episode. It'll be tremendous, I'm sure, so definitely check that one out. There's new content every day of the week. Now, the news of the week. I was actually able to attend the Atlanta, Georgia Kubernetes meetup, and it was really great. I'm giving a shout-out here to James Searcy, who is a good friend of mine, works at T-Mobile, and we worked together for quite a bit. I was really impressed with how Joe put the news together for that, so I've copied a lot of what he's done there, put it in here, and figured I'd cover it here as well. So as you already know, API removals are happening in version 1.22; definitely check that out. Hello and welcome back, Joe, and hello to you, guy with a cube. Wait, do you have a cube like this kind of cube, or some other kind of cube? I'm really curious. Some of the stuff that happened this week, or is getting ready to happen: they have graduated into the CNCF sandbox, which is pretty exciting. And planning for the Contributor Summit North America 2021 has begun. If you've been to one before, definitely come check it out, and if you haven't, this is a great opportunity; I can't wait to see you there, it'll be a lot of fun. If you're curious about the event, here's the information for it: it'll be at the JW Marriott LA Live. The location and the schedule are posted, and the registration will be as soon as it gets updated; I don't think the registration link is there yet, but as soon as it's available, you'll be able to find it here.
Oh, that's awesome. On a personal note, I've actually just recently decided to start exploring cubes, because when I was a kid I tried it and I was not very good at it; the only way I could really solve these twisty puzzles was with a butter knife. Lately I've been totally obsessed with solving all of the twisty puzzles, so now I have at my disposal a two, a four, a couple of threes, a seven, and this, a ten, and it's super fun. It gets your brain working in fun ways. Pretty cool. Hello, Russ, good to see you. Welcome, welcome. Another thing: I don't know if you've ever contributed to Kubernetes as a project, but if you have, you've probably interacted with fejta-bot. Fejta-bot has been pretty amazing, and it's got some pretty fun stories, like this one, where somebody basically opened an issue to make fejta-bot admit that it was a bot. What the bot does is mark issues as stale if there's no activity on them after a time, and it also takes care of a lot of other maintenance and housekeeping stuff. But the awesome part is that it's being replaced with an official bot, so it's being retired. All of the automated comments that were made by fejta-bot are going to be made by the Kubernetes triage bot; as of this PR, the k8s-triage-robot account is fully under the project's own control, owned by the SIG ContribEx GitHub management subproject. It basically means that it's another one of those pieces of infrastructure that is now managed by the community rather than by an individual, which is really pretty great. It's a pretty important bot, and it handles things like lifecycle and a bunch of other stuff, so it was pretty important that we get it pulled under the auspices of the project itself. You'll still see those same warnings and stuff.
It'll just happen a different way. Next up, I don't know if you've looked into the sidecar stuff, but within Kubernetes there's a proposal to support the idea of sidecars. Now, I know this is kind of an overloaded term, in that when you think about a Kubernetes pod with multiple containers running in the same pod, you think of all of them as sidecars of one another, effectively, right? What this proposal does is describe a model in which we might be able to say, individually, that some of those containers need to start before other containers. Right now it's just a list of containers, all of which start at a somewhat uncontrolled rate. And this is not taking init containers into account; that's not what I'm talking about. This is within the construct of the containers list (or I guess it's an array, really) within the pod itself, giving some capability to control it. Some of the problems being solved there: if you had a log forwarder running as a second process within a pod, you clearly want that log forwarder to be the last thing out, because you want to make sure it's able to forward all of the logs it has access to before actually shutting it down. These are just some of the relatively obvious use cases that people have, and if you're interested in this, they're looking for feedback. So if this proposal makes sense to you, if you're happy with how this works, definitely give a thumbs-up; otherwise, give some comments to indicate what you think might be an improvement. If you're operating a Helm repository, this one is kind of a surprise: repository credentials passed to an alternate domain. There's a hack that kind of lets people get hold of the credentials in an unexpected way. While working on the Helm source, a Helm core maintainer discovered a situation where the username and password credentials associated with a Helm repository could be passed on to another domain referenced by that Helm repository. The index.yaml within a Helm chart repository contains a reference to where the chart archive for each project is. That means if you, say, took a dependency on nginx in your own chart, then when you do a helm get, or helm fetch, or helm install, or any of those things, of your own umbrella chart in your own private repository, your username and password could be passed to whatever repository is holding the nginx chart as well. This is unexpected behavior; it's likely that they wouldn't do anything with it, but it's definitely an important one to understand. So that was a security problem, and it has been addressed. Next up we have a pull request, a work in progress: introduce documentation around managing a separate mount namespace. Now, this is a fascinating idea, and I had not heard of it until Joe mentioned it in his announcement of this particular issue. I could see how there would be challenges.
But anyway, the proposal here is that you have the ability to define a mount namespace with which the ephemeral mounts we create for pods would be associated. By default right now, the mount namespace for all of the ephemeral stuff, like if you're going to mount a scratch space within a pod or any of those sorts of things, like emptyDir, is the host's: the emptyDir is mounted on the underlying host, in the host's mount namespace, and then passed as a volume into your running container as part of the instantiation of that container when you kick off a pod (or I should say, when the kubelet kicks off the pod). In this model, the idea is that anything we create ephemerally, we would actually associate with a different, isolated mount namespace, and maybe even share that mount namespace with some other entity, and that mount namespace is where we'd mount any volumes for your given pods. Which is great, because it improves the level of isolation between the pod and the underlying file system, but I could see that it might also add a little bit of complexity. So it's an interesting idea; I haven't actually played with it myself.
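For reference, this is the kind of ephemeral mount we're talking about; a minimal pod with an emptyDir scratch volume (names here are illustrative). Today the kubelet sets this up in the host's mount namespace; the proposal would associate it with an isolated one instead:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /tmp/scratch
  volumes:
    # Ephemeral: created when the pod starts, torn down when it's removed.
    - name: scratch
      emptyDir: {}
```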
But if any of you out there play with it, I'm definitely curious to get your opinion on it. Definitely a cool read; it improves the security isolation boundaries, which I think is good. Next: the Kubernetes release cadence change. This is the blog post as of July 20th. Starting with the 1.22 release, a lightweight policy will drive the creation of each release schedule. The policy includes the first release and the last release of the calendar year, and the Kubernetes release cycle has a length of 15 weeks. The week of KubeCon + CloudNativeCon is not considered a working week for SIG Release, mainly because, obviously, everybody's at KubeCon + CloudNativeCon. It should really be the weeks, plural, because there's a couple of them, right? There's the EU one and the US one, so it might be interesting to see if that was taken into account. Kubernetes will follow a three-releases-per-year cadence; Kubernetes 1.23 will be the final release of the 2021 calendar year, and the new policy results in a very predictable release schedule, allowing us to forecast upcoming release dates. There you go, that's our new release cadence. This means there will be fewer releases per year. It doesn't really do anything about LTSes or anything like that, but it does mean fewer releases, and it means perhaps it'll be a little bit easier to adopt and pick up the latest Kubernetes releases as things go. Yeah, that is a great question, Russ; I'm not sure what happens with the old fejta-bot as it goes out to pasture. It'd be kind of a fun idea. Next: the Sysbox container runtime. I was actually playing with this this week. If you're unfamiliar with the idea, it's that you want to be able to run Docker containers that have systemd or Docker running inside of them, to make them look like they're a little closer to VMs. Now, I could definitely hear that on some level.
This is a weird thing to want, right? Because you're operating a container; why would you want it to look like a VM? You should be able to live within your means inside a container, not necessarily try to run the whole Linux operating system stack inside of it. Well, one of the great use cases for this would be something like kind, wherein you run your Kubernetes nodes as containers inside of a cluster, and so you're able to do a lot of lightweight testing in that scenario: you can spin these things up and tear them down at effectively the same rate as containers themselves, without the cost of virtualization. It's also a great learning tool; Kubernetes-in-Docker via kind gives you a great ability to play with all the different knobs and dials of kubeadm and that sort of stuff. Well, Sysbox is another one of these, and there are a few that I've been playing with lately; Footloose is one of them, and Sysbox is another. Sysbox is actually pretty low-level: it has a runc-compatible runtime, so you can plug it into your existing Docker, and I've done that to play with it and see what it looks like. This is basically what you would add to provide another runtime for Docker. By default, Docker uses runc as the runtime (by way of containerd), but if you wanted to add another runtime, you could add one like this, and what that does is give you another option on docker run, so you can run a specific container under that other runtime. In our case, I'm going to use docker run.
I'm going to use the sysbox-runc runtime, I'm going to remove the container when it's done, I'm going to call it my-container, and I'm going to pull the image from the Nestybox registry; it's their Ubuntu Bionic systemd-plus-Docker image. And we can see this thing starts up like a container. It looks and feels very much like a container, not too dissimilar from the way it would work if you were to do this with kind. Now, what's also interesting is the mechanism: this particular container image has pre-installed bits of Docker, right? So I can do docker ps inside it. It didn't find the image locally, so it pulled it, and then I'm inside a container inside of another container; kind of like Docker-in-Docker in some ways, right? That is basically how it's working, and so this gives you a more generic way of handling the systemd pieces. One of the other things Sysbox does is implement a user namespace, so if you don't already have user namespaces available, it won't work for you. And there were some interesting challenges there: I tried to run this on an Arch box that had the latest kernel, and it was not working for me at all. I had to drop back to an LTS release, because apparently this really only works well on older kernel versions, because of the shiftfs requirement. So if you want to play with it, it's here; it's a fun one, and it seems to work pretty well.
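For reference, registering an extra runtime with Docker looks roughly like this; this is a sketch of /etc/docker/daemon.json (the binary path is the usual Sysbox install location, but check your own install):

```json
{
  "runtimes": {
    "sysbox-runc": {
      "path": "/usr/bin/sysbox-runc"
    }
  }
}
```

After restarting the Docker daemon, an invocation along the lines of `docker run --runtime=sysbox-runc --rm -it --name my-container nestybox/ubuntu-bionic-systemd-docker` runs the container under Sysbox; the image name here is the one I recall from Nestybox's published images, so double-check it against their registry.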
I've had good luck with it. One of the challenges I had previously: I was trying to use Kubespray, which is Ansible-based, to install a Kubernetes cluster, and I wanted Docker containers as the nodes, because I didn't want to go about managing all the things. One thing I learned was that in the Kubespray project, if a hostname isn't already set correctly, Kubespray tries to set it, and the way it does that is through hostnamectl. I'm root right now; if I do hostnamectl set-hostname foo, that fails, and it gives me the output "could not set property: failed to set static hostname: device or resource busy". And that's wacky, because I would assume that would be possible within the container, right? Because I can still do something like echo foo > /etc/hostname, log in again, and boom, it's foo; I can still change the hostname that way. But something in the way hostnamectl does it makes use of something I don't understand yet, and it gets blocked. So at some point I might strace that and see if I can figure it out; kind of an interesting challenge. All that to say, it didn't solve my problem, which was a very obscure problem to begin with. But it is neat, so definitely check it out if you're interested in VM-like containers. Next: an Argo vulnerability leads to crypto mining; if you're using Argo, definitely check out the vulnerability. Then there's the CNCF white paper on operators. I haven't had a chance to look at this yet, but it's from TAG App Delivery, and they're talking about the final version of the white paper here.
So this is actually, I think, a pretty solid write-up that was funded by the CNCF to talk about the operator pattern, how it works, and all of that stuff. If you've heard people talking about operators and you want to know more, I think this would probably be a really good reference to begin with, so definitely check that one out. I like that it's somewhat agnostic: it talks about the different frameworks that are out there (Kubebuilder, KOPF, the CNCF Operator Framework, Metacontroller), it talks about lifecycle management and use cases for an operator, it talks about a bunch of different cool stuff, and it does a pretty good job. So: a great reference on operators. There's also a new admission-control micro-framework, and then there's Krustlet, which has been moving along pretty well; it's a kubelet written in Rust, and it gives you the ability to run WebAssembly modules as containers instead of containers as containers. Pretty neat stuff. Nothing new on the CVE front. The next thing I wanted to do was explore that blog post about API deprecation, just in case anybody had not already seen it. Here we are. This article was written by Krishna Kilari and Tim Bannister, and it talks about the API removals in version 1.22. When we get to version 1.22, which has already been cut, and you start migrating to it, one of the heads-ups is that you're going to see things get taken away; you're going to see an API not present for a particular group. So let's talk through which ones are going to be removed. We covered a little bit of this in the last episode, if you want to check that out.
It's on YouTube. But the thing I wanted to point out here: for example, ValidatingWebhookConfiguration and MutatingWebhookConfiguration were originally v1beta1, and now they're just v1. The removal means that if you still have your object defined, your manifest defined, with the API group admissionregistration.k8s.io/v1beta1, it's going to fail, and it's going to tell you there is no object at that URL, right? That's going to be the experience you have, and you'll be surprised by it. And there are a few other ones here. CustomResourceDefinition: that's a big one. If you're not updating those custom resource definitions that you've created, you're going to get caught out by that. There's a great example of a custom resource definition in our docs somewhere. All right, so here's an example of a CRD that has been defined, and I'm going to go ahead and apply it: I'm going to go up here to Raw, grab that URL, and we'll apply it and see what it looks like. So let's do kind create cluster, and while we're doing that... oh, that's not going to bring up 1.22; that's not what I want. Do I have a 1.22? Let's take a look at the kindest/node images. It looks like to test this we'd have to build 1.22, unless I've already done that, which I might have. While we're waiting on that, let's go ahead and deal with this kind cluster, and I'm going to go ahead and build Kubernetes real quick.
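The checkout-and-build flow I'm about to walk through boils down to roughly these commands; the tag and image name here are from my setup, so adjust for yours:

```shell
# Check out Kubernetes source the classic GOPATH way
GO111MODULE=off go get k8s.io/kubernetes   # lands in $GOPATH/src/k8s.io/kubernetes
cd "$(go env GOPATH)/src/k8s.io/kubernetes"

# Branch at the first 1.22 release candidate
git checkout -b v1.22.0-rc.0 v1.22.0-rc.0

# Have kind build a node image from the local checkout, then boot from it
kind build node-image --image kindest/node:v1.22.0-rc.0
kind create cluster --image kindest/node:v1.22.0-rc.0
```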
I know that sounds kind of weird, but we're going to do it: cd into src/k8s.io/kubernetes. I've got it checked out locally, and the way I checked it out, this is something I learned (or kind of relearned) recently: there's the GO111MODULE environment variable, and if you set that to off, then you can do a go get k8s.io/kubernetes and it will put it in $GOPATH/src/k8s.io/kubernetes for you. If you have modules on, then it just checks out whatever the released version is and drops it into your module cache so you can reference it. But in my case, I wanted to have it checked out locally so I could play with it, make pull requests, and that sort of stuff. So here we are in go/src/k8s.io/kubernetes. I've done the checkout already, and I'm going to change to a release branch, call it v1.22.0-rc.0; I'm going to do git checkout -b at the first release candidate for it, and then I'm going to do kind build node-image with --image kindest/node:v1.22.0-rc.0 to name its image. And because my local checkout in my Go environment is at v1.22.0-rc.0, that's what I've got checked out locally, kind will actually build that particular version and make it available to us, and we'll see if this works. I might have to grab the current top-of-tree release of kind to make it work, but let's see what happens if we bundle it up this way first; maybe we won't have to do too much more. This will give us the ability to go ahead and test out those expiring APIs and see what that looks like. So while that's happening, let's come back over here. These are the things that are being affected.
Any automation that you have that does a TokenReview, you might want to take a look at it; anything with SubjectAccessReview, LocalSubjectAccessReview, or SelfSubjectAccessReview; anything you're doing that's actually checking credentials, or any testing you do against those things, any of those objects have to be defined with the new versions. The beta CertificateSigningRequest is no longer beta, and the beta version is not going to be available. The Lease API, if you use it. And the Ingress object, right: extensions/v1beta1 and networking.k8s.io/v1beta1. This one has been around for quite a long time, and it will be removed from serving. It means it will not be available, and if you were to try to create an object with that old version, it will fail. Now, there are a couple of things I covered last time that I'll just reiterate here real quick. If you're ever wondering what version is the right version, you can do kubectl explain for that particular object.
Let's take an Ingress, for example. I have a cluster up, created with kind create cluster on a 1.21 node image, so: kubectl explain ingress. And kubectl explain gives you lots of great information, including, right up at the top, what the target version should be and what the kind should be. And if you wanted to know more about the spec, if you wanted to explore it somewhat dynamically, the entire spec is defined here, right? So you could do things like kubectl explain ingress --recursive, and it will give you all of the fields you could possibly define within that object. And if you wanted to know what status.loadBalancer.ingress was going to be, you could do kubectl explain on that path, and here are the fields that are viable for that particular part of the object. So you can dig into a particular object lower and lower, you can look at the spec, and at any point you can add --recursive and see the entire construct of the spec, all right there; very easy to navigate and troubleshoot and see what's happening. So that's one way of understanding it: if I wanted to, I could do kubectl explain selfsubjectaccessreview. It's under authorization.k8s.io/v1, that is the correct version, and the kind would be SelfSubjectAccessReview; and again, here's all the information for the object. That's one way of determining the group. Another great way to look at it is kubectl api-resources. Let's find ingress in this particular output. And this is a tricky one, because this will show all of the API resources that are being served currently, like what things you can define. In this case, you can see that you could define an Ingress under networking.k8s.io/v1, or you could also define an Ingress under extensions/v1beta1, and that's because we're running version 1.21; if we were running 1.22, we would not be able to do that. Looks like we're almost through our build here; lots of good CPU time.
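To make that difference concrete, here's the same hypothetical Ingress expressed both ways; the extensions/v1beta1 form on top is what stops being served in 1.22, and the networking.k8s.io/v1 form below is the one to migrate to (note the backend field rename as well):

```yaml
# Served up through 1.21, removed from serving in 1.22
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  backend:
    serviceName: test
    servicePort: 80
---
# The supported form going forward
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test
spec:
  defaultBackend:
    service:
      name: test
      port:
        number: 80
```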
Oh look, let's slow down a little bit. Hope y'all can hear me okay; shouldn't be too long. So that's another one, and then the last one, which is also useful, is kubectl api-versions. With that, we can tell, for particular groups, what versions are available; so networking.k8s.io has networking.k8s.io/v1 and networking.k8s.io/v1beta1. Let's just build for a second and then jump back into our docs. One of the things I really want to make sure we highlight is that removal means removal. It means it will no longer be served; it means that if you try to create that object, it will no longer be there, right? And we're going to play with that in just a minute when I get 1.22 up; we're going to start up a 1.22 cluster and deploy some stuff that doesn't work anymore. For Ingress, migrate to use the newer API; this is going to be true of pretty much everything. The related API, IngressClass, is designed to complement the Ingress concept, allowing you to configure multiple kinds of ingress within one cluster. If you're currently using the deprecated kubernetes.io/ingress.class annotation, plan to switch to the ingressClassName field instead. And I believe that was actually being handled somewhat automatically when migration would happen. There's great information here for each of the different kinds of things that are being torn down. There's a plugin to kubectl that provides the kubectl convert subcommand; it's an official plugin that you can download as part of Kubernetes. I was not aware of this plugin; this used to be built-in functionality of kubectl. That's kind of a bad link, though; I want to know more about convert. So it used to be a function of kubectl, but I'm really curious about the plugin.
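As an aside, kubectl discovers plugins by scanning PATH for executables named kubectl-<name>; "kubectl <name>" then dispatches to that binary. That lookup is easy to see in isolation with a tiny self-contained sketch (the throwaway directory and fake plugin here are made up):

```shell
# Emulate kubectl's plugin discovery with a throwaway directory
demo_dir=$(mktemp -d)
printf '#!/bin/sh\necho "converting..."\n' > "$demo_dir/kubectl-convert"
chmod +x "$demo_dir/kubectl-convert"
PATH="$demo_dir:$PATH"

# The discovery step is essentially this scan for kubectl-* executables
find "$demo_dir" -maxdepth 1 -name 'kubectl-*' -perm -u+x
```

With a real kubectl on PATH, dropping an executable named kubectl-convert anywhere on PATH is all it takes for `kubectl convert` to start working.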
So let's actually go looking for that. I mean, Krew might have it; I hope maybe it's already there and I'm missing something super obvious. So they reference it, but it doesn't look like it's there. No, I don't have it. So in theory, I should be able to grab it from here, if it exists; let's try that out. kubectl version: I'm currently running v1.21.3. It would be held someplace different than this, I don't think... it's a file. I'll be damned. All right, that's cool. So they basically moved the functionality of convert into a plugin, and they made the plugin available, but it doesn't look like it's packaged together yet, nor have they registered it with Krew. That's really cool; I mean, it's cool that it exists, but the packaging could use a little work, and they probably should update this to include the kubectl convert piece. The way plugins work, if you're wondering, is that anything that has the word kubectl, then a dash, then something after it, that's how the plugin trick works, right? kubectl will discover that and make those plugins available to you. That's pretty neat. Let's take a look at our build here. Okay, that's done. So I'm going to do kind create cluster with --image pointing at the node image we just built; what did I call it? Oh, this may or may not work; let's see. It might work; it'd be awesome if it did. Hey, kubectl version says 1.22 rc0. Woohoo! All right. So now if I go back over here and do kubectl create... oh, you know what, I'm on the wrong one. So, against the 1.21 context: kubectl create deployment test --image=nginx --replicas=1 --port=80. Then kubectl expose deployment test on that same context, which will create a Service of type ClusterIP for that guy. And then I'm going to do kubectl create ingress test with a default backend pointing at that test service, again on the 1.21 context. Now here's the fun part.
You can actually pull the previous version of an object; this is what I wanted to show you. If I do kubectl get ingress test on the 1.21 context with -o yaml, there is the current, valid object, and I could create the same object by copying it to our new context. But I'm not going to do that right away. Instead, I want to pull an older version of it, which I think I can still do here. This is kind of wild, but check it out: this time I'm actually going to use the old extensions group, the one that's being removed, from the cluster. So I do kubectl get ingresses.v1beta1.extensions, which tells the API server that I want it to convert whatever object it has in etcd into this particular version of the object, so that I can see the result, right? And it's gone ahead and done that: it's produced extensions/v1beta1, type Ingress, with the content I had. The backend was serviceName: test, and the servicePort was 80. And if I go back to just kubectl get ingress, you can see the difference, right? The creation timestamp is the same, the generation is the same, the name is the same, the resourceVersion is the same, but the configuration looks a little different: before it was backend, serviceName, servicePort; now it's defaultBackend, service name, service port. Okay. So let's grab that old one, pipe it to our 1.22 cluster, and see what happens. We had two really interesting outcomes. The first was that kubectl itself, on evaluating the object I was getting ready to present to our 1.22 cluster, told me: hey, warning, this is deprecated; you probably want to move to networking.k8s.io/v1. That is awesome. The second, more interesting and more relevant to this particular test, is what happened when I applied this object to the 1.22 cluster.
I saw this error come back: unable to recognize: no matches for kind "Ingress" in version "extensions/v1beta1". So this is the error you're going to want to watch for. (Yeah, the thing doing the conversion is the API server. Correct, yep.) You'll know you're hitting this problem when you see this error; "no matches for kind" is the key, that's where it's going to catch you out. The next thing I wanted to show you: let's grab another example. Hold on one second here... there's the response body; that's everything. All right, so now I'm digging into the bones a little bit to show you some other interesting stuff that's happening behind the covers. This is probably the easiest way I can think of to do a test with SelfSubjectAccessReview. First, understanding what SelfSubjectAccessReview is: one way to conceptualize it is that you can query the API server with specific questions to determine what access you have. So if I do kubectl auth can-i --list, one of my favorite commands, actually, it will show me all of the permissions I have, with my current credential, against the API server, according to the API server itself. And the way it does this is through this self-subject access review; actually, in this particular case it's a SelfSubjectRulesReview, because I'm having it list all of my permissions for this given namespace as an authenticated user. Now let's say I want to check a different identity. Let's do kubectl auth can-i --list again, but we'll just use the default service account in the default namespace, and take a look at that one, right?
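That check looks like this; these are real kubectl flags, and the subject here is just the default ServiceAccount in the default namespace:

```shell
# List everything the current credential may do in this namespace
kubectl auth can-i --list

# Same question, but impersonating the default ServiceAccount
kubectl auth can-i --list \
  --as=system:serviceaccount:default:default \
  -n default
```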
I'm going to impersonate system:serviceaccount:default:default, which is the default service account in the default namespace, identified as a system service account. What's going to happen is that it runs the same command, the self-subject review, but impersonating this particular service account, and returns the result. So my question was: what permissions does the default service account have? We can see they're very different from the permissions we have as an admin. I basically have a cluster-admin role right now, so I have all of the permissions for all of the things, but the default service account does not. It can create SelfSubjectAccessReviews, it can create SelfSubjectRulesReviews, and it has the ability to understand a little bit about the configuration of the cluster, so it can basically walk the API and see what resources are there, but that's about it. Interesting stuff. Now, as I was showing you before, if I want to see the API call that made this request, I can pull that open and take a look. Here is the request body, and you'll notice that it looks a lot like the JSON or YAML we normally see: it's a SelfSubjectRulesReview, apiVersion authorization.k8s.io/v1, the metadata is effectively empty, the spec defines the namespace I want to inspect, and a status object exists but doesn't matter on the way in. And here is the curl request, if I were going to use curl to do this. One of the biggest takeaways from this piece is that you can see how the API removal will affect you: the group and the version are defined right after the well-known path /apis. So what about inside of my YAML document?
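For reference, the request body behind kubectl auth can-i --list looks roughly like this — a sketch of the SelfSubjectRulesReview described above, POSTed to /apis/authorization.k8s.io/v1/selfsubjectrulesreviews, with the server filling in status in the response:

```yaml
apiVersion: authorization.k8s.io/v1
kind: SelfSubjectRulesReview
metadata: {}
spec:
  namespace: default   # the namespace whose rules I want listed
status: {}             # populated by the API server on the way back
```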
Let's take a look at that Ingress again, the one we were looking at before. Inside of here, I'm saying apiVersion networking.k8s.io/v1, and I'm saying kind Ingress. If I just do a kubectl apply... a bunch of data, but actually, you know what would be easier... there we go. Here's the curl making the call, and it's doing a PUT of that document. kubectl has converted it based on the YAML I provided: it has picked up the group and applied it in the URI. It's going to send the request to my API server under /apis, inside the networking.k8s.io group, on version v1. It's a namespaced object, so it determines which namespace I've targeted (the default namespace, by default), and the Ingress object I've created is called test. That is effectively how that conversion happens, and if that API gets removed, you won't be able to use it.

From chat: if you send the 1.21 version of the Ingress output to the kubectl convert plugin, it will convert it to 1.22, and it should then deploy correctly on the 1.22 cluster. Yeah, that's right, let's try that, it's a great example. So the question is: what happens if you send the 1.21 version of the object to kubectl convert? Well, "it converted" is interesting. I expected that to go a different way, honestly. What's fascinating is that it came out exactly as incorrect as it went in, which is really neat. I know that you can pass versions, so I wonder if I have to be explicit about the version. What if I add an output version? No, I shouldn't have to do that. You know what will do the conversion? This is kind of an interesting one. Before we get into it, let's do kubectl apply -f - here: if I apply it to a cluster that has both versions, then the output will be converted. ... Ah, it still took it. Let me look at this one more time; there might be a way to get it to work. That's neat.
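The URI construction we just walked through can be sketched as plain string assembly. For namespaced resources the pattern is /apis/&lt;group&gt;/&lt;version&gt;/namespaces/&lt;namespace&gt;/&lt;resource&gt;/&lt;name&gt; (the core group uses /api/v1 instead of /apis/&lt;group&gt;/&lt;version&gt;):

```shell
# Assemble the request path kubectl derives from the manifest
GROUP="networking.k8s.io"   # from apiVersion, before the slash
VERSION="v1"                # from apiVersion, after the slash
NAMESPACE="default"         # the namespace being targeted
RESOURCE="ingresses"        # plural resource name for kind Ingress
NAME="test"                 # metadata.name
API_PATH="/apis/${GROUP}/${VERSION}/namespaces/${NAMESPACE}/${RESOURCE}/${NAME}"
echo "${API_PATH}"
```

When a group/version is removed from the server, that whole path prefix stops resolving, which is exactly why the old extensions/v1beta1 object failed to apply.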
Oh, I see, I got it now, hold on. I pulled in the plugin... that's cool. Okay, hold on, let me grep my history for convert. Yeah. Although that's still pretty busted. Let's grab the 1.22 version of this and try it that way. Still fails. Interesting. I mean, I could do the output-version thing, but the problem is that you'd have to know what the output version was, right? And the idea is that this would automatically convert it for you. From chat: networking.k8s.io/v1 would be the one. Out of curiosity... right, if you know it, it will actually convert it for you correctly, but if you don't know it, only that explicit path works. That's kind of a trip, and it's a very good point, because otherwise you wouldn't really know. Anyway, that's fascinating, so thank you for pointing that out; that was a great question. I wonder if there's some other piece of kubectl convert that I'm missing. Looking through the flags: there's validation... what I'm looking for is an output version that behaves like a kind of latest, basically. It does not appear to exist. I think the old kubectl convert did do that: it basically came down to discovering what the preferred version was and then using that preferred version. So I wonder if they've wired the plugin up to a new API that doesn't exist yet, because there will soon be an API that describes... actually, let me check my kubectl: kubectl api-resources... no, it doesn't have it yet. But there is a new API, behind a beta gate, that I read about last week, which gives you the ability to define the preferred version of an API, and I wonder, if that existed, whether kubectl convert would be able to consume it and make the right decision. It used to be that kubectl convert would determine what the preferred version was and then pick that, and that's not what's happening here.
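On the preferred-version point: the discovery API already reports one per group today. Here's a sketch of pulling it out of an APIGroup response; the JSON is handcrafted to mirror what kubectl get --raw /apis/networking.k8s.io returns on a 1.22 cluster, and the grep/cut parsing is just for illustration (jq would be cleaner):

```shell
# Save a (handcrafted) APIGroup discovery document, as served at /apis/<group>
cat > /tmp/apigroup.json <<'EOF'
{"kind":"APIGroup","apiVersion":"v1","name":"networking.k8s.io",
 "versions":[{"groupVersion":"networking.k8s.io/v1","version":"v1"}],
 "preferredVersion":{"groupVersion":"networking.k8s.io/v1","version":"v1"}}
EOF

# Extract preferredVersion.groupVersion from the saved response
PREFERRED=$(grep -o '"preferredVersion":{"groupVersion":"[^"]*"' /tmp/apigroup.json | cut -d'"' -f6)
echo "${PREFERRED}"
```

A convert tool could consult this endpoint to pick the output version for you instead of requiring you to know it up front.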
Instead, what it's doing is just taking the object and dropping it out the same way it went in; I don't really get what the convert part is. So it seems like a bug in the plugin, anyway.

The last few things I wanted to cover before I bounce out of here: there are a few other projects worth calling out that I've found, which give you the ability to understand whether things are deprecated or have expired. Thank you. kube-no-trouble, or kubent, is a great example of this. It basically looks for objects that have been created on a deprecated version and warns you about them. This is a neat one because it looks at things like Helm charts, and it looks at metadata that's been left behind. For example, if I do kubectl get ingress test... ah, it didn't work here, but let's see, what if I just do a get deployment? From time to time, when you're deploying things, a record of the applied configuration is kept in metadata, and tools like kubent can look at that metadata and determine that when the object was applied to the cluster, it was stored under an old version. Some of the other tools out there: Pluto, by the wonderful folks at Fairwinds, does a very similar thing. You can point it at your cluster, or you can point it at your source code, and it will try to evaluate whether those things are expired or not. The third one, KubePug, written by community member Ricardo Katz, who might even be on the call today, gives another way of exploring this, as a krew plugin. And the last one is a deprecation checker written by a good friend, Steve Wade. He's doing exactly the same thing: evaluating the manifests you've provided
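At its core, what all of these tools automate is something like the following sketch: walk your manifests and flag any apiVersion on the removal list. The two group/versions hardcoded here are just the Ingress-related ones from this session; the real 1.22 removal list is longer, and the tools also understand Helm releases and in-cluster metadata:

```shell
# scan_manifest: print each removed group/version a manifest still uses
scan_manifest() {
  for gv in extensions/v1beta1 networking.k8s.io/v1beta1; do
    grep -q "apiVersion: ${gv}$" "$1" && echo "uses removed API: ${gv}"
  done
  return 0
}

cat > /tmp/old-ingress.yaml <<'EOF'
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
EOF

scan_manifest /tmp/old-ingress.yaml
```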
and trying to determine whether those manifests are using expired APIs.

And that is our session for today. I wanted to say thank you very, very much for joining me; it really means a lot. I hope these sessions are useful, and I look forward to the next one in two weeks. Come right back here and I'll meet you again, and we'll cover some other interesting, fascinating part of all of this. I hope you all have a great week, and I'll see you soon. If you like what you see, shout it out on Twitter, follow me at mauilion anywhere, and subscribe to the channel. Talk to y'all later. Bye. Alrighty.