Yeah, hello everyone. First of all, congrats on making it to the end of the day, this being the very last presentation. I don't know if that's a good thing or a bad thing, but here we are. We're gonna talk about preview environments with Argo CD. I'm Brandon, as introduced. I'm at Codefresh as a principal technologist; I also handle a lot of go-to-market stuff, but the most important thing is I get to spend a lot of time playing around with Argo, so that's very fun.

I'm gonna start with what seems like a very obvious question, where I think everyone in this room is gonna go, "oh, I know what a preview environment is, so why are we discussing this?" But what I want to talk about is that preview environments mean different things to different people. A preview environment for you might be a namespace. It might be a cluster. It might be a set of clusters. It could even be a region, or, in the case of Mike's great talk earlier, it could be vclusters as well. So preview environments really differ from person to person, and what I want to talk about today is how Argo is trying to bridge that gap: with their generators, they're making it possible to fit as many use cases as you can. So here in this image, obviously the simplest diagram I could have: you have some developer action that is resulting in some Kubernetes resource in some way, whether it's a cluster or a namespace. That's what we're gonna talk about today, and how we're gonna do that.

So why should you care? I mean, that's always the question, but why should I really care about this?
What is it solving for me? And that's where I really think it's important to talk about testing, because as we move more towards modernized applications, as we move towards Kubernetes and really operating at scale, testing is just getting more and more complex. We can't really get away from it. And there's this concept of, oh, we're gonna shift left, we're gonna move security testing and a lot of other things further left, developers are gonna take on more responsibilities. I think it's a great idea; it's harder to implement in practice. But what we want to do with preview environments is help shift a little bit of that onto the developer without making it feel like a giant burden on them. So that's a goal here.

We also want to avoid that rework that happens in static environments. I don't think anyone wants to get up into their staging, their UAT, and definitely not their production, and have a feature that's not acting as expected. So the goal is to make sure that we test this earlier in the process to truly understand the state of it, and then we can proceed forward with just better development practices.

We want to avoid really ugly merges as well, because I'm sure many of you have been here, either from the DevOps side or the developer side: you have multiple features that are actually clashing together, and you're hitting some really nasty merge conditions, and it may not even come out until you're actually testing. So this gives us an opportunity to be a little bit more dynamic about how we create our test environments, and we can merge feature sets together into a preview branch that gets built out into a preview environment. So if you have two high-profile features that you know are potentially going to conflict with each other, let's create a controlled environment where we can test that, and maybe you have a QA resource that can also help you validate it. So that can be a very good approach to this.

One other thing, and I was kind of talking about this earlier:
you really need to comprehensively understand the behavior that you actually have within your applications and within the features that you're testing. The reality of the situation is, a lot of times your dev environment, your integration environment, or whatever you want to call it, becomes kind of a dumping ground. What developers do, because they're focused on delivering their story points, is they deliver their code and go hands-off. Then it turns out something's broken in the integration environment, and it basically becomes a case of whoever found it has to figure it out. That's not a fun scenario, right? And I think the developers are going to push you off, because it's not their goal to worry about that; their goal is to work on the next piece of code they have to deliver. So this again gives you another platform to understand that feature behavior a little bit better before we move on.

So let's talk about some use cases. I'll start with the good, and some of these are going to be obvious; as we move into some of the other sections, maybe a little less obvious. The first one being microservices. I think most people on Kubernetes are doing some sort of microservices, or mini-services, or whatever you want to call them depending on what you kind of had to go with at your organization. They're just a naturally good fit for this, right? They're kind of bite-size; you can easily deploy and test them in a preview environment. Ideally, if you really followed microservices best practices, you don't have a lot of other dependencies that are going to cause problems.

It can also be very helpful if you have projects with heavy concurrent feature work. In other words, you have a big project release coming up, and you have 20, 50, 100 developers all working on features at the exact same time. And I think we all know that when you get towards the end of a sprint, they're coming in hot a lot of times, right? You might have 75 or 80 percent of
the features getting delivered right at the end of the sprint. So if you have projects like this, it can become kind of the Wild West in those integration environments, where we're merging everything into the main branch to queue up this release and push it through. If you have that situation, these preview environments can help you peel off a few of those key features. Again: identify, test, figure out what you need to resolve before you merge it in with the main feature set.

Static environments are another interesting one. We work with a lot of folks that have static environments where they're gonna have to have a code freeze, right? Because they might have external customers that are depending on it, or even internal customers that need to utilize an environment in a specific way. And once code is delivered into this track, it's got to be frozen. So you end up with developers who are kind of left hung out to dry for a while, because they're waiting to see where they can actually test their change, and they don't really have a good environment to do that in. This gives them an opportunity to spin up an environment that suits their needs to actually test it out.

And then Argo plus preview environments is just a great combination, right? I think we can all agree with that. A lot of the recent development in Argo has made this so much easier, and we'll talk about that, but it's really just improving things.

So now I'm gonna talk about the bad. These aren't really bad, but I wanted to say the good, the bad, and the ugly, so this is what we have. These are potentially bad things, right? APIs, for example, can seem like an obvious use case. You're like, well, I can really get out there and test this in a preview environment.
It's gonna be very easy. But depending on how much dependency you have on other services, or other services have on you as an API, and also how you set up ingress routes, this can be a bit of a trap, because API testing can really lead to very unexpected results, and you're likely gonna have to build out preview environments with multiple services. You may find that to be a good thing, you may not; it can actually be a very good use case, but you need to watch out for some of the caveats there.

Anything that has heavy database mutation, or depends on messages, or basically consumes from strong external infrastructure while you're changing things, can be tricky. It's not that it shouldn't be done; it's that you need to be cautiously aware of it. For example, say you're deploying a new version of a customer messaging service, and that customer messaging service is consuming off of Kafka or something like that, and you're consuming the same messages that the service currently in integration is consuming. You're gonna start to wonder where these missing messages are going, and it can be a really confusing thing if not everyone's in the loop on exactly what's going on. So you need to be careful about where you're consuming from.

And then I also have core services. What I mean by this is, let's say I'm a business-to-business retailer and I have a pricing engine, right? And that pricing engine affects effectively everything in the company. It's on manifests.
It's on the checkout process, it's in all these different places. You can test these in preview environments, but it can be tricky to really understand the full implication of what you're changing in an engine like this across the entire ecosystem of what your company really utilizes. So that can be quite tricky. You can do it, but again, you need to truly understand the scope of the services that you're trying to utilize and how you're actually going to play off of that.

So here's the ugly. I don't think there are going to be too many surprises here. Monoliths: they're not really a great use case. It's not that you can't use a monolith to do this, because you definitely can; I've worked with many people that put monoliths into preview environments. But there are a lot of potential bad repercussions here. There are resource usage issues, and there's just the impact on other external infrastructure and supporting services that you really need to be aware of. So monoliths, again: it's not that you can't do it, but it's probably not the ideal use case. I think what you'll see is, a lot of times as companies go through modernizations, they take monoliths and split them up into smaller, more bite-sized services, probably not microservices but somewhere in between, and then they can really start to see the benefits of things like preview environments.

If you have microservices with hard coupling, you probably didn't really do microservices right. But it happens all the time, right? So you're gonna have some hard coupling between microservices, and that can make things very tricky. If your preview environment to test one service requires you to bring in 20 other microservices, you have kind of a problem.
It's probably an architectural problem, but it's also a problem for preview environments. That's just a tricky thing to actually deal with.

And then load testing is an interesting one, because I've seen preview environments used for load testing, and it can give you some cursory information, for sure. But the reality is it's never going to be very accurate compared to your production resources, your actual environments where you want a true metric of how this is going to perform compared to what you expect it to look like in production. It's very difficult to generate exact load unless you have the exact resource specifications as your production environment and the exact load there. So you can do it, but it's pretty tricky.

And the last one here is more of a philosophical one, but it's just an inflexible culture. There are a lot of organizations out there that have their static environments: they have a dev, they have a SIT, they have a UAT, they have a production, and that's all they want to stick to. They don't want to have to manage any resources outside of that. It's not a problem that you can't change, but it's definitely a problem that can make it very difficult to get developers and your DevOps folks and kind of everyone else within the IT organization into a situation where they're ready to have preview environments and understand the impact that has on developers and how they deliver code. So that one, that's up to you guys to figure out.
It's your companies; you can tell me if it's gonna work there. In my experience, companies that are moving towards things like Argo and Kubernetes usually already have a more open mindset to things like this. But if it's a company that's kind of like, hey, we want to use Kubernetes and Argo, but they're an older legacy software company, it's a little bit more challenging to bring that culture along.

So let's talk about some best practices. There's a lot here, and I'm sure you guys already do all of these things, right? So this is not surprising; I'm sure you're experts at all this stuff. We already talked about shift left, but in general we do want to shift left. Now, that doesn't necessarily mean we just want to give developers more work to do. We want testing to happen earlier in the process, but we want to assist them; we want to do it in a way that doesn't create additional burden on the development process, on the feature delivery process. And of course, along with that, we need to automate it, right?
So that's kind of where Argo is coming into the picture here.

You want to stick to a consistent naming strategy. This is not something that you necessarily have to do, but it's gonna make things a lot easier. If you're using a pull request generator or an SCM generator, having that consistent naming strategy is going to make it much easier to set up the Argo side to actually pick up on things and automatically create these environments. So it's a good thing to have.

And then the third one is really what we're gonna be focused on, which is pull request preview environments. This is one of the simplest ways to start; we obviously saw earlier that you can get much more complicated than this. But if you have a pull request, create a preview environment. You can drive it off that pull request information, and that makes it very easy to understand the lifecycle of it: not only when it gets created, but when it gets destroyed, and how it gets merged in. And how do we mix features together? You can easily do that by merging two feature branches together and creating a preview environment out of that. In this case, I'm using labels on GitHub, so you can merge branches together, put a label on the result, and it's gonna kick off the process. So a lot of flexibility there.

Some other cool things that you can do, and I would suggest you do (and this one I'm gonna credit to Kostis, who I work with), which is the automated PR comments. This is very helpful. If you are creating a preview environment, kicking one off after a pull request has been created, it can be very helpful, especially if it's a web-facing application, to go in and actually put the address or the destination of that environment on the pull request itself. Because a lot of times your QA folks, especially if they're gonna assist you with this, or maybe even other developers that are gonna help you test this out, need that.
They don't necessarily know where to go. I mean, they could go digging through kubectl, right? They could look for your service and look for the namespace and try to figure it out, but if you have that link there, it just simplifies the entire process. So automated comments on the PR: a very good thing to have.

The next one, smoke tests, is a little bit trickier. I think we all know that everyone would love to have comprehensive smoke tests, and that your end-to-end testing is gonna take care of everything for you. It would be great if it's part of this process as well. That may not always be possible, but it's definitely a best practice: when you spin up a preview environment, ideally, if you have the resources, you should run it through the same set of smoke tests and automated testing that you have for your regular integration environment. It's gonna simplify things, because it might catch stuff like: you don't have a verified or signed image, you have code smell issues, or whatever it is. Just run it through that same process, so that before you actually test it in a preview environment and merge it into main, we've already resolved most of the issues we might encounter there.

This one's the fun one: self-destruct. You want it to basically just go away, right? Once you're done with the preview environment, you don't want to have to handhold and babysit and have an end-of-day process that cleans it all up. You want this to happen automatically, and this is where GitOps is really helping us, right? It's declarative configuration; we're declaring a state; it comes in and cleans up the process. So we want the environment to clean itself up.

Now the advanced ones. We already talked about culture.
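Before moving on, circling back to the automated PR comment idea: the talk drives this from the Codefresh pipeline, but purely as an illustration of the pattern, here is what it could look like as a GitHub Actions workflow. The workflow name, the label, and especially the URL pattern are my assumptions; derive the URL from however your ingress or load balancer actually names preview endpoints.

```yaml
# .github/workflows/preview-comment.yml (illustrative sketch, not the talk's setup)
name: preview-comment
on:
  pull_request:
    types: [labeled]
jobs:
  comment:
    # Only fire for the label that also triggers the ApplicationSet generator
    if: github.event.label.name == 'preview'
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
    steps:
      - uses: actions/github-script@v7
        with:
          script: |
            // Assumed URL convention; replace with your real preview address scheme
            const url = `https://preview-${context.payload.pull_request.number}.example.com`;
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.payload.pull_request.number,
              body: `Preview environment: ${url}`,
            });
```

Whatever tool posts it, the point is the same: the link lands on the PR, so QA and other developers never have to go digging for it.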
We already talked about culture But ingress here so ingress is an interesting one It could be anything from just routing a little bit of traffic into a preview environment You know you could do across namespaces if you really had to although I wouldn't recommend it It could be something like a service mesh and actually splitting traffic But it can be a very good best practice particularly if you want to test specific use cases So we're not demonstrating an ingress here today But it can be something that is really very helpful depending on what you're trying to test And here is an image of really just like the simplest scenario, which is I have feature a There's a pull request created out of it It's deployed to a dynamic environment dynamic environment a again That could be any sort of resource that you want it could be a namespace a cluster, you know, whatever really fits your use case All right so where does this fit in with get ops and here I'm gonna give a little plug to the open get ops group because This is really perfectly in line with preview environments So you're looking for something that is declarative and versioned and immutable pulled automatically continuously reconciled Preview environments within Kubernetes and Argo perfectly fit into all of those right so we have resources that are defining out the infras Defining out the environment that we want to spin up. It's also Stored and get right so the entire life cycle of this is stored and get whether it's get library at hub or whatever it is So we can kind of trace between those as far as when I spin up a preview environment I know that it's either a combination of these branches. 
It's this branch, or a combination of these branches; here are the additional commits that came after we created the preview environment. It's created automatically, and it lives with the lifecycle of that pull request as well. So if we close the pull request, we clean up our environment automatically. And then of course, it just benefits from the fact that we have tools continuously checking for state.

So here's another one that I want to talk about; I might have touched on this a little earlier, but Argo sync phases and waves can be immensely helpful here. When you're spinning up a process like this, ideally you have a very simple scenario when you kick off a preview environment. The reality is there are often supporting things that need to be put in place, whether that's infrastructure, additional services, whatever you really need. So I would highly recommend that you learn the sync phases and waves if you don't know them, and really understand how to use the hooks to your advantage.

This diagram is technically not a hundred percent correct, but it's kind of how I think about it: Git is holding our state, Argo is attempting to sync our state, Kubernetes is our end resource in this case, and we have pre-sync, sync, and post-sync events. Here are some examples of what you might utilize them for. In our pre-sync,
we might need to validate our image signature first. In some cases you might need to actually verify compute and service availability. Now, I know you guys who are all working in the cloud are thinking, that's silly, the cloud is just gonna spin it up. But the reality is a lot of folks are working with on-prem infrastructure as well, where they have requirements for compute, and you could potentially run out of space, or you might need to create a new allocation request for space. So having a pre-sync step that says, hey, I'm gonna need this cluster, these pods, this much resource, so that I can actually spin this up, can be crucially important.

For sync, again, you can spin up additional necessary infrastructure. So if you have a database requirement, you could create, let's say, a dynamic Cassandra instance, populate it, get it ready. You can check if you have a service-to-service dependency that you need to follow up on.

And then post-sync goes into what we were talking about earlier: if you want to put a Slack or Teams message out there with an environment link, that can be very helpful. And then a testing workflow. Again, this is kind of the dream, but if you had your end-to-end testing actually running in a preview environment, that'd be fantastic. And not shown here is the sync-fail scenario, but that's also a valid use case: if for whatever reason the sync fails, what do you want to run? In the case of a preview environment, a lot of times it's gonna be cleanup tasks for any of the stuff that you previously set up in pre-sync or post-sync.

All right, so let's talk about the meat and potatoes of this, which is Argo ApplicationSets. This is incredibly powerful. It's really meant to be a very flexible approach to managing deployments across multiple repos, applications, clusters, and more. It's kind of the catch-all scenario. It's like, okay, I need to go this deep and this far.
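Going back to those hook phases for a second: in Argo CD they're just annotations on a resource, usually a Job. A minimal pre-sync sketch might look like this; the checker image and its command are placeholders I've made up, not a real signature verifier.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  generateName: verify-image-
  annotations:
    argocd.argoproj.io/hook: PreSync            # run before the main sync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded  # clean up when done
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: verify
          image: example.io/signature-checker:latest   # placeholder image
          args: ["--image", "example.io/app:v2.0"]     # placeholder check
```

The same pattern with `argocd.argoproj.io/hook: PostSync` covers the notification and test-workflow examples, `SyncFail` covers the cleanup case, and the `argocd.argoproj.io/sync-wave` annotation orders resources within a phase.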
How do I do it with Argo? Argo gives you that flexibility through its generators. The generators offer different levels of complexity depending on what you actually need. You can do something as simple as a list generator, where you just have a list of values that it iterates through to create matching environments. It could be based on merges or pull requests. Then there's the matrix generator; I think of the matrix like a slot machine where you pick how many lines you want to play: I want to do three across and one down, or whatever. The matrix is kind of a many-to-many-to-many relationship manager. It can get very complicated, but it can be helpful if you want to mix and match, say, the cluster generator and the pull request generator and a list generator, because for whatever reason you need all of that in there.

In our case we're using that pull request generator I was talking about earlier, which gives us an Argo ApplicationSet dynamically monitoring for pull requests. In our case it's looking at a label; it could be based on a branch name filter as well, but right now it's just gonna be looking for a pull request label that says "preview", or whatever you want to use, and that kicks off the process.

All right, so let's talk about what we're gonna try to demonstrate here. And again, I said this earlier, but the last time I tried to demo for the CNCF, Amazon went down globally. So we're gonna give it a run and see how it goes this time. We're gonna start by walking through it from a simple perspective. I have my main branch, with this commit ID, this version of the application. There's some CI process, whatever you want to use; I'm using Codefresh, but it could be whatever you prefer. It's going to generate an image and apply the tag.
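For reference, a pull-request-generator ApplicationSet along the lines just described might look roughly like this. Every concrete name here (org, repo, chart path, value keys) is a placeholder assumption, not the exact demo config, and a private repo would also need a `tokenRef` for the GitHub API:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: preview-environments
  namespace: argocd
spec:
  generators:
    - pullRequest:
        github:
          owner: example-org          # placeholder
          repo: example-app           # placeholder
          labels:
            - preview                 # only labeled PRs get an environment
        requeueAfterSeconds: 15       # how often to poll for opened/closed PRs
  template:
    metadata:
      # Consistent naming schema: app/namespace derived from branch + PR number
      name: 'simple-deployment-{{branch}}-{{number}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/example-org/example-app.git
        targetRevision: '{{head_sha}}'
        path: charts/app              # placeholder chart location
        helm:
          values: |
            namespace: simple-deployment-{{branch}}-{{number}}
            tag: '{{head_sha}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: 'simple-deployment-{{branch}}-{{number}}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - CreateNamespace=true
```

When the PR closes, the generated Application disappears from the set and automated pruning tears the environment down, which is exactly the self-destruct behavior from earlier.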
We're gonna kick it off to our environment; in this case it's staging. What we're going to do, though, is create a PR branch here; in this case it's called feature-update-index, with this commit. Argo is actively monitoring that repository, so it knows we have a pull request, based on the label that we put on there (or again, it could be a filter based on the naming you picked). It then updates the application: it creates a new namespace and creates the application inside of your cluster, whatever that destination cluster is. And this could be N number of clusters, right? It could be one or ten; it really depends on what you want. And we're gonna have this preview environment out there for us, ready to go.

Once that's out there, it's gone through all of our testing, we validated it, we did our own smoke tests, did some API testing against it, we know it's in good shape. We're then going to merge our branch, or delete our branch; in this case we're doing a merge, so I'm merging my PR branch from feature-update-index into main. What that's going to do is kick off a new build process, which gives us a new image ID. This is intentional, because again, we're merging into an integration branch where all the features are going in together. And then that's going to update our staging, our static environment, up to the new version that we were just testing and playing around with.

All right, so let's try it. First, let's take a look at the repository real quick. There are a few things I want to point out, the first being that if we want to create them from the CLI, here are the commands to create the base app and the app set itself; that just simplifies things a little bit. Let's talk about what we have in our Helm chart here. This is really simple and straightforward.
In a real scenario you're probably gonna have Kustomize with overlays, overlaying certain values for each of your environments, but in this case we just have a Helm chart, and there are a couple of values that are critical here: really, it's this namespace one and our tag. Our namespace is going to be dynamically created based on the name of the feature branch. The tag itself will change depending on what's happening, whether we're merging up to main or just building out a preview environment.

And if we go back here, we can take a look at the world's simplest Go web server, which is kind of intentional. We just have a web server here that's saying, hey, hello GitOpsCon, I'm on release version one.

Let's take a look at our namespaces right now; we don't really have anything out there at the moment. Let's take a look at our pods that aren't in kube-system, and we see again, we don't really have anything there. So let's go ahead and create our base application using the CLI. All right, so far so good. We'll see that this is already spinning up; we're creating our base application now, and for this it's looking at the main branch at this point. We haven't actually created our app set yet. The thing that's going to take some time here is that we're actually getting a load-balanced URL so we can click through to it. Very straightforward stuff. But if we go take a look at our namespaces, we have our staging namespace now. And let's take a look at our pods.
We see that we have a simple deployment now running in our staging namespace. Let's go to the cheater window here and click through. This will probably take just a second to actually pop up, and while we're waiting, let's go back and take a look at the app set that we're going to create, because I think this could be very interesting.

Inside of our app set, there are a couple of things that I really want to make sure I highlight, because they're crucially important for us. The first being that your name here, your naming schema for your namespace, is very important, because either you have a manual cleanup process that's going to have to run against it, or you have something depending on where this namespace gets deployed. You want to make sure your namespace is consistent, and that your developers stick to a consistent naming practice. The other thing being that we are passing through the image tag; again, this is so that we can promote between environments and also have a preview environment. So we will go ahead and create this as well.

And we do have CreateNamespace=true on this. That technically isn't really necessary for us, because we actually have a namespace resource in our Helm template, and the reason we have that is so the namespace dynamically gets cleaned up when we merge the pull request.

But if we go take a look at our application, we can see that it did deploy out. We have an incredibly tiny "hello GitOpsCon v1.0". So let's go back, and this time we will actually use the UI to create our app set. We'll just call this "appset", project default, and we'll do automatic sync. I do like prune and self-heal.
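One aside on that namespace-in-the-chart trick from a moment ago: shipping the Namespace as a chart resource is what lets Argo's prune delete it along with everything else when the PR closes. A minimal sketch (the value name is an assumption) would be a template like:

```yaml
# templates/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Values.namespace | quote }}
```

With this in place, `CreateNamespace=true` is belt-and-suspenders: the sync option only creates namespaces, while a managed Namespace resource also gets cleaned up on prune.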
I do like prune and self-heal Use our repository And this is actually in the app set bucket We'll do in cluster and then for this case because this is Argo managing Argo applications We actually want to put this in the Argo CD namespace for this particular example Go ahead and create this And we'll see that our app set has been created But of course nothing's happening yet because we don't actually have a branch that is going to kick this process off So let's go ahead and go into github and we will create a very simple Bump here if I can type and we'll just change a version. Obviously. This is not a real version, but it works for this All right date Web server will create a new branch out of this Will update the index and Then this is a very important part here We have to actually select our preview label because this is what's going to kick off a process because again in that app So we're actually monitoring for that. Let's create this Now because it's an actual code change. This is actually going to depend on a ci process. So we do actually have the build Happening here, and there's nothing really Too crazy going on here all it's going to do is actually clone it down Build a Docker image push it out so that it's available for our preview environment here So that'll take a second to happen once that kicks off then we'll start to see this go green in the meantime Argo itself is actually going to attempt to sync it now I what I need to do eventually is put in a little bit of a pause here I actually have it be webhook base so we know when it's actually ready to kick off that process Because this will go into a holding pattern until our Docker image is actually built and pushed out there Which is happening right now? All right, so that is been pushed and if we go back to github here Now we'll see this go green and that our checks are all good. 
In other words, our CI build actually happened. Let's go back in and look at our application, and we can see that we have a deployment out, and it has our new tag information. So let's look at our namespaces now. We'll see that we have a simple-deployment-feature-update-index-5 namespace. That is a mouthful, but it is definitely the namespace that we want. And if we take a look at our pods, we'll also see the pod running as well.

So let's go back to our cheater window and get our load balancer URL for this namespace. We can see that we are now on version 2.0 here, and our staging environment is still on 1.0. All right, so everything is going smoothly, which is always nice.

So let's go ahead and merge this pull request. What we want to happen, like I showed in the slide earlier, is for it to basically clean itself up, right? When we merge this pull request, our preview environment will no longer be needed, because it's been merged into the main branch. So let's confirm that we're merging. This will kick off another CI build process, and this time the CI build is just slightly more complicated.
We're cloning, we're building the image, we're actually going to update the manifest image tag, and then we'll push that image and image tag out. Very straightforward, but it gives us the end result we actually need. And let's go take a look: we can see this is already saying it's out of sync. We no longer need it, so Argo is going to work on cleaning it up, and you can see it did already clean up that application. Right now I have it set on a requeue timeout of 15 seconds, so it picks that up pretty quickly. If we look at our app set, it no longer has a dangling application off of it, because it's no longer necessary. If we take a look at our namespaces again, we no longer have our simple-deployment-feature-update namespace. Take a look at our pods, same story, right? So that's all been cleaned up.

Now let's go look at our build process and where it's at. It's actually already updated the manifest and pushed that out. So let's go back, and if we look at our commit message, we've updated the image tag to this new version, which is 9f4475; not that it really matters that much, but it means that, hey, our main branch did get this new tag that we just built. And if we go into our Argo applications here, into our base app, we haven't actually seen an update yet. I didn't change the reconciliation timeout, so it's still set to three minutes. So let's just manually refresh it. That's gonna say, oh, we actually have a new application version that needs to be deployed, and it's gonna go ahead and sync that up. And now if we refresh this, we'll see that we only have one external load balancer URL; it's in our staging namespace. Let's click through to that, and we can see that we now have the change we were previewing at version 2.0 up there in staging.

So yeah, I think that covers everything. Let me go through;
I think I have one more slide here, which is just to talk about some challenges. Some of these are gonna be obvious, some not so much.

If you have to automate a lot of non-Kubernetes infrastructure, this could be painful. You might still need to do it, but it won't be fun. If you have a ton of service-to-service dependencies, you really need to understand the entire landscape of what you want to test against and know what's reasonable to actually pull into a preview environment. We kind of already covered it, but even outside of load testing, if you have performance-dependent services, it can be very tricky to do preview environments, particularly if you're doing something like machine learning; it can be very difficult to get a good test run through this. Comprehensive cleanup can be challenging, especially if you are creating dynamic databases and all sorts of other cool stuff.

Collaboration can introduce a people process. What I mean by that is, as a developer, if I need to test multiple features together with someone else, I have to go talk to that person now. Whereas before, I would just throw the code over the wall, it ends up in some integration environment, and someone deals with it. Now you're saying, no, I want developers to be a bit more proactive about talking to other developers, understanding how their features interact, and testing some of them together. That can be challenging, but it needs to happen.

And then, as in any Kubernetes and Argo presentation, secrets are always a challenge. I don't think there's really ever an easy solution. You're probably gonna have secrets that you need to deal with, whether it's injecting through sidecars, or Sealed Secrets, or SOPS, or whatever you want to use, but that can be challenging.

And that's it. Yep, I do have the repo here, so if you have any questions: it's a very simple example, but hopefully it puts you on a good path towards ApplicationSets.