Hi, I'm Cesar Saavedra, Technical Marketing Manager at GitLab. In this video, I'm going to show you how GitLab provides unified and integrated monitoring and deployment strategies. GitLab lets you monitor the performance of a deployment and easily roll it back if needed. It also empowers you to choose what to deploy, and to whom, in production via feature flags as well as advanced deployment techniques like canary deployments.

Rachel, a release manager using GitLab, oversees the CI/CD pipelines that automate the continuous releases within her organization. In this first scenario, Rachel is going to roll out a new release of the application to production and do a sanity check on it. Then a second release of the application will take place, which she will also roll out to production. She will find a web performance issue with the new release, track down its probable cause, and then roll back the latest update from production.

Rachel notices that there is a new release of the application running in the staging environment, whose pipeline is currently waiting at a manual step. She does a sanity check of the updated application, and all looks good. So she rolls out the updated application as a canary deployment, and then to all of the pods in the production environment. Notice the performance job at the end of the CI/CD pipeline. This job checks the updated application's web browser performance in the production environment.

Rachel checks the application in production. Notice that the edit and new product screens have a white background. The next release of this application will update the background color of these two screens. Later in the day, Rachel notices that a new release of the application has just been deployed to the staging environment, whose pipeline is again waiting at a manual step.
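A pipeline like the one Rachel uses can be sketched in `.gitlab-ci.yml`. This is a minimal, illustrative sketch rather than the demo's actual configuration: the deploy script and application URL are hypothetical, while `Verify/Browser-Performance.gitlab-ci.yml` is the GitLab-maintained template that adds a `browser_performance` job to the pipeline.

```yaml
# .gitlab-ci.yml (illustrative sketch)
include:
  # GitLab template that adds a `browser_performance` job, which measures
  # browser performance of the given URL and reports it on merge requests.
  - template: Verify/Browser-Performance.gitlab-ci.yml

stages:
  - deploy
  - performance

deploy_staging:
  stage: deploy
  script:
    - ./deploy.sh staging          # hypothetical deploy script
  environment:
    name: staging
  when: manual                     # the manual step the pipeline waits at

browser_performance:
  variables:
    URL: https://example.gitlab.com/   # hypothetical application URL
```

The metrics this job produces are what surface as the browser performance test comparison in the merge request widget.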
She verifies the running application in the staging environment and notices that the edit and new product pages are now the same color as the main landing page. It seems like a simple update to her, so she goes ahead and rolls out the updated application to canary first and then to production. Rachel proceeds to check the application in production and encounters an error when trying to log in. She tries again, is able to log in on the second try, and confirms that the application now has the edit and new product screens with the new purple background.

She's concerned about the login error she saw. She goes to her environments dashboard and drills into the CD pipeline that rolled out the latest version of the application to production. She then goes into the merge request that initiated this rollout and notices that two browser performance test metrics degraded in this release. She expands the section to check the specific metrics. This may very well be the cause of the issue she saw in production.

Rachel definitely does not want any users to run into this error because it would negatively affect customer satisfaction and experience. So she decides to roll production back to the previous version of the application, which she knew was running fine. She goes to the production environment page, which shows an auditable list of all the updates to it, and identifies Update 51 as the one she needs. She clicks the rollback environment button to bring the application back to a previously stable state. Rachel monitors the rollback via the job log and also by visually tracking its progress through the pipeline execution.

Finally, to double-check that the application was indeed rolled back, she opens it in production and logs in as an end user to validate that the edit and new product screens have the white background from the previous release. So far, Rachel has been monitoring releases that are deployed to all users in production.
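Rollbacks like this work because each deployment job is tied to a GitLab environment: any job that declares `environment:` is recorded on that environment's deployment list, and the rollback button re-runs the deployment job of an earlier version. A minimal sketch, with a hypothetical deploy script and URL:

```yaml
deploy_production:
  stage: deploy
  script:
    - ./deploy.sh production           # hypothetical deploy script
  environment:
    name: production                   # each run is recorded as an update
    url: https://example.gitlab.com/   # on the production environment page
  when: manual
```

Because every update (like Update 51) is an entry in this auditable list, rolling back is just re-deploying a known-good entry rather than a separate emergency procedure.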
But what if Rachel needed to introduce a feature in a controlled manner by segmenting who was going to see this new feature? She can accomplish this by using feature flags and advanced deployment techniques such as canary deployments. She can check that the feature flag is enabled in the review and staging environments, and then start an instance of the latest release as a canary deployment before rolling it out to production in an incremental fashion, first at 50% and then at 100%.

To this end, she has created a new feature flag called "product in alphabetical order feature flag" with three strategies, and a user list called "prods in alpha order user list" with two users in it, mickey@disney.com and minnie@disney.com. The first strategy uses a percent rollout of 50% based on Available ID in the production environment. The second one targets the feature to the users in the user list "prods in alpha order user list" in the staging environment. The third strategy targets a specific user, hulk@universal.com, in the review environment, which is an ephemeral environment used for validating application updates before they are merged into the main branch. Feature flags help Rachel reduce risk, allowing her to do controlled testing and separate feature delivery from customer launch.

Rachel double-checks the review environment to verify the application updates and the feature flag strategies specific to this environment. She logs in as hulk@universal.com and confirms that he is indeed being served the new feature, which is the product list sorted in alphabetical order by product name. She also checks that he is getting the updated purple backgrounds for the edit screen and the new product screen. Once the updates to the application have been merged to the main branch, Rachel goes to the staging environment, where the application should now match the one from the review environment.
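The 50% percent rollout strategy works by deterministically bucketing each identifier: the same user always lands in the same bucket, so a given user consistently sees (or doesn't see) the feature across sessions. GitLab serves these flags over the Unleash protocol; the sketch below is not Unleash's actual hashing algorithm, just a minimal illustration of sticky percent rollout using a hypothetical `percent_rollout` helper:

```python
import hashlib

def percent_rollout(flag_name: str, user_id: str, rollout: int) -> bool:
    """Return True if `user_id` falls inside the rollout percentage.

    Hashing flag name + user id gives "stickiness": the same user gets
    the same answer on every request, while different users spread
    roughly uniformly across the 0-99 buckets.
    """
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # deterministic bucket in 0..99
    return bucket < rollout

# A given user is either always in or always out of a 50% rollout:
print(percent_rollout("product_list_alpha", "mickey@disney.com", 50))
```

Across many users, roughly half fall below the 50 cutoff, which is what produces the half-and-half behavior Rachel later observes in production.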
She verifies the feature flag strategy for the staging environment by logging in as minnie@disney.com, one of the users in the feature flag user list "prods in alpha order user list", and sees that Minnie is being served the new feature. She verifies that Minnie is also getting the updated purple backgrounds for the edit screen and the new product screen. She logs out and logs back in as mickey@disney.com, the second user in the user list, and sees that Mickey is also being served the new feature as well as the updated purple backgrounds for the edit screen and the new product screen. To confirm the feature flag strategy for the staging environment is being followed, she logs in as hulk@universal.com and validates that he is not getting the new feature: the product list he gets is not sorted in alphabetical order by product name.

Rachel would like to combine this feature flag with the latest updates to the application, which include the fix for the web browser performance issue that caused her to roll back the previous release. She can do this by deploying the latest release as a canary deployment. But before she does, she checks that the web browser performance issue was solved by going to the merge request and expanding the browser performance test metrics section. She sees that four of the metrics are the same as before and three have improved. She's happy to see these results.

She proceeds to roll out the release in a canary deployment by clicking on the canary job in the CD pipeline, which will instantiate a new pod in production with the latest release of the application. Advanced deployment techniques like canary, incremental, and blue-green deployments improve development and delivery efficiency, streamlining the release process. At this point, Rachel decides to roll out the canary deployment to production at 50%. She clicks on the rollout 50% job to do so.
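Canary and incremental rollout jobs like the ones Rachel clicks are what GitLab's Auto DevOps pipeline generates when canary deployments and manual incremental rollouts are enabled. A minimal sketch, assuming an Auto DevOps-based project (the variables are documented Auto DevOps settings, though the exact job names in your pipeline depend on the template version):

```yaml
include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  CANARY_ENABLED: "true"            # adds a manual `canary` deploy job
  INCREMENTAL_ROLLOUT_MODE: manual  # adds manual rollout 10%/25%/50%/100% jobs
```

With `INCREMENTAL_ROLLOUT_MODE: manual`, each rollout percentage is a separate manual job, so the release manager decides when to widen the rollout rather than having it proceed on a timer.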
She observes the deploy board to track that two out of the four pods in production contain the canary deployment. At this moment, half of production contains the previous version of the application, the one with the white backgrounds for the edit and new product screens, and the other half contains the new version, the one with the purple background screens. In addition, Rachel has introduced a new feature flag strategy for all of production. Rachel opens the application in production, logs in as different users, and checks that in some cases she gets the new purple screens and in others the white background screens, and likewise sometimes gets the new feature and sometimes does not. In total, per the 50% canary deployment she gets the new purple screen backgrounds half the time, and per the feature flag strategy for production she gets the new feature half the time. The combination of canary deployments and feature flags can help gather direct user feedback to determine which features are relevant to them, so that Rachel's organization can focus on these to shorten release cycle times and deliver higher-quality, differentiating value to their users.

Everything looks to be working just fine, so Rachel decides to roll out the canary to all of production. She clicks on the rollout 100% job and monitors the deploy board to see when the rollout finishes. Everything looks good. One last thing she does is check production metrics by going to the production monitoring screen. She checks the overview metrics for the cluster, including its memory and CPU consumption and Ingress metrics. Finally, she checks the Kubernetes pod health monitoring dashboards. She's happy to confirm that all looks good in production.

We have gone over how GitLab provides unified and integrated monitoring and deployment strategies in a consistent, repeatable, and uniform manner to help you make your releases safe, low-risk, and worry-free.
I hope you enjoyed this video. Until next time.