Hey, my name is Fernando and I'm a Technical Marketing Manager here at GitLab. Today, we're going to go over the newest features in GitLab 13.5.

Hello, I'm Cesar Cervejo. I'm a Technical Marketing Manager at GitLab. In this segment, I'm going to be covering a new feature in GitLab 13.5 called Feature Flags Flexible Rollout Strategy. This new capability enables a feature for a percentage of page views, with configurable consistency of behavior. It leverages an open source project called Unleash to implement an activation strategy called Flexible Rollout. You can configure the consistency, or stickiness, to be based on user ID, session ID, random, or available ID, and the rollout percentage can be anywhere from 0 to 100 percent. Why does it matter? For customers and prospects, it enables them to define the stickiness based on session ID, user ID, or random, which is no stickiness. This gives them more control over the rollout and allows them to support stickiness for anonymous users. They're also able to experiment with variations of their applications in production, and they can leverage this feature to segment their users and do A/B testing. To learn more, here are some resources: there's a link to the documentation and a link to the issue that tracked the implementation of this capability. And some things to follow: you can check out our progressive delivery information in our CD category direction, and the link is right there.

So let's jump into the demo. Here I have a project called Spring MVC JPA in which I have implemented a feature flag. The feature flag is right here, it's called FF1, so let's look into it. This feature flag has two strategies. It has one for staging, which will offer the feature just to the user ID mickey@disney.com. And for the production environment, it will do a percent rollout of 50% based on available ID. The way available ID works is that if the user is logged in, it makes the behavior consistent based on the user ID. If the user is anonymous, it makes the behavior consistent based on the session ID. And if there's no user ID or session ID, the feature is enabled for the selected percentage of page views randomly. In this case, it's 50%. This specific feature affects a list of products, which we'll see in a second, that can be ordered by product ID or by name. With the feature enabled, the products are ordered by name, not by product ID. So let's go to the environments, and we can do production first. Here's the sign-in, so let's sign in as Pluto first. As you can see here, the products are ordered by name, which is the feature, so Pluto got the feature. That's one user; I'm going to try with four users. Let's try with the second user, which is Magic. And Magic also got the feature; as you can see, the products are ordered by name. So that's two out of four so far. Let's try with Mickey. Mickey did not get the feature; as you can see, the products are ordered by product ID, not by name. And the last one is Hulk. Hulk did not get the feature either. So that's two users out of four, which is 50%, and in production, that was the strategy. So let's go to staging. In staging, let's try the same user IDs against the other strategy, where Mickey was going to be the only one getting the feature. So Pluto did not get the feature. Magic did not get the feature. Mickey got the feature right there; he was specifically targeted. And Hulk should not get the feature, and indeed, Hulk did not get the feature.
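Before we look at how this is wired up in the project, here is roughly what the application-side check boils down to. This is a minimal sketch assuming the Unleash Java client; the class, method, and environment variable names are hypothetical rather than the exact ones used in the demo project:

```java
import no.finn.unleash.DefaultUnleash;
import no.finn.unleash.Unleash;
import no.finn.unleash.UnleashContext;
import no.finn.unleash.util.UnleashConfig;

import java.util.Comparator;
import java.util.List;

public class ProductListing {

    // GitLab's Unleash server tells environments apart by the client's appName,
    // so the environment name ("production" or "staging") is passed in here.
    // The variable names are placeholders for whatever the project defines.
    private final Unleash unleash = new DefaultUnleash(
            UnleashConfig.builder()
                    .appName(System.getenv("ENVIRONMENT_NAME"))
                    .instanceId(System.getenv("UNLEASH_INSTANCE_ID"))
                    .unleashAPI(System.getenv("UNLEASH_API_URL"))
                    .build());

    public List<Product> listProducts(List<Product> products, String userId, String sessionId) {
        // The user ID and session ID feed the flexible rollout's stickiness calculation.
        UnleashContext context = UnleashContext.builder()
                .userId(userId)
                .sessionId(sessionId)
                .build();

        if (unleash.isEnabled("FF1", context)) {
            products.sort(Comparator.comparing(Product::getName));   // feature on: order by name
        } else {
            products.sort(Comparator.comparingLong(Product::getId)); // feature off: order by product ID
        }
        return products;
    }

    public static class Product {
        private final long id;
        private final String name;

        public Product(long id, String name) {
            this.id = id;
            this.name = name;
        }

        public long getId() { return id; }
        public String getName() { return name; }
    }
}
```

The actual project reads the same three inputs, the instance ID, the API URL, and the environment name, as we'll see next.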
So in order to enable feature flags, you need to do a few things within your project. The first thing you need to do is define some variables, and here the variables you need to define are the Unleash instance ID and the Unleash URL. If we reveal the values, you will see them here, and these values come from the feature flags configuration settings. So if we exit this one and go here, if you click on the Configure button, these are the values that you see in the variables right there. You just copy and paste those into the variables. The next thing you need to do is update your source code to use these variables and the feature flag. So let's go to, I believe it's this one here. There we go. Please ignore the debugging statements that I left in there, but basically in this class, I instantiate the configuration for Unleash here, and I'm passing the instance ID and the URL that came through via these environment variables, as well as the GitLab environment name, which in our case is going to be either production or staging. Then, wherever you want to enable this feature, you need to have this if statement that checks whether the feature is enabled, passing the feature flag name. In this case, if the feature is enabled, the product list is ordered by name; if it's not enabled, the product list is just ordered by product ID. Very good. So that concludes this segment. Thank you very much.

In this segment, I'm going to cover the capability introduced in GitLab 13.5 called view cluster cost management data in GitLab. This capability allows you to see an overview of your cluster costs and resource usage in the GitLab user interface. Our integration builds on top of the Kubecost cost-model open source project and gives you flexible insights into various levels of your clusters. Before this capability, many users had to create their own scripts to better understand their cluster costs. Why does it matter? For customers and prospects, as part of their cluster cost management, they can now get insights into their cluster resource usage. They can also identify unusually high peaks in cost. They can save money by identifying idle clusters and either decommissioning them or consolidating workloads into underutilized clusters. And finally, this insight can help them plan and forecast their cloud consumption budgets for future quarters. Here are some resources to learn more about this new capability. There's the documentation link and a link to the issue. There's a link to the open source Kubecost cost-model project, as well as a link to our adaptation of that project, and a link to examples of cost queries that you can use when querying the cloud cost data. Things to follow: check out our cluster cost optimization category direction page; the link is right there. And other notes: in order to be able to use this capability, you need to be a maintainer of the group or project, and you also need to have organization-level billing permissions in your cloud provider account, whether you're using GCP, Amazon, or Azure.

Okay, now let's move on to the demo. I have created a group called Kubecost, and I'm going to go ahead and create two projects inside of that group. The first one is called Kubecost cost model. I'm going to make it public, and I'm going to create it.
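One quick note on the next step: when you create an empty project like this, GitLab shows instructions for pushing an existing folder into it. They look roughly like the following, with the remote URL being a placeholder for your own group and project:

```shell
cd existing_folder
git init
git remote add origin git@gitlab.com:<group>/<project>.git
git add .
git commit -m "Initial commit"
git push -u origin master
```

Those are the instructions I'll be copying and pasting in a moment.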
The next thing I'm going to do is clone our adaptation of the Kubecost cost-model open source project onto my local directory so that I can then import it into the empty project that I just created. I'm going to go into the cloned project, and I'm going to go ahead and get rid of the .git directory. Then I'm going to copy these instructions to push an existing folder into the project that I just created. Very good. Once I've pushed that project, I refresh the page and you can see that the contents of the project have now been uploaded to GitLab. I'm going to go back to the group, where I have the populated project already there, the Kubecost cost model. I'm going to go ahead and create a group-level Kubernetes GKE cluster. I'm going to give it the name kube-cost, and provide the project, the zone, and the number of nodes. I'm going to change the zone to us-east1-d and leave the number of nodes at three. The machine type is going to be n1-standard-2; actually, no, I'm going to change that to e2-standard-2. I'm going to go ahead and create the cluster, and once it's created, I'm going to go ahead and start installing some applications. There it is, the cluster started on GKE. I'm going to install Ingress, Cert-Manager, and Prometheus. Prometheus is an open source monitoring system that we're going to be leveraging for this Kubecost integration. Very good, all installed. Next, I'm going to make sure that I am connected to the right context and cluster, so I get the credentials and make sure that the context is right. I'm going to create a namespace called cost-model in my GKE cluster, and then I run a kubectl command using the YAML files in the kubernetes subdirectory, which will basically instantiate the pod in the running cluster. Right there, cost-model, that's the name of the pod, and it's up and running already. Prometheus is also up and running on GKE.

I'm going to go ahead now and create a second project under this Kubecost group, and I'm going to call it spring Java. I make sure it's public, and it's going to be an empty project like before. I already have this project in my local directory, and I'm going to be pushing it to this empty project I just created in GitLab. So I'm going to change directory to that project on my local drive, make sure that all the files are there, and then copy and paste the commands to push the existing folder into this empty project. Let's refresh the screen, and now we should see that this project is no longer empty. There you go. So that's the Java sample project that we're going to be deploying to GKE. We're going to turn on Auto DevOps and just click on continuous deployment to production, and this will start a pipeline that will compile, build, run a bunch of tests, and deploy the application to production. Very good. Now that the pipeline is finished, let's go to Operations, Metrics, and we want to open the dashboard for Kubecost, which is the default costs YAML. And there you go. This is a chart, or a graph, showing us the monthly node costs for GKE in this case. As you can see, we pushed something to production; where you see that rocket, we applied a change to that environment. We can change the range to 30 minutes if we want, and that also gives you the monthly cost: a minimum, a maximum, and an average.
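For reference, the chart is driven by a dashboard definition stored in the project, which we'll glance at in a moment. A simplified panel along these lines could produce a chart like this one; it is only a sketch that assumes GitLab's metrics dashboard YAML schema and the node_total_hourly_cost metric exported by the cost-model, not the exact file from the project:

```yaml
dashboard: 'Default dashboard'
panel_groups:
  - group: 'Cluster costs'
    panels:
      - title: 'Monthly node cost'
        type: line-chart
        y_label: 'Cost (USD)'
        metrics:
          # Hourly node cost exported by the cost-model, scaled to roughly one month (730 hours).
          - id: node_cost_per_month
            query_range: 'sum(node_total_hourly_cost) * 730'
            unit: 'USD'
            label: 'Monthly node cost'
```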
Just to make sure, let's open the running environment (the application, I'm sorry), and it's up and running. And this is the YAML for the dashboard that you just saw for Kubecost. So that's all I had for this segment. Thank you very much, and I hope you enjoyed it.

Hello, my name is Itzigan Baru, Technical Marketing Manager, and today I will speak about group wikis. A wiki is a separate documentation system that you can access from each project via this link here on the left side for Wiki, and you can create pages via the web interface, using this button, or locally using Git. Until 13.5, wikis were limited to projects; since 13.5, they are also available at the group level. Why does it matter? As a GitLab user, I want an easy way to document my work and also to access and consume my team and project documentation. And why do group wikis matter? As a GitLab user, I want my wiki to be at the group level so that my whole organization can access the wiki. And why is it important for us? Group wikis were the most requested feature; they got something like 654 votes. And when we make our customers and users happy, that matters to us. Resources: the issue and the docs. And now let's see how it works. So this is my group, and I will open the wiki. This is my home page, and on the right side there is navigation to the different folders and pages in my wiki. For example, I have here Marketing and TMM, and under each folder I have other folders or pages. I will create a new event under Marketing. For that, I will enter the full path, and I must give it some description, and create the page. And I have it here: AWS re:Invent. I can go here and edit the description and the title, and I can delete the page. And for each page, I can see the page history and which changes it has had.

The next item I want to show you today is triggering a downstream or child pipeline with manual jobs. So what is it? Parent-child pipelines and cross-project pipelines are not new, but it wasn't possible to trigger them with manual jobs. Manual jobs mean that the pipeline stops and waits for a manual action before the job starts. Why does it matter? Customers will have the flexibility to use when with the manual condition, even for trigger jobs. Before, it was limited to when with on_success, on_failure, and always, but for some reason manual wasn't possible. And for us, we are fixing a limitation, or bug: we are no longer limiting when to specific cases, and we allow when with all options, like we do with other jobs. So let's see how it works. Okay, so we will open the CI/CD configuration file. I have two trigger jobs, and in the second one, I added the keyword when: manual. So we start the pipeline. You can see that one job, the iOS job, triggers the child pipeline, but the Android one waits for my manual action. When I click it, it will trigger the child pipeline, and of course I can click here and see that child pipeline.
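In configuration terms, those two trigger jobs look roughly like this. This is a sketch rather than the exact file from the demo, so the job names and the child pipeline file names are just placeholders:

```yaml
stages:
  - deploy

# A trigger job that starts its child pipeline automatically, as before.
ios:
  stage: deploy
  trigger:
    include: ios-pipeline.yml

# New in 13.5: a trigger job can use `when: manual`, so the child pipeline
# only starts after someone clicks the play button on this job.
android:
  stage: deploy
  when: manual
  trigger:
    include: android-pipeline.yml
```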
So now you can trigger manual jobs also for cross-project pipelines and parent-child pipelines. This is the end of my demo for today. I hope it was useful for you, and enjoy the rest of your day.

Hi, I'm Ty Davis with technical marketing. Today I'm going to be talking to you about epic swimlanes, released in 13.5, something that is very big and helpful for those that are using GitLab for agile or agile portfolio management. So what are epic swimlanes? You know, straightforward here: swimlanes. When you add them to the GitLab issue board, they group issues according to an epic, and that's going to help us track high-priority epics, see related issues, or track board progress by that grouping mechanism that is now part of GitLab. We can see here, if you're on your board view, you go up to the top right corner, where there's a Group by option, and you can set it to group by epic. And what that's going to do is organize all those issues that are part of your board into swimlanes that are organized by epics. And why does this matter? Because this is something that is essential for agile project management and agile portfolio management for customers and prospects. It's that base need to group work by epic, so you have that portfolio management piece. You can have leadership that has visibility into where certain epics are in terms of issue progress, and it gives that view at a portfolio level that we have not had in GitLab until now. There is not direct documentation on swimlanes in the docs yet; that is coming very, very soon. There is still documentation on epics, though. And with this resource here, you can go view the epic for swimlanes and see what kind of progress is going to continually be made in the future with swimlanes, what's been released right now, and add the kind of feedback you may have on the current state of swimlanes. We'd love to hear that feedback and know what you'd like to see in GitLab as we continually work on the MVCs for swimlanes. Now real quick, I'm going to hop out here and show you live what you basically saw in the screenshot. So I have here my board view, and up here is the Group by option. This is where I can come and group by an epic. And now I have these different epics that have grouped my issues, and I can collapse those, I can open those up, and I can go about organizing or looking at different swimlanes based on the epic I want to focus on most. If you have any questions, please feel free to reach out, and thank you for your time.

Hi, my name is Fernando and I'm a Technical Marketing Manager here at GitLab. Today, I'm going to go over some of the new features in the 13.5 release. The first feature I'm going to go over is customizing SAST and Secret Detection rules. What this does is allow you to modify your existing SAST rules as well as your existing Secret Detection rules, and to remove some of those rules from actually being used in the scans. So why is this important? Well, this is important because it allows better customization of your SAST scanner and Secret Detection scanner, so you can run custom rulesets and make the current rules, you know, more tailored to your organization's needs. Same thing for Secret Detection, for example: maybe there are unusual formats for secrets that you put within your code and you want to detect those, so instead of just scanning for password or passwd or the default patterns, you want to add more. And I'm going to show you that in the demo. One thing to note right now is that this is available for the Node.js and Go SAST analyzers, and these customizations can be provided by editing a TOML file. And now, yeah, let me jump into this demo. So I've created this project called Tiny Micros, just a simple Go microservice, and I'm going to show you how this works.
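For orientation before we open the project: the ruleset file we're about to look at follows roughly this shape. This is a simplified sketch based on the documented custom ruleset format, so treat the exact keys, the pattern, and the thresholds as illustrative rather than a copy of the demo project's files:

```toml
# .gitlab/sast-ruleset.toml (illustrative sketch)
[gosec]
  description = "Custom gosec ruleset"

  # Pass a gosec configuration through to the analyzer. A "file" passthrough
  # pointing at a JSON file in the repository is another option.
  [[gosec.passthrough]]
    type   = "raw"
    target = "gosec-config.json"
    value  = """
{
  "G101": {
    "pattern": "(?i)passwd|pass|password|pwd|secret|token|weird",
    "ignore_entropy": false,
    "entropy_threshold": "80.0",
    "per_char_threshold": "3.0",
    "truncate": "32"
  }
}
"""
```

The G101 block is gosec's hardcoded-credentials rule; the demo extends its pattern (here with "weird") and adjusts the entropy settings.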
So I have a .gitlab folder with the sast-ruleset.toml, and what this is doing is setting a custom ruleset for the Go analyzer, GoSec. What it's going to do is use this file as the custom ruleset file, and you can see that we're going to use the GoSecConfig.json to customize the GoSec scanner. What I did here is go to the GoSecConfig and create this, which checks for certain patterns, or certain strings, and I added the weird pattern. The default one looked exactly like this, just without the weird. So now I've added a pattern to detect anything that has weird in it as a possible hardcoded secret vulnerability, and I went ahead and changed the entropy. You can read all about this in the GoSec documentation. So now, looking at my main.go, you can see that I just have a variable named weird and I'm just printing that out. And if I go to the security dashboard to see the vulnerabilities detected within the master branch, which that's in, you can see a vulnerability for potentially hardcoded credentials. There you're going to see that there's essentially a hardcoded credential. I go to the location, which is main.go line 26, and you can see it points to my weird variable. So that's one thing I wanted to note, and this makes it very, very useful for expanding these rulesets and adding different things and different configurations. The rules are of course different for Node.js; it just depends on the scanner what you can customize, and you can see that within the documentation.

So now to jump on to the next feature: SAST support for iOS and Android mobile applications. This feature adds SAST scanners for iOS and Android mobile apps, so now you can scan the static source code of these mobile applications. One benefit of this is that a lot of companies have, you know, code written for their back end and infrastructure, and then they might also have a mobile app to access that infrastructure or back-end service. You want to make sure that your native application is also secure, so we are expanding our portfolio to enable that. And this integration was actually contributed by the H-E-B team. H-E-B is a grocery chain in Texas and parts of Mexico, and their digital team contributed back this integration, and we just wanted to highlight that. We have a lot of documentation on how to integrate different scanners, and that's something I'd like, you know, everyone to go ahead and check out. There's also a demo project, which, I don't know how maintained it is at the moment; I'm planning on making my own project for this and then sharing it with y'all in the next update. But what I want to show is currently in this project, I'm looking at an MR, and within this MR, you can see that there were vulnerabilities resolved that were in master before. We can see that it detected an insecure WebView implementation, and that the WebView ignores SSL certificate errors. This is flagged as a high severity finding. And you can go ahead and see that this works exactly how it works with all the different vulnerability scanners; it's the same format, just done for the mobile platforms. Thanks for watching, and I hope you enjoyed it. To see more cool GitLab content, be sure to subscribe.