You've heard how GitLab has added focus on capabilities needed by security pros. On our product direction page, you'll see a greater focus ahead on understanding the potential impact of vulnerabilities, or in other words, the risk. IBM, an important channel partner of ours, recognized the need to understand and manage this risk of unresolved software vulnerabilities, and they're working to complement GitLab's current capabilities with their own for even greater risk insight. Suhas Keship with IBM will share how AI could be used to compute the risk score of a GitLab merge request. And as always, be sure to ask questions in chat.

Good morning, good afternoon, or good evening, wherever you are in the world. Thanks for joining this session on using AI in production to compute the risk score of a GitLab merge request. We will have a Q&A session at the end. My name is Suhas Keship, and I'm the product manager for IBM DevOps. I'm here to talk about some exciting concepts around marrying DevOps data from GitLab with application monitoring data from tools like IBM Watson AIOps, and making more sense out of it.

Before going into the details, a disclaimer: this is not something that is already developed and available for use. It is still at the concept stage, and we are figuring out how to make it a reality, so there are no timelines yet associated with any of this. But I will show you a basic demo of some initial work we have done to get these systems talking to one another and some basic scenarios going. There is obviously much more work that needs to be done.

Many of you have seen DevOps-related memes like this before. We've had development teams and operations teams working in silos, and I'm sure we've all seen cases where the handover mechanism between the development team and the operations team isn't well defined, which leads to many complications. GitLab, of course, solves this problem to a large extent by providing a single source of truth for development and operations teams. But can we do something more in addition to this? Can we be more proactive about figuring out problems that can occur in production when a developer makes a code change?

In case you haven't heard, IBM and GitLab entered into a strong partnership a few months back. We are looking at leveraging GitLab's and IBM's strengths in DevOps and other portfolios to create unique solutions that can make a difference to our customers. In this particular scenario, we are looking at the IBM Watson AIOps solution, which understands the state of an application in production and, using artificial intelligence, detects problems early and even prevents some problems that might occur. It also auto-fixes some problems and automates certain operations workflows as and when problems are detected and resolved. Since this is not the main topic for today, I'm not going to go into more detail about how the AIOps product works.

So what can we do to marry the DevOps and production data and be more proactive in problem detection? As you all know, GitLab has the concept of issues, so we can tag issues to work items and merge requests. When a developer creates a merge request in GitLab, whatever data the AIOps product requires to train its models, such as GitLab issues, review details, comments, the text inside the merge request, and more, can be sent over to AIOps using REST APIs.
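To make that outbound leg concrete, here is a minimal sketch using GitLab's REST API. The merge request endpoint is a real GitLab API; the AIOps ingestion endpoint shown is purely hypothetical, since the actual Watson AIOps API for this concept has not been defined yet, and PROJECT_ID, MR_IID, and GITLAB_TOKEN are placeholders.

```sh
# Pull the merge request details and diffs from GitLab
# (real GitLab REST API; PROJECT_ID, MR_IID, and GITLAB_TOKEN are placeholders).
curl --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
     "https://gitlab.example.com/api/v4/projects/$PROJECT_ID/merge_requests/$MR_IID/changes" \
     -o mr_changes.json

# Forward the payload to the risk-scoring service for correlation and scoring.
# This endpoint is hypothetical; the real Watson AIOps ingestion API may differ.
curl -X POST --header "Content-Type: application/json" \
     --data @mr_changes.json \
     "https://aiops.example.com/api/v1/risk/score"
```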
The AIOps system can pick this information up, correlate it with the real-time threat monitoring information it already has, calculate a possible risk score for the merge request, and provide that score back to the developer who created the merge request, so the information can be displayed within the merge request itself. But as with any AI system, AIOps would need to know whether its recommendation was accepted by the user, to help it train its model better. A simple way to achieve this is to provide a thumbs-up or thumbs-down button alongside the risk score and its explanation of why that score was calculated. The developer can either accept the recommendation or, if the developer feels it's a false positive, reject it by clicking the appropriate icon. This feedback can then be sent over to the AIOps system to help train its models (a sketch of how this loop could be wired up with GitLab's existing REST APIs appears after the demo setup below). More sophisticated feedback mechanisms could include comparing the code that went in with the merge request, the changeset that was promoted to production, against the recommendation provided by AIOps. This can help determine what part of the recommendation was implemented and what was rejected, and train the models accordingly. A lot of this is still theory, but we have the technology to make it happen, and we hope to achieve a meaningful implementation in the coming months.

So let me move on to something we are in a position to demo: a basic integration scenario between GitLab, IBM Watson AIOps, and a monitoring tool called Instana, which IBM acquired very recently. The current setup is as follows: a sample application for ordering robots online, called RoboShop, which is hosted on OpenShift on IBM Cloud; IBM Rational Test, which also runs on OpenShift on IBM Cloud, to inject load onto the application; and Watson AIOps and Instana for observability and monitoring, which also run on OpenShift on IBM Cloud. For the benefit of time, a lot of the configuration between the products has already been done, and I will not be showing that part in this demo; I will skip to the working of the use case. And since this is a complicated setup, we have run through it beforehand and I will be playing a recorded video of it.

So this is the RoboShop application I talked about earlier, which is used to order robots online. And this is the Instana instance; Instana monitors the OpenShift cluster on which the RoboShop instance runs. An application view has been created in Instana, and when we select the time interval of the last five minutes, we can see that Instana is monitoring our RoboShop application. Instana also offers different views, like the dependencies of the services in the application, and a lot of other information about the applications running on the OpenShift cluster.

Let's now log into GitLab and add a test job which injects load onto the application. We edit the .gitlab-ci.yml, uncomment the test stage, and commit the changes. When we go to the pipelines, we can see that a new pipeline has been triggered, and all the tests which had been configured have run, just to make sure we have the latest application deployed. The GitLab pipeline also deploys the RoboShop application on OpenShift on IBM Cloud, and that has been deployed.
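Circling back briefly to the feedback loop described above, here is a minimal sketch of the GitLab side, using only APIs that exist in GitLab today. The AIOps-side scoring and model training are out of scope; RISK_SCORE, NOTE_ID, and the comment wording are illustrative placeholders.

```sh
# Post the computed risk score back into the merge request as a comment
# (real GitLab Notes API; the score value and wording are illustrative).
curl -X POST --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
     --data-urlencode "body=AIOps risk score: $RISK_SCORE. React with :thumbsup: or :thumbsdown: to accept or reject this assessment." \
     "https://gitlab.example.com/api/v4/projects/$PROJECT_ID/merge_requests/$MR_IID/notes"

# Later, read the reactions on that note as the accept/reject signal to feed
# back into the AIOps model (real GitLab Award Emoji API; NOTE_ID comes from
# the response to the call above).
curl --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
     "https://gitlab.example.com/api/v4/projects/$PROJECT_ID/merge_requests/$MR_IID/notes/$NOTE_ID/award_emoji"
```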
We can now see that Instana detects that the latest version of RoboShop has been deployed and has started monitoring it. IBM Rational Performance Tester is a highly specialized load-injection tool, and we have a sister product called Rational Test Automation Server, a fully containerized test execution engine which runs on OpenShift. The two products complement each other, and we can define different SLAs for passing or failing a performance test from within Rational Performance Tester. Let's now run a virtual user schedule, which injects the actual load onto the system. We can edit or add whatever data set we want the performance test to run with, and we can run anywhere from a five-virtual-user schedule all the way up to a 100,000-user schedule with Rational Performance Tester. These are the different SLAs that can be defined for the performance test to pass or fail. Now we have started the test case and it is running. If we select what has happened in the last five minutes in Instana, now that the load is in, Instana detects that the erroneous call rate has increased; it is slowly climbing, as you can see.

So far we have seen load being injected into the sample application. For the second part of the demo, let's move over and focus on event triggers. This is the Instana instance; Instana provides an out-of-the-box integration with Watson AIOps. Let's test to see if the connection is successful. It is. Now let's configure a trigger and a runbook in AIOps' Event Manager. For the benefit of time, we have pre-configured two scripts: one to create an issue in GitLab, and the other to roll a deployed application back to its earlier version. The first is just a simple curl command to create an issue in GitLab (a sketch of what these scripts could look like follows below). The trigger decides when the automation scripts are executed; in this case, the scripts are executed when the erroneous call rate is too high, and we can define multiple conditions for the trigger. These triggers can be automated, but for the purposes of the demo, let's make it manual so I can walk you through the process step by step. The automation scripts are defined as part of this runbook; when a high error rate is discovered by the monitoring application, Instana in our case, this runbook is triggered.

Before the start of this execution, let's see the current state in GitLab. We see that there are six issues currently, and there are no pipelines being executed. Now let's start the run. The runbook execution has completed successfully, so let's go back to GitLab. You can see that a new pipeline execution is happening and a new issue was created. As a developer, I can assign it to myself or to any other relevant person on the team for the issue to be fixed.
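For reference, here is a minimal sketch of what those two pre-configured runbook scripts could look like. The demo did not show the scripts in full, so the issue title, description, and the ROLLBACK_REF variable are assumptions; the GitLab Issues and Pipeline endpoints themselves are real.

```sh
# Runbook script 1: create a GitLab issue when the erroneous call rate spikes
# (real GitLab Issues API; the title and description text are illustrative).
curl -X POST --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
     --data-urlencode "title=High erroneous call rate detected by Instana" \
     --data-urlencode "description=Watson AIOps runbook fired: error rate exceeded the configured threshold on RoboShop." \
     "https://gitlab.example.com/api/v4/projects/$PROJECT_ID/issues"

# Runbook script 2: trigger the GitLab pipeline that rolls the application back
# (real GitLab Pipeline API; ROLLBACK_REF is an assumed branch or tag holding
# the previous known-good version).
curl -X POST --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
     "https://gitlab.example.com/api/v4/projects/$PROJECT_ID/pipeline?ref=$ROLLBACK_REF"
```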
So, as a recap: we first talked about what the future can hold. We talked about taking the data in a GitLab merge request and sending it over to Watson AIOps; Watson AIOps correlating whatever GitLab sends with the real-time threat monitoring and threat intelligence data it already has, calculating a risk score for the merge request, and sending it back to be displayed in the GitLab merge request; and AIOps collecting user feedback on whether the recommendation was accepted, using different mechanisms, and in turn training its data models. Then we looked at a simple use case where we injected load onto a sample RoboShop application running on OpenShift, with Instana detecting the high erroneous call rate, informing Watson AIOps, and automatically raising a GitLab issue and triggering a GitLab pipeline that rolls the application back to its previous version. This is just a sign of things to come, with more exciting things in the coming months. So thank you, and we can move on to the Q&A session.