Hello everybody, and welcome to another OpenShift Commons gathering. This time we have a new member of the OpenShift Commons, CollabNet, but they're longtime folks that I've loved working with in the past, so I'm really looking forward to hearing the update today on everything they're doing in the DevOps space. We have with us Eric Robertson and Franco Albaster from CollabNet. I'm going to let them introduce themselves, we'll have Q&A in the chat, and then after the presentation and demonstration is done, we'll have some live Q&A. So without any further ado, Eric, take it away.

Great, great. Thanks, Diane. So my name is Eric Robertson. I lead the DevOps business unit here at CollabNet, and I'm joined by two colleagues, Don Freeman and Franco Albaster. Franco, do you want to start off and introduce yourself?

Sure. I'm Franco. I'm doing a lot of fun things over here, especially with the DevOps architecture and integrating OpenShift with the products that we offer, to add instant value and visibility.

Great. Don?

Hello, my name is Don Freeman. I'm one of the product architects in development on the DevOps product line.

Thanks. So what I'd like to do is start off with a company overview, and then there are a couple of topics I'm going to talk about before we introduce the solution. I really want to hit home on the importance, based on what I'm seeing and what customers are telling me, of accelerating this concept of the DevOps idea-to-delivery pipeline. Why is it important to remove bottlenecks and really move that idea-to-delivery pipeline to realization? Next, I'll talk about a concept that customers and the industry are starting to use to measure that. This is the concept of lean value streams: basically, being able to measure your tool chain to provide visibility. And we're going to talk about how this is enabled through the combination of Red Hat OpenShift with the CollabNet DevOps Lifecycle Manager product, which we'll showcase through a walkthrough and demo. And then we'll have a summary and some follow-up questions.

Okay, so CollabNet at a glance. CollabNet was founded in 1999, with headquarters in South San Francisco. We've been the leader in application lifecycle management, agile, DevOps, and collaborative solutions. Many of you may know us from Subversion; we were one of the first open source version control solutions out there in the market. We have over 10,000 customers and over 250 employees across the globe. And of course, we're a Red Hat technology partner.

So I'd really like to talk about what our definition of DevOps is. You know, everybody has their own definition, and I start off with CAMS, the acronym describing the core values of the DevOps movement. Many of you remember it was coined by Damon Edwards and John Willis at DevOpsDays Mountain View back in 2010; some of you may have been lucky enough to witness that. It really is around culture, automation, measurement, and sharing, and Lean was brought in a little bit later by Jez Humble. What's key here is that a lot of folks have already started down the path of the culture aspect, which is really around breaking down barriers between teams and reducing waste in the processes among those teams. And from there, automation became perhaps the most visible aspect of DevOps.
And many people use that to focus on productivity gains; it's one of the main drivers of DevOps. But automation wasn't really just used to save time; it was also for corrective actions, preventing defects, and enabling self-service. And OpenShift, as you know, has become a very key component there, providing automation and streamlining, especially for developing and deploying container-based applications.

Measurement is one I really want to focus on, because I think that's the one customers are now really starting to look at and identify. And measurement is really around this question: how can I have continuous improvement without the ability to measure that improvement, right? How do I know if that automation task is really worthwhile? So it's about how I can collect this data and be able to take action on it.

Sharing is also kind of core. A lot of people see this as an extension of culture: finding people with similar needs across the organization and being able to capture best practices. And what I want to talk about is how we're able to capture these best practices in the context of your process and your tools, for improvement. It all ties into that measurement enablement, okay?

So the first piece of this is accelerating the DevOps idea-to-delivery pipeline. To be very simplistic, it's really about how do I get these ideas, which come from lines of business and from individuals, into production and delivered to the customer quickly, right? That's the idea-to-delivery pipeline. In other words, how do I quickly remove the bottlenecks so the idea can be realized by the customer? Because as soon as it's realized by the customer, that's where the value comes in, right? And a key part of this is that I want to be able to get feedback. I've got to optimize the process with feedback not just from the customer, but throughout my idea-to-delivery pipeline. Because if I can get feedback, I can remove bottlenecks and take action immediately. It's similar to how, if I want to get from location A to location B, I have Google Maps, and Google Maps can map that out for me; and with newer technologies now, you can have a car that will drive it for you, right? But what happens when I hit roadblocks? If I know about them ahead of time, I can alter my path so that I'm still able to reach my destination in the time I have set, right? And optimize that delivery. So this is about constantly providing feedback across this idea-to-delivery pipeline, to continuously improve the application delivered, the environment it's deployed to, and the overall delivery process.

So why is this such a challenge? If you look at your idea-to-delivery pipeline, there can be a lot of steps, right? At a high level, I've got planning, coding, building, release, deploy, plus a monitoring and operations piece. Typically, the monitoring and operations piece wasn't in the developer or planning world; developers didn't really care too much about it. But in our new DevOps world, I do need to care. It's part of this value stream that we're talking about here, right?
It's all about how do I get those requirements quickly turned into features in the hands of customers, and provide that feedback overall. Now, part of what makes this a little bit challenging is that for each one of these steps, I can have multiple tools involved, right? Driving that process and capturing that information. So for us, it's really about how do I give visibility from planning to ops overall, right?

As you know, there are many tools in my tool chain, and these tools are all generating their own events and their own information across this lifecycle, right? Typically, if you wanted to understand, let's say, what's going on in planning and how that relates to development, build, release, and the rest, you'd be pulling log files or information individually from each of these tools. What's nice about Red Hat OpenShift is that it brings a platform where I'm tying in pieces of my development tools, build tools, and release and deployment capability, especially around containers, and it's collecting and organizing events in such a way that I can optimize my process very quickly, especially utilizing the source-to-image technology to go from my repository to getting the application out and deployed. What's key here is that I have other tools that are part of my process that may sit outside of OpenShift, right? But they're a key piece of my value stream. So it's very important for me to be able to connect all my tools together holistically, in context with the OpenShift platform as well.

Once I'm able to get this data, I'm essentially monitoring my entire tool chain from planning through operations. And now I can start executing rules on it, doing alerting, and taking actions. So going down this chain, I can quickly understand what's happening and make corrective actions immediately. The last piece is the reporting aspect: understanding my entire pipeline, what my status is, how it's looking, and what potential bottlenecks I need to start identifying that are inhibiting me from releasing this application. So one is tool chain integration, automation, and event collection; two is that continuous monitoring piece; and three is the reporting aspect, okay?

So let's drill down a little bit more in depth here. Let's say I have multiple tool chains: one team is utilizing the first set of tools that you see here, and a second team is utilizing another tool chain. I may be leveraging tools that are hosted within OpenShift, or they may exist outside of OpenShift, but I have the connection to them, as part of that coding-all-the-way-to-monitoring process. What we allow you to do is take the projects and applications that you have within OpenShift and group them together into value streams, right? So for example, we have PetStore, and PetStore may have multiple sub-applications, or may even contain multiple projects within OpenShift, that I can tie together. But what's nice is I'm bringing external tools into the mix as well, right?
Like my planning tools up top, like my JIRA, and let's say my operational tools, maybe some ChatOps tools, maybe ServiceNow. I can now start bringing them into the picture and start measuring information like plan-to-deploy, cycles to resolve, and deployment frequency, and I can start comparing this. One of the other interesting things is that I may have a value stream, or a tool chain, that exists totally outside of OpenShift, and let's say it's utilizing some legacy tools for deployment, or maybe some manual activity. And then I have a very similar tool chain, but it's utilizing OpenShift. I can now start measuring, and you can start seeing how much more efficiently the tool chain based on the Red Hat platform is actually performing, which is incredible, because now you actually have data to showcase to your organization and say: we need to put more investment toward this platform, because we're seeing the returns here, right? That's the measurement piece. So that's what's nice. What we do is collect that across the whole chain; as you can see from the orange, DLM brings in your planning all the way to your operations, including your Red Hat platform tools as well.

Once you're able to do that, what's nice is that I can have a dashboard at a high level that shows my planning all the way to my operations in their current state, right? So immediately I can start getting alerts, for example that OpenShift just successfully deployed. It looks like we had a little issue with Kubernetes; we could take a corrective action there, and you'll see in the demo how we do that to immediately remediate it. And I'm also getting information across my value stream that may not have a dependency on OpenShift, like an SAP error that occurred, but that's key for driving an SAP-connected, container-based application that I'm running within my organization.

Another key piece here is the rules. What's nice is that I can have rules based on events to drive alerts, right? So I can have alerts that say, hey, I want to know when a user story is blocked, right? If I have one build failure, no, that's not really too interesting, but if I have multiple ones, I probably want to be alerted so I can find out what's going on, right? And understand it better. I mentioned remediation-type procedures, or runbooks, right? We're going to showcase that. I may want to open up a service desk ticket. I may want to be notified over chat on any of the conditions of the rules I set, right? And I may want to know, for example, if I get a security alert from a security check; I want to escalate it, and I want to know about it in the context of my process, right? So based upon these events, I can set these rules. And this is where the knowledge sharing comes into play, because now I can start collecting best practices that I can use across my tool chains and value streams.

The other key piece of this is the metrics. Now that I'm collecting this data, I can start having holistic KPIs, like mean time to resolution, right? Because I'm now getting data from my planning, I'm getting data from my deployment, like from OpenShift, and I'm getting data from my ticketing system. So I can start having holistic reports and views across my entire organization, like dev cycles to resolution.
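To make the build-failure rule Eric describes concrete, here is a minimal sketch of that kind of event rule in Python. This is an illustration only, not DLM's actual rule syntax; the event fields and the raise_alert hook are hypothetical.

```python
from collections import deque
from datetime import datetime, timedelta

# Hypothetical illustration of the "multiple build failures" rule Eric
# describes: one failure is noise, several in a window are a signal.

WINDOW = timedelta(hours=1)   # look-back window
THRESHOLD = 3                 # how many failures before we alert

recent_failures = deque()     # timestamps of recent build-failure events

def on_event(event: dict) -> None:
    """Feed every normalized tool-chain event through the rule."""
    if event.get("type") != "build.failed":
        return
    now = event["timestamp"]
    recent_failures.append(now)
    # Drop failures that have fallen out of the look-back window.
    while recent_failures and now - recent_failures[0] > WINDOW:
        recent_failures.popleft()
    if len(recent_failures) >= THRESHOLD:
        raise_alert(f"{len(recent_failures)} build failures in the last hour "
                    f"for {event.get('value_stream', 'unknown stream')}")

def raise_alert(message: str) -> None:
    # In DLM this might notify chat, open a ticket, or run a runbook;
    # here we just print.
    print("ALERT:", message)

# Example: three failures within an hour trigger the alert.
t0 = datetime(2018, 5, 1, 9, 0)
for minutes in (0, 20, 40):
    on_event({"type": "build.failed",
              "timestamp": t0 + timedelta(minutes=minutes),
              "value_stream": "PetStore"})
```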
I can even start looking at costs, bringing in cost metrics, if I really want to understand how much this feature I'm working on actually costs me, and how much rework is costing me as well. That's information I can now have visibility into.

So what I want to do now is turn it over to Don, and he's going to do a walkthrough. What I want to highlight is that we have the Red Hat platform here, and I'm sure you're all familiar with this architecture. What we want to show is how we're able to link in your planning-type tools, right? And link them into that lifecycle for lifecycle automation. Then maybe you're linking in some other third-party integration tools that may already be tied in to OpenShift, or maybe external ones that are not, but we're still able to give you visibility into them. And then also linking into your container management: extending the great monitoring that you already have within Red Hat, within the OpenShift platform, out to external tools and bringing that information in as well. This is all about, again, quickly driving that idea-to-delivery pipeline through visibility and through action. So with that, I'm going to stop sharing and transfer it over to Don, who should be able to share his screen.

Okay, let me get into this. Let's see if we can make this work. Are you able to see my screen? I should be showing three OpenShift projects.

Absolutely, thank you.

All right, great. So like Eric said, I'm going to walk through an end-to-end demo where we go through an entire lifecycle for a particular feature, spanning a lot of tools. In this example, we bring in the OpenShift platform, of course, as well as tools outside it, such as JIRA, Splunk, ServiceNow, and those types of things, just to show how we can manage the entire lifecycle through ValueStreams. So here, around our sample application PetStore, I have three projects for managing the application, as well as various services. PetStore is the one that we'll focus on in this demo. Here, we can see that it's deployed; it's out to one pod right now. And this is the application that we'll be interacting with during this demo.

Now, Eric showed in his slides our ValueStream dashboard. We have the concept of ValueStream cards, where I can quickly see the ValueStreams, the solutions or services that I'm monitoring with DLM. And I can see high-level KPIs for those ValueStreams to understand how they're currently performing. I can also quickly see the health of those ValueStreams, both the ValueStream Health and the Activity Health. ValueStream Health means that my KPIs are staying within their thresholds or their SLAs. Activity Health means that I have maybe some alerts or some issues that have been identified, and they may or may not currently be impacting my ValueStream Health. So this is the ValueStream dashboard, these are the ValueStream cards, and the one that we'll focus on in this demo is the PetStore ValueStream. Now, I can click on that ValueStream card and quickly look at the activity going on in that ValueStream, such as the alerts that are being generated, maybe some informational alerts telling me what's taking place. So this is that console; Eric showed this in his slides as well.
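As a side note on the ValueStream Health definition Don just gave (green only while KPIs stay within their thresholds or SLAs), here is a toy sketch of that check. All KPI names, figures, and thresholds are invented for illustration; this is not DLM's data model.

```python
# Toy illustration of "ValueStream Health": green only while every KPI
# stays inside its SLA threshold. All names and figures are invented.

kpis = {
    # KPI name: (current value, SLA threshold, unit)
    "mean_time_to_resolution": (18.0, 24.0, "hours (maximum)"),
    "deployment_frequency":    (5.0,  3.0,  "per week (minimum)"),
    "failed_deployments":      (1.0,  2.0,  "per week (maximum)"),
}

def value_stream_health(kpis):
    breaches = []
    for name, (value, threshold, unit) in kpis.items():
        # "minimum" KPIs must stay at or above threshold, others at or below.
        ok = value >= threshold if "minimum" in unit else value <= threshold
        if not ok:
            breaches.append(f"{name}: {value} vs SLA {threshold} ({unit})")
    return ("green", []) if not breaches else ("red", breaches)

status, breaches = value_stream_health(kpis)
print("PetStore ValueStream Health:", status)   # green, matching the demo
for b in breaches:
    print(" -", b)
```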
And I can quickly see, in all the phases of my lifecycle (plan, code, build, test, etc.), exactly what's happening. Currently, things are all green; we've detected no alerts, so that's good. Eric mentioned rules; in that same DLM console, I can go into our rules console. This is where I can create and edit rules that will either alert on events flowing from the tools or take action: they can execute a workflow, execute a script, or call a REST endpoint, taking some kind of action based on events that we see coming out of the tools or out of OpenShift.

So to get started, we'll actually make a coding change. As you know, at the beginning of the lifecycle, in the planning phase, there's typically a tool such as JIRA which will outline what feature or fix needs to take place. We integrate with JIRA, and we have what's called a traceability view. Currently, it's blank, because we're at the beginning of the lifecycle. But at the end of this demo, we'll come back to this view, and what you'll see is all the events that were triggered that we correlated back to that user story.

All right, so I'm going to go into Eclipse and make a coding change, then commit and push it to a Git repository. That Git repository is already configured for the OpenShift PetStore project, so that anytime there's a coding change, it will detect that and automatically rebuild the project and redeploy it. This works with other source control systems as well, such as Subversion. I need to stage that change, and then we'll actually commit it. And when we go into OpenShift, of course, we will see that the build and deploy has started. So going back to the OpenShift console, we can see that the new build has kicked off. There we see the build is running; in a moment, we'll see the deployment start to take place.

Now, I've set this up so the project is currently resource-constrained, so during the deployment we're actually going to encounter a problem. You'll see that it got stuck deploying to the new pod. Kubernetes detects that problem, and we automatically remediate it, but it also populates up to our console that there was a pod stuck in pending state due to insufficient resources. If we look at the alert details, we actually bring in the knowledge base from Kubernetes to show how it can be resolved; of course, we're going to auto-remediate that, with DLM calling Kubernetes to perform those steps. We'll call the kubectl commands and so on, which will then allow the deployment to continue. So if we go back to the OpenShift console, we'll see that it did in fact deploy out to the new pods.

Now we've got our new build deployed. Since it completed successfully, we automatically resolved the prior alert about the Kubernetes error, and we see that there was a successful deployment to the OpenShift environment. That successful deployment event also matched a rule in DLM to kick off a workflow, or script, to execute security scans against that new deployment. Here we're using ZAP, which is an open source tool. Diane mentioned Aqua Security earlier; we're currently integrating with Aqua Security as well, along with some others like Contrast Security. So we actually kicked off the security scan once we saw a successful deployment.
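For readers who want to see the underlying signal Don's remediation reacts to, here is a rough sketch using the official Kubernetes Python client: it watches for the FailedScheduling events the scheduler emits when a pod is stuck Pending on insufficient resources. The namespace and the remediation hook are assumptions; DLM's own integration isn't shown in the talk.

```python
# Rough sketch: detect the condition Don demonstrated (a pod stuck Pending
# because of insufficient resources) with the official Kubernetes Python
# client. Namespace and remediation are placeholders, not DLM's integration.
from kubernetes import client, config, watch

config.load_kube_config()          # or load_incluster_config() inside a pod
v1 = client.CoreV1Api()

def remediate(pod_name: str, message: str) -> None:
    # Placeholder for the corrective action DLM performs via kubectl or the
    # API (e.g. freeing capacity so scheduling can continue); we only log it.
    print(f"remediating {pod_name}: {message}")

w = watch.Watch()
for item in w.stream(v1.list_namespaced_event, namespace="petstore"):
    ev = item["object"]
    # The scheduler emits FailedScheduling events with "Insufficient cpu" or
    # "Insufficient memory" messages when a pod cannot be placed.
    if ev.reason == "FailedScheduling" and "Insufficient" in (ev.message or ""):
        remediate(ev.involved_object.name, ev.message)
```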
We can go into the details for one of those, such as the ZAP scan report. There you see a URL we can click on, and here we can see all the security alerts, the exceptions that were detected by ZAP, and look at the report as well. It's also very easy within the DLM console to drill into a certain area. So, for example, deploy has an informational alert, so I can just click on deploy and see those alerts; I can click on security, which has turned red, and see the security-related alerts.

Now, after that deployment we also told AppDynamics, which is an operational tool, to start monitoring that deployment from an operations perspective. Next I'm going to generate a 404 error, a page-not-found, which will be detected by Splunk. I'll do that by just typing in an invalid URL. Again, Splunk is an IT ops tool; IT ops is probably the one using it. If we go into Splunk, we can see the 404 error that we just generated; there it is, for the shop dogs bad URL. And if we go back to DLM, we will see, as part of our value stream monitoring, that ops detected a problem with our application. If I refresh, we see other events came in as well, but we can see the 404 page-not-found error. When that event came in, it matched a rule within DLM to create the alert, and we also created automation to automatically open a service desk ticket in ServiceNow. So if we go into ServiceNow and look at the incidents created, we'll see that there is in fact an incident created for our 404 error.

Okay, so we're also monitoring other events coming from OpenShift. For example, autoscaling is configured for our project, and OpenShift detected that memory or CPU exceeded their thresholds; you see it just scaled up from one pod to four pods. If I refresh, we see that we got the notification that it was automatically scaled to four pods. When it scales back down, we can notify about that as well, among other things. So the point is, we're monitoring logs from within OpenShift as well as events coming from the OpenShift platform, and again, I can drill into the areas I'm interested in.

Now, if we go back to that JIRA user story (remember the blank traceability view I showed you earlier), we can now see all the events that occurred as part of the lifecycle we just went through. We see the user story, we see the commit that took place, the build, the binary, then the deployment, and then we see some health events at the bottom that were generated by the monitoring tool.

So we've talked about automation, and we've talked about monitoring and alerting. The third thing that Eric talked about is metrics: we're collecting those events so that we can also report on them. Here we can look at our reporting dashboard, and we see mean time to resolution and dev cycles to resolution. I can also click on financial analysis, and this will give me some of the financial reports that Eric talked about, so I can see how much iterations are costing me based on the hours and the sizings of those iterations. I can also compare one release to a prior release, and those types of things.

Okay, so that concludes the demo. Eric, did you have anything that you wanted to wrap up with?

I do have a quick summary slide; I can just share that again.

That would be great.

So I'll just do my share screen again here. I should be the expert at this by now. Okay, you should see a summary slide.

We do indeed.
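One plausible way to script the ServiceNow step Don's rule automates is ServiceNow's standard Table API. A hedged sketch follows; the instance URL and credentials are placeholders, and the event wording is invented.

```python
# One plausible way to script the step Don's rule automates: opening a
# ServiceNow incident over the standard Table API when an ops event (here,
# Splunk's 404 alert) comes in. Instance URL and credentials are placeholders.
import requests

SNOW_INSTANCE = "https://example.service-now.com"   # placeholder instance
AUTH = ("integration_user", "password")             # placeholder credentials

def open_incident(short_description: str, description: str) -> str:
    resp = requests.post(
        f"{SNOW_INSTANCE}/api/now/table/incident",
        auth=AUTH,
        headers={"Accept": "application/json",
                 "Content-Type": "application/json"},
        json={"short_description": short_description,
              "description": description,
              "urgency": "2"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["result"]["number"]   # e.g. "INC0010042"

number = open_incident(
    "404 detected on PetStore /shop/dogs",
    "Splunk reported an HTTP 404 response on the PetStore application.",
)
print("opened", number)
```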
Excellent. Okay, so, three things. I've always been told that whenever you do a presentation, you should really focus on three things, so let me end with the three things. The first: we talked about the importance of accelerating the DevOps idea-to-delivery pipeline. This is essential, right? Especially with the drive now for customers to see very competitive new features, capabilities, and releases coming out; the days of taking three or six months to release are over, right? And OpenShift now makes it a lot easier, especially when I'm moving into the newer deployment technologies like containerization. The key here is removing those manual tasks and operations (again, OpenShift does a great job of that) and being able to quickly identify and remove bottlenecks, and this is where the DLM product comes into play along with OpenShift.

The second is visibility across the plan-to-ops tool chain. That's very important, to be able to look at this holistically. So it's important to be able to integrate, measure, and monitor your tool chain for events, and from there you can start giving visibility through reports and scorecards like what Don showed, and the analytics pieces as well.

And the last piece is that continuous feedback and improvement by measuring: I'm moving from path A to path B, and I'm able to correct very quickly based on the information I'm getting back. We showcased this with the autoscaling capability that OpenShift has, and we showed how DLM also brings in additional information for alerting across your plan-to-ops tool chain. And of course, now that I have this information, I can start putting more automation into play, and that's how I can drive additional value as well.

So there's more information on the DevOps Lifecycle Manager product; I have the link here if you want a little bit more detail, or to trial it, just go to our site. And with that, I will turn it over for questions. Thank you, Diane, for having us.

Thank you again. I'm really thrilled that you came and did this, because it's been very interesting and enlightening. I've never actually used a DLM tool myself, and since I'm at Red Hat and I'm always espousing new technologies, and I constantly see new ones (like your ZAP security is new to me, and Aqua Security), I've always wondered how enterprises deal with all these, you know, not disparate but separate tools, and all the incoming messaging and alerts and everything. It always seemed to me that you pick your favorites, like Splunk or whatever it is, and just stick with them. But this whole DLM umbrella process is awesome, and I hadn't seen it demoed as successfully on OpenShift as I have today, so I really appreciate that. The question I have is this: because there are so many tools out there, what is the process like when a company brings in yet another tool for managing some bit of the lifecycle and you need to add it into your DLM offering? Like Don mentioned, you're working with Aqua Security; every time I go to a conference, I meet at least eight other new packages and things that people want me to use. So how do people go about getting new and other things integrated into this?

Don, do you want to take that one?

Yeah, I'll be happy to, Eric. So most of the newer tools support webhook
functionality, where we can go into that tool and just configure it to call a webhook anytime it generates one of its events or alerts. What we have to do on our side is create what's called a transformation, so that we can take that raw data, that raw event from that tool, and transform it into an event that we understand within our platform, within our rules engine and things like that. So most of the modern, newer tools support webhook-type functionality. Almost all of them support some kind of REST API, where we could at least periodically poll and query for events that have happened in the last 30 seconds or whatever; in that case, we have to create more of an adapter-type integration. But what we're finding with today's tools is that almost everyone supports webhook functionality, so the bottom line is it's very easy and quick for us to support new tools.

That's awesome. It really was a great overview for me, and I'm sure for the rest of the audience, because there were no other questions. You know, the other thing that was really impressive was when you popped up the Kubernetes knowledge base messaging; that's probably the first time I've seen someone do a decent job of making those pop up at the right moment. That was great. So I really appreciate you guys taking the time today to walk through this and become part of our community, and we look forward to seeing more features and seeing you at upcoming events as well. When you have another release or another offering that you'd like to walk through, please reach out, and I'm sure there will be other folks who are interested. If people want to get hold of you guys, what's the best way to do so?

The best way is through the links here, our link information here, if they go in directly. I think I'm still sharing; you see that there?

Yep, yep, that's the best way.

Go to our site, and there we have ways you can directly connect to me for questions if you like.

Okay, perfect. Eric, thank you very much, thank you Don, and thank you Franco for being here this morning. This webinar will be up on blog.openshift.com probably in a day or so, and we will get these links added in as well. So thanks again, and we look forward to hearing more from you.

Thank you.
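As a closing illustration of the webhook-plus-transformation pattern Don describes in his answer, here is a minimal sketch of a receiver that maps a raw tool payload to a normalized event. The endpoint, field names, and the Jenkins-style payload are all invented for illustration; this is not CollabNet's actual transformation API.

```python
# Sketch of the onboarding pattern Don describes: a tool posts its raw event
# to a webhook, and a per-tool "transformation" maps it to the normalized
# event the rules engine understands. Field names on both sides are invented.
from datetime import datetime, timezone
from flask import Flask, request, jsonify

app = Flask(__name__)

def transform_jenkins(raw: dict) -> dict:
    """Example transformation: raw Jenkins-style payload -> normalized event."""
    return {
        "type": "build.failed" if raw.get("result") == "FAILURE"
                else "build.succeeded",
        "source": "jenkins",
        "value_stream": raw.get("job", "unknown"),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "detail": raw.get("url", ""),
    }

TRANSFORMS = {"jenkins": transform_jenkins}   # one transformation per tool

@app.route("/webhook/<tool>", methods=["POST"])
def receive(tool):
    transform = TRANSFORMS.get(tool)
    if transform is None:
        return jsonify(error=f"no transformation registered for {tool}"), 404
    event = transform(request.get_json(force=True))
    # Hand the normalized event to the rules engine (see the rule sketch
    # earlier in this transcript).
    print("normalized event:", event)
    return jsonify(status="accepted"), 202

if __name__ == "__main__":
    app.run(port=8080)
```

The REST-polling fallback Don mentions would swap the route for a scheduled loop that queries the tool's API and feeds the same transformation.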