Hopefully they'll log back in here. So Tom, there are 37 people anxiously waiting for your first slide; how about we get started, even if a little bit late? I really want to thank Tom and the folks at Shippable, Avi and Tom, for coming today and giving us this talk on CI/CD workflows using Shippable on OpenShift 3, which I think will be very interesting for everyone. So Tom, I'll let you introduce yourself. Thanks, Diane. Like Diane said, my name is Tom Trahan. I'm from Shippable. Shippable is located here in Seattle, and also on the call today I've got Avi Cavale, our CEO and co-founder, who'll join us in the Q&A session, as well as Ragesh Krishna, one of our lead engineers. So just a quick introduction to Shippable. We're a company that was founded in 2013 by Avi and Manisha. We've got 24 people. We're based in Seattle, and we have a development office in Bangalore as well. We have a fairly healthy customer base at this point, with almost 50,000 developers using the Shippable CI/CD product in the cloud, covering 8,000 organizations. Tom, you've gone quiet. Oh, have I gone quiet? Is it not coming through very well? Yeah, it's coming through. Okay, I'll keep going here. So the purpose of Shippable is to make the complex problem... Don't worry about it. Keep going, Tom. Okay, got it. ...of building, testing, and deploying software frictionless. The goal of Shippable has always been to help enable software delivery: to make it as automated, as efficient, and as fast as possible, to discover bugs early in the process, and to enable software changes to make it all the way into a usable state. The challenge, of course, is that it always starts with a fairly simple problem when you're talking about software change, which is: how long does it take to make a small change to an application that you're supporting?
But it becomes a more complex problem when you extend the question to ask: how long does it take to make a small change and get it to the customer so that they can use it? And then, of course, it becomes an exponentially bigger problem when you ask how difficult it becomes as you add more developers to your application team, more components in your application, and more environments that you're going to manage and move that software change through as you validate throughout your entire software delivery lifecycle. So for Shippable, we exist to help solve that problem. It's really about helping you manage across the end-to-end lifecycle that you see on the screen right here: from source code, and from a change in source code, through all the validation steps necessary to merge that code into the master branch; to perform unit testing and continuously integrate that code back in; now, with the advent of containers, to generate container images and easily access those images for use later; to deploy those into dev and test environments where full functional testing on full-topology applications can occur; and then ultimately, of course, to promote those changes all the way through to the production environment. And so we now call this DevOps 2.0. I think everyone's familiar with the term DevOps. It's been a very popular topic for multiple years now, and rightfully so, because it's ultimately been attempting to solve the problem of how you make this workflow as efficient as possible. And some of the most inefficient elements of that workflow have to do with the many handoffs and the many various tool sets that exist across this lifecycle.
When you talk about development activities, deployment activities, testing activities, and operational activities, you end up introducing a lot of complexity: complex process, many handoffs, many roles, and many different tools involved in that process. So for us, DevOps 2.0 is a concept that takes the great objectives of DevOps but says that now, with the advent of container technology, and if you look at the technologies that OpenShift 3 is bringing to the private PaaS, you can truly automate this workflow with a lot more efficiency and a lot less effort than ever before. And so for us, it's about enabling these end-to-end workflows for continuous integration, testing, and deployment of code changes, really getting to the point where a one-click code change all the way through deployment into your production environment becomes a reality for software delivery teams. Ultimately, to achieve that type of speed, it needs to be as easy as possible and usable by as many people as possible. And so we believe DevOps 2.0 ultimately requires that you can do DevOps without DevOps code, without a separate set of languages or tools or infrastructures that you need to manage or learn in order to do DevOps. Put simply, we think DevOps is evolving. It's going from that concept where folks talk about it as a cultural idea, that we just need to get disparate teams within that value chain to come together and collaborate more effectively, to the idea that those teams need to be enabled properly. With containers, and with the decoupling of infrastructure, this really can happen: the dev side of DevOps comes to represent the application delivery team, and that doesn't necessarily mean only developers.
That's anybody involved in application delivery. It really means that you're focused on continuous delivery without separate DevOps code. To do that, you leverage smart containerized environments that are aware of changes throughout the lifecycle, execute the various validation activities, and promote through the lifecycle as long as things proceed as planned. From an operational standpoint, the focus then separates out to managing the decoupled infrastructure, and obviously with OpenShift 3 you've got this clear concept available in the platform, with the Kubernetes layer abstracting the application layer away from the underlying machine hardware. And so Shippable CI/CD, our product that runs on OpenShift Enterprise 3, is, we think, a pretty compelling offering. It makes setting up these workflows very easy to configure and very easy to start: continuous integration, validation, unit testing, integration back into your source code repo as part of that pipeline, as well as the deployment activities into your test environments and ultimately all the way into your production environments. For us, it's about extending the value of OpenShift with these workflows and the various tools that we've developed over multiple years in our CI/CD solution in the cloud, and enabling anybody who is adopting the OpenShift platform to truly adopt the platform, rather than maintaining a separate set of CI/CD infrastructure tools outside of that platform that have to be managed independently. And lastly, as you heard me mention, it really is about being able to do all of that without writing separate DevOps code. So briefly, some of the details we'll go into, and which hopefully you'll catch as we run through the demonstration: Shippable brings native continuous integration to the OpenShift platform.
So your CI system, the integration of your code, your unit testing, and your build processes in order to test and merge your code back into the master branches of your repositories, all happens on the OpenShift platform. The Shippable product provides end-to-end traceability and history, so that you've got full visibility into everything that's happened throughout that pipeline. We make it very easy to set up the end-to-end workflow via our UI. And we have made it so that you can establish environments within your software delivery pipeline that match the SDLC that you currently use and are familiar with. We try not to be too opinionated here and tell anybody how they should run their IT delivery processes or their software delivery processes, so we make it possible to flexibly configure the system to match the way that you work. And then we make it so that the act of automating that pipeline, and initiating activities in that pipeline, becomes fully automated or executed with a single click inside the user interface. And lastly, you heard me talk about the fact that when you adopt the OpenShift platform, you're doing so to eliminate a lot of non-value-add activities and to bring more and more of the tools necessary to develop and operate software into a single platform. We think it makes perfect sense that you do the same thing with your CI/CD infrastructure as well, and not maintain a separate one outside the system. So just a quick picture (oops, my slide builds here are a little off) of what the all-up solution looks like. You heard me talk about that workflow from source code through CI/CD, through the image registry, through the dev and test lab environments, plus the production environments. This is that flow that gets implemented, and it is, like I mentioned, running entirely on the OpenShift platform.
This one-page slide will be available after the presentation, and I believe it becomes a good one-page reference for you as you think about how to set up these deployment pipelines; keep it in mind in terms of the demo that we'll be going through here shortly. So in closing, before we jump into the demonstration: if you were going to wrap it all up into a single summary statement, what Shippable CI/CD does is bring seamless deployment pipelines to OpenShift Enterprise 3 without writing any DevOps code. With that, let's head into the demonstration and get a hands-on view of the Shippable product running with OpenShift. You'll see me bouncing back and forth between a few windows here in the demonstration. What you see here is my OpenShift console, and you can see the projects that I've got running out there. This one here we'll be jumping into a little later; it's my OpenShift Enterprise 3 demo project, connected and integrated with Shippable. We'll also be jumping into a sample application. We're going to walk through the end-to-end flow of what it looks like to make a change in software and promote that change all the way through the system, and we'll be spending some time in the Shippable user interface as well. So when we jump down here: typically, when I come into Shippable for the first time, I'm going to access it via the authentication of my source code repository. In this case, I'm going to go ahead and join with my GitHub credentials, and I'm taken to the main dashboard page here. You can see on the main dashboard page that the last activity within my organization, and all the organizations that I have access to, all appears right here in front of me. So what you're looking at here is a place where you can see the last activity related to your continuous integration builds and tests.
You can also see that you can then quickly access the other elements of the system in terms of that flow. The way we think about it within Shippable is that there's the front half of the flow, which is everything around preparing code to be deployed, and then there's everything around the act of deploying that code and then operating that code. The preparation piece happens for us on the front half of that workflow, within the CI portion: between the source code repository, the CI/CD functionality of Shippable, and the image registries. The formation side happens on the back end of that workflow, and that is connecting image registries with your dev and test labs or environments, and ultimately your production environments as well. So just to give you a little bit of a sense of where we are right here, before we jump into the end-to-end workflow: I'm going to narrow this down and look just at my personal GitHub account. You can see here that I can have access to multiple organizations within the CI system. So I've got access to all of the Shippable activity and all the repositories that Shippable manages, and I've also got access to my own personal forks and personal repositories here within the system. I've got a demo application that you saw me jump to a little bit before, and it's an application for which I've got two repositories out there in my GitHub account. I've got one called radar WW2, and looking here, you can see that the last time I ran a build was 13 hours ago: I had checked in code, and it had triggered a CI build to go validate that. We'll drill into that in a few minutes here and see some of those details. One other example I'll show you here is a different project that I've got that actually runs a matrix build on continuous integration.
And you can see that there's a lot of flexibility in the types of tests and the types of CI activities that you can run. In this particular example, you can see that I was testing my application simultaneously against multiple versions of Node.js in order to validate that it works. It ran each of those tests against that check-in when I made the code change, and came back with a success for each of them. So let me go ahead and drill down a little bit and show you some of the other details about this build. If I go in and take a look at this build that ran yesterday when I checked in this code change, you can see that everything about the build is, of course, available to me here on the screen. I can see all the activities of it, and I can monitor, as it happens or after the fact, all the activities that occurred in the processing of that build. You'll notice, at a quick glance, that this build does quite a few things. It not only builds the code and executes tests against it; it's obviously syncing with the latest code changes that I've made in my repository, doing a variety of activities that I've instructed it to do prior to executing the tests, and doing a bunch of installations inside of my build container. And it goes all the way through: you can see here some language that clearly indicates that I'm building images and pushing those images out to Docker Hub in this case as well. One of the things that's really important to understand about the CI system from Shippable is that it is fully containerized, which is why it runs natively on OpenShift. Every time you commit a code change into a repository that's enabled for CI on the system, whether it's you or anybody in your organization, the system is going to be notified via a webhook from that repo.
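A matrix build like the one described here is typically driven by the project's shippable.yml. The demo project's actual file isn't shown in the session, so the following is only an illustrative sketch; the field names follow the classic Travis-style format Shippable supported at the time, and the version numbers are made up for the example:

```yaml
# Illustrative shippable.yml sketch for a Node.js matrix build.
# The demo's real config isn't shown; treat keys and versions as assumptions.
language: node_js

# Each entry gets its own build container, so the same commit is
# validated against every listed runtime in parallel.
node_js:
  - "0.10"
  - "0.12"
  - "4.2"

# Commands run inside each build container.
install:
  - npm install
script:
  - npm test
```

With this in place, a single push triggers one build per listed version, and each must succeed for the overall build to be reported green.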
It's going to be notified of that change, it's going to immediately spin up a build container, and then go about the business of preparing that build container to fully test that code change. That's really what you see here: the end result of all those activities that are happening in the background. I can also go back in; for this particular build I don't have any tests set up (that was a bit of a different one), but I can look at the tests that were executed, and I can see code coverage in there as well. And I can also see the entire actual execution script that's happening behind the scenes for my build container itself. So let's go back and start the process of looking at a build from end to end. We're going to make a code change, watch that code change kick off inside of the CI system, take a look at what's happening in that system as it occurs, and then follow it as we promote it all the way through into deployed, full-topology environments where it can be tested as well. So I'm going to jump back out now to my source code. Here's that application that we were just looking at, the radar WW2 application, and I'm going to make a real simple change to the application. If we jump back here, you can see that this is just a simple demo app that goes out to GitHub, queries a repo, and brings back a summary list of the issues that are currently associated with that repository in GitHub. This application currently has the shippable/support repo as the default repository that pre-populates in this field. But let's say I wanted to change that. So I'm going to go out here, change my code, save it, and commit that code. There you can see that change I just made.
I'm going to go ahead and commit that code into my GitHub repository. As soon as I sync and push those changes out to my repo that is enabled for Shippable... actually, it beat me to it. It's already triggered the build over here on Shippable's dashboard, and you can see it's getting ready to start processing. And now it's processing. So now I've got a build running on the system based on the fact that I just submitted a change into my source code repository. And if I drill into that, you can watch those details that we just saw in that previous build as they occur inside the process. Now, one of the things that I'll point out while this is running is the various options that you have here to set up inside your repository for your particular project. For this radar WW2 project, you can see I've chosen to use a Docker build process, so it is using a Dockerfile from within the repository in order to build the image that it will use for the build process. I could have also chosen not to use a Dockerfile, but instead to use instructions from my Shippable YAML file in order to create that image. And I could also have pulled in an image, perhaps a custom image that I've already prepared outside of the system, to use for the CI process. We also have options and functionality here for determining how you'd like to use Docker build in that CI process: pre-CI and post-CI. Ultimately, the way to think about this is that we have quite a few customers who tell us they want exactly what they test to be exactly what they promote through their various environments. And so the image that gets built and tested is the image that then gets moved throughout that workflow.
We have other customers who have come to us and said that they want to execute all of their tests, but before they then create an image to promote through the rest of their environments, they want to strip all the test artifacts out of the image and make it as lean an image as possible before they start to promote it through the rest of the process. That's really the difference between pre-CI and post-CI. One thing I wanted to show as well: let me jump back out here to my OpenShift console and, assuming that build hasn't completed yet, you can see that it has spun up a project on OpenShift, and it's now running inside of a container on the OpenShift platform itself. As you'd expect in any kind of CI system, these are transient workloads. It created this project when the build got initiated, and it will spin down and delete all of these artifacts once the build is complete. All the results of the build will, of course, be stored inside the Shippable database, so the history, the tracking, and the traceability to go back and review results and trace any details related to this particular CI run are always going to be available to me. But the artifacts on OpenShift themselves are transient workloads; they spin up and spin down as needed. In this particular case, I just made a single change to a single repository and it's running a single build, but of course this will run parallel builds as well and spin those up simultaneously. Let's jump back over here and finish taking a look at the options you've got available to you on the CI system. Besides using Docker build to create the image that I'm going to use for my CI testing processes, I can also decide, at the conclusion of that run, to push that build.
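The pre-CI versus post-CI distinction can be pictured with a hypothetical config fragment. The key names below are illustrative, not Shippable's exact schema (in the demo this is toggled in the UI); the point is only where the Docker image build happens relative to the tests:

```yaml
# Hypothetical fragment: where does the Docker build happen?
# (Key names are illustrative, not Shippable's actual schema.)

# Option A, pre-CI: build the image from the repo's Dockerfile first,
# then run the tests inside that same image. What you tested is
# byte-for-byte what you promote through the environments.
build:
  docker_build_strategy: pre_ci

# Option B, post-CI: run CI in a standard build container first, then
# build a lean image after the tests pass, leaving test artifacts and
# dev dependencies out of the image that gets promoted.
# build:
#   docker_build_strategy: post_ci
```

The trade-off is fidelity versus image size: pre-CI maximizes confidence that the promoted artifact is the tested artifact, while post-CI produces a smaller image at the cost of a second build step.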
That is, essentially, create that image and push it to an image registry. In this case, I do have that option turned on, so at the conclusion of this CI run, we are going to see that a new image has been pushed, to Docker Hub in this case, because that's the location that I've specified here. Now, of course, with OpenShift, I can push that to my OpenShift private registry. I can also push to Docker Hub. Essentially, you can push this to any image registry that you'd like to use. The last option I've got here, called Lighthouse, is essentially an image watcher. It lets me sign up to be notified of changes to images that I care about, and those could be images that I create. In this case, I'm actually telling the system that the image that gets produced as a result of this... Tom, this is Diane. You've lost your screen sharing capability. So if you could try and share your screen again; we all want to watch that transient image on the console disappear. Yeah, let's see here. Okay, BlueJeans seems to be having... There we go. Loading the presentation. Coming back. Coming back. Now just click on the screen itself, on the Shippable window. There you go. Perfect. Good. Okay, thanks. I think we still had audio there, so I won't repeat any of the things I just said; I'll just keep going from that last point. So Lighthouse allows me to watch images that I care about in the registry. In this case, I'm saying that after you push this image to the registry, I want to be notified; anybody related to this project within my CI system, I want them to be notified of the fact that an image has changed in that repo. In this particular case, this happens to be for the images that we create out of this process, but it could be for any images in any registry: public images, private images, et cetera.
And then lastly here, you can see that I'm able to specify where I push this and decide how I want to tag my images before they get pushed into my registry. In this case, I've got this set up for default tagging, which means it's going to tag that image with information related to the CI process that's running; in this case, it'll tag it with the branch and the build number that has been executed. You can see here as well, as part of my CI process, I've specified the base image that I want to start from, which in this case is an Ubuntu 14.04 Node.js image. These other details down here I think we'll skip over for now, but ultimately they just let you set up other external integrations so that your process can interact with external services like Docker Hub. So if we head back over here into the Shippable system, we're going to see, first of all, that the build has already completed. We can go take a look at the details of this one; we can see it took five minutes to run the full end-to-end process. If we head back over to OpenShift and refresh this page, like I mentioned before, that build project has now disappeared from the OpenShift system. And if I drill into that particular run, again, just like we saw before, I've got my full history here and I can expand out and see any of the details about what was happening in that process. Obviously that becomes much more important if you run into any problems in your build; if there's a failed build in any way, you'll see exactly why it failed. I believe, for example, I should have some failed builds out here as well, and we can dive into one of those. Troubleshooting where the issues happened becomes fairly easy, because I can see exactly where in the process this build failed.
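In the demo, these push and tag settings live in the Shippable UI, but the effect of default tagging is easy to state: the pushed image gets tagged with the branch name plus the CI build number. A hypothetical fragment (registry, repository, and key names are made up for illustration):

```yaml
# Hypothetical fragment describing the push/tag behavior set in the UI.
# Registry and repository names are illustrative.
push:
  registry: docker_hub           # could equally be the OpenShift private registry
  repository: myorg/radarww2     # hypothetical image path
  # Default tagging combines branch and CI build number, so build 55
  # on the master branch yields the tag "master.55".
  tag: "{branch}.{build_number}"
```

This is why tags like master.53 and master.55 show up later in the deployment environments: each tag traces a deployed image straight back to the CI run that produced it.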
And here I can see I had an error when it came time to push into my registry, so I can start my troubleshooting there. In this particular case, it was exactly the information I needed to go figure out why this wasn't working and make the necessary corrections. And you can see that since it was corrected, it's been running successful builds ever since. Oops, maybe we lost the screen share again, it looks like. There's one question here, Tom, while you're pausing. Okay, great. Sorry. Well, if it's okay, I'll finish going through the demonstration, then we'll come back and get all the questions that have been coming in as well. That's fine. Okay, good. So, a bunch of things have just happened there in that workflow that we talked about before: you saw me make a source code change and push it to my repository; it automatically triggered a build on the system; it ran through all the CI activities that I specified I wanted to occur; and it completed, telling me it was a successful completion. Now, the next steps in the process are the second half of the workflow, where I can deploy that image off to different environments in order to work with it. So if I jump over here, I can see I've got a formation set up; I've got multiple formations, and all the ones that I have access to are here. A formation is essentially a multi-container environment where I can deploy any number of services or pods and network them together in order to leverage them as full applications, essentially full-topology applications in a multi-container environment. And in this particular formation, you can see you can control access, and you can control the resource capacity of those containers for your teams.
And so for this particular formation team, I've got 10 containers available. Within it, you can see some of the key concepts that come into play once you start to deploy your applications and these images out into full-topology environments. I can quickly scan and see the status of the various services that are running in my applications. I've got multiple environments set up here. I've got an environment called all-in-one, which, if I expand it out, you can see is actually running two containers, both in the same service. That demo application I showed you earlier is a two-tier application: it has a web front end, which is the repository that we've been working with, and it has an API service on the back end that essentially manages all the page feeds and the data. In this particular case, I've deployed those as a single environment running inside of a single service, and of course, over on OpenShift, I would see that as a single service with a single pod with two containers running inside of it. This one over here shows how many instances of that service and those pods I've got running; in this case, I've got one running for this particular service. This other example of an environment that I've set up here is my dev environment. In this case, I've deployed each of the components of the application as separate services. If I expand those out, you can see that they each contain just a single image. My web service contains my radar WW2 image. In fact, we can already see that master.55 has been deployed here, which, if you remember back to our demonstration example, 55 was the last build that we ran, the one we triggered with that source code change. So in fact, you've already seen the end-to-end flow occur in two different ways. For the all-in-one environment here, it's still running master.53.
I don't have auto-deploy set up on that particular environment, but on my dev environment I've got auto-deploy enabled, and it has in fact already deployed the image that was created out of that CI run for master.55. We've now got a working, updated piece of code running in my environment. In fact, let's take a moment right now to jump back out to that application and refresh it. And we should see... there it is. We now have our new code running in that environment. This happens to be the dev environment that I'm pointing to. You can see I'm using environment variables here, which are configurable, to change the behavior of my applications, and that change is already deployed and available to me in that environment. So let's take a quick look at some of these other examples here. This particular example, called Dev Single Component, is a third environment that I've established. In this environment, I've only deployed a single service, but in fact it's still a complete working application, because, if you take a look here, the back end that I'm leveraging for this particular environment is actually pointing to the back end of my test environment. I've got an API service down here running in my test environment, running on port 31767, and you can see that this web front end is actually using that. Internally here at Shippable, we use this concept very frequently. Our developers, when making changes to a component of the system, will deploy into a single-service environment that points to all of our official beta integration-test components for all the other components it depends on, where they can do full integration testing and functional testing on the entire app before they actually promote it into the test environment for other teams to leverage as the latest and greatest component version to test with.
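The single-service wiring described here, one web front end pointed at the shared test environment's API, comes down to environment variables. A hypothetical docker-compose-style sketch (image path, host, and variable names are assumptions; in the demo this is configured through the Shippable UI, and only port 31767 comes from the session):

```yaml
# Hypothetical sketch: one web container wired to the shared test API.
# Image path, host, and variable names are illustrative.
services:
  web:
    image: myorg/radarww2:master.55          # hypothetical image path
    environment:
      # Point this front end at the API service already running in the
      # test environment (31767 is the port shown in the demo).
      - API_URL=http://test-env.example.com:31767
      - LOG_LEVEL=debug
```

Because only the environment variables differ between environments, the same image can be promoted unchanged from dev to test to production.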
And then lastly, you can see here I've got my test environment. It's actually configured exactly like the dev environment up here, with the exception that I don't have auto-deploy turned on, and I'm running multiple instances of my web front end: I've got three web front-end instances running. You can see the prime difference being that I've got some different settings set up for this environment; I'm running error logging here instead of the debug logging that I was running in the dev environment, and I've got different versions of my components running back there as well. The thing that I want to jump to right now is to show you very quickly how I can use auto-deployment or, in fact, do deployments on my own whenever I want. So let me jump into my all-in-one environment. We saw before that it was running build 29 of my API and build 53 of master, and we know that we've now seen builds 54 and 55 come out. I can also see all my deployment history, and I can decide to roll back easily to a previous deployment; if I want to go back to when I was on 35 and 29, I can just go ahead and roll back with a single click. Alternatively, I can choose to redeploy and deploy a new version. I can, for instance, come up here and say that on the web front end I want to go up to the newest build that came out, master.55, and take the latest API version that has come out as well, and go ahead and deploy that out into the environment. And because this is a container-based environment, you're going to see those changes happen very quickly. In fact, the old containers and the old services have already been stopped and replaced.
If we jump back out to my OpenShift console and go into the project where I'm running all the services related to this particular formation, we can see all of those endpoints running here. The latest one, the one we just changed and deployed, is this one here that was created a few seconds ago. You can see it's the one running two different images within the same service, and it's already refreshed and running out there. That's this one over here, and if I refresh it, I can see I'm now accessing the latest and greatest code there as well.

So I want to take a quick break here. It looks like a lot of good questions have come in on the chat, so let's open this up for Q&A for the last 15 minutes and make sure we get to everybody's questions. I'm going to unmute, and Avi should unmute himself as well, if he could.

Lee Calcote has asked a couple of questions, and Avi has answered them in the chat, but I think they're worth asking out loud so they get recorded and people who aren't watching the chat can hear them. Let me scroll back up to his first one: for those organizations working with on-premise deployments of GitHub Enterprise, Docker Hub Enterprise, or OpenShift Enterprise, will Shippable integrate with non-public, cloud-hosted instances?

Yeah, this is Avi. What we're showing in the demo is an actual running service, so it's externally connecting to all the cloud services. If you want to run this completely on premises, we support anything that runs on premises. The limitation right now is that it has to be Git-based. We are working on other types of source control, but predominantly what we see is that our customers want Git-based workflows, so that's what we support.
We can support any kind of registry, as long as you have credentials to connect to it and a URL that's reachable from whatever service OpenShift is running on. We should be able to connect to that registry and manage it pretty much entirely on premises.

And with the other half of the question, you answered my question about private registries, and I was pretty sure you had that handled because it was in an earlier slide. Let's see the other part: I think the question was, if folks are using external integrations such as a Selenium grid, or other external pieces doing scale-out testing, whether we support webhooks to and from those external pieces.

So, all of our actions are completely event-based; internally, our system works entirely on webhooks too. We can expose those webhooks externally as well, so we can hand control out to any external service, push a webhook out, and then wait for a webhook back from that system to continue the process. It's a very flexible system: just the same way GitHub gives us a webhook and we push information back to GitHub in terms of your build status and so on, we natively support external webhooks as well. A lot of folks use this to deploy to external targets like DigitalOcean, or they have Ansible workflows sitting on some server, or maybe Capistrano or something like that, and we call into it and it returns a webhook back. So that's possible too.

I'm looking to see if there are any other questions. I think the other one was about rollbacks, but that was a psychic question, because it came in just as Tom was about to show how to do a rollback. So that was great. The one other thing I would ask you to talk about a little bit: if people want to get started using Shippable on OpenShift 3, where should they go? What's the best jumping-off point? So I think right now... Tom, sorry.
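The webhook handoff described above follows a simple pattern: the pipeline pauses, hands control to an external system via an outbound webhook that includes a callback URL, and resumes only when that system calls back. A minimal sketch of the control flow; the endpoint URLs and payload fields are illustrative assumptions, not Shippable's actual wire format:

```python
import json

class Pipeline:
    """Toy model of an event-based pipeline that pauses on a webhook handoff."""

    def __init__(self):
        self.state = "running"
        self.result = None

    def hand_off(self, external_url, payload):
        """Hand control to an external system (e.g. a Selenium grid).

        In a real system this would be an HTTP POST to external_url; the body
        includes a callback URL for the external service to hit when done.
        """
        self.state = "waiting_for_webhook"
        body = {"callback": "/webhooks/resume", **payload}
        return json.dumps(body)

    def on_webhook(self, body):
        """Called when the external system POSTs back; resume the pipeline."""
        data = json.loads(body)
        self.result = data["status"]
        self.state = "running" if data["status"] == "passed" else "failed"

p = Pipeline()
p.hand_off("https://grid.example.com/run", {"suite": "functional"})
p.on_webhook('{"status": "passed"}')
print(p.state)
```

The same shape covers the GitHub integration mentioned above: an inbound webhook triggers the build, and an outbound call reports status back.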
So today, in fact this afternoon, we'll be publishing the web page that announces the OpenShift functionality, opens up the beta program for it, and gives contact information for how to get hold of us, whether to schedule follow-up demonstrations with your team or to talk about coming on site and being part of our beta program. That will all be on that page. You can go to www.shippable.com and you'll find that information on our products page.

All right, I'll make sure that goes out to the Commons mailing list as well, along with a link to the video. And yes, Lee, I agree, this is pretty beautiful. His next question: are there any specific integrations with OpenShift 3 constructs to be highlighted, for example CD by district, if you're using OpenShift Online or have set it up with districts?

So I'm not sure exactly what a district means here; I'm assuming it's some kind of zoning for your application hosts. Anything that OpenShift exposes through APIs, we can expose as well. As Tom demonstrated, there is forward sync and reverse sync with OpenShift: all the projects and any changes you make in the OpenShift UI get reflected in Shippable, and changes you make in Shippable get reflected in OpenShift. So if you have districts, which are basically labels, all of that is definitely possible.

All right, I'm looking to see if there are other questions. If not, that's about the right length for someone to have to watch this video in the future, and I'm sure we'll do another one once the beta is done and get some more feedback from folks. If people have feedback for Tom and Avi and the Shippable folks, please do hit them up. If you want to throw a slide up here now with your contact information, or maybe even your email, whatever. I think you had one there. There it is: avi at shippable and tom at shippable. You guys have the easiest email addresses. Yep, pretty quick and dirty to get there.
So, very impressive. I really, really love to see stuff that I don't have to do any scripting for, though I can imagine a few people are going, "But I love my scripts." So we'll see how we can work with them as well. I think that was a great presentation, so thank you very much, both of you, for coming on board and being part of the Commons. Thank you, Diane. Thank you very much, Diane. Thank you, everybody. Take care.