Well, welcome everybody to this OpenShift Commons Image Builder SIG meeting. We're going to have a talk today from Mark Boorshtein and Brian Bullock of Tremolo Security. You've heard them before talking about OpenUnison and identity management on an OpenShift Commons briefing in the past. And I've invited them back here again because they've taken the next step: they've containerized Unison and OpenUnison. By that we mean they put it in a Docker container and used the source-to-image (S2I) tool chain that is part of the OpenShift project to build it. And I thought it would be a great thing to have them talk about the lessons they learned and what they had to do in creating and containerizing with that process. They also have an identity management service, which they're going to explain in this talk today. I know there's a lot of interest from other service providers and people with packages that they want to package up and make sure work, not just on OpenShift but anywhere. So I thought this would be a really good way to continue the conversation with a good working example. We had one a couple of weeks ago, where Crunchy Data talked about packaging up and containerizing Postgres, and we're continuing to see and learn more about the different aspects of packaging up your service. So without too much further ado, Mark's going to introduce himself and his partner here and we'll kick it off. Ask any questions in the chat, and at the end of maybe 20 or 30 minutes of talking and demoing how this all works, we'll open it up for a conversation. So take it away, Mark. So Diane, thanks for the opportunity to present again. My name is Mark Boorshtein. I'm the CTO of Tremolo Security. And my partner in crime here is Brian. Hello. Yep, you're there. 
So like Diane said, we're going to talk about how we containerized our open source identity management product as well as our commercial product, and then we're going to show off how that all works with a demo of our identity management capabilities for OpenShift. So really quickly, what are Unison and OpenUnison? Unison is our commercial product; OpenUnison is our open source project. The only real difference between them is that Unison has a management interface and is what we call a virtual appliance, whether it's running on a container, VM or physical hardware, whereas OpenUnison is a J2EE application and you have to configure everything by hand. We're going to be focusing mostly on OpenUnison from a demo standpoint today. So we're an identity management solution. That means user provisioning: what do your applications need access to? Self-service: can users just log in and request that access rather than having to have an email chain? A virtual directory for integration, SSO and web access management. And we're built on Java. Really the key difference from a Docker standpoint that we ran into when containerizing these systems was how they run, because Unison is a standalone server. It's built on top of Undertow, so we have multiple Undertow instances that all run in one JVM: the identity provider, the proxy for web access management, the LDAP virtual directory and the admin interface all run on different ports on different Undertow instances in one JVM. Whereas OpenUnison is a J2EE web application. So we had to take different approaches, and we're going to talk about both. For the commercial Unison, we had to make some changes to the assumptions we'd made, whereas with OpenUnison we were able to use source-to-image, which actually made the process very, very easy. 
So real quick, before we dive into the specifics of how we built our source-to-image, we're actually going to kick off a source-to-image build, because it's going to be downloading all the libraries and it takes a couple of minutes, and I don't want to have dead air while that process is going. So before Brian talks about what our source-to-image build process looked like, I just wanted to show you: this is the GitHub repository that we're using for our demo. This GitHub repository has our source code. We have ScaleJS, which is the interface that you'll see, as well as our configuration files. And there are really two config files. There's the myvd.conf file, which is specifically for the LDAP virtual directory that's embedded into OpenUnison. I'm not going to dive into too many details, but what I do want to point out is that we've parameterized everything, so this one source repository can be used to build dev, test and pre-prod, all based on configuration parameters. So things are in source control, but not secrets and passwords. And then just to show you, this is the Unison config file. Again, I'm not going to get into too many of the details of how it's configured, but I'll point out that we have parameterized that as well. So that's up on GitHub. And so what we're actually going to do, before we talk too much about the process of building our source-to-image, is just kick it off. So we've got s2i build — I'm running this from our source-to-image repository, also on GitHub — and we're just going to pass in that Git repo; this builder image that we have hosted on Docker Hub, which Brian's going to talk about in a second; and the resulting image that will be created and deployed into my local registry. And one thing I want to point out here is that this is all running against the local Docker machine. So you can use source-to-image to create images that will run on any Docker host, not just OpenShift. 
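For readers following along, the s2i invocation Mark describes takes three arguments: the Git repo holding the configuration, the builder image, and the name of the output image. Here is a rough sketch; the repository and image names are illustrative placeholders, not the exact ones from the demo.

```shell
# Compose the s2i build command described above. All names below are
# illustrative placeholders, not the demo's actual repo or image tags.
GIT_REPO="https://github.com/example/openunison-demo-config"  # config + ScaleJS sources
BUILDER_IMAGE="tremolosecurity/openunison-s2i"                # builder image on Docker Hub
OUTPUT_IMAGE="local/openunison"                               # created in the local Docker registry

# Echoed rather than executed here: a real build needs s2i, Docker,
# and network access to the repo and to Maven.
S2I_CMD="s2i build $GIT_REPO $BUILDER_IMAGE $OUTPUT_IMAGE"
echo "$S2I_CMD"
```

Because nothing in the command is OpenShift-specific, the resulting image can also be run with a plain docker run on any Docker host, as Mark notes.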
I go ahead, hit Enter — downloaded, building. And at this point Maven is taking over; it's going to be running for a couple of minutes. So with that, I'm going to go ahead and hand it over to Brian. Thanks, Mark. So as Mark pointed out, OpenUnison, and Unison in general, are just simple J2EE applications. They're deployed as a simple WAR file. So all we really need in our image is a servlet container, and we are relying on Tomcat for that. Of course, Tomcat requires Java 8, which is also in our builder image. We decided to go with CentOS and build Tomcat into the builder image rather than use the official Tomcat image that's available on Docker Hub for a few reasons, perhaps the most important being that we wanted to stay within the Red Hat ecosystem. Additionally, we wanted to be able to configure it for TLS, which admittedly we could have done using the official Tomcat 7 image on Docker Hub, but again, staying within the Red Hat ecosystem, we get a lot of stability doing that manually anyway. So our image very simply contains Java, Tomcat 8 and Maven. Our builder image allows users to pass in either a constructed OpenUnison WAR file or a GitHub repo, which is what Mark actually did when he kicked off the command moments ago. Passing in the WAR file makes things a little bit faster: the builder image doesn't have to go out and download all of the required libraries, and it doesn't have to assemble the WAR file. But it also requires that the WAR file be assembled and ready to be passed into the builder image, which isn't always going to be the case. That's something that Mark and I actually talked about at length when we were putting this together, and something that we reached out to the OpenShift Commons group to discuss. 
Do we require that the WAR file be built and passed into our builder image, or do we allow a Git repo to be passed in and have the builder image assemble it? The community was very responsive, very quick and very helpful. Their biggest suggestion to us was to take a look at the WildFly project. The way WildFly assembled their project was to say: if you pass in a WAR file, great, we'll take the WAR file and deploy it out to the servlet container for you; if you pass in your Git repo, great, we'll package that up using Maven for you. So we followed that same model. Like I said, Mark has passed our Git repo into the builder image, so that is going to take two or three minutes to go ahead and download all the required libraries and build that out. Real quick, before we move on to the next slide, I just want to point out — we've been talking about source-to-image and want to make sure that we define what source-to-image is. We have a bullet here at the bottom with a link to source-to-image. The way that we saw it, source-to-image allowed us to provide that base image and a build process without having to create our own scripting base, our own install process. It gives us a standard where we can now say, hey, we've got the baseline image up on Docker Hub — and Brian's going to talk about that in a second — but actually going from a couple of text files, a configuration, to a running OpenUnison instance just requires that one command. And that was immensely powerful for us. That's great to point out. Thanks for that, Mark. That was incredibly powerful for us, as Mark said. Containerization obviously simplifies life a great deal, but using source-to-image to go directly from that Git repo to a running, configured OpenUnison image is incredibly powerful. So what we've got laid out here is a simple map of how all of this works for us. 
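The WildFly-style either/or that Brian describes — deploy a prebuilt WAR if one was passed in, otherwise build from source — is the heart of an S2I assemble script. Here is a minimal sketch of that decision under assumed paths; it is an illustration, not Tremolo's actual script, and the Maven call is echoed rather than run.

```shell
# Minimal sketch of the WildFly-style branch in an S2I "assemble" script.
# Paths and the Maven invocation are illustrative assumptions.
assemble() {
  local src="$1" webapps="$2"
  if ls "$src"/*.war >/dev/null 2>&1; then
    # A prebuilt WAR was passed in: just drop it into Tomcat's webapps dir.
    cp "$src"/*.war "$webapps/"
  else
    # Source was passed in: Maven would assemble the WAR first
    # (echoed here instead of executed).
    echo "mvn -f $src/pom.xml clean package"
  fi
}
```

Passing the WAR skips the library download, which is why that path is faster; passing the repo trades a few minutes of Maven for not needing a prebuilt artifact.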
Starting here, I guess, on the upper right-hand side of this image: our builder image is maintained on Docker Hub. It is built from the official CentOS 7 image that's available on Docker Hub, so anytime that CentOS 7 image is updated, our builder image is automatically rebuilt by Docker Hub. Our Dockerfile is contained in a GitHub repository, and anytime we make any changes to that builder image — infrequent though that may be — Docker Hub will recreate the image for us. So that's all automatically updated on Docker Hub. Over on the left side here, we've got the Maven Central repository as well as our Tremolo Security repository. The thing that we wanted to point out here is that our libraries are not available in Maven Central. We've got our own repository where you will find and download any necessary libraries for our project. The reason for that is we wanted to provide ownership, some level of accountability: the files, the libraries that are coming from Tremolo Security can be trusted. You know that they're coming from Tremolo Security; they're signed by our EV certificates. They're not just available up there in Maven Central. And just to add to that, if you go to Maven Central and you find Unison libraries, they're not from us. We didn't build them. And of course, nothing about our images here is specific to Docker or really any other containerization platform. Our builder image is available on Docker Hub and it is a Docker container, but all of our images are built primarily to be run in OpenShift. Anything else you wanted to add to this slide, Mark? No — like we said before, the output image can run on any Docker instance. So the demo that you're going to see here is actually running on a local Docker machine on my Mac. Great. So before we go into — oops, "build copy artifact." Okay, it looks like somebody decided to make a change to one of their repositories. So we're going to go ahead and just kick off OpenUnison anyway. 
This is doing live demos, Mark. Yeah. Someone decided to make a change. So we'll take a look and see — it looks like somebody changed out one of the applications that we integrate with, Alfresco. So we'll take a look at that later, but I'm kicking off OpenUnison now in the background. It takes a couple of minutes to start up, so I'll go ahead and continue the presentation while that's going. So we talked about the open source image that we created around S2I. Now we're going to talk about some of the issues we ran into in creating the commercial version of Unison inside of a Docker image. We didn't go with S2I for this particular route, mainly because S2I is really great at creating a static image based on configuration files, whereas with Unison we really wanted to balance the ability to deploy inside of an image without losing the power of the admin interface. And so instead of maybe breaking things up into microservices, which would probably be the more orthodox approach for a new app, we wanted to make sure that we were keeping it as simple to deploy in Docker as it is on bare metal or a VM. If we're starting this up on RHEL, it's yum install unison and the system's up and running; we didn't want to have to deploy a set of containers just to get to that same point. And so as we started looking at containers, the first thing we did was break the first rule of creating Docker images — really bad Fight Club joke — don't treat containers like VMs. That's the first thing we did. We said, well, we're very lightweight, we run in a VM great, this is going to be easy. Not quite. So we ran into a lot of challenges with that first methodology of "let's just treat it like a VM." And not to iterate through the whole list, but networking was a big issue, configuration management was a big issue, and security as well, along with how we maintain that simplicity. It's super simple for us to deploy on a Linux VM; it shouldn't be harder to deploy on Docker. 
So this is kind of a typical, highly available deployment for Unison pre-Docker. We use a master/slave model where our configuration is all text based — it's all XML in a configuration file. And we use a push model where the master knows about all the different slaves; when you want to make a configuration update, you make all your changes on the master, you're happy with it, you hit a button and it pushes out to each individual slave. Well, that doesn't really work with Docker, because in a dynamic environment like Docker I don't know where the IP addresses are — and I don't want to know where the IP addresses are, because you might be spinning things up and bringing things down. So we needed a more dynamic approach, and we added the option of basing everything off of volumes. We said, all right, we're already reading data off of a drive, so let's just make the push mechanism work from there as well. So when you spin up your Docker containers — whether they're pods in OpenShift or Kubernetes or some other mechanism — you go ahead and spin them up and everything runs off of a shared volume server. We don't care if it's CIFS, NFS, whatnot. And instead of making a network call to each slave, we put a marker on the file system that the slaves are looking for, which says, hey, time to reload your configuration. So we're getting the best of both worlds there. We had some other lessons learned around security and that volume management. From a security standpoint, great resources were both the OpenShift documentation and the Red Hat Atomic documentation guidelines for how to build images. That really helped us narrow down what was going on. When we look at Docker Hub, unfortunately most of those images run as root, which we really want to avoid, so having that baseline really helped out. And then also making sure that we're staying consistent and not trying to fire off too many processes. 
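The marker-on-the-filesystem mechanism Mark describes can be sketched in a few lines of shell: the master touches a marker file on the shared volume instead of pushing configuration over the network, and each slave periodically compares the marker's timestamp to its own last-reload stamp. The file names and polling approach here are illustrative assumptions, not Unison's actual implementation.

```shell
# Master side: signal all slaves by touching a marker on the shared volume.
signal_reload() {
  touch "$1"    # $1 = marker file on the shared config volume
}

# Slave side, polled periodically: reload if the marker exists and is
# newer than the stamp recording this slave's last reload.
needs_reload() {
  local marker="$1" stamp="$2"
  [ -f "$marker" ] && [ "$marker" -nt "$stamp" ]
}
```

The advantage over the push model is exactly what Mark names: nobody needs to know anybody's IP address, so containers can come and go freely.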
We've got it down to just one Java process now, which makes management much easier. And then from a persistent volume standpoint, the way Unison was originally built, configuration information was spread across five or six different directories depending on what you were configuring. So in our initial run we said, okay, you deploy on OpenShift, here's the mapping for each of those volumes — and you needed five or six different volumes. Well, because in OpenShift you can't guarantee the mapping from a persistent volume to a persistent volume claim, that caused a lot of heartburn for folks who were trying to use the system. So we said, okay, we're going to change that so that if you're running on Docker, we put all the configuration in one directory. That way, for Unison, you only have one volume mount point. And we're actually going to expand on that in a future rev, where we've got a package format so you can download all of the configuration in one file and use that to bootstrap a new instance. So we're trying to figure out how we want to integrate that — where you just pass in a URL to the bootstrap package and we don't even need a shared volume. So that's another way to get around that. So now we want to show off what we're doing with Unison and OpenUnison and how we integrate with OpenShift. What we're showing here is an environment where we're going to create a project; we're going to add annotations to the project that specify who owns the project, who's able to log in, and who owns the approval process for that project; then have a user log in using their centralized credentials out of Active Directory, request access, get approved, and log in and be able to work on that project. This is actually piggybacked off of the demo we did at Red Hat Summit a couple of weeks ago. So we have a mix of both OpenUnison and Unison, all running on containers. 
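Consolidating the configuration into a single directory, as just described, means one volume mount is enough when running the commercial Unison image under plain Docker. A sketch of what that looks like — the image name and paths are illustrative placeholders, and the command is echoed rather than executed:

```shell
# One shared config volume instead of five or six separate mounts.
# Image name and container-side path are illustrative assumptions.
CONFIG_DIR="/mnt/shared/unison-config"
RUN_CMD="docker run -d -v $CONFIG_DIR:/etc/unison tremolosecurity/unison"

# Echoed, not executed, since it needs Docker and the image available.
echo "$RUN_CMD"
```

On OpenShift the same idea becomes a single persistent volume claim, which sidesteps the PV-to-PVC mapping problem Mark mentions.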
The environment itself: we have OpenShift running on OpenStack, both of which are running on RHEL, as well as Red Hat IdM. Those are all being managed by Unison and OpenUnison from an identity management standpoint. Obviously we're going to stick with OpenShift here; OpenStack and RHEL may be another episode. And so from an identity management standpoint, we've got two Active Directory forests, because Microsoft has basically told everybody since 2000 to logically break up your Active Directory forests — so we've got two forests for different sets of users. Then we have a Red Hat Identity Management server. Active Directory stores all of our people — our humans, our users — and Red Hat Identity Management stores Linux-specific attributes (shell, SSH keys, things like that) and all authorization, so all of your groups as well as service accounts. OpenShift has kind of an interesting model for identity management in that it really outsources most of it. And there are really two ways you can authorize users to use projects. The first is to manually use the oadm command to add users to roles inside of a project; that can get messy, it can be really hard to scale, and it's almost unmanageable once you get to a certain level. The other option is to use an LDAP directory. So you set up a directory, you set up a group, and OpenShift provides a synchronization capability. And that works pretty well. The impediment there is that you have to manually set up these synchronization jobs, and they're not real time — they're periodic. And you want your OpenShift deployment to be as dynamic from a security standpoint as it is from a DevOps standpoint. So we actually integrate directly with the OpenShift API, so that when you get provisioned into a group, we're not adding you to a group in the directory — we're adding you directly to the group inside of OpenShift. So everything's very dynamic. 
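As a point of reference, the two stock options Mark contrasts looked roughly like this in the OpenShift 3.x-era CLI. The user, project, and sync-config file names are illustrative, and the commands are composed and echoed rather than executed:

```shell
# Option 1: manual role bindings, per user per project -- hard to scale.
MANUAL_CMD="oadm policy add-role-to-user admin alice -n teamproject1"

# Option 2: periodic group sync from an LDAP directory -- not real time.
SYNC_CMD="oadm groups sync --sync-config=ldap-sync.yaml --confirm"

echo "$MANUAL_CMD"
echo "$SYNC_CMD"
```

OpenUnison's approach avoids both drawbacks by writing group membership straight to the OpenShift API at provisioning time.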
You define one workflow, or set of workflows, for your projects and say, look, this is how the approval process works — maybe it's a single step, maybe it's multi-step — and then we handle the rest. So when a project gets created, you don't have to worry about creating new workflows. That being said, let's get to the really fun part. So OpenUnison is up and running. Like I said, it's just a J2EE application running on Tomcat. And the first thing I'm going to do is log in. I'm logging in with my domain-two account, aptly named with an underscore two. I'm going to put in my password. So this is Scale — it's ScaleJS, an AngularJS application on RESTful APIs. So if you wanted to integrate this into your own portal or your own application, we fully document all the APIs; nothing's hidden, nothing's secret. If I click on my profile, this is coming directly out of the directory in OpenShift, and you can see I already have access to three applications. And when I come into OpenShift — just to show you there's nothing up my sleeve — again as the same user, and this is authenticating through the Unison instance running on Docker, virtualizing Active Directory, you can see I've got those three projects lining up with those three groups. So the first thing we need to do is show you how we actually request access. I've got this OpenShift organization set up and I've got a dynamic workflow built, and what that dynamic workflow actually does is talk to OpenShift to figure out what projects are available. So when I click on this and it loads up, this is coming directly out of OpenShift — this is not a static configuration. You see I've got TeamProject, TeamProject2 and TeamProject3. We're going to add TeamProject4. So here's my YAML for my new project, and I'm going to make a couple of tweaks to it to make sure that it will load. So this is pretty standard. 
These annotations — and an annotation in OpenShift is specifically designed for use cases like this, where an external tool that wasn't built by the OpenShift team is going to work with the OpenShift API — let us add additional data, which is great because that makes our product work really well. So these are your standard annotations that would get created if you created the project through the GUI, and then we created these two additional annotations. The first one identifies a group inside of Red Hat Identity Management that stores the list of approvers. It doesn't have to be a group — it could be a dynamic rule, it could be a set of users — I just tend to like groups because it makes it a little easier to manage the access. The second annotation defines what group in OpenShift will allow access to this project. And so once we create this project, the next thing we're going to do is create this group, TeamApp4, and then add it as an administrator to the project. We can't let you add users directly to projects, because for OpenUnison to figure out what you already have access to, we would need to call OpenShift and have OpenShift tell us — and that mechanism doesn't exist right now. So instead of trying to do that manually, we decided to stick with groups. So I'm going to go ahead and create this project — save the file. So I just created the project. And now I'm going to go ahead and create the OpenShift group TeamApp4, so that lines up with this authorization I created. And then finally, I'm going to add the TeamApp4 group in an administrator role to TeamProject4, the project I just created. So that's getting created. And so I'm here in Request Access. You can see TeamProject, TeamProject2, TeamProject3. I click on this again — I haven't made any changes to OpenUnison — and you can now see here's TeamProject4. So that was dynamically generated based on a dynamic workflow. So now I'm a user. 
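The admin steps just walked through — create the project with its two extra annotations, create the group, bind it to the admin role — correspond roughly to the following CLI sequence. The annotation keys here are illustrative placeholders, not Tremolo's actual annotation names, and the commands are composed and printed rather than run:

```shell
# Illustrative sketch of the demo's admin steps; annotation keys are
# placeholder names, and commands are printed, not executed.
PROJECT="teamproject4"
GROUP="TeamApp4"

CREATE_CMD="oc new-project $PROJECT"
# Two extra annotations: who approves access, and which group grants it.
ANNOTATE_CMD="oc annotate namespace $PROJECT example.tremolo.io/approval-group=$PROJECT-approvers example.tremolo.io/access-group=$GROUP"
GROUP_CMD="oadm groups new $GROUP"
BIND_CMD="oadm policy add-role-to-group admin $GROUP -n $PROJECT"

printf '%s\n' "$CREATE_CMD" "$ANNOTATE_CMD" "$GROUP_CMD" "$BIND_CMD"
```

Once the annotated project exists, OpenUnison's dynamic workflow discovers it on the next request with no configuration change, which is the point of the demo.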
I need to be able to access this project — I'm going to check it out because I need it for my job — and I'm going to submit the request. So now the request has been submitted and an email has been sent to the approver. My boss is giving me a hard time, wants to know why I haven't started work yet. Well, I don't have access yet. Okay. So here's an example of a report that we can do of my open requests: the currently logged-in user, requesting access to TeamProject4. I've got to go bug the other Matt Mosley in the other domain, so I'm going to go ahead and do that. So the other Matt Mosley is going to log in — now I'm logging in with a different domain, as the approver Matt Mosley. I'm now logged in and you can see that I have this open approval. And this is all generic Bootstrap, so it works great on mobile devices. So this is who it's for, this is what I'm asking for, some other details. So I'm going to go ahead and say, okay, looks good. I'm now on record as saying that the other Matt Mosley should be able to have administrative access to TeamProject4, and the other Matt Mosley has gotten that request. Now here's a really important benefit: now we have logs. So I can look at the completed approvals for today, and we can see for TeamProject4 that the request was made and was approved by Matt Mosley. So you could give just your auditors or just your security folks access to these reports — they can't go approving anything, but they can get into this data and stop bugging you about it. So, just to show you there is absolutely nothing up my sleeve except a notification, I'm going to log in now as user Matt Mosley. And when I click here, it's now pulling up that I'm a member of the TeamApp4 group out of OpenShift. So if I'm having a problem logging in and I start saying, hey, I have access — you know, it's not uncommon for users to forget to go to the checkout and submit things — for support it's real easy to say, well, does it list as a role? 
No? Okay, well, you didn't actually ask for it. So now what I'm going to do is log back in. So this is my current access; if I hit refresh it will come up — yep, and there we go, TeamProject4. I'm an admin and I'm able to start doing my job. So we actually just showed a complete lifecycle here: an admin creates a project, and that workflow gets dynamically applied to the project. So if you wanted to change who the approvers are, et cetera, you make the changes directly to your OpenShift project, and this can be dynamic. We're using two attributes here, but you can put anything in those annotations and include it in the workflow. So if you want to add extra descriptive information or additional approval steps, that can all be driven straight from the project YAML file. And then once that YAML file was created, the user was able to log in, request that access, and get it approved; it was all audited and reported. And now you can see I'm logging into OpenShift. So that kind of brings us full circle: we're using S2I and generic Docker, integrating with OpenShift and running on OpenShift as well. And that's the whole demo. That's pretty awesome, and nothing crashed there. What was the issue that you were having earlier with doing the S2I build? The build itself was fine, and I can actually pull it up here. It looked like there was a dependency that didn't want to get pulled down. Yeah, so for some reason the ActiveMQ dependency — oh. Actually, no space. I ran out of disk space. That's not good. I think I need to go ahead and take a look at that. Interesting — not really sure why it did that. But yeah, so that was a local problem; it had nothing to do with S2I. I think we ran that build process like five times today with no problem. So once I fix my laptop, I'm sure it'll work great. Why don't you throw your slides back up again so that we have a background. 
If there are any questions from any of the participants at this point, just type them in the chat or raise your hand and I will unmute you. I think it's actually an interesting topic that you've given me, because on the open source side — when you're pulling from GitHub, doing the build and using the base builder image — the whole S2I workflow works very nicely. But as you get more complicated and you want to do more things with your service, there didn't look to be an alternative. I'm wondering if you're thinking there's anything we could do for S2I to make it work a little better for the commercial offering — and S2I is an open source project, I should say. I don't know that S2I would really help, at least for us on the commercial side, because the gap between the commercial product and S2I is really the management UI. Source-to-image is really great at going from some text files in a Git repository to a running solution in a couple of seconds, whereas for us the management UI is where you would go ahead and at least start that process — you might be importing things, you might be executing against it. So I just don't know that S2I would really have helped there for us. What we have thought about doing, because we've had a couple of folks ask about this, is maybe a hybrid approach. They love the idea of using the UI to build the config, but they actually want to use OpenUnison in production because they don't want to make changes on the fly — they want to be able to say this is static. And because the configurations are 100% compatible, you take a configuration from OpenUnison and you can drop it into Unison, and vice versa: you take a configuration from Unison and you drop it into OpenUnison. There are only one or two very, very specific instances where that isn't 100% the case, and that's only because those components are not components I own and so I can't open source them. 
But even then, we make those binaries available. So we've talked to folks about saying, okay, we'll use the management UI to generate a Git repository for you that you could then use with source-to-image as part of your deployment process. And especially when you start getting into OpenShift, that makes it even easier, because then we can integrate with builders and simplify through that process. So for our commercial side I could definitely see a hybrid approach, and source-to-image really helps us out with that. Brian, is there anything you wanted to add at the end of that? No, I don't think so. I think Mark did a pretty good job of summing all that up. There were a couple of things I was going to chime in with, but it's almost as though you were reading my mind with some of that hybrid approach stuff, Mark, so well done. Perfect. I'm not seeing any questions in the chat right now, so you've done a really awesome job, Mark, answering most of the questions there. And if you want to ask more questions afterwards, you can always post them on the OpenShift Commons mailing list, or get hold of Mark and Brian through Tremolo Security on Twitter and via email — we'll pop that up. We'll be editing out that tiny glitch in the beginning with the audio and posting this as a blog post on OpenShift.com, and putting it on the YouTube channel shortly, so you can rewatch it and see if you have any questions or if there are other bits you'd like added. I just really want to thank you for doing this, because one of the things we really like to see is the different use cases for containerizing applications and how they're configured and run on OpenShift, and you've done a really nice job of showing that. 
The other one we did previously was Crunchy Data, and they had a slightly different approach, because they containerized each of the pieces of the tool chain — Prometheus, logging, and the Postgres databases themselves. So they took a slightly different approach, and I'm not saying that any of them is better or worse; different applications, processes and services need slightly different things, and so it's really good to expose this use case. So thank you very much for coming and doing this, and thanks everybody for joining us and listening in today. We'll be on in two weeks' time with another Image Builder SIG meeting, and tomorrow there will be an OpenShift Commons briefing, an introduction to big data by members of the Apache Spark team here at Red Hat. So we're looking forward to that. Thanks again, and we'll talk to you all again, probably tomorrow or in the weeks to come. And if there are other folks who'd like to share their use cases for containerizing and building images, please let me know and we'll give you the podium next time. Thanks again, guys. Thank you. Thank you.