Well, good morning everybody and welcome again to another OpenShift Commons briefing. This week we have Ben Paris talking to us about Source-to-Image, S2I. It's a toolkit for building reusable Docker images from source code. It's one of my favorite things so far, besides Origin, to come out of the OpenShift team, and I think you'll find it's one of the more useful tools for building Docker images, especially if you're running in an enterprise with lots of things that you need to make sure work and are updatable and maintainable. So I'm going to let Ben introduce himself and get started, because we have a lot of content to cover. The format for today is we'll do Q&A through the chat, and then once Ben is done with his talk and his demo, we'll open it up for a conversation so anyone can have their questions answered. So take it away, Ben. Thanks, Dan. So my name is Ben Paris, and I am the team lead for developer experience for the OpenShift product, which means that we help users coming to the OpenShift platform with turning their source code into application images, handling the builds, some of the CI/CD stuff, etc. And a big part of how we do that is this Source-to-Image tool. It is integrated into OpenShift, but we've also made it available as a standalone command-line tool for developers who just want a more source-oriented way to take their application source code and produce a Docker image that they can run on OpenShift, on their own system, or anywhere else they would like to run that image. So I'm going to take you through some of the capabilities of the Source-to-Image tool, what you can do with it, and how it works. Just to cover the high-level workflow as a user coming to the S2I tool: we have a concept of builder images, and I'll go into what a builder image is a little bit later.
But as a developer, assuming that somebody has already produced a builder image that you would like to use, a builder image that's capable of building Ruby source code or building Java source code, then you just have your local source code, or a Git repo containing source code that you've been working on with your IDE on your local system, and you're going to invoke the S2I build command. The S2I tool is then going to combine the builder image you have requested with the source code that you have provided and go through all the steps needed to turn that source code into a runnable application, in terms of installing dependencies, doing compilation, etc., and then commit that as a new Docker image on your system. And then it's up to the developer from there: if they want to go ahead and run that Docker image themselves because they want to test their application, or if they'd like to push that application image out to a Docker registry so it's accessible from another system, they can do that. And that's the very simple, straightforward flow as a developer who just has source code and wants that source code turned into a Docker image that can be deployed or run somewhere. No messing with a Dockerfile; you're basically just working in the source code that you're used to working in for whatever framework you're using, and then using the S2I tool to turn it into an application image. So we had a number of goals that we set for ourselves when we started this project. First off was ecosystem, right? Docker is very popular, Docker images are very popular, so we wanted to make sure that we could take advantage of existing Docker images. If there is an image out there for Python or for PHP or something else, somebody who's using the S2I tool should be able to use that image with S2I. They shouldn't have to create a whole new image from scratch or do a lot of special work in order to have an S2I builder image and use it with the S2I tool.
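The end-to-end flow described above can be sketched with a couple of commands. The repo URL and image names here are illustrative stand-ins, not the exact ones from the demo, and the block only executes them if you opt in, since it needs the s2i and docker CLIs installed:

```shell
#!/bin/bash
# Illustrative S2I workflow; set RUN_S2I_DEMO=1 to actually execute.
# The repo and image names below are hypothetical examples.
if [ -n "${RUN_S2I_DEMO:-}" ]; then
  # Combine a builder image with application source to get a new image.
  s2i build https://github.com/example/my-app.git centos/python-34-centos7 my/app

  # Test the result locally; -P publishes the app's exposed ports.
  docker run -d -P my/app

  # Or push it to a registry so it can run on any other system.
  docker push my/app
else
  echo "dry run; set RUN_S2I_DEMO=1 and install s2i/docker to execute"
fi
```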
We wanted to take advantage of all those great images that are already out there. And then in conjunction with that, we also did want to create an ecosystem of our own builder images. So while you can use existing Docker images, and I will show you how you can take any existing Docker image and use it with S2I, there are certain things you can do when you're creating a new image for a language or framework that make it immediately consumable by S2I, so that someone else who wants to use your image with S2I doesn't need to provide anything themselves. They can have that straightforward flow I described earlier of providing their source code, invoking the S2I tool, and getting an application image. So we want to see this ecosystem grow of what we would call S2I-enabled Docker images, where everything is baked into the image for use with S2I. And it's not much that has to be there; I'll again talk through what some of that is. We were also going for ease of use. We wanted developers to be source code developers, and they don't need to learn and write Dockerfiles. There are a lot of best practices around Dockerfiles; to write a really good Docker image, you need to know certain tips and tricks about how to reduce the number of layers that you have, what user certain things are running under, and so on. It's just a tool, and there's nothing wrong with Dockerfiles; if you want to write Dockerfiles, that's great. But if all you want to worry about is, I've got my Java app and I've written Java source code, we wanted developers to not have to worry about then writing a Dockerfile that's going to Maven-build their source code and put it in the right place within the Docker image, and debugging all of that. At the same time, we wanted to enable all the existing tools, so developers who are familiar with Rake, with Maven, et cetera.
Those should be the tools that S2I is going to enable and use within this build flow, so that building your source code via S2I can look and feel a lot like building your source code the way you're used to doing it via the command line or a Makefile or Maven or whatever tool you're used to using to produce your assembled application code. We wanted reproducible builds. We wanted to ensure that if somebody is doing iterative changes and pushing something into production, we have a reproducible way to say: okay, I'm just changing the source, so I'm rebuilding on top of what is otherwise the same base image. Again, when you're doing a full Dockerfile build, you've got more potential things that might change. If, for example, you're doing a yum install or a download of a Java framework as part of your Dockerfile, then every time you docker build that, you might be getting something slightly different. But here we're minimizing the inputs into the final application image by saying: we've got the builder image, it's got the framework, it's got everything else. All we're adding is the source. That's the only thing that's going to change if you need to fix a bug and rebuild; you're not going to pull in a bunch of other inadvertent changes. We wanted to be fast, ideally as fast as somebody is used to developing on their local machine when Docker is not in the picture. And again, that comes back to the fact that when you write a Dockerfile, unless you do a lot of work to ensure that you're not creating a lot of layers, doing a docker build is going to commit each layer as part of that build process, and it can take a while to create an image. With S2I, everything is done as a single layer. So you're only adding one new layer to the builder image, which is efficient for pushing, and there's only one commit that has to be performed during the build, so it's pretty fast to actually produce that image.
And then finally, from a security perspective, we wanted to ensure that all of the assemble logic where we're building your application is done as a non-privileged user. A lot of Dockerfiles default to root; nobody sets the user in the Dockerfile, and so everything's being done as the root user. Not necessarily bad, but it is a potential vulnerability if there is a way to escape the container, or you're just not really sure what your application is ultimately running as and what permissions it has. So we're trying to give a way for people to have more fine-grained control over what permissions they are granting when they're performing this assemble logic. And it becomes more important when you move away from doing an S2I build locally on my own machine and into an environment like the OpenShift PaaS, where we're going to be performing these S2I builds and we want to ensure that when a random user comes and wants to build their application, we're not letting them build their application as root on our systems. All right, so this brings us to the first demo that I'd like to do, just of what the S2I tool looks like and the experience of a user running it. You don't have to worry about this first screen, which is probably too small to read. But one of the things that we try to do with our images, so in this case I've selected the Python 3.4 image, which is S2I enabled, is set up a default usage script. So if you just run the image, you actually get a little bit of help information telling you: hey, what is this thing? How can I do something interesting with it, build with it, and run the image? Just a little helpful information for a user who's new to an image and doesn't know what the heck this Python 3.4 image is about. So with that in mind, I'm actually going to go ahead and do an S2I build. Here I'm invoking the S2I tool with the build command, and I'm specifying a repo on GitHub.
You can also specify local source, but I'll show that in a little bit. But here we're going out to a repo. That is a Django Python repo, and then we're specifying that we're going to use that Python 3.4 image as the builder image, and it's going to produce a new image for us called my/python. And so if we invoke this, we're going to see that we're actually running through a Django install, downloading various Python dependencies. Again, just like if you were doing this locally, we're just getting all that output from the Python build process, and this will take a minute or so to run through building all of the dependencies and packaging up the application. It's actually done now; we're now committing the image. I've got this on a fairly low log level. If you increase the log level, you can see a lot more information on exactly what S2I is doing. But here it's committing our new image with our application added to it as a layer on top of that Python 3.4 image. It should be done in another second or two. Just to show you what that repo that we built actually looks like: it's a very basic Python repo with a requirements file and all of our source code under here. So nothing special about this repo in terms of it being an S2I-specific repo or anything like that; it's just a standard Python repo. OK, so now we've committed that image. We actually have a new image named my/python, and I'm going to go ahead and run that. And we see that our app is actually coming up right here, and it's listening on port 32788, so let me go to that port. And there we are, running our Django application out of a Docker image that we just built from source code on GitHub. So going back over to the presentation: that's the basic workflow that we would see users going through with S2I, hopefully making it very simple for them to turn that source code into an application image that they can push somewhere, or run and iterate on, rebuild, et cetera.
So I've already talked a little bit about the differences between doing S2I versus a traditional Docker build with a Dockerfile. Like I said, we're able to enforce that you're not doing any of those assemble steps as root. We also let you separate out your dependencies. You noticed that there was actually a lot of dependency installation happening during that Python build, but you could include those dependencies within the builder image, so that when you are performing the build, you are not actually grabbing any dependencies; all you're adding is your source code on top. And again, that'll improve the speed of the builds and reduce how many layers you're introducing. So what's in these S2I builder images? As I said earlier, you can use any image as a builder image, but if you want to directly S2I-enable it, there are a couple of things you can add to the image. You start with an image that contains your base language, like a Python runtime or a Java runtime, and then any build tools that you're going to need for building applications for that framework, for example Maven. And that has been a bit of a controversial thing: there are people who are concerned that they don't want to have both their Maven and their WildFly runtimes in the same image. We do have a solution for that coming, which I will talk about towards the end of this presentation. But for the most basic case, you've got a single image that has both the application runtime and the build tools that can prepare source code for that application runtime. On top of that, you then add what we call an assemble script. The assemble script has the responsibility of taking the source code provided by the developer and doing whatever steps are needed to turn that into a runnable application.
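As a sketch, S2I-enabling an image mostly means adding those scripts and telling S2I where they live. A hypothetical builder Dockerfile might look like this; the base image name and paths are assumptions, while the io.openshift.s2i.scripts-url label is the S2I convention for locating the scripts:

```dockerfile
# Hypothetical S2I builder image: runtime + build tools + S2I scripts.
FROM some-nodejs-base-image

# Tell S2I where to find the assemble/run/save-artifacts scripts.
LABEL io.openshift.s2i.scripts-url="image:///usr/libexec/s2i"

# Copy the scripts into the image.
COPY ./s2i/bin/ /usr/libexec/s2i

# Assemble and run as a non-root user, per the security goals above.
USER 1001

# Default to the usage script so 'docker run' prints help.
CMD ["/usr/libexec/s2i/usage"]
```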
So if that source code includes a pom.xml for Maven, then you might just run a Maven package and that's everything that needs to be done. It really depends on what steps need to be done for your framework. It's very generic: it's simply a shell script. It doesn't even actually have to be a shell script; we can execute any binary that's just named appropriately, but normally they are shell scripts. The assemble script can also inject configuration into the system. So if you're running, for example, an application server and you want to ensure that it's configured in certain ways, that can be done via hints that are provided by the developer with the source code, or hints that are provided through environment variables, to ensure that the runtime is all set up correctly for running the application. So that's the assemble script, and then you have the run script, which is responsible for actually starting the application. When the application image gets committed, the startup run command for that image will be this script. So that's going to be your JBoss start script or your rackup command or whatever command is going to start the framework for the application. And again, any additional runtime configuration that you want performed each time the application is started up can happen here, possibly configurable via environment variables. So again, totally flexible, just a shell script or other executable binary that will be invoked; we'll set the run command of your app image to this thing so that the next time you go to docker run that image, it will start your application. And then lastly, there's an optional save-artifacts script. This is for iterative development. What it allows you to do is, if you, for example, pull down a bunch of Maven dependencies during your build, those dependencies now exist in the application image that you've produced.
Then the next time you build, rather than starting from the base builder image that doesn't contain those dependencies, we can actually pull the Maven dependencies from the previous version of your application image and use them as part of the next build, so that we don't have to pull them down from the internet. So this save-artifacts script essentially is responsible for picking the files and paths from the application image that you want to reuse and make available to future builds. And that'll, again, improve the performance of what we call an incremental build, where you've already built the thing once, so you've got a lot of your dependencies already and you can just reuse them. So the steps the S2I framework actually goes through when you invoke that S2I build command: first off, it's going to create a brand new container running the builder image that you specified, for example that Python 3.4 CentOS image. Then it's going to stream a tar file into that container, and that tar file consists of the application source that the developer specified. In the case of an incremental build, where you have saved some artifacts from a previous image, it will also have saved those artifacts out of that image, and it's going to stream those into the builder container as well. There is actually a little bit of setup before it even gets to this point; for example, if you were building from a GitHub repo, we're going to clone that repo to the local machine first before streaming it in. But basically it starts up this container and streams the artifacts and source into it. It's also going to provide any environment: you can provide environment variables on the command line when you invoke S2I, or include an S2I environment file in your source repository itself.
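For example, such an environment file lives at .s2i/environment in the source repo and is just one KEY=value pair per line; the particular variables shown here are ones a Node.js builder might consume, used illustratively:

```
# .s2i/environment -- available during assemble and in the final image
NPM_MIRROR=http://npm-mirror.example.com/
NPM_RUN=start
```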
And then it's going to run the assemble script for the builder image, which is responsible for consuming all that content that was streamed in, the source and the artifacts, and performing whatever activities make sense for that assemble script; S2I then waits for the assemble script to finish. Once it finishes, S2I commits the container, sets the command to that run script that I mentioned earlier, and tags the image with the output name that the user specified. And then you've got that new image locally available. So what does an assemble script look like? Like I said, it's just a shell script. The source code in this case is being streamed into /tmp/src, so the assemble script is going to grab the source code out of that /tmp/src directory and move it into the working directory of the container. It's then going to configure an NPM mirror; this is a Node.js assemble script, I should mention. So it's going to configure an NPM mirror just in case somebody wants to use one. This is where those environment variables that you might specify come into play: if the developer wants to use a local NPM mirror for performance reasons or something else, they would have set this NPM mirror variable. And then the assemble script invokes just a standard npm install to set everything up. Once that finishes, we will commit this container in the state that it's in, with the run script as the startup target. So what does the run script look like? Well, this one is incredibly simple: we're just going to invoke npm run. The NPM_RUN environment variable there does default to start, but if somebody wanted to override which command from their package.json actually runs for their Node.js app, they could override it, again through environment variables, for configurability. The exec here is important because that's what's going to make this the primary process of the container so that it receives all the signals.
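As a rough sketch of what was just described, here are minimal Node.js-style assemble and run scripts, written out to a local s2i/bin directory the way a builder image author might keep them. The /tmp/src path and the NPM_MIRROR/NPM_RUN variable names follow the conventions mentioned in the talk, but treat the details as illustrative rather than the exact scripts from the slide:

```shell
#!/bin/bash
set -e
mkdir -p s2i/bin

# assemble: move the streamed-in source into place and install dependencies.
cat > s2i/bin/assemble <<'EOF'
#!/bin/bash -e
# Source is streamed into /tmp/src by the S2I tool.
cp -Rf /tmp/src/. ./
# Honor an optional NPM mirror supplied via an environment variable.
if [ -n "$NPM_MIRROR" ]; then
  npm config set registry "$NPM_MIRROR"
fi
npm install
EOF

# run: start the app; exec makes npm PID 1 so it receives signals.
cat > s2i/bin/run <<'EOF'
#!/bin/bash -e
exec npm run "${NPM_RUN:-start}"
EOF

chmod +x s2i/bin/assemble s2i/bin/run
```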
You always want the last command in any shell script that's running in a Docker image to be an exec command, so that signal handling goes to the place that you want it to. So I've talked a lot about source in the sense of Java source, application source code. But the quote-unquote source that the developer provides can actually be more than just that. It can also be configuration files for the framework. So if you're using EAP or WildFly, you might provide a standalone.xml that tunes the framework specifically for your application, or a php.ini file. There are really two ways that you can do config. One of them is this way, where the assemble script could actually look for a standalone.xml file being provided by the user and make sure that that file ends up being the configuration for the application server, by moving it to the right location before committing the image. And then the other way is through environment variable config. And it's really a trade-off, right? If you do environment variable config, it can be a little bit easier for an end user, because they can just specify that one environment variable and the value every time they need to; they don't have to write a whole config file and include it. On the other hand, if you write a whole config file, you can include it with your source code and manage it with your source code. So a little bit of thought goes into whether you want to enable configuration by letting developers provide a config file, or by setting an environment variable, or even potentially doing both and letting them pick. Beyond framework configuration, we've also used S2I to enable things that aren't even application frameworks, such as database images. In this case, I've got a MySQL image, but I want to be able to easily create customized MySQL images where I have fine-tuned the database configuration. So in this case, my source code could be the config settings for the database.
It could also include schema initialization scripts that are going to create some tables within the database. I should note, you don't actually want to create the database content itself, generally, because generally speaking with a database image, you want your database content stored on a volume and not baked into the image. The pattern here would be: the developer provides some schema initialization scripts, the SQL commands that are going to create tables and things like that. The assemble script would put those initialization scripts into a known location but not execute them, and then the run script, on startup of the image, would look for those scripts and execute them. That way, it's executing those SQL commands once you have a proper volume mounted where the database storage is going. Just a subtle point there, but that is a way to allow people to provide database schema initialization as part of an S2I build process. And the last example of configuration that we do is we have a Jenkins image, and in this case the S2I assemble process allows a user to provide Jenkins job definitions; that's their quote-unquote source in this case, along with general configuration of the Jenkins server that they want. So we've got a basic Jenkins, but they can configure it any way they want, and they can store that configuration in a Git repo, so they're managing revisions to their Jenkins configuration through source control with Git, but then they're able to quickly build a new custom Jenkins image that has exactly that config, out of that Git repo, anytime they need to.
And the last thing that the assemble process for our Jenkins image will do is let you list plugins that you want installed. You can just reference those, and part of the assemble process will pull those plugins down and ensure they're included, so we don't have a big bloated Jenkins image with everybody's plugins, but we make it super easy for somebody to create their own custom Jenkins image with exactly the plugins they want. So I alluded a little bit to incremental builds earlier; this is how we reuse dependencies. There are a few conditions required to take advantage of incremental builds. First off, the builder image itself has to be compatible with incremental building, which means it has to provide a save-artifacts script, or the user has to provide a save-artifacts script themselves. I'll get into how you can provide the scripts from outside the image later; that's how we enable generic Docker images to be used with S2I. But one way or another, a save-artifacts script has to be there so we know what dependencies we are reusing. The second requirement is that a previous image has to exist that matches the output name for this build. In my case I was outputting to my/python: the first time, it can't save anything from a previous build because my/python doesn't exist, but on follow-on builds, since it now exists, it would be able to reuse the dependencies from there. And lastly, it's actually a flag you have to explicitly enable in S2I, and the reason for that is we didn't want people to get confused where the system reused an old dependency and they actually expected it to pull a new version down from the internet. We wanted it to be an explicit choice, so that you know you are deciding to do this and are aware that you're going to get old dependencies and not necessarily the latest thing that's out there.
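A save-artifacts script just streams the reusable paths to stdout as a tar archive, which S2I captures and replays into the next build. Here's a minimal Maven-flavored sketch, written to a local file the way a builder author might keep it; the .m2/repository path is the conventional Maven cache and an assumption here:

```shell
#!/bin/bash
set -e
mkdir -p s2i/bin

cat > s2i/bin/save-artifacts <<'EOF'
#!/bin/bash -e
# Stream the Maven dependency cache to stdout; S2I captures this tar
# and makes its contents available to the next incremental build.
cd "$HOME"
tar cf - .m2/repository
EOF

chmod +x s2i/bin/save-artifacts
```

Opting in then looks like `s2i build . my-builder my/app --incremental` (image names illustrative).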
So the way it actually works is it's going to create a Docker container from the old output image and then run that save-artifacts command within that container, which is going to copy out all the dependencies the save-artifacts script references, and then make those available, in addition to the source code, to the new build. So I'm just going to show you how this works, in this case with a Maven image; Maven is a little bit slower, so we're going to do the first build, where we're not doing anything incremental, building from a local directory with a WildFly image. And you see here all these "downloaded" messages: it's downloading tons and tons of Maven dependencies, and it takes a little bit of time to grab all those. That's what we're going to avoid when we do the incremental build the next time around. So we've built our war, and again we are now committing the Docker image, and we're done with that. I'm going to go ahead and run this image just so you can see what we did there. I'm going to guess that we are on the next bumped port number. Okay, so here we are in our WildFly 8 application that we just built, running on my local system. So that's cool, except that it says WildFly 8, and actually this is a WildFly 10 application. So I'm going to go ahead and fix that, change this code here, and go ahead and commit that change, and then we're going to build again, except this time we are going to specify the incremental flag. In addition to the incremental flag, I'm going to exercise another option within S2I, which is to just go ahead and run the image as soon as it's built; that way I don't have to actually do that docker run command afterwards. It saves a little bit of time when you're doing iterative development: you can do s2i build with --run and then immediately test your changes.
So this time when we do the build, we're going to see that it does not do all the Maven dependency downloading, because it's going to extract all that stuff from the previous image. And you see it's actually already done: it's already extracted and built it and is committing the image. As soon as it commits it, we'll see that it actually starts up the image. So now it's starting it up; I didn't do a docker run. It's scrolled past the port number that it actually started on, but it does tell you information about where it started it. And now that it's up and running, we'll go ahead and hit that new port, and we see that this has changed to WildFly 10; that was the change that I committed earlier. So that's an incremental build. Again, it saves you time by not having to pull all those dependencies down every time you do the build. Another capability that S2I has is what we call layered builds. Under normal circumstances, because we are tar-streaming the content into the container for the assemble script, we require that you've got tar and shell binaries within the image so that we can do that. That's not a very challenging dependency; most images meet it. But if we do encounter an image where those requirements are not met, we have a fallback model for injecting the source code, in which we actually build a new image with the source code baked into it and then invoke the assemble on that. So we actually write a new Dockerfile that's based on the builder image and contains an ADD command that adds the local source code from the developer. We build that and commit it as an intermediate image, and then we do an S2I build against that image, since it now contains the input files. At that point the assemble script gets invoked and your new application image gets output.
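The generated Dockerfile for that layered fallback is conceptually tiny, something along these lines (the image name and destination path here are illustrative, not the exact text S2I generates):

```dockerfile
# Intermediate image generated for the layered fallback: bake the
# source in with ADD, then run the normal S2I flow against this image.
FROM my-builder-image
ADD . /tmp/src
```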
One warning if you do find yourself in a case where S2I is doing this: if your builder image contains ONBUILD commands, then those are going to get executed, because ONBUILD commands defined in an image get run anytime you docker build an image based on that image, which is exactly what this is doing. So just a little bit of caution there if you're doing that. Okay, so I mentioned earlier that some people expressed concern over the fact that we combine the runtime for the application with the build tools like Maven, and the outcome of that is that the application image actually still contains that Maven tooling. It's not really needed in the application image, and some people would also say it's a bit of a vulnerability exposure: you've got this additional stuff in your application image that really just doesn't need to be there. So our approach to solving this is what we're calling extended builds, and we have a pull request in flight; it's probably going to merge within the next couple of days, but it's not quite there yet. The idea is to allow people to invoke S2I builds in which they provide a builder image and a runtime image as two separate images. The builder image looks a lot like what we've discussed, with the assemble script, so it's going to contain your Maven tooling; it does not need to contain your application runtime, just the build tooling like Maven. We'll invoke the assemble there, so it would produce, for example, a war file, and commit that image.
So now you've got a new image that contains just your war file, or whatever built artifacts you wanted, and then we will immediately start a second build, in which we extract the files you specify from that artifact image and add them to a runtime image. We'll have an assemble script in the runtime image that knows how to move those artifacts around to the right locations, but it's not actually building anything; it's just shuffling files around. And then we commit that image as the application image. So because we built that final image on top of a runtime-only image that doesn't contain Maven, your application image also does not contain Maven; it only contains the runtime. And then you've got your image that you can push to the Docker registry or just run locally. So I'm going to go ahead and demo that. Like I said, this is not in what's in S2I today, but it should be there very soon, so I did want to talk about it; I know it's been something the community has been interested in having. So first off, just to show there's nothing up my sleeves here: this is my WildFly runtime image that I've created, which does not contain Maven. You can see I listed the WildFly directory, and we do indeed have WildFly there, but if we look for Maven, there's no Maven in this image. And then we've got our Maven image, and there is no WildFly in that image, but we do have Maven available within it.
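Based on the description here, an extended-build invocation might look roughly like the following. The --runtime-image and --runtime-artifact flag names come from the in-flight pull request and could change before it merges, and the image names and artifact path are illustrative; the block only executes if you opt in, since it needs the s2i and docker CLIs:

```shell
#!/bin/bash
# Hypothetical extended build; set RUN_S2I_DEMO=1 to actually execute.
if [ -n "${RUN_S2I_DEMO:-}" ]; then
  # Build with the Maven-only image, then extract the war into the
  # runtime-only image, committing that as the application image.
  s2i build . my-maven-builder my/wildfly-app \
      --runtime-image wf-runtime \
      --runtime-artifact /tmp/war/ROOT.war
  docker run -d -P my/wildfly-app
else
  echo "dry run; set RUN_S2I_DEMO=1 and install s2i/docker to execute"
fi
```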
So I'm going to go ahead and make another change, so you can see that we're actually doing something here: changing that source code and committing it. And now I'm going to go ahead and do one of these extended builds. The syntax here is the same s2i build with a dot, just building my local Git directory; I'm using my Maven image as my builder image, and my output is going to be my WildFly image, just like before. But then I'm specifying, in addition, a runtime image, which is going to be the WildFly runtime image, and that's the hint to S2I that this is an extended build, so it'll use the first one as a builder only and then do this extended extract-and-commit. The other thing I'm specifying is what files I actually want to extract from my Maven image, my quote-unquote builder image. The Maven image, when it builds my source code, is going to produce a ROOT.war file in /tmp/war, so that's the file we're going to extract and provide to the runtime image, which the runtime image will then deploy. You can actually specify these files as a label, I believe on the builder image, so that you can have a builder image that knows what files it produces and the user doesn't have to specify them; but this is a way you can customize exactly which files are used, and we're still working a little bit on the experience there. So I'm going to go ahead and run that, and we're going to see that once again we're doing a Maven build here. This is not an incremental build, so we're re-downloading all the dependencies from Maven in this case. And so we've done that, and now we have also actually already done the runtime image as well, and we've committed everything. So now we're going to go ahead and run this new image that we built, and we see we're in our runtime-only Docker image that we have produced. So that's the extended build feature; like I said, it's coming very, very soon, and hopefully it addresses some of the requests from the community that we provide a way to separate out the runtime bits and the
application-building bits in the images.

Just some other options that the S2I framework has available that I want to talk about. I kept mentioning you can override the scripts: you can provide the assemble, run, and save-artifacts scripts in your source repo itself by putting them in a .s2i directory. You can also, on the command line, specify an HTTP URL that points to a directory containing those files; in either case we'll pull those down. That's how you can use S2I with a random Docker image that has not been S2I-enabled: you provide the assemble scripts yourself, and you don't need an image that was specifically built for S2I. Or, if there's an S2I builder image and you like most of what it does but the assemble logic isn't quite to your taste, you can simply create your own assemble script, include it with your source code, and use that builder image, but with your own assemble logic.

You can specify pull policies for the images. By default we pull if not present, so if you reference a builder image that you have locally we won't do any pulling; if you want to make sure you're always pulling the latest one you can specify a pull policy of always, and if you want the build to error out when the image isn't available locally you can specify a pull policy of never.

Environment variables: these can be fed in at assemble time, so the assemble script can use them to make decisions, and they will also be part of the final image, so the runtime application image you've built will have those environment variables defined as well. You can do that either on the command line when you're invoking S2I or by providing an environment file in your source repo, to control your application's assembly and runtime behavior.

Incremental build, which I showed you, is an option; by default it is disabled, but you can
set a flag to enable it.

There is also a way to inject additional files during assembly that will not be in the final application image. All your source code is obviously going to be in the final application image, but if you have, for example, a certificate that you need at assembly time, say to authenticate to a proxy or to access a Nexus repo, there is an argument that lets you provide those files so that the assemble script has access to them; those files will be truncated before the application image is committed. The final application image won't contain them, they won't be in any sublayer of the Docker image or anything like that; they will be nonexistent for purposes of the application image. So you're more secure from that perspective: if somebody does get into your application container, they're not going to have access to those secrets.

So where are we going in the future? Well, extended builds: we obviously want to finish that up, get the fit and finish right, the experience right, and get it delivered. We would also like the ability to override parts of an assemble or run script; today you have to provide the entire script. You can work around this a little bit by providing an override assemble script that invokes the existing assemble script, but it's not a great experience. We want people who like the existing assemble or run script but want to control some additional behavior to be able to provide something like a pre-assemble or post-assemble step, so they can inject that additional logic without having to fork the entire assemble script.

One thing we've discussed, and I'd definitely be interested in community feedback on this one, is actually removing the Git support from S2I. There's a lot of complexity in the S2I tool today around being able to clone Git repositories, deal with submodules, and check out a
specific ref. That's another flag I did not mention: you can specify on the command line that you want to build a specific ref of your Git repo, and we will check out and use that ref rather than master. But it's not really S2I's core competency to be a Git utility, and so one thought, in keeping with the Unix philosophy of doing one thing well, was to defer to developers to git clone whatever they want to build, get their local directory into the state they want, and have S2I just build the local directory rather than building from Git repos. That would also solve the problem of people saying, well, I use Subversion and I would like S2I to support Subversion; we would just be deferring that whole question. But there has been some concern, because people like the Git support and it's certainly handy, so that's definitely an area where we want to see where the community is interested in going.

And then lastly, abstracting the Docker layer. Today we're heavily dependent on the Docker engine to start those containers and commit them. As other container platforms like rkt are evolving, it would be interesting to abstract S2I so that you can bring your own container platform and S2I can work with it.

Just a couple of resources you might be interested in looking at: the source-to-image repo itself, of course, on GitHub. We've got a number of community S2I images in our openshift-s2i GitHub organization. There are also a number of images that are part of the OpenShift product, produced by our Software Collections Library team; those are under the sclorg GitHub organization, and they all start with s2i- and end with -container. We've got images for Node.js, Perl, PHP, Python, and Ruby. There is a WildFly image under that openshift-s2i community organization as well, we now have a .NET image available, and we have the Jenkins image I was mentioning earlier, which is under openshift/jenkins, and that is S2I
enabled. And of course anything on Docker Hub you can consider using with the S2I tool. So that's the end of my presentation; I'd be happy to take any questions.

All right, well, thank you very much, Ben. There have been a couple of questions in the chat, and I think you've covered most of them. Jonathan had asked them, jumping the gun a couple of times on things we then covered. One of his points at the very beginning was that it looked like S2I was really good for beginners, but I think you've covered the payoff for complicated systems; do you want to add any more to that? Let me read through. Can I ask a question now, or should I put it in the chat room? Sure, go ahead, Rob. Okay, and on this one, you know, if the answer is yes, you've made my life perfect and wonderful in all ways, and if the answer is no, well, you're just another bastard. We have a lot of ISVs who are interested in putting their apps up on OpenShift. Is it possible, is it an ambition, that we'll have a website where someone could just go, with a wizard that would guide them: take your app, choose this, this, and this, and voila, here's a Docker image ready for OpenShift? We would then deliver that image to them, and we could keep that image on our side as well, to sort of bring them on board. I know there is some work going on internally, I don't know how much I can really discuss and I don't know a whole lot about it, to do that sort of ISV enablement: you're an ISV who's got this application but you really don't know much about Docker, so how can we make it easier for you to Docker-package your application? It was in the middle: ISVs know little to nothing about Docker; what they've done is put it up on other cloud platforms, and they're looking for that same level of ease of use and functionality.
Yeah, so I know there is some discussion around that. How simple it could be is a little bit hard to say; there is definitely some thought that needs to go into containerizing an application and understanding some of the differences of what it means to be running in a container, and where you're getting the content from. We had a wizard that guided them through some of those decisions, and of course they just get to a point where they don't know the answers and they have to come back to us, but at least it gets some solution going. Usually I say, well, you have to Dockerize it. Yeah. Rob, there is a lot of work going on internally in the container group at Red Hat to make that build service available; in the image builder SIG we'll probably be having a talk shortly about what that roadmap looks like. So hopefully we can answer that question again, because everybody wants a wizard. I'd love a wizard too, but some of the stuff the ISVs are doing is even more complicated, spanning numerous containers. I think Jeff McCormick, who's been listening in on this too, is a good example of one of those ISVs, from Crunchy Data; he's managed to containerize his application, and it's multiple containers, not just a single one. So there is some real complexity to doing that.

There's one other question just coming in from Jonathan here. Ben? Yep, sorry, just reading it now. I'll read it out loud, or Jonathan, you can unmute yourself and ask the question too. Is there potential for a Maven or a Gradle plugin to trigger the s2i command? This seems like a great way for developers to automate the process of deploying a Docker image to run on their local machine so that it matches the production environment, for local testing and stepping through code. Yeah, so there is not a plugin today, but is there potential for it?
Absolutely, that would make plenty of sense to me if that's the workflow a developer is used to: they prefer working in Maven and they want to do that as a Maven step to package things up. Yeah, that would be a very logical thing to do. So there's been a little talk about creating custom S2I builders; is something like the Maven or Gradle plugin a case where someone might customize or fork S2I? I don't think you would have to customize or fork anything. That would purely be a standard Maven or Gradle plugin containing the logic to do the S2I invocation, a standalone additional tool that we'd certainly be happy to reference from the S2I tool, to point people to if they were interested. So if people want to contribute to S2I from the community, something along that line, what's the best way for them to reach you? Is it through the GitHub repo and issues? Yeah, absolutely: open an issue on GitHub if you have a feature you're interested in or you're hitting a bug, and we can discuss it, decide whether it's the right fit, and shape up what we want it to look like. If you're interested in contributing the code, submit a pull request; if not, we'll put the feature request on our backlog and try to make that improvement. You also mentioned there was a new release coming with that extended build support, for separating out the runtime and build; is there any timeframe for that that you can share? Yeah, like I said, in the next couple of days I'm hoping we actually land the pull request, and we'll do an official S2I release once that merges, since it is a big change.
So I'm going to optimistically say that by the end of next week I'd expect we'd have that done, but if not, it's certainly within that timeline: it's not months away, it's days away. So, well, maybe by the time this video is blogged about and edited and made into something for the YouTube channel, which usually takes us three days or so, we'll have it, and we can include that bit in the blog post as well. I'm looking to see if there are any other questions here. Jonathan, if you have anything to follow up, you seem to be asking most of those; you can unmute yourself if you'd like, or you can get hold of us on the mailing list and Ben or I will respond. Yeah, I was just going to follow up on the complicated systems and teams. It is particularly useful for the dev/ops separation, where the ops team would like to ensure that developers aren't just going nuts and putting everything in the world into their Dockerfiles, or that you don't evolve a system where every single developer and every single application has a slightly different Dockerfile. It gives the ops team the ability to say: here are the frameworks we support, we know these have the right versions; you developers bring the source code, and we provide the framework you're running on. You can do that with Dockerfiles, but you have to be a lot more careful to make sure that whoever's writing those Dockerfiles isn't doing things you didn't want. So that's another advantage if you have a large team or a complicated system: you're bringing a little more control to the process of what goes into those images. And we've heard various times in the Commons briefings, from Kotoban and other larger enterprises running large teams that develop lots of applications, that this is one of the use cases they're really keen to have supported.
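For reference, the control points Ben covered in the presentation, script overrides in .s2i/bin, pull policies, environment variables, incremental builds, and assemble-time file injection, surface on the command line roughly like this (image names, the scripts URL, and the injected paths are placeholders; the flag names follow the s2i CLI):

```shell
# Scripts in the source repo override the builder's own:
#   .s2i/bin/assemble, .s2i/bin/run, .s2i/bin/save-artifacts
# Or point S2I at externally hosted scripts:
s2i build . builder-image myapp --scripts-url https://example.com/s2i/bin

# Pull policy: if-not-present (default), always, or never
s2i build . builder-image myapp --pull-policy always

# Environment variables, visible at assemble time and in the final image
s2i build . builder-image myapp -e MAVEN_ARGS="-DskipTests"

# Incremental build: reuse artifacts preserved by save-artifacts
s2i build . builder-image myapp --incremental

# Inject files for assemble time only; they are truncated before commit
s2i build . builder-image myapp --inject ./secrets:/tmp/secrets
```

This is the ops-control story in practice: the builder image fixes the framework and its version, and developers only vary the source and these build-time knobs.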
So it'll be interesting to see whether we can get some more community support to use this with other container platforms like rkt, and how it moves forward. Really, thank you, Ben, and your team for all the work you've been doing; it makes my life a lot simpler, and hopefully a lot of other teams will be happy as well. It seems there's one more thing in chat. Yeah: can the S2I tool pull artifacts from a remote location like Nexus and create a new image based on some base image? Yeah, in fact, that's exactly what the WildFly build was doing; it was pulling dependencies down. It's really not the S2I tool doing it, it's the assemble script: the logic in the assemble script pulls the dependencies. In that case it was doing a Maven package, and part of the Maven processing was to pull those dependencies down. The assemble script could equally have curl commands that pull artifacts, if that's what you want; it's really up to whatever you want to put in that assemble script.

All right, are there any other questions? Yep, I do agree with that. The presentation was great, thanks, Ben. And what Jonathan's saying now: it seems like an advantage of S2I is more about Jenkins configuration as code, and less about Dockerfile automation that you need to do with or without S2I. What would you say to that, Ben? Certainly one of the advantages is that Jenkins configuration flow. I'm not sure the "less about Dockerfiles" part is true; somebody needs to write a Dockerfile somewhere, with or without S2I. The question is who needs to write it, and without S2I, that who is the developer. With S2I, the developer doesn't have to do it; they can rely on that ecosystem of S2I-enabled images, provided by the community or by their IT group or whoever, and they don't have to worry about Dockerfile creation and building those base builder images.
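A minimal sketch of the kind of assemble script Ben describes, one that fetches a prebuilt artifact with curl instead of compiling source; the artifact URL and the deployment path are made up for illustration:

```shell
#!/bin/bash
# Hypothetical custom assemble script: pull a prebuilt WAR from a
# remote repository (e.g. Nexus) rather than building from source.
set -e

# Illustrative endpoint and target directory, not real values.
ARTIFACT_URL="https://nexus.example.com/repository/releases/app/ROOT.war"
DEPLOY_DIR="$HOME/deployments"

mkdir -p "$DEPLOY_DIR"
curl -fsSL -o "$DEPLOY_DIR/ROOT.war" "$ARTIFACT_URL"

echo "Artifact downloaded to $DEPLOY_DIR; nothing to compile."
```

Dropped into .s2i/bin/assemble in the source repo, this replaces the builder's assemble logic entirely, which is exactly the override mechanism described earlier in the talk.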
And that's actually a wonderful thing to hear as a developer. So, all right, thank you very much. That brings us almost to the end of the hour. It's been great to have you here, Ben, and I'm sure we'll have you back again soon. And maybe someday we'll get that wizard done and make all the ISVs happy. All right, take care, all.