All right then, I would like to get started. We have a really good session set up for us today that I've been looking forward to for a while. It is with Eric Jacobs from our Red Hat OpenShift team, and he's going to give us an update and an overview on getting started with OpenShift 3's latest beta 3 release. He's one of our master trainers for OpenShift, and we're really pleased to have him here today. So Eric, with that, I'm going to let you take it away and get started, because we have a lot to cover today.

Awesome, Diane, thanks so much. For those who aren't familiar with the beta program that we've been running for the version 3 release of OpenShift that's coming up, we have a number of customers that have been participating with us, getting early access to the code as well as installing it, working with it, et cetera, et cetera. All of the material that shows people how to work with the beta is actually being developed in the open, in public, on GitHub. So it might be hard to read here, but if you go to github.com/openshift/training, you'll find more information and all the training material. Currently we're working with beta 3, so we're going to look at the beta 3 overview. And what we're going to do today is focus on the developer experience. I've already done the installation and configuration of the environment. I'll review really quickly how it's configured, and then we're going to go through some examples of acting like a developer using OpenShift 3. We're going to do some builds using pre-existing templates, using public code repositories that are available, and we're going to build a Docker image directly from a Dockerfile.

So first, we'll go over the architecture and the environment. For those of you coming from OpenShift 2, OpenShift 3 has two main components: masters, which are the orchestrators, and nodes, which are where user applications, the containers, actually get run. The beta environment targets three systems, three virtual machines in my case, running on my laptop. We have a master that also runs the node software, so it's hosting applications as well as orchestrating, and then we have two more nodes. Again, in comparison to OpenShift version 2, with OpenShift 3 we wanted to still provide the capability to set up a topology within your OpenShift environment, to provide for high availability, distribution of applications, et cetera. We called those regions and zones in version 2. In version 3, for the beta, we're still talking about regions and zones, but the scheduling part of OpenShift has been drastically improved, and we don't enforce a topology on you anymore. So region and zone is a construct you can choose to use, or you can name it something completely different. This section of the training material, and also the OpenShift 3 documentation, talks very heavily about what the scheduler is and how it works and all those kinds of things.

But let's dive onto the command line really quick. I will make my font a little bigger here. Diane, let me know if people are complaining about sizes or speed. So if I do osc get nodes: I'm currently the root user, and after the installation is completed, the root user on the master server is set up as a master administrator of OpenShift. So when I ask for the list of nodes, I see that we have three. These are host names.
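For reference, the node listing he's describing looks roughly like this in the beta (the host names and label values here are illustrative of the training setup, and the exact output columns changed from build to build):

    # as root on the master, which the installer sets up as the cluster administrator
    osc get nodes
    # NAME                      LABELS                      STATUS
    # ose3-master.example.com   region=infra,zone=default   Ready
    # ose3-node1.example.com    region=primary,zone=east    Ready
    # ose3-node2.example.com    region=primary,zone=west    Ready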
We've got the server that is called master, where the orchestrator is running, the one I'm logged into, and we've got node 1 and node 2. From an architectural perspective, we've defined an infrastructure region and then just one other region called primary, and this primary region is where all of the applications are going to end up running. Again, we're going to do a separate session on administration and setting all this stuff up. For now, it's all set up for you.

To do a quick recap of what OpenShift 3 looks like architecturally: masters are the orchestrators, and nodes are where applications are run. The service layer, which we'll see a little bit of today, is internal; it's meant for applications to talk to one another inside OpenShift. And the routing layer is how external traffic, from consumers of the applications running on OpenShift, gets into OpenShift. We'll talk a little bit more about that as well.

So what we are going to do is log in to the web console. The first thing we do is provide a username and a password. Currently, the beta provides documentation on how to use htpasswd for user authentication, and we're going to be providing the same sort of Apache-backed authentication layer should you wish to use it. Or you can actually configure OAuth directly. So if you wanted your users to log in with their GitHub credentials or their Facebook credentials or their Gmail credentials, we can do things like that as well. But we can still plug into enterprise directories and authenticate through an Apache proxy, too.

Projects. In OpenShift 2, when developers created an account, they only ever saw one place where they could put all their applications and all their components. In OpenShift 3, we've expanded that bucket. Now a user can be a member of multiple projects, multiple buckets where applications and their components may land. In this case, this user Joe has been assigned to this demo project, and Joe is the administrator of the demo project. He currently has complete rights; if Joe wanted to delete this project, which would be sort of silly, he could. And in some cases, Joe may actually be given administrative rights to create projects as well. If we go into this demo project, right now it's empty; there's nothing in it. Some administrator has created a project for Joe and given him access to it, and it's up to Joe now to fill it with stuff.

So the first thing we're going to do is look at an example that is a single, complete example of an application that's going to run, with a service that's defined so that the application can be found, as well as a route so that external traffic can get into it. Let me look at this one. Yes. Okay. One thing to note: in the current state of the beta, we are still a little JSON heavy. We're going to see that you can do a lot of things without using JSON (and we also support YAML), but this particular example is one that we've created for the purposes of this training, something very simple out of the box to really verify that all the components of your environment are working. And if we dig around in here, one of the important things we see is this pod template. Pods, if we go back to this diagram here, are where applications essentially run.
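Trimmed way down, the example file he's describing contains three resources along these lines. Treat this as a sketch of the shape rather than copy-paste JSON: the beta's API versions and exact field spellings were still moving around. The 8080 container port is what the hello-openshift image actually listens on.

    {
      "kind": "Pod",
      "labels": { "name": "hello-openshift" },
      "containers": [
        {
          "name": "hello-openshift",
          "image": "openshift/hello-openshift",
          "ports": [ { "containerPort": 8080 } ]
        }
      ]
    }
    {
      "kind": "Service",
      "id": "hello-openshift",
      "selector": { "name": "hello-openshift" }
    }
    {
      "kind": "Route",
      "id": "hello-route",
      "host": "hello-openshift.cloudapps.example.com",
      "serviceName": "hello-openshift"
    }

The selector on the service is what ties it to the pod, and the route ties an external host name to the service; that wiring is exactly what gets walked through next.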
In most cases in OpenShift, a pod will have a single container, a single Docker image that's running inside of it. But in some cases, it may make more sense to have multiple Docker images if they're very closely coupled components.

Eric, can I stop you for a second? Could you bump up the fonts a little bit? That's a little small for us. There you go.

I have a question. Can anybody hear me? So speaking of pods, does this concept diverge from the vanilla, upstream Kubernetes concepts of pods, replicated pods, services, and so on? So from an OpenShift perspective, at this level, it's basically the same. In other words, OpenShift has the same concepts of pods, replicated pods, and services as upstream Kubernetes. And we're going to see all of that, yep. Good, thank you.

Yeah, so in the case of this particular pod that we've defined, the image is openshift/hello-openshift. Well, what is that? If you go to Docker Hub, into OpenShift's area on Docker Hub, you'll find this hello-openshift image. This is our basic hello world example; it just returns "Hello OpenShift!" when you access it. The other thing that we define, and we're going to see more of this in the UI as well, but I'm just showing the JSON quickly as an example, is a service. And we see a selector for the service. We talked earlier, on that other diagram, about how internally applications communicate using the service layer; the service layer is also leveraged by the routing layer, which we'll see too. The service layer is used to provide a single entry point to access all of the pods as they're scaled out. You'll start out with one pod; you may need to scale up to support more traffic, maybe to 10 or 15, hundreds, thousands, whatever. The service layer is how OpenShift keeps track of all of those, and it does this using selectors. This key-value selector, name=hello-openshift, matches the name of the pod, or I should say the name in the pod template. And then lastly, the important component here is a route. So we define this hello-openshift.cloudapps.example.com, and that's how we will get to our application.

So, and Diane, let me know if this is big enough: I'm about to log in as the Joe user. I'm going to copy and paste something real fast. That's too small. Yeah, this one you all don't have to see; that's just my notes. So we provide a command on the command line called osc login. Just like I logged in with my credentials via the web UI, this is using the same API, and ultimately it's calling into the htpasswd back end. I've provided the certificate authority file, so everything is secured via SSL, and I've also told the login command which server to talk to. The reason that's important: yes, I'm running this command on the actual master, the same system where the API is, but because of the way we've configured OpenShift, the SSL certificate is only valid for the correct domain. If I had used localhost, it would complain, because the SSL certificate wouldn't match. Joe only has access to one project, demo, so the login tool conveniently just dumped me into that project. We'll see how we switch the command line between different projects in a minute. Just like in the web UI, the project looks empty.
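The login he's pasting from his notes looks roughly like this (the host name and CA path are from the training setup and purely illustrative, and the exact flag spellings were still in flux during the beta):

    # log in as joe against the master's API, verifying the master's
    # SSL certificate with the CA file generated at install time
    osc login ose3-master.example.com \
      --certificate-authority=/etc/openshift/ca.crt \
      --username=joe --password=...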
If I do osc status, it operates in my current project, which is demo, and it says, hey, you've got nothing. Empty. So we're going to pull down some of these resources that we're going to use. I have this test-complete.json file; that's the one we just looked at. From the command line, I can just tell it, hey, go ahead and consume this file, create all of the resources that are defined in it. As soon as I do this, we're going to see a bunch of stuff come back. These are all the resources that were created based on what was defined inside of that template.

And now if I switch back to the web UI of OpenShift... this may just be painful. Oh, it gets bigger. Hey, that's great. So we see here now a graphical representation of what's going on in this project. I created a bunch of resources, and at the overview level, OpenShift tells me, hey, look, you ended up with a single pod that's running. It's based on this image, and internally it's talking to these ports. (Looks like there's a display bug there that needs to be fixed.) The pod itself has an IP address. We're going to talk more on the administration call about networking and some of the other things, but suffice it to say that that IP address is on a software-defined network that all the pods get connected to, so that they can communicate with one another. This is an internal IP that's not exposed anywhere outside of the OpenShift environment. There's also a service IP, and there should be a port showing, but for some reason there's not. That service IP and port are what all the other applications running inside of the OpenShift environment would use to talk to this one. So if this hello-openshift was some microservice that maybe gave me happy little horoscopes that other people wanted to consume, I might tell people, hey, the horoscope microservice lives here.

If I browse some of the resources that have been created, I can find a little bit more information about what's going on in there. Builds: we haven't done any builds; we'll get to that. Deployments: we'll talk more about that as well, but there is a deployment, and that's why there's something running. We can see the pods, we can see the service, and we can also see the route, because the route is associated with the service. So if I click on this and open it in a new tab, we will see a response. This is actually working: the pod is running, and I'm coming from outside the OpenShift environment, from my laptop, going through the routing layer and then into the application instance. And on the command line we can see, for example, that the service is mapped to an endpoint. Awesome.

What about scaling? The web console doesn't provide... hold on one second. Let's see. The web console currently doesn't provide a way to manually initiate scaling, and this answers the question from the gentleman earlier about replication controllers, replica management, and scaling. I can edit the configuration of the replication controller directly to control scale. So for argument's sake, we're going to say, hey, I don't want just one, I want three. And as soon as I quit this, if I ask for the list of pods, I already have three. It spun them up that fast. And if I go back to the web UI, here we go. Overview.
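The sequence he just ran through, sketched out (the resource names and the patch JSON shape here are illustrative; in this beta the replica count lives on the replication controller, and a dedicated one-line scale command was still on the way):

    # create everything defined in the example file
    osc create -f test-complete.json

    # scale by changing the replica count on the replication controller,
    # either by editing its definition directly or with the patch-style
    # one-liner he mentions in a minute, roughly:
    osc update rc hello-openshift --patch='{"desiredState": {"replicas": 3}}'

    # the scheduler reacts almost immediately
    osc get pods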
We see that there are three pods associated with this service right now. And if I ask osc to describe the service, I see all of my endpoints. Cool. All right. So we stood up a simple Docker image, sort of an arbitrary Docker image that could have been anything, any image from any registry that was accessible by the system, and we routed traffic to it.

Question with regards to replication. Yes. As far as I saw there, you basically just had to edit the JSON template to specify the number of replicas. Can you do the same via a CLI command, or do you have to...? Yes, there's supposed to be a scale command, a scale CLI command, that's coming in from upstream right now. It's currently a kube command that's in development that will become something like osc scale, I don't know, but you'll be able to just type a one-word, one-line command; you won't have to actually edit a file. In theory, we actually already support a command line way to do it, because there's a syntax called... patch. Nope. Update. Patch, yes. So there's a patch syntax where I could have said osc update rc blah blah blah --patch and some JSON. So I could have done it as a one-liner right now, but it's still sort of ugly. There's supposed to be a real command that's going to do that soon. And of course it's subject to what's coming in Kubernetes upstream. Correct. Yeah, exactly. Good. Thanks.

The one other thing we'll talk about before we really get to the interesting development stuff is the routing layer. I've got to remember to wait two seconds so the screen refreshes. Okay, so, the routing layer. I'm going to switch back to the root user now, so I'm the master administrator of OpenShift, and my default project is called default. And what we've done is we've actually deployed two things: a registry and a router. The router, if we look here, is the HAProxy router. So we're still using HAProxy as the routing layer in OpenShift version 3, but the cool thing is that it's actually a Docker image that's being deployed on OpenShift itself. So the routing layer is being run, scaled, and managed by OpenShift, which is cool. And the other thing is that because it's now external to the nodes, it's not tightly coupled anymore with the applications themselves. It's infinitely more pluggable, and it's also much more scalable, configurable, flexible, et cetera, et cetera. The routing tier, the HAProxy instance, takes traffic that comes into it and proxies it directly to the pods. It actually does not go through the service layer, for performance reasons: it's only one hop instead of two.

The registry. Since, as we'll see in a minute, OpenShift builds Docker images for us today, it needs some place to put them, so that after it's built them it can actually deploy them, and so that when it needs to scale out and deploy more of them, every node can find them and pull them down. So today, we're providing an instance of a Docker registry for you to use. As a company, Red Hat has a number of Docker registry solutions that are coming. The enterprise Docker registry from Red Hat will be Red Hat Satellite, and I believe the 6.1 release is where they're targeting providing a Docker registry. So you'll be able to integrate with a number of different Docker registries as a place to put those resulting built images. And we'll see how that works momentarily. All right.
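What he's looking at there is roughly this, a minimal sketch assuming the beta's osc project command for switching projects (the pod names are generated, so yours will differ):

    osc project default   # as the cluster admin, switch to the default project
    osc get pods          # the HAProxy router and the integrated Docker
                          # registry each show up as ordinary running pods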
So now we're going to try to do something interesting. We're going to move on to the first example of doing a build, and I believe that is our Sinatra example. So, I'm a Ruby guy. I wouldn't say I'm a developer, I'm actually a very awful developer, but I know a little bit about Ruby. Sinatra is a super lightweight Ruby framework that basically just responds to HTTP requests; if you want it to respond to GET on /, you have to tell it exactly what to do when that traffic comes in. So we have an example that uses Sinatra, and what we're going to do is actually build it.

As the administrator, I just created a new project for the Sinatra application. And as the user Joe, if I go back to the home page of the web console, I see my new project, this Sinatra example. If I click into it, again, it's an empty project. From the command line (got to remember to wait, sorry), as the Joe user, I switch to using the Sinatra project. And if I ask for status, I see it's empty. Cool. Now, we could have written JSON to make this stuff work or whatever, but the nice thing is, this code is already available on GitHub, under github.com/openshift, the simple Sinatra example. It's very, very, very simple: all it does is respond to an HTTP GET request with "Hello, Sinatra!". But this is just pure raw code right now; we don't have an existing Docker image for this.

So from the web console (and please tell me if I'm moving too quickly on the screen), there's this create button. When we hit create, we're going to see a couple of interesting things. One is at the top: a URL for a source repository. Two is at the bottom: templates and instant apps. We'll get to instant apps later. URL for source repository... all right, well, that's this one. Nope, that's a command line command. Here we go. URL for source repository, hit next. Select a builder image. Currently the web console doesn't have the ability to auto-detect the kind of code that's at the end of the URL you just provided. But I understand what that repository is: I know it's a Ruby application, and I know what version of Ruby it requires. All these builder images that are available here were defined by the administrator previously. This is very similar to V2, in how you could install a certain set of cartridges; if you didn't want people to use Python, you wouldn't install it, and it wouldn't be available. Same kind of concept in OpenShift 3: if the administrator doesn't configure these builder images to be available, they're not, and therefore you can't really easily use that code unless you built the whole Docker image yourself.

So again, this is a Ruby 2.0 application, so we're going to select the Ruby 2.0 builder. "Hey, dummy, are you sure this is what you want to do?" Yes. And then it asks me some more questions. Name: it sources the name from the name of the repository, but that's too many characters, so we're just going to call this ruby-example. Do you want to create a route? Yes or no. Deployment configuration: do you want to change any of these default settings? The build configuration: how do you want to do your build? Automatically build a new image when the code changes, yes or no. Automatically build a new image when the builder image changes. And this is an awesomely cool thing about OpenShift, these two abilities. Why? Well, think about it from a security perspective as a developer.
When the underlying Ruby library is found to be vulnerable and a patch comes out upstream, wouldn't it be great if, as a developer, I didn't have to care about what's going on down there? Like: oh, there's a new Ruby image that's been patched; great, let's rebuild my application using that new Ruby image. Similarly with the code: hey, we detected, Mr. or Ms. Developer, that your code has changed; should we go ahead and rebuild your application with the new code? Yeah, that'd be great. And then right here I can change the default scale, though again, we can't change this after the fact yet. And if I wanted to add more labels, for other services or other people to find my application, or to somehow associate it with other things, I can do that here. Note: after creation, these settings can only be modified through the osc command.

All right. So I'm going to hit create. And what's it going to do? It's going to create all of those resources again that we talked about earlier. If we go back to browse, we see that it's created a service for us, with a route (we'll talk about this ugly default route in a minute), and it's created a build configuration. We also have some webhook URLs. This is where we can do that "auto-build if your code changes" thing: you can integrate an external system. Let's say you've already got a Jenkins process. At the end of that Jenkins automated test run, you can tell Jenkins, hey, go ahead and hit this URL on OpenShift (I think it's a POST or whatever). When you hit this URL, OpenShift will say, oh, that means the code has changed, I need to rebuild. Or I can do it manually from the command line, and that's what we're going to do.

And here we go. So as Joe, I'm going to start the build. This is going to kick off a code build. This process is called source-to-image, STI. So: ruby-example-1. Now, we're going to have to switch back to root here, because we're in the middle of two security paradigms, and even though Joe created the build, he can't actually see what's going on. So here are the build logs. This is probably going to go by way too fast, but essentially what's happening is OpenShift stands up an instance of the builder image, that Ruby 2.0 RHEL image. It then pulls in the source code and performs whatever assemble steps are required for that Ruby image; in the case of Ruby, we're going to bundle any Ruby gems and do some other stuff. Once we get done installing all that stuff inside this running container, we shut down the container, take the resulting image of the container, and push it into a Docker registry, which happens to be the one that's running on OpenShift.

So what happens at the end of this process, when the Docker image is finished being pushed? We look at deployments; we don't see any deployments. But when we created this application, we configured OpenShift to deploy the image when it detects that the image changed. Well, the image going from not existing to existing is also a type of change. So as soon as this push is finished, we're going to see a deployment happen. If we go back to the overview... oh, I missed the build, it happened too fast. It would have shown us a little spinny arrow while this was building. Oh, sorry, this is the builder pod. That's another thing that's interesting: remember, since the build happens inside of a container, it's actually happening inside of its own pod.
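That command line flow, roughly (the build name is generated per run, so ruby-example-1 is just what this particular run produced):

    # as joe: kick off an STI build of the ruby-example build configuration
    osc start-build ruby-example      # returns a build name, e.g. ruby-example-1

    # as root in this beta, since joe can't see the builder internals yet:
    osc get builds                    # watch the build's status
    osc build-logs ruby-example-1     # follow the assemble-and-push output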
And if I catch it fast enough, we'll see the STI builder pod where the build is happening. The reason this is taking so long is that I've got a little tiny laptop. It happens to have an SSD, but there's only so fast electrons can move, so the push is taking a while because my laptop is busy doing other things. Any questions so far while we watch paint dry on this process, on what's going on during the build, et cetera?

So there have been a few questions about Jenkins, but whoever just spoke up, go ahead, ask that question. Oh, okay. Regarding the updating of images when the builder changes, can you explain it again? I mean, if I, as an OpenShift administrator, updated my builder images for, I don't know, a common bug, will the applications on the infrastructure be updated too? Automatically? Yes: if you configure your applications to be rebuilt on image change for their upstream source images, then yes.

So what I've just done here is I've asked OpenShift to show me all of the image streams: IS, image stream. And we have ruby-20-rhel7, which comes from registry.access.redhat.com. And if I look in the guts of my... nope, let's see... get bc... edit bc ruby-example. So if I look at the triggers: image change trigger. We have an image change trigger defined on the builder image. Image streams are resources that are monitored by OpenShift, and we have an image change trigger on this build configuration. So when OpenShift detects a change in the image stream, the builder image, and we have an application that's monitoring for that change, OpenShift knows: hey, the builder image changed, I automatically have to rebuild this particular application. If nobody else was watching that builder image, or if nobody had that trigger configured, nothing would happen. Does that make sense?

And it doesn't matter what the registry is, whether it's Docker Hub or my private registry? Nope. So in our previous example, where we used that hello-openshift application, which was an image coming directly from Docker Hub: if somehow that was related to a build, if that was a builder image or whatever, and we had defined an image change trigger against it, as long as it was defined as an image stream... well, let me rephrase that. As long as there was an image stream defined, it works, and it doesn't necessarily have to be defined by an administrator; Joe himself could have created an image stream in his own namespace to do that, if he had the policy access to do so.
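The trigger stanza he's pointing at inside the build configuration looks roughly like this; the field names and the beta image path are from memory and may not match your build exactly:

    "triggers": [
      {
        "type": "imageChange",
        "imageChange": {
          "image": "registry.access.redhat.com/openshift3_beta/ruby-20-rhel7",
          "from": { "name": "ruby-20-rhel7" }
        }
      }
    ]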
Yeah, a couple of key concepts there that Eric covered. One is on the deployment: when he deployed the application, he took the Sinatra code from GitHub and combined it with a Ruby base image; we put those together, built a new image, and deployed it as your application. So now you have a running instance, or maybe multiple instances if you deployed a cluster, based on that built image. Then when he makes modifications to that application, or if he made modifications to the base image, we would build a new image and then either deploy it automatically, or you deploy it manually. What's building is building in one pod, but what's running is separate; it's not building in the running instances, it's building into a separate image. And so when you deploy the updated instance, if there's a problem, you can roll back to the previous version of the image and so forth. So that's the flexibility. And the repository here, what Eric was mentioning: we have an integrated Docker registry that ships with the product, but, and we'll probably talk about it on other briefings, you could connect that to other enterprise registries. We're working on an integration with Satellite, for example, where you could make that your internal private registry and then pull only certified images into OpenShift. But if you want to let developers pull stuff in from the Docker Hub, you could do that as well. I think that's really something that we want to leave to the administrator to control. Okay, thanks.

So the build is finished, and we deployed our newly built image. If we look at this line here that I'm highlighting right now, this is actually the image from the internal registry, and its tag. We generate a UUID-like tag every time we do a build. So ultimately what happened was: we did a build, the build finished, we pushed the built image into the internal Docker registry, and OpenShift detected a change in the image for this application, because there wasn't one and now there is one. So it said, aha, I need to do a new deployment. It did that, and then we got our image running. So we have a service, and internally I can talk to the service with curl and get back "Hello, Sinatra!". The application is clearly working at this point: there's a pod, I went through the service, and I got a response from the end pod.

The route. Right now, when using the web console for creating new applications, you get a default route, but the router installation process isn't very configurable quite yet (it's almost there), and that default route domain is this bizarro default.local thing. There's also an internal DNS server that's running inside of OpenShift, called SkyDNS; it's a Go project affiliated with Kubernetes, and it also does some default-y stuff. So if I do a dig for ruby-example... I think this is right... no, I think I made a mistake. Well, I promise you it's there; I just can never remember how to actually access it. But anyway, applications can also use these internal DNS names that are affiliated with services in order to reach their endpoints. So this default route that was configured isn't a very useful route: DNS doesn't actually resolve it; it doesn't exist anywhere. So we added a new route, a real route, hello-sinatra.cloudapps.example.com, and we see a response. So if I go to services, we have the route for the service, and if I open it in the browser, it works.

Okay, so that's the sort of simple... go ahead. I have a question. Suppose you have an application that has a browser-based interface as part of it, and you make a lot of changes through that interface, something like a Jenkins image where you create some plugins and all that sort of stuff.
Can you do an osc command to then say, okay, I want to make a snapshot, do a docker commit on that image, and use it from now on? Do you know what I mean? The changes are all through the web UI as opposed to code. Yeah. So, yes, but not really. The thing about what you're talking about is that Jenkins has persistent state that's maintained in a database. So what you're actually doing, at least I think that's the case, when you add a plugin to Jenkins, is installing it into the database, or you're actually ending up with files on the file system. Okay, got it. No, no, that was a legitimate question, because I don't know. Oh, I'm not sure either; I think it's probably plugin-dependent. Yeah. So in the case of stuff that goes in a database... well, I guess it's really more generic than that, whether it's a database or not. In this state of the beta, persistent storage hasn't been fleshed out yet; the next beta drop is going to have it. But effectively, if you had persistent storage for an application that was like files on a file system, we would have had to define a storage volume that lived somewhere outside. And you wouldn't necessarily put the stuff in and then say, hey, give me a new image; you would just connect everybody to that persistent storage volume. Yeah, that makes sense. Yeah. And in some cases, in the case of static resources, maybe images or whatnot, you might want those to be part of the image so that they're actually closer to the pod. It's going to depend, and that's one of the painful things about being so flexible: in some cases you have to make these interesting architectural decisions about where a static thing needs to go. Is it part of my image, my application? Or is it a data resource that needs to live on the storage volume? Yeah. And in the case of Jenkins, it might be an image thing. In other words, it might be: this is my app, this is my Jenkins for my app, and these are the plugins that are required, so I want to save that as part of my image, potentially. Yeah, absolutely. So yeah, it's going to depend on the use case as well. Quota we can talk about on the admin session.

Quick starts. Okay. So this is cool, you know, create a simple application, whoop-de-doo. Let's do something more interesting. The next thing we're going to do is a quick start, or a template. If you think back to when I hit that create button, we had that instant app section. So I've created yet another project for Mr. Joe. If I go back to the web console, now I see the quick start project. If I hit quick start: nothing, an empty project, because that's how they all start. And now I'm going to create this quick start. Instant app versus template: the terminology, the naming might change, but, and Joe Fernandes, correct me if I'm wrong, essentially when we talk about an instant app, what we mean is that if you deploy this thing, you're getting a fully functional application that you can immediately start using. As opposed to a template, which might just be some bare-bones deployment of a language or a framework or a runtime that you can then start adding your own code to. An instant app is just a type of template. So if I hit browse all, we're going to see the same guy listed there, and the reason he shows up in the instant app area is because it is tagged with "instant-app".
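The tagging and the parameter generation he's about to describe both live in the template's JSON. A heavily trimmed sketch; the template name, description, and overall nesting here are illustrative for this beta, though the generate/from expression syntax is the real mechanism:

    {
      "kind": "Template",
      "name": "quickstart-keyvalue-application",
      "annotations": {
        "description": "Ruby front end with a MySQL back end",
        "tags": "instant-app,ruby,mysql"
      },
      "parameters": [
        {
          "name": "MYSQL_PASSWORD",
          "description": "Password for the MySQL user",
          "generate": "expression",
          "from": "[a-zA-Z0-9]{12}"
        }
      ],
      "items": [ "... the front end and database pods, services, and route ..." ]
    }

The "from" field is a regular-expression-style pattern, so OpenShift can mint a random 12-character alphanumeric password at creation time without the user typing anything.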
When this template was added into OpenShift, the tags came along in the JSON, yada yada. We're going to provide a number of these templates to help people get started, get up and running. This quick start application is very simple: it's a Ruby front end and a MySQL back end. So we're going to get two separate layers deployed, a database layer and a front end layer, and they're both going to get deployed at the same time. And because of the way the template was made, everything's pre-wired; we don't have to do anything, there's no wiring. If we look at this parameters section, we see a bunch of parameters, username, password, et cetera, and we see this generated flag. In the template, these parameters are not only defined as being present, but there's a syntax that tells OpenShift how to generate them, so that you get a username or password with some level of complexity or whatever. So really, I don't have to do anything at this point; I can just go ahead and hit the create button.

And so very similar stuff is going to happen, but slightly different. We get this database service deployed, with a pod that's going to come up, and we get a front end service deployed, with a pod that's going to come up, because we already instantly started our build. As soon as we hit go on that template, OpenShift said, cool, I know what I need to do, I'm going to start building this thing. And we can see this little spinny guy that says a sample build is running. The MySQL container has already come up. So if I go to the command line as Joe and I look at my services... what? Oh, wrong project. Sorry. Oh, cool, I just showed you an example of permissions, right? (It used to be called "integrated"; we changed the name.) Okay. So, osc get services, and there's the database service IP and port. Yes, curl is not a MySQL client, I know, but at least it tells us whether MySQL is answering or not. So the back end got stood up, the database got stood up, we did a build, and the build is complete. Now we're just waiting for the deployment to finish. Remember the process: STI, we build a new image from the builder image and the code. Once that's pushed, there's a deployment of that image that happens, and then we get our new deployment of our front end. Then we have to get an IP for the pod; once the pod comes up, we have to update the service; once the service is updated, the route can get updated. So at this point, if I go and look at my services, I see the front end service with an IP (and a port that's missing, for whatever reason). And this route, this route was part of the template, so it came along for the ride. At this time it's not configurable, but we'll get there. So if I open this in a new tab, we should see the actual application running, which is a functional key-value store type application. So we can say the key "openshift" has the value "awesome" and store that, and then we can ask, what's the value of "openshift"? And we find out that OpenShift is awesome.

We've only got 10 minutes left. I can sort of kind of demonstrate a Docker build, but it's going to look very, very much the same. So if I just do a quick search here: osc new-app. Yes. Okay, cool. So this command, osc new-app, is a command line way to do basically the same thing that we did when we pointed the web console at some code.
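A sketch of that invocation; the repository URL here is a stand-in, since the transcript doesn't name the exact repo, only that it's a Dockerfile-based WordPress-plus-MySQL project:

    # as joe: point new-app at a code repository and ask it to show,
    # rather than create, the resources it would generate
    osc new-app https://github.com/example/wordpress-mysql.git -o json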
And if I type this here at the command line as Joe and ask it to just output the YAML or the JSON, it'll just show me what it might do; it's not actually going to do anything yet. This particular repository has a Dockerfile that builds WordPress with MySQL embedded. And where was the thing I wanted to show? Where's the strategy? Right, I just lost it. Sorry. Here: type, docker. Boom. So this build strategy says docker, not STI. What that means is that new-app looked at the code in the repo. new-app is a command line tool; it has more power, more flexibility. It looked at the repo and it said, oh, there's a Dockerfile here, that means I need to do a Docker build. So what actually would happen in this case is that when this application was built, OpenShift would detect the original source image, which is... sorry, I made a mistake, I'm pointing at the wrong thing; it's actually just supposed to be the CentOS 7 image. But anyway, it would start with the CentOS 7 image, build in the rest of what the Dockerfile says to do, and then deploy the resulting image.

So I really didn't leave a lot of time for questions, I totally apologize, but hopefully this was useful.

So, when you were describing the builds, I see that the source was specified there, and a lot of them are from GitHub. Is it possible to have these sources behind a protected Git repository, through an SSH key or username and password? So the auth piece is sort of a question mark. Let's talk about goals here. The goal is to provide hosted Git repositories with OpenShift. But if you have an existing internal Git infrastructure with authentication and everything, there should be a way, if it's SSH keys, to define your key so that it makes it into the builder image, so that when it does the git clone against your defined source, it already has the SSH key to use, et cetera. So I'm pretty sure the answer to your question is yes; it's just a matter of when it will 100% land. I think we can already git clone with SSH keys using environment variables, which is documented. Hold on. I promise it's coming. Proxy, proxy, proxy, where are you? Anyway, it's in here, I know there's proxy stuff in here. So conceptually, this is the same: you can define stuff in the environment to tell your Git what to do, and you can define variables for the pods and everything. So effectively, yeah, you should be able to do git clones from internal repositories via SSH keys. That was very technical. Thank you.

Eric, I have a less technical question, which is: the workflow right now is a lot of command line, JSON, stuff like that. How different do you see the GA workflow being from what we have here, so that it would more resemble an OpenShift 2 workflow? Do you have a feel for that? Or are we not at a place where we really know what that end-goal workflow is going to look like yet? So as far as the OpenShift 2 workflow goes, if we go back to the web console, and Michael, it's good to hear from you: when we do this create with these templates...
So for argument's sake, if you had a corporate-standard default Ruby Git repository that you were going to base everything off of, that individual users would fork, what they could do is put in their source repository, define a couple of parameters in the web UI for the username and password, and then when they deploy the template for, let's say, the database that they want to add to their application, they would just edit the parameters and type in the username and the password. The database template that we're shipping already accepts those things. So in terms of a simple two-piece web app kind of thing, we're already very close to that OpenShift 2 experience. Is it going to be 100% automatic, where I could just say, boom, add a database? I don't know where that's going to land, but at GA it will be very easy to spin up a front end, spin up a database, and connect the two, all from the web console.

The web console here has the same concepts as the V2 web console. You can create from either an image, or you can create from a quick start or instant app. There'll be more instant app templates; in V2 we had things like Drupal and WordPress and those types of quick start and instant app templates, and we'll have similar here. The only difference here is that the wizard is reversed: we're asking you for your Git repository. And this is one of the main differences: in the initial release, we don't have Git repositories hosted on OpenShift by default. We're using an external Git repo, either your own, or GitHub, or an external SCM. And then on the command line, the osc command line is modeled after the rhc command line from V2; it's just that you have a lot more flexibility in V3. So when Eric is showing you the JSON, he's actually showing you what's happening behind the scenes, and there's just a lot more flexibility there. But you should expect more and more stuff like that scale command to come, to get more and more abstracted, so that if you don't care to see the JSON or go underneath the covers, you can just do stuff simply.

And as an example, you can plug in some of those parameters from the web UI, and you can also supply them at the console layer. So here we're processing a JSON file directly. But if the database template already existed, there's a way to instantiate a template from the command line; I don't think it's osc create, but effectively it's "osc, create an instance of this template, and pass it these parameters." That would be a JSON-free example of instantiating some template and specifying the parameters, all in one easy command line command.
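The command he's reaching for is the template-processing pipeline, which in this beta looked roughly like this (the template file name and parameter name are illustrative, and the exact flag for overriding a parameter was still settling):

    # expand the template's parameters (including the generated ones)
    # and feed the resulting resource list straight into create
    osc process -f quickstart-template.json | osc create -f -

    # overriding a parameter explicitly looked something like:
    osc process -f quickstart-template.json -v MYSQL_PASSWORD=mysecret | osc create -f -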
There was one older question John Frizzle asked about tarballs, and whether it would be possible to do a build from a tarball rather than from a typical repository. Yeah. So I asked that question early on in the beta cycle, and there's currently a card that identifies the need for that functionality; it's just not in there natively yet, in terms of what we're shipping. But the build process is customizable. So if we go to openshift/ruby-20-centos, for example: if you were building Ruby stuff, you can modify this assemble builder script to basically fetch the tarball, based on a location you put in an environment variable, as the quote-unquote source code. It's just that we're not officially supporting that yet in the current state of the beta. But we know we need to support other deployment mechanisms, e.g. tarballs, or just fetching JARs and WARs directly, or downloading a whole folder, things like that. So you can customize it today and get it to work. But in terms of out of the box, if you just specified a zip file, would it magically happen? The answer is no right now, but it's on the roadmap, hopefully for GA.

We're coming up on the end of our hour, and I want to be respectful of Eric's and everybody else's time. If you have any other questions, you can always post them on the developer list at lists.openshift.redhat.com, or ask them via the Commons mailing list if you're on that. We'll be doing a second session of this, focused on the admin tasks, and get Eric back here in a week or so, depending on his schedule. So I really want to thank Eric and everybody else who's on the line for taking the time, and Joe and everyone who's been answering questions in the chat thread. It's been great to have you all, and we hope you can get started or continue working on the beta with us. We look forward to the GA release coming up soon. So thanks, everybody, for coming, and thank you very much, Eric, for all your efforts in getting this ready for us today. Oh, you're very welcome. Take care, all.