Hello and welcome to another OpenShift Commons briefing. I'm Diane Mueller, the director of community development for OpenShift, and I'm really pleased to have with us today Joe Fernandez and Mike McGrath to give us an update on the roadmap for both OpenShift and Atomic. Atomic is upstream of OpenShift and a very integral part of our project. I'm going to let Joe kick it off, Mike will come in about halfway through, and we'll have Q&A at the end, or you can post things in the chat as we go and ask all of your questions then. All right, thank you very much. Joe, take it away.

Great, thanks Diane. As Diane mentioned, my name is Joe Fernandez and I lead product management for OpenShift. So I'm going to talk a little bit about OpenShift and Atomic. I'll start with an explanation of how they relate to one another, both upstream and commercially, and then talk about where we're at from a roadmap perspective and where we're going in this upcoming year, hopefully giving you some insight into the things we're working on in the community and also the things we're getting asked about by our users. So: OpenShift and Atomic are both part of Red Hat's container solutions.
This slide covers the types of things we're focused on: helping customers leverage containers and container-based platforms to modernize application delivery, bringing greater agility to both traditional applications and newer cloud-native apps, driving consistency through the life cycle from the dev and test environments all the way out to production, and ultimately deploying those applications anywhere they want them to run. We talk about hybrid as being the predominant architecture, and for us hybrid could mean public cloud and private cloud, multiple public cloud services, virtualization, or even bare metal. We want to be able to run across whatever infrastructure, or whatever combination of infrastructure, customers want to deploy their applications on.

Starting with Atomic: Atomic really started out as Atomic Host, and Mike's going to talk a little bit about what we're doing there. Atomic Host is a variant of our Linux family, Red Hat Enterprise Linux, that's optimized for running containers. Both Red Hat Enterprise Linux and Red Hat Enterprise Linux Atomic Host, the Linux offerings that we provide, can now run Docker-based containers; the container runtime and packaging format that we standardize on is Docker's. But you need more than just containers and Linux to build out an enterprise container infrastructure. As we've talked about on prior calls, beyond those two foundational pieces you also need to orchestrate containers across multiple hosts and manage that cluster, and for that we do work in Kubernetes and bring it in to manage those container resources across the cluster. But even that is not sufficient.
Because when you're running containers on a cluster, with each container having its own IP address, you need to be able to manage the networking of those containers across the various hosts. You also need to manage things like storage if you want to run stateful services in containers. You need to store container images in a registry. You may be interested in different telemetry, like logs and metrics, to gather info on how the containers are working. And you need to manage the security of the containers and the applications that run inside them. All of these are challenges customers face when they're thinking about moving beyond Docker on the desktop to Docker in the data center: containerized data center infrastructure that can run enterprise applications. We bring all of that together under the Atomic brand. Upstream, that's Project Atomic, as well as the projects under that umbrella that we work on: Docker, in the Docker community, and Kubernetes. For telemetry we do work in the ELK community, on Elasticsearch and Kibana, which are now part of the solution. We also do work on the networking side.
We leverage Open vSwitch. So all of these things are part of that container infrastructure, and commercially, in September we announced the public preview of something called Red Hat Atomic Platform. Red Hat Atomic Platform is the packaging of the components you see here, which we will soon be offering as a standalone commercial subscription you can get from Red Hat.

OpenShift then takes that entire infrastructure and adds additional functionality to build out a full container application platform. This includes things like self-service capabilities for your end users. If you've ever seen demonstrations of OpenShift on prior calls, whether through the web console, the command line interface, or the work we're doing on Eclipse tooling, those are all part of the self-service interface we've built on top of this infrastructure. We also build out different options for middleware and data services and publish them through a service catalog so that developers can consume those services, and we'll be talking about where we're going with service catalogs to allow them to consume other services as well. And then ultimately it's about how you manage applications through the life cycle. When you're running applications in containers, every time you change the application you're not making the change in the container; you're actually changing the image, the Docker image that generated that container, and then instantiating a new instance from the image. So being able to build new images efficiently, to manage updates to those images when you need to update the application or deploy new code, and to manage the deployments across the life cycle, as I mentioned, from dev to test to production.
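To make the "change the image, not the container" idea concrete, here's a minimal, purely illustrative Dockerfile; the base image, package, and paths are made-up examples, not anything OpenShift prescribes. The application code is baked into the image, so a code change means rebuilding the image and rolling out fresh containers:

```dockerfile
# Illustrative only: the app is baked into the image at build time.
# Changing server.js means rebuilding this image and deploying new
# containers from it, never patching a running container in place.
FROM centos:7
RUN yum install -y nodejs && yum clean all
COPY app/ /opt/app-root/src/
CMD ["node", "/opt/app-root/src/server.js"]
```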
Those are some of the things OpenShift adds beyond the Atomic infrastructure to provide you an end-to-end platform. And there's even more still. Beyond the platform you see there in the center, we're also working on a set of developer tooling. I believe there was a previous Commons briefing, maybe before the break, where we talked about things like the CDK, our Container Development Kit, which is basically a tool that allows developers to run containers on their local machine and then upload them to OpenShift or Atomic. That's going to feed into a full development studio that ties the Container Development Kit into other dev tools like IDEs and so forth. And then on the management side, we tie these things into Red Hat's various management solutions, whether that's CloudForms, our hybrid cloud management solution; Ansible, which Red Hat recently acquired, for configuration management and automation; or Red Hat Satellite. All of these things combined bring this together. On the infrastructure side, obviously OpenStack and Red Hat Virtualization are options you can run this on, as well as any of our certified public cloud providers. So again, the work we're doing now in containers really touches all of the products in the Red Hat portfolio. While we drive the roadmap for the stuff in the center, we also work closely with the teams who work on our management solutions, our development solutions, and other infrastructure solutions like OpenStack, storage, and KVM, to make sure we have a consistent and well-integrated platform across the board. In terms of our roadmap, this is just a high-level picture.
Our journey on this new platform really started in June of last year, when we released OpenShift 3. That was the culmination of nearly two years of work to rebuild the OpenShift platform on the new Docker and Kubernetes base, as well as on RHEL 7 and the new Atomic Host. Then in the fall we put out our first point release of OpenShift Enterprise and also put out the Atomic Enterprise Platform public preview. With each release you'll see a number of features and enhancements, and if you're downloading either OpenShift Enterprise or the Atomic Enterprise Platform today, or pulling the latest community bits from Origin, you'll see the features that were introduced in the fall: features around auto scaling, enhancements on the UX side, new middleware services that are now available, the ELK stack integration we added for logging, and enhancements in storage and networking. Quite a lot went into that point release. And now we're on to the next one. In the bottom right, what I'll focus on today is just a few of the things we're working on in the first half of this year. For those of you who've worked with us upstream, you know we do development on an agile, scrum-based model, but we typically plan on half-year cycles.
So while the development happens scrum to scrum, we know, as we sit here today, that there will be a 3.2 release targeted for early this spring, and then another point release, 3.3, which will come out right around Red Hat Summit or shortly after; this year that's the end of June, so around the end of June or July you'll see 3.3. We plan them both together, and the features here represent what's going into each of those releases: the things that take longer will end up in 3.3, and the things we can get out sooner will be in the 3.2 release.

We don't only think in terms of release versions and time frames; we think in terms of themes, and these are some of the themes driving the work we're doing in the first half of this year. One of the biggest things we're doing is bringing our last solution to the v3 platform, which is OpenShift Online. Commercially, you can get OpenShift in three ways. You can get it as a software solution, OpenShift Enterprise, which has been on the v3 platform since last June; the current version is 3.1. OpenShift Dedicated is our second offering. This is a public cloud service that Red Hat runs that's basically single tenant, meaning each customer gets their own OpenShift cluster, their own dedicated instance of OpenShift, just for running their applications. And then OpenShift Online is our multi-tenant public cloud service; some of you may have worked with it at openshift.com. Currently that's still on our v2 architecture, but we've been working hard to get it onto the new v3 platform, so developers consuming OpenShift in the public cloud can get the latest and greatest capabilities we're providing on premise and in the dedicated offering. The other bullets here represent different areas we're working in; I've included tags in parentheses.
So if you follow our Trello boards, you'll see we have a roadmap board where we outline some of these themes, or epics, and through tags like the ones you see here they link out to user stories on the team boards. Right now, as part of Red Hat's overall engineering effort, there are, I believe, 14 or 15 scrum teams working on OpenShift and Atomic, each working in their respective areas. But a lot of these features span multiple teams, so through this tagging mechanism, if you're trying to follow development in Trello, you can start from one of these themes and drill down into the various user stories and engineering work that goes on.

Certainly we're continuing to drive our developer experience. For OpenShift itself, we're working on our build and deployment capabilities, tying into continuous integration, and working towards a continuous deployment model on OpenShift itself. At the Kubernetes and Atomic layer we're working on something called service linking, which I'll cover in a second; that leads into being able to publish catalogs of services for users to consume. We're adding Red Hat Mobile as a new set of services for OpenShift users, and expanding into additional products in the JBoss middleware portfolio. We're working on our integrated registry to add more enterprise registry capabilities; I'll get into that, and Mike can elaborate on it as well. We continue to work on scale, meaning larger deployments, more nodes, more users, more instances, and so forth, so we're doing quite a fair amount of performance tuning and scale testing on the platform and on components like Kubernetes. There are various efforts around container security, which is really both in Kubernetes and upstream in the Docker community, and we're getting into how to automate the provisioning of clusters, multi-node deployments and so forth, and the overall
install and getting-started experience. So this isn't a complete list, and I think there are a couple of things I missed, but it shows you some of the bigger areas where you'll see work underway.

I won't be able to cover everything here, but I'm going to give you some of the highlights, starting with the user interface enhancements. We continue doing work on the OpenShift web console to allow you to do more things through the browser UI. Ultimately our goal is that everything you can do through our API and through our command line interface, you should also be able to do through our web UI, and this is just the next set of things: being able to specify resource constraints on your pods, delete entities like projects and pods, add routes to a service, display and attach storage volumes, and view more metrics, this time on build times and so forth. These are all things you'll see, and I've marked that a couple of them actually got into a z-stream release we're putting out this month, 3.1.1. Many of them will be in 3.2, which is the spring timeframe. I'll say March/April so that development doesn't get mad at me for giving a specific date, but we're probably talking late March to early April for 3.2, and then 3.3 would be the late June to July timeframe; we'll have more specific dates as we go along.

The other thing is introducing new UI. Deployment pipelines is a concept we've been talking about for a bit. This is the ability, through the UI, to visualize your applications from the standpoint of the stage of the life cycle they're in. So if your applications are running in a development environment and then you promote them to test, or you promote them to UAT or prod, you can map applications to environments and then watch the deployment across the various stages.
That's something we've been doing a lot of thinking about, and we're bringing it not only to the UX but to all the underlying supporting infrastructure.

Service linking has to do with making it easier to attach one service to another service. As an example, say I'm in OpenShift and I create a Node.js application. You can start it up, you can scale up additional instances, and then you want to attach it to, say, a MySQL database. You can do that today; it's just not as easy as it could be. If you don't have a template predefined to make those connections, there's a bit of manual work to have one service talk to another service, and we want to make that as simple as "add a service to Node.js" or "attach a service." So that's the concept of service linking, whether you're inside a project or working across projects. This is a common use case: maybe I'm a front-end developer working on UI, but I need to consume some back-end services that someone else is building, and they're working in a different OpenShift project or Kubernetes namespace.
I want to be able to add those back-end services to my front-end service; that's another use case for this. And then there's linking to services outside of OpenShift, say if I want to attach to a database somewhere else in my data center, or a service I'm consuming in the public cloud. These are all things you can do today; it's not that you can't, it's about making them easier. From service linking you get into being able to predefine those links and publish them through a service catalog, which is where this goes next, and from service catalogs you get into billing and metering in terms of a marketplace. So this is a step on that path, and I know Diane's been working on an upcoming Commons briefing just to talk about this particular feature, because there's been a lot of interest among our customer base and in the community around what we're doing here.

The YAML editor is for when you do have to fall back to editing a big build config or deployment config; it's something we're adding to the UX so you can make some of those config changes through the browser rather than having to go outside it. And beyond the front-end developer experience, by the way, there's corresponding work on the command line interface and the underlying API.
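To make the service-linking discussion above concrete, here's a sketch of the manual wiring it's meant to automate: a hypothetical pod definition connecting a Node.js front end to a MySQL service by hand, via environment variables and the service's cluster DNS name. All names here are invented for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nodejs-frontend
spec:
  containers:
  - name: app
    image: example/nodejs-app:latest    # illustrative image name
    env:
    # Hand-wired connection details: exactly the glue that service
    # linking is meant to generate for you.
    - name: DATABASE_HOST
      value: mysql.backend-project.svc.cluster.local
    - name: DATABASE_PORT
      value: "3306"
```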
For example, on auto scaling: we added auto scaling support in 3.1, and we're enhancing it in 3.2 to be able to scale on additional metrics, but being able to configure it through the web console is also work that's targeted here, particularly for the June release.

Next, build automation. As I mentioned, when you deploy an application in a container, at some point that application is going to need to change: you're going to add more code, you'll need to modify the config, or you might need to patch the base runtime. For any of those changes, what you want to do is not modify the running container but actually build a new image, essentially a Docker build at the lowest level: you build a new image that has those changes and deploy new instances out to your cluster to reflect the change. There are different ways you can do builds. In the initial release of OpenShift 3 we had, and still have, a mechanism called source-to-image, or S2I, that allows you to do a build from source. You provide source code, or push your source code to a git repository, and either manually or automatically a build is triggered that takes your source code and does both the application build and the image build, the Docker image build. For example, if you're pushing Java code to git, we do the Maven build, pull all the dependencies, and then take those Java binaries and put them into a Docker image: that is, we do a Docker build. So there are really two parts to that build: the application build and the image build. Well, in the case of Java specifically, what if you already have binaries created? What if you already have WAR files or JAR files from your existing build systems, from your existing CI? What happens next?
Well, you still need to get those binaries into a container to run them, so at that point you have the first half solved, but not the second half. So we've introduced this concept of being able to build from a binary, and that's something we're enhancing. With the original mechanism in 3.0 you still had to go through git: you'd push those binaries to git the way you'd push source, and we could build them from there. In 3.1 we introduced a command that allows you to push the binary via the CLI; you can bypass git and just start a build from a specified file or directory that you pass us. And we're working on pull-based mechanisms, and improving the documentation and the functionality around building from binaries that you send us or that you specify for us via your Dockerfile or assemble script.

So that's a big area, as is tying it all through your CI system. We have a Jenkins image for folks who are using Jenkins, but ultimately we want to work with whatever CI systems customers may already have in place. You'll see other things here, but this is probably one of the big areas for our developer experience team: to continue pushing forward on build automation and CI integration, and from there get into CI/CD, from continuous integration into continuous deployment. This is the deployment pipelines work I mentioned. In the near term we want to work on UX to visualize a pipeline and the different stages within it, be able to interface with that at least through Jenkins, so you can tie that pipeline to our Jenkins builders, and use that to promote. Longer term we'll continue to build on that to do more complex workflows.
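Going back to binary builds for a moment, here's a rough sketch of what one looks like as an OpenShift BuildConfig. The field names follow the v1 API of that era, but the resource and image names are invented, so treat this as a shape rather than a copy-paste recipe:

```yaml
apiVersion: v1
kind: BuildConfig
metadata:
  name: myapp-binary
spec:
  source:
    type: Binary        # no git repo: the build waits for bits to be pushed to it
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: eap-builder:latest    # hypothetical S2I builder image
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest
```

A CI job would then kick this off with something like `oc start-build myapp-binary --from-file=target/myapp.war`, handing OpenShift a prebuilt WAR instead of source.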
So basically, more elaborate pipelines, with the ability to trigger things like automated promotion via different actions.

We're adding more middleware and mobile services; and I'll try to move along, I just realized I'm running a little bit behind. As I mentioned, the Red Hat Mobile Application Platform is a Red Hat commercial product based on FeedHenry, which was a company we acquired more than a year ago now and which forms the basis for our mobile solutions. There's a new version of JBoss EAP, our Enterprise Application Platform, the commercial JBoss app server: EAP 7 is coming out this year, and it will be available in OpenShift. BPM is the Red Hat JBoss Business Process Management suite, which will be integrated this year. And then there are APIMan and Keycloak. These are both open source projects right now that we're not selling commercially, but they're going to become products: APIMan in the area of API management, and Keycloak as a single sign-on authentication solution that Red Hat will be productizing. Both will be services on OpenShift this year.

Dropping down into the infrastructure layer, I mentioned enterprise registry capabilities. At the Atomic layer, OpenShift and the Atomic platform include an integrated Docker registry.
This is a standard Docker v2 registry. It's where all the images live, and it's where we store the new images we build when you're working in either of those products. We're going to continue enhancing that registry by adding features that allow administrators to better manage the images in it. There's a user interface that will allow you to actually inspect and see what images you have, and we're adding administration capabilities around allowing administrators to control access and import new images into the registry. Ultimately we'd like this to be something you could use on its own: if all you're looking for from Red Hat is a standalone registry to better control the work your developers are doing, you should be able to do that, and then from there you can expand into the full set of functionality that OpenShift and Atomic provide, because it'll be the same registry.

We're also continuing to do work on storage. Storage is basically: how do we mount
volumes from a storage cluster to the containers that need them? If you're running a database, like I mentioned, MySQL or Postgres, in a container, for example, you don't want to store your data in that container, because containers are ephemeral. So we use Kubernetes storage volumes to mount storage from a storage cluster into those containers. Some of the things we're working on in the first half of this year are dynamic provisioning of persistent storage volumes: for storage solutions that support dynamic provisioning, you can actually create those storage volumes on the fly as developers request them, and AWS, Google, and OpenStack are three that support that. Then we're getting into things like being able to define storage tiers via labels, so that you can have different classes of storage and make them available to different users: maybe a lower class of storage for dev environments, and higher-class storage options for production environments, say. And we're adding more storage plugins. We added a number of plugins in the last release, so we now have options for things like NFS, Gluster, Ceph, iSCSI, Fibre Channel, Amazon, Google, and Cinder. Azure is the next public cloud storage option we're adding, and we're enhancing the existing plugins for specific storage solutions like NetApp and EMC to do more provider-specific things like dynamic provisioning and other features.

We also continue to do work upstream in the Docker community. All the containers work we do is actually not just in Atomic or Origin; most of it is in the Docker community itself.
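Going back to the storage discussion for a second, the developer-facing side of dynamic provisioning can be sketched as a PersistentVolumeClaim. The annotation shown reflects the alpha-era Kubernetes API that predates first-class storage classes, and the names and sizes are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
  annotations:
    # Alpha-era hint naming a tier of storage; with dynamic provisioning
    # the cluster creates the backing volume (an EBS, Cinder, or GCE
    # disk, say) on the fly to satisfy the claim.
    volume.alpha.kubernetes.io/storage-class: "fast"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```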
There's a major new release, Docker 1.10, coming out imminently, and we'll be integrating it into the latest releases of OpenShift and Atomic. User namespaces is something that came out in the fall. It's experimental, so it's not full production yet in Docker, but it's in the Docker experimental channel. This is the ability to take processes in containers and allow them to run as root, but map them to non-root users on the host, so that you can allow processes that require root privilege to run without compromising the host, without risking the host by letting that process have root on the underlying host. So there's still a bit of work to do there. I think there was a major step forward last year, but there's still a bit of polishing to do in Docker, as well as in Red Hat Enterprise Linux, on the corresponding capabilities at the operating system level to leverage it. Image scanning, the ability to scan images, is basically something we've been working on via a feature called atomic scan that's integrated with SCAP; we'll be productizing that this year. We're working on optimizing the size of our base images for RHEL, the RHEL 7 and RHEL 6 base images on which many of our Red Hat containers are built, as well as things like SELinux support for OverlayFS, and there's more ongoing upstream work around image signing and Notary.

On Kubernetes, I already mentioned auto scaling and service linking. Pod idling is another feature. This is basically something we had in OpenShift v2 that we're bringing to v3, which allows you to essentially take a container that's not being actively used.
It's not being accessed or modified, and you idle it, so that you essentially free up the resources to run other containers. We're driving that capability upstream in the Kubernetes community, and it's something we're looking forward to bringing into the OpenShift and Atomic offerings.

And then I'll leave it here with Atomic Host. I know Mike's going to get more into this, but Atomic Host, as I mentioned, is the container-optimized Linux OS we introduced last year as Red Hat Enterprise Linux Atomic Host, and there's more work we're doing as we continue to evolve it. With that, I'm going to stop sharing and let Mike take over, because I know he's going to be talking more about Atomic Host and some of the other features we mentioned.

So, I want to go over some of the Atomic Host and OSTree pieces, and basically try to dig into what all this stuff is, then touch a little bit on the Atomic Enterprise Platform as well as OpenShift, and how they all relate to each other a little more deeply.
I have some architectural diagrams that I'll be going over as well, which I think will help take some of the feature topics from the slides Joe went over and dig down into something you might use inside your own environment. Atomic Host, as Joe said, is our container-optimized operating system, and we have three flavors of it today: in Fedora, CentOS, and RHEL. We're doing something new in Fedora with it, and it's a fairly recent thing that this actually started happening: we're releasing a new version of Atomic every two weeks. It's free, you can go check it out right now and download it, and it's always got the latest and greatest Docker bits that we have. Now, CentOS tends to be a little bit slower there, and it tends to still be rebuilt from the RHEL Atomic Host, but it's also free; by slower, I mean it's not released as often. And then finally we've got the Red Hat Enterprise Linux Atomic Host, and that one gets updated every six weeks or so, with a full release cycle, totally enterprise-ready. So I think the first thing most people realize when they look at these hosts, especially if you're familiar with Fedora or are already a RHEL user, is how quickly we're updating them. This is a very different model from what we've done in the past, and it's very helpful for DevOps folks and developers who want the latest and greatest, whether in Kubernetes or Docker or whatever; we're really making strides to make sure they have that in a supported and enterprise-friendly way.

So the first thing about Atomic Host, if you haven't looked at it, is OSTree, and it's the magic that makes Atomic different from regular operating systems.
There's no yum in it, and it changes how you do updates. Basically, if you have an Atomic Host, you run the atomic command to do updates. So if you want to move from, say, version 7.2.1 to 7.2.2, you run "atomic host upgrade", and when that's done, the system will still be on the previous version; you actually have to reboot to pick up the changes. Significant portions of the file system are read-only, and generally this allows you to think differently about your infrastructure and gives you a better way to do updates. Especially in troubleshooting, you generally don't even have to try to fix a host in these immutable situations: you can just reboot or rebuild, and that is almost always going to be faster than trying to fix something. So keep that in mind.

One of the other core components that goes into Atomic Host is Kubernetes. AEP, the Atomic Enterprise Platform, and OpenShift both rely on Kubernetes for their orchestration, and they do this across several nodes. Just to give you another view, for those of you who aren't familiar: it's got a full RESTful API, it's got application templates and built-in health checking, it has a full self-healing infrastructure, and it's totally extensible; you will see how much we've extended it. There are also resource limitations built in, and scaling is built in, so you can easily add or remove containers. etcd is what we're using on the back end to store configuration and metadata. It's a basic key-value store, very simple to interact with, fast, and it has consistency and resiliency built in. So if you haven't looked at etcd, it's worth a look.

Now, the core of everything we're doing here in the container space is Docker, and we rely on Docker for our image format. It's also what we're using today for our container namespacing, and it's our distribution mechanism. Docker has a full layering process built into it.
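That layering can be modeled as a stack of dictionaries whose union is the container's view of the filesystem. Here's a toy Python sketch, with `collections.ChainMap` standing in for the union filesystem; the paths and contents are made up:

```python
from collections import ChainMap

# Each image layer maps path -> file contents.
base_layer = {"/etc/os-release": "CentOS 7", "/bin/sh": "<binary>"}
runtime_layer = {"/usr/bin/node": "<binary>"}
app_layer = {"/opt/app/server.js": "console.log('hi')"}

# A container's view is the union of its layers, with upper layers
# shadowing lower ones, roughly what a union filesystem does.
image = ChainMap(app_layer, runtime_layer, base_layer)

print(image["/etc/os-release"])   # resolved from the base layer
print(len(set(image)))            # every layer's paths are visible
```

A new layer on top (say, a changed server.js) leaves the lower layers untouched, which is why rebuilds and image distribution can reuse shared layers.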
And of course the process control comes with it, so hopefully I don't have to go into that in too much detail; I do have a quick demo later for those that haven't really looked at Docker, and a quick way to get started using some of Red Hat's technology. So Cockpit is something that gives a UI to individual Atomic Hosts. If you have an Atomic Host and, say, you don't really know Linux or Unix that well, you can still make use of all of this stuff. Certainly there are a lot of developers out there who are building things in containers on Windows machines and on Mac machines, and maybe they don't even really know that much about Linux, but they do know they want to build things in containers with the extensive container ecosystem that's out there. Well, you can still build all those things in containers, and you can still have a reasonably functional and customizable Linux host using Cockpit, without having to know any Unix at all; you can just use the UI, and I'll give a quick demo of some of that in a little bit. And of course there's the deployment. The basic steps, and this is the important distinction between Atomic Enterprise Platform and OpenShift, are that in Atomic Enterprise Platform you have to create your container somehow, push it to the registry, and create an application manifest, and that application manifest then pulls down those existing images. So the inputs to an Atomic Enterprise Platform setup are the manifests and the containers. The flip side of that is OpenShift, which supports that exact same deployment scenario, but also allows for one additional input of source code; that's S2I, the source-to-image bits that Joe was talking about earlier. Once you're able to start providing source code, it completely changes the game for you in terms of being able to have a fully end-to-end development workflow where developers can provide source code, it gets built and tested in the
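The two input models contrasted above can be sketched roughly as follows. The registry and image names here are hypothetical placeholders; the `oc new-app` source-build form uses a public OpenShift example repository.

```shell
# AEP-style input: a prebuilt image plus an application manifest.
# You build and push the image yourself (registry name is hypothetical):
docker build -t registry.example.com/myteam/myapp:v1 .
docker push registry.example.com/myteam/myapp:v1
# ...then a manifest referencing that image is handed to the platform.

# OpenShift-style additional input: raw source code, via source-to-image.
# oc new-app pairs an S2I builder image with a git repo and sets up the
# build, deployment, and service in one step:
oc new-app ruby~https://github.com/openshift/ruby-hello-world.git
```

The point of the second form is that the platform, not the developer's laptop, produces the runnable image, which is what enables the end-to-end build/test/deploy workflow described next.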
system and deployed to production, and it provides that full DevOps experience, which I think is really great. So the final component that we didn't talk about yet, the thing that makes Atomic Enterprise Platform and OpenShift so great, is Origin. Now, Origin is the upstream community project that integrates with Kubernetes and extends a lot of its features. So for those of you that are wondering, "Hey, I've used Kubernetes now for about a year, I've watched it since it came out about a year and a half ago and I really like it; why would I use OpenShift or Atomic Enterprise Platform over just regular Kubernetes?", Origin is the answer to that. It integrates very heavily with Kubernetes, and it leaves Kubernetes there: the normal Kubernetes is still part of all of this. It's just that we extend that Kubernetes functionality with a far more extensive user management and role-based access control system, and we add the concept of projects. Origin also allows us to add custom registry and router implementations, adds things like image streams, and of course has a full template creation system, which is very important in any sort of advanced application where you need to tie several containers together; you're going to need some way to do that, and this template creation system has the answer. So if you want to look at a basic architectural diagram, this is what it looks like. On the left here you can see the "oc" person; those are the oc client tools, which come with OpenShift and Atomic Enterprise Platform. That is your primary interface to an Atomic Enterprise Platform installation, and it connects to the Kubernetes/Atomic master, which also hosts the Origin API. All of this is RESTful, and that's your interaction. So once you send some job to the Kubernetes/Atomic master, the rest is handled by the system: it goes out to the Atomic node, and it can deploy and pull down your containers as needed.
It will do whatever other health checks you need. All you have to do is describe your application, and the system will go and try to deploy the containers as needed and keep them deployed; if something fails on the back end, it will try to redeploy the containers, all of that stuff. One of the other nice things that we have at the Atomic Enterprise Platform level is the Atomic Registry, which Joe talked about a little bit. It can actually run inside a container in the system, as can an HTTP-based router, which we're looking to extend and make even better all the time; right now it's based on HAProxy. In the next slide you'll see we have OpenShift, and it takes all of that same core functionality and adds even more to it. You'll notice that the user now has an option to use a UI to interact with the Kubernetes/Atomic master, in addition to the original oc tools. But it also adds the S2I builder and the Docker builder, which are very important if you need to rebuild images. Let's just say that you have no application change to make, but maybe a Shellshock happens, or something in the underlying container images needs to be rebuilt; the Docker builder image can then rebuild those images automatically and deploy them, instead of you having to go through and do that manually. It also includes some CI integrations that are great, the Docker registry, and it still has that same HTTP router. So first I want to take a quick look at an example of what these templates look like, if you haven't seen them already. The first is an example from the upstream Kubernetes repo, and there are tons of examples up there already. I'm just going to pull down this one, which includes several different little containers that it deploys. The first is your basic hello-world template.
And so if you look at this example, this is your basic template example. I had word-wrapped it before, but you can copy this into a file, and it has all the information that OpenShift or AEP needs in order to create this application. Just to give you a view of what some of this config file contains: first, when you deploy one of these things, you have to say what kind of component you're deploying; in this case, it's a pod. Next is the name of the pod (you have to name it), and ours is going to be hello-atomic. Then you have to give the image. These images are Docker images, and this is the part of the deployment config that maps the application's desires to an actual container; in this case, that hello-atomic container already exists out on the Docker Hub somewhere and is ready for download. Next you have to do some mappings. We know that this container internally listens on port 36061, and we want to expose that to people, customers, or whoever, on port 8080, and this is the mapping that does that. Finally, in this example, we run the oc create command, which pulls in that hello-pod JSON, sends it to the server via the REST API, and creates the pod. It's very easy. Once it's done telling the node to download the image and start it, you can then run an oc describe on it to find out which IP address it is now listening on; in this example, it got posted to 10.1.0.2. If you remember from the earlier example, we exposed that port on port 8080, which means that we should have some sort of application available on port 8080, and you can see that by running the curl command against 10.1.0.2 on port 8080.
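Since the slide itself isn't reproduced here, the following is a sketch of what a pod definition like the one described might look like, reconstructed from the description above. The image name is a hypothetical placeholder; the ports match the example, and the exact fields in the file shown in the talk may differ.

```shell
# Write a minimal v1 pod definition like the one described above.
cat > hello-pod.json <<'EOF'
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": { "name": "hello-atomic" },
  "spec": {
    "containers": [
      {
        "name": "hello-atomic",
        "image": "docker.io/example/hello-atomic",
        "ports": [ { "containerPort": 36061, "hostPort": 8080 } ]
      }
    ]
  }
}
EOF

# Sanity-check the JSON before sending it to the cluster
python3 -m json.tool hello-pod.json > /dev/null && echo "valid JSON"

# Against a live cluster, you would then run:
#   oc create -f hello-pod.json
#   oc describe pod hello-atomic   # shows which IP the pod landed on
#   curl http://10.1.0.2:8080/     # IP taken from the describe output
```

The kind/name/image/port fields are exactly the pieces Mike walks through: what to deploy, what to call it, which image backs it, and how its internal port is mapped out to consumers.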
"Hello atomic" is set up there. So the next thing I want to show is a little teaser of something we've been working on, which is some of this virtual machine boot stuff. The concept of this is just the quickest possible way to get started with Atomic. Let's just say that you happen to be at this Commons, maybe you saw it on Twitter, and you're not really sure what we're up to, or maybe you haven't really used Docker before but you want to give it a try. Well, there's a virtual machine image that you can download from projectatomic.io, and you can actually run that image, and that's what I'm going to do here. So I hit play, and at the boot prompt, if you're not familiar with Linux, there's a GRUB prompt where you can select what you want to boot into; by default, it'll boot into Fedora 23 mode. I'm sure that this is a little bit small, I apologize; I don't know that I can actually make it much bigger. Let me see. Yeah, that should be better. So in this example, we also have this sort of developer-mode boot option, and we'll select developer mode. At this point, what's going to happen on the back end is that Atomic is actually going to boot like normal, and then once it's done, it will automatically download and generate everything you need in order to get started, and it will actually show you some of this in the background. So it's actually going out and trying to download the Cockpit container now, and you can see at the top there (let me get my mouse here) that it's available at 192.168.122.24 on port 9090, and this is the root password that it has auto-generated for us. So I'm going to flip out of full screen, because we don't actually need to do anything in that console anymore at all, and let me just pull up that address, .22.24. Okay, so you can see this is the Cockpit interface that has booted in that virtual machine.
I just need to log into it with the root password that I've got. Now I can do all the things that you would commonly need to do on a Linux box without having to know the command line at all, without having to really know anything about Linux. You can provision storage, you can look at the logs, and you can even download and install containers. In this case, I have previously downloaded the JBoss WildFly image, and I can actually start that container if I want. This is the community edition of JBoss; we know that it listens internally on port 8080, so I can tell it to start this container also on port 8080. I can run it, and it'll take a little bit to go and download and run, but once it's up, I can click and see that everything's looking okay, and then I can actually bring it up in the browser on my workstation and see that WildFly is running. So it's a really quick and easy way to get a first look at Docker. If you haven't had a chance to look at it yet and see what it's all about, this is a really easy way to get started, and of course that will take you down the rabbit hole of always wanting more. You can go take a look at the CDK options that we have as well for more serious development, when you want to get out of the "I want to create a container" mode and into the "I want to create an application" mode. The CDK contains all of the build and developer tools that you would need, as well as OpenShift, and once we get OpenShift Online updated, you can also just go there and test out and try whatever you want. So I think I've got a couple more slides, and then I'll hand it back to Diane. One thing I did want to mention is that Atomic Enterprise Platform is not quite yet GA. We do have a public preview that's going on, so if you have a TAM or an SA, feel free to talk to them, or you can take a look at access.redhat.com if you want something that you
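What the Cockpit UI is doing in this demo can also be sketched from the command line; the equivalent Docker commands, assuming a host with a running Docker daemon and using the community WildFly image from Docker Hub, would look roughly like this:

```shell
# Pull the community WildFly image (the one started via Cockpit above)
docker pull jboss/wildfly

# WildFly listens on 8080 inside the container; publish it as 8080 on
# the host, the same port mapping chosen in the Cockpit dialog
docker run -d --name wildfly -p 8080:8080 jboss/wildfly

# Then check it from the workstation's browser or with curl
curl http://localhost:8080/
```

The value of Cockpit is that it drives exactly this kind of workflow through a web UI, so none of these commands have to be known or typed.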
know we release, and you want to try it out directly from Red Hat. There are also community versions of Origin and OpenShift available, so it's really easy to download and try these things. And with that, Diane, I'll hand it back to you. All right, well, thank you very much, both you and Joe. That was a good introduction to the roadmap and the amazing number of features that are getting put into the next couple of releases, and the Atomic explanation was perfect there, Mike. So we only have one question. Boris posted something, and I think that's more pointed towards Joe: since OpenShift Online is finally moving to v3, what is the likelihood of Red Hat releasing migration tools, or at least documentation, and what is the strategy for that? I actually think that's a great topic for another OpenShift Commons session. Yeah, so I'll talk to migration strategy first. It is a side-by-side migration from OpenShift 2 to 3; there's no in-place upgrade, and the reason for that is that really everything changed, starting with the base operating system. OpenShift 2 is on RHEL 6; in OpenShift 3, you're standing up either a RHEL 7 or a RHEL Atomic Host cluster, and then things like the containers and the orchestration that we do with Kubernetes are all new. What we tried to keep consistent are the services that are available. So everything that you do in v2, in terms of the things we supported like JBoss and Ruby and PHP and Python and Node.js, all of them have corresponding images in v3. We have the same set of images, actually more images in v3 than we had in v2. For folks who created custom cartridges in v2, or customized our existing cartridges, essentially you're now customizing the builder images. We have some examples of how to build images and how to work with the assemble scripts, but essentially the image is a Docker image, so you're modifying the Dockerfile
to change the base image and then attaching the assemble scripts in terms of how we do builds. Mike mentioned source-to-image, and then I talked about how we're working on binary-to-image type enhancements; that's the same way that you could push source code to a cartridge or push binaries through our binary build interface in v2. So the goal is that if you have a JBoss application or a Python application on v2, it would just run in v3. The things that are going to require changes are two things. One is how you integrate with the infrastructure: if you're tying things like the networking to your DNS, or using an external router like an F5 or what have you, those are going to be a different configuration, hopefully greatly simplified. And then anything that you did with hooks, like pre-deploy or post-deploy: if you added any hooks, we have the same hooks, or similar hooks, in v3. And yeah, that's certainly something that we'd like to work with users on, and we'll certainly be working with a lot of users as we migrate the online environment, and providing examples on that. So I think, yeah, maybe we'll do a follow-up on this topic, and we'll talk about that, so we can get into each of these areas and people can map what they're doing today to what they would need to do to get those apps to v3. I think there are a lot of lessons learned from the OpenShift Online operations team there. Absolutely. Yeah, we can get going. Yep.
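As a rough illustration of the "custom cartridge becomes a customized builder image" idea Joe describes, here is a sketch of such a Dockerfile. The base image is one of the S2I builder images OpenShift published in this era; the local `./s2i/bin/` directory and the target registry name are hypothetical, and the scripts location can vary by builder image.

```shell
# A v2 custom cartridge roughly becomes: pick a builder base image,
# then overlay your own S2I scripts (assemble, run) on top of it.
cat > Dockerfile <<'EOF'
# Start from an existing S2I builder image
FROM openshift/python-33-centos7

# Overlay custom assemble/run scripts to change build behavior
# (./s2i/bin/ is a hypothetical local directory holding them)
COPY ./s2i/bin/ /usr/libexec/s2i/
EOF

# Building the customized builder image would then be (needs a Docker
# daemon and a real registry name):
#   docker build -t registry.example.com/custom-python-builder .
```

The migration work is therefore mostly in the Dockerfile and the assemble/run scripts, rather than in the application code itself, which matches Joe's point that v2 apps should largely just run on v3.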
We're almost to the end of the hour, so the other thing that I just want to point out is that I tossed in the Trello link to the Atomic and OpenShift roadmap board. If folks want to dive deep, or look at where things are going, or even perhaps contribute some thoughts and feedback and code to any of these areas that both Mike and Joe have talked about, this is a good starting point to take a look at a lot of these things. And like I said, there will be a service catalog session in February, on the 11th; we'll talk about that, and we'll set up something around migration, which I think is a great topic. Next week, we're going to be doing a session with BlazeMeter on continuous testing, so that should be very interesting as well. We're trying to mix it up a little bit, so that they're not all deep dives into roadmaps and technical stuff on the project, but also cover some of the services. So this was really a great session for me; I know I've heard the roadmap explained a number of times in a number of ways, but this one actually worked quite nicely for me. So thank you both again for joining us today. If you have questions, you can always jump on IRC, or toss them to us at OpenShift on Stack Overflow; just reach out and we'll be there and try to answer them. We look forward to having you all again next week. So take care, and thanks again, guys.